
For example, one agent is an expert at processing PCAP files and interpreting their contents. Another is an expert at understanding the configuration loaded into the product. Yet another is an expert at gathering all of that information and prioritizing where the root cause lies.
“We have about 10 to 12 agents, and we continue to add more, that work together to deliver or execute an entire workflow for the customer, starting from collecting the information like PCAPs, logs and KPIs, and processing all of that information, interpreting the results, and coming up with the root cause,” Kollipara explained.
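To make the pattern concrete, here is a minimal Python sketch of that kind of agent pipeline: specialist agents each produce a finding, and a coordinator agent synthesizes them into a prioritized root cause. Everything in it (the agent classes, the Finding structure, the sample outputs) is a hypothetical illustration of the workflow Kollipara describes, not Spirent's actual Luma code.

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    """One agent's contribution to the shared diagnostic context."""
    source: str
    summary: str
    evidence: dict = field(default_factory=dict)


class PcapAgent:
    """Specialist: parses packet captures and flags anomalies."""
    def run(self, pcap_path: str) -> Finding:
        # Real logic would decode packets with a PCAP library and
        # extract protocol-level anomalies; this is a placeholder.
        return Finding("pcap", f"Parsed {pcap_path}; found retransmission spikes")


class ConfigAgent:
    """Specialist: checks the product configuration for mismatches."""
    def run(self, config: dict) -> Finding:
        return Finding("config", "MTU mismatch between config and observed traffic")


class RootCauseAgent:
    """Coordinator: collects all findings and prioritizes a root cause."""
    def run(self, findings: list[Finding]) -> str:
        # Real logic would rank findings against domain rules; here we
        # simply report them in order of arrival.
        ranked = "; ".join(f"[{f.source}] {f.summary}" for f in findings)
        return f"Candidate root cause, by priority: {ranked}"


def run_workflow(pcap_path: str, config: dict) -> str:
    """End-to-end workflow: collect evidence, then synthesize a root cause."""
    findings = [PcapAgent().run(pcap_path), ConfigAgent().run(config)]
    return RootCauseAgent().run(findings)


print(run_workflow("session.pcap", {"mtu": 1500}))
```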
Determinism is based on domain knowledge
A critical concern with AI is that LLMs are not deterministic and can hallucinate incorrect information. As it turns out, the way to minimize hallucination is to minimize LLM usage, at least in the Spirent use case.
“We brought in third-party AI platforms, we tried different LLMs, and we quickly, very quickly, realized that the LLM is just part of the solution, and not the entire solution,” Kollipara said. “In Luma, the LLM role is almost just 10% of the whole thing, where it is processing the natural language. Most of the domain knowledge is built into this RAG database, this knowledge graph that we have built in.”
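The division of labor Kollipara describes, where the LLM only interprets the natural-language question and the answers come from expert-curated knowledge, can be sketched roughly as follows. The knowledge entries, function names, and keyword matching are illustrative assumptions, not Luma's actual retrieval design.

```python
KNOWLEDGE_STORE = {
    # Domain knowledge lives here, authored by experts, never
    # generated by the LLM. Keys and answers are invented examples.
    ("attach_failure", "high"): "Check S1-MME signaling; verify MME capacity KPIs.",
    ("throughput_drop", "subscriber"): "Inspect PCAP for TCP retransmissions on S1-U.",
}


def llm_extract_intent(question: str) -> tuple[str, str]:
    """Stand-in for the LLM's roughly 10% role: map free text to
    structured lookup keys. A real system would call an LLM here;
    this sketch just keyword-matches."""
    if "attach" in question.lower():
        return ("attach_failure", "high")
    return ("throughput_drop", "subscriber")


def answer(question: str) -> str:
    """Retrieve the expert-authored answer; the LLM never invents content."""
    key = llm_extract_intent(question)
    return KNOWLEDGE_STORE.get(key, "No matching entry; escalate to a human expert.")


print(answer("Why are attach requests failing at a high rate?"))
```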
On top of the database layer, Spirent added deterministic rule sets tied to protocol stack behavior.
“We’ve seen cases where you have an issue happening at the subscriber level, there’s a KPI that is wrong, and any ML can find out there is some deviation that is happening,” Kollipara said. “But the root cause is difficult to be articulated just by using machine learning algorithms, because it requires intimate knowledge about the domain, how these are stacked from a protocol stack perspective. Building that rule set in is very important so that you get that deterministic aspect of the output.”
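A deterministic rule layer of that kind can be approximated as an ordered set of protocol-stack checks evaluated against a KPI snapshot: the first matching rule wins, so the same inputs always produce the same verdict. The specific rules, KPI names, and thresholds below are invented for illustration and are not Spirent's actual rule set.

```python
RULES = [
    # (condition over KPIs, root-cause verdict), evaluated in a fixed
    # priority order that follows the protocol stack from the bottom up.
    (lambda k: k["rf_sinr_db"] < 0,       "Radio layer: poor SINR is degrading the link."),
    (lambda k: k["tcp_retrans_pct"] > 5,  "Transport layer: heavy TCP retransmissions."),
    (lambda k: k["dns_latency_ms"] > 500, "Application layer: slow DNS resolution."),
]


def diagnose(kpis: dict) -> str:
    """Walk the rules in order; the first match wins, which makes the
    output deterministic for a given KPI snapshot."""
    for condition, verdict in RULES:
        if condition(kpis):
            return verdict
    return "No rule matched; anomaly needs expert review."


# A subscriber-level deviation (high retransmissions) traced to the
# transport layer, while radio and application KPIs look healthy.
print(diagnose({"rf_sinr_db": 12, "tcp_retrans_pct": 9.3, "dns_latency_ms": 40}))
```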
