
“It’s a mixture of multiple models,” Omar told Network World. “The conversion and the core capability are not an LLM; it’s our own conditional model.”
A standard LLM sits at the front end to parse user intent. The Terraform generation and cloud-to-cloud conversion work runs on custom foundation models trained on infrastructure patterns. The training data is entirely synthetic. FluidCloud generated its own Terraform configurations and used its own conversion technology to build the training corpus.
“We have generated a lot of Terraform, and we use our own technology to generate more and more Terraform,” Omar said. “That’s what is powering the LIM.”
FluidCloud benchmarked LIM using BLEU score, a standard metric that measures n-gram overlap between generated text and reference output. Omar said the model currently scores 0.58; a score of 0.60 represents human-level performance on Terraform generation tasks.
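For context on what that 0.58 figure measures: BLEU scores a candidate by the geometric mean of its modified n-gram precisions against a reference, scaled by a brevity penalty. The sketch below is a minimal single-reference implementation, not FluidCloud's evaluation harness, and the Terraform strings in the usage example are hypothetical.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Minimal single-reference BLEU: geometric mean of modified
    n-gram precisions for n = 1..max_n, times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty discourages candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(log_avg)

# Hypothetical generated vs. reference Terraform (tokenized naively by spaces).
ref = 'resource "aws_s3_bucket" "logs" { bucket = var.log_bucket }'
hyp = 'resource "aws_s3_bucket" "logs" { bucket = var.bucket }'
print(round(bleu(hyp, ref), 2))
```

Production evaluations typically use a tool such as sacrebleu with proper tokenization rather than whitespace splitting, but the scoring logic is the same.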
What LIM adds to the platform
Before LIM, FluidCloud’s platform required a direct cloud scan as input and covered roughly 25 to 30 resource types. Coverage has since expanded to 150-plus resource types across cloud providers.
The input model has also changed: instead of requiring a controlled scan, LIM accepts existing GitHub repositories containing Terraform code. It handles multiple Terraform syntax styles, including module-based, workspace-based, and variable-driven configurations, and it supports custom mapping overrides.
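To illustrate why style handling matters, the hypothetical fragment below shows the same S3 bucket expressed two ways; a converter has to recognize both as the same resource. The module path and names are invented for illustration, not taken from FluidCloud.

```hcl
# Variable-driven style: the resource is declared directly and
# parameterized through an input variable.
variable "bucket_name" {
  type = string
}

resource "aws_s3_bucket" "logs" {
  bucket = var.bucket_name
}

# Module-based style: the same resource hidden behind a reusable
# module call; a converter must resolve the module source to map it.
module "logging" {
  source      = "./modules/s3-logging"
  bucket_name = var.bucket_name
}
```

Workspace-based setups add another layer, since the same configuration yields different resource sets per workspace.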
