How Lineaje Wants To Make Your Software And AI Supply Chains Boringly Safe
If you have spent the last few years drowning in SBOMs, critical CVEs and increasingly anxious board questions about “our AI posture,” you are not alone. The software supply chain problem has gone from niche topic to front-page concern, and most CISOs now live in a world where one vulnerable open source library halfway across the dependency graph can trigger a full-blown executive incident.

That is exactly the universe that Lineaje lives in. Or, as co-founder and CEO Javed Hasan puts it, the company exists to answer a very simple, very painful question: Where does your software actually come from, and how risky is it really?
In a recent conversation for CyberDefense Magazine’s Innovator Spotlight, Hasan walked through how Lineaje started with deep software lineage and gold-standard open source and is now betting big on securing what he calls the emerging AI-centric infrastructure. If you are responsible for application security, software supply chain risk, or AI governance, this is the kind of story you want to hear with both eyes open.
From SBOMs To “The Deepest SBOM Possible”
When Hasan describes Lineaje’s origin story, he does not start with vulnerabilities or CVEs. He starts with provenance.
“So Lineage is about the lineage of software,” he explains. “What we try to solve, as the name implies, is figure out, where does your software components come from? And of course, they come from three places, open source, third party, and what we call first party, which is built by a company.”
Plenty of vendors will tell you they “do SBOMs.” Hasan is not impressed by that as a finish line. He treats it as the starting point.
“What Lineage can essentially do is decompose software from any state, figure out all of its components, the entire supply which, of course, is first party code or the supply chain, and decompose it to the deepest level in the industry,” he says.
He stresses that this is not yet another shallow dependency scan. In their customer base, they routinely see open source dependency chains “go almost 60 levels deep.” Every open source developer is using someone else’s open source, and that pattern repeats until you end up with a deeply nested, barely understood web of code.
Hasan describes it simply: “So it is a deep supply chain problem.”
Lineaje decomposes an application down that entire chain, then layers multiple views of risk on top of it – vulnerabilities, code quality, security posture and even geo-provenance.
“In open source, for example, 33% of the commits came from China. Seven percent came from Russia. So we can essentially detect even things like geo provenance by author and map it to risk.”
It is all still an SBOM in the strict sense, he acknowledges, but not in the simplistic way the industry sometimes treats it. Hasan calls it “the deepest SBOM possible.”
That depth is what lets Lineaje quickly assess entire portfolios. He notes that “like we did for Cisco security, in about three days, three four days,” they can decompose a significant portfolio and tell a CISO which applications are most and least risky, with clear sources of risk.
For a lot of organizations, that level of visibility alone would be a step function improvement over today’s spreadsheet chaos. But Lineaje treats visibility as table stakes. The bigger question is: what do you actually do with that knowledge?
Gold Open Source: Reducing 95 Percent Of The Risk, Not Just Cataloging It
Hasan is blunt about the distribution of risk in modern software.
“If 70% of your components come from open source, I can now give you 70% of those components basically all of those components to be critical and high and exploited, vulnerability free, no malware, fully attested,” he explains.
Lineaje calls this gold open source. The idea is simple, but operationally nontrivial: rebuild open source at scale, scrub it, attest it, and then ship it to customers as a safe baseline.

“So essentially, I can now eliminate 70% of the components,” he says, not by ripping them out, but by replacing them with “gold” versions that are cleaned up and continuously maintained.
He continues: “If 70% of the components that come from open source normally are now 95% of the risk, so by taking gold open source, you eliminate 95% of the risk.”
That line tends to get the attorneys’ attention.
“Attorneys love that,” Hasan notes, responding to Speaker 2, who is a CISO and describes the now-familiar legal emails:
“I get messages from our attorneys all the time saying, this vendor has identified an open source library that they are concerned about and that they do not want used in their code, in your code base, and we have to go track it down. We have to go find it. Who is using it. Why are they using it, and what is the alternative?”
Lineaje’s decomposition engine plus gold open source is designed to kill that fire drill. Once they have decomposed your portfolio, “you can search for it in, let us say, 10 seconds,” Hasan says. They can tell you which applications are using a risky library, and because they map code back to commits and developers, “I can map it to the right team, and say, not only these applications are using it, but here are the teams who are using it.”
That linkage from SBOM to org chart is one of those seemingly small capabilities that, in practice, determines whether you are staring at yet another static report or you can actually assign work and drive remediation.
Autonomous Fix: Freeing Hundreds Of Developers Without Firing Anyone
Once you have visibility and a supply of safe components, the next barrier is predictable. Developers ask the same thing every time security hands them a “fixed” version of something: Is this going to break my application?
Hasan sees that question as the core friction that keeps organizations from modernizing software safely.
“When we generate an SBOM of first party code, everything, we know the software structure,” he says. “So one of the things we can do with it, we can detect what changes are compatible and which ones are breaking changes, both in source code and containers.”
The practical payoff is that Lineaje can “autonomously fix all the compatible changes,” and then group incompatible changes for developers, so they can bring the application up once, with a focused remediation effort instead of endless piecemeal patching.
He is explicit about the impact this has on developer productivity. With one customer, Fannie Mae, “there were 2000 developers, 20 to 25% of the time is spent on fixing vulnerabilities.” That is the equivalent of four to five hundred people full time just doing security cleanup.
By eliminating 95 percent of open source risk via gold open source and then automating the compatible fixes, Hasan says they “eliminate about 80 to 85% of developer effort in fixing vulnerabilities.”
You do not have to believe every marketing number to see the direction of travel here. Taken seriously, this is not just shift left. It is let the robot carry the luggage while developers do actual product work. Hasan frames it as “build safe,” not bolt on security as an afterthought, and he is not shy about the role AI plays in making that achievable at scale.
“Of course, we are using a lot of AI to rebuild open source at scale. Autonomous fix, of course, is AI driven, but that is, if you think of it, AI being used for better security.”

Which is a nice segue into the part of the conversation where he shifts from traditional software to the newly forming AI tech stack.
A New AI-Centric Infrastructure And A Completely New Attack Surface
Hasan does not talk about AI like a bolt-on feature. He treats it as a new layer of infrastructure with its own supply chain.
“We have seen this emergence of this new real estate. I call it a new infrastructure, which is AI centric,” he says. That includes “LLMs, MCP servers, agents, agent swarms, super agents,” and the platforms that let not just developers, but business users create powerful automations.
“If you think of the AI tech stack, it is actually entirely different from traditional software tech stack,” he argues. “So we are seeing the emergence of a new software stack.”
With that comes a familiar problem in a new costume. Where does this AI come from, what is it connected to, and who can it talk to?
“Is the LLM that my developer is using, was it derived from DeepSeek? Classic, for example,” he asks. “These MCP servers. So I used to have Salesforce. Salesforce now gave me an MCP server. ServiceNow gave me an MCP server. It now has, now everyone who can talk to that MCP server has access to all my data.”
That last sentence should make any CISO wince. Suddenly, AI agents are wiring themselves into internal systems and data lakes with a few lines of code, and the control plane is, at best, immature.
To understand what CISOs actually needed here, Hasan says they “went and interviewed a whole bunch of CISOs like you,” and the themes were quite consistent.
First, AI is arriving at a speed that governance cannot match. Developers bring in tools, business units adopt low code and no code AI agent builders, and even HR people can now “write an agent.”
“So the first problem is, can you give me visibility into all the AI coming into my organization, and give me the reputation of everything,” he explains.
In the SBOM world, that meant software bills of materials. In the AI world, Hasan is now talking about AI BOMs.
“We went from SBOMs to now AI BOMs,” he says. “So now this is basically your AI BOM with the same rate with the lineage and reputation.”
Second, CISOs know they need policies, but the ground is shifting under their feet.
“We do not know what the right security policies for agent tech AI are,” Hasan says. “Agent to agent communication should be encrypted. Is it? There are new compliance standards like the EU AI Act. You are seeing new threat vectors, new kinds of attacks, prompt injection is famous. We are seeing reasoning compromise, we are seeing LLM poisoning.”
And then there are best practices that sound obvious in hindsight but are rarely enforced:
“Agent or LLM to MCP should never be allowed,” he notes. “LLMs are gossipy. Once they get the data, they will tell you something. Someone will ask it in the right way, and they will tell you. So you should always have an LLM filtered by an agent. PII information should always be masked.”
The punchline from CISOs was clear. Tell us what good looks like, give us policies in categories like threats, best practices and compliance, and then do not just hand us a 40 page PDF.
Because, as Hasan points out, some organizations have already written that policy tome. The problem is enforcing it.
“If you say, PII data should always be masked, how you want to enforce it? How do you know that perhaps you wrote code for it and so on. So enforcing those policies is hard, especially they are constantly evolving.”
Which is where Lineaje’s new product comes in.
Unify: Central Policy Brain For AI, Embedded In The Build And Run Path
Hasan describes Unify as “a central policy manager and implementer for all AI.”
At a high level, it does three things.
First, it discovers AI assets and generates that AI BOM. You point it at source code, IDEs, containers and other environments, and it “will generate all the embedded AI list for you.” That covers LLMs, MCP servers, agents, skills and other components of the modern AI stack.
Second, it lets you allow and block AI components, and recommends policies across the key buckets Hasan mentioned earlier: threats, best practices, compliance, and within those, data, identity, vulnerabilities and more. Organizations can enable or disable recommended policies and add their own existing rules.
Third, and this is the important bit, Unify is not just a dashboard. It is implemented as an MCP server that integrates into your development and CI/CD workflows.
“So you have a central place,” Hasan says. “What it does is it discovers, generates an AI BOM. The second thing it does, it allows you to allow and block, and it also recommends policy. And then what it does is, as agents are being created, it will take every policy, create guardrail equivalents, and insert that in code so developers do not have to.”
Speaker 2 summarizes it succinctly: “Your tool is actively becoming a component of CI/CD for AI.” Hasan’s answer is simple: “Exactly. For AI, but centered around AI because it is a completely new attack surface.”
He is candid about the pattern repeating from the early internet.
“We have a saying now, which says, AI has been made very easy to build. All of us can now build AI applications, but it is amazingly hard to make it run securely,” he says. “We did that with the internet. We said, very easy to use, completely insecure. We are doing it again with AI.”
Unify is intended to make “build AI, run AI” as simple as today, but with security policies actually applied at build time and runtime, rather than bolted on in a panic.

An AI Kill Chain For A Different Class Of Attacks
Traditional cyber frameworks like the MITRE ATT&CK matrix and cyber kill chain were built around endpoint, network and application threats. Hasan argues that AI-enabled attacks behave differently enough that they deserve their own framework.
“As part of Unify, like I mentioned, the fact that we are seeing a completely new threat landscape with new attack,” he says. “So you have prompt injection that did not exist two years ago. I can now compromise the reasoning that my agents go through with LLMs by asking different kinds of questions so it will reason differently. I can get it to learn new things with skills.”
Lineaje has created what they call an AI kill chain, analogous in spirit to the traditional cybersecurity kill chain, but tuned to AI specific attack patterns.
“We built an AI kill chain. We announced that it has 54 techniques, just like the old one,” Hasan explains, “and we have policies for each one of the techniques. So out of the box you get those protections.”
He offers a concrete example. Consider a simple invoice processing agent that ingests PDF invoices and processes them. Straightforward business use case. Now look at it through a threat lens.
“If the PDF has malicious content, it has hidden content, it has obfuscated content, what will the agent do?” he asks. “And the developer who wrote the agent, did they actually know enough to write code, not to just accept a PDF and process it, but to go, here are the ways it can be attacked and write code to clean it up? Likely not.”
Those controls might be buried somewhere in a 40 page policy document, but that does not mean they show up correctly in code. With Unify, when it sees a PDF being ingested into a prompt, “it will basically go and say, okay, here are the 10 policies that should be applied for document cleaning.” The guardrails are inserted automatically.
From the developer’s perspective, Hasan wants this to feel like a security co-pilot that just quietly does the right thing.
“We are basically giving developers a security pilot, a co pilot, but it also writes code. It does it for them, so that they do not have to learn the policies. They can go build their application, and the security guardrails are autonomously injected.”
For CISOs, the appeal is obvious. Instead of trusting that thousands of developers and citizen developers will not only read, but perfectly implement evolving AI security policies, you get a central system that standardizes those protections and enforces them automatically, across both traditional software and the emerging AI stack.

Hasan is unapologetically confident about Unify’s positioning. As he puts it, “there is no equivalence right now, there is nothing equal to it. It is very first to market.”
What CISOs Should Do Next
When asked for his call to action for potential clients, Hasan does not talk about feature checklists. He talks about a mindset.
“The call to action is build secure agent AI applications. If you are interested in that, Unify is the solution,” he says.
For CISOs and security leaders, that translates into a few concrete steps:
- Inventory your software and AI stack reality, not the PowerPoint version.
If you do not already have deep visibility into your open source, third party and first party components, and now your AI components and connections, you are not actually managing risk, you are estimating it. Products in the Lineaje category exist to eliminate that excuse. - Treat SBOMs and AI BOMs as living artifacts, not compliance theater.
The game is not “have a document.” The game is “know what changed yesterday, where risky components live, and which teams own them.” If you cannot answer “where is this disallowed library or model used” in seconds, your future incidents will be slower, louder and more expensive than they have to be. - Use automation to take real work off developers’ plates.
With 20 to 25 percent of developer time going to vulnerability cleanup in some shops, anything that can safely automate compatible fixes and reduce manual toil is worth a hard look. If you can give the business back the effective productivity of hundreds of engineers without hiring, that is real security ROI. - Accept that AI is a new attack surface and engineer accordingly.
Prompt injection, reasoning compromise, LLM poisoning, skill abuse and uncontrolled MCP access are not sci fi. They are today’s bugs and tomorrow’s incidents. Whether it is Lineaje Unify or another platform, you will need a central AI policy brain that can express rules once and enforce them everywhere in the AI development and runtime lifecycle. - Ask vendors for proof they understand AI-native threats.
When someone says they secure “AI applications,” press for specifics. How do they detect hidden content in prompts? Can they model an AI kill chain? Do they handle skills and MCP servers, or are they just slapping a buzzword on existing tooling?
If you are a CISO looking at your 2026 roadmap and wondering how to keep both your software and your rapidly multiplying AI efforts from becoming your next board-level incident, it is worth seeing what players like Lineaje are doing. Gold open source, autonomous fix and an AI-native policy platform are not silver bullets, but they are tangible attempts to make software and AI supply chains more predictable, visible and, frankly, boring again.
In this industry, boring is underrated.
Call to Action for CISOs
If you want to move beyond compliance checkboxes and slideware to a more operationally grounded approach to software and AI supply chain security, consider the following next steps:
- Schedule a technical deep dive or demo focused on your specific pain points around open source risk, vulnerability remediation effort or AI agent governance.
- Ask to see how a tool like Unify discovers AI dependencies in your own repositories or environments, and how its policies map to your existing standards and regulatory requirements.
- Pilot gold open source and autonomous fix workflows with one or two high value, high risk applications and measure the impact on vulnerability backlog and developer time.
The organizations that get ahead of software and AI lineage, policy and automation will be the ones answering board questions with data instead of hand waving. If you want to be in that camp, now is the time to start evaluating platforms that treat supply chain and AI security as first class problems, not bolt ons.
Author’s Note
The author sat down with Javed Hasan, cofounder and CEO of Lineaje, at the 2026 RSAC Conference in San Francisco, held March 23rd to 25th, 2026.
For more information, please visit www.lineaje.com.
About the Author
Pete Green is the CISO / CTO of Anvil Works, a ProCloud SaaS company and co-author of “The vCISO Playbook: How Virtual CISOs Deliver Enterprise-Grade Cybersecurity to Small and Medium Businesses (SMBs)”. With over 25 years of experience in information technology and cybersecurity, Pete is a seasoned and accomplished security practitioner.
Throughout his career, he has held a wide range of technical and leadership roles, including LAN/WLAN Engineer, Threat Analyst, Security Project Manager, Security Architect, Cloud Security Architect, Principal Security Consultant, Director of IT, CTO, CEO, Virtual CISO, and CISO.
Pete has supported clients across numerous industries, including federal, state, and local government, as well as financial services, healthcare, food services, manufacturing, technology, transportation, and hospitality.
He holds a Master of Computer Information Systems in Information Security from Boston University, which is recognized as a National Center of Academic Excellence in Information Assurance / Cyber Defense (CAE IA/CD) by the NSA and DHS. He also holds a Master of Business Administration in Informatics.
