
“If an LLM is just handling public data, it is fine. But if it is processing data like client records, internal documents, financial data, etc., then even a small leak matters. The bigger worry is for companies that run their own AI models or connect them to cloud APIs. Like banks, healthcare, legal firms, defence, where data sensitivity is too high,” Dhar said.
While it is the AI providers that will have to address the issue, Microsoft researchers’ recommendations include avoiding discussing highly sensitive topics over AI chatbots when on untrusted networks, using VPN services to add an additional layer of protection, opting for providers that have already implemented mitigations, and using non-streaming models of large language model providers.
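For developers calling an LLM over an API, the last of these recommendations can be applied per request by asking for the full response in one payload instead of a token-by-token stream, so an observer on the network cannot read meaning into the size and timing of individual packets. Below is a minimal sketch assuming the OpenAI Python SDK; the model name and prompt are illustrative, and other providers expose an equivalent switch.

```python
# Minimal sketch: request a complete (non-streamed) response so the reply
# arrives as a single payload rather than a token-by-token stream whose
# packet sizes and timing could be observed on the network.
# Assumes the OpenAI Python SDK; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarise this contract clause."}],
    stream=False,  # the default, stated explicitly to avoid streamed output
)

print(response.choices[0].message.content)
```

The trade-off is responsiveness: the client sees nothing until the full answer is ready, which is why chat interfaces default to streaming in the first place.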
Dhar pointed out that most AI security checklists do not even mention side channels yet, and said CISOs need to start asking their teams and vendors how they test for these kinds of issues.
