
“Privacy and compliance risks are significant, especially in Europe under GDPR and labor laws, where capturing keystrokes and screen activity may be restricted or require explicit consent,” Jain said. “Security risks increase because training datasets may contain credentials, IP, and sensitive workflows, making them high-value attack targets.”
Gogia said the risks should not be viewed in isolation. “These risks stack. They interact. They reinforce each other,” he said, adding that data gathered for AI training could also be repurposed over time for productivity monitoring or other employment-related decisions.
Jain added that governance could become more difficult because companies may struggle to trace what AI systems have learned from specific employees. Employee awareness of monitoring could also degrade the quality of the training data itself. “People do not behave the same way when they know they are being observed,” Gogia said. Over time, that could mean systems are trained not on how work naturally happens, but on behavior shaped by observation.
