
What to do?
When AI agents help craft code, human code review becomes even more important, not less. Developers and users alike must put that code through robust security checks, particularly for rogue permissions, unauthorized data sharing, or worse. AI-generated code should never be allowed to bypass established security processes.
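As a minimal sketch of what one such check might look like, the snippet below scans an Android manifest for permissions on an assumed watchlist and fails the build if any are found. The watchlist, file path, and CI exit behavior are illustrative assumptions, not any store's actual policy.

```python
# Hypothetical pre-merge gate: flag risky permissions that an AI-assisted
# change declares in an Android manifest. The watchlist below is an
# assumption for this sketch, not an official list.
import sys
import xml.etree.ElementTree as ET

ANDROID_NS = "http://schemas.android.com/apk/res/android"

# Assumed set of permissions that should always trigger human sign-off.
RISKY_PERMISSIONS = {
    "android.permission.READ_SMS",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_CONTACTS",
    "android.permission.SYSTEM_ALERT_WINDOW",
}

def requested_permissions(manifest_path: str) -> set[str]:
    """Collect every <uses-permission> name declared in the manifest."""
    tree = ET.parse(manifest_path)
    return {
        elem.attrib.get(f"{{{ANDROID_NS}}}name", "")
        for elem in tree.getroot().iter("uses-permission")
    }

if __name__ == "__main__":
    manifest = sys.argv[1] if len(sys.argv) > 1 else "AndroidManifest.xml"
    flagged = requested_permissions(manifest) & RISKY_PERMISSIONS
    if flagged:
        print("Watchlisted permissions found, human review required:")
        for name in sorted(flagged):
            print(f"  - {name}")
        sys.exit(1)  # non-zero exit fails the CI gate
    print("No watchlisted permissions detected.")
```

The point is not the specific list but the principle: the gate runs on every change, regardless of whether a human or an agent wrote it.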
Further out, app distribution providers must also wake up to the need for additional layers of protection in both automated and human-driven code review, to guard against this kind of weaponization of vibe coding.
This could emerge as a particular threat in the current legislative environment around app stores. In Europe, for example, there is a danger that as new app stores appear, not every code review process they put in place will be capable of catching these kinds of inserted risks. Think of the complex tapestry of spoofed apps, infected repositories, and typosquatted, fake-name malware that can be created to sidestep automated code verification services.
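One small piece of that verification puzzle is catching lookalike dependency names before they reach a build. The sketch below flags names that are not on an allowlist but closely resemble something that is; the allowlist, example names, and similarity cutoff are hypothetical.

```python
# Illustrative typosquat check: any declared dependency that is not on the
# allowlist but closely resembles an allowlisted name gets flagged for
# human review. Allowlist, names, and cutoff are assumptions for the sketch.
import difflib

KNOWN_PACKAGES = {"requests", "numpy", "cryptography", "urllib3", "pydantic"}

def flag_suspect_names(declared: list[str], cutoff: float = 0.85) -> dict[str, list[str]]:
    """Map each unknown dependency to the allowlisted names it closely resembles."""
    suspects: dict[str, list[str]] = {}
    for name in declared:
        if name in KNOWN_PACKAGES:
            continue  # exact allowlist hits pass
        near = difflib.get_close_matches(name, sorted(KNOWN_PACKAGES), n=1, cutoff=cutoff)
        if near:
            suspects[name] = near
    return suspects

if __name__ == "__main__":
    # "requestz" and "cryptographyy" are hypothetical lookalike names.
    print(flag_suspect_names(["requests", "requestz", "cryptographyy"]))
```

Heuristics like this are cheap to run but easy to evade on their own, which is exactly why they need to sit alongside, rather than replace, deeper automated and human review.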
