In fact, NIST has already finalized a set of post-quantum encryption standards designed to resist attacks from quantum computers and has urged system administrators to start transitioning to them “as soon as possible.”
Some tech providers are aware of the risks and are moving quickly, but most enterprises, especially those with legacy systems, will require time, planning, and new capabilities to make the switch.
“To begin the journey toward quantum resistance, start small but start now,” Soroko says. “Automating things like digital certificate renewals is a low-hanging fruit that builds momentum and prepares your IT infrastructure for bigger shifts like quantum-safe encryption.”
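To make the certificate-renewal automation Soroko mentions concrete, a scheduled renewal job is one common pattern. Below is a minimal, illustrative sketch assuming an ACME client such as certbot already manages the certificates; the schedule and the reload hook are assumptions, not a prescribed setup:

```shell
# Illustrative crontab fragment: attempt renewal twice a day.
# certbot only renews certificates that are close to expiry,
# so running it frequently is safe and cheap.
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```

Putting renewals on a schedule like this removes manual certificate handling, which is the kind of groundwork that makes a later swap to quantum-safe algorithms far less disruptive.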
We must allow law enforcement to break end-to-end encryption to keep us safe
Some governments around the world are looking to pass legislation that would allow law enforcement agencies to intercept, store, and even decrypt instant messages exchanged through applications such as WhatsApp, Telegram, and Signal.
Some of these proposals mandate client-side scanning on citizens’ devices, “effectively breaking the promise of end-to-end encryption,” says Sabina-Alexandra Stefanescu, an independent security researcher. “The pushback against such laws from the civic society and security experts alike stands on firm principles: every individual has an inalienable right to privacy,” she adds.
In countries where journalists or human rights activists can face retaliation for their work, encrypted messaging and file storage “are the last bastions at their disposal in order to conduct their investigations,” according to Stefanescu.
The independent researcher argues that allowing law enforcement to decrypt messages “can make every person vulnerable and every device less secure.”
Deregulating generative AI is necessary to drive innovation
While some believe that loosening regulations around generative AI would unleash a new wave of innovation, others argue that the opposite is true: we need stronger safeguards in place.
Stefanescu points to the AI Incident Tracker, an MIT-led initiative that documents real-world harms caused by AI systems. The data gathered by the researchers show a steady rise in concerning cases over the past few years, with the most significant surge linked to the spread of misinformation and the actions of malicious actors.
“We chose to believe a mythical image of GenAI as a technology that is sure to evolve into a state that can do no harm, even while all evidence points to the contrary,” Stefanescu adds.