
If you haven’t heard of Arm, you haven’t been paying attention to how ubiquitous the chipmaker has become. Arm’s processor designs power Macs, iPhones, and every other major smartphone line. Queries made through ChatGPT, Gemini, or Claude pass through an Arm-based chip at some point.
For more than 40 years, Arm’s focus was on chip design. Major device and AI chip makers then licensed those designs and turned them into hardware.
But the company’s focus is changing: Arm is now building hardware around its own AGI CPU, which OpenAI and Meta will use, putting the chipmaker in direct competition with the likes of Apple, Intel, Nvidia, Amazon and Google.
Arm envisions its new Performix software suite using “recipes” and AI insights to help engineers identify suspect code and CPU hotspots.
Alex Spinelli, who leads Arm’s software initiatives as senior vice president for AI and developer platforms, is as AI-native an engineer as you’ll find; he played a central role in the TensorFlow stack used to launch Gemini and was on the team at Amazon that developed Alexa.
Computerworld sat down with Spinelli to get his views about the ongoing shift in software and engineering driven by AI, and how engineers can keep up with the fast pace of change.
How does your group support Arm’s shift to building its own hardware? “Our mission is to enable application developers to take full advantage of Arm hardware the day it’s released. That’s the exciting part of the AGI CPU.”
How is software engineering itself changing? “What we’ve seen through computing, going back 60, 70 years, is a gradual progression to higher-order abstraction. You started with punch cards, assembly, low-level languages, higher-order languages, interpreted languages. We’re entering the era of human language becoming the language of programming…. Now, English is the highest-level language.
“Programming doesn’t go away, engineering doesn’t go away. The way we express it is going away.”
Where does this transition leave today’s software engineer? “Engineering is moving to a much greater blending of technical product management thinking, design thinking, and architecture thinking in a different programming model where I’m using natural language to create my programs.
“As an engineer, embracing and understanding where you sit in that tool chain becomes really important. Where AI rubber really hits the road is with agents. Agents use a lot of AI and agents are software.”
How is engineering structured in this new model? “Thinking about how I structure that application stack requires a lot of experience and know-how.
“[For example], I have an OpenClaw instance installed in the cloud that I use to build out my hobby and side projects. I have 15 or so small models, embedding models, SLMs — all running on CPU within my agent application framework.
“Then I’m selectively calling out to different foundation models, fast, low-cost ones like Haiku or Flash, and foundation models like ChatGPT 5.5 for the most important reasoning problems. That is engineering.”
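Spinelli is describing a tiered routing pattern: small models stay local and cheap, and only the hardest problems escalate to a frontier model. Below is a minimal sketch of that idea; the model classes, the score_difficulty heuristic, and the thresholds are illustrative assumptions for this article, not OpenClaw or Arm APIs.

```python
# Sketch of tiered model routing: cheap local models handle routine
# calls, and only hard reasoning problems go to a large hosted model.
# All names here (LocalSLM, HostedModel, score_difficulty) are
# illustrative assumptions, not any real framework's API.
from dataclasses import dataclass
from typing import Protocol


class Model(Protocol):
    def generate(self, prompt: str) -> str: ...


@dataclass
class LocalSLM:
    """A small language model running on CPU inside the agent framework."""
    name: str

    def generate(self, prompt: str) -> str:
        return f"[{self.name}] local answer to: {prompt[:40]}"


@dataclass
class HostedModel:
    """A hosted foundation model, called selectively because tokens cost money."""
    name: str
    cost_per_call: float

    def generate(self, prompt: str) -> str:
        return f"[{self.name}] deep reasoning on: {prompt[:40]}"


def score_difficulty(prompt: str) -> float:
    """Crude stand-in for a real difficulty classifier (often itself an SLM)."""
    hard_markers = ("prove", "design", "trade-off", "architecture")
    return 0.9 if any(m in prompt.lower() for m in hard_markers) else 0.2


def route(prompt: str, local: Model, fast: Model, frontier: Model) -> str:
    difficulty = score_difficulty(prompt)
    if difficulty < 0.3:
        return local.generate(prompt)      # embeddings, classification, glue
    if difficulty < 0.7:
        return fast.generate(prompt)       # fast, low-cost hosted tier
    return frontier.generate(prompt)       # reserved for the hardest reasoning


if __name__ == "__main__":
    local = LocalSLM("cpu-slm")
    fast = HostedModel("fast-tier", cost_per_call=0.001)
    frontier = HostedModel("frontier-tier", cost_per_call=0.05)
    print(route("classify this log line", local, fast, frontier))
    print(route("design the architecture for a payments service", local, fast, frontier))
```

The design choice is the point: the router itself is plain software, which is what Spinelli means when he says "that is engineering."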
What do you tell engineers on your team about the future of their careers? “I have hundreds of software engineers on my team. The future of engineering is embracing this new model and not trying to fight it.
“For new entrants out of college and [with] master’s degrees, I don’t know what the right mix of learning is yet. AI tooling is a power tool for mid-to-senior engineers to embrace.
“If you look at the biggest technical innovations in the world — electricity, assembly line, railroad — they’re automations. When you radically reduce the cost of production of something, humans in history have not used less of it. People are finding new roles and new businesses are launching.”
How do you view the “death of the engineer” predictions? “I’ve always had a chip on my shoulder because I didn’t do a [computer science] degree. That drove me to go deep…into assembly, into how memory works. Even as LLMs become the new compiler that processes natural language into tool calls and Java or Python, those fundamentals matter.
“Think of an LLM like the smartest, most informed, overconfident, eager, arrogant recent MIT master’s grad. They know every language, but they would need a senior engineer to guide and help them. The importance of great engineers has been elevated. AI needs that guidance.
“We also need to dust off agile skills. Now we’re shifting back to applications and agents, where things change every week.”
Should new developers learn the tools first, or go to school for fundamentals? “I came out at a time when so much was changing. I went deep. I started looking at assembly. That deep understanding, especially in an era of high-level languages — [with] English … the highest level language — is always valuable.
“Even when I’m working with my agent, really knowing how a computer works has never not been valuable. You might never write C++ or C code, but fundamentally understanding what’s happening is really important. There are mistakes, there are corners cut. AI loves to roll its own libraries and not use tried-and-true best practices that understand the quirks of a particular system.
“Do you need formal education and training? I don’t know. There are so many ways to educate yourself if you’re motivated. Go deep, understand how computers work, understand what a compiler is. It’ll pay dividends.”
What are the biggest pitfalls you see for engineers today? “Cost is a big one. Tokens are expensive. In my OpenClaw, when I had it configured wrong, I got a bill for $500 in one weekend, and I was like, what the hell happened here? There’s no free lunch. In economics, rents are extracted wherever they’re available.
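The kind of guard that would have caught that $500 weekend can be a few lines of code: a hard budget checked before every model call. The pricing figure and the call_model stub below are assumptions for illustration, not OpenClaw configuration.

```python
# Hedged sketch of a hard spend cap on agent model calls. The price
# per 1k tokens and call_model are assumed values for illustration.
class BudgetExceeded(RuntimeError):
    pass


class SpendGuard:
    def __init__(self, budget_usd: float, usd_per_1k_tokens: float):
        self.budget_usd = budget_usd
        self.usd_per_1k_tokens = usd_per_1k_tokens
        self.spent_usd = 0.0

    def charge(self, tokens: int) -> None:
        """Refuse the call up front if it would blow the budget."""
        cost = tokens / 1000 * self.usd_per_1k_tokens
        if self.spent_usd + cost > self.budget_usd:
            raise BudgetExceeded(
                f"call would cost ${cost:.2f}, only "
                f"${self.budget_usd - self.spent_usd:.2f} left"
            )
        self.spent_usd += cost


guard = SpendGuard(budget_usd=20.0, usd_per_1k_tokens=0.015)  # assumed price


def call_model(prompt: str, estimated_tokens: int) -> str:
    guard.charge(estimated_tokens)           # stop runaway spending here
    return f"model response to: {prompt}"    # stand-in for the real API call
```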
“Security is another major pitfall. The challenges are less inherent to the frameworks themselves and more about what people are doing with them… putting passwords and tokens in clear text. You see a response from the industry like NeMo Guardrails, which is really a layer on top from Nvidia to push security policies.
“My advice to enterprises: don’t try to standardize too quickly across one model, but don’t allow the full Wild West either. You need to institutionalize your policies into your agent frameworks.”
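In practice, “institutionalizing your policies into your agent frameworks” can mean a policy gate that every outbound tool call must pass. The sketch below is a rough illustration of that idea; the secret patterns and the call_tool helper are hypothetical, not drawn from NeMo Guardrails or any specific product.

```python
# Illustrative policy gate: every outbound tool call is checked for
# clear-text credentials before it leaves the agent framework.
# Patterns and helpers are assumptions for this sketch.
import re

# Patterns that commonly indicate leaked credentials in agent payloads.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # private key block
    re.compile(r"(?i)(password|api[_-]?key|token)\s*[:=]\s*\S+"),
]


class PolicyViolation(RuntimeError):
    pass


def check_payload(payload: str) -> None:
    """Raise before the payload ever reaches an external tool."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(payload):
            raise PolicyViolation(f"blocked: payload matches {pattern.pattern!r}")


def call_tool(tool_name: str, payload: str) -> str:
    check_payload(payload)           # the institutionalized policy
    return f"{tool_name} executed"   # stand-in for the real dispatch


if __name__ == "__main__":
    print(call_tool("http_post", "fetch the quarterly report"))
    try:
        call_tool("http_post", "password: hunter2")
    except PolicyViolation as err:
        print(err)
```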
What does the future of the AI-built world look like? “We’re moving toward fast software, similar to fast fashion. When you radically reduce the cost of production, humans in history have not used less of [what’s being produced].
“You’re going to have disposable software. We’re going to build things quickly. If they don’t quite work, that’s okay. The agent remembers how to do it. I’ll just rebuild it.
“But we have to accept a different kind of failure. Things might fail hilariously or catastrophically, and then we’ll fix it in an automated way.
“My target is that every engineer has an expert sidecar agent and a swarm of agent developers they can lean on. You use Claude Code or Codex or Gemini to spin up agents, each with a specific role…designer, architect, coder, tester. Research says when you bind an agent to a role with procedures, policies, and standards around it, and you allow those agents to interact, the outputs are orders of magnitude higher quality than leaning on a single agent.
“We’re looking at literally 10xing the ability for our engineers to produce. We’re not looking for cost savings. We’re looking to do more, because there’s so much more to do.”
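At its simplest, the role-bound pattern Spinelli describes means pinning each agent to a system prompt that encodes its role’s procedures and chaining the agents’ outputs. A rough sketch follows, with a stubbed run_llm standing in for a real backend such as Claude Code, Codex, or Gemini; the Agent class and pipeline are assumptions for illustration.

```python
# Minimal sketch of a role-bound agent swarm: each agent is bound to
# one role with its own procedures, and an artifact passes through the
# chain designer -> architect -> coder -> tester. run_llm is a stub.
from dataclasses import dataclass


def run_llm(system_prompt: str, task: str) -> str:
    """Stand-in for a real model call (Claude Code, Codex, Gemini, etc.)."""
    return f"({system_prompt.split(':')[0]}) output for: {task}"


@dataclass
class Agent:
    role: str
    procedures: str  # the policies and standards the role is bound to

    def work(self, task: str) -> str:
        system_prompt = f"{self.role}: follow these standards. {self.procedures}"
        return run_llm(system_prompt, task)


def swarm(task: str, agents: list[Agent]) -> str:
    """Each agent refines the previous agent's output within its own role."""
    artifact = task
    for agent in agents:
        artifact = agent.work(artifact)
    return artifact


pipeline = [
    Agent("designer", "produce a UX spec; no implementation detail"),
    Agent("architect", "component-based, modular design; name interfaces"),
    Agent("coder", "implement to the spec; idiomatic, tested code only"),
    Agent("tester", "write failing cases first; report defects, not fixes"),
]
print(swarm("build a billing reconciliation agent", pipeline))
```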
How do you make projections when AI changes every week? “You need diverse opinions, people with different ways of thinking. The tried-and-true…component-based, modular-based architectures…user-centered design, service-oriented design…are super important. You need the ability to flex and bend.
“I subscribe to: Think ahead, but don’t future-proof, because often you’re going to assume something that needs to change. The pace is new.
“We almost went away from agile in the industry. Resurfacing those principles…ends up being pretty important now because stuff’s changing.”
