
I don’t know about you, but I spend a lot of time offline. And not by choice. That’s why I love new tools that work offline like the great one Google just launched.
I know, I’m an outlier. As a full-time digital nomad who travels constantly, I have unusual connectivity problems. Right now, I’m living on a farm in Tuscany. It’s amazing. I love it. But for two days recently, the connectivity got so bad I could barely work. There was little I could do except drink Chianti and gaze at the rolling green hills. (On Easter Sunday and the day after — a local day off — everybody was at home stressing their internet connections, which made connectivity close to impossible.)
I often find myself in this position. My wife and I tend to favor old houses in old neighborhoods, usually in Europe or Latin America, and the connectivity can be bad to nonexistent.
I lose connections while driving, when in or near very old stone buildings, while flying in airplanes, and when passing through remote areas.
But even for people who don’t travel and move around as I do, being offline can also be a choice. It’s much more secure to disconnect, especially in public spaces like coffee shops and airports, and when using one of the many untrustworthy cloud-centric companies. Sometimes you desperately need to save battery life. Sometimes it can feel psychologically healthy to know you’re offline.
Tools can and should work better offline. I have an expensive iPhone that would have been considered a supercomputer just 10 years ago. A modern smartphone is powerful enough to do a lot of the work that’s currently performed in the cloud.
Cloud computing is necessary for chatbots like ChatGPT, Perplexity, Claude and Gemini because all-purpose AI models require hundreds of billions of parameters, massive amounts of RAM, and huge amounts of electricity to stand ready to do anything and everything very quickly. Forcing these workloads onto a mobile device fundamentally caps the intelligence and capability of general-purpose AI. But individual, narrowly scoped tasks (like transcription) don’t require massive data centers.
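To see why general-purpose models stay in the cloud while single-task models fit on a phone, here’s a rough back-of-envelope sketch. The parameter counts and quantization levels below are illustrative assumptions, not any vendor’s published specs:

```python
def model_ram_gb(params: float, bits_per_param: int) -> float:
    """Approximate RAM needed just to hold a model's weights, in gigabytes."""
    return params * bits_per_param / 8 / 1e9

# A chatbot-scale model (hundreds of billions of parameters at
# 16 bits per weight) -- hypothetical numbers for illustration:
cloud_model = model_ram_gb(400e9, 16)   # 800.0 GB: data-center territory

# A small task-specific model (a few billion parameters, quantized
# to 4 bits) of the kind that can run on a modern smartphone:
edge_model = model_ram_gb(4e9, 4)       # 2.0 GB: fits in phone memory

print(f"cloud: ~{cloud_model:.0f} GB, edge: ~{edge_model:.0f} GB")
```

Even this crude math (which ignores activations, context caches and runtime overhead) shows a gap of hundreds of gigabytes, which is why a dictation model can live on your phone while an everything-chatbot can’t.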
The biggest problem for me involves two of the tools I use most: MyMind and Lex.
I wrote about MyMind in August. It’s a lifelogging, bookmarking, remember-everything tool that makes recalling information very fast. It uses AI to auto-tag entries and takes the work out of both saving and retrieving them.
Unfortunately, without a connection, I lose MyMind. It simply has no offline capability. So when I’m disconnected and want to save or recall something, I can’t. The more I rely on this prosthetic memory tool, the more being offline gives me amnesia. This is my biggest complaint about MyMind.
I’ve also told you about Lex. Lex is essentially a word processor with built-in AI tools designed not to write for you (and make you worse at writing), but instead to point things out and advise you in ways that make your writing better.
Lex also doesn’t work offline. Which is a shame, because its major alternatives like Google Docs and Apple Pages do. You can simply use them offline, and later when you get a connection they sync to the cloud. Lex’s lack of offline support is the main reason I often think about cancelling my subscription and going back to Pages. (Note that I use a Bluetooth keyboard with my phone to do real writing of columns, newsletters, blog posts and even books.)
Both MyMind and Lex use AI, and I expect that in the very near future we’ll see a shift away from all-purpose chatbots toward smaller, special-purpose AI tools like these running on the edge or on our phones.
One great example of this shift is a new tool from Google called AI Edge Eloquent.
Talk to the handheld
Google launched its free, iOS-only, English-only offline dictation app on Monday. While dictation doesn’t sound very interesting, Google has built in several features that make it really great.
First, it uses AI, with Gemma-based speech recognition models running locally on the phone. It doesn’t just capture what you say, but what you meant to say: it ignores your ums, ahs and repetitions, capturing only the clean words you intended. (If you toggle on cloud processing, it works even better.) It’s also very good at adding punctuation automatically.
When you’re done talking, the app automatically copies the clean text to the clipboard. That means you can talk to the app, then switch over to your word processor, social media app, email app or any other app and simply paste in the results.
The app can re-write your transcripts using one of four default style options:
- Key points (condenses speech into a bulleted list)
- Formal (shifts the text into a professional tone)
- Short (summarizes the message)
- Long (expands on the initial text)
(For most writing, I don’t recommend these kinds of stylistic shortcuts; I recommend communicating in your own style.)
After you dictate something, you can press a stop button or a pause button. This is a great pair of choices because if you’re working on a longer piece, the pause button lets you gather your thoughts, do a bit of research, then resume, ending up with the whole screed in the clipboard.
The most surprising feature is that it can learn custom words. It learns from your edits, from words you add manually or — wait for it — from your Gmail conversation history (a button asks your permission, and you must explicitly log in to Gmail). The Gmail option brings in not only jargon, but also names, brand names you’ve talked about, abbreviations, foreign words, place names and more.
And, finally, the app prominently displays “usage stats,” including word counts, average dictation speed in words per minute, the total number of words dictated, and the total number of “polishing edits” made by the app.
AI Edge Eloquent sherlocks Wispr Flow and Willow, which each cost $15 per month. It also sherlocks SuperWhisper, priced at $85 per year. (In Silicon Valley parlance, “sherlocking” is when a major company copies a major feature of a competitor’s product, thereby rendering the competitor’s product obsolete.)
In short, AI Edge Eloquent is kind of perfect and extremely useful for anyone who wants to dictate anything.
The slow rise of offline AI
I’m seeing a few other tools emerge that are based on the idea that AI should be on the edge and offline.
One interesting new tool released this week is called WarClaw from a Bellevue, WA-based startup called Edgerunner AI. The company calls the tool a “digital adjutant” (an adjutant is a military officer who serves as an assistant to a military commander).
The company claims WarClaw was built by former soldiers for use by active-duty military personnel. It’s a secure operating layer built on top of OpenClaw, according to the company. (I talked about OpenClaw earlier this year, as did my colleague Steven Vaughan-Nichols, who explained how incredibly insecure OpenClaw is.)
The software is designed to work during combat in what the military calls DDIL settings (denied, disrupted, intermittent, and limited bandwidth).
WarClaw runs on a disconnected mobile device and was trained on specific military data. It automates mission planning, scheduling and information analysis. Surprisingly, it can directly control office tools like Microsoft Word, PowerPoint, Excel, Slack, web browsers and email.
The company has already won contracts to supply WarClaw to three US military branches.
While WarClaw is for soldiers, I think business people could benefit from such a tool. For example, it would be great to have an offline assistant while traveling on business to data-insecure places (like China) and environments (like airports).
I’d love to see nearly all the AI jobs that currently require a connection turned into apps that run locally, disconnected, on the phone. Beyond the obvious convenience, that also represents a big opportunity for Google and Apple: they can match their AI tools to increasingly powerful smartphones, giving phone buyers a powerful reason to upgrade their hardware more frequently.
AI disclosure: I don’t use AI for writing. The words you see here are mine. I do use a variety of AI tools via Kagi Assistant (disclosure: my son works at Kagi) — backed up by Kagi Search, Google Search and phone calls — to research and fact-check. I use a word processing application called Lex, which has AI tools, and after writing I use Lex’s grammar-checking tools to find typos and errors and to suggest word changes. Here’s why I disclose my AI use and encourage you to do the same.
