
A second path had buy-in from only about 35% of enterprises, but it generated the most enthusiasm. It's the use of an online AI tool that offers more than a simple answer to a question, something closer to an "interactive AI agent" than a chatbot. Two capabilities got all the attention: one generates a detailed, almost scholarly, research report, and the other does a multi-source analysis of documents and produces an audio summary or podcast. The specific tools cited most often are Google's Gemini Deep Research and NotebookLM.
What all users like about the research-report option is the depth of content and the abundance of references, and a bit over half said the reports are often insightful in themselves. I tried out Deep Research, asking for a report on the network equipment market since 1980 and an identification of the main factors driving its future. The result traced the market's evolution as I recalled it as an analyst through that period, and it identified AI and the HPE/Juniper deal as the most significant developments, which mirrors my own views. All this, over twenty pages with references, from a single-sentence prompt.
Only one-fifth of the users tried the audio output option, but it actually got the most enthusiastic comments. Sales, marketing, and product planning teams absolutely loved the ability to generate a two-host podcast analysis of multiple sources, using it, for example, to compare their own material with that of competitors. A few also tried creating a market report like the one I described, then feeding in a competitor's material, or even the transcript of a competitor's earnings call, and asking for a "podcast" comparing the two.
What tended to get people excited about this is the value of audio material in training and explaining. One enterprise had a project to build a sales training program from the tools, and another had a project to prep salespeople for calls by giving them literal talking points. Could you get something like this without an AI tool? Sure, but it would take a lot of time and effort, and some enterprises noted that the AI tool produced an "objective" output, where human-authored material often has a bias in favor of the company's own positioning. That makes the AI tool useful in uncovering material that doesn't paint the company's products and services in the best light. I tried this too, and found the results to be uncannily realistic and totally absorbing.
The only issue users mentioned regarding these AI tools is copyright. About half said they would be wary of taking the output public, because they believe courts take a skeptical view of whether AI-generated material can be copyrighted at all, because they're concerned their AI tool might be accused of infringing someone else's copyright, or both. However, this didn't impact internal use, and some said they had external uses their legal teams had accepted.
What about errors? Did AI turn out some trashy results in either or both examples? Not often. Fewer than 10% of those who tried either of these AI-tool applications said they ever had to discard the results, and about half said that when there was a problem, it came down more often to careless wording of the prompt than to an error in the AI's analysis. I found no serious issues in my own tests, and I truthfully have to wonder whether the output couldn't have been used as-is in place of the kind of thing you might ask an analyst to produce.
