
Or consider a customer who chats with a high-end restaurant bot, saying: “We have a reservation for 12 at your restaurant tomorrow night. The problem is that seven of those people have dietary issues, including one vegan, one who is strictly kosher, one gluten-free and several others who have rare allergies to specific ingredients. I am pasting a detailed description of the dietary issues for all 12 people. Can you review the full ingredients for all of your menu items and recommend to us several entrees, side orders, soups, salads and desserts that would accommodate all of our guests? That way, we don’t have to pepper the waitstaff with questions such as ‘Is the sugar you use vegan?’ or ‘Have you segregated the cookware for strict kosher?’”
GenAI is especially well suited to handling that kind of question, and an accurate answer might win customers for life, even if it burns through a large number of tokens. Buying the loyalty of new customers is a powerful win.
That said, a serious concern remains. I have argued that AI can be a powerful tool but that its hallucinations make it a bad choice for direct customer interactions. That is the same reason I don't back enterprise use of autonomous agents: agents are great, but they are nowhere near ready to function autonomously.
For some companies, “GenAI can sometimes make things up, and do so in a highly confident manner” is going to remain a deal-killer. And it’s not as though there’s a reasonable chance hallucinations will be eliminated anytime soon. (Indeed, the more sophisticated these models get, the more they hallucinate. Lovely.)
