Enough with the AI FOMO, go slow-mo, says Domo CDO

A brown snail inches along a wooden plank resting on neatly stacked coins, illustrating the concept of slow progress.

Chris Willis, chief design officer and futurist for data platform biz Domo, wonders why people aren’t more annoyed with AI companies.

Willis said he was in San Francisco a few weeks ago and he couldn’t fathom the lack of resentment.

“Why aren’t people more resentful that these companies have pushed this technology upon them, and now everyone is feeling a tremendous amount of anxiety?” he told The Register in an interview. “I’m sure you’ve seen the surveys and the research. Everyone from the C-suite on down feels like the clock is ticking and their careers are on the line.”

San Francisco is the home of OpenAI and Anthropic. Google, Microsoft, and Amazon are also in town. So there’s a lot of self-interested AI enthusiasm in the city by the bay.

The resentment is there if you look beyond the billboard evangelism shouting its way down the US 101 corridor that connects the city to Silicon Valley proper. But the existential dread behind Stop AI, Pause AI, Poison Fountain, and the firebombing of OpenAI CEO Sam Altman’s home isn’t quite what Willis has in mind.

He’s concerned with the way AI has been marketed through fear – act now or be left behind by this technology that might just take everyone’s job and enable DIY biological weapons, now that LLMs can more reliably count the number of “r”s in “strawberry.”

“Fear,” he said, “is not a durable strategy for innovating.”

The problem as Willis sees it begins with the fact that AI models are a product without a spec.

“When you’re trying to create a product and you’re trying to figure out how that product fits in the market, you have to figure out who it’s for and what it’s going to do and what it’s not going to do,” he said. “And these large language models, essentially the feature spec is: ‘It’ll do anything for anyone, anyway, anyhow, in any language.'”

So it’s not surprising, he said, that there’s some confusion.

“From a leadership perspective, we’ve seen many times the pattern where there is a lot of pressure for companies to suddenly innovate with a technology that’s not well understood,” he said. “And so organizations are spending a lot on buying these AI tools and then expecting innovation to just happen. And that’s not usually how innovation works.”

What company leaders face, he said, is not an innovation problem but an impatience problem.

“They’re thinking, ‘We have to do something now,’” he said, “and so AI in many ways is becoming a sort of theater. We have to show that we’re doing something.”

The phenomenon known as “tokenmaxxing” – buying access to AI models and directing or expecting employees to use them as much as possible – illustrates the lack of strategy, Willis said.

“In certain organizations where AI is theater and impatience is driving rather than innovation, tokenmaxxing is a convenient way to feed that narrative,” he said. “But it doesn’t change anything. The research does suggest that you might have people putting through a lot of tokens and maybe they are personally becoming more productive. But it’s not changing the bottom line.”

The deeper problem, he said, is that companies are treating AI itself as a solution rather than as a tool to help power the solution.

The result is a lot of proof-of-concept projects that lack what’s required to make them durable, trustworthy, and deployable at scale. Starting with business needs first is essential, Willis argues.

“If you don’t understand the process and the automations and the workflows in your business, you run the risk of putting in a very powerful engine that’s going to drive your business way faster, but with the lights off, at night,” he said.

Willis suggests companies should not set moonshot goals for AI, but instead start with something simple, like automating processes tied to a spreadsheet.

He described work done with one customer that involved developing an app to go through company invoices, check for discrepancies, and surface anomalies for review by a person. The client was thrilled.

Understanding where human judgement is required, and where decisions can be verified and hence automated, is key, he said. “Usually that question is not asked.”

Failing to ask questions like that invites problems. Willis pointed to the way that Swedish fintech biz Klarna replaced customer service staff with AI, only to reverse course and replace the AI with people.

“It’s very enticing to say we’re just going to replace everything with a chatbot,” he said. “Frankly, no customer ever just wants to talk to your chatbot.”

Willis said there’s no magic for innovating. Companies need to do the hard work of understanding how AI may or may not be useful for the desired outcome.

“There will be a reckoning when it comes to budgets around these things,” he said, “because CFOs are starting to ask, ‘Why are we spending all this money and not gaining anything?'” ®
