INTERVIEW Enthusiasm among managers to adopt AI tools has outpaced developers’ ability to learn those tools and use them effectively.
Moshe Sambol, VP of customer solutions at software observability outfit Lightrun, told The Register in an interview that he speaks with a lot of companies. Some of the developers in those organizations, he said, are very comfortable with AI tools.
“But the reality is that a lot of developers are much earlier in the curve,” he said. “The expectations of businesses are getting ahead of where the developers are in terms of their mental model and in terms of the training that they’re providing, the enablement they’re providing to make their teams comfortable with the tools, and the rate at which these tools are evolving.”
Sambol said the degree of AI tool adoption varies.
“I absolutely have customers who’ve told their developers, ‘You don’t write code anymore. You review code. No one should write a line of code unless for some reason you failed after three attempts getting GenAI to do it,'” he said. “I have customers like that. I don’t know if I should name them, but absolutely.”
And he said on the other side of the spectrum, there are organizations like banks that are just starting to roll AI tools out due to compliance obligations and traditional industry caution.
“It’s an exciting time to be adopting these tools and learning these tools, but it puts a lot of pressure on the developer,” he said. “It puts this expectation of being more productive.”
Not everyone manages that, and Sambol said he has a lot of sympathy for developers who have been directed to use AI tools without training and organizational guidance. Generative AI models will produce a lot of code quickly, he said, and because the code seems correct initially, it often gets pushed forward.
“If it’s not creating bugs en masse today, it’s just pain waiting to happen,” he said. “The number one question I think we have to be asking developers is, ‘Can you explain that code? Have you validated that the code actually fits in the context of the broader system?'”
Sambol said the answer isn’t necessarily yes or no because developers have different levels of experience and often work on large projects where they focus only on a specific part of the code base. It’s common in enterprises, he said, that no one person will understand the entire system end-to-end, which is why problem resolution often requires a group of people.
The issue he sees is that generative AI systems don’t help bridge that knowledge gap: they don’t provide the context needed to understand all the components involved.
Sambol went on to describe an incident in which a developer was using an AI assistant to build an Ansible automated workflow. “The generative AI was creating the Ansible template for him, which seems like a perfect match – it’s drudge work,” he explained. “And it’s much better at getting the syntax exactly right.”
It worked. And then it stopped working.
“The system that he was deploying to, all of a sudden, he could not get the component up,” Sambol said. “It just wouldn’t start. A process that had been going smoothly for a couple of hours in the morning, now all of a sudden, his service is down and it will not run.
“And he’s pulling his hair out trying to unstitch the day’s work so far to figure out what went wrong, why is the service not working,” he said, adding that the AI agent proved unhelpful by going off in the wrong direction, reinstalling the operating system, and undertaking other ineffective steps to effect repairs.
What happened, Sambol explained, is that earlier in the day the developer had installed the component in a particular way: it ran as a container managed by a systemd service, because it needed direct access to the ports on the device, which ruled out running it under Docker.
“So the AI model re-wrapped it, repackaged it, and deployed it in a different way, but kept the original one running,” he explained. “So it was simply a matter of the fact that the one he had initially deployed was still running and it was blocking the port and the second one couldn’t run.
“It’s a fairly simple, easy-to-understand problem once you see it, but he lost the entire afternoon going down all kinds of dead ends with the AI looking at this, looking at that, because the AI model didn’t remember that it had guided him to deploy the system a certain way earlier in the day.”
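The failure mode in this anecdote is an ordinary port conflict: the first deployment was still listening, so the second could not bind. A minimal sketch of the kind of pre-deploy sanity check that would have surfaced it immediately (the function name and port are illustrative, not from Sambol's account):

```python
import socket


def port_is_free(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if nothing is currently bound to host:port.

    We attempt to bind the port ourselves; if another process (such as a
    still-running earlier deployment) holds it, bind() raises OSError
    (EADDRINUSE) and we report the port as busy.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False


if __name__ == "__main__":
    # Hypothetical: check the service port before redeploying.
    if not port_is_free(8080):
        print("Port 8080 is already in use - is an old instance still running?")
```

Running a check like this before each redeploy turns an afternoon of dead ends into a one-line diagnostic, regardless of whether a human or an AI agent repackaged the service.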
Sambol said various studies show a significant percentage of AI-generated code contains errors and creates technical debt.
That’s not to say human developers are without fault; Sambol said they have their own weaknesses. Many companies, he noted, have offshored or globally distributed development teams, so there’s a lot of variation in output. He argues that it’s important to acknowledge that imperfection and work toward processes that improve results.
One way to do that is to automate the prompting process so it becomes repeatable. “When you do that, you identify where you’re starting to get good results and you don’t expect everybody to come up with a well-structured long prompt.”
Sambol added, “I think these tools are absolutely getting better. And so I’m reluctant to call any of them junk or deeply flawed. They’re getting better shockingly rapidly. If you can take advantage of a couple of different ones – with a human being in the loop – then you are more likely to get output that is at least as good as you were getting before.” ®