Companies exploring automated workflows would be well advised to keep their AI agents on a short leash.
Microsoft researchers have found that even the priciest frontier models introduce errors in long workflows, the very thing for which AI software has been pitched.
Anthropic, for example, says, “Claude Cowork handles tasks autonomously. Give it a goal and Claude works on your computer, local files, and applications to return a finished deliverable.”
Redmond promotes similar usage, touting Microsoft 365 Copilot’s ability to “Tackle complex, multistep research across your work data and the web.”
The Windows maker’s scientists aren’t so sure about that.
Philippe Laban, Tobias Schnabel, and Jennifer Neville from Microsoft Research set out to study what happens when large language models (LLMs) are asked to complete multistep tasks.
They recently published their findings in a preprint paper with a spoiler title: “LLMs Corrupt Your Documents When You Delegate.”
To test how LLMs handle long-running knowledge work tasks, the researchers devised a benchmark called DELEGATE-52.
It simulates multistep workflows across 52 professional domains, such as coding, crystallography, and music notation. It is a more taxing test than sorting a spreadsheet, a task that should be table stakes for any aspiring workflow agent.
In the accounting domain, for example, the challenge involves a seed document that represents the accounting ledger of Hack Club, a nonprofit organization. The model is asked to split the seed document into separate category-based files and then to merge these chronologically back into a single file.
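The round trip the accounting task demands is simple to state in code. Here is a minimal sketch of that split-then-merge operation, assuming a ledger of rows with "date" and "category" fields (the benchmark's actual file format and column names may differ):

```python
# Sketch of the DELEGATE-52 accounting round trip: split a ledger by
# category, then merge the pieces back chronologically. The row schema
# below is illustrative, not the benchmark's actual format.
from collections import defaultdict

def split_by_category(rows):
    """Split ledger rows into one list per category."""
    by_category = defaultdict(list)
    for row in rows:
        by_category[row["category"]].append(row)
    return dict(by_category)

def merge_chronologically(category_files):
    """Merge the per-category lists back into one date-sorted ledger."""
    merged = [row for rows in category_files.values() for row in rows]
    return sorted(merged, key=lambda r: r["date"])

ledger = [
    {"date": "2025-01-15", "category": "travel", "amount": "120.00"},
    {"date": "2025-01-03", "category": "supplies", "amount": "40.00"},
    {"date": "2025-01-10", "category": "travel", "amount": "75.50"},
]
parts = split_by_category(ledger)
restored = merge_chronologically(parts)
```

A correct round trip preserves every row; the study, in effect, measures how much of the document survives when a model performs the equivalent edits.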
“Our findings show that current LLMs introduce substantial errors when editing work documents, with frontier models (Gemini 3.1 Pro, Claude 4.6 Opus, and GPT 5.4) losing on average 25 percent of document content over 20 delegated interactions, and an average degradation across all models of 50 percent,” the authors report.
The authors found that LLMs did better on programming tasks and worse on natural language tasks.
To be considered “ready” for a given work domain, the researchers set the bar at a score of 98 percent or higher after 20 interactions. Only one domain qualified: Python programming. In every other domain, the LLMs fell short of “ready.”
“A per-domain breakdown of end-of-simulation scores reveals that models are not ready for delegated workflows in the vast majority of domains, with models severely corrupting documents (at least -20 percent degradation) in 80 percent of our simulated conditions,” the authors state.
The study found that “catastrophic corruption,” meaning a benchmark score of 80 percent or less, occurred in more than 80 percent of model/domain combinations. The best performing model, Google Gemini 3.1 Pro, was ready for only 11 of 52 domains.
In weaker models, degradation took the form of content deletion; in frontier models, it took the form of content corruption.
And when errors occurred, they tended to happen all at once, resulting in the loss of 10 to 30 points in a single round-trip interaction, rather than accumulating over the entire test run.
“The stronger models (Gemini 3.1 Pro, Claude 4.6, GPT 5.4) aren’t avoiding small errors better, they delay critical failures to later rounds and experience them in fewer interactions,” the researchers observe in their paper.
The Microsoft authors went on to test how agents – LLMs given access to file reading, writing, and code execution through a basic harness – handle the DELEGATE-52 benchmark.
Tools in this instance didn’t help. “The four tested models perform worse when operated agentically with tools than without, incurring an average additional degradation of 6 percent by the end of simulation,” the authors observe, in reference to GPT-5.4, 5.2, 5.1, and 4.1.
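The paper describes the harness only as giving models file reading, writing, and code execution. A "basic agentic harness" of that kind typically amounts to a loop that executes the model's tool calls and feeds the results back as context. The sketch below is hypothetical, using an in-memory workspace and a scripted stand-in for the model; the researchers' actual harness is not published:

```python
# Hypothetical sketch of a basic agentic harness: a loop that exposes
# read/write tools to a model and returns tool output as new context.
# Tool names and the scripted stand-in "model" are illustrative only.

FILES = {}  # in-memory stand-in for the agent's workspace

TOOLS = {
    "read_file": lambda name: FILES[name],
    "write_file": lambda name, text: FILES.update({name: text}),
}

def run_agent(model_step, task, max_turns=20):
    """Loop until the model says it is done or the turn budget runs out."""
    history = [task]
    for _ in range(max_turns):
        action = model_step(history)      # model chooses a tool call
        if action["tool"] == "done":
            return action["result"]
        result = TOOLS[action["tool"]](*action["args"])
        history.append(result)            # tool output becomes new context

# A scripted "model" that edits a document, rereads it, and finishes:
script = iter([
    {"tool": "write_file", "args": ("ledger.txt", "2025-01-03 supplies 40.00")},
    {"tool": "read_file", "args": ("ledger.txt",)},
    {"tool": "done", "result": "finished"},
])
outcome = run_agent(lambda history: next(script), "tidy the ledger")
```

The study's point is that giving a real model this kind of loop made document degradation worse, not better, adding an average of 6 percent on top of the tool-free results.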
Given that task delegation is the whole point of an AI agent – if you wanted to do it yourself, you wouldn’t have tried to automate the task – this casts a bit of a shadow on the AI hype train.
An intern who corrupted a quarter of a document over a long workflow would be shown the door. Yet companies are showing AI the money: according to Deloitte, organizations are spending an average of 36 percent of their digital budgets on AI automation.
That might make sense if arming LLMs with the tools to function as full-blown agents meant less document degradation. But that’s not the case. The authors found that “using a basic agentic harness does not improve the performance of LLMs” on DELEGATE-52. They also found that LLM performance after two interactions doesn’t predict how models perform after 20, which they argue underscores the need for long-horizon evaluation.
“Current LLMs are ready for delegated workflows in some domains such as Python coding, but not in other less common domains,” the authors conclude. “In general, users still need to closely monitor LLM systems as they operate and complete tasks on their behalf.”
Yet they also note that LLMs have been getting better, pointing to the performance of OpenAI’s GPT model family, which has seen its benchmark performance increase over 16 months from 14.7 percent to 71.5 percent. ®