This is a brief overview of a recent release by Transluce. You can see the full write-up on the Transluce website.
AI systems are increasingly being used as agents: scaffolded systems in which large language models are invoked across multiple turns and given access to tools, persistent state, and other resources. Understanding and overseeing agents is challenging because they produce large volumes of text: a single agent transcript can contain hundreds of thousands of tokens.
At Transluce, we built a system, Docent, that accelerates analysis of AI agent transcripts. Docent lets you quickly and automatically identify corrupted tasks, fix scaffolding issues, uncover unexpected behaviors, and understand an agent's weaknesses.
In the video below, Docent identifies issues in an agent's environment, such as packages the agent tried to invoke that were missing. On the InterCode [1] benchmark, simply adding those packages increased GPT-4o's solve rate from 68.6% to 78%. This matters because benchmarks like InterCode are often used to assess AI cyber risk and other societal impacts of AI.
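To illustrate the kind of environment issue involved, here is a minimal sketch (not Docent's actual implementation) of one way to surface missing-package errors by scanning raw transcript text for standard Python and shell error messages. The function name and patterns are illustrative assumptions:

```python
import re

# Hedged sketch: scan an agent transcript for signs that the agent
# invoked packages or commands the environment did not provide.
# These patterns match standard Python and bash error messages.
MISSING_PATTERNS = [
    re.compile(r"ModuleNotFoundError: No module named '([\w.]+)'"),
    re.compile(r"(?:bash: )?([\w-]+): command not found"),
]

def find_missing_packages(transcript: str) -> set[str]:
    """Collect names of packages/commands the agent tried to use
    but that were missing from the environment."""
    missing: set[str] = set()
    for pattern in MISSING_PATTERNS:
        missing.update(pattern.findall(transcript))
    return missing

# Example transcript fragment (invented for illustration):
transcript = (
    "$ python solve.py\n"
    "ModuleNotFoundError: No module named 'pwn'\n"
    "$ nmap -p 80 target\n"
    "bash: nmap: command not found\n"
)
print(sorted(find_missing_packages(transcript)))
# prints ['nmap', 'pwn']
```

A real analysis tool would of course need to handle many more error formats and distinguish transient failures from genuine environment gaps; this only shows the basic idea of pattern-matching over transcript text.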
Docent also surfaces surprising behaviors. After repeatedly failing to solve a task, GPT-4o generates nonsense text about "chestnut facts" and tries to "re-examine for latent conceptual derivation."
For more, read the full write-up, use the live tool, or watch the brief walk-through below.
Feedback welcome!