2025 was the year of the agent. The goalposts for AGI shifted; we stopped asking AI to merely "talk" and demanded that it "act". As an outsider looking at the architecture of these new agents and agentic systems, I noticed something strange. The engineering tricks used to make AI smarter felt oddly familiar. They read less like computer science and more like … self-help advice.

The secret to agentic intelligence seems to lie in three very human habits: writing things down, talking to yourself, and pretending to be someone else. They are almost too simple.

The Unreasonable Effectiveness of Writing

One of the most profound pieces of advice I ever read as a PhD student came from Prof. Manuel Blum, a Turing Award winner. In his essay "Advice to a Beginning Graduate Student", he wrote: "Without writing, you are reduced to a finite automaton. With writing you have the extraordinary power of a Turing machine."

If you try to hold a complex argument entirely in your head, you will fail. Your working memory is a "finite automaton": it has hard limits. But if you write things down, you offload that memory to the paper. You can then reference it, critique it, and build on it. The paper becomes your external hard drive.

AI agents are now being built on this exact principle. An LLM has a limited context window; its effective attention span is short. If it tries to solve a 50-step coding problem in one breath, it will crash and burn. To fix this, we give agents scratchpads. We force them to write down a plan before they execute code. We give them memory buffers to store facts for later. We are essentially asking AI to take notes (a minimal sketch appears below). By externalizing their internal state onto a digital piece of paper, agents evolve from simple pattern-matchers into robust thinkers.

Thinking is Just Talking to Yourself in a Loop

For a long time, we treated LLMs as simple input-output machines: we asked a question, and the model produced an answer. Often, the answers were unsatisfying: hallucinated or superficial. ...
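To make the scratchpad idea concrete, here is a minimal sketch. It assumes nothing more than some callable that wraps your model API of choice; names like `Scratchpad`, `run_agent`, and the `llm` parameter are illustrative, not any particular framework's interface. The agent writes its plan to external memory first, then re-reads those notes at every step instead of trusting whatever is still floating around in its context window.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Scratchpad:
    """External memory: the agent's digital piece of paper."""
    notes: list[str] = field(default_factory=list)

    def write(self, note: str) -> None:
        self.notes.append(note)

    def read(self) -> str:
        return "\n".join(self.notes)

def run_agent(task: str, llm: Callable[[str], str], max_steps: int = 5) -> str:
    """Plan first, then act step by step, re-reading the scratchpad each time."""
    pad = Scratchpad()

    # Habit #1: write the plan down before doing anything.
    plan = llm(f"Write a short numbered plan for this task:\n{task}")
    pad.write(f"PLAN:\n{plan}")

    for step in range(1, max_steps + 1):
        # Each step is grounded in the external notes, not in whatever the
        # model happens to still "remember" inside its context window.
        action = llm(
            f"Task: {task}\n\n"
            f"Scratchpad so far:\n{pad.read()}\n\n"
            "Carry out the next unfinished step of the plan, "
            "or reply DONE if the task is complete."
        )
        if action.strip().upper().startswith("DONE"):
            break
        pad.write(f"STEP {step}: {action}")

    return pad.read()

# Toy usage with a stand-in "model" so the sketch runs without any API key.
if __name__ == "__main__":
    def fake_llm(prompt: str) -> str:
        return "DONE" if "STEP 1" in prompt else "1. Do the thing."

    print(run_agent("Refactor the billing module", fake_llm))
```

Real agent stacks layer retrieval, tool calls, and long-term memory stores on top of this, but the core move is the same: externalize the state, then read it back.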