It's early 2026. Industry practice is divided on how to structure tool descriptions within the context window of an LLM. One strategy is to provide top-level tools that perform fine-grained actions (e.g. list the pull requests in a GitHub repo). Another, increasingly popular strategy is to eschew new tools per se and simply inform the model of useful shell commands it may invoke. In both cases reusable skills can be defined that give the model tips on how to do useful work with the tools; the main difference is whether the model emits a direct tool call or an exec_bash call referencing CLIs.

To me it is clear that the latter is an innovation on the former. The best feature of the unix shell is command composition. Enabling the model to form pipelines of tool calls, without a round trip to the model after each stage, should yield huge savings in token cost. The resulting pipelines can also be saved as scripts, or customized and run interactively by human operators. The command line is an interface compatible with both humans and machines. If the model is adept at using it (it's already text), why fall back to a machine-native protocol?

One good response is that MCP is an easy way to expose SaaS functionality to agents. In lieu of MCP, how can we achieve that? I'll answer this question with two quite different examples from my recent work: giving an agent access to Google Docs and to Google Groups.

HTTP APIs

I wanted my agent to be able to list my cloud-based Google Docs, read them as markdown, and read and understand any attached comment threads. Google provides a very nice API for all of this (well, comments are harder). I did the obvious thing: spun up a Google Cloud project, pasted the API documentation into an LLM, and the result was a gdrive CLI with subcommands to list files and to export a particular one. That worked. But as in the title of this post, the best code is no code. This script se...
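To make the gdrive CLI concrete: it essentially wraps two Drive v3 endpoints, files.list and files.export. Here's a minimal sketch of the equivalent raw calls, not the actual script; $TOKEN is assumed to hold an OAuth 2.0 access token with a Drive read scope, and FILE_ID is a placeholder for a real document id. (Drive can export a Google Doc directly as text/markdown.)

    # List Google Docs (Drive v3 files.list), newest first.
    # The q filter restricts results to Google Docs; quotes and '=' are percent-encoded.
    curl -s -H "Authorization: Bearer $TOKEN" \
      "https://www.googleapis.com/drive/v3/files?q=mimeType%3D%27application/vnd.google-apps.document%27&orderBy=modifiedTime%20desc&fields=files(id,name)"

    # Export one document as markdown (Drive v3 files.export).
    curl -s -H "Authorization: Bearer $TOKEN" \
      "https://www.googleapis.com/drive/v3/files/FILE_ID/export?mimeType=text/markdown"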
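And to illustrate the composition argument from earlier: once these are just shell commands, the model can chain listing and export in a single exec_bash invocation rather than paying one model round trip per document. A hedged sketch, under the same assumptions as above plus jq:

    # One exec_bash call: list Docs, extract their ids, export each as markdown --
    # a pipeline instead of N separate tool-call round trips.
    curl -s -H "Authorization: Bearer $TOKEN" \
      "https://www.googleapis.com/drive/v3/files?q=mimeType%3D%27application/vnd.google-apps.document%27&fields=files(id,name)" \
      | jq -r '.files[].id' \
      | while read -r id; do
          curl -s -H "Authorization: Bearer $TOKEN" \
            "https://www.googleapis.com/drive/v3/files/$id/export?mimeType=text/markdown" > "$id.md"
        done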