What if AI is both good and not that disruptive?

https://news.ycombinator.com/rss Hits: 1
Summary

There’s a strange dynamic in AI discourse where you’re only allowed to hold one of two positions: either large language models will automate all knowledge work, collapse employment, and fundamentally restructure civilisation within a decade, or they’re stochastic parrots that can’t really do anything useful and the whole thing is a bubble. The measured take, that LLMs are a significant productivity tool comparable to previous technological shifts but not a rupture in the basic economic fabric, doesn’t generate much engagement. It’s boring.

I want to make the case for boring.

Consider how we talk about LLMs as a new abstraction layer for programming. You write intent in English, the model translates it to code, you debug at the level of English when things go wrong. This is framed as revolutionary, but there’s another way to see it: it’s the same transition we’ve made repeatedly throughout computing history. Assembly programmers became C programmers became Python programmers. The abstraction rose, individual productivity increased, more total software got written, and roughly similar numbers of people were employed writing it.

If English-to-code is just another abstraction layer, maybe the equilibrium looks like “same number of software engineers, each individually more productive, much more total software in the world.” That’s a big deal, but it’s not mass unemployment. It’s not the end of software engineering as a career. It’s what happens every time we get better tools.

The counterargument is that previous transitions still required learning a formal language with precise syntax, whereas English is natural and everyone speaks it already. This should dramatically lower barriers to entry. Perhaps. Though I suspect the binding constraint was never syntax but the underlying skill of thinking precisely about systems, edge cases, state management, and failure modes. The compiler was pedagogical in that it forced you to confront ambiguity. If the LLM just does something plau...

First seen: 2026-01-21 22:41

Last seen: 2026-01-21 22:41