There is an AI code review bubble

Today, we're in the hard seltzer era of AI code review: everybody's making one. OpenAI, Anthropic, Cursor, Augment, now Cognition, and even Linear. Of course, there are also the "White Claws" of code review: pure-play code review agents like Greptile (that's us!), CodeRabbit, Macroscope, and a litter of fledgling YC startups. Then there are the adjacent Budweisers of this world. Amazingly, the Cognition and Linear announcements landed practically within 24 hours of each other.

As the proprietors of an, er, AI code review tool suddenly beset by an avalanche of competition, we're asking ourselves: what makes us different? How does one differentiate?

Based on our benchmarks, we are uniquely good at catching bugs. However, if every company blog is to be trusted, this is something we have in common with every other AI code review product. Unfortunately, code review performance is ephemeral and subjective, so it's not an interesting way to discern the agents before trying them. It's useless for me to try to convince you that we're the best; you should just try a few, make up your own mind, and pick the one that feels best.

So instead of telling you how our product is differentiated, I am going to tell you how our viewpoint is differentiated: how we think code review will look in the long term, and what we're doing today to prepare our customers for that future. Our thesis can be distilled into three pillars: independence, autonomy, and feedback loops.

We strongly believe that the review agent should be separate from the coding agent, and we are opinionated about the importance of independent code validation. Despite multiple requests, we have never shipped codegen features. We don't write code; an auditor doesn't prepare the books, a fox doesn't guard the henhouse, and a student doesn't grade their own essay. Today's agents are already better than the median human code reviewer at catching issues and enforcing standards, and they're only getting better.