Without benchmarking LLMs, you're likely overpaying 5-10x

https://news.ycombinator.com/rss Hits: 4
Summary

Last month I helped a friend cut his LLM API bill by 80%. He's a non-technical founder building an AI-powered business. Like most people, he picked GPT-5 because it's the default: you have the API already, it has solid benchmarks, everyone uses it, why bother?! But as usage grew, so did his bill: $1,500/month for API calls alone. So we benchmarked his actual prompts against 100+ models and quickly realized that while GPT-5 is a solid choice, it is almost never the cheapest, and there are always cheaper options with comparable quality. Figuring out which one to use saved him thousands of dollars in the process. Here's how we did it.

The Problem: Benchmarks don't predict performance on your task

When picking an LLM, most people just choose a model from their favorite provider. For me, that's Anthropic, so depending on the task, I pick Opus, Sonnet, or Haiku. If you're sophisticated, you might check Artificial Analysis, or LM Arena, or whatever benchmark seems relevant: GPQA Diamond, AIME, SWE-Bench, MATH 500, Humanity's Last Exam, ARC-AGI, MMLU... But let's not fool ourselves here: none of these predict performance on your specific task. A model that tops reasoning benchmarks might be mediocre at damage cost estimation. Or customer support in your customers' native language. Or data extraction via Playwright. Or whatever you're actually building. At best, they're a rough indicator of performance. And they do not account for costs at all. The only way to know is to test on your actual prompts and make a decision considering quality, cost, and latency.

Building benchmarks ourselves

So to figure this out, we built our own benchmarks. Let me walk through one use case: customer support.

Step 1: Collect real examples

We extracted actual support chats via WHAPI. Each chat gave us the conversation history, the customer's latest message, and the response my friend actually sent. My friend also gave me the prompts he used, both manually and inside the chat tool, to generate responses. Based on ...
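The "Step 1" data the summary describes (conversation history, the customer's latest message, and the reply actually sent) maps naturally onto a small record type. A minimal sketch, assuming a generic JSON export; the field names and export shape here are guesses, not WHAPI's actual schema:

```python
# Sketch of one benchmark case built from a real support chat.
# Field names and the export format are assumptions; the post does not
# describe WHAPI's actual schema.
import json
from dataclasses import dataclass

@dataclass
class EvalCase:
    history: list[dict]        # prior turns as {"role": ..., "content": ...}
    latest_message: str        # the customer's newest message
    reference_reply: str       # the reply the founder actually sent

def load_cases(path: str) -> list[EvalCase]:
    """Turn an exported chat dump into benchmark cases."""
    with open(path) as f:
        chats = json.load(f)
    return [
        EvalCase(
            history=chat["history"],
            latest_message=chat["latest_message"],
            reference_reply=chat["reply_sent"],
        )
        for chat in chats
    ]
```

The core loop, sending the same prompt and conversation to several candidate models and recording output, latency, and cost, could look roughly like the sketch below. The OpenRouter endpoint, model names, and prices are placeholders assumed for illustration; the post doesn't say which gateway, models, or judging method were actually used.

```python
# Sketch: run one real conversation against several models and record the
# output, latency, and cost so quality can be compared afterwards
# (manually or with an LLM judge). Models, prices, and the gateway are
# assumptions, not details from the post.
import time
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_KEY")

# Placeholder per-million-token prices (input, output); use real price sheets.
PRICES = {
    "openai/gpt-5": (1.25, 10.00),
    "anthropic/claude-haiku-4.5": (1.00, 5.00),
    "google/gemini-2.5-flash": (0.30, 2.50),
}

def benchmark(messages: list[dict], reference: str) -> list[dict]:
    """Run one conversation (as chat messages) against every candidate model."""
    results = []
    for model, (in_price, out_price) in PRICES.items():
        start = time.perf_counter()
        resp = client.chat.completions.create(model=model, messages=messages)
        latency = time.perf_counter() - start
        cost = (resp.usage.prompt_tokens * in_price
                + resp.usage.completion_tokens * out_price) / 1e6
        results.append({
            "model": model,
            "output": resp.choices[0].message.content,
            "reference": reference,          # the reply actually sent
            "latency_s": round(latency, 2),
            "cost_usd": round(cost, 6),
        })
    return results
```

With results like these per case, the trade-off the post describes (comparable quality at a fraction of the cost) becomes a simple table you can aggregate over your whole eval set.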

First seen: 2026-01-20 21:35

Last seen: 2026-01-21 20:41