If you use LLMs, you've probably heard about structured outputs. You might think they're the greatest thing since sliced bread. Unfortunately, structured outputs also degrade response quality. Specifically, if you use an LLM provider's structured outputs API, you will get a lower-quality response than if you use their normal text output API:

- ⚠️ you're more likely to make mistakes when extracting data, even in simple cases;
- ⚠️ you're probably not modeling errors correctly;
- ⚠️ it's harder to use techniques like chain-of-thought reasoning; and
- ⚠️ in the extreme case, it can be easier to steal your customer data using prompt injection.

These are very contentious claims, so let's start with an example: extracting data from a receipt. If I use an LLM to extract the receipt entries, it should be able to tell me that one of the items is (name="banana", quantity=0.46), right?

Well, using OpenAI's structured outputs API with gpt-5.2 (released literally this week!), it will claim that the banana quantity is 1.0:

```json
{
  "establishment_name": "PC Market of Choice",
  "date": "2007-01-20",
  "total": 0.32,
  "currency": "USD",
  "items": [
    {
      "name": "Bananas",
      "price": 0.32,
      "quantity": 1
    }
  ]
}
```

However, with the same model, if you just use the completions API and then parse the output, it will return the correct quantity:

```json
{
  "establishment_name": "PC Market of Choice",
  "date": "2007-01-20",
  "total": 0.32,
  "currency": "USD",
  "items": [
    {
      "name": "Bananas",
      "price": 0.69,
      "quantity": 0.46
    }
  ]
}
```

The code that was used to generate the above outputs is shown below and is also available on GitHub:

```python
#!/usr/bin/env -S uv run
# /// script
# requires-python = ">=3.10"
# dependencies = ["openai", "pydantic", "rich"]
# ///
"""
If you have uv, you can run this code by saving it as
structured_outputs_quality_demo.py and then running:

    chmod u+x structured_outputs_quality_demo.py
    ./structured_outputs_quality_demo.py

This script is a companion to
https://boundaryml.com/blog/structured-outputs-create-fa...
```
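To make the comparison concrete, here is a minimal sketch of the two call paths, assuming the OpenAI Python SDK and Pydantic. The receipt transcription, prompt wording, and schema field names are illustrative assumptions reconstructed from the outputs above, not the exact script linked above.

```python
# A minimal sketch of the two call paths being compared, using the OpenAI
# Python SDK and Pydantic. The receipt text, prompt wording, and model name
# are illustrative assumptions; the full companion script is linked above.
import json

from openai import OpenAI
from pydantic import BaseModel


class Item(BaseModel):
    name: str
    price: float
    quantity: float


class Receipt(BaseModel):
    establishment_name: str
    date: str
    total: float
    currency: str
    items: list[Item]


RECEIPT = """\
PC Market of Choice
01/20/2007
Bananas  0.46 lb @ 0.69 /lb   0.32
TOTAL                         0.32
"""  # illustrative transcription; the post extracts from an actual receipt

PROMPT = (
    "Extract this receipt as JSON with fields establishment_name, date, "
    f"total, currency, and items (name, price, quantity):\n{RECEIPT}"
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Path 1: structured outputs. The provider constrains the response so it
# is guaranteed to match the Receipt schema.
structured = client.beta.chat.completions.parse(
    model="gpt-5.2",  # the model named in the post; substitute as needed
    messages=[{"role": "user", "content": PROMPT}],
    response_format=Receipt,
)
print(structured.choices[0].message.parsed)

# Path 2: a plain completion, parsed after the fact. The model is free to
# emit whatever text it likes; we pull the JSON out of it ourselves.
plain = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": PROMPT}],
)
text = plain.choices[0].message.content.strip()
if text.startswith("`"):
    # naive markdown-fence stripping; production code should parse more robustly
    text = text.strip("`").removeprefix("json").strip()
print(Receipt.model_validate(json.loads(text)))
```

The relevant difference: path 2 leaves the model free to produce any text before you parse it, while path 1 constrains the response to the schema, and the claim here is that this constrained path is what costs you answer quality.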