How to use Structured Outputs in LLMs
Lately, I heard about structured outputs for LLMs:
And I could not help but think about the possibilities for projects…
- OpenAI: Has a native function calling mechanism through the dedicated `functions` parameter (or `tools` with `type: "function"`) in their Chat Completions API. The model is specifically trained to recognize the described functions and output a structured `function_call` object when it deems appropriate.
- Groq: Also has native function calling through the `tools` parameter in their `chat.completions.create` endpoint. Their implementation is designed to be largely compatible with OpenAI’s structure, including the `tool_calls` object in the response. So, while it’s a separate API, it’s not purely leveraging unstructured output.
- Claude: Leverages structured outputs to achieve similar functionality. While it has a `tools` parameter in its Messages API, the model is instructed via the prompt and the tool descriptions to output a structured JSON object representing the function call. The parsing of this structured output is then handled by the developer. The key difference is that Claude’s models weren’t initially built with a specific “function calling” training objective in the same way as OpenAI’s older models. However, with the Claude 3 family, Anthropic has significantly enhanced its ability to understand and utilize tools in a structured manner.
Here’s a more detailed breakdown of the nuance:
“Native Function Calling” (OpenAI & Groq): These APIs have parameters and response structures explicitly designed for function calling. The models are trained to understand these structures and generate the calls directly as part of their API response in a predictable format.
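As a rough sketch of what that looks like in practice, here is a minimal example with the openai Python SDK; the `get_weather` tool, its schema and the model name are made up for illustration:

```python
# Minimal sketch of OpenAI-style native tool calling.
# The `get_weather` tool and its schema are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "What's the weather in Madrid?"}],
    tools=tools,
)

# If the model decides a tool call is appropriate, it comes back
# as a structured object instead of free text.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name)       # e.g. "get_weather"
print(tool_call.function.arguments)  # JSON string, e.g. '{"city": "Madrid"}'
```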
“Leveraging Structured Outputs” (Claude): While Claude’s `tools` parameter guides the model, the actual function call is achieved by instructing the model to generate a specific structured output (like JSON) that adheres to the tool’s schema. The reliability of this depends heavily on the prompt engineering. However, with the advancements in Claude 3, its ability to generate these structured outputs for tool use has become much more robust and comparable to native function calling.
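For comparison, a minimal sketch of the same idea with the anthropic Python SDK; again, the `get_weather` tool and the model name are just illustrative assumptions. Note how the developer pulls the structured `tool_use` block out of the response content themselves:

```python
# Minimal sketch of tool use with the Anthropic Messages API.
# The `get_weather` tool is a hypothetical example.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # example model name
    max_tokens=1024,
    tools=[
        {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    messages=[{"role": "user", "content": "What's the weather in Madrid?"}],
)

# The structured output comes back as a `tool_use` content block
# that the developer parses and acts on.
for block in response.content:
    if block.type == "tool_use":
        print(block.name)   # e.g. "get_weather"
        print(block.input)  # e.g. {"city": "Madrid"}
```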
In essence:
- OpenAI and Groq provide a more direct and built-in mechanism for function calling.
- Claude achieves similar results by being very good at following instructions to produce structured outputs that represent function calls, and its newer models have strong tool use capabilities.
Therefore, while the underlying goal is the same (allowing the LLM to interact with external tools), the implementation details and the level of “nativeness” in the API design differ slightly.
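Since Groq aims to be largely compatible with OpenAI’s structure, the earlier OpenAI sketch should, in principle, also work when the OpenAI SDK is pointed at Groq’s OpenAI-compatible endpoint; the model name is just an example of a model hosted on Groq:

```python
# Same OpenAI SDK, pointed at Groq's OpenAI-compatible endpoint.
from openai import OpenAI

groq_client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="YOUR_GROQ_API_KEY",
)

response = groq_client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # example model name
    messages=[{"role": "user", "content": "What's the weather in Madrid?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)

# Groq mirrors OpenAI's response shape, including `tool_calls`
print(response.choices[0].message.tool_calls)
```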
Conclusions
This is a great feature that can be applied to projects like:
- CV generation following a certain LaTeX / CV builder framework (more on that in the FAQ below)
- Generating `.md` posts with their proper headers for your SSG-powered sites, since we can get proper front matter (see the sketch below)
- aaaand more
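For the front matter idea, here is a minimal sketch using OpenAI’s JSON-schema flavour of structured outputs; the schema fields (`title`, `date`, `tags`, `description`) are only an example of what an SSG might expect:

```python
# Sketch: generate front matter for a .md post via a JSON schema.
# The schema fields are an example; adapt them to your SSG's front matter.
import json
from openai import OpenAI

client = OpenAI()

schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "date": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
        "description": {"type": "string"},
    },
    "required": ["title", "date", "tags", "description"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Draft front matter for a post about structured outputs in LLMs."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "front_matter", "schema": schema, "strict": True},
    },
)

meta = json.loads(response.choices[0].message.content)

# Render the schema-constrained JSON as YAML front matter
front_matter = "---\n"
front_matter += f'title: "{meta["title"]}"\n'
front_matter += f'date: {meta["date"]}\n'
front_matter += f'tags: [{", ".join(meta["tags"])}]\n'
front_matter += f'description: "{meta["description"]}"\n'
front_matter += "---\n"
print(front_matter)
```

With `strict: True`, the returned JSON is meant to match the schema, so the rendering step does not need much defensive parsing.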
FAQ
What have I learnt recently?
- Using LLMs to apply to LinkedIn jobs
- How to use LLMs to create a CV

Use a CV builder framework: OpenResume or Reactive Resume.
- With Reactive-Resume
- Or with OpenResume: https://github.com/JAlcocerT/open-resume
#version: '3'
services:
  open-resume:
    container_name: openresume # https://github.com/xitanggg/open-resume
    image: ghcr.io/jalcocert/open-resume:latest # https://github.com/users/JAlcocerT/packages/container/package/open-resume
    ports:
      - "3333:3000"
    # networks:
    #   - cloudflare_tunnel

# networks:
#   cloudflare_tunnel:
#     external: true
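Assuming the snippet above is saved as `docker-compose.yml`, a `docker compose up -d` should bring OpenResume up at http://localhost:3333 (host port 3333 is mapped to the container’s port 3000).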
Definitely, structured outputs are a feature to have a look at, together with Overleaf (LaTeX) or this kind of project!
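As a closing sketch of that idea, structured CV data (for example, the JSON an LLM returns under a CV schema) can be templated into a LaTeX fragment for Overleaf; the field names and the moderncv-style `\cventry` command here are assumptions to adapt to your own template:

```python
# Sketch: turn structured CV data (e.g. from a structured-output call)
# into a LaTeX fragment for Overleaf. Field names and the \cventry
# command are hypothetical; adapt them to your LaTeX CV template.
cv_entry = {
    "role": "Data Engineer",
    "company": "ACME Corp",
    "years": "2021-2024",
    "highlights": ["Built ETL pipelines", "Automated reporting"],
}

latex = (
    f"\\cventry{{{cv_entry['years']}}}"
    f"{{{cv_entry['role']}}}"
    f"{{{cv_entry['company']}}}"
    "{}{}{%\n"
    "\\begin{itemize}\n"
    + "".join(f"  \\item {h}\n" for h in cv_entry["highlights"])
    + "\\end{itemize}}\n"
)
print(latex)
```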