LLMs excel at generating creative text, but production applications demand structured outputs for seamless integration. Instructing an LLM to generate output only in a specified syntax makes its behaviour more predictable. JSON is the format of choice here: it is versatile and widely used as a standard data-exchange format. Several LLM providers offer features that help enforce JSON outputs:
- OpenAI has a feature called JSON mode that ensures the output is a valid JSON object.
- While this is useful, it only guarantees that the output IS valid JSON; it does not guarantee adherence to your custom JSON schema.
- Anyscale and Together AI go further: they not only enforce that the output is JSON but also ensure that it follows any given JSON schema.
To get JSON output, pass the `response_format` parameter: its `type` is `json_object`, and its `schema` lists all keys along with their expected types.
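As a minimal sketch, here is how such a request payload could be assembled. The `recipe_schema` and the model/message values are hypothetical; the payload would be passed to an OpenAI-compatible chat-completions client (e.g. `client.chat.completions.create(**payload)`), which is omitted here:

```python
import json

# Hypothetical schema: every key is listed with its expected type.
recipe_schema = {
    "type": "object",
    "properties": {
        "title":    {"type": "string"},
        "servings": {"type": "integer"},
        "steps":    {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "servings", "steps"],
}

# Request payload as it would be sent to an OpenAI-compatible endpoint.
payload = {
    "model": "mistralai/Mistral-7B-Instruct-v0.1",
    "messages": [{"role": "user", "content": "Give me a pancake recipe."}],
    "response_format": {
        "type": "json_object",
        "schema": recipe_schema,  # used by providers that enforce schemas
    },
}

print(json.dumps(payload["response_format"], indent=2))
```

Providers that only ensure valid JSON (not schema adherence) would simply ignore or reject the `schema` field, so check your provider's documentation before relying on it.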
Supported Models
| Model/Provider | Ensure JSON | Ensure Schema |
|---|---|---|
| mistralai/Mistral-7B-Instruct-v0.1 (Anyscale) | ✅ | ✅ |
| mistralai/Mixtral-8x7B-Instruct-v0.1 (Anyscale) | ✅ | ✅ |
| mistralai/Mixtral-8x7B-Instruct-v0.1 (Together AI) | ✅ | ✅ |
| mistralai/Mistral-7B-Instruct-v0.1 (Together AI) | ✅ | ✅ |
| togethercomputer/CodeLlama-34b-Instruct (Together AI) | ✅ | ✅ |
| gpt-4 and previous releases (OpenAI / Azure OpenAI) | ✅ | ❌ |
| gpt-3.5-turbo and previous releases (OpenAI / Azure OpenAI) | ✅ | ❌ |
| Ollama models | — | — |
Creating Nested JSON Object Schema
Here's an example showing how you can also create a nested JSON schema and get the LLM to enforce it:
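A minimal Python sketch follows. The `person_schema` and the sample model response are hypothetical; the point is that nested objects and arrays are expressed with the same `response_format` mechanism as flat schemas:

```python
import json

# Hypothetical nested schema: a "person" object that itself contains
# an "address" object and a list of "pets" objects.
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "address": {
            "type": "object",
            "properties": {
                "street": {"type": "string"},
                "city":   {"type": "string"},
            },
            "required": ["street", "city"],
        },
        "pets": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name":    {"type": "string"},
                    "species": {"type": "string"},
                },
                "required": ["name", "species"],
            },
        },
    },
    "required": ["name", "address"],
}

# The nested schema is passed exactly like a flat one:
response_format = {"type": "json_object", "schema": person_schema}

# A conforming model response can then be parsed and accessed directly:
raw = '{"name": "Ada", "address": {"street": "1 Main St", "city": "London"}, "pets": []}'
person = json.loads(raw)
print(person["address"]["city"])  # nested fields are safe to access
```

Because the provider enforces the schema, downstream code can access nested fields without defensive existence checks on every key.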