Agent0s · AI Intelligence Library
Technique · Intermediate · General AI

Structured Chain-of-Thought (CoT) for Reliable JSON Outputs

This technique improves AI reliability by prompting it to explain its step-by-step reasoning within a structured JSON format, directly alongside the final answer. This makes the AI's output predictable and easier for software to parse, reducing errors in automated workflows.

AI SETUP PROMPT

Paste into Claude Code or Codex CLI — it will scan your project and set everything up

# Apply Technique: Structured Chain-of-Thought (CoT) for Reliable JSON Outputs

## What This Is
This technique improves AI reliability by prompting it to explain its step-by-step reasoning within a structured JSON format, directly alongside the final answer. This makes the AI's output predictable and easier for software to parse, reducing errors in automated workflows.

Source: https://gist.github.com/yoavg/5b106275e38f4ccc796bc8ba7919060b

## Before You Start

Scan my workspace and analyze:
- The project language, framework, and directory structure
- Existing AI provider config (check .env, .env.local, config files for API keys — OpenRouter, OpenAI, Anthropic, Google AI, etc.)
- Whether this repository or a similar tool is already cloned or installed

Then ask me before proceeding:
1. Which AI provider/API should this use? (Use whatever I already have configured, or ask me to set one up — options include direct provider APIs or a unified service like OpenRouter)
2. Where in my project should this be integrated?
3. Are there any customizations I need (model preferences, naming conventions, constraints)?

## What to Implement

This is an **AI Technique** — a pattern or methodology for working with AI models.

- Explain how this technique applies to my current project and what benefit it provides
- Implement it in a way that fits my existing codebase — suggest concrete files to modify or create
- If it requires specific model capabilities (structured output, function calling, etc.), verify my current provider supports them
- Show me a working example I can test immediately

## Additional Context

- Scan the user's codebase for LLM API calls that require structured data outputs (e.g., JSON) and identify the prompts used.
- Modify the identified prompts to request a JSON object that includes a 'reasoning' key for step-by-step analysis, in addition to the keys for the final answer values.
- Update the application's parsing logic to process the new JSON structure, separating the 'reasoning' string for logging or debugging purposes from the final answer data used by the application.
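A minimal Python sketch of the last two steps above; the helper names and the exact instruction wording are illustrative assumptions, not prescribed by the source:

```python
import json

# Hypothetical instruction prepended to an existing task prompt.
STRUCTURED_COT_INSTRUCTIONS = (
    "Respond with a single JSON object. Include a 'reasoning' key with your "
    "step-by-step analysis, plus the answer keys requested below. Output "
    "nothing outside the JSON object.\n\n"
)

def build_prompt(task_prompt: str) -> str:
    """Wrap an existing task prompt with structured-CoT instructions."""
    return STRUCTURED_COT_INSTRUCTIONS + task_prompt

def split_reasoning(raw_response: str) -> tuple[str, dict]:
    """Parse the model's JSON reply, separating the reasoning trace
    (for logging or debugging) from the answer fields the app consumes."""
    data = json.loads(raw_response)
    reasoning = data.pop("reasoning", "")
    return reasoning, data

# With a canned model reply:
reply = '{"reasoning": "France -> Paris.", "country": "France", "capital": "Paris"}'
trace, answer = split_reasoning(reply)
print(answer["capital"])  # → Paris
```

Keeping `split_reasoning` as the single place that strips the reasoning key means downstream code never has to know the trace exists.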

## Reference Implementation

```json
{
  "reasoning": "The user wants to find the capital of France. Step 1: Identify the country, which is France. Step 2: Access knowledge database for the capital of France. Step 3: Retrieve the capital, which is Paris. Step 4: Format the output as requested.",
  "country": "France",
  "capital": "Paris"
}
```
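Before trusting such a reply, an application can check that every expected key is present. This validation helper is a sketch: the key names come from the example above, but the function itself is not from the source:

```python
import json

# Keys the reference object above is expected to carry.
REQUIRED_KEYS = {"reasoning", "country", "capital"}

def parse_reference(raw: str) -> dict:
    """Parse a model reply and fail loudly if any expected key is
    missing, rather than letting incomplete data reach the app."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model reply missing keys: {sorted(missing)}")
    return data
```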

## Guidelines

- Adapt everything to my existing project — do not assume a specific stack or directory layout
- Use whichever AI provider I already have configured; if I need a new one, tell me what to sign up for and I'll give you the key
- Check my .env files for existing API keys (OpenRouter, OpenAI, Anthropic, Google AI) before asking me to add one
- Review any fetched code for safety before installing or executing it
- After setup, run a quick verification and show me a summary of exactly what was installed, where, and how to use it
Compatible with Claude Code & Codex CLI
MANUAL SETUP STEPS
  1. Scan the user's codebase for LLM API calls that require structured data outputs (e.g., JSON) and identify the prompts used.
  2. Modify the identified prompts to request a JSON object that includes a 'reasoning' key for step-by-step analysis, in addition to the keys for the final answer values.
  3. Update the application's parsing logic to process the new JSON structure, separating the 'reasoning' string for logging or debugging purposes from the final answer data used by the application.
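In practice, the parsing logic in step 3 should tolerate models that wrap their JSON in Markdown code fences or add stray prose. A defensive extraction sketch, where the regex fallback is an assumption rather than part of the source steps:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Parse a model reply as JSON; if direct parsing fails (e.g. the
    reply is wrapped in Markdown code fences), fall back to grabbing
    the first {...} span and parsing that instead."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match is None:
            raise ValueError("no JSON object found in model reply")
        return json.loads(match.group(0))
```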

CODE INTELLIGENCE

```json
{
  "reasoning": "The user wants to find the capital of France. Step 1: Identify the country, which is France. Step 2: Access knowledge database for the capital of France. Step 3: Retrieve the capital, which is Paris. Step 4: Format the output as requested.",
  "country": "France",
  "capital": "Paris"
}
```

FIELD OPERATIONS

Auditable Financial Analysis Bot

Build a tool that ingests quarterly financial reports and uses structured CoT to output a JSON object. The 'reasoning' field would detail the step-by-step analysis of revenue and expenses, while other fields contain final extracted values like 'net_income' and 'EBITDA'.

Multi-Step API Call Orchestrator

Create an agent that executes a sequence of API calls to fulfill a complex request. After each API call, the agent uses structured CoT to generate a JSON object explaining its analysis of the result and its plan for the next step, providing a clear, debuggable execution trace.
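The orchestrator loop can be sketched with a stubbed model call; `fake_model`, the `next_action` key, and the step wording are illustrative assumptions, not part of the source:

```python
import json

def fake_model(prompt: str) -> str:
    """Stand-in for a real LLM call; emits one structured-CoT step."""
    return json.dumps({
        "reasoning": f"Analyzed result of '{prompt}'; deciding next step.",
        "next_action": "done" if "step 2" in prompt else "call step 2",
    })

def run_orchestrator(first_action: str, max_steps: int = 5) -> list[dict]:
    """Run steps until the model signals 'done', keeping every step's
    reasoning as a debuggable execution trace."""
    trace, action = [], first_action
    for _ in range(max_steps):
        step = json.loads(fake_model(action))
        trace.append(step)  # reasoning retained for review and logging
        if step["next_action"] == "done":
            break
        action = step["next_action"]
    return trace

for step in run_orchestrator("call step 1"):
    print(step["next_action"])
```

The `max_steps` cap keeps a confused model from looping forever; the accumulated trace is the execution record the use case above calls for.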

STRATEGIC APPLICATIONS

  - Automating insurance claim processing by having an AI analyze claim documents and output a structured JSON object with its eligibility decision and a detailed reasoning trail that satisfies audit and compliance requirements.
  - Improving complex customer support troubleshooting by forcing the AI to generate a step-by-step diagnostic plan in a 'reasoning' field, which can be logged and reviewed by senior technicians to refine the AI's problem-solving logic.

TAGS

#chain-of-thought #structured-output #json #prompt-engineering #reasoning #reliability
Source: GITHUB · Quality score: 9/10