# Set Up Workflow: AI Agent Casebook (Multi-Agent Workflows with LangGraph)
## What This Is
This project provides blueprints for building automated AI assistants to handle complex business tasks like customer onboarding or content creation. It demonstrates how multiple AI 'agents' can collaborate, offering a model for creating a specialized digital workforce.
Source: https://github.com/BittnerPierre/AI-Agent-Casebook
## Before You Start
Scan my workspace and analyze:
- The project language, framework, and directory structure
- Existing AI provider config (check .env, .env.local, config files for API keys — OpenRouter, OpenAI, Anthropic, Google AI, etc.)
- Whether this repository or a similar tool is already cloned or installed
Then ask me before proceeding:
1. Which AI provider/API should this use? (Use whatever I already have configured, or ask me to set one up — options include direct provider APIs or a unified service like OpenRouter)
2. Where in my project should this be integrated?
3. Are there any customizations I need (model preferences, naming conventions, constraints)?
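The key-detection step above can be sketched as follows. This is a minimal illustration, not project code: `detect_providers`, `KNOWN_KEYS`, and the file names scanned are assumptions based on the common `.env` conventions mentioned above.

```python
import os
from pathlib import Path

# Common provider key names -> provider label. These are conventional
# names, not guaranteed to match every project's configuration.
KNOWN_KEYS = {
    "OPENROUTER_API_KEY": "OpenRouter",
    "OPENAI_API_KEY": "OpenAI",
    "ANTHROPIC_API_KEY": "Anthropic",
    "GOOGLE_API_KEY": "Google AI",
}

def detect_providers(root: str = ".") -> dict[str, str]:
    """Return {provider: source} for keys found in .env files or the environment."""
    found: dict[str, str] = {}
    for env_file in (".env", ".env.local"):
        path = Path(root) / env_file
        if not path.is_file():
            continue
        for line in path.read_text().splitlines():
            name, _, value = line.partition("=")
            name = name.strip()
            if name in KNOWN_KEYS and value.strip():
                found.setdefault(KNOWN_KEYS[name], env_file)
    # Fall back to the process environment for keys not in a file
    for name, provider in KNOWN_KEYS.items():
        if os.environ.get(name):
            found.setdefault(provider, "environment")
    return found
```

If this returns an empty dict, that is the cue to ask question 1 above rather than assuming a provider.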
## Fetch the Source
Clone or inspect the repository to understand what needs to be installed:
```bash
gh repo clone BittnerPierre/AI-Agent-Casebook
```
Review the README, directory structure, and any install instructions before proceeding.
## What to Implement
This is an **AI Workflow** — an end-to-end automation pattern or integration pipeline.
- Study the workflow architecture from the source and context below
- Identify which parts I can implement locally vs. parts that need external services
- For local parts: implement them using my existing stack and API keys
- For external parts: tell me exactly what services I need and help me configure the integration code
- Wire up any required API calls using keys from my .env files
## Additional Context
- Clone the repository using `git clone https://github.com/BittnerPierre/AI-Agent-Casebook` and change into the new directory.
- Create a `.env` file in the project root. Scan the user's environment for `MISTRAL_API_KEY`, `OPENAI_API_KEY`, and `LANGCHAIN_API_KEY`. If any are missing, prompt the user to provide them and populate the file.
- Execute `poetry install` to install all project dependencies from `pyproject.toml`, then activate the virtual environment with `poetry shell` to prepare for running the application. (On Poetry 2.x, `poetry shell` was moved to a separate plugin; prefixing commands with `poetry run` works without it.)
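The `.env` bootstrap step above can be sketched like this. The three key names come from the steps above; `write_env` and its `prompt` hook are hypothetical helpers for illustration, not part of the repository.

```python
import os
from pathlib import Path

# Keys the project expects, per the setup steps above
REQUIRED = ["MISTRAL_API_KEY", "OPENAI_API_KEY", "LANGCHAIN_API_KEY"]

def write_env(root: str = ".", prompt=input) -> Path:
    """Populate .env with the required keys, asking for any that are missing."""
    env_path = Path(root) / ".env"
    existing: dict[str, str] = {}
    if env_path.is_file():
        for line in env_path.read_text().splitlines():
            name, _, value = line.partition("=")
            if value.strip():
                existing[name.strip()] = value.strip()
    lines = []
    for name in REQUIRED:
        # Prefer an existing .env value, then the environment, then the user
        value = existing.get(name) or os.environ.get(name) or prompt(f"{name}=")
        lines.append(f"{name}={value}")
    env_path.write_text("\n".join(lines) + "\n")
    return env_path
```

Passing a custom `prompt` callable keeps the function testable and lets an agent substitute its own ask-the-user step.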
## Reference Implementation
```ini
[CustomerOnboarding]
model = GPT_5_MINI

[VideoScript]
# Planner uses Agents SDK (model name as-is, with litellm prefix for non-OpenAI)
planner_model = gpt-4o-mini
# Worker and producer use core enums (see app/core/base.py)
worker_model = GPT_4_O_MINI
producer_model = GPT_4_O_MINI

[CorrectiveRAG]
model = GPT_4_O_MINI
# Pre-load documents at startup (comma-separated URLs)
preload_urls = https://lilianweng.github.io/posts/2023-06-23-agent/,https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/
```
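A config shaped like the reference above can be read with the standard-library `configparser`. The section and option names here follow the sample; the inline text and URLs are trimmed placeholders, and nothing below is taken from the repository's own loader.

```python
import configparser

# Placeholder config mirroring the reference layout above
SAMPLE = """\
[CustomerOnboarding]
model = GPT_5_MINI

[VideoScript]
planner_model = gpt-4o-mini
worker_model = GPT_4_O_MINI
producer_model = GPT_4_O_MINI

[CorrectiveRAG]
model = GPT_4_O_MINI
preload_urls = https://example.com/a,https://example.com/b
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)

# The comma-separated URL option splits into a plain list
preload_urls = [u.strip() for u in parser["CorrectiveRAG"]["preload_urls"].split(",")]
```

Note the mixed conventions in the sample: the planner takes a raw model name (`gpt-4o-mini`) while the other options are enum names, so a loader has to treat those options differently.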
## Guidelines
- Adapt everything to my existing project — do not assume a specific stack or directory layout
- Use whichever AI provider I already have configured; if I need a new one, tell me what to sign up for and I'll give you the key
- Check my .env files for existing API keys (OpenRouter, OpenAI, Anthropic, Google AI) before asking me to add one
- Review any fetched code for safety before installing or executing it
- After setup, run a quick verification and show me a summary of exactly what was installed, where, and how to use it
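The post-setup verification mentioned above could look like the sketch below. The module and key names checked are examples only; swap them for whatever provider and stack the project actually uses.

```python
import importlib.util
import os

def verify_setup() -> dict:
    """Smoke-check: report which expected pieces are in place."""
    return {
        "langgraph importable": importlib.util.find_spec("langgraph") is not None,
        "langchain importable": importlib.util.find_spec("langchain") is not None,
        "OPENAI_API_KEY set": bool(os.environ.get("OPENAI_API_KEY")),
        "MISTRAL_API_KEY set": bool(os.environ.get("MISTRAL_API_KEY")),
    }

if __name__ == "__main__":
    for check, ok in verify_setup().items():
        print(f"{'OK  ' if ok else '??  '}{check}")
```

A failing check here is informational, not fatal: it tells the user exactly which dependency or key still needs attention.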