# Setup Workflow: Agents Towards Production, an Open-Source Playbook
## What This Is
This is a comprehensive guide for building professional AI agents that are ready for real-world use. It provides code-based tutorials to take an AI application from a simple prototype to a scalable, secure, and observable product.
Source: https://github.com/NirDiamant/agents-towards-production
## Before You Start
Scan my workspace and analyze:
- The project language, framework, and directory structure
- Existing AI provider config (check .env, .env.local, config files for API keys — OpenRouter, OpenAI, Anthropic, Google AI, etc.)
- Whether this repository or a similar tool is already cloned or installed
Then ask me before proceeding:
1. Which AI provider/API should this use? (Use whatever I already have configured, or ask me to set one up — options include direct provider APIs or a unified service like OpenRouter)
2. Where in my project should this be integrated?
3. Are there any customizations I need (model preferences, naming conventions, constraints)?
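The key scan above can be done without exposing secret values. A minimal sketch; the file names and key prefixes are assumptions, so extend them to match the project's conventions:

```bash
# List which known AI-provider keys exist in common env files,
# printing only the key names, never the values.
for f in .env .env.local .env.development; do
  [ -f "$f" ] || continue
  echo "== $f =="
  grep -oE '^(OPENROUTER|OPENAI|ANTHROPIC|GOOGLE|GEMINI|TAVILY)[A-Z_]*' "$f" | sort -u
done
```

Printing only key names keeps secrets out of logs and chat transcripts while still answering question 1.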
## Fetch the Source
Clone or inspect the repository to understand what needs to be installed:
```bash
gh repo clone NirDiamant/agents-towards-production  # or: git clone https://github.com/NirDiamant/agents-towards-production.git
```
Review the README, directory structure, and any install instructions before proceeding.
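One way to review the layout before installing anything is to surface each tutorial's README and dependency file. The assumption that tutorials ship per-folder `requirements.txt` files is based on the repo's structure; verify against what you actually cloned:

```bash
cd agents-towards-production
# Show the top level, then every README and requirements file up to three levels deep.
ls
find . -maxdepth 3 \( -name 'requirements.txt' -o -name 'README.md' \) | sort
```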
## What to Implement
This is an **AI Workflow** — an end-to-end automation pattern or integration pipeline.
- Study the workflow architecture from the source and context below
- Identify which parts I can implement locally vs. parts that need external services
- For local parts: implement them using my existing stack and API keys
- For external parts: tell me exactly what services I need and help me configure the integration code
- Wire up any required API calls using keys from my .env files
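As a sketch of the wiring step: load keys from `.env` into the environment, then make a test call against an OpenAI-compatible chat endpoint. The URL, model name, and payload below are illustrative assumptions; OpenRouter and several other providers accept the same request shape at their own base URLs:

```bash
# Export every variable defined in .env into the current shell.
set -a; [ -f .env ] && . ./.env; set +a

# Smoke-test the configured key with a one-message request.
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'
```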
## Additional Context
- Clone the repository 'NirDiamant/agents-towards-production' from GitHub into the user's workspace.
- Analyze the user's project requirements and recommend the most relevant tutorial notebook (e.g., 'LangGraph-agent', 'agent-memory-with-redis'). Then, install the specific dependencies for that tutorial using its 'requirements.txt' file.
- Guide the user to configure a `.env` file for the selected tutorial, prompting for required API keys (e.g., OPENAI_API_KEY, TAVILY_API_KEY) and service credentials (e.g., REDIS_URL) by referencing the tutorial's setup instructions.
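The three steps above can be sketched as follows. The tutorial path and key names are examples only; substitute the notebook actually chosen for the project:

```bash
TUTORIAL=tutorials/agent-memory-with-redis   # example: use the chosen tutorial's folder
# Isolate the tutorial's dependencies in a virtual environment.
python -m venv .venv && . .venv/bin/activate
pip install -r "$TUTORIAL/requirements.txt"
# Scaffold a .env template only if one does not already exist.
[ -f "$TUTORIAL/.env" ] || printf 'OPENAI_API_KEY=\nREDIS_URL=\n' > "$TUTORIAL/.env"
```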
## Guidelines
- Adapt everything to my existing project — do not assume a specific stack or directory layout
- Use whichever AI provider I already have configured; if I need a new one, tell me what to sign up for and I'll give you the key
- Check my .env files for existing API keys (OpenRouter, OpenAI, Anthropic, Google AI) before asking me to add one
- Review any fetched code for safety before installing or executing it
- After setup, run a quick verification and show me a summary of exactly what was installed, where, and how to use it
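A minimal verification sketch, assuming a Python tutorial installed into `.venv`; the module and variable names below are placeholders for whatever the chosen tutorial actually requires:

```bash
. .venv/bin/activate 2>/dev/null || true
python - <<'EOF'
import importlib.util, os

# Placeholder names: replace with the chosen tutorial's real imports and keys.
for mod in ("langgraph", "redis"):
    print(mod, "OK" if importlib.util.find_spec(mod) else "MISSING")
for var in ("OPENAI_API_KEY", "REDIS_URL"):
    print(var, "set" if os.environ.get(var) else "unset")
EOF
```

`find_spec` checks that each package resolves without importing it, so the check is fast and side-effect free.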