# Set Up Workflow: Ollama Production Deployment Guide with OpenCLAW
## What This Is
Ollama is a tool that lets you run capable open models such as Llama 3.3 and Gemma 3 directly on your own hardware, from laptops to production servers. Running models locally eliminates per-token API costs, keeps sensitive data on infrastructure you control, and removes the network round trip to a hosted provider; actual throughput depends on your hardware.
Source: https://skywork.ai/skypage/en/openclaw-ollama-model-setup/2037024471542140928
## Before You Start
Scan my workspace and analyze:
- The project language, framework, and directory structure
- Existing AI provider config (check .env, .env.local, config files for API keys — OpenRouter, OpenAI, Anthropic, Google AI, etc.)
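One quick way to do that scan from the terminal, assuming the conventional environment variable names (adjust the pattern to whatever names the project actually uses):
```
# List which common provider key names appear in local env files (values are not printed)
grep -hoE '^(OPENROUTER|OPENAI|ANTHROPIC|GOOGLE)[A-Z_]*API_KEY' .env .env.local 2>/dev/null | sort -u
```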
Then ask me before proceeding:
1. Which AI provider/API should this use? (Use whatever I already have configured, or ask me to set one up — options include direct provider APIs or a unified service like OpenRouter)
2. Where in my project should this be integrated?
3. Are there any customizations I need (model preferences, naming conventions, constraints)?
## Source Access Note
The source URL (https://skywork.ai/skypage/en/openclaw-ollama-model-setup/2037024471542140928) may not be directly accessible from the terminal. Use the Reference Implementation and Additional Context sections below instead. If you need more details, ask me to paste relevant content from the source.
## What to Implement
This is an **AI Workflow** — an end-to-end automation pattern or integration pipeline.
- Study the workflow architecture from the source and context below
- Identify which parts I can implement locally vs. parts that need external services
- For local parts: implement them using my existing stack and API keys
- For external parts: tell me exactly what services I need and help me configure the integration code
- Wire up any required API calls using keys from my .env files
## Additional Context
- Execute the command `docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama` to install and run the Ollama server in a detached Docker container, publishing the API on host port 11434 and persisting downloaded models in the named `ollama` volume.
- Scan the user's project files (e.g., `openclaw.json`, `pyproject.toml`) to identify the required local model, such as `llama3.3`. Execute `ollama pull [model_name]` to download that model to the local Ollama server (see the Reference Implementation below for running the pull inside the Docker container).
- Modify the OpenCLAW configuration file (e.g., `config.json`) to point to the local model. Add a new provider entry that sets `api_base` to `http://localhost:11434/v1`, `model` to the pulled model name, and `api_key` to `ollama`, which targets Ollama's OpenAI-compatible endpoint (sketched below).
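A rough sketch of that provider entry, assuming a JSON layout with a named provider block; the exact key names and nesting depend on your OpenCLAW version, so treat this as illustrative rather than the canonical schema:
```
{
  "providers": {
    "ollama-local": {
      "api_base": "http://localhost:11434/v1",
      "model": "llama3.3",
      "api_key": "ollama"
    }
  }
}
```
Ollama's OpenAI-compatible endpoint does not validate the `api_key`, but most OpenAI-style clients refuse to start with an empty one, hence the placeholder value.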
## Reference Implementation
```
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
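With the server running in the container above, pulling the model and checking the OpenAI-compatible endpoint would look roughly like this (substitute the model name your project actually requires):
```
# Pull the model inside the running container
docker exec ollama ollama pull llama3.3

# Confirm the model is now available locally
curl http://localhost:11434/api/tags

# Exercise the OpenAI-compatible chat endpoint
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.3", "messages": [{"role": "user", "content": "Say hello"}]}'
```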
## Guidelines
- Adapt everything to my existing project — do not assume a specific stack or directory layout
- Use whichever AI provider I already have configured; if I need a new one, tell me what to sign up for and I'll give you the key
- Check my .env files for existing API keys (OpenRouter, OpenAI, Anthropic, Google AI) before asking me to add one
- Review any fetched code for safety before installing or executing it
- After setup, run a quick verification and show me a summary of exactly what was installed, where, and how to use it
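For that final verification, a minimal health check might look like the following, assuming the Docker setup above and the `ollama` container name:
```
docker ps --filter name=ollama              # container is running
docker exec ollama ollama list              # pulled models are present
curl -s http://localhost:11434/api/version  # server responds
```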