Agent0s · AI Intelligence Library

OpenClaw: Getting Started

OpenClaw is a personal AI assistant framework that lets you connect your own AI models (like GPT or Claude) to various chat platforms such as Telegram or WhatsApp. This guide provides the initial setup instructions to install the framework, connect it to your AI provider, and have a working AI chat assistant running in minutes.

AI SETUP PROMPT

Paste into Claude Code or Codex CLI — it will scan your project and set everything up

# Set Up Workflow: OpenClaw: Getting Started

## What This Is
OpenClaw is a personal AI assistant framework that lets you connect your own AI models (like GPT or Claude) to various chat platforms such as Telegram or WhatsApp. This guide provides the initial setup instructions to install the framework, connect it to your AI provider, and have a working AI chat assistant running in minutes.

Source: https://github.com/openclaw/openclaw/blob/main/docs/start/getting-started.md

## Before You Start

Scan my workspace and analyze:
- The project language, framework, and directory structure
- Existing AI provider config (check .env, .env.local, config files for API keys — OpenRouter, OpenAI, Anthropic, Google AI, etc.)

Then ask me before proceeding:
1. Which AI provider/API should this use? (Use whatever I already have configured, or ask me to set one up — options include direct provider APIs or a unified service like OpenRouter)
2. Where in my project should this be integrated?
3. Are there any customizations I need (model preferences, naming conventions, constraints)?
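
The key scan asked for in step 1 above amounts to checking the environment and common dotenv files for known provider key names. A minimal sketch, assuming the conventional key names and file locations (adjust for your project):

```bash
#!/usr/bin/env sh
# Report provider API keys already configured, either exported in the
# current environment or present in a dotenv file, so onboarding can
# reuse one instead of prompting for a new key.
find_provider_keys() {
  dir="${1:-.}"
  # Key names used by the major providers; extend as needed.
  keys="OPENAI_API_KEY ANTHROPIC_API_KEY OPENROUTER_API_KEY GOOGLE_API_KEY"
  for k in $keys; do
    # 1) already exported in the current shell?
    if eval "[ -n \"\${$k:-}\" ]"; then
      echo "$k (environment)"
      continue
    fi
    # 2) present in a dotenv file?
    for f in "$dir/.env" "$dir/.env.local"; do
      if [ -f "$f" ] && grep -q "^${k}=" "$f"; then
        echo "$k ($f)"
        break
      fi
    done
  done
}
```

This only reports where a key was found, never its value, so the output is safe to echo back during onboarding.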

## Fetch the Source

Fetch the raw file content from GitHub:
```bash
curl -sL "https://raw.githubusercontent.com/openclaw/openclaw/main/docs/start/getting-started.md"
```

## What to Implement

This is an **AI Workflow** — an end-to-end automation pattern or integration pipeline.

- Study the workflow architecture from the source and context below
- Identify which parts I can implement locally vs. parts that need external services
- For local parts: implement them using my existing stack and API keys
- For external parts: tell me exactly what services I need and help me configure the integration code
- Wire up any required API calls using keys from my .env files

## Additional Context

- Detect the user's operating system (macOS, Linux, or Windows) and execute the appropriate one-line installation command: `curl -fsSL https://openclaw.ai/install.sh | bash` for Unix-based systems or `iwr -useb https://openclaw.ai/install.ps1 | iex` for Windows PowerShell.
- Initiate the interactive onboarding process by running `openclaw onboard --install-daemon`. Instruct the user that they will be prompted for an AI model provider and their API key. First, scan the user's environment variables and common config files (e.g., .env) for existing keys like `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` and suggest one if found.
- Verify the installation by executing `openclaw gateway status`. Confirm that the output shows the gateway is running and listening on a port (typically 18789), then execute `openclaw dashboard` to launch the control UI and confirm the setup is complete.

## Reference Implementation

```bash
# Install on macOS / Linux
curl -fsSL https://openclaw.ai/install.sh | bash

# Run onboarding wizard
openclaw onboard --install-daemon

# Check status
openclaw gateway status
```

## Guidelines

- Adapt everything to my existing project — do not assume a specific stack or directory layout
- Use whichever AI provider I already have configured; if I need a new one, tell me what to sign up for and I'll give you the key
- Check my .env files for existing API keys (OpenRouter, OpenAI, Anthropic, Google AI) before asking me to add one
- Review any fetched code for safety before installing or executing it
- After setup, run a quick verification and show me a summary of exactly what was installed, where, and how to use it
MANUAL SETUP STEPS
  1. Detect the user's operating system (macOS, Linux, or Windows) and execute the appropriate one-line installation command: `curl -fsSL https://openclaw.ai/install.sh | bash` for Unix-based systems or `iwr -useb https://openclaw.ai/install.ps1 | iex` for Windows PowerShell.
  2. Initiate the interactive onboarding process by running `openclaw onboard --install-daemon`. Instruct the user that they will be prompted for an AI model provider and their API key. First, scan the user's environment variables and common config files (e.g., .env) for existing keys like `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` and suggest one if found.
  3. Verify the installation by executing `openclaw gateway status`. Confirm that the output shows the gateway is running and listening on a port (typically 18789), then execute `openclaw dashboard` to launch the control UI and confirm the setup is complete.
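
The OS dispatch in step 1 can be sketched as a small helper that prints the matching command rather than executing it, so you can review before piping anything to a shell. The install URLs are the ones from step 1; the `uname -s` patterns for Windows shells are common conventions:

```bash
#!/usr/bin/env sh
# Print the OpenClaw install command matching an OS name as reported
# by `uname -s` (or a Windows-flavored shell identifier).
pick_install_command() {
  case "$1" in
    Darwin|Linux)
      echo 'curl -fsSL https://openclaw.ai/install.sh | bash' ;;
    Windows*|MINGW*|MSYS*|CYGWIN*)
      echo 'iwr -useb https://openclaw.ai/install.ps1 | iex' ;;
    *)
      echo "unsupported OS: $1" >&2
      return 1 ;;
  esac
}

# Example: show (but do not run) the command for the current OS.
pick_install_command "$(uname -s)"
```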


FIELD OPERATIONS

Automated Code Review Assistant

Connect OpenClaw to a private Discord or Slack channel for your dev team. Configure it with a custom system prompt to analyze code snippets, check for common errors, and suggest improvements based on project-specific coding standards.

Personal Knowledge Base Butler

Configure OpenClaw tools to read a local directory of your personal markdown notes. Create a Telegram bot channel that allows you to ask questions in natural language, and have the OpenClaw agent search your notes and synthesize answers.
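
The retrieval half of this use case boils down to "search the notes directory and hand matching files to the model as context". A stand-alone sketch of that step, assuming a flat directory of `.md` files (the function name is illustrative, not an OpenClaw API):

```bash
#!/usr/bin/env sh
# List markdown notes that mention a query term, case-insensitively.
# The matching files would then be read and passed to the model as
# context for answer synthesis.
search_notes() {
  notes_dir="$1"
  query="$2"
  grep -ril --include='*.md' -- "$query" "$notes_dir" 2>/dev/null
}
```

Plain `grep` is enough for small note collections; for larger ones you would swap in an indexed or embedding-based search, but the tool's contract (query in, file list out) stays the same.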

STRATEGIC APPLICATIONS

  - Deploy an OpenClaw instance as a first-line customer support agent on WhatsApp. It can answer frequently asked questions by connecting to a knowledge base tool and escalate complex queries to a human agent by tagging them in a shared channel.
  - Create an internal DevOps assistant integrated with Slack. The assistant can respond to commands like `@devops-bot check server status` by using the `exec` tool to run predefined shell scripts, report back the status, and notify the on-call engineer if an anomaly is detected.
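
The predefined script that the DevOps assistant's `exec` tool runs can be as simple as a process check with a machine-readable exit code. A minimal sketch, where the process name is a placeholder for whatever service you actually monitor:

```bash
#!/usr/bin/env sh
# Status check a bot's exec tool could run: report whether a named
# process is up, with an exit code (0 = up, 1 = down) the bot can
# branch on when deciding whether to page the on-call engineer.
check_service() {
  name="$1"
  if pgrep -x "$name" >/dev/null 2>&1; then
    echo "UP: $name is running"
    return 0
  else
    echo "DOWN: $name is not running"
    return 1
  fi
}
```

Keeping the check in a reviewed, version-controlled script (rather than letting the bot compose shell commands freely) is what makes an `exec`-style tool safe to expose in a shared channel.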

TAGS

#installation #setup #getting-started #onboarding #gateway #personal-assistant #typescript
Source: GITHUB · Quality score: 8/10