Agent0s · AI Intelligence Library
Updated daily · 7am PST
Technique · Intermediate · General AI

Awesome Edge AI Agents: A Curated List for On-Device Multimodal AI

This is a curated list of research papers and software frameworks for running advanced AI, like language and image models, directly on mobile phones and other small devices. This technique enables faster, more private AI applications that work without a constant internet connection.

AI SETUP PROMPT

Paste into Claude Code or Codex CLI — it will scan your project and set everything up

# Apply Technique: Awesome Edge AI Agents: A Curated List for On-Device Multimodal AI

## What This Is
This is a curated list of research papers and software frameworks for running advanced AI, like language and image models, directly on mobile phones and other small devices. This technique enables faster, more private AI applications that work without a constant internet connection.

Source: https://github.com/yh-yao/awesome-edge-ai-agents

## Before You Start

Scan my workspace and analyze:
- The project language, framework, and directory structure
- Existing AI provider config (check .env, .env.local, config files for API keys — OpenRouter, OpenAI, Anthropic, Google AI, etc.)
- Whether this repository or a similar tool is already cloned or installed

Then ask me before proceeding:
1. Which AI provider/API should this use? (Use whatever I already have configured, or ask me to set one up — options include direct provider APIs or a unified service like OpenRouter)
2. Where in my project should this be integrated?
3. Are there any customizations I need (model preferences, naming conventions, constraints)?
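The provider scan described above can be sketched in a few lines. This is a minimal illustration, assuming keys live in a dotenv-style file; the environment-variable names below are common conventions, not something specified by the source list:

```python
import os

# Env-var names commonly used by each provider (assumed, not exhaustive),
# mapped to a human-readable provider label.
KNOWN_KEYS = {
    "OPENROUTER_API_KEY": "OpenRouter",
    "OPENAI_API_KEY": "OpenAI",
    "ANTHROPIC_API_KEY": "Anthropic",
    "GOOGLE_API_KEY": "Google AI",
}

def detect_providers(env_path=".env"):
    """Return provider names whose API keys appear in an env file."""
    found = []
    if not os.path.exists(env_path):
        return found
    with open(env_path) as f:
        for line in f:
            # Dotenv lines look like NAME=value; comments won't match.
            name = line.split("=", 1)[0].strip()
            if name in KNOWN_KEYS and KNOWN_KEYS[name] not in found:
                found.append(KNOWN_KEYS[name])
    return found
```

In practice the agent would run this against `.env` and `.env.local` before asking the user which provider to use.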

## Fetch the Source

Clone or inspect the repository to understand what needs to be installed:
```bash
gh repo clone yh-yao/awesome-edge-ai-agents
```
Review the README, directory structure, and any install instructions before proceeding.

## What to Implement

This is an **AI Technique** — a pattern or methodology for working with AI models.

- Explain how this technique applies to my current project and what benefit it provides
- Implement it in a way that fits my existing codebase — suggest concrete files to modify or create
- If it requires specific model capabilities (structured output, function calling, etc.), verify my current provider supports them
- Show me a working example I can test immediately

## Additional Context

- Clone the repository `https://github.com/yh-yao/awesome-edge-ai-agents.git` into a temporary directory to access its structured resource links.
- Parse the `README.md` file, focusing on the 'Frameworks & Inference Engines' and 'LLM Inference on Edge' sections to identify relevant tools such as `llama.cpp` or `MLC-LLM` based on the user's project requirements.
- Based on the user's specified goal (e.g., 'run a vision model on a Raspberry Pi'), extract the most relevant paper or framework link from the list and provide a summary of its approach and a direct link to its source.
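The README-parsing step above can be sketched as follows. The section names come from the source list's README, but the heading levels, sample content, and parsing logic here are assumptions for illustration:

```python
def extract_sections(readme_text, wanted):
    """Split markdown by headings and keep only the wanted section bodies."""
    sections = {}
    current, buf = None, []
    for line in readme_text.splitlines():
        if line.lstrip().startswith("#"):
            if current in wanted:
                sections[current] = "\n".join(buf).strip()
            current, buf = line.lstrip("# ").strip(), []
        else:
            buf.append(line)
    if current in wanted:
        sections[current] = "\n".join(buf).strip()
    return sections

# Toy README standing in for the real one (entries are illustrative).
sample = """# Awesome Edge AI Agents
## Frameworks & Inference Engines
- [llama.cpp](https://github.com/ggerganov/llama.cpp)
- [MLC-LLM](https://github.com/mlc-ai/mlc-llm)
## Papers
- Some paper
"""
frameworks = extract_sections(sample, {"Frameworks & Inference Engines"})
```

From the extracted section, the agent can match link text against the user's stated goal and surface the most relevant framework.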

## Guidelines

- Adapt everything to my existing project — do not assume a specific stack or directory layout
- Use whichever AI provider I already have configured; if I need a new one, tell me what to sign up for and I'll give you the key
- Check my .env files for existing API keys (OpenRouter, OpenAI, Anthropic, Google AI) before asking me to add one
- Review any fetched code for safety before installing or executing it
- After setup, run a quick verification and show me a summary of exactly what was installed, where, and how to use it
Compatible with Claude Code & Codex CLI
MANUAL SETUP STEPS
  1. Clone the repository `https://github.com/yh-yao/awesome-edge-ai-agents.git` into a temporary directory to access its structured resource links.
  2. Parse the `README.md` file, focusing on the 'Frameworks & Inference Engines' and 'LLM Inference on Edge' sections to identify relevant tools such as `llama.cpp` or `MLC-LLM` based on the user's project requirements.
  3. Based on the user's specified goal (e.g., 'run a vision model on a Raspberry Pi'), extract the most relevant paper or framework link from the list and provide a summary of its approach and a direct link to its source.

FIELD OPERATIONS

On-Device Personal Health Assistant

Build a mobile app that uses a small, on-device language model (deployed using a framework like `llama.cpp`) to privately track and answer questions about a user's health logs (e.g., diet, exercise, symptoms) without sending personal data to the cloud.
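A minimal sketch of the privacy-preserving pattern behind this use case: assemble the prompt from local logs so raw entries never leave the device. The log format, system wording, and model path below are all assumptions, not part of the source list:

```python
def build_health_prompt(logs, question):
    """Assemble an on-device prompt from local health-log entries.

    Each log entry is a (date, note) pair; the wording is illustrative.
    The point is that raw entries are only ever joined into a local prompt.
    """
    entries = "\n".join(f"- {date}: {note}" for date, note in logs)
    return (
        "You are a private health assistant. Using only the logs below, "
        "answer the user's question.\n\n"
        f"Logs:\n{entries}\n\nQuestion: {question}\nAnswer:"
    )

# With llama-cpp-python and a local GGUF model (hypothetical path), the
# prompt would then be answered entirely offline, e.g.:
#   from llama_cpp import Llama
#   llm = Llama(model_path="models/small-model.gguf")
#   out = llm(build_health_prompt(logs, "Did I sleep enough?"), max_tokens=128)

logs = [("2024-05-01", "ran 5 km, slept 7 h"),
        ("2024-05-02", "headache, slept 5 h")]
prompt = build_health_prompt(logs, "Did I sleep enough this week?")
```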

Real-Time Object Description for Visually Impaired Users

Create an Android or iOS application using MobileCLIP or a similar efficient multimodal model. The app will use the phone's camera to identify objects in real-time and provide audio descriptions, running entirely offline for maximum privacy and speed.
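The mechanism behind this use case is CLIP-style zero-shot classification: the model maps images and candidate text labels into a shared embedding space, and the prediction is a cosine-similarity argmax. A sketch with stand-in embeddings (on device these would come from MobileCLIP's image and text encoders, which are not invoked here):

```python
import numpy as np

def zero_shot_label(image_emb, label_embs, labels):
    """Pick the label whose embedding is most similar to the image embedding."""
    def norm(v):
        # L2-normalize so the dot product equals cosine similarity.
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = norm(label_embs) @ norm(image_emb)
    return labels[int(np.argmax(sims))]

# Toy vectors: the "image" points almost exactly in the "cup" direction.
labels = ["cup", "chair", "door"]
label_embs = np.eye(3)                      # stand-in text embeddings
image_emb = np.array([0.9, 0.1, 0.2])       # stand-in image embedding
print(zero_shot_label(image_emb, label_embs, labels))  # → cup
```

The app loop would feed camera frames through the image encoder, run this argmax over a fixed label set, and speak the result, all offline.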

STRATEGIC APPLICATIONS

  • →Deploy AI agents on edge devices (e.g., cameras on an assembly line) to perform real-time quality control checks, identifying product defects without the latency or bandwidth cost of streaming video to the cloud.
  • →Develop an in-store retail mobile app that uses on-device AI to provide customers with instant product information, recommendations, or virtual try-on features by scanning products, ensuring a smooth experience even with poor store Wi-Fi.

TAGS

#edge-ai · #mobile-ai · #on-device · #multimodal · #llama-cpp · #inference-optimization · #awesome-list
Source: GITHUB · Quality score: 7/10