Introduction
Build AI copilots for your app in minutes, not weeks
Build AI Copilots That Actually Get Your App
Your AI assistant shouldn't be clueless about what's happening on screen. YourGPT Copilot SDK gives your AI eyes, context, and the ability to take action.
3 lines of code. That's it. Wrap your app, drop in the chat component, done.
What Makes This Different?
Context Aware
AI can see screenshots, read console errors, and inspect network failures. It actually understands what went wrong.
Tool Execution
Define tools with Zod schemas. AI calls them, you handle the result. Full agentic loop support.
Stupid Simple
No complex setup. No state management headaches. Just React hooks that work.
Multi-LLM
OpenAI, Anthropic, Google, Groq, Ollama. Swap providers without changing your code.
The Gist
```tsx
import { YourGPTProvider } from '@yourgpt/copilot-sdk-react';
import { CopilotChat } from '@yourgpt/copilot-sdk-ui';

function App() {
  return (
    <YourGPTProvider runtimeUrl="/api/chat">
      <CopilotChat />
    </YourGPTProvider>
  );
}
```

That's a working AI chat. Want the AI to see your screen when users say "I have an error"?
```tsx
<YourGPTProvider
  runtimeUrl="/api/chat"
  tools={{ screenshot: true, console: true, requireConsent: true }}
>
  <CopilotChat />
</YourGPTProvider>
```

Done. The SDK handles consent UI, captures context, sends it to the AI.
Packages
| Package | What it does |
|---|---|
| `@yourgpt/copilot-sdk-react` | Hooks + Provider. The brain. |
| `@yourgpt/copilot-sdk-ui` | Chat components. The face. |
| `@yourgpt/copilot-sdk-runtime` | Server-side adapters + streaming. The backend. |
| `@yourgpt/copilot-sdk-core` | Types, utils, smart context tools. The foundation. |
| `@yourgpt/copilot-sdk-knowledge` | Knowledge base integration for RAG. Optional. |
Quick Install
```bash
npm install @yourgpt/copilot-sdk-react @yourgpt/copilot-sdk-ui @yourgpt/copilot-sdk-runtime
pnpm add @yourgpt/copilot-sdk-react @yourgpt/copilot-sdk-ui @yourgpt/copilot-sdk-runtime
bun add @yourgpt/copilot-sdk-react @yourgpt/copilot-sdk-ui @yourgpt/copilot-sdk-runtime
```

The Flow
```
User types message
    ↓
YourGPTProvider sends to your /api/chat
    ↓
Runtime talks to OpenAI/Anthropic/etc.
    ↓
AI decides: respond OR call a tool
    ↓
Tool executes client-side → result sent back
    ↓
AI continues until done (agentic loop)
    ↓
Response streams to UI
```

All of this is handled. You just define tools and build UI.
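The server half of this loop lives in `@yourgpt/copilot-sdk-runtime` (the adapters and streaming from the Packages table). This page doesn't show the runtime's API, so the snippet below is only a sketch: `createRuntime`, `openai`, and `handleRequest` are hypothetical names standing in for whatever the runtime actually exports, and the Next.js route file is an assumed setup. The point it illustrates is that the provider choice lives here, which is why you can swap OpenAI for Anthropic, Google, Groq, or Ollama without touching the client code above.

```ts
// Sketch only — createRuntime / openai / handleRequest are hypothetical
// stand-ins for the real exports of @yourgpt/copilot-sdk-runtime.
// Assumes a Next.js App Router route at app/api/chat/route.ts.
import { createRuntime, openai } from '@yourgpt/copilot-sdk-runtime';

const runtime = createRuntime({
  // Swap this adapter (anthropic, google, groq, ollama, ...) to change
  // providers — the client code stays exactly the same.
  provider: openai({ apiKey: process.env.OPENAI_API_KEY }),
});

// The runtime runs the agentic loop server-side and streams the result
// back to <CopilotChat /> via the runtimeUrl you configured ("/api/chat").
export async function POST(req: Request) {
  return runtime.handleRequest(req);
}
```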
Real Example: Navigation Tool
```tsx
import { useToolWithSchema } from '@yourgpt/copilot-sdk-react';
// useNavigate is assumed to come from React Router here — use whatever
// navigation API your app already has.
import { useNavigate } from 'react-router-dom';
import { z } from 'zod';

function NavigationTool() {
  const navigate = useNavigate();

  useToolWithSchema({
    name: 'navigate_to_page',
    description: 'Navigate user to a specific page',
    schema: z.object({
      path: z.string().describe('The URL path to navigate to'),
    }),
    handler: async ({ path }) => {
      navigate(path);
      return { success: true, navigatedTo: path };
    },
  });

  return null;
}
```

Now when a user says "take me to settings", the AI calls `navigate_to_page({ path: '/settings' })`.
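The hook registers the tool, so the component only needs to render somewhere in your tree. Exactly where is an assumption on this page, but any component mounted under `YourGPTProvider` should be able to register tools — a minimal sketch:

```tsx
// Sketch: mounting the tool component inside the provider so the hook
// can register navigate_to_page. Placement is an assumption — any
// descendant of YourGPTProvider should work.
function App() {
  return (
    <YourGPTProvider runtimeUrl="/api/chat">
      <NavigationTool />
      <CopilotChat />
    </YourGPTProvider>
  );
}
```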