Custom Provider
Build an adapter for any LLM API that isn't natively supported.
Custom providers implement the same interface as the built-in ones, so your frontend code doesn't need to change.
Provider Interface
A provider adapter must implement:
interface LLMProvider {
  chat(options: ChatOptions): Promise<ChatResponse>;
  streamChat(options: ChatOptions): AsyncIterable<StreamChunk>;
}

interface ChatOptions {
  messages: Message[];
  model: string;
  tools?: Tool[];
  temperature?: number;
  maxTokens?: number;
}
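Message, Tool, ChatResponse, and StreamChunk come from the same package (the example below imports StreamChunk directly). As a rough orientation, here is what their shapes look like as used in this guide; treat this as an inferred sketch, not the authoritative declarations, and import the real types rather than redeclaring them:

// Sketch only: inferred from how these values are used in this guide.
interface ChatResponse {
  role: 'assistant';
  content: string;
  toolCalls?: Array<{ id: string; name: string; arguments: string }>;
}

type StreamChunk =
  | { type: 'content_delta'; content: string }
  | { type: 'done' };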

Basic Implementation
// lib/providers/my-provider.ts
import { LLMProvider, ChatOptions, StreamChunk } from '@yourgpt/copilot-sdk-runtime';

export class MyCustomProvider implements LLMProvider {
  private apiKey: string;
  private baseUrl: string;

  constructor(config: { apiKey: string; baseUrl: string }) {
    this.apiKey = config.apiKey;
    this.baseUrl = config.baseUrl;
  }

  async chat(options: ChatOptions) {
    const response = await fetch(`${this.baseUrl}/chat`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: options.model,
        messages: options.messages,
        // Map to your API's format
      }),
    });

    if (!response.ok) {
      throw new Error(`Provider request failed: ${response.status}`);
    }

    const data = await response.json();
    return {
      content: data.response,
      role: 'assistant',
      // Map response to standard format
    };
  }

  async *streamChat(options: ChatOptions): AsyncIterable<StreamChunk> {
    const response = await fetch(`${this.baseUrl}/chat/stream`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: options.model,
        messages: options.messages,
        stream: true,
      }),
    });

    const reader = response.body?.getReader();
    if (!reader) throw new Error('No response body');

    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      // { stream: true } keeps multi-byte characters intact across chunks
      const chunk = decoder.decode(value, { stream: true });
      // Parse your API's streaming format
      yield {
        type: 'content_delta',
        content: chunk,
      };
    }

    yield { type: 'done' };
  }
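The loop above forwards each raw chunk as a content delta, which only works if your API streams plain text. Most APIs frame their stream, commonly as Server-Sent Events, so in practice you buffer and parse before yielding. A minimal sketch of the loop body, assuming OpenAI-style `data:` lines; the `event.delta` field is illustrative, so adapt both to your wire format:

// Inside streamChat(), replacing the raw pass-through loop above.
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  // SSE events are newline-delimited; keep any trailing partial
  // line in the buffer until the next chunk completes it.
  const lines = buffer.split('\n');
  buffer = lines.pop() ?? '';

  for (const line of lines) {
    if (!line.startsWith('data: ') || line === 'data: [DONE]') continue;
    const event = JSON.parse(line.slice('data: '.length));
    // 'event.delta' is illustrative; read whatever field your API uses.
    if (event.delta) {
      yield { type: 'content_delta', content: event.delta };
    }
  }
}

yield { type: 'done' };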

Register with Runtime
// app/api/chat/route.ts
import { createRuntime } from '@yourgpt/copilot-sdk-runtime';
import { MyCustomProvider } from '@/lib/providers/my-provider';
const runtime = createRuntime({
  providers: {
    'my-custom': new MyCustomProvider({
      apiKey: process.env.MY_API_KEY!,
      baseUrl: 'https://api.my-llm.com',
    }),
  },
});

export async function POST(req: Request) {
  return runtime.handleRequest(req);
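The providers map isn't limited to one entry. Custom adapters and built-in ones, such as the OpenAI-compatible adapter covered below, register side by side, and the frontend chooses between them by key:

import { createRuntime, OpenAICompatibleProvider } from '@yourgpt/copilot-sdk-runtime';
import { MyCustomProvider } from '@/lib/providers/my-provider';

const runtime = createRuntime({
  providers: {
    'my-custom': new MyCustomProvider({
      apiKey: process.env.MY_API_KEY!,
      baseUrl: 'https://api.my-llm.com',
    }),
    // Built-in adapter registered under its own key:
    'together': new OpenAICompatibleProvider({
      apiKey: process.env.TOGETHER_API_KEY!,
      baseUrl: 'https://api.together.xyz/v1',
    }),
  },
});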

Use on Frontend
<YourGPTProvider
  runtimeUrl="/api/chat"
  llm={{
    provider: 'my-custom',
    model: 'my-model-id',
  }}
>
  <CopilotChat />
</YourGPTProvider>

Tool Support
If your provider supports function calling, implement tool handling:
async chat(options: ChatOptions) {
  const response = await fetch(`${this.baseUrl}/chat`, {
    // ...
    body: JSON.stringify({
      model: options.model,
      messages: options.messages,
      tools: options.tools?.map(tool => ({
        // Map to your API's tool format
        name: tool.name,
        description: tool.description,
        parameters: tool.inputSchema,
      })),
    }),
  });

  const data = await response.json();

  // Check for tool calls in the response
  if (data.tool_calls) {
    return {
      content: '',
      role: 'assistant',
      toolCalls: data.tool_calls.map((tc: any) => ({
        id: tc.id,
        name: tc.function.name,
        arguments: tc.function.arguments,
      })),
    };
  }

  return {
    content: data.response,
    role: 'assistant',
  };
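Tool calls are only half the loop: once a tool has run, its result typically comes back to your adapter inside options.messages on a follow-up request, and you must map those messages to your API's format too. How such a message is shaped depends on the SDK's Message type; the sketch below assumes a 'tool' role with a toolCallId field, both of which are assumptions to verify against the exported type:

import { Message } from '@yourgpt/copilot-sdk-runtime';

// Sketch: translating SDK messages into an OpenAI-style wire format.
// The 'tool' role and toolCallId field are assumed, not confirmed.
function toApiMessages(messages: Message[]) {
  return messages.map((m: any) =>
    m.role === 'tool'
      ? { role: 'tool', tool_call_id: m.toolCallId, content: m.content }
      : { role: m.role, content: m.content }
  );
}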

OpenAI-Compatible APIs
Many providers are OpenAI-compatible. For those, use the built-in OpenAICompatibleProvider adapter instead of writing your own:
import { createRuntime, OpenAICompatibleProvider } from '@yourgpt/copilot-sdk-runtime';
const runtime = createRuntime({
  providers: {
    'together': new OpenAICompatibleProvider({
      apiKey: process.env.TOGETHER_API_KEY!,
      baseUrl: 'https://api.together.xyz/v1',
    }),
  },
});

OpenAI-compatible providers include Together AI, Anyscale, Fireworks, Perplexity, and many others.
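On the frontend, an OpenAI-compatible provider is selected exactly like a custom one: the provider key matches the registration above, and the model id is passed through to the upstream API (the id below is a placeholder):

<YourGPTProvider
  runtimeUrl="/api/chat"
  llm={{
    provider: 'together',
    model: 'your-together-model-id', // placeholder: any model the provider serves
  }}
>
  <CopilotChat />
</YourGPTProvider>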
Next Steps
- Runtime Setup - Full server configuration
- Architecture - Understanding the SDK