Google (Gemini)

Google's Gemini models. Excellent for multimodal tasks with massive context windows.

Setup

1. Get API Key

Get your API key from Google AI Studio

2. Add Environment Variable

# .env.local
GOOGLE_API_KEY=...

3. Configure Provider

<YourGPTProvider
  runtimeUrl="/api/chat"
  llm={{
    provider: 'google',
    model: 'gemini-1.5-pro',
  }}
>
  <CopilotChat />
</YourGPTProvider>
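Once the steps above are done, it can help to fail fast if the environment variable from step 2 is missing. A minimal sketch, assuming a server-side Node environment; `requireGoogleApiKey` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical startup guard (not part of the SDK): throw early if the
// GOOGLE_API_KEY from .env.local never made it into the environment.
function requireGoogleApiKey(
  env: Record<string, string | undefined> = process.env,
): string {
  const key = env.GOOGLE_API_KEY;
  if (!key) {
    throw new Error('GOOGLE_API_KEY is not set; add it to .env.local');
  }
  return key;
}
```

Calling this once at server startup surfaces a missing key immediately instead of as a failed request later.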

Available Models

Model              Context   Best For
gemini-1.5-pro     1M+       Complex tasks, huge context
gemini-1.5-flash   1M+       Fast, efficient
gemini-1.0-pro     32K       Previous generation

Recommended: Use gemini-1.5-flash for speed or gemini-1.5-pro for complex tasks.
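The flash-for-speed / pro-for-complexity recommendation can be encoded as a small helper. A sketch only; the 32K threshold is an illustrative cutoff (roughly where gemini-1.0-pro's context ends), not official guidance:

```typescript
// Hypothetical model picker following the recommendation above:
// flash for routine chat, pro for complex tasks or very large prompts.
function pickGeminiModel(promptTokens: number, complexTask: boolean): string {
  if (complexTask || promptTokens > 32_000) {
    return 'gemini-1.5-pro';   // strongest reasoning, huge context
  }
  return 'gemini-1.5-flash';   // fast and cheap for everyday requests
}
```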


Configuration Options

llm={{
  provider: 'google',
  model: 'gemini-1.5-pro',
  temperature: 0.7,        // 0-1
  maxTokens: 8192,         // Max response length
  topP: 0.95,              // Nucleus sampling
  topK: 40,                // Top-k sampling
}}
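The ranges in the comments above can be checked before the config is passed to the provider. A minimal sketch; `validateLLMConfig` is a hypothetical helper that simply mirrors the documented ranges, not an SDK API:

```typescript
// Hypothetical validator mirroring the documented option ranges.
interface LLMConfig {
  provider: 'google';
  model: string;
  temperature?: number; // 0-1 (per the snippet above)
  maxTokens?: number;   // max response length, >= 1
  topP?: number;        // nucleus sampling, 0-1
  topK?: number;        // top-k sampling, positive integer
}

function validateLLMConfig(cfg: LLMConfig): string[] {
  const errors: string[] = [];
  if (cfg.temperature !== undefined && (cfg.temperature < 0 || cfg.temperature > 1))
    errors.push('temperature must be between 0 and 1');
  if (cfg.topP !== undefined && (cfg.topP < 0 || cfg.topP > 1))
    errors.push('topP must be between 0 and 1');
  if (cfg.topK !== undefined && (!Number.isInteger(cfg.topK) || cfg.topK < 1))
    errors.push('topK must be a positive integer');
  if (cfg.maxTokens !== undefined && cfg.maxTokens < 1)
    errors.push('maxTokens must be at least 1');
  return errors;
}
```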

Massive Context Window

Gemini supports 1M+ token context:

// Process entire codebases or books
// (the interpolated prompt below can be 500K+ tokens)
<YourGPTProvider
  systemPrompt={`Here is the entire codebase:

${entireCodebase}

Help users understand and modify this code.`}
>
  <CopilotChat />
</YourGPTProvider>
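Before interpolating something that large, it is worth checking that it actually fits. A rough sketch using the common ~4-characters-per-token rule of thumb; this is an approximation, and `estimateTokens` / `fitsInContext` are hypothetical helpers, not SDK APIs (use the provider's token counter for exact numbers):

```typescript
// Rough rule of thumb: ~4 characters per token for English text.
const GEMINI_15_CONTEXT = 1_000_000;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Leave headroom for the model's response (default mirrors maxTokens above).
function fitsInContext(prompt: string, reservedForResponse = 8_192): boolean {
  return estimateTokens(prompt) + reservedForResponse <= GEMINI_15_CONTEXT;
}
```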

Multimodal (Images, Video, Audio)

Gemini excels at multimodal understanding:

const { sendMessage } = useYourGPT();

// Image analysis
sendMessage("What's in this image?", [
  { type: 'image', data: imageBase64, mimeType: 'image/png' }
]);

// Video analysis (if supported)
sendMessage("Summarize this video", [
  { type: 'video', data: videoBase64, mimeType: 'video/mp4' }
]);
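The attachments above are plain `{ type, data, mimeType }` objects with base64 data. A small sketch of building one from raw bytes, assuming a Node environment; `toAttachment` is a hypothetical helper whose shape simply mirrors the snippets above:

```typescript
// Attachment shape mirroring the sendMessage examples above.
type Attachment = { type: 'image' | 'video' | 'audio'; data: string; mimeType: string };

// Hypothetical helper: derive the attachment type from the MIME type
// and base64-encode the raw bytes.
function toAttachment(bytes: Uint8Array, mimeType: string): Attachment {
  const kind = mimeType.split('/')[0];
  if (kind === 'image' || kind === 'video' || kind === 'audio') {
    return { type: kind, data: Buffer.from(bytes).toString('base64'), mimeType };
  }
  throw new Error(`unsupported mime type: ${mimeType}`);
}
```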

Tool Calling

useToolWithSchema({
  name: 'search_knowledge',
  description: 'Search internal knowledge base',
  schema: z.object({
    query: z.string(),
    filters: z.object({
      category: z.string().optional(),
      dateRange: z.string().optional(),
    }).optional(),
  }),
  handler: async ({ query, filters }) => {
    const results = await searchKnowledge(query, filters);
    return { success: true, data: results };
  },
});
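The handler above returns a `{ success, data }` object that the model receives as the tool result. A self-contained sketch of the same pattern with a stubbed in-memory `searchKnowledge` (the stub, its data, and the synchronous shape are assumptions for illustration only):

```typescript
// Stubbed in-memory knowledge base standing in for a real searchKnowledge().
const KB = [
  { category: 'billing', text: 'Invoices are issued monthly.' },
  { category: 'api',     text: 'Rate limits reset every minute.' },
];

function searchKnowledge(query: string, filters?: { category?: string }) {
  return KB.filter(
    (doc) =>
      doc.text.toLowerCase().includes(query.toLowerCase()) &&
      (!filters?.category || doc.category === filters.category),
  );
}

// Handler mirroring the useToolWithSchema example above: the model sees
// the { success, data } object returned here as the tool result.
function runTool(args: { query: string; filters?: { category?: string } }) {
  const results = searchKnowledge(args.query, args.filters);
  return { success: true, data: results };
}
```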

Pricing

Model              Input               Output
gemini-1.5-pro     $1.25 / 1M tokens   $5.00 / 1M tokens
gemini-1.5-flash   $0.075 / 1M tokens  $0.30 / 1M tokens

Very competitive pricing. Check Google AI pricing for current rates.
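Per-token pricing makes cost estimation a simple multiplication. A sketch using the rates from the table above; `estimateCostUSD` is a hypothetical helper, and the hard-coded rates may be stale, so check Google AI pricing before relying on them:

```typescript
// USD per 1M tokens, taken from the pricing table above (may be stale).
const PRICING: Record<string, { input: number; output: number }> = {
  'gemini-1.5-pro':   { input: 1.25,  output: 5.0 },
  'gemini-1.5-flash': { input: 0.075, output: 0.3 },
};

function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICING[model];
  if (!p) throw new Error(`unknown model: ${model}`);
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

For example, a request with 1M input tokens to gemini-1.5-pro costs about $1.25 before output tokens are counted.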


Next Steps

  • Groq - Ultra-fast inference
  • Features - Explore SDK features