Posts tagged "AI Gateway"

  • Gemini 3 Flash Preview now available in AI Gateway

    Google’s Gemini 3 Flash Preview is now available through AI Gateway. You can call this model from Netlify Functions without configuring API keys; the AI Gateway provides the connection to Google for you.

    Example usage in a Function:

    import { GoogleGenAI } from '@google/genai';
    import type { Context } from '@netlify/functions';

    export default async (request: Request, context: Context) => {
      // No API key needed: the AI Gateway supplies the Gemini credentials.
      const ai = new GoogleGenAI({});

      const response = await ai.models.generateContent({
        model: 'gemini-3-flash-preview',
        contents: 'How does AI work?'
      });

      return new Response(JSON.stringify({ answer: response.text }), {
        headers: { 'Content-Type': 'application/json' }
      });
    };

    This model works with any function type and is compatible with other Netlify primitives such as caching and rate limiting, giving you control over request behavior across your site.
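
    For example, because the model call happens inside a regular function, you can layer Netlify's CDN caching on top of it by setting cache headers on the response. The sketch below is illustrative (the prompt and cache lifetime are arbitrary), but the Netlify-CDN-Cache-Control header is the documented way to control how long Netlify's CDN keeps a function response:

    import { GoogleGenAI } from '@google/genai';

    export default async () => {
      const ai = new GoogleGenAI({});

      const response = await ai.models.generateContent({
        model: 'gemini-3-flash-preview',
        contents: 'Summarize what an AI gateway does in one sentence.'
      });

      return new Response(JSON.stringify({ answer: response.text }), {
        headers: {
          'Content-Type': 'application/json',
          // Browsers revalidate on every request, while the CDN serves the cached answer for an hour.
          'Cache-Control': 'public, max-age=0, must-revalidate',
          'Netlify-CDN-Cache-Control': 'public, max-age=3600'
        }
      });
    };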

    See the AI Gateway documentation for details.

    Permalink to Gemini 3 Flash Preview now available in AI Gateway
  • GPT-image-1.5 now available in AI Gateway

    OpenAI’s GPT-image-1.5 is now available through AI Gateway. You can call this model from Netlify Functions without configuring API keys; the AI Gateway provides the connection to OpenAI for you.

    Example usage in a Function:

    import OpenAI from 'openai';

    const ai = new OpenAI();

    export default async (req, context) => {
      const response = await ai.images.generate({
        model: 'gpt-image-1.5',
        prompt: 'Generate a realistic image of a golden retriever working in an office',
        n: 1,
        size: '1024x1024',
        quality: 'low',
        output_format: 'jpeg',
        output_compression: 80
      });

      // Decode the base64 payload into raw bytes before returning it as an image.
      const imageBase64 = response.data[0].b64_json;
      const imageBuffer = Uint8Array.from(atob(imageBase64), c => c.charCodeAt(0));

      return new Response(imageBuffer, {
        status: 200,
        headers: {
          'content-type': 'image/jpeg',
          'cache-control': 'no-store'
        }
      });
    };

    This model works with any function type and is compatible with other Netlify primitives such as caching and rate limiting, giving you control over request behavior across your site.

    See the AI Gateway documentation for details.

    Permalink to GPT-image-1.5 now available in AI Gateway
  • GPT-5.2 and GPT-5.2-Pro now available in AI Gateway and Agent Runners

    OpenAI’s GPT-5.2 and GPT-5.2-Pro are now available through AI Gateway and Agent Runners. You can call these models from Netlify Functions without configuring API keys; the AI Gateway provides the connection to OpenAI for you.

    Example usage in a Function:

    import { OpenAI } from "openai";

    export default async () => {
      const openai = new OpenAI();

      const response = await openai.chat.completions.create({
        model: "gpt-5.2",
        messages: [
          { role: "user", content: "What are the key improvements in GPT-5.2?" }
        ]
      });

      return new Response(JSON.stringify(response), {
        headers: { "Content-Type": "application/json" }
      });
    };

    These models work with any function type and are compatible with other Netlify primitives such as caching and rate limiting, giving you control over request behavior across your site.

    See the AI Gateway documentation for details.

    Agent Runners support the same models, enabling AI to complete long-running coding tasks. You can learn more in the Agent Runners documentation.

    Permalink to GPT-5.2 and GPT-5.2-Pro now available in AI Gateway and Agent Runners
  • GPT-5.1-Codex-Max now available in AI Gateway and Agent Runners

    OpenAI’s GPT-5.1-Codex-Max model is now available through Netlify’s AI Gateway and Agent Runners with zero configuration required.

    Use the OpenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the GPT-5.1-Codex-Max model:

    import OpenAI from 'openai';

    export default async () => {
      const openai = new OpenAI();

      const response = await openai.responses.create({
        model: 'gpt-5.1-codex-max',
        input: 'What improvements are in GPT-5.1-Codex-Max?'
      });

      return new Response(JSON.stringify(response), {
        headers: { 'Content-Type': 'application/json' }
      });
    };

    GPT-5.1-Codex-Max is available across Background Functions, Scheduled Functions, and Edge Functions. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
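
    As a sketch of how this fits with Scheduled Functions, the example below runs the model on a cron-style schedule; the schedule and prompt are illustrative, and in practice you would typically write the result somewhere rather than just logging it:

    import OpenAI from 'openai';
    import type { Config } from '@netlify/functions';

    export default async () => {
      const openai = new OpenAI();

      const response = await openai.responses.create({
        model: 'gpt-5.1-codex-max',
        input: 'Write a short daily status summary template for an engineering team.'
      });

      // Scheduled Functions have no caller to respond to, so just log the output.
      console.log(response.output_text);
    };

    export const config: Config = {
      schedule: '@daily'
    };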

    Learn more in the AI Gateway documentation.

    You can also leverage GPT-5.1-Codex-Max with Agent Runners to build powerful AI-powered workflows, including expanded tool use and support for long-running agent tasks. Learn more in the Agent Runners documentation.

    Permalink to GPT-5.1-Codex-Max now available in AI Gateway and Agent Runners
  • Netlify Vite Plugin now supports AI Gateway locally

    You can now use AI Gateway in local development with just npm run dev when using the Netlify Vite Plugin. Previously, AI Gateway’s auto-configured environment variables only worked when running netlify dev, which added friction for developers using Vite-powered frameworks like Astro.

    With this update, AI Gateway environment variables are automatically populated when running your Vite development server directly. This means you can run standard framework commands without extra steps:

    # Works with any Vite-based framework
    npm run dev
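
    If your framework does not register the plugin for you, adding it to your Vite config is a one-liner. This is a minimal sketch and assumes the plugin is installed as @netlify/vite-plugin; check the plugin documentation for the exact package name and options:

    // vite.config.ts
    import { defineConfig } from 'vite';
    import netlify from '@netlify/vite-plugin';

    export default defineConfig({
      // Emulates the Netlify platform locally, including AI Gateway environment variables.
      plugins: [netlify()]
    });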

    This is part of our ongoing effort to streamline the developer experience for Vite frameworks. Modern frameworks like Astro let you specify Netlify as your deployment target and handle everything automatically—now AI Gateway works the same way.

    This change also improves compatibility with AI coding agents and other automated workflows that expect standard development commands to work without additional configuration.

    Learn more about the Netlify Vite Plugin and AI Gateway in the documentation.

    Permalink to Netlify Vite Plugin now supports AI Gateway locally
  • Claude Opus 4.5 now live in AI Gateway, plus latest Claude Code via Agent Runners

    Anthropic’s Claude Opus 4.5 model is now available through Netlify’s AI Gateway with zero configuration required.

    Use the Anthropic SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the Claude Opus 4.5 model:

    import Anthropic from "@anthropic-ai/sdk";

    export default async () => {
      const anthropic = new Anthropic();

      const response = await anthropic.messages.create({
        model: "claude-opus-4-5-20251101",
        max_tokens: 4096,
        messages: [
          {
            role: "user",
            content: "Give me pros and cons of using Claude Opus 4.5 over other models."
          }
        ]
      });

      return new Response(JSON.stringify(response), {
        headers: { "Content-Type": "application/json" }
      });
    };

    Claude Opus 4.5 is available across Background Functions, Scheduled Functions, and Edge Functions. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.
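
    Since Opus-class requests can run long, a Background Function is often the better fit for heavier jobs. As a rough sketch, giving the file a -background suffix (for example netlify/functions/analyze-background.ts, a hypothetical name) makes Netlify run it asynchronously while the original caller immediately receives a 202 response:

    // netlify/functions/analyze-background.ts
    import Anthropic from "@anthropic-ai/sdk";

    export default async (request: Request) => {
      const anthropic = new Anthropic();
      const { text } = await request.json();

      // Long-running work; the client already got a 202 and is no longer waiting.
      const response = await anthropic.messages.create({
        model: "claude-opus-4-5-20251101",
        max_tokens: 4096,
        messages: [{ role: "user", content: `Analyze this document:\n\n${text}` }]
      });

      console.log(response.content);
    };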

    Learn more in the AI Gateway documentation.

    You can also access the newest Claude Code capabilities via Agent Runners, including expanded tool use and support for long-running agent workflows. Learn more in the Agent Runners documentation.

    Permalink to Claude Opus 4.5 now live in AI Gateway, plus latest Claude Code via Agent Runners
  • Gemini 3 now available in AI Gateway and Agent Runners

    Google’s Gemini 3 Pro Preview model is now available through Netlify’s AI Gateway and Agent Runners with zero configuration required.

    Use the Google GenAI SDK directly in your Netlify Functions without managing API keys or authentication. The AI Gateway handles everything automatically. Here’s an example using the Gemini 3 Pro Preview model:

    import { GoogleGenAI } from "@google/genai";
    import type { Context } from "@netlify/functions";

    export default async (request: Request, context: Context) => {
      const ai = new GoogleGenAI({});

      const response = await ai.models.generateContent({
        model: "gemini-3-pro-preview",
        contents: "Explain why gemini 3 is better than other models"
      });

      return new Response(JSON.stringify({ answer: response.text }), {
        headers: { "Content-Type": "application/json" }
      });
    };

    Gemini 3 is available across Background Functions, Scheduled Functions, and Agent Runners. You get automatic access to Netlify’s caching, rate limiting, and authentication infrastructure.

    Learn more in the AI Gateway documentation and Agent Runners documentation.

    Permalink to Gemini 3 now available in AI Gateway and Agent Runners
  • GPT-5.1 model now available in AI Gateway

    OpenAI’s latest GPT-5.1 model (gpt-5.1) is now available through Netlify’s AI Gateway. This model brings enhanced performance and efficiency with no additional setup required.

    Use the OpenAI SDK directly in your Netlify Functions without managing API keys. The AI Gateway handles authentication, rate limiting, and caching automatically. Here’s an example using the GPT-5.1 model:

    import { OpenAI } from "openai";

    export default async () => {
      const openai = new OpenAI();

      const response = await openai.chat.completions.create({
        model: "gpt-5.1",
        messages: [
          {
            role: "user",
            content: "Compare GPT-5.1's improvements over GPT-5 Pro"
          }
        ]
      });

      return new Response(JSON.stringify(response), {
        headers: { "Content-Type": "application/json" }
      });
    };

    The GPT-5.1 model works seamlessly across Edge Functions, Background Functions, and Scheduled Functions. You also get access to Netlify’s advanced caching primitives and built-in rate limiting.
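
    Rate limiting can be attached to the same function through its config export. The sketch below shows the general idea only; the exact field names (windowLimit, windowSize, aggregateBy) are assumptions, so verify the shape against the rate limiting documentation before relying on it:

    export const config = {
      path: "/api/chat",
      rateLimit: {
        // Assumed fields: allow up to 60 requests per 60-second window per client IP.
        windowLimit: 60,
        windowSize: 60,
        aggregateBy: ["ip"],
        action: "rate_limit"
      }
    };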

    Learn more in the AI Gateway documentation.

    Permalink to GPT-5.1 model now available in AI Gateway