Silicon Analysts
Integration Guide · 10 min

Use Silicon Analysts with the Vercel AI SDK

Stream Silicon Analysts MCP tools — chip costs, wafer pricing, packaging economics, HBM market data — into Next.js, SvelteKit, or any TypeScript app via experimental_createMCPClient.

Prerequisites

  1. A Silicon Analysts API key from /developers. Set it as SA_API_KEY.
  2. An LLM provider key for your chosen model (Anthropic, OpenAI, Google, etc.).
  3. Node.js 18+ and the ai package (Vercel AI SDK ≥ 4.0). For Next.js, the App Router is recommended.
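Since both keys are read from the environment at request time, it helps to fail fast at startup if one is missing. A minimal sketch — the `requireEnv` helper is our own convention, not part of the SDK:

```typescript
// Fail fast if a required environment variable is missing.
// Variable names follow the prerequisites above; the guard itself is ours.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const saKey = requireEnv('SA_API_KEY');
// const providerKey = requireEnv('ANTHROPIC_API_KEY');
```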

Install

npm install ai @ai-sdk/anthropic @modelcontextprotocol/sdk

Swap @ai-sdk/anthropic for @ai-sdk/openai, @ai-sdk/google, etc., depending on which model you want to use.

Quickstart — Next.js Route Handler

Drop this into app/api/chat/route.ts to get a streaming endpoint that has Silicon Analysts MCP tools wired in. The model decides when to call them.

// app/api/chat/route.ts
import { anthropic } from '@ai-sdk/anthropic';
import {
  experimental_createMCPClient as createMCPClient,
  streamText,
} from 'ai';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const transport = new StreamableHTTPClientTransport(
    new URL('https://siliconanalysts.com/api/mcp'),
    {
      requestInit: {
        headers: {
          Authorization: `Bearer ${process.env.SA_API_KEY}`,
        },
      },
    },
  );

  const mcpClient = await createMCPClient({ transport });
  const tools = await mcpClient.tools();

  const result = streamText({
    model: anthropic('claude-sonnet-4-5'),
    system:
      'You are a semiconductor cost analyst. Use the silicon-analysts ' +
      'tools to ground your answers. Cite provenance.last_updated.',
    messages,
    tools,
    maxSteps: 5, // allow multi-step tool use (see Production Tips)
    onFinish: async () => {
      await mcpClient.close();
    },
  });

  return result.toDataStreamResponse();
}

Pair it with the standard useChat() hook on the client. The model calls the MCP tools when they help and streams a grounded answer.
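A minimal client page to pair with the route handler might look like this. It assumes the @ai-sdk/react package (on older SDK versions the hook lives in ai/react); useChat posts to /api/chat by default, which matches the route above.

```typescript
// app/page.tsx — client-side sketch, assuming @ai-sdk/react
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  // useChat targets /api/chat by default, matching the route handler above
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <p key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </p>
      ))}
      <input
        value={input}
        onChange={handleInputChange}
        placeholder="Ask about chip costs…"
      />
    </form>
  );
}
```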

One-shot Generation (No Streaming)

For background jobs or scripts where you don’t need streaming, use generateText.

// scripts/cost-summary.ts
import { anthropic } from '@ai-sdk/anthropic';
import {
  experimental_createMCPClient as createMCPClient,
  generateText,
} from 'ai';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

const transport = new StreamableHTTPClientTransport(
  new URL('https://siliconanalysts.com/api/mcp'),
  {
    requestInit: {
      headers: { Authorization: `Bearer ${process.env.SA_API_KEY}` },
    },
  },
);

const mcpClient = await createMCPClient({ transport });

try {
  const tools = await mcpClient.tools();
  const { text } = await generateText({
    model: anthropic('claude-sonnet-4-5'),
    tools,
    maxSteps: 5,
    prompt:
      'Summarize the manufacturing cost breakdown of NVIDIA B200 ' +
      'and call out any data points marked confidence_tier=low.',
  });
  console.log(text);
} finally {
  await mcpClient.close();
}

Set maxSteps high enough to allow multi-tool plans (3–5 is typical for analysis questions). Without it, the SDK stops after the first tool call.

Production Tips

1. Close the client (or reuse it)

In a route handler, call await mcpClient.close() in onFinish (streaming) or a finally block (one-shot). In a long-running service, build the client once at startup.
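The reuse pattern can be sketched as a lazy, module-level singleton: the factory runs once, and concurrent requests share the same in-flight promise. The `lazy` helper below is our own name, not SDK API; wire it to createMCPClient as in the quickstart.

```typescript
// Generic lazy async singleton: the factory runs at most once, and
// concurrent callers share the same in-flight promise.
// (`lazy` is our own helper name, not part of the Vercel AI SDK.)
function lazy<T>(factory: () => Promise<T>): () => Promise<T> {
  let instance: Promise<T> | undefined;
  return () => (instance ??= factory());
}

// Usage sketch for a long-running service (never closed per request):
// const getMcpClient = lazy(() => createMCPClient({ transport }));
// const tools = await (await getMcpClient()).tools();

// Demonstration with a counting factory: the factory runs exactly once.
let calls = 0;
const getValue = lazy(async () => {
  calls += 1;
  return 'shared-client';
});

async function demo() {
  const [a, b] = await Promise.all([getValue(), getValue()]);
  console.log(calls, a === b); // 1 true
}
demo();
```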

2. Cache stable responses with unstable_cache

Wafer prices and packaging benchmarks change monthly. Wrap your generation in Next.js unstable_cache with a 1-hour TTL keyed on the user prompt for FAQ-style queries. The free tier’s 100 req/24h budget evaporates quickly without caching.

3. Surface provenance to your UI

Tool call results are visible in the SDK’s toolCalls and toolResults arrays. Render the provenance block as a footnote so users see which numbers are research vs. derived vs. estimated.
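A sketch of turning tool results into footnotes. The provenance field names (last_updated, source_type, confidence_tier, dataset_version) follow the documented schema, but the exact shape of each toolResults entry varies by SDK version, so treat the result parsing — and the `provenanceFootnotes` helper name — as assumptions:

```typescript
// Provenance fields per the /data-quality schema; the example value unions
// in the comments are illustrative, not exhaustive.
interface Provenance {
  last_updated: string;
  source_type: string; // e.g. 'research' | 'derived' | 'estimated'
  confidence_tier: string; // e.g. 'low'
  dataset_version: string;
}

// The shape of each result entry is an assumption about your SDK version.
interface ToolResultEntry {
  toolName: string;
  result: { provenance?: Provenance };
}

// Collect one footnote line per tool call that carried provenance.
function provenanceFootnotes(results: ToolResultEntry[]): string[] {
  return results
    .filter((r) => r.result.provenance !== undefined)
    .map((r) => {
      const p = r.result.provenance!;
      return `${r.toolName}: ${p.source_type} (${p.confidence_tier}), updated ${p.last_updated}`;
    });
}
```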

4. Set a reasonable maxSteps

Without it, the SDK runs one tool call and stops. Set maxSteps: 5 for analysis questions that may need multiple lookups (wafer pricing → cost calculation → packaging cost → summary).

5. Deploy to Vercel

Add SA_API_KEY and your model provider key (e.g. ANTHROPIC_API_KEY) to Vercel project settings. The streamable HTTP transport is fully serverless-compatible — no special runtime needed.

Frequently Asked Questions

How do I add an MCP server to a Vercel AI SDK app?

Import experimental_createMCPClient from the ai package, instantiate it with a StreamableHTTPClientTransport pointing at https://siliconanalysts.com/api/mcp with your Silicon Analysts API key in an Authorization: Bearer header, then call client.tools() and pass the result to streamText or generateText.

Does this work in a Next.js route handler?

Yes — it’s the most common deployment shape. Create the MCP client in a route handler at app/api/chat/route.ts, fetch the tool list, pass the tools to streamText, and return result.toDataStreamResponse().

Do I need to close the MCP client?

For one-shot calls, await client.close() after the response finishes. For long-running services, you can reuse the client across requests; the streamable HTTP transport is connectionless and safe to share.

Which models work with Vercel AI SDK + MCP?

Any Vercel AI SDK model that supports tool calling: OpenAI GPT-4 family, Anthropic Claude, Google Gemini, Mistral, and most open models with structured tool calling. The MCP layer is provider-agnostic.

How do I deploy this to Vercel?

Set SA_API_KEY and your model provider key (e.g. ANTHROPIC_API_KEY) in Vercel project settings, then deploy normally. The streamable HTTP transport is fully serverless-compatible.

Does the response include data provenance?

Yes. Every Silicon Analysts tool returns a provenance block with last_updated, source_type, confidence_tier, and dataset_version. See /data-quality for the canonical schema.
