Use Silicon Analysts with LangChain
Wire Silicon Analysts MCP tools — chip costs, wafer pricing, packaging economics, HBM market data — into any LangChain agent or LangGraph workflow using langchain-mcp-adapters.
Prerequisites
- A Silicon Analysts API key from /developers. Set it as `SA_API_KEY`.
- An LLM provider key for whichever model you use with LangChain (OpenAI, Anthropic, Google, etc.).
- Python 3.10+ with `langchain`, `langchain-mcp-adapters`, and a chat-model integration like `langchain-anthropic`.
Install
```shell
pip install langchain langgraph langchain-mcp-adapters langchain-anthropic
```
Quickstart — Tools-only
Load Silicon Analysts MCP tools as LangChain tools. You can pass them to any agent, chain, or graph that accepts BaseTool instances.
```python
# load_silicon_analysts_tools.py
import asyncio
import os

from langchain_mcp_adapters.client import MultiServerMCPClient

async def main():
    client = MultiServerMCPClient({
        "silicon-analysts": {
            "transport": "streamable_http",
            "url": "https://siliconanalysts.com/api/mcp",
            "headers": {
                "Authorization": f"Bearer {os.environ['SA_API_KEY']}",
            },
        }
    })
    tools = await client.get_tools()
    for t in tools:
        print(t.name, "—", t.description[:80])

asyncio.run(main())
```

You should see all six tools printed: get_accelerator_costs, calculate_chip_cost, get_hbm_market_data, get_market_pulse, get_wafer_pricing, get_packaging_costs.
Realistic Use Case: ReAct Agent for Chip Cost Analysis
The same tools, plugged into a LangGraph ReAct agent. The agent decides which Silicon Analysts tool to call given a natural-language question about semiconductors.
```python
# react_agent.py
import asyncio
import os

from langchain_anthropic import ChatAnthropic
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

SYSTEM_PROMPT = (
    "You are a semiconductor cost analyst. When asked about chip economics, "
    "wafer prices, packaging, or HBM, use the silicon-analysts tools to "
    "ground your answers in current data. Always cite the "
    "provenance.last_updated field returned by each tool."
)

async def main():
    client = MultiServerMCPClient({
        "silicon-analysts": {
            "transport": "streamable_http",
            "url": "https://siliconanalysts.com/api/mcp",
            "headers": {
                "Authorization": f"Bearer {os.environ['SA_API_KEY']}",
            },
        }
    })
    tools = await client.get_tools()
    model = ChatAnthropic(model="claude-sonnet-4-5")
    agent = create_react_agent(
        model=model,
        tools=tools,
        prompt=SYSTEM_PROMPT,
    )
    result = await agent.ainvoke({
        "messages": [{
            "role": "user",
            "content": (
                "Compare the manufacturing cost breakdowns of NVIDIA H100 "
                "and B200. Which has higher gross margins? Cite freshness."
            ),
        }]
    })
    print(result["messages"][-1].content)

asyncio.run(main())
```

The agent calls get_accelerator_costs twice (once per chip), reads the cost-breakdown and gross-margin fields, and writes a comparison citing the provenance.last_updated date returned by the tool.

Combine with Your Own Tools
The MCP tools are standard LangChain BaseTool instances. Mix them with your own retrieval, search, or domain logic.
```python
from langchain_core.tools import tool

@tool
def lookup_internal_bom(part_number: str) -> str:
    """Look up our internal BOM database by part number."""
    return query_my_db(part_number)

mcp_tools = await client.get_tools()
all_tools = mcp_tools + [lookup_internal_bom]
agent = create_react_agent(model=model, tools=all_tools, prompt=SYSTEM_PROMPT)
```

Production Tips
1. Reuse MultiServerMCPClient across requests
Streamable HTTP is connectionless on the wire, but the adapter still caches schemas internally. In a long-running service, build the client once at startup and call get_tools() once. In a serverless function, accept the cold-start cost or warm the function with a scheduled ping.
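One way to implement the build-once pattern is a module-level, lazily initialized tool cache guarded by a lock, so concurrent requests don't each construct a client. This is a minimal sketch: `fetch` stands in for your `client.get_tools`, and the demo uses a stub fetcher rather than a real MCP connection.

```python
import asyncio

_tools = None                  # module-level cache, built once per process
_lock = asyncio.Lock()

async def get_tools_once(fetch):
    """Fetch tools once and reuse them; `fetch` stands in for client.get_tools."""
    global _tools
    if _tools is None:
        async with _lock:
            if _tools is None:          # re-check after acquiring the lock
                _tools = await fetch()
    return _tools

# Demo with a stub fetcher; in production pass your client's get_tools.
async def demo():
    calls = {"n": 0}
    async def fake_get_tools():
        calls["n"] += 1
        return ["get_wafer_pricing", "get_packaging_costs"]
    first = await get_tools_once(fake_get_tools)
    second = await get_tools_once(fake_get_tools)
    print(first is second, calls["n"])   # True 1 — fetched exactly once

asyncio.run(demo())
```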
2. Cache stable responses
Wafer prices, accelerator BOMs, and packaging benchmarks change on a monthly-to-quarterly cadence. Use LangChain's SQLiteCache (via set_llm_cache) or your own Redis layer to cache responses for at least 1 hour. The free tier (100 requests per 24 hours) leaves headroom only with caching.
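If you'd rather not stand up Redis, a small in-process TTL cache covers the same ground. This is a sketch under stated assumptions: `ttl_cached` and `wafer_price` are hypothetical names, and the helper body is a placeholder for a real call to the get_wafer_pricing tool.

```python
import time
from functools import wraps

def ttl_cached(ttl_seconds=3600):
    """Cache a function's results in-process for ttl_seconds (1 hour default)."""
    def decorator(fn):
        store = {}                        # args -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]             # fresh cache hit, no tool call
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

# Hypothetical helper: imagine the body calls the get_wafer_pricing MCP tool.
@ttl_cached(ttl_seconds=3600)
def wafer_price(node: str) -> dict:
    wafer_price.calls += 1                # count underlying tool calls
    return {"node": node, "price": "from-tool"}

wafer_price.calls = 0
wafer_price("N3")
wafer_price("N3")                          # second call served from cache
print(wafer_price.calls)                   # 1
```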
3. Surface provenance to the user
Add “Cite the provenance.last_updated field for any number you report” to your system prompt. It prevents the model from quoting yesterday’s price as if it were live and bakes auditability into every answer.
4. Handle errors with with_fallbacks
Wrap the MCP-backed model in a fallback chain. If the MCP server returns a JSON-RPC error code -32004 (rate-limit exceeded) or -32001 (auth failed), fall back to a tool-less model that politely degrades.
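The routing decision can be sketched in plain Python before wiring it into with_fallbacks or a retry loop. `classify_mcp_error` is a hypothetical helper; the error codes are the ones named above.

```python
def classify_mcp_error(response: dict) -> str:
    """Map Silicon Analysts JSON-RPC error codes to a fallback strategy.

    Codes per the tip above: -32004 = rate limit exceeded, -32001 = auth failed.
    """
    error = response.get("error")
    if error is None:
        return "ok"
    code = error.get("code")
    if code == -32004:
        return "retry_later"      # back off, or route to a tool-less fallback model
    if code == -32001:
        return "fix_credentials"  # surface a config error; retrying won't help
    return "fallback"             # unknown error: degrade to a tool-less answer

print(classify_mcp_error({"error": {"code": -32004, "message": "rate limit"}}))
# → retry_later
```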
5. Use LangSmith for tool-call traces
Set LANGCHAIN_TRACING_V2=true and you’ll see every Silicon Analysts MCP request, its arguments, and its response in the LangSmith UI — invaluable for debugging which tool the model picked and why.
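Assuming the standard LangSmith environment variables (an API key is required for traces to upload; the project name is optional and only groups traces):

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="lsv2_..."               # your LangSmith API key
export LANGCHAIN_PROJECT="silicon-analysts-agent" # optional: group traces

# python react_agent.py   # tool calls now appear in the LangSmith UI
```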
Frequently Asked Questions
How do I add an MCP server to a LangChain agent?
Install langchain-mcp-adapters, instantiate MultiServerMCPClient with the Silicon Analysts MCP URL and your API key in an Authorization: Bearer header, then await client.get_tools() to receive a list of LangChain tools you can pass to any agent or LangGraph workflow.
Does this work with LangGraph?
Yes. The tools returned by langchain-mcp-adapters are standard LangChain BaseTool instances, so they work with create_react_agent, LangGraph StateGraphs, and any other tool-calling pattern.
Which transport does Silicon Analysts MCP support?
Streamable HTTP (the modern MCP transport). Configure langchain-mcp-adapters with transport="streamable_http" and the URL https://siliconanalysts.com/api/mcp.
Can I combine Silicon Analysts MCP with other LangChain tools?
Yes. Just concatenate the lists. Use Silicon Analysts MCP for semiconductor data and add your own retrieval, web search, or domain-specific tools to the same agent.
How should I handle rate limits in a LangChain agent?
Wrap the MCP client in a retry policy or use LangChain’s with_fallbacks. Free tier allows 100 requests per 24 hours per API key; Pro tier 10,000 per hour. Cache stable responses (wafer pricing, accelerator BOMs) for at least 1 hour.
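A retry policy with exponential backoff can be sketched in a few lines; `call_with_retry` is a hypothetical helper, and the rate-limit signal is modeled as a RuntimeError rather than a real MCP response.

```python
import time

def call_with_retry(fn, max_attempts=4, base_delay=1.0, sleep=time.sleep):
    """Retry fn with exponential backoff when it signals a rate limit.

    fn raises RuntimeError("rate_limited") to signal a -32004 response;
    sleep is injectable so tests and demos don't actually wait.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError as exc:
            if str(exc) != "rate_limited" or attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))   # 1s, 2s, 4s, ...

# Demo: fail twice with a rate limit, then succeed.
attempts = {"n": 0}
delays = []
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate_limited")
    return "ok"

print(call_with_retry(flaky, sleep=delays.append), delays)  # ok [1.0, 2.0]
```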
Does the response include data provenance?
Yes. Every record includes a provenance block with last_updated, source_type, confidence_tier, and dataset_version. See /data-quality for the canonical schema.
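Reading the provenance block is straightforward; this sketch uses the four field names listed above, but the sample values are illustrative only — real records come from the MCP tools.

```python
def format_citation(record: dict) -> str:
    """Render a freshness citation from a record's provenance block."""
    p = record["provenance"]
    return (f"source={p['source_type']}, tier={p['confidence_tier']}, "
            f"dataset={p['dataset_version']}, last_updated={p['last_updated']}")

sample = {  # illustrative values only — real records come from the MCP tools
    "provenance": {
        "last_updated": "2025-01-15",
        "source_type": "analyst_estimate",
        "confidence_tier": "B",
        "dataset_version": "v42",
    }
}
print(format_citation(sample))
```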