Building an MCP Server for Salesforce: Sales Analytics for Claude
Learn to build a custom MCP server exposing Salesforce data to Claude. Step-by-step guide for AI-powered sales analytics and text-to-SQL queries.
What Is an MCP Server and Why It Matters for Sales Analytics
The Model Context Protocol (MCP) is a standardized interface that lets AI models like Claude access external systems, databases, and APIs in a structured, secure way. Think of it as a bridge between Claude’s reasoning capabilities and your live Salesforce data—without exposing raw credentials or building custom API wrappers for every integration.
For sales and analytics teams, this opens a concrete opportunity: instead of exporting CSV reports or waiting for a dashboard refresh, you can ask Claude direct questions about pipeline, forecast accuracy, or customer churn, and get answers grounded in real-time Salesforce data. An MCP server acts as the middleman, translating Claude’s natural language requests into Salesforce queries and returning structured results.
Why build this instead of using Salesforce’s native reporting tools? Speed. Flexibility. Integration with your existing AI workflows. If your engineering team already uses Claude for code generation, documentation, or decision support, adding Salesforce data access through an MCP server keeps that context in one place—no context switching, no manual data entry, no stale exports.
MCP is particularly valuable for teams already investing in managed Apache Superset for embedded analytics and self-serve BI. When you combine Superset’s visual dashboards with Claude’s conversational analytics via MCP, you get both the structured exploration (dashboards) and the ad-hoc intelligence (Claude) that data-driven organizations need.
Understanding the MCP Architecture
Before writing code, you need to understand how MCP works at a high level. The protocol defines three main components:
The Client — Claude (or any LLM using the MCP standard) that initiates requests for data or actions.
The Server — Your custom application that listens for MCP requests, executes them, and returns results. In this case, the server will handle Salesforce authentication, query construction, and data transformation.
The Transport Layer — The communication mechanism between client and server. For Claude integrations, this is typically stdio (standard input/output) for locally run servers or streamable HTTP for remote ones.
When you ask Claude a question like “What’s our pipeline value by stage this quarter?”, here’s what happens:
- Claude parses your question and identifies that it needs Salesforce data
- Claude sends an MCP request to your server with the query intent
- Your server receives the request, authenticates with Salesforce, and constructs a query (often using SOQL, Salesforce’s query language)
- The server executes the query against Salesforce’s API
- Results are formatted and returned to Claude
- Claude processes the data and generates a natural language response
This is fundamentally different from traditional BI tools. You’re not building a static dashboard—you’re building a dynamic, conversational interface to your data.
For detailed technical specifications, the Model Context Protocol Documentation provides comprehensive reference material on server implementation, resource definitions, and tool specifications.
Setting Up Your Development Environment
You’ll need a few things before you start coding:
Node.js (v18 or higher) — MCP servers are typically built in Node.js or TypeScript, though Python and other languages are supported. Install from the official Node.js website.
A Salesforce Developer Account — Free tier available at developer.salesforce.com. You’ll need API access enabled and a connected app created.
Claude API Access — Sign up for Anthropic’s API at console.anthropic.com. You’ll use this to test your MCP server.
Git and a Code Editor — Clone the MCP server template and use VS Code or your preferred editor.
Start by creating a new Node.js project:
mkdir mcp-salesforce-server
cd mcp-salesforce-server
npm init -y
npm install @anthropic-ai/sdk jsforce dotenv
The jsforce library is critical—it’s a mature Salesforce JavaScript client that handles OAuth flows, SOQL queries, and the REST and Bulk APIs. The dotenv package loads credentials from a local .env file so they stay out of your source code.
Next, create a .env file for your Salesforce credentials:
SALESFORCE_CLIENT_ID=your_connected_app_client_id
SALESFORCE_CLIENT_SECRET=your_connected_app_secret
SALESFORCE_USERNAME=your_salesforce_username
SALESFORCE_PASSWORD=your_salesforce_password
SALESFORCE_SECURITY_TOKEN=your_security_token
Don’t commit .env to version control. Add it to .gitignore.
For a production setup, consider using Salesforce’s OAuth 2.0 flow instead of username/password authentication. The Salesforce MCP Integration: Step-by-Step Setup for 2026 guide walks through environment variable configuration and secure credential handling in detail.
Creating Your First MCP Server
Now let’s build the core server. Create a file called server.js:
const Anthropic = require('@anthropic-ai/sdk');
const jsforce = require('jsforce');
require('dotenv').config();
const conn = new jsforce.Connection({
  oauth2: {
    clientId: process.env.SALESFORCE_CLIENT_ID,
    clientSecret: process.env.SALESFORCE_CLIENT_SECRET,
    redirectUri: 'http://localhost:3000/oauth/callback',
  },
});
// Authenticate with Salesforce (username + password + security token)
conn.login(
  process.env.SALESFORCE_USERNAME,
  process.env.SALESFORCE_PASSWORD + process.env.SALESFORCE_SECURITY_TOKEN,
  (err, userInfo) => {
    if (err) {
      console.error('Salesforce login failed:', err);
      process.exit(1);
    }
    console.log('Connected to Salesforce as:', userInfo.id);
  }
);
// Define MCP tools for sales analytics
const tools = [
  {
    name: 'query_opportunities',
    description: 'Query Salesforce opportunities by stage, amount, close date, or account name',
    input_schema: {
      type: 'object',
      properties: {
        stage: { type: 'string', description: 'Opportunity stage (e.g., "Prospecting", "Negotiation", "Closed Won")' },
        min_amount: { type: 'number', description: 'Minimum opportunity amount in USD' },
        max_amount: { type: 'number', description: 'Maximum opportunity amount in USD' },
        account_name: { type: 'string', description: 'Filter by account name (partial match)' },
      },
      required: [],
    },
  },
  {
    name: 'get_pipeline_summary',
    description: 'Get total pipeline value grouped by stage',
    input_schema: { type: 'object', properties: {} },
  },
  {
    name: 'query_accounts',
    description: 'Query Salesforce accounts by industry, revenue, or employee count',
    input_schema: {
      type: 'object',
      properties: {
        industry: { type: 'string' },
        min_employees: { type: 'number' },
        max_employees: { type: 'number' },
      },
    },
  },
];
// Escape single quotes and backslashes in user-supplied strings to
// prevent SOQL injection
const escapeSoql = (value) => String(value).replace(/\\/g, '\\\\').replace(/'/g, "\\'");

// Tool execution handlers
const executeQuery = async (toolName, toolInput) => {
  try {
    if (toolName === 'query_opportunities') {
      let soql = 'SELECT Id, Name, Amount, StageName, CloseDate, Account.Name FROM Opportunity';
      const conditions = [];
      if (toolInput.stage) conditions.push(`StageName = '${escapeSoql(toolInput.stage)}'`);
      if (toolInput.min_amount) conditions.push(`Amount >= ${Number(toolInput.min_amount)}`);
      if (toolInput.max_amount) conditions.push(`Amount <= ${Number(toolInput.max_amount)}`);
      if (toolInput.account_name) conditions.push(`Account.Name LIKE '%${escapeSoql(toolInput.account_name)}%'`);
      if (conditions.length > 0) soql += ' WHERE ' + conditions.join(' AND ');
      soql += ' ORDER BY Amount DESC LIMIT 100';
      const records = await conn.query(soql);
      return { success: true, records: records.records };
    }
    if (toolName === 'get_pipeline_summary') {
      // SOQL does not allow bare COUNT() alongside other selected fields; use COUNT(Id)
      const soql = 'SELECT StageName, SUM(Amount) total_amount, COUNT(Id) total_count FROM Opportunity GROUP BY StageName';
      const records = await conn.query(soql);
      return { success: true, pipeline: records.records };
    }
    if (toolName === 'query_accounts') {
      let soql = 'SELECT Id, Name, Industry, AnnualRevenue, NumberOfEmployees FROM Account';
      const conditions = [];
      if (toolInput.industry) conditions.push(`Industry = '${escapeSoql(toolInput.industry)}'`);
      if (toolInput.min_employees) conditions.push(`NumberOfEmployees >= ${Number(toolInput.min_employees)}`);
      if (toolInput.max_employees) conditions.push(`NumberOfEmployees <= ${Number(toolInput.max_employees)}`);
      if (conditions.length > 0) soql += ' WHERE ' + conditions.join(' AND ');
      soql += ' LIMIT 100';
      const records = await conn.query(soql);
      return { success: true, records: records.records };
    }
    return { success: false, error: 'Unknown tool' };
  } catch (error) {
    return { success: false, error: error.message };
  }
};
// Main Claude interaction loop
const runClaudeWithTools = async (userMessage) => {
  const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
  const messages = [{ role: 'user', content: userMessage }];
  console.log('\nUser:', userMessage);
  // Agentic loop
  while (true) {
    const response = await client.messages.create({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1024,
      tools: tools,
      messages: messages,
    });
    // Add assistant response to messages
    messages.push({ role: 'assistant', content: response.content });
    // If Claude didn't request a tool (end_turn, max_tokens, etc.),
    // print the final text and stop — this also avoids looping forever
    // when the response was truncated
    if (response.stop_reason !== 'tool_use') {
      const textBlock = response.content.find((block) => block.type === 'text');
      if (textBlock) console.log('\nAssistant:', textBlock.text);
      break;
    }
    // Process tool calls
    const toolUseBlocks = response.content.filter((block) => block.type === 'tool_use');
    for (const toolUse of toolUseBlocks) {
      console.log(`\nCalling tool: ${toolUse.name}`);
      console.log('Input:', JSON.stringify(toolUse.input, null, 2));
      const result = await executeQuery(toolUse.name, toolUse.input);
      console.log('Result:', JSON.stringify(result, null, 2));
      // Add tool result to messages
      messages.push({
        role: 'user',
        content: [
          {
            type: 'tool_result',
            tool_use_id: toolUse.id,
            content: JSON.stringify(result),
          },
        ],
      });
    }
  }
};
// Test the server
runClaudeWithTools('What is our total pipeline value by stage this quarter?').catch(console.error);
This server defines three MCP tools:
- query_opportunities — Filters opportunities by stage, amount range, or account name
- get_pipeline_summary — Returns aggregated pipeline value grouped by stage
- query_accounts — Queries accounts by industry or employee count
When Claude receives a user question, it examines the available tools, decides which ones to use, and sends structured requests to your server. The server executes SOQL queries against Salesforce and returns results. Claude then processes those results and generates a human-readable response.
Run the server:
node server.js
You should see output like:
Connected to Salesforce as: 0051h000005VFZAA2
User: What is our total pipeline value by stage this quarter?
Calling tool: get_pipeline_summary
Input: {}
Result: { success: true, pipeline: [...] }
Assistant: Based on your Salesforce data, here's your pipeline summary...
Adding Text-to-SQL and Natural Language Queries
The basic MCP server works, but it’s limited to predefined tools. For true flexibility, you want Claude to generate SOQL queries from natural language—text-to-SQL for Salesforce.
Add a new tool that lets Claude write custom SOQL:
const tools = [
  // ... existing tools ...
  {
    name: 'execute_soql',
    description: 'Execute a custom SOQL query against Salesforce. Use for complex queries that don\'t fit predefined tools.',
    input_schema: {
      type: 'object',
      properties: {
        soql: {
          type: 'string',
          description: 'A valid SOQL query string (e.g., "SELECT Id, Name, Amount FROM Opportunity WHERE StageName = \'Closed Won\'")',
        },
      },
      required: ['soql'],
    },
  },
];
// In executeQuery function, add:
if (toolName === 'execute_soql') {
  // Basic guard: only read-only SELECT statements are allowed
  if (!toolInput.soql.trim().toUpperCase().startsWith('SELECT')) {
    return { success: false, error: 'Only SELECT queries are allowed' };
  }
  const records = await conn.query(toolInput.soql);
  return { success: true, records: records.records, query: toolInput.soql };
}
Now Claude can handle questions like “Show me all opportunities with a close date in the next 30 days and an amount greater than $50k” by generating the appropriate SOQL query itself.
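For that question, the query Claude writes might look something like the sketch below — the exact SOQL will vary from run to run, and NEXT_N_DAYS:30 is a built-in SOQL date literal:

```javascript
// One plausible generated query for "close date in the next 30 days,
// amount greater than $50k"; illustrative, not the only valid form
const generatedSoql = `
  SELECT Id, Name, Amount, CloseDate, Account.Name
  FROM Opportunity
  WHERE CloseDate = NEXT_N_DAYS:30 AND Amount > 50000
  ORDER BY CloseDate ASC
  LIMIT 100
`.trim();
```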
However, this introduces security risk. Claude might generate queries that are slow, inefficient, or that expose sensitive data. Mitigate this by:
- Validating queries — Check that they only SELECT and don’t modify data
- Adding query limits — Enforce LIMIT clauses to prevent runaway queries
- Logging all queries — Track what Claude queries for audit purposes
- Using field-level security — Ensure the Salesforce user running queries only has access to appropriate fields
For production deployments, consider wrapping the execute_soql tool in additional validation:
const validateSOQL = (soql) => {
  const dangerous = ['DELETE', 'UPDATE', 'INSERT', 'UNDELETE'];
  const upper = soql.toUpperCase();
  if (dangerous.some((keyword) => upper.includes(keyword))) {
    return { valid: false, error: 'Write operations are not allowed' };
  }
  if (!upper.includes('LIMIT')) {
    return { valid: false, error: 'All queries must include a LIMIT clause' };
  }
  return { valid: true };
};
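Rejecting LIMIT-less queries forces an extra round trip through Claude. An alternative is to append a cap yourself — a minimal sketch, where the regex and the 200-row default are assumptions to tune for your org:

```javascript
// Append a LIMIT clause when the query lacks one, instead of rejecting it.
// The 200-row default cap is an assumption; adjust for your use case.
const enforceLimit = (soql, maxRows = 200) => {
  const trimmed = soql.trim();
  // Only treat a trailing "LIMIT <n>" as an existing cap
  return /\bLIMIT\s+\d+\s*$/i.test(trimmed) ? trimmed : `${trimmed} LIMIT ${maxRows}`;
};
```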
Integrating with Claude and Testing
Once your server is running, test it with different questions to see how Claude uses the tools:
const testQueries = [
  'What is our total pipeline value by stage this quarter?',
  'Show me all opportunities in the Negotiation stage with amounts over $100k',
  'Which accounts in the Technology industry have the most open opportunities?',
  'What is the average deal size by stage?',
];

// Run each test query (wrapped in an async function, since top-level
// await is not available in CommonJS modules)
(async () => {
  for (const query of testQueries) {
    await runClaudeWithTools(query);
    console.log('\n---\n');
  }
})();
Observe how Claude:
- Interprets intent — Understands that “this quarter” means filtering by close date
- Selects tools — Chooses between predefined tools and custom SOQL based on complexity
- Iterates — May call multiple tools to answer a single question
- Synthesizes — Combines results from multiple queries into a coherent response
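Much of the “this quarter” interpretation is cheap to support on the server side because SOQL has built-in relative date literals (THIS_QUARTER, LAST_QUARTER, NEXT_N_DAYS:n). A hedged sketch of a phrase-to-literal mapper you could surface in tool descriptions — the phrase list is illustrative, not exhaustive:

```javascript
// Map common relative-time phrases to SOQL date literals.
// THIS_QUARTER, LAST_QUARTER, THIS_YEAR, and NEXT_N_DAYS:n are real SOQL
// literals; the phrase list itself is an illustrative assumption.
const dateLiteralFor = (phrase) => {
  const map = {
    'this quarter': 'THIS_QUARTER',
    'last quarter': 'LAST_QUARTER',
    'this year': 'THIS_YEAR',
  };
  const normalized = phrase.toLowerCase().trim();
  const nDays = normalized.match(/^next (\d+) days$/);
  if (nDays) return `NEXT_N_DAYS:${nDays[1]}`;
  return map[normalized] || null;
};
```

A filter like `CloseDate = ${dateLiteralFor('this quarter')}` then slots directly into a WHERE clause.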
For teams already using D23’s managed Apache Superset, this MCP server complements your analytics stack. Superset handles scheduled dashboards and team-wide exploration; Claude via MCP handles ad-hoc questions and integration with AI workflows.
Deploying Your MCP Server for Production
Local testing is one thing; production deployment requires additional considerations.
Option 1: Stdio Transport (Recommended for Claude Desktop)
If you’re using Claude Desktop or a local Claude client, stdio (standard input/output) is the simplest transport. Your server listens on stdin and writes responses to stdout.
Create a claude_desktop_config.json file in your user’s config directory:
{
  "mcpServers": {
    "salesforce": {
      "command": "node",
      "args": ["/path/to/server.js"],
      "env": {
        "SALESFORCE_CLIENT_ID": "your_client_id",
        "SALESFORCE_CLIENT_SECRET": "your_client_secret",
        "SALESFORCE_USERNAME": "your_username",
        "SALESFORCE_PASSWORD": "your_password",
        "SALESFORCE_SECURITY_TOKEN": "your_token"
      }
    }
  }
}
Option 2: HTTP/WebSocket Transport (For Web Apps)
If you’re building a web application that needs to call Claude with Salesforce data, use HTTP transport:
const express = require('express');
const app = express();
app.use(express.json());

app.post('/mcp/tool', async (req, res) => {
  const { toolName, toolInput } = req.body;
  const result = await executeQuery(toolName, toolInput);
  res.json(result);
});

app.listen(3000, () => {
  console.log('MCP server listening on port 3000');
});
Then configure your Claude client to call http://localhost:3000/mcp/tool.
For teams deploying at scale, the Salesforce Hosted MCP Servers (Beta) documentation outlines how to host MCP servers within Salesforce’s infrastructure, reducing operational overhead.
Option 3: Docker Containerization
For cloud deployment (AWS, GCP, Azure), containerize your server:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY server.js .
EXPOSE 3000
CMD ["node", "server.js"]
Build and push to your container registry:
docker build -t mcp-salesforce-server:latest .
docker push your-registry/mcp-salesforce-server:latest
Deploy to Kubernetes or managed container services with environment variables injected securely via secrets.
Handling Authentication and Security
Your MCP server will have direct access to Salesforce data. Security is non-negotiable.
Use OAuth 2.0, Not Username/Password
The example above uses username/password for simplicity, but production deployments should use OAuth 2.0:
const conn = new jsforce.Connection({
  oauth2: {
    clientId: process.env.SALESFORCE_CLIENT_ID,
    clientSecret: process.env.SALESFORCE_CLIENT_SECRET,
    redirectUri: 'https://your-domain.com/oauth/callback',
  },
  // Supply the instance URL and refresh token obtained during the initial
  // OAuth authorization; jsforce then refreshes the access token
  // automatically when it expires
  instanceUrl: process.env.SALESFORCE_INSTANCE_URL,
  refreshToken: process.env.SALESFORCE_REFRESH_TOKEN,
});
conn.on('refresh', (accessToken) => {
  // Persist the new access token here if you cache it elsewhere
  console.log('Access token refreshed');
});
Implement Rate Limiting
Salesforce API has rate limits (typically 15,000 API calls per 24 hours for most orgs). Prevent runaway queries:
const rateLimit = require('express-rate-limit');

const limiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 10, // 10 requests per minute
  message: 'Too many API calls, please try again later',
});

app.post('/mcp/tool', limiter, async (req, res) => {
  // ... handle request ...
});
Audit and Monitor
Log all queries and results for compliance:
const winston = require('winston');
const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
});

logger.info('SOQL Query Executed', {
  query: soql,
  timestamp: new Date(),
  userId: process.env.SALESFORCE_USERNAME,
  resultCount: records.length,
});
For additional context on Salesforce MCP security patterns, the Salesforce MCP: Connecting AI Agents to Enterprise Data article discusses secure data access and enterprise integration considerations.
Advanced: Combining MCP with Superset Dashboards
While your MCP server gives Claude direct query access, you probably also want static dashboards for your team. This is where D23’s managed Apache Superset becomes valuable.
You can use the same Salesforce data connection in both:
- In Superset — Create dashboards with pipeline by stage, forecast accuracy, and customer health metrics. Share with the broader team.
- In Claude via MCP — Answer ad-hoc questions that don’t fit predefined dashboards. Use for forecasting, anomaly detection, or integration with other AI workflows.
Both tools query the same Salesforce data but serve different purposes. Superset is for structured, repeatable analytics; Claude is for exploratory, conversational intelligence.
For teams evaluating managed BI platforms, D23 offers embedded analytics capabilities that let you embed Superset dashboards directly into your product, alongside Claude-powered conversational analytics via MCP.
Real-World Example: Quarterly Business Review
Let’s walk through a realistic scenario. Your VP of Sales needs a QBR deck in 2 hours. Normally, this means:
- Exporting data from Salesforce
- Building pivot tables in Excel
- Waiting for the analytics team to create charts
- Manually updating slides
With Claude + MCP, you can ask:
“Generate a QBR summary including: (1) total pipeline value by stage, (2) win rate by sales rep, (3) average deal cycle time, (4) top 10 accounts by opportunity count, (5) forecast vs. actual for the last three quarters.”
Claude would:
- Call get_pipeline_summary for pipeline by stage
- Call execute_soql to calculate win rates by sales rep
- Call execute_soql to calculate average deal cycle time
- Call query_opportunities filtered by top accounts
- Call execute_soql for historical forecast data
Then synthesize all results into a narrative summary:
Your Q4 pipeline stands at $8.2M across 42 opportunities.
The Negotiation stage represents 35% of total value,
with an average deal size of $195k. Win rate is 32% overall,
with top performers (Sarah Chen, Marcus Rodriguez) achieving 48-52%.
Average deal cycle is 87 days, up from 72 days in Q3—
likely due to increased deal complexity and longer procurement cycles.
Top accounts by opportunity count:
1. Acme Corp (12 opps, $2.1M)
2. TechVentures Inc (8 opps, $1.4M)
3. Global Solutions Ltd (7 opps, $980k)
You can then copy this into your deck, ask Claude to format it as bullet points, and have a QBR ready in 15 minutes instead of 2 hours.
Extending Your MCP Server with Additional Tools
As you grow more sophisticated, add tools for:
Forecast Accuracy Analysis
{
  name: 'analyze_forecast_accuracy',
  description: 'Compare forecast amounts to actual close amounts for closed opportunities',
  input_schema: {
    type: 'object',
    properties: {
      months_back: { type: 'number', description: 'How many months back to analyze' },
    },
  },
}
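The handler behind this tool can reduce query results to a single accuracy metric. A sketch using mean absolute percentage error over closed deals — note that `forecast` and `actual` are placeholder field names, since forecast snapshots usually come from a custom object or Collaborative Forecasts rather than a standard Opportunity field:

```javascript
// Mean absolute percentage error between forecast and actual amounts.
// `forecast` and `actual` are placeholder field names, not standard
// Salesforce fields.
const forecastAccuracy = (records) => {
  const errors = records
    .filter((r) => r.actual > 0) // skip deals with no actual amount
    .map((r) => Math.abs(r.forecast - r.actual) / r.actual);
  if (errors.length === 0) return null;
  return errors.reduce((sum, e) => sum + e, 0) / errors.length;
};
```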
Customer Health Scoring
{
  name: 'get_account_health_scores',
  description: 'Calculate account health based on opportunity pipeline, contract renewals, and support tickets',
  input_schema: {
    type: 'object',
    properties: {
      min_score: { type: 'number', description: 'Filter accounts with health score above this threshold' },
    },
  },
}
Churn Risk Detection
{
  name: 'identify_churn_risk_accounts',
  description: 'Identify accounts at risk of not renewing based on engagement patterns and contract terms',
  input_schema: { type: 'object', properties: {} },
}
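The scoring logic itself lives in the handler. A deliberately simple sketch — the signals and weights below are illustrative assumptions to adapt, not a Salesforce formula:

```javascript
// Toy churn-risk score on a 0-100 scale; higher means more risk.
// The signals and weights are illustrative assumptions.
const churnRiskScore = ({ daysSinceLastActivity, openOppCount, daysToRenewal }) => {
  let score = 0;
  if (daysSinceLastActivity > 60) score += 40; // account has gone quiet
  if (openOppCount === 0) score += 30; // no active pipeline
  if (daysToRenewal < 90) score += 30; // renewal decision is imminent
  return score;
};
```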
Each tool becomes a capability that Claude can reason about and use in combination. A single question like “Which of our largest accounts are at churn risk?” might trigger three tools in sequence.
For inspiration on advanced analytics patterns, explore the MCP Server Salesforce Repository on GitHub, which provides working examples and community contributions.
Monitoring and Optimization
Once deployed, monitor your MCP server’s performance:
Query Latency
Track how long SOQL queries take:
const startTime = Date.now();
const records = await conn.query(soql);
const duration = Date.now() - startTime;

logger.info('Query Performance', {
  query: soql,
  duration,
  recordCount: records.records.length,
});

if (duration > 5000) {
  logger.warn('Slow Query Detected', { soql, duration });
}
API Call Usage
Track Salesforce API consumption to stay under rate limits:
let apiCallCount = 0;
const maxCallsPerDay = 15000;

const checkApiLimit = () => {
  if (apiCallCount >= maxCallsPerDay) {
    throw new Error('Daily API limit exceeded');
  }
  apiCallCount++;
};
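An in-memory counter resets on restart and can drift from Salesforce's own accounting. jsforce also exposes the org's actual consumption on conn.limitInfo, parsed from the Sforce-Limit-Info response header after each REST call. A sketch of a guard built on it — the 90% threshold is an assumption:

```javascript
// Check remaining API headroom using jsforce's conn.limitInfo, which is
// populated from the Sforce-Limit-Info header after each REST call.
// The 0.9 threshold is an assumption; tune it for your org.
const checkApiHeadroom = (conn, threshold = 0.9) => {
  const usage = conn.limitInfo && conn.limitInfo.apiUsage;
  if (!usage) return { ok: true }; // no REST call made yet
  const ratio = usage.used / usage.limit;
  return { ok: ratio < threshold, used: usage.used, limit: usage.limit };
};
```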
Error Tracking
Log errors for debugging:
try {
  const records = await conn.query(soql);
} catch (error) {
  logger.error('Query Failed', {
    query: soql,
    error: error.message,
    errorCode: error.errorCode,
  });
  return { success: false, error: error.message };
}
Comparing MCP Salesforce Integration to Alternatives
Why build an MCP server instead of using other approaches?
vs. Native Salesforce Reports
- MCP is conversational and flexible; Salesforce reports are static
- MCP integrates with Claude’s reasoning; reports are isolated dashboards
- MCP can combine Salesforce data with other sources via Claude
vs. Salesforce Einstein Analytics
- MCP is open-source and portable; Einstein is Salesforce-proprietary
- MCP integrates with your existing AI stack; Einstein is a separate tool
- MCP is cheaper for teams already using Claude
vs. Third-Party BI Tools (Looker, Tableau, Power BI)
- MCP is conversational, not visual-first
- MCP integrates with AI; traditional BI tools do not
- MCP is faster to set up for simple use cases
- But traditional BI tools offer richer visualizations and broader audience reach
The best approach often combines both: use MCP for ad-hoc, AI-powered analytics; use managed Apache Superset or traditional BI for team dashboards and governance.
Troubleshooting Common Issues
“Authentication failed” errors
Verify your Salesforce credentials and security token:
echo "Username: $SALESFORCE_USERNAME"
echo "Password length: ${#SALESFORCE_PASSWORD}"
echo "Security token length: ${#SALESFORCE_SECURITY_TOKEN}"
Test authentication separately:
const testAuth = async () => {
  try {
    const conn = new jsforce.Connection();
    await conn.login(
      process.env.SALESFORCE_USERNAME,
      process.env.SALESFORCE_PASSWORD + process.env.SALESFORCE_SECURITY_TOKEN
    );
    console.log('Authentication successful');
  } catch (error) {
    console.error('Authentication failed:', error.message);
  }
};
“INVALID_FIELD” SOQL errors
Ensure field names are correct. Query Salesforce’s metadata:
const describeAccount = await conn.describe('Account');
const fieldNames = describeAccount.fields.map((f) => f.name);
console.log('Available Account fields:', fieldNames);
Rate limit exceeded
Implement exponential backoff:
const queryWithRetry = async (soql, maxRetries = 3) => {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await conn.query(soql);
    } catch (error) {
      if (error.errorCode === 'REQUEST_LIMIT_EXCEEDED' && i < maxRetries - 1) {
        const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
        await new Promise((resolve) => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
};
Next Steps and Resources
You now have a working MCP server that exposes Salesforce data to Claude. The next steps depend on your use case:
For Sales Teams
- Add tools for forecast analysis, pipeline health, and deal scoring
- Integrate with your CRM workflow to get Claude insights directly in Salesforce
- Use Claude to generate sales playbooks based on historical win/loss data
For Analytics Teams
- Combine this MCP server with D23’s managed Superset for both conversational and visual analytics
- Build custom dashboards that complement Claude’s ad-hoc capabilities
- Use MCP for data discovery and Superset for team reporting
For Engineering Teams
- Embed this MCP server in your product to give users Claude-powered analytics
- Use MCP as a foundation for building AI agents that take actions in Salesforce (create opportunities, update accounts, etc.)
- Extend the server with tools for your other enterprise systems (HubSpot, Marketo, Stripe, etc.)
For a step-by-step walkthrough of Salesforce MCP configuration, the Install and Configure the Salesforce DX MCP Server (Beta) documentation covers Node.js setup and JSON configuration requirements in detail.
If you’re building no-code or low-code integrations, the Create Custom MCP Server for Salesforce - Without Code guide explores alternative approaches using automation platforms.
For enterprise teams looking to standardize analytics across multiple systems, consider how MCP fits into your broader data strategy alongside D23’s analytics platform. Review D23’s terms of service and privacy policy if you’re integrating with managed services.
The Model Context Protocol is still evolving. Follow the Building Agent Integrations via Model Context Protocol (MCP) Salesforce Dreamforce 2025 session for updates on expanding Agentforce capabilities and new MCP patterns.
Start small, test thoroughly, and iterate. Your first MCP server doesn’t need to be perfect—it just needs to answer one question reliably. From there, you can add tools, optimize queries, and expand to more complex analytics workflows.