You want to build custom AI integrations for your business. The Model Context Protocol gives you a standardised way to connect AI systems to your data and tools. The development landscape, though? It can be tricky to figure out.
The complexity isn’t in MCP itself—the protocol is straightforward. The challenge is working out which SDK to use, how to test things, which authentication pattern fits, and where to deploy. Each choice affects how quickly you ship and how easily your team maintains what you build.
This article walks you through what’s involved in MCP server development from environment setup through to publishing. You’ll understand the SDK options, what tools and resources actually do, how OAuth fits in, what testing looks like, and how the registry works.
If you need the fundamentals, the article on MCP architecture components covers the client-server-host model. This article assumes you're familiar with those concepts and ready to understand the development landscape.
We’ll look at environment setup, SDK choices, server capabilities, API integration patterns, OAuth authentication, testing, and publishing.
How do I set up my development environment for MCP server development?
MCP server development needs either Python 3.10+ or Node.js 18+. Your choice depends on what your team already knows and what your existing infrastructure looks like.
Python setups typically use a package manager like uv and virtual environments to isolate dependencies. FastMCP uses decorators to simplify server development, similar to how FastAPI works. The official Python SDK offers more control but requires more configuration.
TypeScript environments need Node.js and npm for dependency management. Projects usually have a tsconfig.json defining compiler settings—target ES2022, module Node16, strict mode enabled. The TypeScript SDK provides strong typing and first-class async support.
Both paths benefit from the MCP Inspector, a testing tool that acts as a basic MCP client. It shows the JSON-RPC messages being exchanged and lets you verify servers work correctly before connecting them to AI clients.
Environment variables store API keys and secrets. You need some way to manage these—.env files work for local development, but production needs proper secrets management services.
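As a minimal sketch of the local pattern, assuming python-dotenv is installed and using a hypothetical WEATHER_API_KEY variable, a Python server might load its secrets like this:

```python
# Minimal sketch: load secrets from a .env file during local development.
# Assumes python-dotenv is installed; WEATHER_API_KEY is a hypothetical example.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory, if present

api_key = os.environ.get("WEATHER_API_KEY")
if not api_key:
    raise RuntimeError("WEATHER_API_KEY is not set")
```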
Directory structure matters for maintainability. Separate server code, tests, and configuration. Keep it simple initially.
Common setup problems? Permission errors that need elevated access, and path issues on Windows where JSON configuration files need double backslashes or forward slashes. MCP servers require absolute paths when configuring connections—relative paths cause connection failures.
IDE configuration with appropriate extensions and debugger setup saves time when troubleshooting. You’ll need to step through server code at some point.
The setup overhead is similar to any API development—get the tools in place, verify they work, then start building.
What programming language should I use for my MCP server – Python or TypeScript?
Both SDKs implement identical MCP capabilities. This choice is about developer ergonomics, not functionality.
Python offers two paths. The official SDK gives you complete control but you’ll write more boilerplate for Server class configuration. FastMCP uses decorators to simplify development—same pattern FastAPI popularised. Less code, faster development, works for most cases.
TypeScript provides strong typing and first-class async support. Teams already working in Node.js find it natural. The ecosystem integrations work well. JavaScript developers don’t need to context-switch.
The decision usually comes down to your team’s expertise and your existing codebase. Running a Node backend already? TypeScript makes sense. Have data scientists or ML engineers on your team? Python is easier. Need to ship quickly? FastMCP wins.
Performance differences are minimal for most applications. Don’t choose based on speed—choose based on what your team knows and what your infrastructure already uses.
How do I create my first Hello World MCP server?
The simplest MCP server imports the SDK, initialises a server instance, defines at least one capability, and calls the server's run method.
FastMCP servers initialise with a name identifier. Tools get defined using decorators on functions. Type hints are required—MCP uses them to generate tool schemas that AI clients can understand. Docstrings become the tool descriptions.
STDIO transport is the default for local development—perfect for command-line tools and desktop applications like Claude Desktop. It’s process-based, running your server as a child process.
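A minimal sketch of such a server using FastMCP from the official Python SDK (the server name, tool, and file name are illustrative):

```python
# hello_server.py (illustrative name): a minimal FastMCP server with one tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hello-world")

@mcp.tool()
def greet(name: str) -> str:
    """Return a friendly greeting for the given name."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```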
The server needs to be configured in the client. Claude Desktop uses a configuration file at a specific location on each platform, requiring absolute paths to the server script.
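The Claude Desktop entry in claude_desktop_config.json looks roughly like this, with a placeholder server name and path; note the absolute path:

```json
{
  "mcpServers": {
    "hello-world": {
      "command": "python",
      "args": ["/absolute/path/to/hello_server.py"]
    }
  }
}
```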
TypeScript servers need compilation before testing. This step catches people every time until it becomes habit.
Testing a server means connecting MCP Inspector to it and verifying the capabilities appear correctly. Inspector shows initialisation messages, lists your tools, and lets you invoke them to check they work.
If capabilities don’t appear, common causes include decorator syntax errors or the server not restarting after code changes.
How do I implement tools, resources, and prompts in my MCP server?
MCP servers provide three capability types: tools execute actions, resources expose data, and prompts define templates. These primitives are the server's side of the client-server-host model explained in the architecture overview.
Tools are functions that perform operations. They take typed parameters, execute logic, and return results. The tool decorator registers them with the server. Docstrings describe what they do. Optional parameters use defaults. Error handling wraps the logic to catch failures and return messages AI clients can interpret.
Async operations matter for responsiveness. API calls, database queries, file reads—anything that might block should use async patterns. The difference in response time adds up.
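A sketch of an async tool with a typed optional parameter and wrapped error handling, assuming FastMCP and a hypothetical log-reading use case:

```python
# Sketch: an async tool with a typed optional parameter and wrapped errors.
# The log-tailing use case is hypothetical; adapt it to your own logic.
import asyncio
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("log-tools")

@mcp.tool()
async def tail_log(path: str, lines: int = 20) -> str:
    """Return the last N lines of a log file."""
    try:
        # Run the blocking read off the event loop so the server stays responsive.
        text = await asyncio.to_thread(Path(path).read_text)
        return "\n".join(text.splitlines()[-lines:])
    except OSError as exc:
        # Return a message the AI client can interpret rather than raising.
        return f"Could not read {path}: {exc}"
```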
Resources expose read-only data through URI schemes. A resource decorator defines them with a URI pattern. The URI can include placeholders that get replaced at runtime based on what the AI client requests. This pattern works for file access, user profiles, database records, or any parameterised data.
Prompts define reusable templates with variable substitution. A prompt decorator creates a function that takes parameters and returns formatted text. Code review prompts might take code and language parameters, then return formatted instructions.
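A sketch of a parameterised resource and a prompt template, again with FastMCP; the users:// URI scheme and the review wording are illustrative:

```python
# Sketch: a parameterised resource and a reusable prompt template with FastMCP.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("examples")

@mcp.resource("users://{user_id}/profile")
def user_profile(user_id: str) -> str:
    """Expose a user profile as read-only data; the placeholder fills at runtime."""
    return f"Profile data for user {user_id}"

@mcp.prompt()
def code_review(code: str, language: str) -> str:
    """Return formatted code review instructions for the given snippet."""
    return f"Review the following {language} code for bugs and style issues:\n\n{code}"
```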
Type safety helps with complex inputs. Pydantic models in Python or TypeScript interfaces validate data before it reaches your logic.
Tool output formatting follows a standard structure—arrays of content objects with type and text fields.
Testing each capability as you add it prevents accumulating bugs. Inspector shows what the server registers.
Tools suit actions that change state or fetch dynamic data. Resources suit static or semi-static data that doesn’t need parameters beyond a URI. Prompts suit reusable text templates that guide AI behaviour.
How do I integrate external APIs with my MCP server?
External API integration means wrapping API endpoints in MCP tools. You need an HTTP client library—httpx for Python async support, axios or node-fetch for TypeScript.
Credentials live in environment variables, never hardcoded. Tools load them at runtime and return errors if they’re missing.
A weather API tool might take a city parameter, load the API key from environment variables, make an async HTTP request with appropriate headers and authentication, parse the JSON response, extract relevant data like temperature, and return formatted text.
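A sketch of that pattern with httpx, assuming a hypothetical weather endpoint and a WEATHER_API_KEY environment variable:

```python
# Sketch: a weather tool wrapping an external API. The endpoint, query
# parameters, and response shape are hypothetical; adapt them to your provider.
import os

import httpx

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
async def get_weather(city: str) -> str:
    """Return the current temperature for a city."""
    api_key = os.environ.get("WEATHER_API_KEY")
    if not api_key:
        return "WEATHER_API_KEY is not configured."
    try:
        async with httpx.AsyncClient(timeout=10.0) as client:
            response = await client.get(
                "https://api.example-weather.com/v1/current",
                params={"q": city},
                headers={"Authorization": f"Bearer {api_key}"},
            )
            response.raise_for_status()  # surface non-2xx responses as errors
            data = response.json()
            return f"Current temperature in {city}: {data['temperature']}°C"
    except httpx.HTTPError as exc:
        return f"Weather lookup failed: {exc}"
```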
Response transformation matters. Raw JSON works but natural language summaries are easier for AI interpretation.
Error handling catches network failures, API errors, and invalid responses. Async error handling uses try/except in Python or try/catch in TypeScript, with finally blocks. The finally block runs regardless of errors—useful for cleanup.
Raising errors that include the HTTP status code when a response isn't OK makes API issues easier to debug.
Rate limits are real. Exponential backoff for retries prevents hammering APIs. Caching responses when appropriate reduces costs and latency—weather data doesn’t change every second.
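A sketch of exponential backoff around an async request; the retry count and delays are illustrative and should match the API's rate-limit guidance:

```python
# Sketch: exponential backoff around an async request.
import asyncio

import httpx

async def fetch_with_backoff(client: httpx.AsyncClient, url: str, retries: int = 3) -> httpx.Response:
    """Retry transient failures with exponentially increasing delays."""
    delay = 1.0
    for attempt in range(retries):
        try:
            response = await client.get(url)
            response.raise_for_status()
            return response
        except httpx.HTTPError:
            if attempt == retries - 1:
                raise  # out of retries; let the caller handle the failure
            await asyncio.sleep(delay)
            delay *= 2  # 1s, 2s, 4s, ...
```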
Testing API integrations with Inspector means passing environment variables when launching it. This keeps credentials out of version control while allowing testing.
Try various inputs. Test error cases—invalid parameters, network timeouts, malformed responses. Verify the format matches expectations.
How do I implement OAuth authentication in my MCP server?
OAuth adds complexity. Use it when your server needs user-specific access to protected resources, not otherwise. For comprehensive coverage of security patterns, see the OAuth authorisation setup guide.
MCP recommends OAuth 2.1 with PKCE for user authorisation. This leverages existing identity infrastructure and provides security through Proof Key for Code Exchange.
The authorisation flow involves generating a URL with client_id, redirect_uri, and PKCE challenge. Users visit this URL to authenticate. The callback handler exchanges the authorisation code for access and refresh tokens.
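A sketch of the PKCE and authorisation URL step, with placeholder provider endpoint, client_id, redirect_uri, and scope:

```python
# Sketch: generate a PKCE verifier/challenge pair and the authorisation URL.
import base64
import hashlib
import secrets
from urllib.parse import urlencode

code_verifier = secrets.token_urlsafe(64)
code_challenge = (
    base64.urlsafe_b64encode(hashlib.sha256(code_verifier.encode()).digest())
    .rstrip(b"=")
    .decode()
)

auth_url = "https://auth.example.com/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "your-client-id",
    "redirect_uri": "http://localhost:8080/callback",
    "scope": "read:data",
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
})
# The user opens auth_url in a browser; the callback handler later exchanges the
# returned authorisation code (plus code_verifier) for access and refresh tokens.
```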
Token storage needs security—environment variables for development, encrypted storage for production. Never commit tokens to version control.
Token refresh logic detects expired tokens and automatically refreshes using the refresh token. Most APIs return token expiration times.
Tools use OAuth by passing the access token in API request headers. The Authorization header typically uses Bearer token format.
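A sketch of that pattern, refreshing an expired token before attaching the Bearer header; the endpoints, client_id, and token dictionary shape are placeholders:

```python
# Sketch: refresh an expired access token, then call a protected API with a
# Bearer header. Endpoints, client_id, and the token dict shape are placeholders.
import time

import httpx

async def refresh_if_expired(client: httpx.AsyncClient, tokens: dict) -> dict:
    """Exchange the refresh token for new tokens once the access token has expired."""
    if tokens["expires_at"] > time.time():
        return tokens
    response = await client.post(
        "https://auth.example.com/token",
        data={
            "grant_type": "refresh_token",
            "refresh_token": tokens["refresh_token"],
            "client_id": "your-client-id",
        },
    )
    response.raise_for_status()
    fresh = response.json()
    fresh["expires_at"] = time.time() + fresh.get("expires_in", 3600)
    return fresh

async def call_protected_api(client: httpx.AsyncClient, tokens: dict, url: str) -> dict:
    """Call a protected endpoint, refreshing the access token first when needed."""
    tokens = await refresh_if_expired(client, tokens)
    response = await client.get(
        url, headers={"Authorization": f"Bearer {tokens['access_token']}"}
    )
    response.raise_for_status()
    return response.json()
```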
URL mode elicitation lets users authenticate in their browser without the client seeing credentials. The server manages credentials directly.
Testing OAuth flows with Inspector means triggering authorisation, completing the browser flow, and verifying token usage in API calls.
API keys work for service-to-service authentication or read-only public data. OAuth makes sense when you need to act on behalf of a specific user with their permissions.
Security considerations include encrypted token storage, graceful expiration handling, proper scope management, redirect URI validation, and never logging tokens.
Common OAuth errors? Invalid redirect_uri from configuration mismatches with the OAuth provider, scope mismatches when requested permissions aren’t granted, and token expiration errors that need refresh logic.
For production OAuth setup and enterprise security requirements, the MCP security checklist covers implementation best practices and common pitfalls.
How do I test my MCP server with MCP Inspector?
MCP Inspector is a testing utility launched from the command line; it starts your server process and lets you interact with it as a client would.
Connection verification checks the status indicator and reviews connection logs for errors.
The Inspector UI displays all exposed tools, resources, and prompts. Browsing the list verifies everything you implemented appears correctly.
Tool invocation means selecting a tool, filling in its parameters, and examining the response and execution time. Slow tools need optimisation.
Resource access involves browsing URIs, fetching content, and verifying the data format matches expectations.
Prompt testing shows templates with variable substitution and checks the rendered output.
Error debugging uses the messages, stack traces, and logs Inspector shows. Tracing errors back to your server code identifies the problem.
Development mode with debug flags enables verbose logging—you see every message exchanged.
A testing checklist before production includes verifying all tools appear and execute successfully, error cases return meaningful messages, resources fetch data correctly with valid URIs, prompts render with variable substitution, performance is acceptable for expected usage, and OAuth flows complete successfully.
Common connection issues? Servers not starting (usually import errors or missing dependencies), wrong transport configuration (client expects stdio but server uses HTTP), and path problems from using relative instead of absolute paths.
Debugging workflow starts with Inspector logs to see what failed, checks server logs for exceptions, adds logging to narrow down the problem, tests individual functions outside the MCP context, fixes the issue, and restarts both server and Inspector.
Performance testing with Inspector identifies slow tools. Check for synchronous calls that should be async, unnecessary API calls, or redundant file reads. Optimise based on actual usage patterns.
For production testing considerations beyond local Inspector, the guide on deploying MCP servers to production covers monitoring and observability options.
How do I publish my MCP server to the registry?
Publishing involves creating metadata, documenting usage, pushing to GitHub, and submitting to the registry.
The smithery.json manifest defines server metadata—name, version, description, capabilities array listing tools and resources, and installation instructions for both Python and TypeScript users.
Documentation needs a clear README with usage examples, prerequisites, and configuration guidance. Explain what each tool does. Show sample invocations. List any API keys needed.
GitHub hosting means pushing your server code to a public repository. Choose an appropriate licence—MIT and Apache 2.0 are common for open source.
The MCP Registry submission process involves forking the GitHub MCP registry repo, adding your server entry, and submitting a pull request. Maintainers review it.
Registry requirements include a working server, documentation, smithery.json, and ideally test coverage. The PR needs to show the server functions.
Semantic versioning guides releases. Major versions (2.0.0) signal breaking changes, minor versions (1.1.0) add features, patch versions (1.0.1) fix bugs. Tag commits appropriately in git.
Publishing to PyPI for Python packages or npm for JavaScript packages makes installation easier. Users can install with package managers instead of git clone and manual setup.
Docker images work well for servers with complex dependencies. Users run your container without worrying about Python versions or system libraries.
The publishing workflow goes: test locally with Inspector, push to GitHub, create a release, submit registry PR, respond to community feedback, monitor for issues.
Maintenance includes responding to GitHub issues, updating when the SDK changes, and monitoring usage if possible. Version bumps should be meaningful.
Documentation template essentials cover what the server does in one sentence, prerequisites like Python version and API keys needed, installation instructions, configuration examples, tool descriptions with parameters, resource URI patterns, and common issues with solutions.
For production deployment considerations and transport selection for deployment, the operations guide covers stdio, SSE, and HTTP deployment patterns in detail.
FAQ
What’s the difference between MCP tools and resources?
Tools execute actions—API calls, calculations, write operations. Resources provide read-only data—file contents, database records. The distinction matters for how AI clients interact with them. Tools take parameters and produce results from executing logic. Resources take URIs and return data.
Can I use multiple MCP servers together in one project?
Yes. Claude Desktop and other AI clients connect to multiple MCP servers simultaneously. Each server’s tools and resources appear in the client. Make sure servers don’t conflict on names or functionality. Multiple specialised servers often make more sense than one large server trying to do everything.
Do I need OAuth for every MCP server?
No. OAuth makes sense when your server accesses user-specific protected resources requiring authorisation. For public APIs, read-only data, or internal tools, API keys or no authentication work fine. The added complexity isn’t worth it unless you need user-specific access.
How do I debug connection errors between my MCP server and client?
Check server logs for startup errors. Verify transport configuration matches client settings—stdio vs SSE. Confirm file paths are absolute. Enable development mode for verbose logging. Test with Inspector before trying production clients. Connection errors usually stem from misconfiguration rather than code bugs.
What’s the difference between stdio and SSE transport?
stdio (standard input/output) is process-based for local servers running as child processes. It’s the default for desktop integrations. SSE (Server-Sent Events) is HTTP-based for remote servers over networks. Web deployments need SSE. Local development typically uses stdio for simplicity.
Should I use FastMCP or the official Python SDK?
FastMCP suits faster development with less boilerplate—it works for most cases. The official SDK makes sense when you need fine-grained control over server lifecycle, custom transport implementations, or advanced configuration. FastMCP builds on the official SDK, so you’re not losing functionality by choosing it.
How do I handle secrets like API keys in my MCP server?
Store secrets in environment variables. Use .env files locally. Load them at runtime. Never commit secrets to version control. AWS Secrets Manager, HashiCorp Vault, or similar services handle production secrets management. The pattern is the same as any backend service.
Can I deploy my MCP server to production or is it local-only?
MCP servers run locally (stdio) or remotely (SSE/HTTP). Production deployment means running as a remote server with SSE transport, implementing proper authentication, using secrets management, adding monitoring, and tracking errors. The deployment article covers architecture considerations.
What are MCP resource templates and when should I use them?
Resource templates allow dynamic URI patterns with placeholders. Examples include file paths or user profiles. Use them when you need parameterised resource access where the specific resource depends on runtime values from the AI client. They’re more flexible than defining every possible resource statically.
How do I add database connectivity to my MCP server?
Install the database client library for your database. Create connections using credentials from environment variables. Implement tools or resources that query the database. Handle connection pooling and errors. Test with Inspector before production. Database connectivity works the same as in any application—MCP servers are just Python or TypeScript processes.
What are common MCP server errors and their solutions?
Connection refused usually means the server isn't running or the transport doesn't match the client. Import errors point to the SDK not being installed in the correct virtual environment. Tools not appearing usually comes down to decorator syntax errors or forgetting to restart the server. Timeout errors call for async operations or optimising slow tools.
How long does it take to build a basic MCP server?
A simple Hello World server takes 15-30 minutes including environment setup. A production-ready server with API integration, OAuth, error handling, and testing takes 2-4 hours for experienced developers. The time increases if you’re new to MCP or async programming patterns.
Next Steps
You now understand the development landscape for MCP servers—from environment setup through SDK selection, implementing tools and resources, OAuth integration, testing with Inspector, and publishing to the registry.
Basic servers implement simple tools with straightforward APIs. Production servers need error handling, authentication, testing coverage, and documentation. The effort scales with complexity, but the fundamentals remain consistent.
Once you’ve built and deployed a basic MCP server, explore MCP Tasks for long-running workflows and advanced agentic patterns that enable sophisticated multi-step operations and context engineering techniques.