AI assistants are becoming an increasingly important part of the developer experience. To help them provide accurate, up-to-date information about Nuxt, we built an MCP server that exposes our documentation, blog posts, and deployment guides in a structured way. Here's how we did it with the Nuxt MCP Toolkit, and how you can build your own.
The Model Context Protocol (MCP) is an open standard that enables AI assistants to securely access data and tools. Think of it as an API specifically designed for AI assistants: rather than returning HTML or generic JSON, it provides structured, semantic data that LLMs can easily understand and use.
MCP defines three main primitives:

- **Tools**: model-controlled actions that accept parameters and perform operations
- **Resources**: application-driven data, identified by URIs, that provides context to the model
- **Prompts**: user-invoked, reusable templates that guide the model through specific workflows
We've observed that AI assistants using MCP servers provide significantly better responses than traditional RAG (Retrieval-Augmented Generation) approaches.
Both Nuxt and Nuxt UI now have MCP servers with similar architectures, making it easier for AI assistants to help developers with these frameworks.
Our MCP server is built directly into nuxt.com using the Nuxt MCP Toolkit module. The module provides automatic discovery of tools, resources, and prompts from your server directory:
```
nuxt.com/
├── server/
│   └── mcp/
│       ├── tools/
│       │   ├── list-documentation-pages.ts
│       │   ├── get-documentation-page.ts
│       │   └── ...
│       ├── resources/
│       │   ├── nuxt-documentation-pages.ts
│       │   └── ...
│       └── prompts/
│           ├── find-documentation-for-topic.ts
│           └── ...
└── nuxt.config.ts
```
The architecture is straightforward: you define your tools, resources, and prompts as individual files, and the module automatically registers them and exposes an HTTP endpoint for MCP clients to connect to. No manual server setup, no transport configuration: just create files in the right directories and they're ready to use.
Getting started is as simple as adding the module to your Nuxt config:
```ts
export default defineNuxtConfig({
  modules: ['@nuxtjs/mcp-toolkit'],
  mcp: {
    name: 'Nuxt',
  }
})
```
That's it. The module will automatically scan your server/mcp/ directory and register everything it finds.
Tools enable language models to interact with external systems by accepting parameters and performing operations. Here's how we implemented our list_documentation_pages tool:
```ts
import { z } from 'zod'
import { queryCollection } from '@nuxt/content/server'

export default defineMcpTool({
  description: `Lists all available Nuxt documentation pages with their categories and basic information.

WHEN TO USE: Use this tool when you need to EXPLORE or SEARCH for documentation about a topic but don't know the exact page path.

WHEN NOT TO USE: If you already know the specific page path, use get_documentation_page directly instead.`,
  inputSchema: {
    version: z.enum(['3.x', '4.x', 'all']).optional().default('4.x').describe('Documentation version to fetch')
  },
  cache: '1h',
  async handler({ version }) {
    const event = useEvent()
    const allDocs = await queryCollection(event, 'docsv4')
      .select('title', 'path', 'description')
      .all()

    return jsonResult(allDocs.map(doc => ({
      title: doc.title,
      path: doc.path,
      description: doc.description,
      url: `https://nuxt.com${doc.path}`
    })))
  }
})
```
Notice a few key things:
- `defineMcpTool` is auto-imported, no need to import it manually
- `inputSchema` uses Zod for parameter validation
- `cache: '1h'` enables built-in response caching
- `jsonResult()` is a helper that formats the response correctly

The tool name is automatically derived from the filename (`list-documentation-pages.ts` becomes `list_documentation_pages`).
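Under the hood, MCP clients invoke tools over JSON-RPC; the module and the client handle the transport for you. As a rough sketch, a `tools/call` request (the method name defined by the MCP specification) targeting this tool could look like this, with illustrative argument values:

```typescript
// Sketch of the JSON-RPC envelope an MCP client sends to invoke a tool.
// "tools/call" is the MCP spec method name; the id and arguments are
// illustrative.
const callToolRequest = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'tools/call',
  params: {
    // Tool name derived from the filename list-documentation-pages.ts
    name: 'list_documentation_pages',
    // Arguments are validated against the Zod inputSchema on the server
    arguments: { version: '4.x' }
  }
}

console.log(JSON.stringify(callToolRequest, null, 2))
```

The server validates `arguments` against the tool's `inputSchema` before the handler runs, so malformed calls never reach your code.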
Resources allow servers to share data that provides context to language models, such as files, database schemas, or application-specific information. Each resource is uniquely identified by a URI.
The simplest way to expose a file is using the file property, which automatically handles URI generation, MIME type detection, and file reading:
```ts
export default defineMcpResource({
  name: 'readme',
  description: 'Project README file',
  file: 'README.md' // Relative to project root
})
```
For dynamic resources, you can use a custom handler:
```ts
import { queryCollection } from '@nuxt/content/server'

export default defineMcpResource({
  uri: 'resource://nuxt-com/documentation-pages',
  description: 'Complete list of available Nuxt documentation pages (defaults to v4.x)',
  cache: '1h',
  async handler(uri: URL) {
    const event = useEvent()
    const allDocs = await queryCollection(event, 'docsv4')
      .select('title', 'path', 'description')
      .all()

    const result = allDocs.map(doc => ({
      title: doc.title,
      path: doc.path,
      description: doc.description,
      version: '4.x',
      url: `https://nuxt.com${doc.path}`
    }))

    return {
      contents: [{
        uri: uri.href,
        mimeType: 'application/json',
        text: JSON.stringify(result, null, 2)
      }]
    }
  }
})
```
Unlike tools, which are model-controlled, resources are application-driven: the host application determines how to incorporate them based on user needs, for example through UI elements for explicit selection or automatic context inclusion.
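On the host side, the result of reading this resource mirrors the `contents` array the handler returns. As a sketch of how a host might consume it (the interface below paraphrases the MCP result shape; the sample payload is illustrative):

```typescript
// Hedged sketch: the shape of a resource read result, matching the
// contents array returned by the handler above.
interface ResourceContent {
  uri: string
  mimeType?: string
  text?: string
}

// Illustrative result a host could receive for the documentation resource
const result = {
  contents: [{
    uri: 'resource://nuxt-com/documentation-pages',
    mimeType: 'application/json',
    text: JSON.stringify([{ title: 'Introduction', path: '/docs/getting-started/introduction' }])
  }] as ResourceContent[]
}

// The host can parse the JSON payload before injecting it as model context
const pages = JSON.parse(result.contents[0].text ?? '[]')
console.log(pages[0].title) // 'Introduction'
```

Because the payload is `application/json`, hosts can filter or summarize it before handing it to the model rather than pasting raw text into the context.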
Prompts are reusable templates with arguments that users can invoke. They return a conversation format that guides the AI through specific workflows:
```ts
import { z } from 'zod'
import { queryCollection } from '@nuxt/content/server'

export default defineMcpPrompt({
  description: 'Find the best Nuxt documentation for a specific topic or feature',
  inputSchema: {
    topic: z.string().describe('Describe what you want to learn about'),
    version: z.enum(['3.x', '4.x']).optional().describe('Documentation version to search')
  },
  async handler({ topic, version = '4.x' }) {
    const event = useEvent()
    const docsVersion = version === '4.x' ? 'docsv4' : 'docsv3'
    const allDocs = await queryCollection(event, docsVersion)
      .select('title', 'path', 'description')
      .all()

    const allPages = allDocs?.map(doc => ({
      title: doc.title,
      path: doc.path,
      description: doc.description,
      url: `https://nuxt.com${doc.path}`
    })) || []

    return {
      messages: [{
        role: 'user' as const,
        content: {
          type: 'text' as const,
          text: `Help me find the best Nuxt documentation for this topic: "${topic}". Here are all available documentation pages: ${JSON.stringify(allPages, null, 2)}`
        }
      }]
    }
  }
})
```
Prompts differ from tools in that they are user-invoked and return conversation messages, while tools are model-controlled and return structured data.
The module provides several auto-imported helpers to simplify common patterns:
- `defineMcpTool`, `defineMcpResource`, `defineMcpPrompt`: define your MCP primitives
- `jsonResult(data)`: format a JSON response for tools
- `errorResult(message)`: return an error response from tools

To use Nuxt server utilities like `useEvent()` in your handlers, enable `asyncContext` in your Nuxt config:
```ts
export default defineNuxtConfig({
  experimental: {
    asyncContext: true
  }
})
```
Then you can access the H3 event and use Nuxt server composables like queryCollection from Nuxt Content.
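To give a feel for what the result helpers produce, here is a rough reimplementation modeled on the MCP tool-result shape (a `content` array plus an optional `isError` flag). This is a sketch based on the protocol, not the toolkit's actual source, which may differ in detail:

```typescript
// Hedged sketch of jsonResult/errorResult, modeled on the MCP tool
// result shape ({ content: [...], isError? }). The module's real
// implementations may differ.
function jsonResult(data: unknown) {
  return {
    content: [{ type: 'text' as const, text: JSON.stringify(data, null, 2) }]
  }
}

function errorResult(message: string) {
  return {
    content: [{ type: 'text' as const, text: message }],
    isError: true
  }
}

console.log(jsonResult({ pages: 3 }).content[0].type) // 'text'
console.log(errorResult('Page not found').isError)    // true
```

Either way, handlers always hand back the same envelope, so clients can treat success and error responses uniformly.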
Ready to build an MCP server for your own application? With the Nuxt MCP Toolkit, it takes just a few minutes.
```bash
npx nuxi module add mcp-toolkit
```
Add basic configuration to your Nuxt config:
```ts
export default defineNuxtConfig({
  modules: ['@nuxtjs/mcp-toolkit'],
  mcp: {
    name: 'my-app'
  }
})
```
Create a file in server/mcp/tools/:
```ts
import { z } from 'zod'

export default defineMcpTool({
  description: 'Search through my content',
  inputSchema: {
    query: z.string().describe('Search query')
  },
  async handler({ query }) {
    // Your search logic here
    const results = await searchContent(query)
    return jsonResult(results)
  }
})
```
That's it! Your MCP server is now accessible at https://your-domain.com/mcp.
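To connect a client, point it at that endpoint. For example, a Cursor `mcp.json` entry could look something like the following; key names vary between clients, so check your client's documentation:

```json
{
  "mcpServers": {
    "my-app": {
      "url": "https://your-domain.com/mcp"
    }
  }
}
```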
You can also add resources and prompts following the same pattern:
```ts
export default defineMcpResource({
  name: 'readme',
  description: 'Project README file',
  file: 'README.md'
})
```
For more advanced configuration options, check out the Nuxt MCP Toolkit documentation.
Ready to experience the power of MCP with Nuxt? Our server is already live and provides access to all Nuxt documentation, blog posts, and deployment guides.
The easiest way to get started is with Cursor's one-click installation:
Install Nuxt MCP Server in Cursor

The Nuxt MCP server works with Claude Desktop, Windsurf, Visual Studio Code, ChatGPT, and many other MCP-compatible AI assistants. For complete setup instructions for all platforms, check out our MCP documentation.
We encourage you to build MCP servers for your own applications. Whether it's documentation, API references, or domain-specific knowledge, MCP makes it easy for AI assistants to provide accurate, helpful information to your users.
The complete source code for our MCP server is available on GitHub in the server/mcp/ directory. Feel free to use it as inspiration for your own implementation!