---
title: "Understanding MCP Prompts"
description: "Learn what MCP prompts are, how they provide reusable templates for AI interactions, and how to create custom prompts using mcp-framework and the official TypeScript SDK."
order: 6
level: "beginner"
duration: "10 min"
keywords:
  - MCP prompts
  - MCP prompt templates
  - MCPPrompt class
  - reusable AI prompts
  - mcp-framework prompts
  - MCP prompt development
  - AI prompt engineering
  - MCP server prompts
date: "2026-04-01"
---

# Understanding MCP Prompts
Prompts are the third MCP primitive. They provide reusable templates that guide how an AI assistant approaches a task. Unlike tools (which perform actions) and resources (which expose data), prompts shape the AI's behavior by providing structured instructions and context. This guide covers how to create and use MCP prompts effectively.
## What Is an MCP Prompt?

An MCP Prompt is a reusable template that defines a structured set of messages for an AI interaction. Each prompt has a name, a description, optional arguments, and a method that generates a sequence of messages (system instructions, user context, or assistant prefills). Prompts help standardize how AI assistants approach specific tasks across an organization.
## What Are MCP Prompts For?
Prompts solve a different problem than tools and resources. While tools give the AI the ability to do things and resources give it access to data, prompts tell the AI how to think about a task.
Common use cases for prompts include:
- Code review templates — Standardize how AI reviews code across your team
- Analysis frameworks — Provide consistent analysis methodology for reports
- Debugging guides — Walk the AI through a systematic debugging process
- Documentation generators — Template for generating consistent docs from code
- Data analysis workflows — Structured approach to analyzing datasets
- Report writers — Consistent format and tone for business reports
## How Do Prompts Differ from System Prompts?
A system prompt is set at the application level — it is always active. MCP prompts are selectable — users or the AI client can choose which prompt to activate for a specific task. Think of MCP prompts as a library of expert frameworks that can be pulled in when needed.
| Aspect | System Prompt | MCP Prompt |
|---|---|---|
| Scope | Always active for all conversations | Selected per-task as needed |
| Defined by | Application developer | MCP server developer |
| Flexibility | One per session | Multiple available, user chooses |
| Arguments | Static | Can accept dynamic parameters |
| Reusability | Tied to one application | Works across any MCP client |
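The distinction can be sketched in plain TypeScript. This is an illustrative snippet only (the constant names and prompt texts are invented, not part of any MCP API): a system prompt is one fixed string, while MCP prompts behave like a keyed library of parameterizable templates the client selects from per task.

```typescript
// Invented names for illustration only -- not an MCP API.
// A system prompt: one fixed instruction, active for every conversation.
const systemPrompt = "You are a helpful assistant.";

// MCP prompts: a library of task-specific templates the client picks from,
// each able to accept arguments at selection time.
const promptLibrary: Record<string, (args: Record<string, string>) => string> = {
  code_review: (args) => `Review this ${args.language ?? "TypeScript"} code.`,
  generate_docs: (args) => `Write ${args.style ?? "readme"} documentation.`,
};

// The user (or client) picks one template and supplies arguments.
const selected = promptLibrary["code_review"]({ language: "Go" });
console.log(selected); // "Review this Go code."
```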
## How Do Prompts Work in the MCP Protocol?

The prompt lifecycle in MCP:

1. Discovery — The client asks the server for available prompts via `prompts/list`. The server responds with prompt names, descriptions, and accepted arguments.
2. Selection — The user or AI client selects a prompt to use.
3. Generation — The client calls `prompts/get` with the prompt name and any arguments. The server returns a structured array of messages.
4. Application — The client incorporates these messages into the AI conversation, guiding the model's behavior for the current task.
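The two round trips carry different payloads, sketched below as plain TypeScript object literals. The prompt name and arguments are examples; the field shapes follow the lifecycle described above (discovery returns metadata only, generation returns messages only).

```typescript
// Step 1 (discovery): result of a prompts/list request -- metadata only.
const listResult = {
  prompts: [
    {
      name: "code_review",
      description: "A systematic code review template",
      arguments: [
        { name: "language", description: "Language to review", required: true },
        { name: "focus", description: "Focus area", required: false },
      ],
    },
  ],
};

// Step 3 (generation): result of prompts/get for that prompt -- messages only.
const getResult = {
  messages: [
    {
      role: "user",
      content: { type: "text", text: "Please review my TypeScript code." },
    },
  ],
};

// Discovery tells the client what exists; generation returns the actual
// message array the client splices into the conversation.
console.log(listResult.prompts[0].name, getResult.messages.length);
```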
## How Do You Create a Prompt with mcp-framework?

In mcp-framework, prompts extend the `MCPPrompt` class. Here is a complete example:
```typescript
import { MCPPrompt, PromptArgument } from "mcp-framework";

class CodeReviewPrompt extends MCPPrompt {
  name = "code_review";
  description =
    "A systematic code review template that analyzes code for bugs, performance, security, and best practices";

  args: PromptArgument[] = [
    {
      name: "language",
      description:
        "The programming language of the code to review (e.g., TypeScript, Python, Go)",
      required: true,
    },
    {
      name: "focus",
      description:
        "Specific focus area: security, performance, readability, or all",
      required: false,
    },
  ];

  getMessages(args: Record<string, string>) {
    const language = args.language || "TypeScript";
    const focus = args.focus || "all";

    let focusInstructions = "";
    switch (focus) {
      case "security":
        focusInstructions =
          "Focus primarily on security vulnerabilities, injection risks, authentication issues, and data exposure.";
        break;
      case "performance":
        focusInstructions =
          "Focus primarily on performance bottlenecks, unnecessary allocations, N+1 queries, and optimization opportunities.";
        break;
      case "readability":
        focusInstructions =
          "Focus primarily on code clarity, naming conventions, documentation, and maintainability.";
        break;
      default:
        focusInstructions =
          "Analyze all aspects: bugs, security, performance, readability, and adherence to best practices.";
    }

    return [
      {
        role: "system" as const,
        content: `You are a senior ${language} developer performing a thorough code review. ${focusInstructions}

Structure your review as follows:

## Summary
A brief overview of the code and its purpose.

## Issues Found
List each issue with:
- **Severity**: Critical / Warning / Suggestion
- **Line(s)**: Reference specific lines
- **Description**: What the issue is
- **Fix**: How to resolve it

## Positive Aspects
Note well-written code, good patterns, and things done right.

## Recommendations
Broader suggestions for improvement.

Be specific, reference line numbers, and provide code examples for fixes.`,
      },
      {
        role: "user" as const,
        content:
          "Please review the code I am about to share. Apply the review framework described above.",
      },
    ];
  }
}

export default CodeReviewPrompt;
```
### Required Properties and Methods

| Property/Method | Type | Purpose |
|-----------------|------|---------|
| `name` | `string` | Unique identifier for the prompt. |
| `description` | `string` | Explains what the prompt template does. |
| `args` | `PromptArgument[]` | Defines accepted arguments with name, description, and required flag. |
| `getMessages()` | function | Returns an array of message objects that define the prompt template. |
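Putting the table together, the smallest useful prompt needs only these four members. The sketch below is dependency-free and illustrative: it uses a plain object instead of extending `MCPPrompt`, and the prompt name and argument are invented.

```typescript
// Shape-only sketch of the four members from the table above.
// Plain object instead of `extends MCPPrompt` so the snippet stands alone.
const minimalPrompt = {
  name: "summarize",
  description: "Asks the assistant for a three-sentence summary",
  args: [{ name: "tone", description: "formal or casual", required: false }],
  getMessages(args: Record<string, string>) {
    const tone = args.tone || "formal"; // optional argument with a default
    return [
      {
        role: "user" as const,
        content: `Summarize the text I share next in three ${tone} sentences.`,
      },
    ];
  },
};

// Calling with no arguments falls back to the "formal" default.
const msgs = minimalPrompt.getMessages({});
console.log(msgs[0].content);
```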
### Message Structure

The `getMessages` method returns an array of message objects with two properties:

- `role` — Either `"system"`, `"user"`, or `"assistant"`
- `content` — The text content of the message

Messages are ordered and applied in sequence:
| Role | Purpose |
|------|---------|
| system | Sets the AI's behavior, expertise, and instructions |
| user | Provides context or initial user input |
| assistant | Prefills the AI's response to guide its output format |
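The three roles can be captured in a small local type. This is a sketch only (`PromptMessage` is an assumed name here, not an mcp-framework export) showing the typical system → user → assistant ordering from the table above.

```typescript
// Local sketch; mcp-framework's own exported types may differ.
type PromptMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

// Typical ordering: instructions, then context, then an optional prefill.
const messages: PromptMessage[] = [
  { role: "system", content: "You are a concise release-note writer." },
  { role: "user", content: "I will paste the commit log next." },
  { role: "assistant", content: "## Release Notes\n\n" },
];

const roles = messages.map((m) => m.role).join(",");
console.log(roles); // "system,user,assistant"
```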
## Practical Prompt Examples

### Bug Report Analyzer

```typescript
import { MCPPrompt, PromptArgument } from "mcp-framework";

class BugAnalyzerPrompt extends MCPPrompt {
  name = "analyze_bug_report";
  description =
    "Analyzes a bug report and provides a structured investigation plan";

  args: PromptArgument[] = [
    {
      name: "severity",
      description: "Bug severity level: critical, high, medium, low",
      required: false,
    },
  ];

  getMessages(args: Record<string, string>) {
    const severity = args.severity || "unknown";

    return [
      {
        role: "system" as const,
        content: `You are a senior software engineer triaging a ${severity}-severity bug report. Follow this investigation framework:

1. **Reproduce**: Identify steps to reproduce the issue
2. **Isolate**: Narrow down the component or module responsible
3. **Root Cause**: Determine the underlying cause
4. **Impact Assessment**: Evaluate scope and affected users
5. **Fix Strategy**: Propose a fix with estimated effort
6. **Prevention**: Suggest how to prevent similar issues

Be systematic and ask clarifying questions if the bug report lacks detail.`,
      },
      {
        role: "user" as const,
        content:
          "I have a bug report to analyze. I will share the details next.",
      },
    ];
  }
}

export default BugAnalyzerPrompt;
```
### Documentation Generator

```typescript
import { MCPPrompt, PromptArgument } from "mcp-framework";

class DocGeneratorPrompt extends MCPPrompt {
  name = "generate_docs";
  description =
    "Generates comprehensive documentation for code, including usage examples and API references";

  args: PromptArgument[] = [
    {
      name: "style",
      description:
        "Documentation style: api-reference, tutorial, readme, or jsdoc",
      required: true,
    },
    {
      name: "audience",
      description: "Target audience: beginner, intermediate, or advanced",
      required: false,
    },
  ];

  getMessages(args: Record<string, string>) {
    const style = args.style;
    const audience = args.audience || "intermediate";

    const styleGuide: Record<string, string> = {
      "api-reference": `Generate API reference documentation with:
- Function signatures with full type information
- Parameter descriptions in a table format
- Return type documentation
- Code examples for each function
- Error cases and edge cases`,
      tutorial: `Generate a tutorial-style guide with:
- Step-by-step instructions
- Explanations of why each step is necessary
- Complete code examples that build on each other
- Common mistakes and how to avoid them
- A working example at the end`,
      readme: `Generate a README with:
- Project overview and purpose
- Installation instructions
- Quick start guide
- Configuration options
- Contributing guidelines`,
      jsdoc: `Generate JSDoc comments for all exported functions, classes, and interfaces with:
- @description tags
- @param tags with types and descriptions
- @returns tags
- @example tags with usage examples
- @throws tags where applicable`,
    };

    return [
      {
        role: "system" as const,
        content: `You are a technical writer creating documentation for a ${audience} audience.

${styleGuide[style] || styleGuide["api-reference"]}

Write clearly and concisely. Use consistent formatting. Include practical examples that developers can copy and adapt.`,
      },
      {
        role: "user" as const,
        content:
          "I will share the code that needs documentation. Generate the documentation according to the specified style.",
      },
    ];
  }
}

export default DocGeneratorPrompt;
```
### Data Analysis Framework

```typescript
import { MCPPrompt, PromptArgument } from "mcp-framework";

class DataAnalysisPrompt extends MCPPrompt {
  name = "analyze_data";
  description =
    "A structured framework for analyzing datasets, identifying patterns, and generating insights";

  args: PromptArgument[] = [
    {
      name: "domain",
      description:
        "The data domain: sales, users, performance, financial, or general",
      required: false,
    },
  ];

  getMessages(args: Record<string, string>) {
    const domain = args.domain || "general";

    return [
      {
        role: "system" as const,
        content: `You are a data analyst specializing in ${domain} data. When presented with data, follow this analysis framework:

## 1. Data Overview
- Describe the dataset structure (columns, types, size)
- Identify the time range if temporal data is present
- Note any immediately apparent data quality issues

## 2. Summary Statistics
- Calculate key metrics (mean, median, mode, std dev) for numeric fields
- Show distribution of categorical fields
- Identify outliers

## 3. Pattern Analysis
- Look for trends over time
- Identify correlations between fields
- Note any seasonal or cyclical patterns

## 4. Key Insights
- State 3-5 actionable findings
- Support each finding with specific data points
- Rank findings by business impact

## 5. Recommendations
- Suggest next steps based on the analysis
- Identify additional data that would be useful
- Propose hypotheses to test

Present findings clearly with specific numbers. Avoid vague statements.`,
      },
      {
        role: "user" as const,
        content:
          "I will share my dataset. Please analyze it using the framework above.",
      },
    ];
  }
}

export default DataAnalysisPrompt;
```
## How Do You Create Prompts with the Official SDK?

For comparison, here is the code review prompt implemented with `@modelcontextprotocol/sdk`:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-server",
  version: "1.0.0",
});

server.prompt(
  "code_review",
  "A systematic code review template",
  {
    language: z.string().describe("Programming language to review"),
    focus: z
      .enum(["security", "performance", "readability", "all"])
      .optional()
      .describe("Specific focus area"),
  },
  async ({ language, focus }) => {
    const f = focus || "all";
    return {
      messages: [
        {
          role: "user",
          content: {
            type: "text",
            text: `Please review my ${language} code with a focus on ${f}. Structure your review with: Summary, Issues Found (with severity), Positive Aspects, and Recommendations.`,
          },
        },
      ],
    };
  }
);
```
| Aspect | mcp-framework | Official SDK |
|---|---|---|
| Style | Class-based | Functional |
| Arguments | `PromptArgument` array | Zod schema |
| Message format | `{ role, content }` objects | `{ role, content: { type, text } }` objects |
| Registration | Auto-discovered from `prompts/` directory | Manual `server.prompt()` call |
| File organization | One file per prompt | Inline or imported handlers |
## Prompt Design Best Practices
Follow these principles when creating MCP prompts:
- Be specific — Vague instructions produce vague results. Define exactly what structure and format you want.
- Use markdown structure — Headings, lists, and formatting make instructions clearer.
- Include examples — Show the AI what good output looks like.
- Parameterize wisely — Use arguments for things that genuinely vary (language, audience, severity), not for everything.
- Keep it focused — Each prompt should address one type of task. Create separate prompts for code review, documentation, and debugging.
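The "parameterize wisely" point can be illustrated with a plain function (the names here are invented for illustration, not an mcp-framework API): derive variation from one argument that genuinely varies, with a sensible default, instead of exposing every sentence as a parameter.

```typescript
// Invented helper for illustration -- not an mcp-framework API.
// One argument that genuinely varies (audience); everything else is fixed.
const guidanceByAudience: Record<string, string> = {
  beginner: "Explain every term the first time you use it.",
  intermediate: "Assume familiarity with the language; define domain terms.",
  advanced: "Be terse; skip background explanations.",
};

function buildSystemMessage(audience: string): string {
  // Unknown values fall back to a safe default rather than failing.
  const guidance = guidanceByAudience[audience] ?? guidanceByAudience["intermediate"];
  return `You are a technical writer. ${guidance}`;
}

console.log(buildSystemMessage("beginner"));
console.log(buildSystemMessage("unknown")); // falls back to intermediate
```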
### Using System Messages Effectively

System messages set the AI's persona and instructions. They are the most powerful part of a prompt template:
```typescript
// Good: specific, structured, actionable
{
  role: "system",
  content: `You are a database performance expert. When analyzing a query:
1. Identify the query type (SELECT, JOIN, aggregate)
2. Check for missing indexes
3. Look for N+1 patterns
4. Estimate row scan count
5. Suggest specific optimizations with rewritten SQL`,
}

// Avoid: vague, unstructured
{
  role: "system",
  content: "You are helpful. Please analyze queries and suggest improvements.",
}
```
### Using Assistant Prefills

You can include an assistant message to "start" the AI's response in a specific format:
```typescript
getMessages(args: Record<string, string>) {
  return [
    {
      role: "system" as const,
      content: "You are a security auditor analyzing code for vulnerabilities.",
    },
    {
      role: "user" as const,
      content: "Analyze the following code for security issues.",
    },
    {
      role: "assistant" as const,
      content: "## Security Audit Report\n\n### Findings\n\n",
    },
  ];
}
```
The assistant prefill guides the AI to continue in the established format, ensuring consistent output structure.
## Combining Prompts with Tools and Resources

The real power of MCP prompts emerges when they are combined with tools and resources on the same server. A code review prompt becomes much more effective when the server also provides a resource with your team's coding standards and a tool to run the linter. The AI gets instructions (the prompt), context (resources), and capabilities (tools) all in one package.
## Next Steps

You now understand all three MCP primitives — tools, resources, and prompts. It is time to connect your server to a real AI client. Head to Connecting to Claude Desktop to see your server in action.