Set up MCP server and Agent Skills
Objective: Create a GPU Pod on YottaLabs using RTX 5090, PyTorch, and JupyterLab through natural language with Cursor's MCP integration.
🚀 Quick Start Guide
Follow these 5 steps to create and access your GPU Pod:
Step 1: Prerequisites Check ✅
Before you begin, ensure you have the following:
✅ Node.js >= 18
Check your version: run node --version in a terminal
Not installed? Download from nodejs.org
✅ Yotta API Key
Navigate to Yotta Console
Go to Settings → Access Keys
Copy your API key and store it securely

✅ Cursor IDE
Download and install the latest version of Cursor
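The Node.js version check above can be scripted. A minimal sketch, using a sample version string for illustration; in practice substitute the real output of node --version:

```shell
# Check that the Node.js major version is >= 18, as the npx-based MCP
# server requires. A sample string stands in for `node --version` output.
version="v18.19.0"        # in practice: version=$(node --version)
major=${version#v}        # strip the leading "v"   -> 18.19.0
major=${major%%.*}        # keep the major part     -> 18
if [ "$major" -ge 18 ]; then
  echo "Node.js is new enough"
else
  echo "Upgrade Node.js to 18 or later"
fi
```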
Step 2: Configure MCP Server ⚙️
Configure the Yotta MCP server in Cursor to enable natural language GPU management.
📁 Locate Configuration File
Find or create the configuration file based on your operating system:
Windows
C:\Users\<YourUsername>\.cursor\mcp.json
macOS
~/.cursor/mcp.json
Linux
~/.cursor/mcp.json
⚙️ Add Configuration
Create or edit the mcp.json file with the following content:
{
  "mcpServers": {
    "yotta": {
      "command": "npx",
      "args": ["-y", "@yottascale/agent-native-infra"],
      "env": {
        "YOTTA_API_KEY": "your-yotta-api-key-here"
      }
    }
  }
}
⚠️ Important: Replace your-yotta-api-key-here with your actual Yotta API Key from Step 1.
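Before restarting Cursor, it can help to confirm the file parses as JSON. A quick sketch: it validates an inlined copy of the config here, but you would point python3 at your real mcp.json path from the table above:

```shell
# Write a sample of the config and check that it parses as JSON.
# In practice, run `python3 -m json.tool` on your actual mcp.json path.
cat > /tmp/mcp-sample.json <<'EOF'
{
  "mcpServers": {
    "yotta": {
      "command": "npx",
      "args": ["-y", "@yottascale/agent-native-infra"],
      "env": { "YOTTA_API_KEY": "your-yotta-api-key-here" }
    }
  }
}
EOF
python3 -m json.tool /tmp/mcp-sample.json > /dev/null && echo "valid JSON"
```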
🔄 Restart Cursor
After saving the configuration:
Completely close Cursor (all windows)
Restart Cursor
The Yotta MCP server will now be available
Step 3: Create Your GPU Pod 🎯
Now you can create a GPU Pod using natural language through Cursor Composer.
🎯 Method: Natural Language (Recommended)
Open Cursor Composer by pressing:
Windows/Linux: Ctrl+I
macOS: Cmd+I
Type your request in natural language:
Create a Yotta Pod named pytorch-rtx5090-jupyter using RTX 5090 GPU,
PyTorch image, enable JupyterLab, and set password to yotta2025
Let the AI handle it: it will call the pod_create tool with the appropriate parameters.
📝 Key Parameters Reference (example values)

| Parameter | Description | Example |
| --- | --- | --- |
| name | Pod name | pytorch-rtx5090-jupyter |
| image | Docker image | yottalabsai/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04-2025081902 |
| gpuType | GPU type | NVIDIA_RTX_5090_32G |
| gpuCount | Number of GPUs | 1 |
| environmentVars | Environment variables | JUPYTER_PASSWORD (required) |
| expose | Ports to expose | 22 (SSH), 8888 (JupyterLab) |
| regions | Preferred regions | ["us-east-1", "us-east-2"] |
Step 4: Wait for Pod Creation 🕐
After submitting your request, you'll receive a response confirming the pod creation.
✅ Success Response Example
{
  "id": "420522713875330018",
  "name": "pytorch-rtx5090-jupyter",
  "gpuType": "NVIDIA_RTX_5090_32G",
  "gpuDisplayName": "RTX 5090",
  "gpuCount": 1,
  "singleCardVramInGb": 32,
  "singleCardPrice": "0.65",
  "status": "INITIALIZE",
  "environmentVars": [
    {"key": "JUPYTER_PASSWORD", "value": "yotta2025"}
  ],
  "expose": [
    {"port": 22, "protocol": "SSH"},
    {"port": 8888, "protocol": "HTTP"}
  ]
}
📊 Status Progression
Your pod will go through these states:
INITIALIZE → Pod is being created
RUNNING → Pod is ready to use (usually takes 1-2 minutes)
Step 5: Access Your Pod 🌐
Once the pod status changes to RUNNING, you can access it.
🌐 Via Yotta Console
Open Yotta Console
Navigate to Compute → Pods
Click on your pod: pytorch-rtx5090-jupyter
View the access URLs in the details panel
Get the SSH address from the pod details in the console.
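With the SSH address from the pod details, connecting from a local terminal looks roughly like this. The <host> and <port> placeholders stand for the values shown in the console, and the root username is an assumption; use whatever the pod details panel shows:

```
# Replace <host> and <port> with the SSH address from the pod details panel.
ssh root@<host> -p <port>
```

JupyterLab is reachable through the exposed port 8888 URL shown in the console; log in with the JUPYTER_PASSWORD you set when creating the pod (yotta2025 in this guide).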
Pod Management Commands
Once your pod is running, you can manage it using natural language commands in Cursor Composer:
📋 List All Pods
🔍 Get Pod Details
⏸️ Pause Pod (Stop Billing)
💡 Tip: Pause pods when not in use to save costs!
🗑️ Delete Pod
⚠️ Warning: This action is irreversible!
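Each management action above maps to a short Composer prompt. For example (exact wording is flexible; the pod name follows this guide):

```
List all my Yotta pods
Show details for the pod pytorch-rtx5090-jupyter
Pause the pod pytorch-rtx5090-jupyter
Delete the pod pytorch-rtx5090-jupyter
```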