Set up MCP server and Agent Skills

Objective: Create a GPU Pod on YottaLabs using RTX 5090, PyTorch, and JupyterLab through natural language with Cursor's MCP integration.


🚀 Quick Start Guide

Follow these 5 steps to create and access your GPU Pod:

Step 1: Prerequisites Check ✅

Before you begin, ensure you have the following:

✅ Node.js >= 18

✅ Yotta API Key

✅ Cursor IDE
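The Node.js requirement above can also be checked programmatically. Here is a minimal sketch; the helper names are illustrative and not part of any Yotta tooling:

```python
# Sketch: verify the Node.js prerequisite (>= 18) from Python.
# The version-parsing helper is pure, so it can be checked without Node installed.
import shutil
import subprocess

def node_major(version_string: str) -> int:
    # "v20.11.1" -> 20
    return int(version_string.strip().lstrip("v").split(".")[0])

def node_ok(minimum: int = 18) -> bool:
    # Returns False if `node` is not on PATH or is older than `minimum`.
    if shutil.which("node") is None:
        return False
    out = subprocess.run(["node", "--version"], capture_output=True, text=True)
    return node_major(out.stdout) >= minimum
```

Running `node --version` in a terminal and comparing the major version by eye works just as well.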

Step 2: Configure MCP Server ⚙️

Configure the Yotta MCP server in Cursor to enable natural language GPU management.

📁 Locate Configuration File

Find or create the configuration file based on your operating system:

  • Windows: C:\Users\<YourUsername>\.cursor\mcp.json

  • macOS: ~/.cursor/mcp.json

  • Linux: ~/.cursor/mcp.json

⚙️ Add Configuration

Create or edit the mcp.json file with the following content:

{
  "mcpServers": {
    "yotta": {
      "command": "npx",
      "args": ["-y", "@yottascale/agent-native-infra"],
      "env": {
        "YOTTA_API_KEY": "your-yotta-api-key-here"
      }
    }
  }
}

⚠️ Important: Replace your-yotta-api-key-here with your actual Yotta API Key from Step 1.
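If you prefer to script this step, the following sketch merges the "yotta" entry shown above into an existing mcp.json without clobbering other configured servers (the function name is illustrative; the API key is a placeholder):

```python
# Sketch: add the Yotta MCP server entry to Cursor's mcp.json,
# preserving any servers already configured there.
import json
from pathlib import Path

def add_yotta_server(cursor_dir: Path, api_key: str) -> dict:
    cfg_path = cursor_dir / "mcp.json"
    # Start from the existing config if the file is already present.
    config = json.loads(cfg_path.read_text()) if cfg_path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["yotta"] = {
        "command": "npx",
        "args": ["-y", "@yottascale/agent-native-infra"],
        "env": {"YOTTA_API_KEY": api_key},
    }
    cfg_path.write_text(json.dumps(config, indent=2))
    return config

# Example (targets the real ~/.cursor directory on your machine):
# add_yotta_server(Path.home() / ".cursor", "your-yotta-api-key-here")
```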

🔄 Restart Cursor

After saving the configuration:

  1. Completely close Cursor (all windows)

  2. Restart Cursor

  3. The Yotta MCP server will now be available

Step 3: Create Your GPU Pod 🎯

Now you can create a GPU Pod using natural language through Cursor Composer.

🎯 Method: Natural Language (Recommended)

  1. Open Cursor Composer by pressing:

    • Windows/Linux: Ctrl+I

    • macOS: Cmd+I

  2. Type your request in natural language:

Create a Yotta Pod named pytorch-rtx5090-jupyter using RTX 5090 GPU, 
PyTorch image, enable JupyterLab, and set password to yotta2025
  3. Let the AI handle it - it will call the pod_create tool with appropriate parameters

📝 Key Parameters Reference (example values)

  • name: Pod name, e.g. pytorch-rtx5090-jupyter

  • image: Docker image, e.g. yottalabsai/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04-2025081902

  • gpuType: GPU type, e.g. NVIDIA_RTX_5090_32G

  • gpuCount: Number of GPUs, e.g. 1

  • environmentVars: Environment variables; JUPYTER_PASSWORD is required

  • expose: Ports to expose, e.g. 22 (SSH) and 8888 (JupyterLab)

  • regions: Preferred regions, e.g. ["us-east-1", "us-east-2"]

Step 4: Wait for Pod Creation 🕐

After submitting your request, you'll receive a response confirming the pod creation.

✅ Success Response Example

{
  "id": "420522713875330018",
  "name": "pytorch-rtx5090-jupyter",
  "gpuType": "NVIDIA_RTX_5090_32G",
  "gpuDisplayName": "RTX 5090",
  "gpuCount": 1,
  "singleCardVramInGb": 32,
  "singleCardPrice": "0.65",
  "status": "INITIALIZE",
  "environmentVars": [
    {"key": "JUPYTER_PASSWORD", "value": "yotta2025"}
  ],
  "expose": [
    {"port": 22, "protocol": "SSH"},
    {"port": 8888, "protocol": "HTTP"}
  ]
}

📊 Status Progression

Your pod will go through these states:

  1. INITIALIZE → Pod is being created

  2. RUNNING → Pod is ready to use (usually takes 1-2 minutes)
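If you want to automate the wait, the progression above can be polled with a generic helper. `get_status` here is any callable that returns the pod's current status string (for example, a wrapper around a pod-details lookup); the function itself assumes nothing about the Yotta API:

```python
# Sketch: block until a pod reports RUNNING, or give up after a timeout.
import time

def wait_until_running(get_status, timeout_s=300, interval_s=10):
    # Poll `get_status()` every `interval_s` seconds until it returns
    # "RUNNING" or `timeout_s` seconds have elapsed.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == "RUNNING":
            return True
        time.sleep(interval_s)
    return False
```

Since pods usually reach RUNNING within 1-2 minutes, a 5-minute timeout leaves a comfortable margin.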

Step 5: Access Your Pod 🌐

Once the pod status changes to RUNNING, you can access it.

🌐 Via Yotta Console

  1. Navigate to Compute → Pods

  2. Click on your pod: pytorch-rtx5090-jupyter

  3. View the access URLs in the details panel

Get the SSH address from the pod details in the console.
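The "expose" list in the success response maps directly onto the access endpoints. A small sketch, assuming the host address comes from the pod details (the login user is a placeholder; check the console for the real SSH command):

```python
# Sketch: derive access endpoints from a pod's "expose" list.
def access_endpoints(host, expose):
    out = {}
    for e in expose:
        if e["protocol"] == "SSH":
            # Login user may differ; the console shows the exact command.
            out["ssh"] = f"ssh -p {e['port']} root@{host}"
        elif e["protocol"] == "HTTP":
            # JupyterLab; log in with the JUPYTER_PASSWORD you set.
            out["jupyter"] = f"http://{host}:{e['port']}"
    return out
```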


Pod Management Commands

Once your pod is running, you can manage it using natural language commands in Cursor Composer:

📋 List All Pods

For example: "List all my Yotta pods"

🔍 Get Pod Details

For example: "Show the details of pod pytorch-rtx5090-jupyter"

⏸️ Pause Pod (Stop Billing)

For example: "Pause the pod pytorch-rtx5090-jupyter"

💡 Tip: Pause pods when not in use to save costs!

🗑️ Delete Pod

For example: "Delete the pod pytorch-rtx5090-jupyter"

⚠️ Warning: This action is irreversible!
