GPU Pods
Pods are the core compute units in Yotta Labs’ GPU Cloud. They allow users to deploy, manage, and connect to isolated GPU workloads across heterogeneous hardware through the Yotta SaaS Console or the OpenAPI.
🧭 Navigation
You can manage Pods via the Yotta Console or through the OpenAPI.
Console: Go to Compute → Pods on the left sidebar.
OpenAPI: API Reference
💻 Managing Pods via Console
Pod Page Overview
When entering the Pods page:
By default, the system displays Pods in the In Progress state, including:
Initialize – Resources are being allocated; the Pod is deploying.
Running – The Pod is running normally.
Stopping – The Pod is pausing; resources are being reclaimed.
Stopped – The Pod has been paused.
Terminating – The Pod is being terminated; resources are being reclaimed.
Click the History tab to view Pods that have completed within the last 24 hours:
Terminated – The Pod has been deleted.
Failed – Deployment failed. Common causes:
Insufficient system resources
Invalid image information
You can:
Search by Pod name (supports fuzzy search)
Filter by Pod Status or GPU Type
Click Deploy (top right corner) to create a new Pod
⚙️ Deploying a Pod
Step-by-Step Guide
Navigate to Compute → Pods
Click Deploy (top right)
You’ll enter the GPU Selection page.
Select GPU Type
Choose a GPU model suitable for your workload.
Configure Pod
Fill in required parameters (fields marked with * are mandatory).
Click Edit to choose between a Public Image or a Private Image.
Image Requirements
Platform – Must be x86 architecture
OS Base – Must be Debian/Ubuntu
Unsupported – ARM and other Linux distributions
Deploy
Click Deploy to complete the process.
🔌 Connecting to Your Pod
Once the Pod is launched:
Click the Connect button on the Pod card to view exposed services.
Availability depends on the port configuration defined at deployment.
When the container port is Ready, the status will update automatically.
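If you script against a Pod’s exposed services, a quick TCP reachability check can confirm that a port is actually accepting connections before you connect. The sketch below is a minimal, generic example; the host and port are placeholders for the values shown on your Pod’s Connect panel.

```python
import socket

def port_is_ready(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholders: use the endpoint shown on your Pod's Connect panel.
if port_is_ready("pod-example.example.com", 8888):
    print("Service port is reachable; safe to connect.")
else:
    print("Port not ready yet; the Pod may still be initializing.")
```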
📜 Viewing Logs
Click Logs on the Pod card to view both:
System Logs (platform-level)
Container Logs (application-level)
This helps with debugging deployment or runtime issues.
🧊 Pausing or Terminating Pods
🔸 Pause
If you only need to suspend temporarily:
Click Pause on the Pod card.
Only Volume storage will continue to incur charges.
Click Run to restart it at any time.
Pods can be edited while paused.
🔸 Terminate
If you want to remove the Pod completely:
Click the “...” on the Pod card → choose Terminate.
The Pod will be permanently deleted and no longer billed.
Terminated Pods cannot be edited or restarted.
✏️ Editing a Pod
Go to Compute → Pods and locate the Pod.
Click Pause and wait until the Pod enters the Stopped state.
Click “...” → Edit, modify configurations, and save.
Click Run to restart the Pod with the new settings.
📈 Pod Status Reference
Initialize – Resource allocation in progress; the Pod is deploying
Running – The Pod is running
Stopping – Pausing in progress; resources are being reclaimed
Stopped – The Pod is paused
Terminating – Termination in progress; resources are being reclaimed
Terminated – The Pod is fully terminated
Failed – Deployment failed (insufficient resources / invalid image)
💰 Pricing & Billing
Formula
Pod hourly cost = (GPU unit price × number of GPUs)
+ (Disk hourly rate × GB size)
+ (Volume hourly rate × GB size)
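As a worked example of the formula, the snippet below computes an hourly cost from hypothetical unit prices. The rates are illustrative only; check the Console for actual pricing.

```python
# Hypothetical rates -- illustrative only, not Yotta Labs' actual pricing.
gpu_unit_price_per_hr = 1.20    # $ per GPU per hour
num_gpus = 2
disk_rate_per_gb_hr = 0.0001    # $ per GB per hour
disk_gb = 100
volume_rate_per_gb_hr = 0.0002  # $ per GB per hour
volume_gb = 200

pod_hourly_cost = (
    gpu_unit_price_per_hr * num_gpus
    + disk_rate_per_gb_hr * disk_gb
    + volume_rate_per_gb_hr * volume_gb
)
print(f"Estimated Pod hourly cost: ${pod_hourly_cost:.4f}")
# 1.20*2 + 0.0001*100 + 0.0002*200 = 2.40 + 0.01 + 0.04 = 2.45
```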
Deduction Rules
Billing starts once the Pod is Running.
When your balance nears $0, all active Pods are terminated automatically.
To avoid charges:
Use Pause to suspend temporarily (persistent Volumes are still billed).
Use Terminate to stop billing completely.
Pause – Charges continue for Volumes (Stopped state)
Terminate – No charges (Terminated state)
🧾 Viewing Your Bill
Go to Billing in the left sidebar to view:
Pod usage breakdown
GPU, Disk, and Volume hourly costs
Historical billing data
🧩 Managing Pods via OpenAPI
You can also manage Pods programmatically via Yotta Labs’ OpenAPI.
API Reference
Tip: Always review the API documentation before calling endpoints to avoid common request errors (invalid parameters, insufficient balance, etc.).
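As a rough illustration of programmatic management, the sketch below lists Pods over HTTP with an API key. The base URL, route, and response fields are placeholders, not the documented API; take the real endpoints and authentication scheme from the API Reference.

```python
import requests

# Placeholders -- substitute the base URL, route, and auth scheme from the API Reference.
API_BASE = "https://api.example.com"   # hypothetical host
API_KEY = "YOUR_API_KEY"               # issued from the Yotta Console

def list_pods():
    """List Pods via a hypothetical REST endpoint (see the API Reference for the real one)."""
    resp = requests.get(
        f"{API_BASE}/v1/pods",         # hypothetical route
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()            # surfaces errors such as invalid parameters
    return resp.json()

if __name__ == "__main__":
    for pod in list_pods().get("pods", []):   # hypothetical response shape
        print(pod.get("name"), pod.get("status"))
```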
🧱 Example Use Cases
Automated Pod Deployment via Python SDK
Monitoring Pod Logs using API polling
Scaling Workloads across multiple GPU types
Integrating with CI/CD to trigger training jobs automatically
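For instance, the log-polling use case above could look roughly like the sketch below. The endpoint and response format are assumptions for illustration; consult the API Reference for the actual logs route.

```python
import time
import requests

API_BASE = "https://api.example.com"   # hypothetical host; see the API Reference
API_KEY = "YOUR_API_KEY"

def poll_container_logs(pod_id: str, interval_s: float = 10.0, iterations: int = 6):
    """Periodically fetch container logs for a Pod via a hypothetical logs endpoint."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    for _ in range(iterations):
        resp = requests.get(f"{API_BASE}/v1/pods/{pod_id}/logs", headers=headers, timeout=10)
        resp.raise_for_status()
        print(resp.text)               # assumes a plain-text log payload
        time.sleep(interval_s)

# poll_container_logs("pod-123")       # hypothetical Pod ID
```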
🪄 Best Practices
Use Pause instead of Terminate for short-term downtime.
Monitor balance regularly to prevent auto-termination.
Always verify image compatibility (x86, Debian/Ubuntu-based).
For debugging, prefer checking container logs first.
🧩 Related Docs
Yotta Console Overview