API Spec
Welcome to the YottaLabs API documentation.
Overview
This API allows you to manage pods, images, and resources programmatically.
Authentication
Yotta Platform uses API key authentication. You can create and manage your keys from the Access Keys settings: on the Yotta Console, go to Settings -> Access Keys and copy your API key.
Header format:
x-api-key: <YOUR_API_KEY>
All requests must be sent over HTTPS.
Base URL
https://api.yottalabs.ai
Example REST API Call
Here is an example of how to create a pod using curl:
curl -X POST \
'https://api.yottalabs.ai/openapi/v1/pods/create' \
-H 'x-api-key: <YOUR_API_KEY>' \
-H 'Content-Type: application/json' \
-d '{
"image": "yottalabsai/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04-2025050802",
"gpuType": "NVIDIA_L4_24G",
"gpuCount": 1,
"environmentVars": [
{ "key": "JUPYTER_PASSWORD", "value": "your_password" }
],
"expose": [
{ "port": 22, "protocol": "SSH" }
]
}'
Replace <YOUR_API_KEY> with your actual API key.
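The same call can be made from Python. The sketch below uses only the standard library; the helper `build_create_pod_request` is illustrative (not part of any official SDK), and the endpoint, headers, and body fields mirror the curl example above.

```python
import json
import urllib.request

BASE_URL = "https://api.yottalabs.ai"

def build_create_pod_request(api_key):
    """Return (url, headers, body) for POST /openapi/v1/pods/create."""
    url = f"{BASE_URL}/openapi/v1/pods/create"
    headers = {"x-api-key": api_key, "Content-Type": "application/json"}
    body = {
        "image": "yottalabsai/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04-2025050802",
        "gpuType": "NVIDIA_L4_24G",
        "gpuCount": 1,
        "environmentVars": [{"key": "JUPYTER_PASSWORD", "value": "your_password"}],
        "expose": [{"port": 22, "protocol": "SSH"}],
    }
    return url, headers, body

if __name__ == "__main__":
    url, headers, body = build_create_pod_request("<YOUR_API_KEY>")
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # on success: {"message": "success", "code": 10000, "data": <pod id>}
```

Separating request construction from the network call keeps the payload easy to inspect and test before sending it.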
Pods
Create Pod
Endpoint:
POST /openapi/v1/pods/create
Creates and launches a new compute pod using a container image.
Request Body
Example: Public Image
{
"image": "yottalabsai/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04-2025050802",
"gpuType": "NVIDIA_L4_24G",
"gpuCount": 1,
"environmentVars": [
{
"key": "JUPYTER_PASSWORD",
"value": "9f2375dc-277b-4550-add9-563ad23973a5"
}
],
"expose": [
{
"port": 22,
"protocol": "SSH"
}
]
}
Example: Private Image
{
"imagePublicType": "PRIVATE",
"imageRegistryUsername": "yottalabsai",
"imageRegistryToken": "dckr_pat__token",
"image": "yottalabsai/xxxxx",
"gpuType": "NVIDIA_L4_24G",
"gpuCount": 1,
"initializationCommand": "sshd &",
"expose": [
{
"port": 22,
"protocol": "SSH"
}
]
}
Params

Name      | Location | Type   | Required | Description
----------|----------|--------|----------|------------
x-api-key | header   | string | Yes      | none
Response Examples (200)
{
"message": "success",
"code": 10000,
"data": 43264375788654
}
List Pods
Endpoint:
GET /openapi/v1/pods/list
Lists all pods owned by your organization.
Params

Name      | Location | Type   | Required | Description
----------|----------|--------|----------|------------
x-api-key | header   | string | Yes      | none
Response Examples (200)
{
"message": "success",
"code": 10000,
"data": [{
"id": "331864222263676928",
"orgId": "217646376996249600",
"applicantId": "217646376987860992",
"podName": "Test_from_SDK",
"imageId": "219419172892971011",
"officialImage": "CUSTOM",
"imagePublicType": "PUBLIC",
"image": "yottalabsai/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04-2025050802",
"imageRegistryUsername": null,
"resourceType": "GPU",
"gpuType": "NVIDIA_L4_24G",
"gpuDisplayName": "L4",
"gpuCount": 1,
"singleCardVramInGb": 24,
"singleCardRamInGb": 10,
"singleCardVcpu": 3,
"location": "OR",
"region": "gke-dev-us-west-1",
"cloudType": "SECURE",
"containerVolumeInGb": 252,
"persistentVolumeInGb": 0,
"persistentMountPath": null,
"networkUploadMbps": 1258.2912,
"networkDownloadMbps": 1761.60768,
"diskReadSpeedMbps": 2516.5824,
"diskWriteSpeedMbps": 5872.0256,
"singleCardPrice": 0.6,
"persistentVolumePrice": 0.00005,
"containerVolumePrice": 0.00005,
"initializationCommand": "",
"environmentVars": [{
"key": "JUPYTER_PASSWORD",
"value": "9f2375dc-277b-4550-add9-563ad23973a5"
}],
"expose": [{
"port": 22,
"proxyPort": 32002,
"protocol": "SSH",
"host": "34.19.26.12",
"healthy": true,
"ingressUrl": "ssh [email protected] -p 32002 -i <private key file>",
"serviceName": "SSH Port"
}],
"sshCmd": "",
"status": "TERMINATED",
"createdAt": "2025-07-04 10:29:56",
"updatedAt": "2025-07-04 10:48:13"
}]
}
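A list response like the one above is straightforward to post-process. This sketch extracts a small summary per pod, including the healthy SSH ingress URL if one is exposed; `summarize_pods` is an illustrative helper, not part of any official SDK.

```python
def summarize_pods(response):
    """Return (podName, status, sshIngressUrl) per pod in a /pods/list response.

    sshIngressUrl is None when no healthy SSH port is exposed.
    """
    summaries = []
    for pod in response.get("data") or []:
        ssh = next(
            (p.get("ingressUrl") for p in pod.get("expose") or []
             if p.get("protocol") == "SSH" and p.get("healthy")),
            None,
        )
        summaries.append((pod["podName"], pod["status"], ssh))
    return summaries
```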
Delete Pod
Endpoint:
DELETE /openapi/v1/pods/{podId}
Permanently deletes a compute pod.
Params

Name      | Location | Type   | Required | Description
----------|----------|--------|----------|------------
podId     | path     | string | Yes      | none
x-api-key | header   | string | Yes      | none
Response Examples (200)
{
"message": "success",
"code": 10000,
"data": null
}
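Every endpoint in this API wraps its payload in the same `{message, code, data}` envelope, and the examples show `code` 10000 on success. Assuming 10000 is the only success code (other business codes are not documented here), a small unwrapping helper keeps callers from checking the envelope by hand; `unwrap` is an illustrative name.

```python
SUCCESS_CODE = 10000  # success code shown in all response examples

def unwrap(response):
    """Return response["data"] on success; raise on any other business code."""
    if response.get("code") != SUCCESS_CODE:
        raise RuntimeError(
            f"API error {response.get('code')}: {response.get('message')}"
        )
    return response.get("data")
```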
Data Schema
ResultVoid
{
"message": "string",
"code": 0,
"data": {}
}
Name    | Type           | Required | Description
--------|----------------|----------|------------
message | string         | false    | message
code    | integer(int32) | false    | code
data    | object         | false    | data
ResultString
{
"message": "success",
"code": 10000,
"data": "333697876700471296"
}
Name    | Type           | Required | Description
--------|----------------|----------|------------
message | string         | false    | message
code    | integer(int32) | false    | code
data    | string         | false    | data
ResultObject
{
"message": "success",
"code": 10000,
"data": {}
}
Name    | Type           | Required | Description
--------|----------------|----------|------------
message | string         | true     | message
code    | integer(int32) | true     | code
data    | object         | false    | data
OpenapiPodCreateRequest
{
"cloudType": "SECURE",
"officialImage": "OFFICIAL",
"imagePublicType": "PUBLIC",
"resourceType": "GPU",
"region": "us-west-1",
"podName": "my pod",
"image": "aoudiamoncef/ubuntu-sshd",
"imageRegistryUsername": "username",
"imageRegistryToken": "token",
"gpuType": "NVIDIA_L4_24G",
"gpuCount": 1,
"containerVolumeInGb": 100,
"persistentVolumeInGb": 100,
"persistentMountPath": "/workspace",
"initializationCommand": "",
"environmentVars": [
{
"key": "key",
"value": "value"
}
],
"expose": [
{
"port": 22,
"protocol": "SSH"
}
]
}
Name                  | Type                          | Required | Default / Constraints                | Description
----------------------|-------------------------------|----------|--------------------------------------|------------
cloudType             | string                        | false    | SECURE                               | CloudTypeEnum: SECURE, COMMUNITY
officialImage         | string                        | false    | CUSTOM                               | ImageSourceEnum: OFFICIAL, CUSTOM
imagePublicType       | string                        | false    | PUBLIC                               | ImagePublicTypeEnum: PUBLIC, PRIVATE
resourceType          | string                        | false    | GPU                                  | resource type: GPU, CPU
region                | string                        | false    | none                                 | region
podName               | string                        | false    | My Pod                               | pod nickname
image                 | string                        | true     | none                                 | image name
imageRegistryUsername | string                        | false    | none                                 | image registry username
imageRegistryToken    | string                        | false    | none                                 | image registry token
gpuType               | string                        | true     | none                                 | GPU type
gpuCount              | integer(int32)                | false    | min 1 when resourceType=GPU          | GPU count
containerVolumeInGb   | integer(int32)                | false    | depends on gpuType                   | container volume, unit: GB
persistentVolumeInGb  | integer(int32)                | false    | depends on gpuType                   | persistent volume, unit: GB
persistentMountPath   | string                        | false    | required if persistentVolumeInGb > 0 | persistent mount path
initializationCommand | string                        | false    | none                                 | initialization command
environmentVars       | [KeyValuePairDTO]             | false    | none                                 | environment variables needed by the image, e.g. [{"key": "myKey", "value": "myValue"}]
expose                | [OpenapiPodExposePortRequest] | false    | none                                 | ports to expose, e.g. [{"port": 8000, "protocol": "HTTP"}]
OpenapiPodExposePortRequest
{
"port": 65535,
"protocol": "string"
}
Name     | Type           | Required | Description
---------|----------------|----------|------------
port     | integer(int32) | true     | port
protocol | string         | false    | protocol: SSH, HTTP, TCP
OpenapiPodExposePortResponse
{
"port": 22,
"proxyPort": 30010,
"protocol": "SSH",
"host": "string",
"healthy": true,
"ingressUrl": "string",
"serviceName": "string"
}
Name        | Type           | Required | Description
------------|----------------|----------|------------
port        | integer(int32) | true     | port
proxyPort   | integer(int32) | false    | proxy port
protocol    | string         | false    | protocol: SSH, HTTP, TCP
host        | string         | false    | host
healthy     | boolean        | false    | healthy
ingressUrl  | string         | false    | ingress URL
serviceName | string         | false    | service name
KeyValuePairDTO
{
"key": "JUPYTER_PASSWORD",
"value": "9f2375dc-277b-4550-add9-563ad23973a5"
}
Name  | Type   | Required | Description
------|--------|----------|------------
key   | string | true     | key
value | string | true     | value
ResultListOpenapiPodDetailResponse
{
"message": "success",
"code": 10000,
"data": [{
"id": "331864222263676928",
"orgId": "217646376996249600",
"applicantId": "217646376987860992",
"podName": "Test_from_SDK",
"imageId": "219419172892971011",
"officialImage": "CUSTOM",
"imagePublicType": "PUBLIC",
"image": "yottalabsai/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04-2025050802",
"imageRegistryUsername": null,
"resourceType": "GPU",
"gpuType": "NVIDIA_L4_24G",
"gpuDisplayName": "L4",
"gpuCount": 1,
"singleCardVramInGb": 24,
"singleCardRamInGb": 10,
"singleCardVcpu": 3,
"location": "OR",
"region": "gke-dev-us-west-1",
"cloudType": "SECURE",
"containerVolumeInGb": 252,
"persistentVolumeInGb": 0,
"persistentMountPath": null,
"networkUploadMbps": "1258.2912",
"networkDownloadMbps": "1761.60768",
"diskReadSpeedMbps": "2516.5824",
"diskWriteSpeedMbps": "5872.0256",
"singleCardPrice": "0.6",
"persistentVolumePrice": "0.00005",
"containerVolumePrice": "0.00005",
"initializationCommand": "",
"environmentVars": [{
"key": "JUPYTER_PASSWORD",
"value": "9f2375dc-277b-4550-add9-563ad23973a5"
}],
"expose": [{
"port": 22,
"proxyPort": 32002,
"protocol": "SSH",
"host": "34.19.26.12",
"healthy": true,
"ingressUrl": "ssh [email protected] -p 32002 -i <private key file>",
"serviceName": "SSH Port"
}],
"sshCmd": "",
"status": "TERMINATED",
"createdAt": "2025-07-04 10:29:56",
"updatedAt": "2025-07-04 10:48:13"
}]
}
Name    | Type                       | Required | Description
--------|----------------------------|----------|------------
message | string                     | false    | message
code    | integer(int32)             | false    | code
data    | [OpenapiPodDetailResponse] | false    | pod list
OpenapiPodDetailResponse
{
"id": "331864222263676928",
"orgId": "217646376996249600",
"applicantId": "217646376987860992",
"podName": "Test_from_SDK",
"imageId": "219419172892971011",
"officialImage": "CUSTOM",
"imagePublicType": "PUBLIC",
"image": "yottalabsai/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04-2025050802",
"imageRegistryUsername": null,
"resourceType": "GPU",
"gpuType": "NVIDIA_L4_24G",
"gpuDisplayName": "L4",
"gpuCount": 1,
"singleCardVramInGb": 24,
"singleCardRamInGb": 10,
"singleCardVcpu": 3,
"location": "OR",
"region": "gke-dev-us-west-1",
"cloudType": "SECURE",
"containerVolumeInGb": 252,
"persistentVolumeInGb": 0,
"persistentMountPath": null,
"networkUploadMbps": "1258.2912",
"networkDownloadMbps": "1761.60768",
"diskReadSpeedMbps": "2516.5824",
"diskWriteSpeedMbps": "5872.0256",
"singleCardPrice": "0.6",
"persistentVolumePrice": "0.00005",
"containerVolumePrice": "0.00005",
"initializationCommand": "",
"environmentVars": [{
"key": "JUPYTER_PASSWORD",
"value": "9f2375dc-277b-4550-add9-563ad23973a5"
}],
"expose": [{
"port": 22,
"proxyPort": 32002,
"protocol": "SSH",
"host": "34.19.26.12",
"healthy": true,
"ingressUrl": "ssh [email protected] -p 32002 -i <private key file>",
"serviceName": "SSH Port"
}],
"sshCmd": "",
"status": "TERMINATED",
"createdAt": "2025-07-04 10:29:56",
"updatedAt": "2025-07-04 10:48:13"
}
Name                  | Type                           | Required | Description
----------------------|--------------------------------|----------|------------
id                    | string                         | false    | id
orgId                 | string                         | false    | org id
applicantId           | string                         | false    | applicant id
podName               | string                         | false    | pod nickname
imageId               | string                         | false    | image id
officialImage         | string                         | true     | ImageSourceEnum: OFFICIAL, CUSTOM
imagePublicType       | string                         | true     | ImagePublicTypeEnum: PUBLIC, PRIVATE
image                 | string                         | false    | image name
imageRegistryUsername | string                         | false    | image registry username
resourceType          | string                         | false    | resource type: GPU, CPU
gpuType               | string                         | false    | GPU type, e.g. RTX_4090_24G
gpuDisplayName        | string                         | false    | GPU display name
gpuCount              | integer(int32)                 | false    | GPU count
singleCardVramInGb    | integer(int32)                 | false    | single card VRAM, unit: GB
singleCardRamInGb     | integer(int32)                 | false    | single card RAM, unit: GB
singleCardVcpu        | integer(int32)                 | false    | single card vCPU count
location              | string                         | false    | location
region                | string                         | false    | region
cloudType             | string                         | false    | cloud type: SECURE, COMMUNITY
containerVolumeInGb   | integer(int32)                 | false    | container volume, unit: GB
persistentVolumeInGb  | integer(int32)                 | false    | persistent volume, unit: GB
persistentMountPath   | string                         | false    | persistent mount path
networkUploadMbps     | number                         | false    | network upload speed
networkDownloadMbps   | number                         | false    | network download speed
diskReadSpeedMbps     | number                         | false    | disk read speed
diskWriteSpeedMbps    | number                         | false    | disk write speed
singleCardPrice       | number                         | false    | GPU single card price
persistentVolumePrice | number                         | false    | persistent volume price per GB/hour
containerVolumePrice  | number                         | false    | container volume price per GB/hour
initializationCommand | string                         | false    | initialization command
environmentVars       | [KeyValuePairDTO]              | false    | environment variables needed by the image, e.g. [{"key": "myKey", "value": "myValue"}]
expose                | [OpenapiPodExposePortResponse] | false    | exposed ports, e.g. [{"port": 8000, "protocol": "http", "proxyPort": 35001}]
sshCmd                | string                         | false    | ssh cmd
status                | string                         | false    | pod status (see Enum Definition)
createdAt             | string(date-time)              | false    | create time
updatedAt             | string(date-time)              | false    | update time
Enum Definition
officialImage:
- OFFICIAL: Official image provided by YottaLabs
- CUSTOM: Custom image provided by the user

imagePublicType:
- PUBLIC: Publicly available image
- PRIVATE: Private image requiring credentials

resourceType:
- GPU: GPU resource
- CPU: CPU resource

gpuType:
- NVIDIA_L4_24G: NVIDIA L4 Tensor Core GPU with 24 GB GPU memory
- NVIDIA_H100_80GB_HBM3_80G: NVIDIA H100 Tensor Core GPU leveraging the high bandwidth of NVLink, with 80 GB GPU memory
- NVIDIA_GeForce_RTX_4090_24G: NVIDIA GeForce RTX 4090 GPU with 24 GB GPU memory
- NVIDIA_GeForce_RTX_5090_32G: NVIDIA GeForce RTX 5090 GPU with 32 GB GPU memory

cloudType:
- SECURE: Secure cloud
- COMMUNITY: Community cloud

status:
- INITIALIZE: Pod is initializing
- RUNNING: Pod is running
- PAUSING: Pod is pausing
- PAUSED: Pod is paused
- TERMINATING: Pod is terminating
- TERMINATED: Pod is terminated
- FAILED: Pod failed
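A newly created pod moves through this lifecycle before it is usable, so clients typically poll until it reaches RUNNING. The sketch below assumes a caller-supplied `get_status` function (for example, one that calls GET /openapi/v1/pods/list and returns the matching pod's status); the helper itself is illustrative, not part of any official SDK.

```python
import time

TERMINAL_STATUSES = {"TERMINATED", "FAILED"}

def wait_until_running(get_status, pod_id, timeout_s=600, interval_s=5):
    """Poll get_status(pod_id) until RUNNING; raise on terminal status or timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status(pod_id)
        if status == "RUNNING":
            return status
        if status in TERMINAL_STATUSES:
            # TERMINATED and FAILED never transition back; stop waiting.
            raise RuntimeError(f"pod {pod_id} entered terminal status {status}")
        time.sleep(interval_s)
    raise TimeoutError(f"pod {pod_id} not RUNNING within {timeout_s}s")
```

Treating TERMINATED and FAILED as terminal avoids polling forever on a pod that will never start.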
Have feedback or questions? Email us at [email protected]