# Get Started
Get started with Atlas Cloud Model APIs in minutes. This guide covers API key setup, making API calls, and using third-party tools.
## Prerequisites
- An Atlas Cloud account
- An API key
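Once you have a key, it is safest to keep it in an environment variable rather than hardcoding it in scripts. The variable name below is just this guide's convention, not something the API requires:

```shell
# Keep the key out of source code: export it once per shell session.
export ATLAS_API_KEY="your-api-key"

# Confirm it is set before making calls:
echo "${ATLAS_API_KEY:+key is set}"
```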
## API Overview
Atlas Cloud provides different API endpoints for different model types:
| Model Type | Base URL | Format |
|---|---|---|
| LLM (Chat) | https://api.atlascloud.ai/v1 | OpenAI-compatible |
| Image Generation | https://api.atlascloud.ai/api/v1 | Atlas Cloud API |
| Video Generation | https://api.atlascloud.ai/api/v1 | Atlas Cloud API |
| Media Upload | https://api.atlascloud.ai/api/v1 | Atlas Cloud API |
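Note that the two API families use different base paths: chat lives under `/v1`, while image, video, and upload endpoints live under `/api/v1`. To avoid mixing them up in client code, it can help to keep the routes in one place. The helper below is a sketch for this guide, not part of any Atlas Cloud SDK:

```python
# Base URLs for the two API families (from the table above).
LLM_BASE_URL = "https://api.atlascloud.ai/v1"        # OpenAI-compatible
MEDIA_BASE_URL = "https://api.atlascloud.ai/api/v1"  # Atlas Cloud API

def endpoint(kind: str) -> str:
    """Return the full URL for a given call type (illustrative helper)."""
    routes = {
        "chat": f"{LLM_BASE_URL}/chat/completions",
        "image": f"{MEDIA_BASE_URL}/model/generateImage",
        "video": f"{MEDIA_BASE_URL}/model/generateVideo",
        "upload": f"{MEDIA_BASE_URL}/model/uploadMedia",
    }
    return routes[kind]
```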
## LLM / Chat Completions
The LLM API is fully OpenAI-compatible. Use the OpenAI SDK with Atlas Cloud's base URL.
### Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://api.atlascloud.ai/v1"
)

# Non-streaming
response = client.chat.completions.create(
    model="deepseek-v3",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ]
)
print(response.choices[0].message.content)

# Streaming
stream = client.chat.completions.create(
    model="deepseek-v3",
    messages=[
        {"role": "user", "content": "Write a short poem about AI."}
    ],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

### Node.js / TypeScript
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "your-api-key",
  baseURL: "https://api.atlascloud.ai/v1",
});

// Non-streaming
const response = await client.chat.completions.create({
  model: "deepseek-v3",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing in simple terms." },
  ],
});
console.log(response.choices[0].message.content);

// Streaming
const stream = await client.chat.completions.create({
  model: "deepseek-v3",
  messages: [{ role: "user", content: "Write a short poem about AI." }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```

### cURL
```shell
curl https://api.atlascloud.ai/v1/chat/completions \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v3",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain quantum computing in simple terms."}
    ]
  }'
```

## Image Generation
```python
import requests

response = requests.post(
    "https://api.atlascloud.ai/api/v1/model/generateImage",
    headers={
        "Authorization": "Bearer your-api-key",
        "Content-Type": "application/json"
    },
    json={
        "model": "seedream-3.0",
        "prompt": "A futuristic cityscape at sunset, cyberpunk style"
    }
)
result = response.json()
prediction_id = result["data"]["id"]
print(f"Prediction ID: {prediction_id}")
```

## Video Generation
```python
import requests

response = requests.post(
    "https://api.atlascloud.ai/api/v1/model/generateVideo",
    headers={
        "Authorization": "Bearer your-api-key",
        "Content-Type": "application/json"
    },
    json={
        "model": "kling-v2.0",
        "prompt": "A timelapse of flowers blooming in a garden"
    }
)
result = response.json()
prediction_id = result["data"]["id"]
print(f"Prediction ID: {prediction_id}")
```

## Upload Media
Upload local files to get temporary URLs for image-to-video, image editing, and other multi-step workflows:
```python
import requests

response = requests.post(
    "https://api.atlascloud.ai/api/v1/model/uploadMedia",
    headers={"Authorization": "Bearer your-api-key"},
    files={"file": open("photo.jpg", "rb")}
)
url = response.json().get("url")
print(f"Uploaded file URL: {url}")
```

Uploaded files are intended for temporary use with Atlas Cloud generation tasks and may be cleaned up periodically.
## Get Async Results
Image and video generation tasks run asynchronously. Poll for results using the prediction ID:
```python
import requests
import time

def wait_for_result(prediction_id, api_key, interval=5):
    while True:
        resp = requests.get(
            f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}",
            headers={"Authorization": f"Bearer {api_key}"}
        )
        data = resp.json()
        status = data["data"]["status"]
        if status == "completed":
            return data["data"]["outputs"][0]
        elif status == "failed":
            raise Exception(f"Task failed: {data['data'].get('error')}")
        print(f"Status: {status}. Waiting...")
        time.sleep(interval)

result = wait_for_result(prediction_id, "your-api-key")
print(f"Result: {result}")
```

## Using Third-Party Tools
### Chatbox / Cherry Studio
- Open Settings → Add Custom Provider
- Set API Host to `https://api.atlascloud.ai/v1` (the `/v1` is required)
- Enter your API key
- Select a model name from the Model Library
- Start chatting
### OpenWebUI
Configure an OpenAI-compatible connection with the base URL `https://api.atlascloud.ai/v1` and your API key.
### IDE Integration
Use the MCP Server to access Atlas Cloud models directly from your IDE (Cursor, Claude Desktop, Claude Code, VS Code, etc.).
## Explore Models
Browse all 300+ models on the Model Library. Each model page includes:
- An interactive Playground for testing with different parameters
- API View showing the exact request format and parameters
- Pricing information
For detailed API reference, see the API Reference.