# Image Generation

## Overview
Atlas Cloud provides access to a wide range of AI image generation models through a unified API. Generate stunning images from text prompts, transform existing images, remove backgrounds, swap faces, and more — all with a single API call.
## Supported Model Types
| Type | Description | Use Cases |
|---|---|---|
| Text-to-Image | Generate images from text descriptions | Creative content, marketing, design, prototyping |
| Image-to-Image | Transform and enhance existing images | Style transfer, inpainting, outpainting |
| Image Tools | Advanced image processing and manipulation | Background removal, face swap, upscaling, restoration |
## Featured Models
| Model | Provider | Highlights |
|---|---|---|
| Seedream | ByteDance | High-quality text-to-image with excellent prompt following |
| FLUX | Black Forest Labs | Fast, high-fidelity image generation with multiple variants (Dev, Schnell, Pro) |
| Qwen-Image | Alibaba | Powerful multilingual image generation |
| Ideogram | Ideogram | Excellent text rendering in images |
| HiDream | HiDream | Creative and artistic image generation |
| Nano Banana | Google | Fast, high-quality image generation |
For a complete list of all image models and their specifications, visit the Model Library.
## API Usage

### Text-to-Image
```python
import requests

response = requests.post(
    "https://api.atlascloud.ai/api/v1/model/generateImage",
    headers={
        "Authorization": "Bearer your-api-key",
        "Content-Type": "application/json"
    },
    json={
        "model": "seedream-3.0",
        "prompt": "A serene Japanese garden with cherry blossoms, watercolor style"
    }
)

result = response.json()
prediction_id = result["data"]["id"]
print(f"Prediction ID: {prediction_id}")
```

#### Node.js Example
```javascript
const response = await fetch(
  "https://api.atlascloud.ai/api/v1/model/generateImage",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer your-api-key",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "seedream-3.0",
      prompt: "A serene Japanese garden with cherry blossoms, watercolor style",
    }),
  }
);

const predictionId = (await response.json()).data.id;
console.log(`Prediction ID: ${predictionId}`);
```

#### cURL Example
```shell
curl -X POST https://api.atlascloud.ai/api/v1/model/generateImage \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "seedream-3.0",
    "prompt": "A serene Japanese garden with cherry blossoms, watercolor style"
  }'
```

### Get Image Result
Image generation is asynchronous. Use the prediction ID to retrieve the result:
```python
import requests
import time

def get_image_result(prediction_id, api_key):
    while True:
        response = requests.get(
            f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}",
            headers={"Authorization": f"Bearer {api_key}"}
        )
        result = response.json()
        if result["data"]["status"] == "completed":
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(f"Generation failed: {result['data'].get('error')}")
        time.sleep(2)  # Poll every 2 seconds

image_url = get_image_result(prediction_id, "your-api-key")
print(f"Image URL: {image_url}")
```

## Using LoRA Models
You can enhance image generation with LoRA (Low-Rank Adaptation) models for custom styles and fine-grained control. See the LoRA Guide for detailed instructions on finding, selecting, and using LoRA models with Atlas Cloud.
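As a rough illustration, a LoRA might be attached by adding an extra field to the generation request. Note that the `lora` field name, the adapter identifier, and the weight format below are assumptions for illustration only; consult the LoRA Guide for the actual request shape supported by Atlas Cloud.

```python
# Hypothetical sketch: the "lora" field name, adapter id, and weight format
# are assumptions for illustration; see the LoRA Guide for the real schema.
payload = {
    "model": "flux-dev",
    "prompt": "A serene Japanese garden with cherry blossoms, watercolor style",
    # Attach a LoRA adapter with a blend strength (0.0-1.0, hypothetical format)
    "lora": {"model": "example-watercolor-lora", "weight": 0.8},
}

# The payload would then be sent exactly like a plain text-to-image request:
# requests.post("https://api.atlascloud.ai/api/v1/model/generateImage",
#               headers={"Authorization": "Bearer your-api-key"}, json=payload)
print(payload["lora"])
```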
## Tips for Better Results
- Be specific: Describe style, composition, lighting, and mood in your prompt
- Use negative prompts: Specify what you don't want in the image (if supported by the model)
- Experiment with models: Different models excel at different styles — photorealistic, anime, artistic, etc.
- Adjust parameters: Each model has unique parameters. Check the model's detail page on the Model Library for available options
- Use seed values: Set a seed for reproducible results when iterating on prompts
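The tips above can be combined in a single request body. As a sketch only: `negative_prompt` and `seed` are common parameter names across image APIs, but their availability and exact names vary per model on Atlas Cloud, so check the model's detail page before relying on them.

```python
# Hypothetical sketch: "negative_prompt" and "seed" are common parameter names,
# but support and naming vary per model; verify on the model's detail page.
request_body = {
    "model": "seedream-3.0",
    # Specific prompt: style, composition, lighting, and mood all spelled out
    "prompt": "A serene Japanese garden with cherry blossoms, "
              "soft morning light, wide composition, watercolor style",
    "negative_prompt": "blurry, low quality, watermark, text",  # if supported
    "seed": 42,  # fixed seed for reproducible results while iterating
}
print(request_body["seed"])
```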
## Industry-Leading Speed
Atlas Cloud's optimized inference infrastructure delivers fast image generation speeds — under 5 seconds for most models. Combined with competitive pricing, it's ideal for both prototyping and production workloads.
For the full API specification and model-specific parameters, see the API Reference.