Native audio-visual joint generation model by ByteDance. Supports unified multimodal generation with precise audio-visual sync, cinematic camera control, and enhanced narrative coherence.

Your request will cost $0.018 per run. For $10 you can run this model approximately 555 times.
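That estimate is simple division; a quick sanity check in Python:

```python
price_per_run = 0.018  # USD per run, as quoted above
budget = 10.00         # USD
runs = int(budget // price_per_run)  # whole runs the budget covers
print(runs)  # 555
```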
Here's what you can do next:
import requests
import time

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video-fast",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": "Bearer $ATLASCLOUD_API_KEY"})
        result = response.json()
        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing, wait 2 seconds
            time.sleep(2)

video_url = check_status()

Install the required package for your language.
pip install requests

All API requests require authentication via an API key. You can get your API key from the Atlas Cloud dashboard.
export ATLASCLOUD_API_KEY="your-api-key-here"

import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.
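The backend-proxy approach can be sketched with the standard library alone: a small server holds the key in its own environment and forwards client requests upstream, so the key never reaches the browser. This is a minimal illustration, not production code (no client auth, rate limiting, or TLS of its own):

```python
# Minimal backend-proxy sketch: clients POST to this server, which attaches
# the API key server-side before forwarding to Atlas Cloud.
import os
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.atlascloud.ai/api/v1/model/generateVideo"

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the client's JSON body verbatim
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            UPSTREAM,
            data=body,
            headers={
                "Content-Type": "application/json",
                # The key lives only in the server's environment.
                "Authorization": f"Bearer {os.environ.get('ATLASCLOUD_API_KEY', '')}",
            },
        )
        try:
            with urllib.request.urlopen(req) as upstream:
                payload = upstream.read()
                self.send_response(upstream.status)
        except urllib.error.HTTPError as e:
            # Relay upstream errors to the client unchanged
            payload = e.read()
            self.send_response(e.code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```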
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())

Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.
/api/v1/model/generateVideo

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video-fast",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

{
"id": "pred_abc123",
"status": "processing",
"model": "model-name",
"created_at": "2025-01-01T00:00:00Z"
}

Poll the prediction endpoint to check the current status of your request.
/api/v1/model/prediction/{prediction_id}

import requests
import time

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": "Bearer $ATLASCLOUD_API_KEY"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)

processing: The request is still being processed.
completed: Generation is complete. Outputs are available.
succeeded: Generation succeeded. Outputs are available.
failed: Generation failed. Check the error field.

{
"data": {
"id": "pred_abc123",
"status": "completed",
"outputs": [
"https://storage.atlascloud.ai/outputs/result.mp4"
],
"metrics": {
"predict_time": 45.2
},
"created_at": "2025-01-01T00:00:00Z",
"completed_at": "2025-01-01T00:00:10Z"
}
}

Upload files to Atlas Cloud storage and get a URL you can use in your API requests. Use multipart/form-data to upload.
/api/v1/model/uploadMedia

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": "Bearer $ATLASCLOUD_API_KEY"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

{
"data": {
"download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
"file_name": "image.png",
"content_type": "image/png",
"size": 1024000
}
}

The following parameters are accepted in the request body.
No parameters available.
{
"model": "bytedance/seedance-v1.5-pro/image-to-video-fast"
}

The API returns a prediction response with the generated output URLs.
{
"id": "pred_abc123",
"status": "completed",
"model": "model-name",
"outputs": [
"https://storage.atlascloud.ai/outputs/result.mp4"
],
"metrics": {
"predict_time": 45.2
},
"created_at": "2025-01-01T00:00:00Z",
"completed_at": "2025-01-01T00:00:10Z"
}

Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. One command to install, then use natural language to generate images, videos, and chat with LLMs.
npx skills add AtlasCloudAI/atlas-cloud-skills

Get your API key from the Atlas Cloud dashboard and set it as an environment variable.
export ATLASCLOUD_API_KEY="your-api-key-here"

Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.
Atlas Cloud MCP Server connects your IDE with 300+ AI models via the Model Context Protocol. Works with any MCP-compatible client.
npx -y atlascloud-mcp

Add the following configuration to your IDE's MCP settings file.
{
"mcpServers": {
"atlascloud": {
"command": "npx",
"args": [
"-y",
"atlascloud-mcp"
],
"env": {
"ATLASCLOUD_API_KEY": "your-api-key-here"
}
}
}
}

ByteDance's revolutionary AI model that generates perfectly synchronized audio and video simultaneously from a single unified process. Experience true native audio-visual generation with millisecond-precision lip-sync across 8+ languages.
What makes Seedance 1.5 Pro fundamentally different
Uses a 4.5 billion parameter Dual-Branch Diffusion Transformer (DB-DiT) that generates audio and video simultaneously—not sequentially—ensuring perfect synchronization from the start.
Understands individual phonemes and maps them correctly to lip shapes across different languages, achieving millisecond-precision audio-visual synchronization.
Intelligently fills narrative gaps based on prompt intent, maintaining coherent storytelling across characters' emotions, expressions, and actions.
Professional HD video output with cinematic quality at 24fps, supporting 4-12 second durations
English, Mandarin, Japanese, Korean, Spanish, Portuguese, Indonesian, plus Chinese dialects
Complex camera movements including dolly zooms, tracking shots, and professional film techniques
Natural conversations with multiple characters, distinct vocal identities, and realistic turn-taking
Realistic hair dynamics, fluid behaviors, and material interactions for lifelike visuals
Maintains clothing, faces, and style across scenes for complete story continuity
See how Seedance stands out from other video generation models
Create emotion-forward narrative clips with realistic character dialogue and cinematic lighting
Performance-heavy ad content with natural acting, perfect lip-sync, and professional production value
Reach global audiences with native-quality audio-visual content in 8+ languages
Engaging instructional content with clear narration and synchronized visual demonstrations
Viral-ready short-form content with professional audio-visual quality for maximum engagement
Pre-visualization and concept development with realistic character performances and dialogue
Powerful Text-to-Video (T2V) API and Image-to-Video (I2V) API endpoints for seamless integration
Our Seedance 1.5 Pro T2V API transforms text prompts into complete cinematic videos with native audio-visual synchronization. Generate scenes, camera movements, character actions, and dialogue in a single Text-to-Video API call.
Our Seedance 1.5 Pro I2V API brings still images to life with motion, camera movement, and synchronized audio. The Image-to-Video API features advanced frame control to define precise start and end points for your animations.
Both T2V API and I2V API modes support RESTful architecture with comprehensive documentation. Get started in minutes with SDKs for Python, Node.js, and more. All Seedance 1.5 Pro API endpoints include automatic audio generation with phoneme-level lip synchronization for seamless video creation.
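The upload-then-generate I2V flow can be sketched with the uploadMedia and generateVideo endpoints documented on this page. Note that the `image` input field for the first frame is an assumed name used for illustration only; check the model's parameter reference for the exact field:

```python
import requests

BASE = "https://api.atlascloud.ai/api/v1/model"

def generate_from_image(image_path, prompt, api_key,
                        model="bytedance/seedance-v1.5-pro/image-to-video-fast"):
    """Upload a still image, then start an image-to-video generation from it.

    NOTE: "image" below is an assumed field name for the first frame;
    confirm it against the model's parameter documentation.
    """
    auth = {"Authorization": f"Bearer {api_key}"}

    # 1) Upload the image to Atlas Cloud storage (multipart/form-data).
    #    The MIME type is hardcoded to PNG for this sketch.
    with open(image_path, "rb") as f:
        upload = requests.post(f"{BASE}/uploadMedia", headers=auth,
                               files={"file": (image_path, f, "image/png")})
    image_url = upload.json()["data"]["download_url"]

    # 2) Reference the uploaded image URL in the generation request.
    body = {"model": model, "prompt": prompt, "image": image_url}
    resp = requests.post(f"{BASE}/generateVideo",
                         headers={**auth, "Content-Type": "application/json"},
                         json=body)
    return resp.json()
```

The returned prediction can then be polled exactly as in the T2V examples.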
Start generating videos in minutes with two simple paths
For developers building applications
Create your Atlas Cloud account or log in to access the console
Bind your credit card in the Billing section to fund your account
Navigate to Console → API Keys and create your authentication key
Use the API key to make requests and integrate Seedance into your application
For quick testing and experimentation
Create your Atlas Cloud account or log in to access the platform
Bind your credit card in the Billing section to get started
Go to the model playground, enter your prompt, and generate videos instantly with an intuitive interface
Unlike other models that generate video first and add audio later, Seedance 1.5 Pro uses a dual-branch architecture to generate both simultaneously. This ensures perfect synchronization from the start, with phoneme-level lip-sync accuracy across all supported languages.
While Wan 2.6 supports longer durations (up to 15s) and text rendering, Seedance 1.5 Pro excels in cinematic camera control, multi-language/dialect support with spatial audio, and physics-accurate motion. Choose based on your needs: Seedance for storytelling and multilingual content, Wan for product demos with text.
Seedance 1.5 Pro generates native 1080p videos at 24fps. Supported aspect ratios include 16:9, 9:16, 4:3, 3:4, 1:1, and 21:9. Duration ranges from 4-12 seconds, with Smart Duration allowing the model to select the optimal length automatically.
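Those aspect ratios can be mapped to concrete 1080p-class width/height pairs for the request body. The exact pixel dimensions the API expects are an assumption here, inferred from the stated native 1080p output; confirm them before use:

```python
# Illustrative 1080p-class dimensions for the documented aspect ratios.
ASPECT_DIMENSIONS = {
    "16:9": (1920, 1080),
    "9:16": (1080, 1920),
    "4:3":  (1440, 1080),
    "3:4":  (1080, 1440),
    "1:1":  (1080, 1080),
    "21:9": (2520, 1080),
}

def dimensions_for(aspect):
    """Return (width, height) for a supported aspect ratio string."""
    try:
        return ASPECT_DIMENSIONS[aspect]
    except KeyError:
        raise ValueError(f"Unsupported aspect ratio: {aspect}") from None

# Example: fill the size fields of a generation request
width, height = dimensions_for("16:9")
```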
Seedance 1.5 Pro supports 8+ languages including English, Mandarin Chinese, Japanese, Korean, Spanish, Portuguese, Indonesian, and Chinese dialects like Cantonese and Sichuanese. Each language features accurate lip-sync and natural pronunciation.
Yes! Seedance understands technical film grammar. You can specify camera techniques like "Dolly Zoom on the subject" (Hitchcock effect), tracking shots, close-ups, or wide shots. The model interprets these to create professional cinematic results.
Text-to-Video generates complete videos from text prompts. Image-to-Video uses a "First Frame" to lock character identity and lighting, with optional "Last Frame" control for precise beginning and end-point transitions. Both modes support full audio generation.
Experience unmatched performance, reliability, and support for your AI video generation needs
Our system is specifically optimized for AI model deployment. Run Seedance 1.5 Pro with maximum performance on infrastructure tailored for demanding AI workloads and video generation.
Access Seedance 1.5 Pro alongside 300+ AI models (LLMs, image, video, audio) through one unified API. Manage all your AI needs from a single platform with consistent authentication.
Save up to 70% compared to AWS with transparent, pay-as-you-go pricing. No hidden fees, no minimum commitments—only pay for what you use with volume discounts available.
Your data and generated videos are protected with SOC I & II certifications and HIPAA compliance. Enterprise-grade security with encrypted data transmission and storage.
Enterprise-grade reliability with guaranteed 99.9% uptime. Your Seedance 1.5 Pro video generation is always available for production applications and critical workflows.
Complete integration in minutes through our simple REST API and multi-language SDKs (Python, Node.js, Go). Comprehensive documentation and code examples get you started fast.
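Putting the pieces together, the submit-then-poll flow from this page can be wrapped in a single helper with a bounded wait. The timeout and backoff values below are arbitrary choices, and the top-level `prompt` field follows the first code example on this page:

```python
import time
import requests

BASE = "https://api.atlascloud.ai/api/v1/model"

def generate_video(prompt, api_key, model, timeout_s=600):
    """Submit a generation request, then poll until done, failed, or timed out."""
    headers = {"Content-Type": "application/json",
               "Authorization": f"Bearer {api_key}"}
    body = {"model": model, "prompt": prompt}
    started = requests.post(f"{BASE}/generateVideo", headers=headers, json=body).json()
    prediction_id = started["data"]["id"]

    poll_url = f"{BASE}/prediction/{prediction_id}"
    delay = 2  # seconds between polls; grows gently, capped below
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        data = requests.get(poll_url, headers=headers).json()["data"]
        if data["status"] in ("completed", "succeeded"):
            return data["outputs"][0]
        if data["status"] == "failed":
            raise RuntimeError(data.get("error") or "Generation failed")
        time.sleep(delay)
        delay = min(delay * 1.5, 15)
    raise TimeoutError(f"Prediction {prediction_id} did not finish in {timeout_s}s")
```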
Join filmmakers, advertisers, and creators worldwide who are revolutionizing video content creation with Seedance 1.5 Pro's groundbreaking technology.
Seedance 1.5 Pro is a foundational model engineered specifically for native joint audio-visual generation, developed by the ByteDance Seed team. It represents a significant leap forward in transforming video generation into a practical, utility-driven tool. By integrating a dual-branch Diffusion Transformer architecture, the model achieves exceptional audio-visual synchronization and superior generation quality, establishing it as a robust engine for professional-grade content creation.
Seedance 1.5 Pro introduces several key technical advancements that set a new standard for audio-visual content generation.
The model's capabilities were rigorously evaluated against other state-of-the-art video generation models using the comprehensive SeedVideoBench 1.5 framework. Seedance 1.5 Pro demonstrates significant improvements across both video and audio dimensions.
In Text-to-Video (T2V) and Image-to-Video (I2V) tasks, it achieves a leading position in motion quality and instruction following (alignment). The model also shows strong competitiveness in visual aesthetics and motion dynamics. For audio generation, particularly in Chinese-language contexts, Seedance 1.5 Pro consistently outperforms competitors like Veo 3.1, delivering superior audio quality and audio-visual synchronization.
Seedance 1.5 Pro is well-suited for a wide range of professional applications, including: