Bring static images to life with dynamic motion, lighting consistency, and synchronized audio. This variant smoothly animates reference visuals into short video sequences.

Your request will cost $0.035 per run. For $10 you can run this model approximately 285 times.
Here's what you can do next:
```python
import os
import requests
import time

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "alibaba/wan-2.5/image-to-video",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}

generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for the result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers=headers)
        result = response.json()
        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing; wait 2 seconds before the next check
            time.sleep(2)

video_url = check_status()
```

Install the required package for your language.
```bash
pip install requests
```

All API requests require authentication via an API key. You can get your API key from the Atlas Cloud dashboard.
```bash
export ATLASCLOUD_API_KEY="your-api-key-here"
```

```python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
```

Never expose your API key in client-side code or in public repositories. Use environment variables or a backend proxy instead.
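The header construction above can be wrapped in a small helper that fails fast when the key is missing. This is a sketch for convenience, not part of any official Atlas Cloud SDK:

```python
import os

def auth_headers() -> dict:
    """Build request headers from ATLASCLOUD_API_KEY, failing fast if unset."""
    api_key = os.environ.get("ATLASCLOUD_API_KEY")
    if not api_key:
        raise RuntimeError("ATLASCLOUD_API_KEY is not set")
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }

# Demo only: set the variable in-process (normally you export it in your shell).
os.environ["ATLASCLOUD_API_KEY"] = "demo-key"
print(auth_headers()["Authorization"])  # → Bearer demo-key
```

Raising early gives a clearer error than letting the API return a 401 later.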
```python
import os
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
```

Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.
`/api/v1/model/generateVideo`

```python
import os
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "alibaba/wan-2.5/image-to-video",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")
```

```json
{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}
```

Query the prediction endpoint to check the current status of your request.
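Note that the quickstart reads the prediction ID from `result["data"]["id"]`, while the example above reads `result["id"]`; the envelope may differ between responses. A tolerant accessor (a hypothetical helper, not an official client function) covers both shapes before you call the prediction endpoint:

```python
def prediction_id(result: dict) -> str:
    """Return the prediction ID whether or not the payload is wrapped in 'data'."""
    body = result.get("data") or result
    return body["id"]

print(prediction_id({"id": "pred_abc123", "status": "processing"}))            # → pred_abc123
print(prediction_id({"data": {"id": "pred_abc123", "status": "processing"}}))  # → pred_abc123
```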
`/api/v1/model/prediction/{prediction_id}`

```python
import os
import requests
import time

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)
```

| Status | Description |
|---|---|
| processing | The request is still being processed. |
| completed | Generation is complete. Results are available. |
| succeeded | Generation succeeded. Results are available. |
| failed | Generation failed. Check the error field. |

```json
{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}
```

Upload files to Atlas Cloud storage and get a URL you can use in your API requests. Use multipart/form-data for the upload.
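The download URL you get back can then be dropped into the image-to-video request body. The sketch below assumes the reference image is passed as an `image` field under `input`; that field name is an assumption, not confirmed by this page, so check the model's parameter schema:

```python
def build_i2v_payload(image_url: str, prompt: str) -> dict:
    """Assemble a generateVideo request body (the 'image' field name is assumed)."""
    return {
        "model": "alibaba/wan-2.5/image-to-video",
        "input": {
            "image": image_url,  # assumed field name for the reference image
            "prompt": prompt,
        },
    }

payload = build_i2v_payload(
    "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "A beautiful sunset over the ocean with gentle waves",
)
print(payload["input"]["image"])
```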
`/api/v1/model/uploadMedia`

```python
import os
import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")
```

```json
{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}
```

The following parameters are accepted in the request body.
No parameters available.

```json
{
  "model": "alibaba/wan-2.5/image-to-video"
}
```

The API returns a prediction response with the URLs of the generated outputs.
```json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}
```

Atlas Cloud Skills integrates over 300 AI models directly into your AI coding assistant. One command to install, then use natural language to generate images, generate videos, and chat with LLMs.
```bash
npx skills add AtlasCloudAI/atlas-cloud-skills
```

Get your API key from the Atlas Cloud dashboard and set it as an environment variable.
```bash
export ATLASCLOUD_API_KEY="your-api-key-here"
```

Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.
The Atlas Cloud MCP server connects your IDE to over 300 AI models via the Model Context Protocol. It works with any MCP-compatible client.

```bash
npx -y atlascloud-mcp
```

Add the following configuration to your IDE's MCP settings file.
```json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

Sound and Vision, All in One Take
Alibaba's groundbreaking AI model generates perfectly synchronized audio and video simultaneously from a single unified process. Experience true native audio-visual generation with pinpoint-precise lip sync in over 8 languages.
Despite Google's recent price cuts, Veo 3 remains expensive overall. Wan 2.5 is lightweight and cost-effective, providing creators with more options while significantly reducing production costs.
With Wan 2.5, no separate voice recording or manual lip alignment is needed. Just provide a clear, structured prompt to generate complete videos with audio/voiceover and lip sync in one go - faster and simpler.
When prompts are in Chinese, Wan 2.5 reliably generates A/V synchronized videos. In contrast, Veo 3 often displays "unknown language" for Chinese prompts.
Wan 2.5 excels at character trait restoration, accurately presenting character appearance, expressions, and movement styles, making generated video characters more recognizable and personalized for enhanced storytelling and immersion.
Supports Studio Ghibli-style rendering, creating hand-painted watercolor textures and animation effects. Brings warm, dreamy visual experiences that enhance artistic appeal and storytelling depth.
Whether it's product launches, promotional campaigns, or brand marketing, Wan 2.5 helps you quickly generate high-quality videos, making creation easy and efficient.
Provides ideal content localization solutions for multinational companies, making creation easier and more efficient.
Creators can leverage Wan 2.5 to improve video production efficiency while ensuring high-quality output.
Wan 2.5 makes corporate training more efficient and engaging.
Wan 2.5 lets creativity flow without expensive equipment or actors - AI generates everything efficiently.
Transform creativity into reality without high costs - Wan 2.5 makes quality content production easy and economical.
Generate complete videos with synchronized audio, voiceover, and lip-sync in a single process
Supports simultaneous generation of two characters with synchronized actions, expressions, and lip-sync for natural interactions
High-quality video output with realistic character expressions and precise lip synchronization
Excellent support for Chinese prompts and reliable generation of multilingual content
Significantly lower costs compared to competitors while maintaining professional quality
Precisely recreates character appearance, expressions, and movement styles with high fidelity and personality
Supports various artistic styles including Studio Ghibli-inspired hand-painted watercolor textures
Perfect for dialogue scenes, interviews, or dual-person short films with natural audio-visual consistency
Discover the power of Wan 2.5 through these curated examples. From digital human lip-sync to dual character scenes, artistic rendering to character restoration - experience the possibilities.
A middle-aged man sitting at a wooden desk in a cozy study room, surrounded by bookshelves and a warm lamp glow. He opens an old book and reads aloud with a calm, deep voice: 'History teaches us more than just facts… it shows us who we are.' The room has subtle background sounds: pages turning, the faint ticking of a clock, and distant rain against the window.
A young couple sitting on a park bench during sunset. The woman leans her head on the man's shoulder. He whispers softly: 'No matter where we go, I'll always be here with you.' The sound includes the rustling of leaves, distant laughter of children playing, and the gentle hum of cicadas in the evening air.
A graceful ballerina with her hair in a messy bun, performing a powerful and emotional contemporary ballet routine. She is in a minimalist, dark art studio. Abstract patterns of light and shadow, projected from a hidden source, dance across her body and the surrounding walls, constantly shifting with her movements. The camera focuses on the tension in her muscles and the expressive gestures of her hands. A single, dramatic slow-motion shot captures her mid-air leap, with the light patterns swirling around her like a galaxy. Moody, artistic, high contrast.
Studio Ghibli-inspired anime style. A young girl with a straw hat lies peacefully in a sun-dappled magical forest, surrounded by friendly, glowing forest spirits (Kodama). A gentle breeze rustles the leaves of the giant, ancient trees. The air is filled with sparkling dust motes, illuminated by shafts of sunlight. The art style is soft, with a hand-painted watercolor texture. The scene feels serene, magical, and heartwarming.
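The curated prompts above share a recognizable structure: a visual scene, an optional spoken line for lip-sync, and ambient sound cues. A trivial formatter (purely illustrative, not an Atlas Cloud utility) can assemble prompts in that shape:

```python
def build_prompt(scene: str, dialogue: str = "", ambience: str = "") -> str:
    """Join scene, spoken line, and ambient sound cues into one structured prompt."""
    parts = [scene]
    if dialogue:
        parts.append(f"A calm voice says: '{dialogue}'")
    if ambience:
        parts.append(f"Background sounds: {ambience}.")
    return " ".join(parts)

prompt = build_prompt(
    scene="A middle-aged man reads at a wooden desk in a cozy study.",
    dialogue="History teaches us more than just facts.",
    ambience="pages turning, a ticking clock, rain against the window",
)
print(prompt)
```

Keeping scene, dialogue, and ambience as separate fields makes it easy to vary one element while holding the others fixed.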
Join filmmakers, advertisers, and creators around the world who are revolutionizing video content creation with Wan 2.5's groundbreaking technology.
| Field | Description |
|---|---|
| Model Name | Wan 2.5 |
| Developed By | Alibaba Group |
| Release Date | September 24, 2025 |
| Model Type | Generative AI, Video Foundation Model |
| Related Links | Official Website: https://wan.video/, Hugging Face: https://huggingface.co/Wan-AI, Technical Paper (Wan Series): https://arxiv.org/abs/2503.20314 |
Wan 2.5 is a state-of-the-art, open-source video foundation model developed by Alibaba's Wan AI team. It is designed to generate high-quality, cinematic videos complete with synchronized audio directly from text or image prompts. The model represents a significant advancement in the field of generative AI, aiming to lower the barrier for creative video production. Its core contribution lies in its ability to produce coherent, dynamic, and narratively consistent video clips with a high degree of realism and integrated audio-visual elements, such as lip-sync and sound effects, in a single, streamlined process.
Wan 2.5 introduces several key features that distinguish it from previous models and competitors:
Wan 2.5 is built upon the Diffusion Transformer (DiT) paradigm, which has become a mainstream approach for high-quality generative tasks. The technical report for the Wan model series outlines a suite of innovations that contribute to its performance.
The architecture includes a novel Variational Autoencoder (VAE) designed for high-efficiency video compression, enabling the model to handle high-resolution video data effectively. The Wan series is available in multiple sizes to balance performance and computational requirements, such as the 1.3B and 14B parameter models detailed for Wan 2.2. The model was trained on a massive, curated dataset comprising billions of images and videos, which enhances its ability to generalize across a wide range of motions, semantics, and aesthetic styles.
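To make the compression idea concrete: a causal video VAE maps a pixel-space clip to a much smaller latent grid. The strides below (4× temporal, 8×8 spatial) are illustrative values typical of video VAEs, not figures confirmed here for Wan 2.5; see the technical report for the actual architecture:

```python
def latent_shape(frames: int, height: int, width: int,
                 t_stride: int = 4, s_stride: int = 8) -> tuple:
    """Latent grid for a causal video VAE: first frame kept, remaining frames strided."""
    return (1 + (frames - 1) // t_stride, height // s_stride, width // s_stride)

# An 81-frame 720x1280 clip collapses to a 21x90x160 latent grid under these strides.
print(latent_shape(81, 720, 1280))  # → (21, 90, 160)
```

The diffusion transformer then operates on this compact grid rather than on raw pixels, which is what makes high-resolution video generation tractable.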
Wan 2.5 is designed for a wide array of applications in creative and commercial fields. Its intended uses include:
Wan 2.5 has demonstrated significant performance improvements over previous versions and holds a competitive position against other leading video generation models. Independent reviews and benchmarks provide insight into its capabilities.
A review conducted by Curious Refuge Labs™ evaluated the model's visual generation capabilities across several metrics.
| Metric | Score (out of 10) |
|---|---|
| Prompt Adherence | 7.0 |
| Temporal Consistency | 6.6 |
| Visual Fidelity | 6.5 |
| Motion Quality | 5.9 |
| Style & Cinematic Realism | 5.7 |
| Overall Score | 6.3 |
These scores indicate strong prompt understanding and a notable improvement in visual quality from Wan 2.2, although it still shows limitations in complex motion and realism compared to top-tier commercial models.
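The overall score is consistent with a simple unweighted mean of the five individual metrics (an assumption about how the reviewers aggregated, but the arithmetic checks out):

```python
scores = {
    "Prompt Adherence": 7.0,
    "Temporal Consistency": 6.6,
    "Visual Fidelity": 6.5,
    "Motion Quality": 5.9,
    "Style & Cinematic Realism": 5.7,
}
mean = sum(scores.values()) / len(scores)
print(round(mean, 1))  # → 6.3
```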