kwaivgi/kling-v2.6-pro/motion-control

Kling 2.6 Pro Motion Control turns reference motion clips (dance, action, gesture) into smooth, realistic animations. Upload a character image (or source video) and a motion video; the model transfers the movement while preserving identity and temporal consistency.


Code Example

import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
data = {
    "model": "kwaivgi/kling-v2.6-pro/motion-control",
    # Placeholder URLs: the character image to animate and the motion reference video
    "image": "https://example.com/character.png",
    "video": "https://example.com/dance-reference.mp4",
    "character_orientation": "image",
    "prompt": "cinematic lighting, urban street background, golden hour",
}

generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": f"Bearer {API_KEY}"})
        result = response.json()

        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing, wait 2 seconds before polling again
            time.sleep(2)

video_url = check_status()

Install

Install the required package for your language.

bash
pip install requests

Authentication

All API requests require authentication via an API key. You can get your API key from the Atlas Cloud dashboard.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP Headers

python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

Keep your API key secure

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.
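
To illustrate the backend-proxy option, here is a minimal sketch, assuming Flask is installed; the /generate route and the payload pass-through are illustrative choices, not part of the Atlas Cloud API.

python
# Minimal proxy sketch: the browser calls /generate on your own server,
# and only the server process knows ATLASCLOUD_API_KEY.
# Flask and the /generate route are illustrative, not part of the Atlas Cloud API.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
API_KEY = os.environ["ATLASCLOUD_API_KEY"]  # stays server-side

@app.route("/generate", methods=["POST"])
def generate():
    upstream = requests.post(
        "https://api.atlascloud.ai/api/v1/model/generateVideo",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        json=request.get_json(),  # forward the client's payload unchanged
        timeout=60,
    )
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8000)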

Submit a request

import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}

response = requests.post(url, headers=headers, json=data)
print(response.json())

Submit a Request

Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.

POST /api/v1/model/generateVideo

Request Body

import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}

data = {
    "model": "kwaivgi/kling-v2.6-pro/motion-control",
    "input": {
        # For this model, "image", "video", and "character_orientation" are also
        # required; see the Input Schema and parameter notes below.
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}

response = requests.post(url, headers=headers, json=data)
result = response.json()

print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

Response

{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}

Check Status

Poll the prediction endpoint to check the current status of your request.

GET /api/v1/model/prediction/{prediction_id}

Polling Example

import os
import time

import requests

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")

    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break

    time.sleep(3)

Status Values

  • processing – The request is still being processed.
  • completed – Generation is complete. Outputs are available.
  • succeeded – Generation succeeded. Outputs are available.
  • failed – Generation failed. Check the error field.

Completed Response

{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}
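
When the status is completed or succeeded, each entry in outputs is a plain URL and can be downloaded directly. A minimal sketch (the local filename is arbitrary):

python
import requests

# Taken from result["data"]["outputs"][0] in the polling example above.
output_url = "https://storage.atlascloud.ai/outputs/result.mp4"

# Stream the generated video to disk; "result.mp4" is just an example filename.
with requests.get(output_url, stream=True) as r:
    r.raise_for_status()
    with open("result.mp4", "wb") as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)
print("Saved result.mp4")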

Upload Files

Upload files to Atlas Cloud storage and get a URL you can use in your API requests. Use multipart/form-data to upload.

POST /api/v1/model/uploadMedia

Upload Example

import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

Response

{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}

Input Schema

The following parameters are accepted in the request body.

Total: 6  Required: 3  Optional: 3

Required: image, video, character_orientation
Optional: prompt, keep_original_sound, negative_prompt

See "Parameters and how to use" below for details on each parameter.

Example Request Body

json
{
  "model": "kwaivgi/kling-v2.6-pro/motion-control"
}

Output Schema

The API returns a prediction response with the generated output URLs.

  • id (string, required) – Unique identifier for the prediction.
  • status (string, required) – Current status of the prediction. One of: processing, completed, succeeded, failed.
  • model (string, required) – The model used for generation.
  • outputs (array[string]) – Array of output URLs. Available when status is "completed".
  • error (string) – Error message if status is "failed".
  • metrics (object) – Performance metrics.
  • metrics.predict_time (number) – Time taken for video generation in seconds.
  • created_at (string, required, date-time) – ISO 8601 timestamp when the prediction was created.
  • completed_at (string, date-time) – ISO 8601 timestamp when the prediction was completed.
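
For convenience, the fields above can be mirrored in a small typed structure. This is only a sketch derived from the schema listed here, not an official client type:

python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PredictionMetrics:
    predict_time: Optional[float] = None  # seconds spent generating

@dataclass
class Prediction:
    id: str
    status: str                      # "processing" | "completed" | "succeeded" | "failed"
    model: str
    created_at: str                  # ISO 8601 timestamp
    outputs: List[str] = field(default_factory=list)  # output URLs once finished
    error: Optional[str] = None      # populated when status == "failed"
    metrics: Optional[PredictionMetrics] = None
    completed_at: Optional[str] = None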

Example Response

json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}

Atlas Cloud Skills

Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. One command to install, then use natural language to generate images, videos, and chat with LLMs.

Supported Clients

Claude Code
OpenAI Codex
Gemini CLI
Cursor
Windsurf
VS Code
Trae
GitHub Copilot
Cline
Roo Code
Amp
Goose
Replit
40+ supported clients

Install

bash
npx skills add AtlasCloudAI/atlas-cloud-skills

Setup API Key

Get your API key from the Atlas Cloud dashboard and set it as an environment variable.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

Capabilities

Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.

  • Image Generation – Generate images with models like Nano Banana 2, Z-Image, and more.
  • Video Creation – Create videos from text or images with Kling, Vidu, Veo, etc.
  • LLM Chat – Chat with Qwen, DeepSeek, and other large language models.
  • Media Upload – Upload local files for image editing and image-to-video workflows.

MCP Server

Atlas Cloud MCP Server connects your IDE with 300+ AI models via the Model Context Protocol. Works with any MCP-compatible client.

Supported Clients

Cursor
VS Code
Windsurf
Claude Code
OpenAI Codex
Gemini CLI
Cline
Roo Code
100+ supported clients

Install

bash
npx -y atlascloud-mcp

Configuration

Add the following configuration to your IDE's MCP settings file.

json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available Tools

  • atlas_generate_image – Generate images from text prompts.
  • atlas_generate_video – Create videos from text or images.
  • atlas_chat – Chat with large language models.
  • atlas_list_models – Browse 300+ available AI models.
  • atlas_quick_generate – One-step content creation with automatic model selection.
  • atlas_upload_media – Upload local files for API workflows.


Kling v2.6 Pro Motion Control

Kling v2.6 Pro Motion Control is Kuaishou's advanced motion transfer model that animates a reference image by applying the movement from a reference video. Upload a character image and a motion clip (like a dance or action sequence), and the model extracts the motion path to generate smooth, realistic video where your subject performs those exact movements.

Key capabilities

  • Motion extraction and transfer – Upload a 3 to 30-second reference video showing any movement (dance, walk cycle, martial arts, gestures), and the model captures the full motion sequence frame by frame to apply it to your image.
  • Full-body motion accuracy – The system captures detailed movements including posture, limb positions, and complex actions, ensuring smooth and natural-looking animation even for fast or intricate sequences.
  • Flexible character orientation control – Choose whether the final video follows the reference image's aspect ratio and composition ("image" mode) or the reference video's framing ("video" mode), with duration limits adjusted accordingly.
  • Audio preservation option – Retain the original audio from your reference video or generate silent output, giving you control over the final soundscape.
  • Prompt-guided refinement – Use text prompts to adjust scene details, styling, lighting, and atmosphere while maintaining the core motion transfer from the reference video.

Parameters and how to use

  • image: (required) The reference image showing the subject you want to animate
  • video: (required) The reference video containing the motion sequence to transfer
  • character_orientation: (required) Controls output framing and duration limits
  • prompt: Text description to refine scene details, style, and atmosphere
  • keep_original_sound: Whether to preserve audio from the reference video
  • negative_prompt: Elements to avoid in the generated video
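
Putting these together, a request could look like the sketch below. The image and video URLs are placeholders, and the flat parameter layout mirrors the code example above; treat it as a sketch rather than a definitive payload shape.

python
import os

import requests

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}",
}

# Placeholder URLs: upload your own files first (see Upload Files above)
# or host them anywhere publicly accessible.
data = {
    "model": "kwaivgi/kling-v2.6-pro/motion-control",
    "image": "https://example.com/character.png",         # subject to animate
    "video": "https://example.com/dance-reference.mp4",   # motion to transfer
    "character_orientation": "image",                     # or "video"
    "prompt": "cinematic lighting, urban street background, golden hour",
    "keep_original_sound": True,
    "negative_prompt": "blurry, distorted, watermark, low quality",
}

response = requests.post(
    "https://api.atlascloud.ai/api/v1/model/generateVideo",
    headers=headers,
    json=data,
)
print(response.json())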

How to use

  • Prompt

Describe the scene setting, visual style, lighting, and atmosphere you want while the motion is being transferred. The model will apply your reference video's movement to your reference image, so focus your prompt on environmental details rather than the action itself.

Example: "cinematic lighting, shallow depth of field, urban street background, golden hour, film grain"

Media requirements

Images

  • Max file size: 10 MB
  • Tip: Use clear, well-lit images showing the full subject for best motion transfer results

Videos

  • Duration limits depend on character_orientation setting (see below)
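
Because both inputs are files, one convenient workflow is to upload them via the Upload Files endpoint above and reuse the returned download_url values in the generation request. A sketch along those lines (the local file names are placeholders):

python
import os

import requests

UPLOAD_URL = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

def upload(path, content_type):
    """Upload a local file to Atlas Cloud storage and return its hosted URL."""
    with open(path, "rb") as f:
        files = {"file": (os.path.basename(path), f, content_type)}
        resp = requests.post(UPLOAD_URL, headers=headers, files=files)
    resp.raise_for_status()
    return resp.json()["data"]["download_url"]

# "character.png" and "dance.mp4" are placeholder local files.
image_url = upload("character.png", "image/png")
video_url = upload("dance.mp4", "video/mp4")
print(image_url, video_url)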

Other parameters

  • character_orientation – (required) Choose one:

    image – Output matches the reference image's framing and composition.
    video – Output matches the reference video's framing and composition. Reference video can be up to 30 seconds.

  • keep_original_sound – Boolean, defaults to true.

    true – Preserve audio from the reference video.
    false – Generate silent video output.

  • negative_prompt – Optional text to specify unwanted elements like "blurry, distorted, watermark, low quality, flickering". Max 2,500 characters.

After you finish configuring the parameters, click Run, preview the result, and iterate if needed.

Pricing

Duration (s) | Billed Duration (s) | Total Price (USD)
5            | 5                   | $0.560
10           | 10                  | $1.120
15           | 15                  | $1.680
30           | 30                  | $3.360
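
The table works out to a flat $0.112 per billed second. A quick sanity check, assuming billing stays linear across the listed durations:

python
# Assumes linear per-second billing, consistent with the table above.
PRICE_PER_SECOND = 0.112  # USD

for duration in (5, 10, 15, 30):
    print(f"{duration:>2}s -> ${duration * PRICE_PER_SECOND:.3f}")
# 5s -> $0.560, 10s -> $1.120, 15s -> $1.680, 30s -> $3.360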

Notes

Best practices:

  • For complex movements like dance or martial arts, use reference videos between 3 and 10 seconds showing clear, unobstructed motion
  • Ensure your reference image shows the subject in good lighting with minimal occlusion
  • Start with the default settings and use prompts primarily for scene styling rather than motion instructions
  • The model works best when the reference image subject and reference video subject are similar in type (e.g., both human characters)

Use cases:

  • Animate character illustrations with real dance choreography or action sequences
  • Create product demonstration videos by transferring human gestures to animated mascots
  • Generate character performance clips for storyboarding and concept work
  • Produce social media content by applying trending motion clips to custom characters

Related models:

  • Kling v2.6 Pro Image-to-Video – Generate videos from a single image with prompt-driven motion and optional native audio.
  • Kling v2.6 Pro Text-to-Video – Create videos entirely from text prompts with cinematic visuals and audio-video co-generation.
  • Kling Omni Video O1 Reference-to-Video – Maintain subject identity across frames using multi-reference inputs for character-consistent video generation.
