Seedance 2.0 Guide

How to use Seedance 2.0 (international guide)

Seedance 2.0 is ByteDance’s flagship multi‑modal AI video model. It accepts text, images, audio, and video in one workflow and generates high‑quality clips with native sound.

Where to access Seedance 2.0

Jimeng (Dreamina) Web/App

The primary official entry point for Seedance 2.0. It supports full multi‑modal inputs and director‑level control.

Recommended for most creators.


Little Skylark (Xiao Yunque)

An alternative access route with a credit system and occasional free usage windows.

Availability and pricing can change.


Two key modes

Start/End Frame Mode

Upload one image (first or last frame) + write a prompt. The easiest way to begin.
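
For example, you might upload one product photo as the first frame and pair it with a short prompt such as (illustrative only):

  A glass perfume bottle on a marble table, slow push‑in, soft window light, bottle stays sharp with no distortion.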

All‑Round Reference Mode (Recommended)

Mix images, videos, audio, and text. Use @ tags to assign how each asset should be used.

Fast workflow (3 steps)

  1. Upload references

    Add images, short clips, or audio. Prioritize assets that strongly affect motion and rhythm.

  2. Write the prompt

    Describe subject + action + scene + lighting + camera. Use @ tags to specify each asset’s role.

  3. Generate and refine

    Start with a short clip, then extend or regenerate to stabilize motion and style.
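
For example, a first pass might look like this: upload one key character image and a short reference clip for motion, write a prompt that names the subject, action, scene, lighting, and camera and assigns each asset a role with @ tags, then generate a short draft and extend or regenerate until the motion and style stay consistent.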

@ syntax (core control)

Type @ in the prompt box to insert uploaded assets.

Assign roles like: “@image1 as the first frame” or “@video1 camera movement.”

@ tags are the core control system in All‑Round Reference mode.
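
A complete All‑Round Reference prompt might look like this (asset names such as @image1 and @video1 simply refer to whatever you have uploaded):

  @image1 as the first frame, @image2 for the character’s outfit, @video1 camera movement, @audio1 as background music. A dancer moves through a neon‑lit street at night, gentle steady pace, slow tracking shot, no distortion.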

Key capability highlights

  • Multi‑modal inputs: text + image + audio + video in one workflow.
  • Stronger motion realism and physical consistency.
  • Higher identity stability (faces, products, text).
  • Accurate camera movement and action replication from reference video.
  • Video extension and story completion with smooth continuity.
  • Native audio sync for SFX and music beat‑matching.

New user quick start

Start with Start/End Frame mode (1 image + 1 prompt).

Then try All‑Round Reference with a short reference video for motion.

Finally, combine image + video + audio for full director control.

Prompt tips

  • Use a clear structure: subject + action + scene + lighting + camera + style + quality + constraints.
  • Prefer slow, natural movement words: smooth, gentle, steady, stable.
  • Add camera language: slow push‑in, orbit, pan, tracking, close‑up, wide.
  • Add stability constraints: face stable, no distortion, consistent clothing.
  • Avoid overly complex multi‑person action in early tests.
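
If you assemble prompts outside the app before pasting them in, a small script can keep this structure consistent. The sketch below only illustrates the recommended ordering; the function name, defaults, and example values are assumptions, not anything defined by Seedance 2.0.

  # Minimal sketch: assemble a prompt in the recommended order
  # (subject + action + scene + lighting + camera + style + quality + constraints).
  # Every name and default below is an illustrative assumption, not an official Seedance schema.
  def build_prompt(subject, action, scene, lighting, camera,
                   style="cinematic, high detail",
                   quality="sharp, stable footage",
                   constraints=("face stable", "no distortion", "consistent clothing")):
      parts = [subject, action, scene, lighting, camera, style, quality, *constraints]
      return ", ".join(p.strip() for p in parts if p)

  # Example usage
  print(build_prompt(
      subject="A dancer in a red dress",
      action="spins slowly with smooth, gentle movement",
      scene="on an empty rooftop at dusk",
      lighting="warm golden-hour light",
      camera="slow push-in, then a steady orbit",
  ))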

FAQ

How long can a clip be?

Typically 4–15 seconds. For extensions, set the length to the extra time you want to add.
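
For example, to take a 5‑second clip to roughly 9 seconds, set the extension length to about 4 seconds rather than 9.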

Do I need audio?

No. You can use the audio track of a reference video, or generate without any audio input.

How many files can I upload?

Uploads are capped, so prioritize the most important references (key images + 1–2 videos + optional audio).

Need help getting started?

We provide access, onboarding, and setup support for international users.