Retaining Character Identity in AI Art

Character consistency is the #1 challenge in AI art production. This guide covers the 2025 state of the art: Midjourney v7's Omni Reference for creators, Adobe Firefly Custom Models for enterprises, and the critical distinction between individual workflows and production pipelines.

The Challenge: Why Characters Drift

Text Prompts Only (The Problem)

Describing a character in words creates a different interpretation every time. "Blue-haired warrior with green eyes" could match millions of faces. AI models are designed for variety, not consistency—each generation is an independent creative act.

Result: 60-80% character drift rate

Visual Anchors (The Solution)

Reference images or trained models lock in a character's "DNA"—the specific visual identity that persists across scenes. The AI sees exactly who this person is, not just what they look like generically.

Result: 5-20% drift with proper setup
Key Insight: AI models can't "remember" characters between generations. You must provide that memory via references, trained models, or manual corrections.
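The drift rates above translate directly into wasted generations. A rough sketch of the cost, assuming drift is independent per generation (the function and numbers are illustrative, using the midpoints of the ranges quoted above):

```python
import math

def generations_needed(target_on_model: int, drift_rate: float) -> int:
    """Estimate how many generations it takes to collect a target
    number of on-model images, given a per-generation drift rate."""
    keep_rate = 1.0 - drift_rate
    return math.ceil(target_on_model / keep_rate)

# Text prompts only (~70% drift): about 34 generations for 10 keepers.
print(generations_needed(10, 0.70))  # 34
# With visual anchors (~10% drift): about 12 generations.
print(generations_needed(10, 0.10))  # 12
```

The gap widens with scale: at campaign volumes, the anchor-based workflow pays for its setup time almost immediately.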

Midjourney v7: Omni Reference & Style Reference

--oref (Omni Reference)

V7's character consistency feature. Upload one reference image showing your character, and Midjourney embeds that character's visual identity into new generations. Works with external photos or Midjourney-generated images.

--oref [URL] --ow 100

--sref (Style Reference)

Controls artistic style (colors, textures, mood) without affecting character identity. Combine with --oref to get consistent characters in different artistic styles. Can use images or preset style codes.

--sref [URL] --sw 100
Version Note: --cref (Character Reference) is v6.1 only. In v7, use --oref instead. Omni Reference is smarter, handles camera angles better, and costs 2x GPU time but delivers superior consistency.
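The version split above is easy to get wrong in scripts that target both model versions. A hypothetical helper that picks the right flag pair (the flags themselves come from this guide; the function is illustrative, not part of any Midjourney API):

```python
def build_prompt(prompt: str, version: str, ref_url: str, weight: int = 100) -> str:
    """Append the character-reference flags for the given Midjourney version.

    Uses --oref/--ow on v7 and the legacy --cref/--cw on v6.1,
    per the version note above. Illustrative helper only.
    """
    if version == "7":
        return f"{prompt} --oref {ref_url} --ow {weight} --v 7"
    if version == "6.1":
        return f"{prompt} --cref {ref_url} --cw {weight} --v 6.1"
    raise ValueError(f"unsupported version: {version}")

# Hypothetical reference URL for illustration.
print(build_prompt("Alex walking through a market", "7", "https://example.com/alex.png"))
```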

Midjourney Parameters Explained

| Parameter | Range | Default | Use For | Version |
| --- | --- | --- | --- | --- |
| --oref | Image URL | None | Character/object consistency in new scenes | v7 only |
| --ow | 1-1000 | 100 | How strictly to match the reference. 50-250 recommended for most use; higher values (400+) can cause artifacts. | v7 only |
| --cref | Image URL(s) | None | Character consistency (legacy feature) | v6.1 only |
| --cw | 0-100 | 100 | 100 = face+hair+clothes; 0 = face only (change outfits); 60-80 for clothing variations. | v6.1 only |
| --sref | Image URL or code | None | Apply artistic style (not character identity); use with --oref for styled characters. | v6, v7 |
| --sw | 0-1000 | 100 | Style strength: lower for subtle styling, higher to force a specific aesthetic. | v6, v7 |
| --seed | 0-4294967295 | Random | Experimental only; NOT recommended for consistency. Use --oref/--sref instead. | All versions |
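Because each weight parameter has a different legal range, a pre-flight check can catch typos (such as passing a --cw of 150) before spending GPU time. The ranges below are copied from the table; the validator itself is an illustrative sketch:

```python
# Numeric ranges from the parameter table above.
PARAM_RANGES = {
    "--ow": (1, 1000),          # v7 only
    "--cw": (0, 100),           # v6.1 only
    "--sw": (0, 1000),          # v6, v7
    "--seed": (0, 4294967295),  # all versions
}

def in_range(param: str, value: int) -> bool:
    """Check a numeric weight against its documented range."""
    lo, hi = PARAM_RANGES[param]
    return lo <= value <= hi

print(in_range("--ow", 150))   # True: inside the recommended 50-250 band
print(in_range("--cw", 150))   # False: --cw caps at 100
```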

Adobe Firefly: Custom Models (Enterprise Solution)

Custom Style Models

Train Firefly on 10-30 brand images to learn colors, aesthetic, visual identity. Perfect for campaign consistency and brand guidelines at scale.

Concept: "Style1" → Brand aesthetic

Custom Subject Models

THE character solution. Train on 10-30 images of a specific character or object. Firefly learns that exact subject and can generate it in unlimited scenes. This is Adobe's answer to LoRA training.

Concept: "Alex" → Character DNA
Enterprise Only: Custom Models require Adobe enterprise license + Adobe Storage for business. Available via Firefly API, GenStudio, Express. NOT available to individual Photoshop users.

Complete Solution Comparison

| Solution | Best For | Consistency Level | Setup Time | Cost Tier |
| --- | --- | --- | --- | --- |
| Text Prompts Only | Quick ideation, exploring variations | Low (60-80% drift) | Instant | $ |
| Midjourney v7 --oref | Individual creators, freelancers, 5-100 images | High (10-20% drift) | 5 minutes | $$ |
| Midjourney v6.1 --cref | Legacy workflows, simpler character needs | Medium-High (15-30% drift) | 5 minutes | $$ |
| Stable Diffusion LoRA | Tech-savvy users, open-source workflows | Very High (5-10% drift) | 2-4 hours | $ (GPU costs) |
| Firefly Custom Subject Model | Enterprise, brands, 100+ image campaigns | Very High (5-10% drift) | 1-2 hours | $$$$ (Enterprise) |
| Hybrid (MJ + Photoshop) | Professional projects, IP licensing, publishing | Near-Perfect (0-5% final) | Variable | $$$ |

Practical Workflow Examples

Solo Creator Workflow

Generate character in MJ v7 → Save as reference → Use --oref for all new scenes → --ow 100-150 for tight consistency → Photoshop touchups as needed.

Style Variation Workflow

Character ref: --oref [char.jpg] --ow 150 → Style ref: --sref [style.jpg] --sw 100 → Prompt: "Alex in cyberpunk alley" → Result: Same character, new style.
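Assembled as a single prompt string, the two references sit side by side and control independent things (the URLs here are hypothetical placeholders):

```python
char_ref = "https://example.com/char.jpg"    # hypothetical character reference
style_ref = "https://example.com/style.jpg"  # hypothetical style reference

prompt = (
    "Alex in cyberpunk alley "
    f"--oref {char_ref} --ow 150 "   # locks character identity
    f"--sref {style_ref} --sw 100"   # applies artistic style independently
)
print(prompt)
```

Because the two flags don't interact, you can swap the --sref image freely and keep the same face.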

Enterprise Campaign

Train Custom Subject Model on 20 mascot images → Deploy via Firefly API → Marketing team generates 500 on-brand variations → Automatic brand consistency.
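The 500-variation fan-out is ordinary prompt templating over campaign variables; only the final generation call goes to the trained model. A sketch of the templating step (scene/mood/format lists are invented for illustration, and the Firefly API call itself is omitted since its interface is enterprise-specific):

```python
import itertools

scenes = ["beach", "office", "stadium", "kitchen", "mountain"]
moods = ["celebrating", "waving", "reading", "running"]
formats = ["square banner", "vertical story", "wide header"]

# Cartesian product of campaign variables -> on-brand prompt variations.
# Each prompt would be sent to the trained Custom Subject Model concept
# (e.g. "mascot"), which supplies the character identity automatically.
prompts = [
    f"mascot {mood} at the {scene}, {fmt}"
    for scene, mood, fmt in itertools.product(scenes, moods, formats)
]
print(len(prompts))  # 60 variations from 5 x 4 x 3 variables
```

Scaling the lists (e.g. 10 x 10 x 5) reaches the 500-image mark with no extra consistency work, because the model, not the prompt, carries the character.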

Multi-Character Scene

Generate each character separately with --oref → Export to Photoshop → Composite using layers → Generative Fill to blend → Manual refinement for perfection.

Changing Outfits

V6.1: --cref [character.jpg] --cw 0 (face only) → V7: Lower --ow to 50-80 → Specify new outfit in prompt → Face stays same, clothes change.
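The outfit rule of thumb condenses to a tiny lookup. The weight values below come from this guide's recommendations; the helper is illustrative, not an official API:

```python
def identity_weight(version: str, change_outfit: bool) -> str:
    """Pick the reference-weight flag for keeping the face while
    changing (or keeping) the outfit, per the guide's rules of thumb."""
    if version == "6.1":
        # --cw 0 matches face only; --cw 100 matches face+hair+clothes.
        return "--cw 0" if change_outfit else "--cw 100"
    # v7: lowering --ow into the 50-80 band loosens the match
    # enough to restyle clothing while preserving the face.
    return "--ow 65" if change_outfit else "--ow 150"

print(identity_weight("6.1", True))  # --cw 0
print(identity_weight("7", True))    # --ow 65
```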

Publication Workflow

MJ generates base → Select best 3 variants → Photoshop: fix hands, eyes, consistency → Layer masks to composite best features → Upscale to print resolution.
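"Upscale to print resolution" is simple arithmetic: pixels = inches x DPI. A quick check, assuming the common 300 DPI print target (the function is a convenience sketch):

```python
def print_pixels(width_in: float, height_in: float, dpi: int = 300) -> tuple[int, int]:
    """Pixel dimensions required to print at the given size and DPI."""
    return (round(width_in * dpi), round(height_in * dpi))

# An 8x10 inch print needs 2400x3000 px, well beyond native
# Midjourney output, hence the dedicated upscaling step.
print(print_pixels(8, 10))  # (2400, 3000)
```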

Test Your Knowledge

1. You're using Midjourney v7 and want maximum character consistency. Which parameter combination is correct?
2. An enterprise wants to generate 500 images of their mascot character across different campaigns. What's the best solution?
3. Why does Midjourney recommend against using --seed for character consistency?
4. You want the same character in both photorealistic and anime styles. Which approach works?

Frequently Asked Questions

What's the best way to keep a character consistent?
Individual creators: use Midjourney v7 --oref + Photoshop refinement. Enterprises: train Firefly Custom Subject Models. Both need visual anchors—text alone never works.