Midjourney’s New Feature Enables Consistent Character Reproduction in AI Image Generation


Why Consistent Characters Are Essential for Generative AI Imagery

March 27, 2024

AI image generation has long been an exciting and fast-moving field. Services such as Midjourney can create stunning, realistic images from text prompts, but one persistent challenge has been recreating a consistent character across new images. That capability is crucial for narrative continuity in visual mediums such as films, novels, and comic books. Midjourney is now addressing the issue with a powerful new feature.

Generative AI image generators rely on diffusion models, which gradually refine random noise into an image guided by the user's text prompt. Because each generation starts from fresh noise, every prompt produces something new, even when the key elements stay the same, so these models have struggled to render the same character twice.

Introducing Midjourney’s New Feature for Consistent Character Generation

In a significant step forward, Midjourney has rolled out a new tag, "--cref" (character reference), which users can append to their text prompts on the Midjourney Discord. By pasting the URL of a previously generated character after the tag, users direct Midjourney to recreate that character, including specific facial features, body type, and even clothing, in new settings.

While the feature is still in its early stages, it shows great promise for professional creators and artists. Storyboarding and visual mediums can now benefit from consistent character appearances, enhancing narrative cohesion.

How to Use Midjourney’s Consistent Character Feature

To get started, users should have a previously generated Midjourney character. Here is a simple workflow:

  1. Generate or retrieve the URL of a previously generated Midjourney character.
  2. In a new prompt, add the character description followed by the tag "--cref [URL]".
  3. Midjourney will analyze the URL and attempt to recreate the character from the provided reference.

It is important to note that the results may not exactly match the original character or prompt. Users can, however, control the degree of resemblance with the "--cw" (character weight) tag followed by a number from 0 to 100. A higher number yields greater similarity to the reference, while a lower number introduces more variation.

For example, to recreate a character wearing a white tuxedo and standing in a villa, the prompt could be: "wearing a white tuxedo standing in a villa --cref [URL] --cw 100". Midjourney will make its best effort to generate an image matching both the description and the reference.

Advanced Features for Blend and Combination

In addition to recreating characters from a single URL, Midjourney supports blending and combination. Users can include multiple character references in one prompt by adding a "--cref" tag for each URL, blending traits from several characters into a single image.
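Following the description above, a blended prompt simply repeats the "--cref" tag once per reference URL. The sketch below is illustrative only; the function name and URLs are hypothetical, and prompts are still entered manually in Discord.

```python
def build_blend_prompt(description: str, ref_urls: list[str]) -> str:
    """Append one --cref tag per reference URL so Midjourney can blend
    traits from several previously generated characters."""
    refs = " ".join(f"--cref {url}" for url in ref_urls)
    return f"{description} {refs}"

# Hypothetical reference URLs, for illustration only.
print(build_blend_prompt("two explorers sharing a campfire",
                         ["https://example.com/a.png", "https://example.com/b.png"]))
# two explorers sharing a campfire --cref https://example.com/a.png --cref https://example.com/b.png
```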

Midjourney’s new consistent character feature is an exciting development that brings generative AI imagery closer to achieving narrative continuity in visual storytelling. Artists and creators can now explore new possibilities and create more consistent and compelling narratives.

Learn more about the latest advancements in AI and transformative enterprise technology by subscribing to VentureBeat’s Briefings.

