Midjourney Introduces a New Feature: Generating Consistent Characters Across Multiple AI-Generated Images

Midjourney, the popular AI image generation service, has recently unveiled a highly anticipated feature: the ability to generate consistent characters across multiple images. This is a significant breakthrough in generative AI, as it addresses a major challenge that has hindered previous image generation models.

Most AI image generators rely on diffusion models, similar to Stability AI’s Stable Diffusion algorithm, to piece together images based on user input. However, these models typically generate something new for every prompt, lacking consistency even across repeated prompts or similar keywords. This inconsistency has been a limiting factor for generative AI applications, particularly in narrative mediums such as film, novels, and graphic novels, where maintaining character continuity is crucial.

Midjourney aims to change that with its new feature, introducing the “--cref” tag (short for “character reference”) that users can add to their text prompts. By pasting the URL of a previously generated character image after the tag, Midjourney’s algorithm attempts to match the character’s features, body type, and clothing in new images.
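
In practice, the tag simply goes at the end of an ordinary prompt. A minimal sketch of the syntax, with a placeholder URL standing in for a real Midjourney image link:

/imagine prompt: a knight in silver armor standing on a cliff at sunset --cref https://example.com/my-character.png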

While the feature is still being refined, it has the potential to elevate Midjourney from a mere ideation tool to a professional resource. The ability to generate consistent characters opens up possibilities for artists and creators in various visual mediums, allowing them to maintain narrative continuity and explore different scenes, settings, facial expressions, and props for their characters.

Using the new Midjourney consistent character feature is relatively straightforward. Users start by generating or retrieving the URL of a previously generated character image. They can then include the tag “--cref” followed by the URL in their new prompt to generate an image of the same character in a different setting. The results may not be an exact replica of the original character or prompt but are certainly promising.
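
A hypothetical end-to-end workflow might look like this, where the URL in the second prompt stands in for the image address copied from the first result:

/imagine prompt: portrait of a young wizard with round glasses and a green cloak

(copy the URL of the resulting character image, then reuse it)

/imagine prompt: a wizard browsing shelves in a candlelit library --cref https://example.com/wizard.png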

In addition, users have some control over how closely the new image reproduces the original character. By adding the tag “--cw” followed by a number from 0 to 100 (100 is the default), users can adjust how strictly the result follows the reference. Lower numbers allow more variation, while higher numbers follow the original reference more closely.
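
For example, the same scene rendered at two different weights (placeholder URL again); roughly speaking, a low weight tends to preserve mainly the face while letting clothing and styling vary, whereas a high weight tries to match face, hair, and clothing:

/imagine prompt: the character relaxing on a beach at dusk --cref https://example.com/character.png --cw 20

/imagine prompt: the character relaxing on a beach at dusk --cref https://example.com/character.png --cw 100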

Midjourney’s consistent character feature has just been released, and artists and creators are already putting it to the test. The feature works best with characters made from Midjourney images and is not designed for real people or photos. While it may not capture every intricate detail, it focuses on character traits and allows for blending multiple character references in a single image.

As with any new feature, there are some advanced options and considerations. Users can blend information from multiple images by passing several URLs to the “--cref” tag. Midjourney’s web alpha version also allows users to drag or paste an image into the interface and select whether it is an image prompt, a style reference, or a character reference.
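
A blended prompt might look like the following sketch, where both URLs are placeholders for two previously generated character images:

/imagine prompt: two friends sharing coffee at a sidewalk cafe --cref https://example.com/character-a.png https://example.com/character-b.png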

While Midjourney’s V6 version is still in alpha, the team is actively seeking feedback and ideas from users. They acknowledge that features may change during the testing phase but assure users that the official beta version is on the horizon.

Overall, Midjourney’s consistent character feature is a significant step forward in generative AI imagery. It empowers artists and creators to maintain continuity in their storytelling and explore new visual possibilities. As Midjourney continues to refine and improve this feature, it has the potential to become an essential tool for professionals in various creative industries.