Starter Character Workflow

Posted on January 13, 2025 (updated January 28, 2025) by edvardtoth

I have seen time and time again that people who try to get into ComfyUI – the super versatile node-based generative AI front end – quickly get discouraged and utterly confused by its pretty steep learning curve.

Despite being quite technical and having decades of experience with numerous tools, I found myself in the same situation not too long ago. That’s why I decided to clean up and release a baseline workflow – a nifty little toolbox that’s quite competent at generating characters without getting too deep into the weeds.

I believe the best way to learn is by doing, and this workflow is the result of exactly that: wrestling with and finding workable solutions to problems everyone is bound to encounter early on.

To be clear, a rudimentary understanding of ComfyUI basics is still required: how to install it (if you ask me, the best way is through Pinokio), how to use the Node Manager, where to put model files, etc.

Some key “selling points” of the workflow:

  • Available for both Flux and SDXL models.
    • The Flux workflow is configured to use the flagship Flux Dev model, but other variants (e.g. Flux Schnell) work as well.
    • The SDXL workflow is configured to use the very fast WildcardXL Turbo SDXL derivative, but other variants work as well.
  • Allows the use of multiple LoRAs (fine-tuning models).
  • Uses a ControlNet to take an (optional) pose reference image and extracts an OpenPose skeleton from it to drive the pose of your generated character.
    • This reference can be a photo, a 3D model, a sketch – basically anything vaguely humanoid.
  • Uses a 2-pass KSampler technique for noticeably better prompt adherence (see the sketch after this list).
    • The first KSampler generates only step 1, then passes the result over to a second KSampler which completes the remaining steps.
  • Utilizes an (optional) detailer model to refine facial and clothing details.
  • The bright red nodes are Boolean switches that toggle the optional features above.
  • Performs a very fast final upscale pass that delivers a surprisingly good result.
  • Neatly organized, promotes learning through tinkering and reverse-engineering. This is probably the main factor that sets it apart from other similar workflows out there.
  • Relies heavily on UE (“use everywhere”) nodes, which are amazingly useful and dramatically reduce messy “link spaghetti”.
    • Think of these as “wireless” nodes that plug into matching inputs without visible links (although you can make the links visible by right-clicking the canvas and choosing the “Show UE links…” option).
  • Includes a Note node that contains links to all the model, CLIP, VAE, ControlNet, detailer, etc. files used in the workflow – no more scrambling to figure out where to download these files from.
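
A quick peek under the hood: this is roughly what the 2-pass chain looks like in ComfyUI’s API (JSON) format, written out here as a Python dict. It’s a minimal sketch – the node IDs and the upstream connections (“4”, “5”, “6”, “7”) are made-up placeholders, since in the actual workflow everything is wired up visually on the canvas.

# Minimal sketch of the 2-pass KSampler chain (ComfyUI API format).
# Node IDs and upstream connections are hypothetical placeholders.
two_pass = {
    "10": {  # Pass 1: runs only step 0 -> 1 and keeps the leftover noise
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
            "latent_image": ["5", 0],            # empty starting latent
            "add_noise": "enable", "noise_seed": 42,
            "steps": 20, "cfg": 7.0,
            "sampler_name": "euler", "scheduler": "normal",
            "start_at_step": 0, "end_at_step": 1,
            "return_with_leftover_noise": "enable",
        },
    },
    "11": {  # Pass 2: picks up the partial latent and finishes steps 1 -> 20
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
            "latent_image": ["10", 0],           # output of pass 1
            "add_noise": "disable", "noise_seed": 42,
            "steps": 20, "cfg": 7.0,
            "sampler_name": "euler", "scheduler": "normal",
            "start_at_step": 1, "end_at_step": 20,
            "return_with_leftover_noise": "disable",
        },
    },
}

The key detail is that pass 1 returns its leftover noise and pass 2 adds none of its own – that’s what lets the second sampler continue the same denoising trajectory instead of starting over.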

The workflows are included below – they are PNG images with the workflow data encoded in their metadata; dragging them onto the ComfyUI canvas will reconstruct the workflows.

Click the images, then use the “Save image as…” option on the full-size versions. Make sure they are saved as PNG files, otherwise they won’t work.

This is the Flux version:

…and this is the SDXL version:

Just in case, here’s also a download link for both workflows in a ZIP file.
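
And if you ever want to verify that a saved image still carries the workflow, ComfyUI embeds it as JSON in the PNG’s text metadata, which Pillow can read. A quick sanity check along these lines (the filename is just a placeholder):

# Sanity check: ComfyUI stores the workflow as JSON in the PNG's text metadata.
# Converting to JPEG or stripping metadata destroys it, which is why the
# save-as-PNG requirement above matters. The filename is a placeholder.
import json
from PIL import Image

img = Image.open("starter_character_flux.png")
workflow_json = img.info.get("workflow")  # text chunk written by ComfyUI

if workflow_json:
    nodes = json.loads(workflow_json).get("nodes", [])
    print(f"Embedded workflow found: {len(nodes)} nodes")
else:
    print("No embedded workflow - the file was probably re-saved or converted")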

Additional tips:

  • Apply ControlNet node: This node determines how strongly the pose reference influences the generated output (see the sketch after these tips).
    • Strength: The overall intensity of that influence.
    • Start_percent and End_percent: These determine when the ControlNet kicks in and drops out during the generation process. For example, values of 0.0 and 0.5 mean that the ControlNet is active from the start of the generation to its halfway point (50%).
  • The Number of steps parameter: This parameter significantly influences both the quality and speed of the generation, depending on the model.
    • Generally, around 20 steps yield very good results.
    • More steps may produce much nicer results but at a substantial increase in generation time.
  • The Control_after_generate parameter in various nodes: Changing these from “randomize” to “fixed” locks the seed, so you can re-generate the same output while making small iterative changes.
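
To make the ControlNet tip above concrete, here’s roughly how those three parameters sit on the node in ComfyUI’s API format – again just a sketch, with placeholder node IDs and connections:

# Sketch of an Apply ControlNet (Advanced) node in ComfyUI API format.
# Node IDs and upstream connections are hypothetical placeholders.
apply_controlnet = {
    "12": {
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["6", 0],     # prompt conditioning in
            "negative": ["7", 0],
            "control_net": ["8", 0],  # loaded OpenPose ControlNet model
            "image": ["9", 0],        # extracted OpenPose skeleton
            "strength": 0.8,          # intensity of the pose influence
            "start_percent": 0.0,     # active from the very first step...
            "end_percent": 0.5,       # ...until the halfway point
        },
    },
}

Cutting the ControlNet off early (an end_percent below 1.0) is often a good default: the pose gets locked in during the early steps, and the model is then free to resolve fine detail without fighting the skeleton.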

January 24 update: V2 is now up, with a reduced number of nodes, a 2-pass KSampler technique, improved conditioning for Flux, and minor fixes for the image comparison node and the SDXL version.

Well, there you have it.
Hopefully this helps someone – let me know if you got any value out of this thing! 😀

There are many potential ways to expand on this (likeness references, style transfer, model sheet generation, expressions, image-to-video, etc.) but that’s way beyond the scope of a “starter” workflow for now.
However, this quick teaser featuring the “mascot” young lady may provide a hint about where all this might be going…
