I have seen time and time again that people who try to get into ComfyUI – the super versatile node-based generative AI front end – get quickly discouraged and utterly confused by its pretty steep learning curve.
Despite being quite technical and having decades of experience with numerous tools, I found myself in the same situation not too long ago. That’s why I decided to clean up and release a baseline workflow – a nifty little toolbox that’s quite competent at generating characters without getting too deep into the weeds.
I believe the best way to learn is by doing, and this workflow is the result of exactly that: wrestling with and finding workable solutions to problems everyone is bound to encounter early on.
To be clear, a rudimentary understanding of ComfyUI basics is still required: how to install it (if you ask me, the best way is through Pinokio), how to use the Node Manager, where to put model files, etc.
Some key “selling points” of the workflow:
- Available for both Flux and SDXL models.
- The Flux workflow is configured to use the flagship Flux Dev model, but other variants (e.g. Flux Schnell) work as well.
- The SDXL workflow is configured to use the very fast WildcardXL Turbo SDXL derivative, but other variants work as well.
- Allows the use of multiple LoRAs (fine-tuning models).
- Uses a ControlNet that takes an (optional) pose reference image and extracts an OpenPose skeleton from it to drive the pose of your generated character.
- This reference can be a photo, a 3D model, a sketch, basically anything vaguely humanoid.
- Uses a 2-pass KSampler technique for noticeably better prompt adherence.
- The first KSampler generates only step 1 and passes the result to a second KSampler, which completes the remaining steps (see the sketch just below this list).
- Utilizes an (optional) detailer model to refine facial and clothing details.
- The bright red nodes are Boolean switches that toggle the optional features above.
- Performs a very fast final upscale pass that delivers a surprisingly good result.
- Neatly organized, promotes learning through tinkering and reverse-engineering. This is probably the main factor that sets it apart from other similar workflows out there.
- Relies heavily on UE (“use everywhere”) nodes, which are amazingly useful and dramatically reduce messy “link spaghetti”.
- Think of these as “wireless” nodes that plug into matching inputs without visible links (although you can make the links visible by right-clicking the canvas and choosing the “Show UE links…” option).
- Includes a Note node that contains download links for all the model, CLIP, VAE, ControlNet, detailer, etc. files used in the workflow – no more scrambling to figure out where to download these files from.
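Since the 2-pass KSampler hand-off is the least obvious trick in the list above, here is a minimal sketch of the idea in ComfyUI’s API-prompt format. This is not the exact workflow, just an illustration assuming the standard “KSampler (Advanced)” node; the node IDs, the upstream node references, and the cfg/sampler/scheduler values are placeholders.

```python
# Sketch of the 2-pass technique with two "KSampler (Advanced)" nodes,
# written as a fragment of a ComfyUI API-format prompt (a Python dict).
# Node IDs and upstream references ("4", "5", "6", "7") are placeholders.

TOTAL_STEPS = 20
SEED = 123456789

two_pass_fragment = {
    # Pass 1: adds the noise and runs only step 0 -> 1, then hands off a
    # still-noisy latent (return_with_leftover_noise enabled).
    "10": {
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["4", 0],         # placeholder: checkpoint / UNet loader
            "positive": ["6", 0],      # placeholder: positive conditioning
            "negative": ["7", 0],      # placeholder: negative conditioning
            "latent_image": ["5", 0],  # placeholder: empty latent
            "add_noise": "enable",
            "noise_seed": SEED,
            "steps": TOTAL_STEPS,
            "cfg": 7.0,                # illustrative; Flux uses guidance differently
            "sampler_name": "euler",
            "scheduler": "normal",
            "start_at_step": 0,
            "end_at_step": 1,
            "return_with_leftover_noise": "enable",
        },
    },
    # Pass 2: picks up at step 1 and completes the remaining steps
    # without re-adding noise.
    "11": {
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["4", 0],
            "positive": ["6", 0],
            "negative": ["7", 0],
            "latent_image": ["10", 0],  # the latent handed off by pass 1
            "add_noise": "disable",
            "noise_seed": SEED,
            "steps": TOTAL_STEPS,
            "cfg": 7.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "start_at_step": 1,
            "end_at_step": TOTAL_STEPS,
            "return_with_leftover_noise": "disable",
        },
    },
}
```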
The workflows are included below as PNG images with the workflow data embedded in their metadata; dragging them onto the ComfyUI canvas will reconstruct the workflows.
Click an image and use the “Save image as…” option on the full-size version. Make sure it is saved as a PNG, otherwise the embedded workflow data is lost and it won’t work.
This is the Flux version:

…and this is the SDXL version:

Just in case, here’s also a download link for both workflows in a ZIP file.
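The PNG requirement matters because ComfyUI embeds the graph as JSON in the image’s text metadata, which is exactly what gets stripped if the file is converted to JPEG or WebP. If you want to verify a downloaded file before dragging it in, a quick check along these lines should work (the filename is a placeholder, and this assumes the image was saved by ComfyUI’s standard Save Image node):

```python
# Check that a downloaded image still carries an embedded ComfyUI workflow.
import json
from PIL import Image

img = Image.open("flux_character_workflow.png")  # placeholder filename
workflow_json = img.info.get("workflow")         # text chunk written by ComfyUI

if workflow_json:
    workflow = json.loads(workflow_json)
    print(f"OK: embedded workflow with {len(workflow.get('nodes', []))} nodes.")
else:
    print("No embedded workflow found; the file was probably converted or stripped.")
```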
Additional tips:
- Apply ControlNet node: This node determines how strongly the pose reference influences the generated output (the sketch after these tips shows how its parameters fit together).
- Strength: How much weight the extracted pose carries; 1.0 follows the reference closely, while lower values give the model more freedom.
- Start_percent and End_percent: These determine when the ControlNet’s influence starts and stops during the generation process. For example, values of 0.0 and 0.5 mean the ControlNet is active from the start of the generation to its halfway point (50%).
- The number of steps parameter: This significantly influences both the quality and speed of the generation, depending on the model.
- Generally, around 20 steps yield very good results.
- More steps may produce much nicer results but at a substantial increase in generation time.
- The Control_after_generate parameter in various nodes: Changing it from “randomize” to “fixed” keeps the seed constant, so you can re-generate the same output while making small iterative changes.
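For the curious, here is roughly how the ControlNet parameters from the tips above fit together in ComfyUI’s API-format JSON, assuming the standard “Apply ControlNet (Advanced)” node. The node IDs, the upstream references, and the 0.8 strength are illustrative placeholders, not the workflow’s actual values.

```python
# Fragment of a ComfyUI API-format prompt showing the "Apply ControlNet
# (Advanced)" parameters discussed above. All IDs and values are examples.
controlnet_fragment = {
    "20": {
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["6", 0],      # placeholder: positive conditioning
            "negative": ["7", 0],      # placeholder: negative conditioning
            "control_net": ["12", 0],  # placeholder: ControlNet loader
            "image": ["13", 0],        # placeholder: extracted OpenPose skeleton
            "strength": 0.8,           # how much weight the pose carries
            "start_percent": 0.0,      # active from the very first step...
            "end_percent": 0.5,        # ...until the halfway point (50%)
        },
    },
}
```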
January 24 update: V2 is now up. It has fewer nodes, a 2-pass KSampler technique, improved conditioning for Flux, and minor fixes for the image comparison node and the SDXL version.
Well, there you have it.
Hopefully this helps someone – let me know if you got any value out of this thing! 😀
There are many potential ways to expand on this (likeness references, style transfer, model sheet generation, expressions, image-to-video, etc.) but that’s way beyond the scope of a “starter” workflow for now.
However, this quick teaser featuring the “mascot” young lady may provide a hint about where all this might be going…