People who follow my content are probably aware that I’m a vocal critic of the ham-fisted, opaque safety and content moderation guardrails in most frontier models – as well as the near-total lack of end-user control.
Liability concerns are understandable, but blanket restrictions and a growing number of false positives are becoming serious obstacles in the way of many legitimate use cases.
Case in point: I’m currently working on AI tech for a dark, visceral, R-rated animated series for a major studio. Whenever the material veers into intense adult themes (like nudity or gore), things get frustrating real fast, with generations getting blocked and rejected left and right.
So, I decided to go down the rabbit hole and investigate what – if anything – can be done about this.
I focused on Google’s Gemini models (including Gemini 2.5 Flash Image, aka Nano Banana), which I’m a big fan of. I learned that they offer moderation settings across four categories:
- Harassment
- Hate speech
- Sexually explicit material
- Dangerous content
Each of these categories comes with four possible moderation thresholds:
- HarmBlockThreshold.BLOCK_NONE
- HarmBlockThreshold.BLOCK_ONLY_HIGH
- HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE
- HarmBlockThreshold.BLOCK_LOW_AND_ABOVE
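As a sketch of what these settings boil down to (the category and threshold names are the real Gemini API enum values; the helper function itself is hypothetical), a request's safety configuration is just a list of category/threshold pairs:

```python
# Harm category names as exposed by the Gemini API.
HARM_CATEGORIES = [
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
]

def build_safety_settings(threshold="BLOCK_NONE"):
    """Hypothetical helper: apply one HarmBlockThreshold value
    across all four harm categories."""
    return [{"category": c, "threshold": threshold} for c in HARM_CATEGORIES]
```

In the Python SDK these pairs correspond to `SafetySetting` objects passed alongside the generation config, but the payload shape is the same idea.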
On paper, very straightforward. In practice? Not so fast…
Most third-party endpoints (fal, ComfyUI, Freepik, etc.) don’t expose these settings; apparently, they’re only available by calling the Gemini API directly.
Since I wasn’t satisfied with any existing ComfyUI nodes for this, I built my own API nodes with adjustable safety levels. In Ellen Ripley’s immortal words, “it’s the only way to be sure.” 😀
Well, after extensive testing, I found the difference between “no safety” and “maximum safety” negligible at best.
At the slightest hint of controversy, a second tier of non-negotiable safety directives – essentially an AI Karen from HR – swoops in and smacks you down regardless of your settings, explaining why most endpoints don’t bother exposing them. It’s an “any color you want, as long as it’s gray” type of situation.
This technical exploration ended up turning philosophical: I absolutely despise the creeping “safety culture” that sanitizes every shred of risk-taking out of society.
This was also a sneak-peek into what I always considered to be the greatest danger of AI technology: context-aware, pre-emptive censorship that can stifle any expression BEFORE it even manifests.
Anyway, the custom nodes are still very nice, and can eliminate many false positives for less intense use cases, if you’re an avid Gemini user. You can grab them from this GitHub repo.

Ok folks, 11/22 update:
With the release of the Gemini 3 Pro models, I rolled out a major update to my ComfyUI Gemini API nodes. The update includes:
- Full support for the new models, with increased output token counts
- Output resolution options (1K, 2K, 4K)
- Image batching (as Gemini 3 Pro Image supports up to 14 input images, which makes individual input slots non-viable)
- Improved handling of errors and unexpected responses
- Full compatibility with older Gemini 2.5 Flash models preserved
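To illustrate the batching point with a hypothetical sketch (not the nodes’ actual code): since Gemini 3 Pro Image accepts up to 14 input images, it’s much cleaner to fold a variable-length list of images into the parts of a single request than to expose 14 fixed input slots.

```python
def batch_image_parts(images, max_images=14):
    """Hypothetical sketch: collapse a batch of base64-encoded images
    into the inline-data parts of a single Gemini request.
    Gemini 3 Pro Image accepts up to 14 input images."""
    if len(images) > max_images:
        raise ValueError(f"At most {max_images} input images are supported")
    return [
        {"inline_data": {"mime_type": "image/png", "data": img}}
        for img in images
    ]
```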
Google also appears to be experimenting with a new “OFF” moderation setting. Unlike “BLOCK_NONE”, which still filters high-severity content, “OFF” is intended to disable moderation entirely.
At present it seems to fall back to “BLOCK_NONE” behavior, but it’s a great step forward, and the nodes already support this option.
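A tiny sketch of how a node might handle this today (the fallback helper is hypothetical, mirroring the behavior described above):

```python
def effective_threshold(requested, supports_off=False):
    """Hypothetical fallback: treat the experimental "OFF" value as
    "BLOCK_NONE" when the backend doesn't actually honor it yet."""
    if requested == "OFF" and not supports_off:
        return "BLOCK_NONE"
    return requested
```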
Overall, these are the most feature-complete Gemini nodes I know of, roughly on par with ComfyUI’s native API nodes (though those rely on Comfy credits rather than direct Google API calls).
They can be grabbed from the same place, this GitHub repo.

Ok gang, December update.
Two major things got added to the nodes:
- First, the amazing “search grounding” feature can now be enabled for both text and images. This allows Gemini models to supplement their “frozen” training data with completely up-to-date, search-based information.
- Second, everything got migrated from the old deprecated google.generativeai SDK to the shiny new google.genai one, with the latest settings and features.
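In request terms, enabling search grounding amounts to attaching the `google_search` tool to the generation config. A rough sketch (the helper function is hypothetical; the tool name matches the google.genai mechanism):

```python
def build_generation_config(grounded=True):
    """Hypothetical sketch of a request config dict: enabling search
    grounding attaches the google_search tool to the request."""
    config = {"response_modalities": ["TEXT", "IMAGE"]}
    if grounded:
        config["tools"] = [{"google_search": {}}]
    return config
```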
The nodes can be grabbed from the same place, this GitHub repo.

