People who follow my content are probably aware that I’m a vocal critic of the ham-fisted, opaque safety and content moderation guardrails in most frontier models – as well as the near-total lack of end-user control.
Liability concerns are understandable, but blanket restrictions and a growing number of false positives are becoming serious obstacles to many legitimate use cases.
Case in point: I’m currently working on AI tech for a dark, visceral, R-rated animated series for a major studio. Whenever the material veers into intense adult themes (like nudity or gore), things get frustrating real fast, with generations getting blocked and rejected left and right.
So, I decided to go down the rabbit hole and investigate what – if anything – can be done about this.
I focused on Google’s Gemini models (including Gemini 2.5 Flash Image, aka Nano Banana), which I’m a big fan of. I learned that they offer moderation settings across four categories:
- Harassment
- Hate speech
- Sexually explicit material
- Dangerous content
Each of these categories comes with four possible moderation thresholds:
- HarmBlockThreshold.BLOCK_NONE
- HarmBlockThreshold.BLOCK_ONLY_HIGH
- HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE
- HarmBlockThreshold.BLOCK_LOW_AND_ABOVE
On paper, very straightforward. In practice? Not so fast…
Most 3rd-party endpoints (fal, ComfyUI, Freepik, etc.) don’t expose these settings; apparently, they’re only available if you go directly through Google’s Gemini API.
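For reference, here’s roughly what going direct looks like – a minimal sketch using Google’s google-genai Python SDK, not my actual node code; the model id and prompt are illustrative, so double-check the current docs:

```python
# Minimal sketch: calling Gemini directly with explicit safety settings.
# Assumes the google-genai SDK (pip install google-genai) and a GEMINI_API_KEY env var.
from google import genai
from google.genai import types

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

# One SafetySetting per category; BLOCK_NONE is the most permissive of the four tiers.
safety_settings = [
    types.SafetySetting(category=cat, threshold=types.HarmBlockThreshold.BLOCK_NONE)
    for cat in (
        types.HarmCategory.HARM_CATEGORY_HARASSMENT,
        types.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        types.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        types.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
    )
]

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # aka Nano Banana; verify the current model id
    contents="A gritty, hand-painted storyboard frame of ...",  # illustrative prompt
    config=types.GenerateContentConfig(safety_settings=safety_settings),
)

# Image models return the picture as inline bytes alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data:
        with open("frame.png", "wb") as f:
            f.write(part.inline_data.data)
```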
Since I wasn’t satisfied with any of the existing ComfyUI nodes for this, I built my own API nodes with adjustable safety levels. In Ellen Ripley’s immortal words, “it’s the only way to be sure.” 😀
Well, after extensive testing, I found the difference between “no safety” and “maximum safety” negligible at best.
At the slightest hint of controversy, a second tier of non-negotiable safety directives – essentially an AI Karen from HR – swoops in and smacks you down regardless of your settings, which explains why most endpoints don’t bother exposing them. It’s an “any color you want, as long as it’s gray” type of situation.
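For the curious: the API does at least tell you when that second tier fires. A hedged sketch, reusing the `client` and `safety_settings` from above – field names per the google-genai SDK, and whether image responses populate all of them is something to verify yourself:

```python
# Inspect why a generation was refused, regardless of your safety settings.
response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # illustrative model id
    contents="...",  # whatever prompt tripped the filter
    config=types.GenerateContentConfig(safety_settings=safety_settings),
)

if response.prompt_feedback and response.prompt_feedback.block_reason:
    # The prompt itself was rejected before generation even started.
    print("Prompt blocked:", response.prompt_feedback.block_reason)

for cand in response.candidates or []:
    # Values like SAFETY / PROHIBITED_CONTENT / IMAGE_SAFETY show up here.
    print("Finish reason:", cand.finish_reason)
    for rating in cand.safety_ratings or []:
        print(rating.category, rating.probability)
```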
This technical exploration ended up turning philosophical: I absolutely despise the creeping “safety culture” that sanitizes every shred of risk-taking out of society.
This was also a sneak peek into what I’ve always considered to be the greatest danger of AI technology: context-aware, pre-emptive censorship that can stifle any expression BEFORE it even manifests.
Anyway, the custom nodes are still very nice and can eliminate many false positives for less intense use cases, if you’re an avid Gemini user. You can grab them from this GitHub repo.

