
Hey, my friends. I’m Anna, a writer and life experimenter who tests AI in daily routines, small habits, and personal projects. I didn't plan to think about ethics when I first tried Project Genie. I was just trying to make a small game about a cat who delivers mail. But somewhere around my third attempt, the tool refused to generate a character I'd described — and I realized I had no idea why.
That moment of confusion pushed me into a few weeks of testing: what actually gets blocked, what slips through, and what I should probably avoid even if the system allows it. This isn't a comprehensive audit. It's a set of observations from someone who stumbled into these questions while trying to build harmless little projects.

Project Genie has content filters. They catch some things consistently and miss others in ways that feel unpredictable.
From what I've seen, the system reliably blocks:
What caught me off guard was the inconsistency. A prompt mentioning "battling monsters" went through fine one day, but a similar phrase triggered a block another time. According to Google's official documentation, Project Genie uses a combination of automated filters and contextual analysis, which can produce different results based on surrounding prompt details.

The blocks I hit most often weren't about obviously problematic content. They were about ambiguous phrasing:
The friction here isn't that guardrails exist — it's that I often couldn't tell why something failed until I'd rewritten it three different ways.
This is where I got genuinely cautious.
In early 2025, several game developers reported receiving takedown notices for projects that used AI-generated assets resembling copyrighted characters, even when those assets were created without explicit brand references. Project Genie's terms of service state you're responsible for ensuring your outputs don't infringe IP — but the tool doesn't always stop you from creating something that might.
I tested this carefully. When I prompted "a plumber in red overalls," the system generated art that looked... familiar. Not identical to any trademarked character, but close enough to make me uncomfortable publishing it. The guardrail didn't trigger because I hadn't named the brand. But the resemblance was obvious.
What I learned: the absence of a block doesn't mean you're legally safe. Research on AI-generated content and copyright shows that outputs can infringe even without direct copying if they're "substantially similar" to protected works.
After a few near-misses, I developed habits:
This didn't guarantee I'd never brush against someone's copyright, but it reduced the chance I'd accidentally recreate something recognizable.
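If you'd rather not rely on memory for that nudge, it can live in a tiny script instead. To be clear, this is just a sketch of a personal pre-flight check on the prompt text before I paste it into the tool; the "risky combo" list is mine, seeded with the plumber example above, and nothing here comes from Project Genie itself.

```python
# A rough pre-flight check on my prompt text before I paste it into the tool.
# The combo list is mine; add entries whenever an output looks too familiar.
RISKY_COMBOS = [
    {"plumber", "red overalls"},   # the one that made me uncomfortable above
    # {"trait", "another trait"},  # add your own near-misses here
]

def flag_familiar_combos(prompt: str) -> list[set[str]]:
    """Return every trait combination whose traits all appear in the prompt."""
    text = prompt.lower()
    return [combo for combo in RISKY_COMBOS if all(trait in text for trait in combo)]

if __name__ == "__main__":
    prompt = "a cheerful plumber in red overalls who delivers mail"
    for combo in flag_familiar_combos(prompt):
        print("Heads up:", ", ".join(sorted(combo)), "reads a lot like an existing character.")
```

It won't catch clever resemblances, only the ones I'd kick myself for repeating.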
I noticed patterns in what the tool generated by default — and what it struggled with.
When I didn't specify character demographics, outputs skewed toward certain defaults: lighter skin tones, able-bodied characters, binary gender presentations. Research on generative AI systems consistently shows these models tend to reproduce biases present in their training data, particularly around race, gender, and disability representation.
The fix required active prompting. If I wanted diverse representation, I had to request it explicitly: "a Black woman scientist," "a character using a wheelchair," "non-binary shopkeeper." The system generally handled these prompts well — but the fact that I had to ask each time was telling.
What frustrated me more were the subtle stereotypes. When I prompted "tough warrior character," I got muscular men. When I wrote "nurturing healer," the tool generated women in soft colors. It took deliberate counter-prompting to break these patterns.
I'm not saying the tool is uniquely biased — most AI systems carry these issues. But if you're building something you want others to play, the defaults matter. And defaults require pushback.
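For what it's worth, here's roughly how I force myself to fill in those details every time. It's only a sketch that assembles a plain text prompt for pasting into the prompt box, not a call to any API, and the field names and wording are my own.

```python
# Build character prompts with representation spelled out, so the model's
# defaults don't quietly decide for me. Field names and wording are my own.
def character_prompt(role: str, *, gender: str, ethnicity: str,
                     body: str = "average build", extras: str = "") -> str:
    parts = [f"a {ethnicity} {gender} {role}", body]
    if extras:
        parts.append(extras)
    return ", ".join(parts)

if __name__ == "__main__":
    print(character_prompt("warrior", gender="woman", ethnicity="Black",
                           body="broad-shouldered", extras="weathered armor, kind eyes"))
    print(character_prompt("healer", gender="man", ethnicity="South Asian",
                           extras="uses a wheelchair, carries a practical field kit"))
```

Making gender and ethnicity keyword-only means I literally can't build a character prompt without deciding them, which is the whole point.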
The most common question I see: "Does Google train on my prompts?"
According to Google's data usage policy for Project Genie, prompts and generated content are not used to train Google's models without explicit user consent, though metadata about usage patterns may be collected for service improvement.
That answer is technically reassuring — but it doesn't cover everything people worry about.
I keep a mental list of things I won't include, even if the system would accept them:
The risk isn't always that Google will misuse this data. It's that I don't fully know where the boundaries are — and I'd rather err toward caution than assume safety.
Macaron is often used to turn this kind of "safety check" into reusable prompts or small tools, rather than relying on memory each time. You can try using Macaron to organize your pre-generation checks, IP risk flags, or bias self-checks into a fixed routine and run through it quickly before each creation. No promises, no requirements: test it with a real project and see whether it genuinely helps you miss less.
Click here to try it for free!

Before I generate anything now, I run through a quick mental check:
This doesn't make me perfectly safe. But it reduces the chances I'll publish something I regret — or receive a legal notice I can't afford to fight.
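If a mental check keeps slipping (mine did), it can also live in a tiny script next to your project. The questions below are only my own distillation of the IP, bias, and privacy sections above, not an official checklist from anyone.

```python
# My pre-generation checklist as a tiny script. The questions are my own
# distillation of the IP, bias, and privacy worries above; edit them to taste.
QUESTIONS = [
    "Am I reasonably sure this won't resemble a trademarked character or franchise?",
    "Have I specified representation instead of letting the defaults decide?",
    "Is the prompt free of personal details I wouldn't want stored?",
    "Would I be comfortable publishing the result under my own name?",
]

def run_checklist() -> bool:
    """Ask each question; only a 'y' for every one counts as a pass."""
    for question in QUESTIONS:
        if input(f"{question} [y/n] ").strip().lower() != "y":
            print("Pause and rework the prompt before generating.")
            return False
    print("All clear. Go ahead and generate.")
    return True

if __name__ == "__main__":
    run_checklist()
```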

I'm still using Project Genie. But I'm using it differently than I did at first — more carefully, with fewer assumptions about what's allowed versus what's wise.
The guardrails catch some things. Not everything. And the gaps mean I'm the one responsible for thinking through what I'm creating, not just whether the system lets me create it.
Your projects, your call. But if you're building anything public-facing, the extra friction of careful prompting is probably worth it.