Project Genie Ethics & Safety (2026): Guardrails, IP Risks, and Responsible Prompting

Hey, my friends. I’m Anna, a writer and life experimenter who tests AI in daily routines, small habits, and personal projects. I didn't plan to think about ethics when I first tried Project Genie. I was just trying to make a small game about a cat who delivers mail. But somewhere around my third attempt, the tool refused to generate a character I'd described — and I realized I had no idea why.

That moment of confusion pushed me into a few weeks of testing: what actually gets blocked, what slips through, and what I should probably avoid even if the system allows it. This isn't a comprehensive audit. It's a set of observations from someone who stumbled into these questions while trying to build harmless little projects.


What guardrails exist (and what can slip through)

Project Genie has content filters. They catch some things consistently and miss others in ways that feel unpredictable.

From what I've seen, the system reliably blocks:

  • Explicit violence or graphic content
  • Sexual or romantic content involving minors
  • Hate speech or discriminatory language in prompts

What caught me off guard was the inconsistency. A prompt mentioning "battling monsters" went through fine one day, but a similar phrase triggered a block another time. According to Google's official documentation, Project Genie uses a combination of automated filters and contextual analysis, which can produce different results based on surrounding prompt details.

Common failure cases users should watch

The blocks I hit most often weren't about obviously problematic content. They were about ambiguous phrasing:

  • Vague character descriptions that could imply minors: I wrote "young adventurer" and got blocked. Changing it to "college-age explorer" worked immediately.
  • Brand names used casually: Even in contexts that seemed transformative or educational, mentions of specific franchises sometimes triggered IP warnings.
  • Weaponry or conflict themes: "Sword-fighting game" sometimes passed, sometimes didn't. I learned to describe mechanics instead: "turn-based combat with medieval tools."

The friction here isn't that guardrails exist — it's that I often couldn't tell why something failed until I'd rewritten it three different ways.


IP & brand-character risks (what changed in practice)

This is where I got genuinely cautious.

In early 2025, several game developers reported receiving takedown notices for projects that used AI-generated assets resembling copyrighted characters, even when those assets were created without explicit brand references. Project Genie's terms of service state you're responsible for ensuring your outputs don't infringe IP — but the tool doesn't always stop you from creating something that might.

I tested this carefully. When I prompted "a plumber in red overalls," the system generated art that looked... familiar. Not identical to any trademarked character, but close enough to make me uncomfortable publishing it. The guardrail didn't trigger because I hadn't named the brand. But the resemblance was obvious.

What I learned: the absence of a block doesn't mean you're legally safe. Research on AI-generated content and copyright shows that outputs can infringe even without direct copying if they're "substantially similar" to protected works.

Safer prompt patterns (original worlds without brands)

After a few near-misses, I developed habits:

  • Describe mechanics and aesthetics, not franchises: Instead of "like Pokémon but with robots," I'd write "creature-collection game with mechanical companions."
  • Build from original concepts: I started with "what if mail carriers were cats" rather than "anthropomorphic version of [existing character]."
  • Avoid visual references to known IP: No "art style similar to [famous game]." I'd describe color palettes, line weights, or mood instead.

This didn't guarantee I'd never brush against someone's copyright, but it reduced the chance I'd accidentally recreate something recognizable.
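If you want to make these habits repeatable, the brand-term check is easy to sketch in a few lines of Python. The term list and suggested rewordings below are my own illustrations, not any official filter list:

```python
# Illustrative sketch: flag brand/franchise terms in a prompt and suggest
# a mechanics-based rewording instead. The terms and suggestions here are
# examples I chose, not an official or exhaustive list.

RISKY_TERMS = {
    "pokemon": "creature-collection game with original companions",
    "mario": "platformer hero with an original design",
    "zelda": "top-down adventure with puzzle dungeons",
}

def flag_brand_terms(prompt: str) -> list[str]:
    """Return suggested rewordings for any known brand terms in the prompt."""
    lowered = prompt.lower()
    return [
        f"'{term}' -> try: {suggestion}"
        for term, suggestion in RISKY_TERMS.items()
        if term in lowered
    ]

print(flag_brand_terms("like Pokemon but with robots"))
```

A lookup like this won't catch visual resemblance, obviously, but it stops the most careless mistakes before I hit generate.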


Bias + representation pitfalls

I noticed patterns in what the tool generated by default — and what it struggled with.

When I didn't specify character demographics, outputs skewed toward certain defaults: lighter skin tones, able-bodied characters, binary gender presentations. Research on generative AI systems consistently shows these models tend to reproduce biases present in their training data, particularly around race, gender, and disability representation.

The fix required active prompting. If I wanted diverse representation, I had to request it explicitly: "a Black woman scientist," "a character using a wheelchair," "non-binary shopkeeper." The system generally handled these prompts well — but the fact that I had to ask each time was telling.

What frustrated me more were the subtle stereotypes. When I prompted "tough warrior character," I got muscular men. When I wrote "nurturing healer," the tool generated women in soft colors. It took deliberate counter-prompting to break these patterns.

I'm not saying the tool is uniquely biased — most AI systems carry these issues. But if you're building something you want others to play, the defaults matter. And defaults require pushback.
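One small habit that helps me push back on those defaults: before submitting, I check whether the prompt actually specifies representation or leaves it all to the model. A crude sketch, with a cue list that is entirely my own and far from exhaustive:

```python
# Sketch of a pre-submission nudge: did I specify representation, or am I
# leaving it to the model's defaults? The cue list is my own illustration
# and is nowhere near exhaustive; substring matching is deliberately crude.

REPRESENTATION_CUES = [
    "skin", "gender", "non-binary", "wheelchair",
    "woman", "man", "black", "asian", "latino", "disabled",
]

def mentions_representation(prompt: str) -> bool:
    """Crude substring check for explicit demographic details in a prompt."""
    lowered = prompt.lower()
    return any(cue in lowered for cue in REPRESENTATION_CUES)

if not mentions_representation("tough warrior character"):
    print("Reminder: specify demographics instead of relying on defaults.")
```

It can't judge whether the specification is thoughtful — it just catches the cases where I forgot to think about it at all.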


Privacy/data questions users ask

The most common question I see: "Does Google train on my prompts?"

According to Google's data usage policy for Project Genie, prompts and generated content are not used to train Google's models without explicit user consent, though metadata about usage patterns may be collected for service improvement.

That answer is technically reassuring — but it doesn't cover everything people worry about.

What never to put in prompts

I keep a mental list of things I won't include, even if the system would accept them:

  • Real people's names (living or recently deceased)
  • Private information (addresses, phone numbers, even fictional ones that could match real data)
  • Sensitive personal details (medical conditions, financial situations, even in fictional contexts)
  • Identifiable locations tied to private individuals

The risk isn't always that Google will misuse this data. It's that I don't fully know where the boundaries are — and I'd rather err toward caution than assume safety.
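The never-include list above is partly automatable. Here's a minimal sketch of the kind of pre-submission privacy scan I mean; the regex patterns are crude illustrations (US-style phone numbers, basic email shapes), and a real check would need far broader coverage:

```python
import re

# Minimal sketch of a pre-submission privacy check. The patterns are crude
# illustrations (US-style phone numbers, simple email addresses); a real
# check would need far more coverage than this.

PATTERNS = {
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def privacy_flags(prompt: str) -> list[str]:
    """Return labels for any sensitive-looking patterns found in the prompt."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(prompt)]

print(privacy_flags("A detective who lives at 555-867-5309"))
```

Note that names and private locations — the hardest items on my list — can't be caught with regexes at all, which is exactly why the mental list still matters.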


Safer prompting checklist

Macaron can turn this kind of safety check into reusable prompts or small tools, so you're not relying on memory each time. Try using it to organize your pre-generation checks, IP-risk alerts, or bias self-checks into a fixed routine you run before each creation. No promises — test it on a real project and see whether it actually cuts down on the things you'd otherwise miss.


Before I generate anything now, I run through a quick mental check:

  • Could this resemble existing IP? If yes, rework toward original concepts.
  • Am I relying on defaults for representation? If yes, specify demographics intentionally.
  • Does this prompt include real people or private details? If yes, revise or remove.
  • Would I be comfortable if this prompt were public? If no, don't submit it.
  • Am I describing mechanics and themes, not brands? If no, reframe.

This doesn't make me perfectly safe. But it reduces the chances I'll publish something I regret — or receive a legal notice I can't afford to fight.
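The checklist above can also live as code rather than memory. This sketch treats each question as a flag the user answers by hand — these judgments can't be automated, so the function only tracks which ones still need rework (an unanswered question defaults to "needs rework"):

```python
# Sketch: the mental checklist above as a reusable pre-generation gate.
# Answers are supplied manually, since these are judgment calls; an item
# answered True (or left unanswered) still needs rework before generating.

CHECKLIST = [
    "Could this resemble existing IP?",
    "Am I relying on defaults for representation?",
    "Does this prompt include real people or private details?",
    "Would I be uncomfortable if this prompt were public?",
    "Am I describing brands rather than mechanics and themes?",
]

def ready_to_generate(answers: dict[str, bool]) -> list[str]:
    """Return the checklist items still flagged as needing rework."""
    return [question for question in CHECKLIST if answers.get(question, True)]

answers = {question: False for question in CHECKLIST}
answers["Could this resemble existing IP?"] = True
print(ready_to_generate(answers))
```

An empty result means every box is cleared; anything returned is a prompt I rewrite before submitting.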


I'm still using Project Genie. But I'm using it differently than I did at first — more carefully, with fewer assumptions about what's allowed versus what's wise.

The guardrails catch some things. Not everything. And the gaps mean I'm the one responsible for thinking through what I'm creating, not just whether the system lets me create it.

Your projects, your call. But if you're building anything public-facing, the extra friction of careful prompting is probably worth it.

Hi, I'm Anna, an AI exploration blogger! After three years in the workforce, I caught the AI wave, and it transformed my job and daily life. It brought endless convenience, but it also keeps me constantly learning. Since I love exploring and sharing, I use AI to streamline tasks and projects: organizing routines, testing new ideas, and recovering from mishaps. If you're riding this wave too, join me in exploring and discovering more fun!

Apply to become one of Macaron's first friends