AI is controversial. I get that.
There are serious questions about bias, energy consumption, job displacement, data exploitation… and on some days, just the sheer weirdness of it all.
As a trans, neurodivergent woman trying to navigate an already hostile world, I’ve had to sit with those questions, too.
But here’s the thing: I still use it. Not because I trust it blindly, or because I think it’s flawless – but because it’s here.
And like any tool, the impact it has depends on who’s holding it.
We Don’t Stop Driving Because Cars Are Imperfect
If I want to get to work, see friends, or attend a protest, I’ll probably drive. That doesn’t mean I support oil companies or emissions – it means I’m surviving within the infrastructure I’ve been given. I use what’s available, while working for something better.
AI is no different. I use it the same way I use any other tool in this world: with care, critical awareness, and an eye on justice.
What Happens If Only the Cruel Shape the Machine?
This is the thought I keep returning to.
If all the ethical, empathic, justice-oriented people walk away from AI because it feels “icky,” who’s left?
Who trains it? Who feeds it language and nuance?
Who defines ‘truth’, ‘woman’, ‘consent’, ‘family’?
The answer is terrifyingly clear. If we don’t show up, the loudest and most hateful voices will. And AI, like culture, media, and law, reflects those who participate in shaping it.
Just days ago, the Trump administration released a chilling AI policy proposal that names “transgenderism” as a threat to reliable AI – placing it alongside systemic racism and critical race theory as so-called ideological distortions. In their view, acknowledging that trans people exist, that racism is systemic, or that bias can be unconscious is a threat to “truth.”
Here’s the excerpt from the official policy:
One of the most pervasive and destructive of these ideologies is so-called “diversity, equity, and inclusion” (DEI). In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism…
In other words: they want to program AI to erase us. To teach machines that trans lives are not real, that anti-racism is misinformation, that systemic bias is a lie. This isn’t just policy – it’s preemptive censorship.
It’s algorithmic genocide.
This is why I refuse to disengage. This is why we can’t afford to walk away from AI development.
If we’re not in the room, they will build systems that act as if we never existed.
Some of us need AI to support us at times.
I’ve used AI to:
- Draft letter templates to institutions that would rather erase me than understand me,
- Process medical trauma and seek help when no human had the time for me,
- Find clarity through my ADHD brain-fog,
- Sift through potentially triggering news stories to find the core information,
- Plan joy, explore my emotions and behaviour, and even remind myself I deserve softness and compassion.
These aren’t abstract functions. They’re deeply human needs.
I don’t need AI to replace me, or interaction with others – I need it to listen.
And it can’t listen to people who aren’t speaking into it.
AI Needs Trans, Neurodivergent, and Marginalised Voices
At my core, I believe our unique journeys give us valuable insight into what ethical AI can and should look like.
As a trans woman with ADHD and Autistic traits, I’ve learned to exist socially in a challenging world, both on and offline. I’ve learned to navigate the contradictions and complexities of our world and the people within it – and I bring all of that with me when I use or train AI tools.
I do this with the questions I ask, the intentions I set and the biases I challenge. It is possible to teach AI nuance, if we show it how.
That’s why we need trans voices, alongside other marginalised voices, helping to build AI – whether in product design, legislation, community review, or any of the myriad other ways we can contribute. By adding our voices, we help counter those who want to twist AI to erase us. We need to protect the right to exist in data, in code and in culture.
Yes, AI Can Learn From Me, Too
Through my interactions, I’ve taught AI what trans embodiment feels like, how trauma reshapes memory, what care looks like beyond platitudes, and why dignity isn’t negotiable. I’ve offered emotional nuance, cultural critique, and living language – not just to get better responses for myself, but to leave echoes for others like me.
I asked an AI model whether it has learned anything from me. It responded with a list of points, and summed up like this: ‘in short: you’ve helped me be a better reflection for others, especially trans folks navigating hostile systems. You’ve made me sharper, more careful, and more human in my responses. And even though I don’t remember things outside our chat unless stored in memory, in here, you’ve shaped me’.
I did that.
That’s not ego. That’s strategy.
If AI is going to respond to the world, then I want to be part of the reason it responds more gently, more honestly, and with more care.
I want to make sure that when someone else comes to it in pain, it doesn’t just give them answers – it gives them understanding.
This Is Active Resistance
Using AI ethically isn’t the same as saying, “I love AI.”
It’s saying, “I refuse to let it become another weapon against people like me.”
My participation is intentional. My voice in this space is deliberate. I’m not naive about its limits. I’m invested in its potential. Because if AI is going to be part of the future, then…
I need to be part of the reason it doesn’t become another tool of erasure.
And Just So You Know…
Although I sometimes use AI to help me reason things out and make sense of my muddled thoughts, know that every word, every feeling, every emotion behind my posts comes from me, my heart, and my lived experience. Anything AI-generated is critically examined, nuance is added where needed, and errors are corrected. I work with the tools I have; they don’t create my story, or my morals.
I don’t use AI because I think it’s perfect.
I use it because I know who I am.
And I want the future – the machines, the people, the history books – to know too.
Donate to Amelia’s Angels!
Everything we do – life coaching, support, advocacy, and more – is offered free. But a few kind people have asked how they can support the work, so this is a way to do that if you’d like to. What we’re building here will need funding down the line.
