🤖 Why I’m Not Anti-AI (and Why We Need to Separate the Technology from Those Who Control It)
It’s become fashionable in many corners of the open-source and IndieWeb worlds to say “I’m against AI.” And I understand why. The industry surrounding it has done little to earn people’s trust - extractive data practices, environmental impact, closed-source monopolies, and the creeping sense that we’re building machines to replace ourselves.
But here’s the thing: I’m not anti-AI. I’m anti-exploitation.
🧠 The Technology Itself Is Not the Enemy
AI, in the broadest sense, is the logical extension of what humans have always done: building tools to make life easier. Whether it’s a wheel, a printing press, a search engine, or a transformer model, the intent is the same - to automate complexity so we can focus on meaning.
The problem isn’t the technology; it’s the concentration of power around it.
We’ve allowed innovation to become something that happens to people, rather than for them - a race for profit rather than progress. But to reject AI itself because of corporate misuse would be like rejecting electricity because some companies built coal plants.
The technology isn’t inherently unethical. The way we deploy and govern it is.
🌍 The Real Issues Deserve Serious Attention
There are legitimate concerns - and I don’t want to minimise them.
Training large models consumes staggering amounts of energy. Data scraping without consent undermines the work of countless creators. And the human cost of annotation work, moderation, and labour outsourcing in AI supply chains is rarely discussed.
These are not small issues. But they are industrial issues - not existential ones. They should drive regulation, transparency, and accountability - not blanket opposition.
If we treat AI like an unstoppable apocalypse rather than a tool that can be governed, we remove our agency before the fight even begins.
💜 The Socialist in Me Sees a Different Future
If AI really can automate swathes of cognitive work - and I think it can - then the productivity gains should belong to everyone, not just shareholders.
That’s why I’m convinced that AI will inevitably push us towards universal basic income. Not as a safety net, but as a logical response to a world where machines create value faster than humans can.
If AI can make society more productive, then the dividends of that productivity should be distributed fairly - to the people.
Automation shouldn’t mean unemployment; it should mean liberation from meaningless labour.
🛠️ Technology Should Serve Humanity, Not Replace It
I want a future where AI is open, ethical, and collaborative - where we build tools that augment human potential, not exploit it.
That’s why I work in localisation and language technology: because I’ve seen what happens when AI is used well. It can help people communicate, connect, and create across cultures. It can translate, caption, describe, and empower - if we design it that way.
The key isn’t to abandon AI. It’s to reclaim it.
🔮 Closing Thoughts
The fear around AI is understandable. We’ve watched too many technologies sold as “democratising” end up centralised and monetised beyond recognition. But I still believe - stubbornly, maybe - that we can do better this time.
The same open-source ethos that built the web can shape AI into something genuinely liberating. It’s not about fighting machines. It’s about fighting systems of control.
So no, I’m not anti-AI.
I’m pro-human.
And I think those two things should never have been seen as opposites. 💜
Laura Hargreaves 👩‍💻
Localisation engineer, language technologist and general tinkerer. I write about tech, localisation and life on the open web — chasing internet nostalgia and genuine connections online. 🌍💜