

The issue with AI is not that it's an unimpressive technology; it's that it's built on stolen data and is incredibly wasteful of resources. It's a lot like cars in that regard: sure, it solves some problems and is more convenient than the alternatives, but its harmful externalities vastly outweigh the benefits.
LLMs are amazing because they steal the amazing work of humans: encyclopedias, scientific papers, open source projects, fiction, news, and more. Every time an LLM gets something right, it's because a human figured it out, their work was published, and some company scraped it without permission. Yet it's the LLM that gets the credit, not the person. Their very existence is unjust because they profit off humanity's collective labour and give nothing in return.
No matter how good the technology is, if it's made through unethical means, it doesn't deserve to exist. You're no more entitled to AI than content creators are entitled to their intellectual property.
Running the AI is not where the power demand comes from; it's training the AI. If a model were trained only once, it wouldn't be so bad, but obviously every AI vendor will be training all the time to keep their model competitive. That's when you get into a tragedy-of-the-commons situation where the collective power consumption spirals out of control for tiny improvements in the model.
"It will happen anyway" is not an excuse not to try to stop it. That's like saying drug dealers will sell drugs regardless of the ethics, so there's no point in trying to criminalize drug distribution.
Except there are no truly open AI models, because they all use stolen training data. Even the "open source" models like Mistral and DeepSeek say nothing about where their data comes from. The only way for there to be a genuinely open source AI model is if there were a reputable pool of training data in which every original author consented to their work being used to train AI.
Even if the model itself is open source and free to run, if there are no restrictions against using the generated output commercially, using it still makes you complicit in the theft of human-made works.
A lot of people will probably disagree with me, but I don't think there's anything inherently wrong with using AI-generated content as long as it's not for commercial purposes. If it is, you're by definition making money off content you didn't create, which to me is what makes it unethical. You could have hired the hypothetical person whose work was used to train the AI; instead, you used their work to generate value for yourself while giving them nothing in return.