
Is AI bad? The truth about “evil” robots

Jay Perlman
August 25, 2025

Artificial intelligence is here. It’s everywhere, and it’s changing how we live, work, create, and communicate.

From recommendation algorithms to generative design platforms, AI is transforming every industry it touches.

And yet, one question continues to echo louder than ever: is AI bad?

Unsurprisingly, people fear that AI will take their jobs or replace them in some form. There are also questions about its role in creativity, ethics, and even the future of our planet. And on the most sinister level: will AI turn us humans into paperclips? These are genuine, and not-so-crazy, worries as AI continues to grow more powerful.

But before we dismiss AI as “evil” and AI tools as “evil robots,” we need to pause. The truth is more nuanced than any headline can capture.

In this article, we’ll explore the concerns and fears surrounding AI. We’ll unpack why AI is bad for the environment, what makes AI art controversial, and why some fear it’s replacing human workers. At the same time, we’ll explain how AI is actually a valuable creative collaborator: one that, when used responsibly, can empower individuals, democratize tools, and unlock entirely new frontiers of expression.

Why are people asking, “is AI bad?”

AI has moved from science fiction to the palm of your hand. It writes, paints, generates videos, mimics voices, and even holds conversations. But as we already mentioned, this rapid growth of AI rightfully produces anxieties.

If you’ve never considered how AI could have negative consequences for the workforce, creativity, and society as a whole, here are a few of the main reasons people are worried.

1. The fear of displacement

AI’s efficiency is often framed as a threat to the labor force. Automation has already impacted industries like manufacturing and logistics. Now, with generative AI tools, creative professions are feeling the pressure too.

Writers, designers, product managers, developers and many other professionals are wondering what their futures hold.

But the question we need to ask isn’t simply, “Is AI bad for jobs?” Rather, it’s: How does AI change the nature of work?

The answer is, as you might have guessed, complicated. AI excels at repetitive, rules-based tasks. It does not possess the human intuition, ethics, or lived experience that creativity demands. Instead of eliminating creative roles, AI is reshaping them.

Jobs won’t vanish all at once, but instead, many will evolve, and professionals will increasingly function as AI supervisors, editors, curators, and strategists. The future belongs to those who can integrate human creativity with machine speed.

2. The rise of bad AI art

If you’ve scrolled through AI-generated artwork, you’ve probably seen the odd results: distorted hands, warped faces, surreal mishaps. It’s easy to dismiss these outputs as gimmicks or aesthetic noise. The deeper concern arises when the conversation crosses into the more serious territory of ethics.

Many AI models are trained on massive datasets scraped from the internet, including copyrighted illustrations, photographs, or written work. Artists rightly worry that their intellectual property is being used without consent.

Plus, some argue that AI-generated art lacks soul or intentionality, and simply mimics styles without any originality.

So, is AI bad for art?

Only if we let it replace artists instead of empowering them. When used ethically, AI becomes a tool like a paintbrush, a camera, or a musical synthesizer. It should not be a shortcut to skip the creative process, but rather a scaffold for building upon it.

3. The sci-fi myth of the evil robot

Popular media often depicts AI as an existential threat. Think HAL 9000, Skynet, or sentient androids who turn on their creators. These narratives play on deep-seated fears of machines becoming conscious and unpredictable.

But this version of AI doesn’t exist yet. Today’s AI systems are far too narrow for such maliciousness; they’re designed for specific tasks, like translating text, composing music, or identifying patterns in data.

They do not possess self-awareness yet, and they cannot form goals. They are still just tools, not thinkers.

The real danger isn’t that AI wants to harm us (yet), but that humans may misuse AI in harmful ways. Deepfakes, surveillance tools, and biased algorithms aren’t problems caused by evil machines. They’re caused by poor oversight and human intent.

What AI actually is and isn’t

Artificial Intelligence refers to systems that simulate human cognitive functions such as learning, problem-solving, and decision-making. Most current AI falls under “narrow AI,” meaning it performs a single task or limited range of functions with high precision.

What it’s not: conscious, emotional, or self-motivated. In other words, it doesn’t think, feel, or intend like a human. When you ask it for a poem or a photo edit, it predicts outputs based on the patterns it has learned.

This is key because it means we are still in charge. It also means that when something goes wrong, it’s not a glitch in the machine’s morality, but usually a gap in our design or ethics.

Understanding this difference changes the conversation. Instead of fearing sentient machines, we can focus on building responsible, human-centered systems.

Why AI is bad for the environment

This is one of the most urgent and under-discussed questions in tech: why is AI bad for the environment?

The reality is that training large AI models consumes vast amounts of energy. One 2019 study found that training a single large language model could produce over 284 metric tons of carbon dioxide. That’s the equivalent of five American cars over their entire lifetimes, fuel included.

The reason is that training requires massive computational resources. Models are fed trillions of data points and adjusted over weeks or months. This involves high-performance GPUs running around the clock in massive data centers, and these centers are often powered by fossil fuels, not renewables.

And once a model is trained, the environmental cost doesn’t stop. Every query, image generation, or AI-written paragraph requires computing power. This adds up quickly when millions of people use AI tools daily.
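To see how per-query costs compound, here is a rough back-of-envelope sketch in Python. Both inputs are illustrative assumptions, not measured values, since real per-query energy varies widely by model and hardware.

```python
# Back-of-envelope estimate of aggregate AI inference energy use.
# Both constants below are illustrative assumptions, not measured figures.
ENERGY_PER_QUERY_WH = 3.0      # assumed watt-hours consumed per AI query
QUERIES_PER_DAY = 100_000_000  # assumed daily queries across all users

# Convert watt-hours to kilowatt-hours, then annualize to megawatt-hours.
daily_kwh = ENERGY_PER_QUERY_WH * QUERIES_PER_DAY / 1_000
yearly_mwh = daily_kwh * 365 / 1_000

print(f"Daily energy use:  {daily_kwh:,.0f} kWh")
print(f"Yearly energy use: {yearly_mwh:,.0f} MWh")
```

Even with a modest per-query figure, the hypothetical totals land in the hundreds of thousands of kilowatt-hours per day, which is why usage scale, not just training, matters in the environmental conversation.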

So yes, right now AI is bad for the environment. But that alone doesn’t mean we should abandon the pursuit of a better AI.

The good news is that researchers and companies are already working on just that, from more efficient model designs to data centers powered by renewable energy.

As AI becomes more central to our lives, sustainability must become part of its foundation.

How to use AI responsibly

Whether you’re a student, artist, business owner, educator, or simply curious, responsible AI use begins with intention, awareness, and ongoing learning.

Here are five key principles for ethical, effective AI engagement:

1. Treat AI as a collaborator, not a creator

AI should support the creative process, not replace it. Use it to explore directions, generate variations, or test concepts.

But remember, meaningful work still comes from human insight. Think of AI as a digital co-worker. It helps you move faster, but it still needs your input to steer the ship.

2. Verify all AI-generated content

Language models and image generators can fabricate facts or details that sound plausible but are entirely false.

Always fact-check what you generate. AI is a strong assistant but a poor authority. Rely on your research and judgment to validate everything before publication or decision-making.

3. Respect creators and intellectual property

If your AI tool draws on datasets that include creative work, be mindful. Not all training data is ethically sourced.

Avoid lifting work that resembles real artists or authors without their permission. Choose platforms that are transparent about their data practices and respect copyright laws.

When in doubt, give credit or use original content.

4. Consider the environmental cost

AI isn’t abstract. It runs on physical machines in data centers that consume electricity, often from nonrenewable sources. That means carbon emissions from AI technology are a real concern for everyone.

Support tools that operate with energy efficiency or use carbon offsets. Ask vendors about their sustainability commitments. And consider whether every AI task is necessary or just convenient.

5. Learn continuously

AI is advancing rapidly, and what you know today may be outdated in six months.

To keep pace, stay curious, follow researchers, take short courses, and voraciously read about the latest developments.

Understanding how models work, how they fail, and how they’re governed makes you a more responsible user and thinker.

The best possible outcome: AI as an ally

So, we return to the original question: is AI bad?

Yes, in some ways. It is energy intensive and harms the environment if unmanaged. It poses challenges to jobs and creativity. And it requires vigilant oversight, because there are and always will be bad actors in the field.

But these risks are not reasons to fear AI. They are reasons to engage with it more thoughtfully.

Used well, AI becomes a collaborator and a partner to human intelligence.

It helps solo creators scale up. It helps small businesses do more with less. It gives students tools to think differently. It sparks new forms of writing, art, music, and design.

It’s not a shortcut. It’s an invitation to expand your toolkit.

The better question might not be “is AI bad?” It might be: how can we use AI to make things better?

What kind of future do we want?

Instead of fearing AI, let’s shape it. Let’s demand transparency, sustainability, and fairness. Let’s teach it to amplify the best parts of being human: our curiosity, our empathy, our creativity.

In the end, we get to choose the story AI tells. If we choose wisely, it won’t be about evil robots; it will be about human potential.
