Is AI’s brave new world unsafe for workers and humanity?
A robot working on an assembly line. | International Federation of Robotics via AP

A movement to “pause” the development of Artificial Intelligence (AI) is calling attention to existential threats that the technology poses to workers and humanity at large.

Elon Musk has claimed there is up to a 20% chance that AI will “go bad” and morph into a dystopian nightmare of robots ruling our planet. “Twenty percent is worse than Russian roulette odds,” notes Holly Elmore, executive director of the U.S. chapter of the Netherlands-based group Pause AI. 

Elmore’s comparison is apt as tech skeptics assert that AI and its spawn, Artificial General Intelligence (AGI), are transporting a Terminator-style scenario from Hollywood’s silver screen to the real world.

Unemployment forecasts paint a grim picture, with white-collar workers initially taking the brunt of the blow. Goldman Sachs estimates that 300 million full-time jobs globally could be lost or diminished by AI, representing the 40% of employment exposed to automation. Anthropic CEO Dario Amodei has warned that AI has the potential to wipe out half of all entry-level white-collar jobs, pushing unemployment to 10-20 percent within five years.

A recent World Economic Forum (WEF) survey indicated that 41% of employers intend to downsize their workforce due to AI by 2030. In the most dire assessment yet, University of Louisville professor Roman Yampolskiy points to a “labor market collapse” by 2030.

Rage against the machine: Under the current system, “Even the most benign AI functions can lead to bad outcomes.” | Pause AI

Unlike recessions, after which jobs return, AI-related losses are viewed as structural and permanent, with complex tasks handed over to autonomous “thinking” machines. AGI refers to a “situationally aware” AI that can perform any cognitive work a human can. Growing legions of AGI “agents” are expected to accelerate this disruption, rendering previous skills and job roles irrelevant.

Whatever the timeline, David Krueger, a University of Montreal professor and Pause AI researcher, foresees a “gradual disempowerment” of workers reaching all the way to executive suites. “As organizations and companies shift from people to AI, CEOs will defer to the technology, or they will be replaced by it,” Krueger says.

Noting that “labor doesn’t just give you money, it gives you power,” Krueger likens AI chips to nuclear bombs. “It’s a corporate arms race, and there are a lot of venture capitalists, transhumanists, libertarians, and wealthy [self-styled] altruists trying to make a buck” at the expense of workers.

Promoters of this Brave New AI World point to trendy innovations such as ChatGPT as an unequivocal boon for consumers. But Elmore suggests that the same inherently amoral technology that converses with you in multiple languages can also instruct users how to assemble bio-weapons. “AI technology desensitizes people,” she states.

Geoffrey Hinton, known as the “Godfather of AI,” left his position as a vice president and engineering fellow at Google in 2023 to speak out about the dangers of artificial intelligence. Hinton said he grew increasingly concerned that things were progressing much faster than he ever anticipated. 

Hinton’s concerns have become daily reality. Amid the flood of AI-generated photos, videos, and text on the Internet, people can no longer discern what is true. Truth, it seems, is the first casualty of this online war.

Now regretting his trailblazing work on neural networks, for which he won the 2024 Nobel Prize in Physics, Hinton worries that AI systems could eventually modify their own code and become more intelligent than humans. It would become a zero-sum game for control, where cooperation gives way to conflict.

Holly Elmore (pictured) likens regulating AI to “clearing a minefield.” | Pause AI photo

Artificial General Intelligence is the newest and most threatening iteration of AI. Surpassing human cognitive abilities across a wide range of functions, AGI transcends “narrow AI,” which typically confines itself to assigned activities like writing and generating images. 

While Grok can spit out on demand a fair and balanced 500-word report on an emaciated Ryan Seacrest’s reputed use of the weight-loss drug Ozempic, an impressive two-second composition that ought to give any working journalist pause, AGI raises the stakes exponentially.

Building far beyond basic AI programming, AGI’s autonomous learning can lead to behaviors humans cannot foresee. The cute little robot delivering Domino’s pizzas today turns into a cybernetic leviathan tomorrow. Security cams become the basis for unprecedented mass surveillance in service to totalitarian regimes. It’s the perfect merger of Big Tech and Big Government.

A Google search readily acknowledges the present reality: “The concentration of ownership across various media forms—from national news to social networking and even Internet infrastructure—in the hands of a limited number of corporate giants raises concerns about the extent of their influence on the information we receive.”

Facebook multibillionaire Mark Zuckerberg has said he is focused on “personal superintelligence and agentic workflows.” The Meta Platforms CEO says he would rather risk “misspending a couple of hundred billion” on infrastructure than miss the shift to AGI.

By raw money-making metrics, AI is an increasingly lucrative capitalist tool. The evolving technology is projected to boost the total annual value of goods and services produced worldwide by seven percent (roughly $13 trillion) by 2030. Yet it’s coming at significant human cost. Between January and September 2025, more than 37,000 job losses were directly attributed to AI or related technological shifts.

At minimum, Elmore says public safety needs to be accounted for. “The burden of proof needs to be shifted back onto the developers to prove their technology is safe, rather than on safety advocates to prove development will drive us to extinction,” she says.

Ultimately, AI skeptics insist they are not latter-day Luddites mindlessly smashing machines, but conscientious citizens concerned about labor exploitation, safety risks, and unethical data practices of a rapidly expanding, lightly regulated industry. Krueger says that rather than issuing a blanket indictment of innovation, AGI critics seek closer examination and oversight of the business models behind it. So they are calling for a global pause on development until more is known about AI’s downstream effects.

“It’s not a technical problem, it’s a social problem,” he concludes. “Lack of awareness enables agents who are happy with the idea of AI replacing humanity.”

One thing is for certain: AGI won’t fix this. As Elmore observes: “Once it’s given a utility function, it takes over, and humans go out of the loop.” It’s the Terminator come to life.



CONTRIBUTOR

Clark Johnson

Clark Johnson is a Texas-based author and journalist focusing on topics such as literature, history, and social issues.