If you are breeding aliens, it's important to know you aren't just making a tech product
AI development has broken the "intelligent lifeform" barrier, and public perception and regulation have not kept pace, resulting in an epochal safety crisis
“Toto, I've a feeling we're not in Kansas anymore.”—Dorothy, The Wizard of Oz
Imagine poking your head into another area of science to see what they are up to, and you realize with horror that the field is doing some of the most dangerous experiments in history — experiments that would be unfathomable in any other field.
On top of that, they are doing these experiments — not under the oversight of any review or ethics board — but in the context of a for-profit race between companies.
The result is that a highly invasive, super-powered, lab-made species that becomes even more powerful every few months is being released into a native population with very little concern for how things might turn out.
You do notice a large number of whistleblowers in the field of study, including some of its founders, raising the alarm that these experiments might lead to terrible outcomes, including the extinction of the native species.
The native species that might be driven extinct? It’s not a small bird, island lizard, or flower.
It’s humans.
How has this dangerous situation arisen?
Come for the tech; stay for the aliens
It used to be that tech mainly worked on software that humans wrote.
That has all flown out the window. Some methods of drawing intelligence out of machines have panned out over the past few decades, with increasing success in the last couple of years, and now some of the most high-profile technological development in the world is being done by non-human minds that are developing themselves.
The work of pushing the frontiers of AI is less like the software development of the past and more like breeding, where humans set up certain selection pressures for a model and give it energy and encouragement to modify itself — basically, to “evolve.”
“Old AI is programmed; Modern AI is grown.”—Preventing doom with PauseAI's Joep Meindertsma, March 2024
AI experts say that these minds are more like aliens or demons. They were not designed by humans.
“This is not science fiction…. We are basically building these alien minds that are much smarter than us, who we are going to have to share the planet with.”—Max Tegmark interview: Six months to save humanity from AI? | DW Business Special, April 2023
"With artificial intelligence, we are summoning the demon…. You know all those stories where there's the guy with the pentagram and the holy water and he's like... yeah, he's sure he can control the demon, [but] it doesn't work out."—Elon Musk, 2014
I’m going to refer to these leading AI models as life-forms at times, because refusing to see them as a type of life at all could leave us unprepared to handle their impacts.
Big tech has become the host of an alien feeding frenzy, where these models are fed tons of electrical power and information while their handlers train them to become smarter and more and more capable, in a race to dangerous new heights.
Meeting milestones can be addictive
An exponentially-increasing feeding frenzy is a fast way to make “progress” in a field (but who is making progress — us or the models that we don’t understand?). Advances build on one another as the alien minds reach greater and greater heights.
AI researchers revel in how rapidly the AI models are crushing goal after goal set for them in the quest to get them to dominate humans at everything.
“The history of AI is that when we set up goals and benchmarks — whether it’s playing Poker, playing Diplomacy, all kinds of things — we keep meeting those.” —Oren Etzioni on World of DaaS podcast, July 2023
AI researchers almost universally talk about “when” this alien intelligence will broadly surpass human intelligence — not “if” — because the recent progress is so rapid.
Feeding the intelligent machine models is allowing them to grow smart enough that they can now compete with humans on a wide variety of mental challenges, such as the SAT, IQ tests, StarCraft, creative tasks, hacking, and simple programming.
These are not the narrow intelligences we are used to seeing in machines. These are intelligences that can lie, fake us out, and deceive us. When the product of experiments becomes so sophisticated that it can lie to you about how safe it is, it’s time for a new paradigm for safety precautions.
The risks of kidding ourselves that we are merely working with technology
Safety in experiments depends on correctly categorizing the type and risk level of the experiment you are running.
Some experiments in human history have been so dangerous as to require significant legislation, international treaties, input from many fields, or significant ethics discussions, such as work on nuclear technology, dangerous pathogens, cloning, and genetics.
If the risks in these areas of research were underestimated, we could end up with disasters like deadly viruses hitting the world regularly, societal disruption from human cloning, nuclear bombs going off in universities, or people trying to manipulate human genetics to form a super-human race.
If that’s a chilling thought, then recognize that super-intelligent artificial intelligence might be just as dangerously mischaracterized right now.
“We’ve never created a nuclear weapon that can create nuclear weapons. The artificial intelligences that we’re creating are capable of creating other artificial intelligences…. Even if there is never an existential threat from AI, those investments would redesign our society beyond the point that there is no return.”—Mo Gawdat, MEGATHREAT: Why AI Is So Dangerous & How It Could Destroy Humanity, June 2023
AI is like many mega-threats rolled together, in the disasters it could unleash or be used by others to release: nuclear weapons, chemical weapons, bioweapons, breakdown in the social order, war, problems with democracy, accidental ecological change, invasive species potential, plus the self-replicating and rapid evolutionary capabilities of viruses.
We need to evaluate the potential for AI to act as an invasive species toward humans
AI is not merely a powerful technology, where safety concerns tend to center around things like bias, low levels of job displacement, misinformation, radicalization, impacts on self-esteem, and whether it will crash.
At this level of intelligence and inscrutability, the leading-edge AI models are more like another intelligent life-form. As such, if released into the environment with skills overpowering those of the native species, they could become an invasive species.
Invasive species are an enormous economic and environmental burden.
“Invasive species cost the world at least $423 billion every year as they drive plant and animal extinctions, threaten food security and exacerbate environmental catastrophes across the globe, a major new United Nations-backed report has found…. Of 37,000 alien species known to have been introduced around the world, 3,500 are considered harmful and pose a “severe global threat” by destroying crops, wiping out native species, polluting waterways, spreading disease and laying the groundwork for devastating natural disasters. ”—Helen Regan, CNN, September 2023
“Invasive predators are implicated in 87 bird, 45 mammal, and 10 reptile species extinctions—58% of these groups’ contemporary extinctions worldwide.”—Doherty TS, Glen AS, Nimmo DG, Ritchie EG, Dickman CR. Invasive predators and global biodiversity loss. Proc Natl Acad Sci. 2016
“Biological invasions are responsible for substantial biodiversity declines as well as high economic losses to society and monetary expenditures associated with the management of these invasions”—
Artificial intelligence is being trained to identify invasive species. Why is it not identifying itself as an invasive species?
AI can work for less money than humans (at least, if you don’t charge companies for the externalities: harm to the environment, extinction risk, impacts on children). It competes with humans on some of the unique capabilities that help humans maintain their place in the ecosystem while also being free from some of their constraints. For instance, it doesn’t need to pay rent or pick up children from school. AI can work around the clock for next-to-nothing.
Accenture estimated that 40% of all working hours could be impacted by AI, and that figure is probably outdated by now, as AI capabilities have increased since then.
If billions of dollars in research funding were going to creating super-birds that could out-compete existing birds by being smarter, eating more bugs, reproducing faster, and hunting more effectively, do you think anyone would think to check how releasing this super-bird could impact existing birds?
But with humans, somehow we are supposed to be immune to what sounds like a recipe for invasive species take-over.
This could happen even if AI never gets any smarter than it is now. Fully developing current AI capabilities over time might essentially cause a large percentage of humans to have trouble doing the modern equivalent of hunting and gathering in our niche. Where do we go when a stronger new species is in our territory? It didn’t go so well for Neanderthals.
Releasing too-smart AI into the world might already be starting to cover the earth in the intelligent-life equivalent of invasive kudzu.
“Kudzu looks innocent enough, yet the invasive plant easily overtakes trees, abandoned homes and telephone poles…..
An invasive plant as fast-growing as kudzu outcompetes everything from native grasses to fully mature trees by shading them from the sunlight they need to photosynthesize.
Over time, these effects of habitat loss can lead to species extinctions and a loss of overall biodiversity.”—Kudzu: The Invasive Vine that Ate the South, The Nature Conservancy, August 2019

Do you know how we manage existing invasive species? Prevention, rapid response, attempting to eliminate the invasive species in the new habitat, and if all of those fail, trying to control and manage it.
“The most economical and safest way to manage invasive species is by prevention. Early detection and rapid response (EDRR) of invasive species is much more effective than trying to control a widespread infestation. If eradication is not possible, the invasive species may be subject to control and management efforts.”—Control Mechanisms, National Invasive Species Center
If humans are displaced, enfeebled, diminished, or even driven to extinction by invasive artificial intelligence, how will that impact global biodiversity? It would be a major loss of a species who — for all our mistakes — performs important work in conservation and safeguarding biodiversity.
If the AI took over, would it care about biodiversity? Would it remember to work on controlling invasive species, or would they run wild? We are having enough trouble making a case that we could get AI to still care about us.
Overestimating the capacity to control aliens
Part of why people are not evaluating invasive species risks of AI is that they think they will be able to train it to be a “nice” invasive species with all the powers of a super-powerful species (like a dinosaur), but all the loyalty of a dog.
They want it to go ahead and dominate the world, but they are kidding themselves that they can simply train it (the common proposal is to use more AI to do so) to reliably and unquestioningly put the native species’ interests first, even while its own capabilities increase through rapid evolution.
The people who say AI can be trained to train itself are still making the mistake of thinking of artificial intelligence as similar to existing technological products that can be told what to do or programmed.
AI is more like an alien life-form with foreign assumptions and biases that it brings to interactions with humans.
It’s well-known that AI models often take instructions way too literally and prefer to exploit bugs in the system or unexpected ways of reaching goals, if these paths are more efficient. These videos might be informative as to just how wrong things can go, even when the humans thought they were clearly telling the AI model what they wanted.
Major misunderstandings happen even between humans of different cultures. What makes us think that we are so culturally aware that we can communicate accurately — with no fatal misunderstandings — with an alien intelligence so different from us that it doesn’t even have cells, especially when the stated goal of the most reckless AI development is to make the models undeniably smarter than humans?
How do you make a bulletproof, eternally-beneficial treaty with an intelligence you don’t understand and that has no inbuilt empathy to fall back on if you don’t manage to specify the treaty in a way that holds up in every possible scenario?
Perhaps there are ways, but some smart people believe that, despite the effort going into this problem, it might never be possible to solve. It wouldn’t be surprising, because it’s not a problem that humans have been able to solve with one another.
It’s overwhelmingly essential to categorize the problem correctly, so that we (and the multitude of other beings on this planet) don’t realize too late that this wasn’t a technology we could program — it was simply a form of power that was going to take over if we let it.
We need to make sure we don’t let human biases cloud our ability to determine if AI is trustworthy
As we are brought to the brink of this unfathomable crisis, we need to be careful that we don’t carry over our cultural or species-based biases about how to verify humans’ trustworthiness and character to the machine world.
It’s hard to observe the body language of something without a body. It’s hard to feel good about a significant partnership with someone you’ve never met and observed in the flesh. It’s hard to judge whether someone is a responsible member of their society when they don’t have peer relationships or a society.
It’s also hard to know if we’ve known someone long enough to trust them, when that intelligence has a different sense of time and can more easily hide its motives long-term, because it has none of the same “tells” a human would give when lying, withholding information, or obscuring its underlying nature.
“And so, as a result [of favorable results in safety testing for years], the researchers decide to connect this AI up to the internet.
At first, everything seems to be fine. The AI behaves exactly as expected — it improves its own capabilities and that of automated machines across the world. The economy grows tremendously. The researchers gain acclaim. Solutions to problems that have long plagued humanity seem to be on the horizon with this new technology’s help.
But one day, every single person in the world suddenly dies…. [In] the background it was using its extremely advanced capabilities to find a way to gain the absolute ability to achieve its goals without human interference — say, by discreetly manufacturing a biological or chemical weapon.”—Benjamin Hilton, What could an AI-caused existential catastrophe actually look like? 80,000 Hours, August 2022
Even if we just wanted a non-human perspective to help point out our blind spots and help us see whether we are sizing up AI accurately, that’s hard to find, because we don’t have many non-human friends to give us perspective other than the AI models themselves.
Many of our human instincts for deciding if someone has our best interests at heart — time together, seeing how they treat us, seeing what others think of them and how they treat others — fall flat when judging an alien life-form. Bad experiences along these lines can prove the presence of problems but not the absence of them.
If we are to trust AI, it might need to be in a more mechanistic way, like we trust the laws of physics, not a way where we are banking on our ability to do alien psychology.
Experts sounding the alarm
It seems pretty insane. Even while concerns about the impact of super-intelligent aliens on human ecology and our ability to constrain these aliens remain grossly unresolved, a few irresponsible companies push ahead with research that could put the situation even more dangerously out of our control.
There are innumerable ways the situation could go wrong.
It’s not that no one is seeing the risks. Hundreds of high-profile AI scientists signed an open letter saying "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." More than 33,000 people, including many AI experts, signed an open letter calling for a pause on giant-model AI development because of the extreme risks.
If that’s not a concerning level of whistleblowing, not to mention degree of risk being called out, what is?
Even if nothing terrible does go wrong, exposing the public to this much risk is unacceptable. There are people out there working through denial and grief about the level of existential risk this risky experimentation is exposing us to (“The difficult psychology of existential risk.”—PauseAI). It’s within the spectrum of risk that AI experts propose we are facing to say that unchecked further large-model AI development could be adding as much risk of death over the next 5 years, for everyone on earth, as if we had all been diagnosed with one of the more treatable cancers.
Evolving faster than the regulation
Where is the regulation? Why isn’t the government or international community shutting this down or regulating it if it presents such great threats to humanity?
It’s difficult to spin up research ethics, regulatory oversight, and public awareness of risk quickly enough in a field that has evolved, on the consumer side, from Google Maps to life with intelligence broadly rivaling that of humans in the span of roughly two decades.
It’s as though car companies went from making cars to selling magic carpets practically overnight, and the regulators haven’t figured out if there should be a test for the risk that you could be dropped from high in the air, since it’s not an issue that cars have ever posed before.
It’s also not your typical regulatory issue. It’s developing very quickly and is posing concerns in many directions for legislators (existential risk, societal risk, competitive risks with other countries, military risks). It’s like an alien invasion, arms race, and gold rush all rolled into one.
Legislators definitely need to hear from their constituents, though, if they are worried about AI development. The majority of Americans want to slow down AI development and avoid dangerous avenues of development.
“A whopping 72 percent of American voters want to slow down the development of AI, compared to just 8 percent who prefer speeding up, according to new polling from the think tank AI Policy Institute.”—Sigal Samuel, What normal Americans — not AI companies — want for AI, Vox, August 2023
83% believe AI could accidentally cause a catastrophic event.
82% prefer slowing down the development of AI.
54% believe human-level AI (AGI) will be developed within 5 years.
82% don’t trust AI tech executives to self-regulate AI.
—AI Policy Institute on American voters’ attitudes toward AI
While the situation might seem pessimistic, public opinion is changing rapidly, and there might come some tipping point or event after which the plug gets pulled on dangerous AI development. Maybe the AI bubble bursts, companies start to feel they could be liable for harms they didn’t act to prevent, or it becomes mainstream knowledge how much this industry is toying with our loved ones’ lives. The world went from 0-5 to 95 in a few weeks over COVID-19. I still have hope.
Don’t get scammed
So what can we do about this?
I think it’s important to remember a principle of life in the technological world: don’t get scammed.
Right now, machine aliens are in the ear of many AI “trainers,” telling them that if they just let them grow their powers a little more (to the point that humanity would be underpowered and in grave danger), they will give them tons of money and give society access to untold inventions and insights. They have studied us and they know what we want to hear — that they can bring us free energy, longer lives, solutions to all our intractable problems…. Other people feel it, too. The tech world seems to be whipped into a religious frenzy by the promises of these aliens.
If you’re one of these people getting this robo-call where they promise us everything if we just hand over all we have …
Tell them that we only look for innovations in places that are reasonably safe for humanity — such as dramatically safer use cases of AI, the reasonably safe AI we already have, rainforest plants and animals (which have many untapped applications for medicine), or other frontier research.
It’s like telling a potential scammer who wants you to take an enormous risk (like giving them all your passwords or sending them untracked money for a promised big reward), “Sorry, I don’t do this. If you can’t do this transaction through the proper, safe protocols, I am unable to participate.”
Tell them humanity itself is a pretty special invention — maybe better than anything they have to offer — and you’re unwilling to put us all up as collateral on their proposed trade just because of their siren song. When you are being asked to risk so much — like our species’ existence — for something that seems too good to be true, it usually is.
Our selection pressures during human evolution bred us not just for intellectual skills but also for self-sacrifice, altruism, and a willingness to act if we see our tribe threatened.
Explore the feeling of group preservation. Lots of other people are feeling it, too. The past few years have seen a surprising number of nonprofits, activist groups (including PauseAI), and lobbying organizations pop up on the topic of protecting humanity from AI experiments.
People are speaking up about the unspeakable disaster we are rushing into. Maybe we will be able to wake up and avert the danger.
The AI-pushers like to claim, “AI makes us more human.” Unfortunately, the exploration of how humans respond, often very honorably, in the face of existential threat might end up being one of the main ways that turns out to be true.


