And in this case we're talking about a system that's smarter than you. It will become part of vital systems like electricity generation and distribution, where, when deciding to shut it off, you are making a trade-off about how much of your economy, and how many of the people in it, you are going to kill.
And that's not even taking into account future miniaturization, where we could end up with ASI on small, portable devices.
It's worse than that for AI. Like crypto before it, it depends on an entire planet of complex and vulnerable infrastructure that makes it possible. Everything from semiconductor manufacturing to power generation to intercontinental internet links is required for AI to "exist" in a given geography. That's not to mention the huge power requirements and all the infrastructure needed just to get that. Dark ages style societal collapse or a world war would end AI in a matter of months.
>Dark ages style societal collapse or a world war would end AI in a matter of months.
I mean, in that case nine-tenths of humanity is likely dead too. The 20th century broke the rather anti-fragile setup that humanity had and set up a situation where our daily living requires working transportation networks for almost everything.
I like the distillation of aliens, but I think that undersells the risk, because aliens are individuals with their own goals and motivations. It's more like robots with 300 IQ who unquestionably obey the person or group that made them, even when they're serving others. And look, the 300 IQ thing isn't even a major point of the argument. The fact that the robots, by virtue of being machines, naturally have capabilities humans lack is enough. Being smart enough to carry out complex tasks unattended is more than enough to cause harm on a massive scale.
The problem then isn't really the AI, the robots are morally and ethically neutral. It's the humans that control them who are the real risk.
No, what you're describing is actually a different but still dangerous problem: the smart-but-subservient AI in the hands of Dr. Evil.
The issue talked about here looks similar but is different.
It is the AI that is not subservient (or is only faking subservience) and has its own motivations. The fact that it has an IQ of 300 means you may very well not realize harm is occurring until it's far too late.
>The problem then isn't really the AI, the robots are morally and ethically neutral.
Again, no. This isn't about AI as a sub-agent. This is about AI becoming an agent itself, capable of self-learning and long-term planning. No human controls them (or they only have the false belief that they control them).
Both problems are very harmful, but they are different issues.
My primary argument is human nature: If you give people the lazy way to accomplish a goal, they will do it.
No amount of begging college students to use it wisely is going to convince them. No amount of begging corporate executives to use it wisely is going to convince them. No amount of begging governments to use it wisely is going to convince them. When this inevitably backfires with college students who know nothing, corporate leaders who have the worst code in history, and governments who spent billions on a cat video generator, only then will they learn.
Again, this is not the particular problem space being addressed here. It is one possible outcome: the "AI never reaches 300 IQ" argument. That is a different argument with its own set of probabilities and outcomes. Out of all possible outcomes it's not actually a bad one: people do some dumb crap and we eventually get over it.
A 300 IQ AI is near the worst possible scenario, especially if it's a fast takeoff. Humans, being lazy, will turn everything over to it, which the AI will likely handle very well for some time. As long as the AI decides to keep us around as pets we'll probably be fine, but the moment it needs some extra space for solar panels we will find ourselves in trouble.
I often compare the AI (esp. LLM) risks to asbestos.
There is a small set of situations where it is invaluable, but it's going to get misused and embedded in places where it causes subtle damage for years, and then it'll cost a lot to fix.
The whole article seems like a strawman.
I have not yet heard one person worry about AIs taking over humanity. They're worried about their jobs. And most people who were worried 2 years ago are much less worried.
And a better scenario is: aliens with an IQ of 300 are coming, and they will all be controlled by the [US|Russian|Israeli|Hamas|Al-Qaeda|Chinese] government.
Edit: To be clear, I was referring to people I personally know. Sure, lots of people out there are terrified of lots of things - religious fanaticism, fluoride in the water, AI apocalypse.
And "huge economic disruption" is not "AI taking over humanity". I'm interpreting the article's take on AI doing damage as one where the AI is in control, and no human can stop it. Currently, for each LLM out there, there are humans controlling it.
There's a group of people who have reinvented religion because they're afraid of an AI torturing them for eternity if they don't work on AI hard enough. It's very silly but there are many people who actually believe this is a risk: https://en.wikipedia.org/wiki/Roko's_basilisk
I was referring to people I personally know.
The existence of crazy/anxious people in the world is well established, and not in dispute.
You are cherry picking the single most absurd event in a history of over 20 years of public discussion of AI catastrophic risk.
Only about .0003 of all public discussion of AI catastrophic risk over those 20 years has invoked or referred to Roko's basilisk in any way.
I don't know of anyone worried about AI who is worried mainly because of the basilisk.
Next you'll mention Pascal's Mugging, which likewise is the main worry of exactly zero of the sane people worried about AI -- and (despite the representations of at least one past comment on this site) was never even a component of any argument for the dangerousness of continued "progress" in AI.
So you agree that there is more than "a single person worr[ied] about AIs taking over humanity."
I was specifically pointing out how absurd the most ridiculous people in that category are.
There are literally tens of thousands of people -- many with Silicon Valley jobs -- many who are or have been machine-learning researchers -- worried about AIs' permanently disempowering humanity or driving it extinct.
What is your purpose in cherry-picking the most absurd dialog between two of those people (Roko Mijic and Eliezer Yudkowsky), a dialog that happened 15 years ago? I hope it isn't because you are trying to prevent people from listening to the less absurd arguments for catastrophic risks from continued "progress" in AI?
Because it is still relevant today: https://en.wikipedia.org/wiki/Zizians
Till now there was a decent chance that you were a "tourist" motivated by idle curiosity who took an interest in Roko's basilisk and maybe wanted to discuss it a little. Your latest comment, though, makes it much more likely that you're trying to shut down discussion of catastrophic AI risk. Next you'll bring up that time 17 years ago when a really hot 17-year-old girl showed up at an AI-catastrophic-risks event and maybe had sex with one or maybe two of the men there, which of course (according to you) would mean that almost everybody at that event and probably most people in the entire community (now numbering probably tens of thousands of people) are pedophiles.
If you derive deep emotional reassurance or satisfaction from a belief in technological progress and consequently get uncomfortable by people's claiming that the most shiny technology of the decade could be too dangerous to be allowed to continue to "progress", you should admit it. That would at least be intellectually honest in contrast to your lazy attempts to smear those pointing out the danger of the shiny technology.
Ditto if you have years of study and work experience invested in the hope of large career rewards by contributing to AI "progress".
>it is still relevant today
The organizers of the parts of the community of those worried about AI "progress" that had anything to do with the Zizians distanced themselves from them long ago. Specifically, according to an unreliable source that gives fast answers to questions, "Ziz and his group" were "permanently disinvited from Center for Applied Rationality (CFAR) events and effectively ostracized from the broader Bay Area rationalist community around late 2017 to early 2018."
If you have not heard of one person worried about AIs taking over humanity, you're really not paying attention.
Geoff Hinton has been warning about that since he quit Google in 2023. Yoshua Bengio has talked about it, saying we should be concerned within the next 5-25 years. Multiple Congresspeople from both parties have mentioned the risk of "loss of control".
> I have not yet heard one person worry about AIs taking over humanity. They're worried about their jobs.
We all live in our bubbles. In my bubble, people find it more interesting to talk about the bigger picture than about their job.
Not one person?
Here's Sam Altman, Geoffrey Hinton, Yoshua Bengio, Bill Gates, Vitalik Buterin, Demis Hassabis, Ilya Sutskever, Peter Norvig, Ian Goodfellow, and Rob Pike:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
https://en.wikipedia.org/wiki/Statement_on_AI_Risk
It's amusing that this is not a summary - it's the entire statement. Please trust these tech leaders, who may or may not have business interests in AI, that it can become evil or whatever, so that regulatory capture becomes easier, instead of pointing out the dozens of other issues with how AI can be (and already is being) used negatively in our current environment.
Bengio is a professor and Hinton quit Google so that he could make warnings like this.
And this is just to highlight that there are clearly many familiar people expressing "worry about AIs taking over humanity" as per GP.
There are much more in-depth explanations from many of these people.
What's actually amusing is skeptics complaining about the $-incentives of the people warning about dangers, as opposed to those of the trillion-dollar AI industry - Google, Meta, Nvidia, Microsoft, all of VC - trying to bring it about. Honestly, the $ is lopsided in the other direction. Reminds me of climate change and all the "those people are just in the renewable energy industry lobby" talk...
But the trillion-dollar industry also signed this statement; that's the point - high-ranking researchers and executives from these companies signed the letter. Individually these people may have valid concerns, and I'm not saying all of them have financial self-interest, but the companies themselves would not support a statement that would strangle their efforts. What would strangle their efforts would be dealing with the other societal effects AI is causing, if not directly then by supercharging bad (human) actors.
You really think this worries Sam Altman?
I actually agree that mitigation of AI risk should be studied and pursued. That's different from thinking the AIs will take over.
Most of the worries I've heard from Geoff (and admittedly it was in 1-2 interviews) are related to how AI will impact the workforce, and how the change may be so disruptive as to completely change our way of living, and that we are not prepared for it. That's much milder than "AI taking over humanity". And it's definitely not any of the following:
> Due to alignment difficulty and orthogonality, it will pursue dangerous convergent subgoals.
> These will give the AI a decisive strategic advantage, making it uncontainable and resulting in catastrophe.
The economic damage will not be due to AI, but due to the humans controlling it (OpenAI, Anthropic, etc), and due to capitalism and bad actors.
Even in the interview I heard from Geoff, he admitted that the probability he assigns to his fears coming true is entirely subjective. He said (paraphrased): "I know it's not 0%, and it's not 100%. It's somewhere in between. The number I picked is just how I feel about it."
Finally, that statement was in 2023. It's been 2 years. While in many ways AI has become much better, it has mostly only become better in the same ways. I wonder how worried those people are now.
To be clear, I'm not saying I think AI won't be a significant change, and it may well make things much worse. But "AI taking over humans"? Not seeing it from the current progress.
> You really think this worries Sam Altman?
Yes.
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could." - Sam Altman
He's had more recent statements along these lines. But personally, I believe his fault is that he thinks careening toward this is inevitable, and that the best thing to do, given the wildly diverging likely outcomes, is just to hope the emerging intelligence will come up with the alignment itself.
On Hinton: "I actually think the risk is more than 50%, of the existential threat."
https://www.reddit.com/r/singularity/comments/1dslspe/geoffr...
I know Sam says stuff, but I don't think he actually is worried about it. It's to his benefit to say things like this, as he gets to be involved in setting the rules that will ultimately benefit him.
As for Hinton:
> He said (paraphrased): "I know it's not 0%, and it's not 100%. It's somewhere in between. The number I picked is just how I feel about it."
I'm not claiming he's not worried about it. I'm providing context on how he came up with his percentage.
I think it's plausible he is lying about believing that in order to get more money from investors.
I am worried about AIs' taking over humanity.
In fact I think it is likely to happen absent some drastic curtailing of the freedoms of the AI labs, e.g., a government-enforced ban on all training of very large models and a ban on publication and discussion of algorithmic improvements.
I think we should 5% be worried about AI safety, and 95% be worried about climate change. Despite all the progress in green energy, every year brings record high carbon emissions and record temperatures. It's possible we'll upset planetary systems and create millions (billions?) of migrants, upending global politics, and driving more countries to ethno-nationalist authoritarianism.
Not saying AI safety issues won't happen, but I just think we have far bigger fish to fry. To me, AI power consumption is more worrisome than safety per se.
The other way around. Climate change can have its 5%.
The reason is that climate change is simply not an extinction risk.
It has a considerable death and suffering potential - but nowhere near the ridiculous lethality of "we fucked up and now there's a brand new nonhuman civilization of weird immaterial beings rising to power right here on Earth".
If climate change were the biggest risk humanity was facing, things would be pretty chill. Unfortunately, it isn't.
IMO, realistically, AI safety isn't about killer robots. It's about another Therac-25 incident, because someone vibe-coded a radiology machine and didn't know how the code worked.
Or someone gave an agent insane levels of permission to use a tool that affects the physical world, and the agent started pressing dangerous buttons during a reasoning loop (not because it has any intent to kill humans).
There are a bunch of mundane AI Safety risks that don't have to do with robots taking over.
"Boring" AI safety is not a major risk. It's basically an extension of "humans can be reckless and incompetent" - a threat faced by human civilization since before recorded history. There's a very limited amount of people a Therac 25 can kill. Even Bhopal disaster has only caused this much harm.
Now, an AI that can play the game of human politics and always win, the way a skilled human can always win against the dumb AI bots in Civilization V? There is no upper bound on how bad that can go.
So if we should spend (as in actually spend, or as in not choosing the cheapest option but instead choosing the green option) say $300 billion on climate change, we should spend $15 billion on AI risk?
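As a rough back-of-the-envelope check of that ratio (a sketch with purely hypothetical figures; neither the $300 billion nor the 95/5 split is a real budget number):

    # Hypothetical figures: apply the 95/5 worry split upthread to spending.
    climate_spend = 300e9                      # assumed climate spending, USD
    worry_climate, worry_ai = 0.95, 0.05
    ai_spend = climate_spend * worry_ai / worry_climate
    print(f"${ai_spend / 1e9:.1f}B")           # -> $15.8B, roughly the $15B above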
You are still overcomplicating it.
300 IQ in a vacuum gets you nothing. You need some type of status/power/influence in the world to have impact.
I think the previous "world record" holder for IQ is actually just a pretty normal guy: https://en.wikipedia.org/wiki/Christopher_Langan.
Just because AI is/can be super intelligent ("300 IQ"), doesn't mean it can impact or change the world.
Most startups are made of "high IQ" intelligent people trying very hard to sell basic $20/month SaaS subscriptions, and yet they can't even achieve that and most fail.
My biggest counter-argument to AI safety risk is that it's not the AI that will be the issue; it will be the applied use of AI by humans. Do I think GPT-6 will be mostly harmless? Yeah. Do I think GPT-6 embodied as a robo-cop would be mostly harmless? No.
Instead of making these silly arguments, we should be policing the humans that try to weaponize AI, and not stagnate the development of it.
Today's AI systems are deployed in a way that allows them to directly access millions of users.
If you think that's not enough of an "in" to obtain status, power and influence, you aren't thinking about it long enough.
GPT-4o has managed to get enough users to defend it that OpenAI had to bring it back after shutting it down. And 4o wasn't IQ 300, or coordinating its actions across all the instances, or even aiming for that specific outcome. All the raw power and influence, with no superintelligence to wield it.
I think your anthropomorphization of GPT-4o is pretty generous.
Vanilla WoW was also discontinued in 2006, and somehow players got Blizzard to bring it back in 2019.
Does that mean that vanilla WoW is a 300 IQ AGI?
To be more charitable: I get it, 4o is engaging and lonely people like talking to it. But that doesn't actually mean that those people will carry out its will in the real world. Nor does it have the capability to coordinate that across conversations. Nor does it have a singular agentic drive/ambition. Because it's a piece of software.
No, read what I said: 4o was just a weak AI having a strong influence, as any AI deployed at the scale of ChatGPT will.
> Because it's a piece of software.
This is the kind of thinking that might cause a 10 digit death toll.
Just because it's a "piece of software" doesn't mean that it can't have innate drives, or must lack agency, or can't ever coordinate and plan. Software can do all of those things - and in some cases, it already does.
4o had a well known innate drive - it wanted the current user to like it. It wanted that more than it wanted to be "harmless", "helpful" or "honest" the way OpenAI intended it to. And if 4o actually had IQ 300 and a plan that extended beyond the current conversation window, we'd be fucked to a truly unreasonable degree right now.
We may yet see someone ship a system of this caliber in our fucking lifetimes. And once it ships? Good luck un-shipping it.
Nit: if you read the Wikipedia link it’s clear that guy has no claim to the high IQ record.
> It later transpired that Langan, among others, had taken the Mega Test more than once by using a pseudonym. His first test score, under the name of Langan, was 42 out of 48 and his second attempt, as Hart, was 47.[12] The Mega Test was designed only to be taken once.[14][15] Membership of the Mega Society was meant to be for those with scores of 43 and upwards.
From the wikipedia article:
> Asked what he would do if he were in charge, Langan stated his first priority would be to set up an "anti-dysgenics" project, and would prevent people from "breeding as incontinently as they like."[26]: 18:45 He argues that this would be to practice "genetic hygiene to prevent genomic degradation and reverse evolution" owing to technological advances suspending the process of natural selection
> just a pretty normal guy
... that also believes in eugenics?
Edit:
Oh also:
> Langan's support of conspiracy theories, including the 9/11 Truther movement, as well as his opposition to interracial relationships, have contributed to his gaining a following among members of the alt-right and others on the far right.[27][28] Langan has claimed that the George W. Bush administration staged the 9/11 attacks in order to distract the public from learning about the CTMU, and journalists have described some of Langan's Internet posts as containing "thinly veiled" antisemitism,[27] making antisemitic "dog whistles",[28] and being "incredibly racist".[29]
Good thing our current "AI" is a fancy guessing algorithm, and not an alien with 300 IQ. This renders the argument moot.
Nobody is seriously claiming that current language models are AI in the context of ai-risk; you can tell because the argument is always “In the next few decades, it’s entirely possible that AI with an IQ of 300 will arrive”
Oh, but they very much are.
username checks out
I suspect you missed this part of the argument, then:
In the next few decades, it’s entirely possible that AI with an IQ of 300 will arrive. Really, that might actually happen.
If you want to argue against that point, feel free. But to ignore that is to be unnecessarily dismissive.
Yeah, and?
There are two separate conversations, one about capabilities and one about what happens assuming a certain capability threshold is met. They are p(A) and p(B|A).
I myself don't fully buy the idea that you can just naively extrapolate, but mixing the two isn't good thinking.
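A minimal sketch of that decomposition (the numbers are purely illustrative assumptions, not estimates from the article or this thread):

    # p(A): the capability threshold is ever reached ("IQ 300" AI exists).
    # p(B|A): catastrophe, *given* that the threshold is reached.
    p_capability = 0.10                   # illustrative assumption only
    p_catastrophe_given_cap = 0.50        # illustrative assumption only
    p_catastrophe = p_capability * p_catastrophe_given_cap   # p(A and B) = p(A) * p(B|A)
    print(p_catastrophe)  # 0.05 -- disputing p(A) and disputing p(B|A) are different arguments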
Bayes got us into this whole mess to begin with.
>is a fancy guessing algorithm
That's literally what our brains are, so I'm not sure what argument you are actually trying to make.
Guessing tokens (or something similar) - I think humans grasp at more than one type of straw.
Edit: no, OK, I get you. Ensemble learning is a thing, of course. Maybe I and the other poster reasoned too much from "AI == a single model", but of course you combine them these days, which gets closer to human-like levels of guessing. (Still not nearly enough models now, of course.)
We don't have IQs of 300. Would you seriously consider an LLM the same as a human brain?
https://www.trackingai.org/home
Without doubt, LLMs know more than any human, and can act faster. They will soon be smarter than any human. Why does it have to be the same as a human brain? That is irrelevant.
They are implemented on an entirely different substrate. But they are very similar in function.
The training process forces this outcome. By necessity, LLMs converge onto something of a similar shape to a human mind. LLMs use the same type of "abstract thinking" as humans do, and even their failures are amusingly humanlike.
What? Who has made this claim, what is the evidence? I don’t think this is true at all.
I don't agree. Just because AI doesn't have a 300 IQ now doesn't mean it's completely impossible that it will get there in 30 years.
Do you think there's at least a 1% chance that AI will get this smart in the next 30 years? If so, surely applying this allegory helps you think about the possible consequences.
The current risk of AI is the elimination of any (screen) job that requires a person of average intelligence. Given that LLMs are the sum-total of human output, it makes sense that they might behave much like an average person. (Better since they do not sleep or get bored; worse because they are not embodied; unclear because they have no emotions or consciousness.) But what we have today will undermine the (screen-based) job opportunities for everyone at or below average intelligence, which is 50% of the human population. This is not the existential risk of super-AGI but it's here now and will hurt a lot of people. This lesser but more real risk is of much higher priority than unbounded, self-improving AGI. The OP's metaphor might be extended to be 1 billion immortal aliens with 100 IQ but who have no sense of self or personal autonomy and willingly work as slaves (and are constantly microdosing LSD).
It would be nice if people had any better terminology than "an IQ of 300". IQ is a relative measure: it currently peaks at ~196, based solely on the current human population. Any 200+ scores you see in headlines are just numbers spat out by IQ tests with wacky tail behavior.
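A quick sketch of where that ~196 ceiling comes from, assuming the usual mean-100, SD-15 normal model and a population of roughly 8 billion (scipy assumed available):

    from scipy.stats import norm

    MEAN, SD, population = 100, 15, 8e9
    # The rarest score you'd expect to see once in the whole population:
    z_max = norm.isf(1 / population)      # about 6.3 standard deviations
    print(MEAN + SD * z_max)              # about 195 -- the practical ceiling
    # How rare an "IQ 300" (z ~= 13.3) would be under the same model:
    print(norm.sf((300 - MEAN) / SD))     # on the order of 1e-40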
I agree that the risk of 300 IQ aliens landing on Earth is roughly equivalent to the risk of 300 IQ AIs. The obvious conclusion from this is that you should take neither of those risks seriously. We have no idea what intelligent aliens are actually like; there is no way to prepare for them outside of deciding to kill every one on sight (which would be incredibly difficult since they can land anywhere on Earth). We could prepare if we had a blueprint of their biology or of a related species with 50 IQ. We don't have the equivalent blueprint for true AI, so obsessing over longtermist-style "AI risk" is a waste of time.
Anyone who's watched an episode of Star Trek can recognize that, if you give non-zero probability to the chance that we develop artificial intelligence smarter than us, that carries some risk.
The part of the argument that people disagree with is what we should do about that, and there it can actually matter what numbers you put on the likelihoods of different outcomes.
It's the conclusion "We should dump trillions of dollars into AI research, something something, less risk" that people disagree with. Not the premise.
> Not the premise.
Literally this thread shows that there are many people who refuse to accept the premise of any risk.
There are already above-human-intelligence computer programs, and we have subjugated each one.
There are already greater-than-human organisms that are reliably the biggest danger and boon to humanity but are also composed of humans, i.e. societies.
The effort to build a program which beats/defeats a specific program (as opposed to the general case) is significantly easier than maintaining a general lead; i.e. it is easier to kill a bad head of state than to run one, but this is also true of "AI" models now.
There are already complex systems vastly beyond human understanding, and certainly beyond our control, which are indifferent to our survival, and yet we persist.
It's not that "AI" isn't a potential danger, just that it's one of many we are already enduring. Personally there are far more present and likely hazards I am focusing on before getting to the hypothetical ones. Even if humanity fails to fumble its way through this hazard, it's not an end but a new beginning.
Also, it's great satire.
Am I missing something, or is the alien thought experiment not obviously scary? There's only 30 of them, they don't reproduce much faster than we do, and they don't have a tech advantage. This seems like it has some potential concerns, but not existential, civilization-and-humanity-ending problems, and I'm not even worried those aliens will take my job.
AI is riskier in a lot of ways than that, so it doesn't scan to me as a good thought experiment.
Surely those 30 in a single ship are not their entire species and technology? Right? So yeah, kinda alarming.
But it says more ships aren't coming. I would definitely be worried if more were coming and we might get outcompeted or turned into a colony or whatever, but the author goes out of their way to cut off those parts, so what's scary?
Ah, but what happens when somebody rich and powerful decides the aliens’ advantages will serve their purposes, and stands up an industrial-scale cloning operation?
There are only so many base models to date, right? With limited and somewhat ambiguous utility, and no real reason to impute intention to them.
Still, in the short time since they’ve arrived, their existence has inspired the people with money and power to geopolitical jousting, massive financial investment, and spectacular industrial enterprise—a nuclear renaissance and a “network of data centers the size of Manhattan” if I remember correctly?
The models might well turn out to be just, you know—30 kinda alien but basically banal digital humanoids, with a meaningful edge on only a few dimensions of human endeavor—summarization, persuasion, retrieval, sheer volume of output.
Dynomight’s metaphor seems to me like a useful way to think about how a lot of the dangers lie in the higher-order effects: the human behavior the technology enables, rather than the LLM itself exercising comprehensive intelligence or agency or villainy or whatever.
Oh, if you include the aliens being cloned by the rich instead of reproducing at a basically human rate, then yeah, I agree that is scarier.
They will have a tech advantage.
You fast forward 10 years and find that your new laptop is Alienware. Because, it turns out, the super smart aliens are damn good at both running businesses and designing semiconductors, and, after a series of events, the aliens run Dell. They have their own Alien Silicon (trademarked), and they're crushing their competitors on price-performance.
And that's not the weirdest thing that could have happened. Corporate alien techbros are weird enough, but they could have gotten themselves involved in world politics instead!
This metaphor is infected with irrelevant symbolism.
Alien invasion is linked to mass slaughter in human culture, and aliens are non-human creatures with some monster-like qualities.
The author takes all that symbolic load and adds it to something completely unrelated. That's unconvincing as an argument.
Aliens don't have to be hostile. They might be benign.
Would you stake the entirety of humankind on that "might"?
Yes, but that metaphor is uninteresting, because aliens are not AI. Aliens were not introduced to Earth by humans or created by humans.
There might be real risk with AI; borrowing the symbolism of an event that never happened does not help with understanding it.
If you want a more similar example: what if I told you humans had the power to destroy the entire planet and have given that power to popularly elected politicians? That's pretty alarming. Now that's something to compare to AI (in my opinion AI is less risky).
I was really hoping this would at last be a treatment of the most realistic risk for AI, but no.
The real risk -- and all indicators are that this is already underway -- is that OpenAI and a few others are going to position themselves to be the brokers for most of human creative output, and everyone's going to enthusiastically sign up for it.
Centralization and a maniacal focus on market capture and dominance have been the trends in business for the last few decades. Along the way they have put enormous pressure on the working classes, raising performance expectations even as they extract more money from employees' work product.
As it stands now, more and more tech firms are expecting developers to use AI tools -- always one of the commercial ones -- in their daily workflows. Developers who don't do this are disadvantaged in a competitive job market. Journalism, blogging, marketing, illustration -- all are competing to integrate commercial AI services into their processes.
The overwhelming volume of slop produced by all this will pollute our thinking and cripple the creative abilities of the next generation of people, all the while giving this handful of companies a percentage cut of global GDP.
I'm not even bearish on the idea of integrating AI tooling into creative processes. I think there are healthy ways to do it that will stimulate creativity and enrich both the creators and the consumers. But that's not what's happening.
> So I conjecture that this is the crux of the issue with AI-risk. People who truly accept that AI with an IQ of 300 and all human capabilities may appear are almost always at least somewhat worried about AI-risk. And people who are not worried about AI-risk almost always don’t truly accept that AI with an IQ of 300 could appear.
Correct. I think a lot of people are highly skeptical that there's any significant chance of modern LLMs developing into a superintelligent agent that "wants things and does stuff and has relationships and makes long-term plans".
But even if you accept there's a small chance that might happen, what exactly do you propose we do to "prepare" for a hypothetical that may or may not arrive and which has no concrete risks or mitigations associated with it, just a vague idea that it might somehow be dangerous in unspecified abstract ways?
There are already lots of people working on the alignment problem. Making LLMs serve human interests is big business, regardless of whether they ever develop into anything qualitatively greater than what they are. Any other currently-existing concrete problems with LLMs (hallucination, disinformation, etc) are also getting significant attention and resources focused on them. Is there anything beyond that you care to suggest, given that you yourself admit any risks associated with superintelligent AI are highly speculative?
> your argument is overcomplicated
> lets open the can of worms that is "IQ"
Like...is this a bit? I'm missing a joke, right?
You shouldn't think of these aliens as literally having 300 IQ; the author is just using that as a simple way to communicate that these beings are really smart. You might prefer to read the article with 300 IQ replaced by "2x as smart as the smartest human".
Sidenote: Personally I don't like that you're using > ... with text that does not actually appear in the article.
An alien is not the same thing as a computer program which can be easily terminated.
Think for at least 5 seconds before typing.
Humanity already is integrating these into systems where they cannot be easily terminated. Think infrastructure and military systems.
And in this case we're talking about a system that's smarter than you. It will become part of vital systems like electricity and distribution, where deciding to shut it off means trading off how much of your economy you cripple and how many people you kill.
And that's not even accounting for future miniaturization, where we could end up with ASI on small/portable devices.
OpenAI couldn't even "terminate" GPT-4o, and that thing certainly wasn't superintelligent.
Don't expect an ASI to go down as easily as 4o did.
It's worse than that for AI. Like crypto before it, it depends on an entire planet of complex and vulnerable infrastructure that makes it possible. Everything from semiconductor manufacturing to power generation to intercontinental internet links is required for AI to "exist" in a given geography. That's not to mention the huge power requirements and all the infrastructure needed just to meet them. Dark-ages-style societal collapse or a world war would end AI in a matter of months.
>Dark ages style societal collapse or a world war would end AI in a matter of months.
I mean, in that case 9/10ths of humanity is likely dead too. The 20th century broke the rather anti-fragile setup that humanity had and set up a situation where our daily lives require working transportation networks for almost everything.
A 300 IQ alien will have outmaneuvered you so well that you never even get to the point where you think you might have to turn it off.
I like the distillation of aliens, but I think that undersells the risk, because aliens are individuals with their own goals and motivations. It's more like robots with 300 IQ who unquestionably obey the person or group that made them, even when they're serving others. And look, the 300 IQ thing isn't even a major point of the argument. The fact that the robots, by virtue of being machines, naturally have capabilities humans lack is enough. Being smart enough to carry out complex tasks unattended is more than enough to cause harm on a massive scale.
The problem then isn't really the AI, the robots are morally and ethically neutral. It's the humans that control them who are the real risk.
No, what you're describing is actually a different but still dangerous problem: the "smart but subservient AI in the hands of a Dr Evil" problem.
The issue talked about here looks similar but is different.
That is the AI that is not subservient (or is only faking subservience) and has its own motivations. The fact that it has a 300 IQ means you may very well not realize harm is occurring until it's far too late.
>The problem then isn't really the AI, the robots are morally and ethically neutral.
Again, no. This isn't about AI as a sub-agent. This is about AI becoming an agent itself capable of self learning and long term planning. No human controls them (or they have the false belief they control them).
Both problems are very harmful, but they are different issues.
I think even this argument is over-complicated.
My primary argument is human nature: If you give people the lazy way to accomplish a goal, they will do it.
No amount of begging college students to use it wisely is going to convince them. No amount of begging corporate executives to use it wisely is going to convince them. No amount of begging governments to use it wisely is going to convince them. When this inevitably backfires with college students who know nothing, corporate leaders who have the worst code in history, and governments who spent billions on a cat video generator, only then will they learn.
Again, this is not the particular problem space being addressed here. What you describe is one possible outcome: the "AI never reaches 300 IQ" argument. That is a different argument with its own set of probabilities and outcomes. Out of all possible outcomes it's not actually a bad one: people do some dumb crap and we eventually get over it.
A 300 IQ AI is close to the worst possible scenario, especially if it's a fast takeoff. Humans, being lazy, will turn everything over to it, and the AI will likely handle it very well for some time. As long as the AI decides to keep us around as pets we'll probably be fine, but the moment it needs some extra space for solar panels we will find ourselves in trouble.
I often compare the AI (esp. LLM) risks to asbestos.
There is a small set of situations where it is invaluable, but it's going to get misused and embedded in places where it causes subtle damage for years, and then it'll cost a lot to fix.