interstice


Some beliefs can be better or worse at predicting what we observe; this is not the same thing as popularity.

Far enough in the future, ancient brain scans would be fascinating antique artifacts, like rare archaeological finds today. I think people would be interested in reviving you on that basis alone (assuming there are people-like things with some power in the future).

I like the decluttering. I think the title should be smaller and have less white space above it. I also think it would be better if the ToC were heavily faded until mouseover; its instant appearance/disappearance feels too abrupt.

No, I don't think so, because people could just air-gap the GPUs.

Weaker AI probably wouldn't be sufficient to carry out an actually pivotal act. For example, the GPU virus would probably be worked around soon after deployment, via air-gapping GPUs, developing software countermeasures, or simply resetting infected GPUs.

This discussion is a nice illustration of why x-riskers are definitely more power-seeking than the average activist group. Just as Eskimos proverbially have 50 words for snow, AI-risk-reducers need at least 50 terms for "taking over the world" to demarcate the range of possible scenarios. ;)

Nice overview. I agree, but I think the 2016-2021 plan could still arguably be described as "obtain god-like AI and use it to take over the world" (admittedly with some rhetorical exaggeration, but, like, not that much).

I would be happy to take bets here about what people would say.

Sure, I DM'd you.

I think making inferences from that to modern MIRI is about as confused as making inferences from people's high-school essays about what they will do when they become president.

Yeah, but it's not just the old MIRI views; it's those in combination with their statements about what one might do with powerful AI, the telegraphed omissions in those statements, and other public parts of their worldview, e.g. regarding the competence of the rest of the world. I get the pretty strong impression that "a small group of people with overwhelming hard power" was the ideal goal, and that this would ideally be controlled by MIRI or by a small group of people handpicked by them.


I think they talked explicitly about planning to deploy the AI themselves back in the early days (2004-ish), then gradually transitioned to talking generally about what someone with a powerful AI could do.

But I strongly suspect that in the event that they were the first to obtain powerful AI, they would deploy it themselves or perhaps give it to handpicked successors. Given Eliezer's worldview, I don't think it would make much sense for them to give the AI to the US government (considered incompetent) or AI labs (negligently reckless).
