interstice

Good post; it's underappreciated that a society of ideally rational people wouldn't have unsubsidized, real-money prediction markets.

unless you've actually got other people being wrong even in light of the new actors' information

Of course, in real prediction markets this is exactly what we see. Maybe you could think of PMs as they exist not as something that would arise in an equilibrium of ideally rational agents, but as a method of moving our society closer to such an equilibrium, subsidized by the bets of systematically irrational people. It's not a perfect method, but it does have the advantage of simplicity. How many of these issues could be solved by subsidizing markets?
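For concreteness, the standard way to subsidize a prediction market is Hanson's logarithmic market scoring rule (LMSR): an automated market maker whose sponsor's worst-case loss is bounded by the liquidity parameter, so the subsidy is a known, capped cost. A minimal sketch (the liquidity value and two-outcome setup are illustrative, not from the comment above):

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)),
    where q[i] is the number of outstanding shares of outcome i."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Instantaneous price of outcome i -- a probability in (0, 1)."""
    total = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / total

def trade_cost(q, b, i, shares):
    """What a trader pays to buy `shares` of outcome i at state q:
    the difference in the cost function before and after the trade."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# Fresh two-outcome market with liquidity parameter b = 100.
q = [0.0, 0.0]
b = 100.0

# With no trades yet, both outcomes are priced at 0.5.
p0 = lmsr_price(q, b, 0)

# Buying 10 shares of outcome 0 costs a bit over 10 * 0.5,
# since the price rises as you buy.
cost = trade_cost(q, b, 0, 10.0)

# The sponsor's worst-case subsidy for n outcomes is b * ln(n).
max_subsidy = b * math.log(len(q))
```

The point of the bounded loss is that a sponsor who wants the forecast can fund the market directly, rather than relying on irrational traders to pay rational ones, which is one way around the no-trade problem the post describes.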

Discord Message

What Discord is this? Sounds cool.

That's probably the one I was thinking of.

I know of only two people who anticipated something like what we are seeing far ahead of time; Hans Moravec and Jan Leike

I didn't know about Jan's AI timelines. Shane Legg also made some decently early predictions of AI around 2030 (~2007 was the earliest I knew about).

Some beliefs can be better or worse at predicting what we observe; this is not the same thing as popularity.

Far enough in the future, ancient brain scans would be fascinating antique artifacts, like rare archaeological finds today; I think people would be interested in reviving you on that basis alone (assuming there are people-like things with some power in the future).

I like the decluttering. I think the title should be smaller and have less white space above it. I also think it would be better if the ToC were heavily faded until mouseover; its sudden appearance/disappearance feels too abrupt.

No, I don't think so, because people could just airgap the GPUs.

Weaker AI probably wouldn't be sufficient to carry out an actually pivotal act. For example, the GPU virus would probably be worked around soon after deployment, via airgapping GPUs, developing software countermeasures, or simply resetting infected GPUs.

This discussion is a nice illustration of why x-riskers are definitely more power-seeking than the average activist group. Just as Eskimos proverbially have 50 words for snow, AI-risk-reducers need at least 50 terms for "taking over the world" to demarcate the range of possible scenarios. ;)

Nice overview. I agree, but I think the 2016-2021 plan could still arguably be described as "obtain god-like AI and use it to take over the world" (admittedly with some rhetorical exaggeration, but, like, not that much).