I don't think "we're currently living in a simulation" or "ASI would have effects beyond imagination, at least for the median human imaginer" are such weird beliefs among this crowd that their proving true would qualify OP to win the bet. Of course, they specifically say that UAP being special cases within the simulation would count, but that the mere belief in a simulation would not.
Would you mind sharing how much you will win if the bet goes your way and everyone pays out?
Also, I would like to see more actions like yours, so I'd like to put money into that. I want to unconditionally give you $50; if you win the bet, you may (but would be under no obligation to) return this money to me. All I'd need now is an ETH wallet address to send the money to.
I would like this to be construed as a meta-level incentive for people to adopt this "put up or shut up" attitude while offering immediate payouts, not as taking a stance on the object-level question.
I hear you, thank you for your comment.
I guess I don't have a clear model of the size of the pool of people who:
As soon as someone managed to turn ChatGPT into an agent (AutoGPT), someone else created an agent, ChaosGPT, with the explicit goal of destroying humankind. That is the kind of person who might benefit from having what I intend to produce: an overview of the AI capabilities required to end the world, how far along we are in obtaining them, and so on. I want this information to be used to prevent an existential catastrophe, not to precipitate it.
Thank you for your post. It is important for us to keep refining the overall p(doom) and the ways it might happen or be averted. You make your point very clearly, even in just the version presented here, condensed from your full posts on various specific points.
It seems to me that you are applying a sort of symmetric argument to values and capabilities, arguing that x-risk requires us to hit the bullseye of capability while missing the one for values. I think this has a problem, and I'd like to know your view on how much it affects your overall argument.
The problem, as I see it, is that goal-space is qualitatively different from capability-space. With capabilities, there is a clear ordering that is inherent to the capabilities themselves: if you can do more, then you can do less. Someone who can lift 100kg can also lift 80kg. It is not clear to me that this is the case for goal-space; I think it is only extrinsic evaluation by humans that makes "tile the universe with paperclips" a bad goal.
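To make that concrete, here is a toy sketch of how I am thinking about it (my own framing, with made-up goal strings and scores, not anything taken from your post):

```python
def can_lift(capacity_kg: float, weight_kg: float) -> bool:
    """Capability is intrinsically ordered: being able to lift more entails being able to lift less."""
    return weight_kg <= capacity_kg

# Lifting 100 kg entails lifting 80 kg; the ordering comes from the capability itself.
assert can_lift(100, 100) and can_lift(100, 80)

# Goals, by contrast, are just points in an unstructured space. Any ranking over
# them has to be supplied by an external (human) evaluation, not by the goals themselves.
goals = ["cure diseases", "prove theorems", "tile the universe with paperclips"]

def human_evaluation(goal: str) -> float:
    # Extrinsic scores chosen by a human; nothing in the goal strings implies this ordering.
    return {"cure diseases": 1.0,
            "prove theorems": 0.7,
            "tile the universe with paperclips": -1.0}[goal]

print(sorted(goals, key=human_evaluation, reverse=True))
```

The point is that `can_lift` gets its ordering for free from the capability itself, while any ranking over `goals` has to be imported from outside.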
Do you think this difference between these spaces holds, and if so, do you think it undermines your argument?
Gwern has posted several of Kurzweil's predictions on PredictionBook, and I have marked many of them as either right or wrong. In some cases I included comments on the bits of research I did.
I couldn't get things to work here, but thank you Elizabeth, Raymond and Ben for trying to help me! Have fun!
I'm thinking of a few things that are perhaps not super important individually but ought to have at least some weight in such an index:
Standardization and transportation
Legal cooperation/integration
A caveat: while I've phrased all of these in a positive light, this does not preclude there being trade-offs. For example, expanding the freedoms of the air would likely boost air travel, which has bad environmental impacts.
"AlphaGo used about 0.5 petaflops (= trillion floating point operations per second)"
Isn't peta- the prefix for quadrillion?
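For reference, the orders of magnitude in question:

$$1~\text{petaFLOP/s} = 10^{15}~\text{FLOP/s} \quad (\text{a quadrillion}), \qquad 1~\text{teraFLOP/s} = 10^{12}~\text{FLOP/s} \quad (\text{a trillion})$$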
As I understand it – with my only source being Ben's post and a couple of comments that I've read – Drew is also a cofounder of Nonlinear. Also, this was reported:
So, based on what we're told, there was romantic entanglement between the employers (Drew included) and Alice. Such relationships need to be handled with a lot of caution even in the best-case scenario, and this situation seems to be significantly worse than a best-case scenario.