I still think it was an interesting concept, but I'm not sure how deserving of praise this is since I never actually got beyond organizing two games.
He said it was him on Joe Rogan's podcast.
You find some pretty ironic things when rereading 17-year-old blog posts, but this one takes the cake.
If you look over all possible worlds, then asking "did the coin come up Heads or Tails" as if there's only one answer is incoherent: there's a ~100% chance the coin comes up Heads in at least one world, and a ~100% chance it comes up Tails in at least one world.
But from the perspective of a particular observer, the question they're trying to answer is one of indexical uncertainty - out of all the observers in their situation, how many are in Heads-worlds, and how many are in Tails-worlds? It's true that there are just as many Heads-worlds as Tails-worlds - but 2/3 of the observers are in the latter.
Or to put it another way - suppose you put 10 people in one house, and 20 people in another house. A given person should estimate a 1/3 chance that they're in the first house - and the fact that 1 house is half of 2 houses is completely irrelevant. Why should this reasoning be any different just because we're talking about possible universes rather than houses?
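Here's a quick toy simulation of that counting argument (my own sketch; I've assumed one observer per Heads-world and two per Tails-world, which is what gives the 2/3 figure above):

```python
# Toy sketch of the observer-counting argument: each Heads-world contains
# one observer in the relevant situation, each Tails-world contains two.
# Worlds are equally likely, but observers are not evenly split across them.
import random

def simulate(n_worlds=100_000):
    heads_observers = 0
    tails_observers = 0
    for _ in range(n_worlds):
        if random.random() < 0.5:   # Heads-world: 1 observer
            heads_observers += 1
        else:                       # Tails-world: 2 observers
            tails_observers += 2
    return tails_observers / (heads_observers + tails_observers)

print(simulate())  # ~0.667: two thirds of observers are in Tails-worlds,
                   # even though Heads- and Tails-worlds are equally common
```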
I think you're overestimating the intended scope of this post. Eliezer's argument involves multiple claims - A, we'll create ASI; B, it won't terminally value us; C, it will kill us. As such, people have many different arguments against it. This post is about addressing a specific "B doesn't actually imply C" counterargument, so it's not even discussing "B isn't true in the first place" counterarguments.
While you're quite right about numbers on the scale of billions or trillions, I don't think it makes sense in the limit for the prior probability of X people existing in the world to fall faster than 1/X does as X grows.
Certain series of large numbers grow larger much faster than they grow in complexity. A program that returns 10^(10^(10^10)) takes fewer bits to specify (relative to most reasonable systems of specifying programs) than a program that returns 32758932523657923658936180532035892630581608956901628906849561908236520958326051861018956109328631298061259863298326379326013327851098368965026592086190862390125670192358031278018273063587236832763053870032004364702101004310417647840155719238569120561329853619283561298215693286953190539832693826325980569123856910536312892639082369382562039635910965389032698312569023865938615338298392306583192365981036198536932862390326919328369856390218365991836501590931685390659103658916392090356835906398269120625190856983206532903618936398561980569325698312650389253839527983752938579283589237325987329382571092301928* - even though 10^(10^(10^10)) is by far the larger number. And it only takes a linear increase in complexity to make it 10^(10^(10^(10^(10^(10^10))))) instead.
*I produced this number via keyboard-mashing; it's not anything special.
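To make the "fewer bits" point concrete, here's a toy illustration (mine, not part of the original argument; I've used base 2 rather than 10 so the values stay small enough to actually compute): the length of the expression grows linearly with the height of the power tower, while the value it names grows tetrationally.

```python
# The expression's length grows linearly with tower height;
# the value it denotes grows tetrationally.
def tower_expr(height):
    return "**".join(["2"] * height)   # e.g. "2**2**2**2"

for height in range(1, 6):
    expr = tower_expr(height)
    value = eval(expr)                 # Python's ** is right-associative
    print(f"height {height}: {len(expr)} chars, {len(str(value))} digits")

# Height 5 is already a ~20,000-digit number specified in 13 characters;
# height 6 wouldn't fit in memory, yet the expression only gains 3 characters.
```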
Consider the proposition "A superpowered entity capable of creating unlimited numbers of people ran a program that output the result of a random program out of all possible programs (with their outputs rendered as integers), weighted by the complexity of those programs, and then created that many people."
If this happened, then in the limit, the probability of the program outputting at least X would fall more slowly than 1/X shrinks. The sum doesn't converge at all; the expected number of people created would be literally infinite.
So as long as you assign greater than literally zero probability to that proposition - and there's no such thing as zero probability - there must exist some number X such that you assign greater than 1/X probability to X people existing. In fact, there must exist some number X such that you assign greater than 1/X probability to X million people existing, or X billion, or so on.
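A rough sketch of why that sum diverges, reusing the power-tower programs from above and a made-up weighting of 2^(-8 × program length in characters) as a crude stand-in for a complexity prior: the weighted contributions eventually blow up, so the expectation over all programs can't be finite.

```python
# Weight each program by 2^(-8 * length in characters) and look at the
# weighted contribution (weight * output) of the power-tower programs.
# The tower values grow tetrationally while the length penalty is only
# exponential, so the contributions eventually explode and the sum diverges.
import math

def tower_expr(height):
    return "**".join(["2"] * height)

for height in range(1, 6):
    expr = tower_expr(height)
    log2_value = math.log2(eval(expr))   # log2 of the program's output
    log2_weight = -8 * len(expr)         # log2 of the program's weight
    print(f"height {height}: log2(weight * value) ~ {log2_value + log2_weight:.0f}")
```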
(btw, I don't think that the sort of SIA-based reasoning here is actually valid - but if it were, then yeah, it would imply that there are infinitely many people.)
I'm kind of concerned about the ethics of someone signing a contract and then breaking it to anonymously report what's going on (if that's what your private source did). I think there's value in people being able to trust each other's promises about keeping secrets, and as much as I'm opposed to Anthropic's activities, I'd nevertheless like to preserve a norm of not breaking promises.
Can you confirm or deny whether your private information comes from someone who was under a contract not to give you that private information? (I completely understand if the answer is no.)
By conservation of expected evidence, I take your failure to cite anything relevant as further confirmation of my views.
This is one of the best burns I've ever heard.
Had a dream last night in which I was having a conversation on LessWrong - unfortunately, I can't remember most of the details of my dreams unless I deliberately concentrate on what happened as soon as I wake up, so I don't know what the conversation was about.
But I do remember that I realized halfway through the conversation that I had been clicking on the wrong buttons - clicking "upvote" & "downvote" instead of "agree" and "disagree", and vice versa. In my dream, the first and second pairs of buttons looked identical - both of them were just the < and > signs.
I suggested to the LW team that they put something to clarify which buttons were which - maybe write the words "upvote", "downvote", "agree", and "disagree" above the buttons. They thought that putting the words there would look really ugly and clutter up the UI too much.
But when I woke up, it turned out that the actual site has a checkmark and an X for the second pair of buttons! And it also displays what each one means when you hover over it! So thanks for retroactively solving my problem, LW team!
What's the b word?