Some of the memes you referenced do seem "cringe" to me, but people have different senses of humor. I'm not sure what the issue is with someone posting memes they personally find funny.
If you disagree with the points the memes are making, that's different, but can you give an example of something in one of the memes she posted that you thought was invalid reasoning? You called her content "dark arts tactics" and said:
"It feels like it is trying to convince me of something rather than make me smarter about something. It feels like it is trying to convey feelings at me rather than facts."
but you've only explained how it's making you feel instead of what message it's conveying.
Huh. I first heard of Greg Egan in the context of Eliezer mentioning him as an SF writer he liked, IIRC. Kind of ironic that he ended up here.
What's the b word?
I still think it was an interesting concept, but I'm not sure how deserving of praise this is since I never actually got beyond organizing two games.
He said it was him on Joe Rogan's podcast.
you find some pretty ironic things when rereading 17-year-old blog posts, but this one takes the cake.
If you look over all possible worlds, then asking "did the coin come up Heads or Tails" as if there's only one answer is incoherent. If you look over all possible worlds, there's a ~100% chance the coin comes up as Heads in at least one world, and a ~100% chance the coin comes up as Tails in at least one world.
But from the perspective of a particular observer, the question they're trying to answer is a question of indexical uncertainty - out of all the observers in their situation, how many of them are in Heads-worlds, and how many of them are in Tails-wor...
I think you're overestimating the intended scope of this post. Eliezer's argument involves multiple claims - A, we'll create ASI; B, it won't terminally value us; C, it will kill us. As such, people have many different arguments against it. This post is about addressing a specific "B doesn't actually imply C" counterargument, so it's not even discussing "B isn't true in the first place" counterarguments.
While you're quite right about numbers on the scale of billions or trillions, I don't think it makes sense in the limit for the prior probability of X people existing in the world to fall faster than X grows in size.
Certain sequences of large numbers grow in size much faster than they grow in complexity. A program that returns 10^(10^(10^10)) takes fewer bits to specify (relative to most reasonable systems of specifying programs) than a program that returns 32758932523657923658936180532035892630581608956901628906849561908236520958326051861018956109328631298061...
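As a quick illustration (a minimal Python sketch; it just compares description lengths, with the second string standing in for the full digit sequence above):

```python
# The structured number has a short description; the "random" number's
# shortest description is essentially the number itself, so its complexity
# grows linearly with its digit count.

structured_program = "10**10**10**10"  # denotes 10^(10^(10^10)) in ~14 characters
random_program = "3275893252365792365893618053203589263058160895690..."  # every digit must be spelled out

print(len(structured_program))  # stays tiny no matter how large the value gets
print(len(random_program))      # grows with the length of the number
```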
I'm kind of concerned about the ethics of someone signing a contract and then breaking it to anonymously report what's going on (if that's what your private source did). I think there's value from people being able to trust each others' promises about keeping secrets, and as much as I'm opposed to Anthropic's activities, I'd nevertheless like to preserve a norm of not breaking promises.
Can you confirm or deny whether your private information comes from someone who was under a contract not to give you that private information? (I completely understand if the answer is no.)
(Not going to answer this question for confidentiality/Glomarization reasons.)
By conservation of expected evidence, I take your failure to cite anything relevant as further confirmation of my views.
This is one of the best burns I've ever heard.
Had a dream last night in which I was having a conversation on LessWrong - unfortunately, I can't remember most of the details of my dreams unless I deliberately concentrate on what happened as soon as I wake up, so I don't know what the conversation was about.
But I do remember that I realized halfway through the conversation that I had been clicking on the wrong buttons - clicking "upvote" & "downvote" instead of "agree" and "disagree", and vice versa. In my dream, the first and second pairs of buttons looked identical - both of them were just the <...
Multiple points, really. I believe that this calculation is flawed in specific ways, but I also think that most calculations that attempt to estimate the relative odds of two events that were both very unlikely a priori will end up being off by a large amount. These two points are not entirely unrelated.
The specific problems that I noticed were:
You can just try to estimate the base rate of a bear attacking your tent and eating you, then estimate the base rate of a thing that looks identical to a bear attacking your tent and eating you, and compare them. Maybe one in a thousand tents get attacked by a bear, and 1% of those tent attacks end with the bear eating the person inside. The second probability is a lot harder to estimate, since it mostly involves off-model surprises like "Bigfoot is real" and "there is a serial killer in these woods wearing a bear suit," but I'd have trouble seeing how it ...
It doesn't matter how often the possum would have scratched it. If your tent would be scratched 50% of the time in the absence of a bear, and a bear would scratch it 20% of the time, then the chance it gets scratched if there is a bear is 1-(1-50%)(1-20%), or 60%. Unless you're postulating that bears always scare off anything else that might scratch the tent.
Also, what about how some of these probabilities are entangled with each other? Your tent being flipped over will almost always involve your tent being scratched, so once we condition on the tent being...
"20% a bear would scratch my tent : 50% a notbear would"
I think the chance that your tent gets scratched should be strictly higher if there's a bear around?
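A minimal sketch of that calculation, assuming the bear and the notbear scratch the tent independently:

```python
# The tent stays unscratched only if *neither* source scratches it,
# so P(scratched | bear present) = 1 - (1 - p_notbear) * (1 - p_bear).

p_notbear = 0.50  # chance something other than a bear scratches the tent
p_bear = 0.20     # chance a bear, if present, scratches the tent

p_scratched_given_bear = 1 - (1 - p_notbear) * (1 - p_bear)
print(p_scratched_given_bear)  # 0.6 -- strictly higher than the 0.5 baseline
```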
Do you have any specific examples of what this new/rebooted organization would be doing?
It sounds odd to hear the "even if the stars should die in heaven" song with a different melody than I had imagined when reading it myself.
I would have liked to hear the Tracey Davis "from darkness to darkness" song, but I think that was canonically just a chant without a melody. (Although I imagined a melody for that as well.)
...why did someone promote this to a Frontpage post.
If I'm understanding correctly, the argument here is:
A)
B)
C)
Therefore, .
First off, this seems to have an implicit assumption that .
I think this assumption is true for any functions f and g, but I've learned not to always trust my intuitions when it comes to limits and infinity; can anyone else confirm this is true?
Second, A seems to depend on the relative sizes of the infinities, ...
I think I could be a good fit as a writer, but I don't have much in the way of writing experience I can show you. Do you have any examples of what someone at this position would be focusing on? I'm happy to write up a couple pieces to demonstrate my abilities.
The question, then, is whether a given person is just an outlier by coincidence, or whether the underlying causal mechanisms that created their personality actually are coming from some internal gender-variable being flipped. (The theory being, perhaps, that early-onset gender dysphoria is an intersex condition, to quote the immortal words of a certain tribute band.)
If it was just that biological females sometimes happened to have a couple traits that were masculine - and these traits seemed to be at random, and uncorrelated - then that wouldn't imply anyt...
Fair. I do indeed endorse the claim that Aella, or other people who are similar in this regard, can be more accurately modelled as a man than as a woman - that is to say, if you're trying to predict some yet-unmeasured variable about Aella that doesn't seem to be affected by physical characteristics, you'll have better results by predicting her as you would a typical man, than as you would a typical woman. Aella probably really is more of a man than a woman, as far as minds go.
But your mentioning this does make me realize that I never really had a clear me...
If a person has a personality that's pretty much female, but a male body, then thinking of them as a woman will be a much more accurate model of them for predicting anything that doesn't hinge on external characteristics. I think the argument that society should consider such a person to be a woman for most practical purposes is locally valid, even if you reject that the premise is true in many cases.
...Previously, I had already thought it was nuts that trans ideology was exerting influence on the rearing of gender-non-conforming children—that is, children who are far outside the typical norm of behavior for their sex: very tomboyish girls and very effeminate boys.
Under recent historical conditions in the West, these kids were mostly "pre-gay" rather than trans. (The stereotype about lesbians being masculine and gay men being feminine is, like most stereotypes, basically true: sex-atypical childhood behavior between gay and straight adults has been meta-a
Maybe the chance that Kennedy wins, given a typical election between a Republican and a Democrat, is too low to be worth tracking. But this election seems unusually likely to have off-model surprises - Biden dies, Trump dies, Trump gets arrested, Trump gets kicked off the ballot, Trump runs independently, controversy over voter fraud, etc. If something crazy happens at the last minute, people could end up voting for Kennedy.
If you think the odds are so low, I'll bet my 10 euros against your 10,000 that Kennedy wins. (Normally I'd use US dollars, but the value of a US dollar in 2024 could change based on who wins the election.)
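The break-even point for my side of those stakes works out to:

$$p \cdot 10000 = (1 - p) \cdot 10 \quad\Rightarrow\quad p = \frac{10}{10010} \approx 0.1\%$$

so the bet is positive expected value for me at any credence above roughly one in a thousand.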
I can't tie up cash in any sort of escrow, but I'd take that bet on a handshake.
Unfortunately, I don't have the time to research more than a thousand candidates across the country, and there are probably only 1 or 2 LessWrongers in most congressional districts. But I encourage everyone to research the candidates' views on AI for whichever congressional elections you're personally able to vote in.
I'm not denying that the military and government are secretive. But there's a difference between keeping things from the American people, and keeping them from the president. When it comes to whether the president controls the military and nuclear arsenal, that's the sort of thing that the military can't lie about without substantial risk to the country.
Let's say the military tries to keep the keys to the nukes out of the president's hands - by, say, giving them fake launch codes. Then they're not just taking away the power of the president, they're also o...
I wouldn't entirely dismiss Kennedy just yet; he's polling better than any independent or third party candidate since Ross Perot. That being said, I do agree that his chances are quite low, and I expect I'll end up having to vote for one of the main two candidates.
The president might not hold enough power to singlehandedly change everything, but they still probably have more power than pretty much any other individual. And lobbying them hasn't been all that ineffective in the past; the AI safety crowd seems to have been involved in the original executive order. I'd expect there to be more progress if we can get a president who's sympathetic to the cause.
Ah. I don't think the writers meant that in terms of ASI killing everyone, but yeah, it's kind of related.
I think that Eliezer, at least, uses the term "alignment" solely to refer to what you call "aimability." Eliezer believes that most of the difficulty in getting an ASI to do good things lies in "aimability" rather than "goalcraft." That is, getting an ASI to do anything, such as "create two molecularly identical strawberries on a plate," is the hard part, while deciding what specific thing it should do is significantly easier.
That being said, you're right that there are a lot of people who use the term differently from how Eliezer uses it.
I'm not sure what the current algorithm is other than a general sense of "posts get promoted more if they're more recent," but it seems like it could be a good idea to just round it all up so that everything posted between 0 and N hours ago is treated as equally recent, so that time of day effects aren't as strong.
Not sure about the exact value of N... 6? 12? It probably depends on what the current function is, and what the current cycle of viewership by time of day looks like. Does LW keep stats on that?
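Something like this, as a rough sketch; the decay function here is a made-up stand-in for whatever LW's actual scoring curve is, and the only point is that ages between 0 and N hours all map to the same weight:

```python
# Hypothetical recency weighting with an N-hour bucket: every post newer
# than n_hours gets the same (maximum-age) weight, so time-of-day effects
# within the bucket disappear.

def recency_weight(age_hours: float, n_hours: float = 6.0, decay: float = 0.05) -> float:
    effective_age = max(age_hours, n_hours)  # "round up" ages in [0, N] to N
    return 1.0 / (1.0 + decay * effective_age)

print(recency_weight(1.0))   # ~0.769, same as...
print(recency_weight(6.0))   # ...a 6-hour-old post
print(recency_weight(12.0))  # 0.625, older posts still decay
```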
Q3: $50, Q4: $33.33
The answers that immediately come to mind for me for Q1 and Q2 are 50% and 33.33%, though it depends how exactly we're defining "probability" and "you"; the answer may very well be "~1" or "ill-formed question".
The entities that I selfishly care about are those who have the patterns of consciousness that make up "me," regardless of what points in time said "me"s happen to exist at. $33.33 maximizes utility across all the "me"s if they're being weighted evenly, and I don't see any particular reason to weight them differently (I think they...
It takes a lot of time for advisors to give advice, the player has to evaluate all the suggestions, and there's often some back-and-forth discussion. There's no way to fit all of that into under a minute per move.
Conor explained some details about notation during the opening, and I explained a bit as well. (I wasn't taking part in the discussion about the actual game, of course, just there to clarify the rules.)
Agree with Bezzi. Confusion about chess notation and game rules wasn't intended to happen, and I don't think it maps very well onto the real-world example. Yes, the human in the real world will be confused about which actions would achieve their goals, but I don't think they're very confused about what their goals are: create an aligned ASI, with a clear success/failure condition of whether we're still alive.
You're correct that the short time control was part of the experimental design for this game. I was remarking on how this game is probably not as accurate a model of the real-world scenario as a game with longer time controls would be, but "confounder" probably wasn't the most accurate term.
Thanks, fixed.
(Puzzle 1)
I'm guessing that the right move is Qc5.
At the end of the Qxb5 line (after a4), White can respond with Rac1, to which Black doesn't really have a good response. b6 gets in trouble with the d6 discovery, and Nd2 just loses a pawn after Rxc7 Nxb2 Rxb7 - Black may have a passed pawn on a4, but I doubt it's enough not to lose.
That being said, that wasn't actually what made me suspect Qc5 was right. It's just that Qxb5 feels like a much more natural, more human move than Qc5. Before I even looked at any lines, I thought, "well, this looks like Richard
Because I want to keep the option of being able to make promises. This way, people can trust that, while I might not answer every question they ask, the things that I do say to them are the truth. If I sometimes lie to them, that's no longer the case, and I'm no longer able to trustworthily communicate at all.
Meta-honesty is an alternate proposed policy that could perhaps reduce some of the complication, but I think it only adds new complication because people have to ask you questions on the meta level whenever you say something for which they might suspe...
Thanks, fixed.
If B were the same level as A, then they wouldn't pose any challenge to A; A would be able to beat them on their own without listening to the advice of the Cs.
I saw it fine at first, but after logging out I got the same error. Looks like you need a Chess.com account to see it.
Thanks, fixed.
I've created a Manifold market if anyone wants to bet on what happens. If you're playing in the experiment, you are not allowed to make any bets/trades while you have private information (that is, while you are in a game, or if I haven't yet reported the details of a game you were in to the public.)
https://manifold.markets/Zane_3219/will-chess-players-win-most-of-thei
The problem is that while the human can give some rationalizations as to "ah, this is probably why the computer says it's the best move," it's not the original reasoning that generated those moves as the best option, because that took place inside the engine. Some of the time, looking ahead with computer analysis is enough to reproduce the original reasoning - particularly when it comes to tactics - but sometimes they would just have to guess.
[facepalms] Thanks! That idea did not occur to me and drastically simplifies all of the complicated logistics I was previously having trouble with.
Sounds like a good strategy! ...although, actually, I would recommend you delete it before all the potential As read it and know what to look out for.
Agreed that it could be a bit more realistic that way, but the main constraint here is that we need a game where there are three distinct levels of players who always beat each other. The element of luck in games like poker and backgammon makes that harder to guarantee (as suggested by the stats Joern_Stoller brought up). And another issue is that it'll be harder to find a lot of skilled players at different levels from any game that isn't as popular as chess is - even if we find an obscure game that would in theory be a better fit for the experiment, we won't be able to find any Cs for it.
wow I wouldn't have expected LessWrongers' long-suppressed sexual instincts to be crypto scams - no, you know what, if anyone got turned on by crypto scams it would probably be us.
(more seriously: the link is broken.)