All of Zane's Comments + Replies

Zane

wow I wouldn't have expected LessWrongers' long-suppressed sexual instincts to be crypto scams - no, you know what, if anyone got turned on by crypto scams it would probably be us.

(more seriously: the link is broken.)

Zane

Some of the memes you referenced do seem "cringe" to me, but people have different senses of humor. I'm not sure what the issue is with someone posting memes they personally find funny.

If you disagree with the point that the memes are making, that's different, but can you give an example of something in one of the memes she posted that you thought was invalid reasoning? You called her content "dark arts tactics" and said:

"It feels like it is trying to convince me of something rather than make me smarter about something. It feels like it is trying to convey feelings at me rather than facts."

but you've only explained how it's making you feel instead of what message it's conveying.

just_browsing
Typically I agree with the underlying facts behind her memes! For example, I also think AI safety is a pressing issue. If her memes were funny I would instead be writing a post about how awesome it is that Kat Woods is everywhere. My main objection is that I do not like the packaging of the ideas she is spreading. For example, the memes are not funny. (See the outline of this post: content, vibes, conduct.)

You asked for an example of Kat Woods content that aims to convince rather than educate. Here is one recent example. I feel like the packaging of this meme conveys: "all of the objections you might have to the idea of X-risk via AI can actually easily be debunked, therefore you would be stupid to not believe X-risk via AI."

In reality, questions regarding the likelihood of x-risk via AI are really tricky. Many thoughtful people have thought about these problems at great length and declared them to be hard and full of uncertainty. I feel like this meme doesn't convey this at all. Therefore, I'm not sure whether it is good for peoples' brains to consume this content. I will certainly say it's not good for my brain to consume this content.
Zane

Huh. I first heard of Greg Egan in the context of Eliezer mentioning him as an SF writer he liked, iirc. Kind of ironic he ended up here.

Zack_M_Davis
"[A] common English expletive which may be shortened to the euphemism bull or the initialism B.S."
Zane

I still think it was an interesting concept, but I'm not sure how deserving of praise this is since I never actually got beyond organizing two games.

Martin Randall
Seems like it should be possible to automate this now by having all five participants be, for example, LLMs with access to chess AIs of various levels.
Zane

He said it was him on Joe Rogan's podcast.

Zane

you find some pretty ironic things when rereading 17-year-old blog posts, but this one takes the cake.

Answer by Zane

If you look over all possible worlds, then asking "did the coin come up Heads or Tails" as if there's only one answer is incoherent. If you look over all possible worlds, there's a ~100% chance the coin comes up as Heads in at least one world, and a ~100% chance the coin comes up as Tails in at least one world.

But from the perspective of a particular observer, the question they're trying to answer is a question of indexical uncertainty - out of all the observers in their situation, how many of them are in Heads-worlds, and how many of them are in Tails-wor... (read more)

Zane

I think you're overestimating the intended scope of this post. Eliezer's argument involves multiple claims - A, we'll create ASI; B, it won't terminally value us; C, it will kill us. As such, people have many different arguments against it. This post is about addressing a specific "B doesn't actually imply C" counterargument, so it's not even discussing "B isn't true in the first place" counterarguments.

Zane

While you're quite right about numbers on the scale of billions or trillions, I don't think it makes sense in the limit for the prior probability of X people existing in the world to fall faster than X grows in size.

Certain series of large numbers grow larger much faster than they grow in complexity. A program that returns 10^(10^(10^10)) takes fewer bits to specify (relative to most reasonable systems of specifying programs) than a program that returns 32758932523657923658936180532035892630581608956901628906849561908236520958326051861018956109328631298061... (read more)
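(A minimal sketch of the description-length point, assuming nothing beyond the comment itself; the digit string is the comment's own example, truncated in the original.)

```python
# Kolmogorov-style point: a simplicity prior charges for description length,
# not for the size of the output. (Don't eval the first string; the number it
# denotes is far too large to ever compute.)
huge_but_simple = "10**10**10**10"  # 14 characters of program text
arbitrary_digits = "32758932523657923658936180532035892630581608956901628906849561908236520958326051861018956109328631298061"

print(len(huge_but_simple))   # 14
print(len(arbitrary_digits))  # 104: roughly one character per digit
```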

JBlack
I think when you get to any class of hypotheses like "capable of creating unlimited numbers of people" with nonzero probability, you run into multiple paradoxes of infinity. For example, there is no uniform distribution over any countable set, which includes the set of all halting programs. Every non-uniform distribution this hypothetical superbeing may have used over such programs is a different prior hypothesis. The set of these has no suitable uniform distribution either, since they can be partitioned into countably many equivalence classes under natural transformations. It doesn't take much study of this before you're digging into pathologies of measure theory such as Vitali sets and similar.

You can of course arbitrarily pick any of these weightings to be your "chosen" prior, but that's just equivalent to choosing a prior over population directly so it doesn't help at all. Probability theory can't adequately deal with such hypothesis families, and so if you're considering Bayesian reasoning you must discard them from your prior distribution. Perhaps there is some extension or replacement for probability that can handle them, but we don't have one.
Zane

I'm kind of concerned about the ethics of someone signing a contract and then breaking it to anonymously report what's going on (if that's what your private source did). I think there's value from people being able to trust each others' promises about keeping secrets, and as much as I'm opposed to Anthropic's activities, I'd nevertheless like to preserve a norm of not breaking promises.

Can you confirm or deny whether your private information comes from someone who was under a contract not to give you that private information? (I completely understand if the answer is no.)

Ben Pace
I think this is a reasonable question to ask. I will note that in this case, if your guess is right about what happened, the breaking of the agreement is something that it turned out the counterparty endorsed, or at least, after the counterparty became aware of the agreement, they immediately lifted it. I still think there's something to maintaining all agreements regardless of context, but I do genuinely think it matters here if you (accurately) expect the entity you've made the secret agreement with would likely retract it if they found out about it. (Disclaimer that I have no private info about this specific situation.)
habryka

(Not going to answer this question for confidentiality/glomarization reasons)

Zane

By conservation of expected evidence, I take your failure to cite anything relevant as further confirmation of my views.

This is one of the best burns I've ever heard.

Zane

Had a dream last night in which I was having a conversation on LessWrong - unfortunately, I can't remember most of the details of my dreams unless I deliberately concentrate on what happened as soon as I wake up, so I don't know what the conversation was about.

But I do remember that I realized halfway through the conversation that I had been clicking on the wrong buttons - clicking "upvote" & "downvote" instead of "agree" and "disagree", and vice versa. In my dream, the first and second pairs of buttons looked identical - both of them were just the <... (read more)

mofeien
Thank you for the retroactive feature request!
Zane

Multiple points, really. I believe that this calculation is flawed in specific ways, but I also think that most calculations that attempt to estimate the relative odds of two events that were both very unlikely a priori will end up being off by a large amount. These two points are not entirely unrelated.

The specific problems that I noticed were:

  1. The probabilities are not independent of each other, so they cannot be multiplied together directly. A bear flipping over your tent would almost always immediately be preceded by the bear scratching your tent, so up
... (read more)
Zane

You can just try to estimate the base rate of a bear attacking your tent and eating you, then estimate the base rate of a thing that looks identical to a bear attacking your tent and eating you, and compare them. Maybe one in a thousand tents get attacked by a bear, and 1% of those tent attacks end with the bear eating the person inside. The second probability is a lot harder to estimate, since it mostly involves off-model surprises like "Bigfoot is real" and "there is a serial killer in these woods wearing a bear suit," but I'd have trouble seeing how it ... (read more)
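(A back-of-the-envelope version of those numbers, as a sketch; the rates are the comment's illustrative guesses, not measured statistics.)

```python
# Base-rate arithmetic with the comment's illustrative guesses (not real data).
p_tent_attacked = 1 / 1000        # tents that get attacked by a bear at all
p_eaten_given_attacked = 0.01     # attacks that end with the camper eaten

p_attacked_and_eaten = p_tent_attacked * p_eaten_given_attacked
print(p_attacked_and_eaten)       # about 1e-05: one camper in a hundred thousand
```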

Screwtape
I'm not sure I'm following your actual objection. Is your point that this algorithm is wrong and won't update towards the right probabilities even if you keep feeding it new pieces of evidence, that the explanations and numbers for these pieces of evidence don't make sense for the implied story, that you shouldn't try to do explicit probability calculations this way, or some fourth thing?

If this algorithm isn't actually equivalent to Bayes in some way, that would be really useful for someone to point out. At first glance it seems like a simpler (to me anyway) way to express how making updates works, not just in an intuitive "I guess the numbers move that direction?" way but in a way that might not get fooled by e.g. the mammogram example.

If these explanations and numbers don't make exact sense for the implied story, that seems fine? "A train is moving from east to west at a uniform speed of 12 m/s, ten kilometers west a second train is moving west to east at a uniform speed of 15 m/s, how far will the first train have traveled when they meet?" is a fine word problem even if that's oversimplified for how trains work.

If you don't think it's worth doing explicit probability calculations this way, even to practice and try and get better or as a way to train the habit of how the numbers should move, that seems like a different objection and one you would have with any guide to Bayes. That's not to say you shouldn't raise the objection, but that doesn't seem like an objection that someone did the math wrong! And of course maybe I'm completely missing your point.
Zane

It doesn't matter how often the possum would have scratched it. If your tent would be scratched 50% of the time in the absence of a bear, and a bear would scratch it 20% of the time, then the chance it gets scratched if there is a bear is 1-(1-50%)(1-20%), or 60%. Unless you're postulating that bears always scare off anything else that might scratch the tent.

Also, what about how some of these probabilities are entangled with each other? Your tent being flipped over will almost always involve your tent being scratched, so once we condition on the tent being... (read more)
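(The 60% figure combines the two scratch sources "noisy-or" style; a minimal sketch, assuming, as the comment does, that the bear and the other scratch sources act independently.)

```python
# Combining two independent ways the tent can get scratched ("noisy-or").
# The 50% and 20% figures are the hypotheticals from the comment above.
p_scratch_without_bear = 0.5   # something else (a possum, say) scratches it
p_bear_scratches = 0.2         # the bear itself scratches it

p_scratch_with_bear = 1 - (1 - p_scratch_without_bear) * (1 - p_bear_scratches)
print(p_scratch_with_bear)     # 0.6, i.e. the 60% from the comment
```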

lemonhope
I was thinking the bear would scare other stuff off yeah. But now I think I'm doing this wrong and the code is broken. Can you fix my code?
Zane

"20% a bear would scratch my tent : 50% a notbear would"

I think the chance that your tent gets scratched should be strictly higher if there's a bear around?

lemonhope
A possum or whatever will scratch mine like half the time
Zane

Do you have any specific examples of what this new/rebooted organization would be doing?

Zane

It sounds odd to hear the "even if the stars should die in heaven" song with a different melody than I had imagined when reading it myself.

I would have liked to hear the Tracey Davis "from darkness to darkness" song, but I think that was canonically just a chant without a melody. (Although I imagined a melody for that as well.)

Zane

...why did someone promote this to a Frontpage post.

Zane

If I'm understanding correctly, the argument here is:

A) 

B) 

C) 

Therefore, .

 

First off, this seems to have an implicit assumption that .

I think this assumption is true for any functions f and g, but I've learned not to always trust my intuitions when it comes to limits and infinity; can anyone else confirm this is true?

Second, A seems to depend on the relative sizes of the infinities, ... (read more)

Shankar Sivarajan
I didn't make any claim about limits. If you're looking for rigor, you're in the wrong place, as I tried to make clear in the introduction. But (A) is true without any unconventional weirdness:

$$\sum_{k=1}^{\infty} k e^{kx}\cos(kx) = \frac{e^{(1+i)x}\left(e^{2ix} - 4e^{(1+i)x} + e^{2x} + e^{(2+2i)x} + 1\right)}{2\left(-e^{x} + e^{ix}\right)^{2}\left(-1 + e^{(1+i)x}\right)^{2}}$$

(from Mathematica), and $\lim_{x\to 0^-}$ of that is $-\tfrac{1}{12}$.
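(A quick numerical sanity check of that limit, as a minimal sketch; the truncation point n is an implementation choice, not from the thread, picked so the e^{kx} factor has decayed to negligible size.)

```python
import math

def abel_sum(x: float) -> float:
    # Partial sum of sum_{k>=1} k * exp(k*x) * cos(k*x) for x < 0.
    # exp(k*x) decays geometrically, so truncating once k*|x| is large is safe.
    n = int(60 / abs(x))  # truncation point: an implementation choice
    return sum(k * math.exp(k * x) * math.cos(k * x) for k in range(1, n + 1))

for x in (-0.1, -0.01, -0.001):
    print(x, abel_sum(x))  # each value is close to -1/12 = -0.08333...
```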
Zane

I think I could be a good fit as a writer, but I don't have much in the way of writing experience I can show you. Do you have any examples of what someone at this position would be focusing on? I'm happy to write up a couple pieces to demonstrate my abilities.

Gretta Duleba
Writers at MIRI will primarily be focusing on explaining why it's a terrible idea to build something smarter than humans that does not want what we want. They will also answer the subsequent questions that we get over and over about that. 
Zane

The question, then, is whether a given person is just an outlier by coincidence, or whether the underlying causal mechanisms that created their personality actually are coming from some internal gender-variable being flipped. (The theory being, perhaps, that early-onset gender dysphoria is an intersex condition, to quote the immortal words of a certain tribute band.)

If it was just that biological females sometimes happened to have a couple traits that were masculine - and these traits seemed to be at random, and uncorrelated - then that wouldn't imply anyt... (read more)

Zane

Fair. I do indeed endorse the claim that Aella, or other people who are similar in this regard, can be more accurately modelled as a man than as a woman - that is to say, if you're trying to predict some yet-unmeasured variable about Aella that doesn't seem to be affected by physical characteristics, you'll have better results by predicting her as you would a typical man, than as you would a typical woman. Aella probably really is more of a man than a woman, as far as minds go.

But your mentioning this does make me realize that I never really had a clear me... (read more)

Zack_M_Davis
Consider a biased coin that comes up Heads with probability 0.8. Suppose that in a series of 20 flips of such a coin, the 7th through 11th flips came up Tails. I think it's possible to simultaneously notice this unusual fact about that particular sequence, without concluding, "We should consider this sequence as having come from a Tails-biased coin." (The distributions include the outliers, even though there are fewer of them.)

I agree that Aella is an atypical woman along several related dimensions. It would be bad and sexist if Society were to deny or erase that. But Aella also ... has worked as an escort? If you're writing a biography of Aella, there are going to be a lot of detailed Aella Facts that only make sense in light of the fact that she's female. The sense in which she's atypically masculine is going to be different from the sense in which butch lesbians are atypically masculine.

I'm definitely not arguing that everyone should be forced into restrictive gender stereotypes. (I'm not a typical male either.) I'm saying a subtler thing about the properties of high-dimensional probability distributions. If you want to ditch the restricting labels and try to just talk about the probability distributions (at the expense of using more words), I'm happy to do that. My philosophical grudge is specifically against people saying, "We can rearrange the labels to make people happy."
Rafael Harth
I think that's fair -- in fact, the test itself is evidence that the claim is literally true in some ways. I didn't mean the comment as a reductio ad absurdum, more as a "something here isn't quite right (though I'm not sure what)". Though I think you've identified what it is with the second paragraph.
Zane

If a person has a personality that's pretty much female, but a male body, then thinking of them as a woman will be a much more accurate model of them for predicting anything that doesn't hinge on external characteristics. I think the argument that society should consider such a person to be a woman for most practical purposes is locally valid, even if you reject that the premise is true in many cases.

Rafael Harth
I have to point out that if this logic applies symmetrically, it implies that Aella should be viewed as a man. (She scored .95% male on the gender-continuum test, which is much more than the average man (don't have a link unfortunately, small chance that I'm switching up two tests here).) But she clearly views herself as a woman, and I'm not sure you think that society should consider her a man for most practical purposes (although probably for some?) You could amend the claim by the condition that the person wants to be seen as the other gender, but conditioning on preference sort of goes against the point you're trying to make.
Zane

Previously, I had already thought it was nuts that trans ideology was exerting influence on the rearing of gender-non-conforming children—that is, children who are far outside the typical norm of behavior for their sex: very tomboyish girls and very effeminate boys.

Under recent historical conditions in the West, these kids were mostly "pre-gay" rather than trans. (The stereotype about lesbians being masculine and gay men being feminine is, like most stereotypes, basically true: sex-atypical childhood behavior between gay and straight adults has been meta-a

... (read more)
Zack_M_Davis
"Essentially are" is too strong. (Sex is still real, even if some people have sex-atypical psychology.) In accordance with not doing policy, I don't claim to know under what conditions kids in the early-onset taxon should be affirmed early: maybe it's a good decision. But whether or not it turns out to be a good decision, I think it's increasingly not being made for the right reasons; the change in our culture between 2013 and 2023 does not seem sane.
Zane

Maybe the chance that Kennedy wins, given a typical election between a Republican and a Democrat, is too low to be worth tracking. But this election seems unusually likely to have off-model surprises - Biden dies, Trump dies, Trump gets arrested, Trump gets kicked off the ballot, Trump runs independently, controversy over voter fraud, etc. If something crazy happens at the last minute, people could end up voting for Kennedy.

If you think the odds are so low, I'll bet my 10 euros against your 10,000 that Kennedy wins. (Normally I'd use US dollars, but the value of a US dollar in 2024 could change based on who wins the election.)

Ericf

I can't tie up cash in any sort of escrow, but I'd take that bet on a handshake.

Zane

Unfortunately, I don't have the time to research more than a thousand candidates across the country, and there's probably only about 1 or 2 LessWrongers in most congressional districts. But I encourage everyone to research the candidates' views on AI for whichever Congress elections you're personally able to vote in.

Zane

I'm not denying that the military and government are secretive. But there's a difference between keeping things from the American people, and keeping them from the president. When it comes to whether the president controls the military and nuclear arsenal, that's the sort of thing that the military can't lie about without substantial risk to the country.

Let's say the military tries to keep the keys to the nukes out of the president's hands - by, say, giving them fake launch codes. Then they're not just taking away the power of the president, they're also o... (read more)

Zane

I wouldn't entirely dismiss Kennedy just yet; he's polling better than any independent or third party candidate since Ross Perot. That being said, I do agree that his chances are quite low, and I expect I'll end up having to vote for one of the main two candidates.

dr_s
I would. It's possible an election in which a third party candidate has a serious chance might exist, but it wouldn't look like this one at this point. Only way the boat could at least be rocked is if the charges go through and Trump is out of the race by force majeure, at which point there's quite a bit of chaos.
Ericf
Mr. Perot got fewer votes than either major party candidate. Not a ringing endorsement. And I didn't say the chances were quite low, I said they were zero*. Which is at least 5 orders of magnitude difference from "quite low", so I don't think we agree about his chances.

*Technically odds can't be zero, but I consider anything less likely than "we are in a simulation that is subject to intervention from outside" to be zero for all decision-making purposes.
Zane

The president might not hold enough power to singlehandedly change everything, but they still probably have more power than pretty much any other individual. And lobbying them hasn't been all that ineffective in the past; the AI safety crowd seems to have been involved in the original executive order. I'd expect there to be more progress if we can get a president who's sympathetic to the cause.

trevor
None of us have solid models of how much power the president has. The president and his advisors probably don't actually control the nuclear arsenal; that's probably a lie; the military probably doesn't hand over control of the nuclear arsenal to a rando and his election campaign team every 4-8 years. Some parts of the constitution are de-facto more respected than others; if the president and his advisors had substantial real power over the military, then both the US Natsec community and foreign intelligence agencies would be very, very heavily involved in the presidential primary process (what we've seen from foreign intelligence agencies so far seems more like targeting public opinion than altering the results of the election).

The president and his advisors' influence over the federal legislative process is less opaque, but the details of how that works are worth massive amounts of money because it allows people to navigate the space (and the information becomes worthless if everyone knows it). Plus, most presidents are probably far more nihilistic and self-interested in person than in front of the cameras, and probably became hardcore conflict theorists due to being so deeply immersed in an environment where words are used as weapons (in the legislative process too, not just public opinion). So getting a powerful president to support good AI policy would be nice, but it's probably not worth the effort; there are other people in the executive branch with a better ratio of cost-to-access vs. unambiguous policy influence.

We don't know this either; it's too early to tell. These institutions are extremely, extremely sophisticated at finding clever ways to make elites feel involved, when in reality the niche has already been filled by the elites who arrived first. For example, your text makes its way into the final bill which gets passed, but the bureaucracy ignores it because it didn't have the keywords that signal that your text is actually supposed to b
Zane

Ah. I don't think the writers meant that in terms of ASI killing everyone, but yeah, it's kind of related.

Zane

I think that Eliezer, at least, uses the term "alignment" solely to refer to what you call "aimability." Eliezer believes that most of the difficulty in getting an ASI to do good things lies in "aimability" rather than "goalcraft." That is, getting an ASI to do anything, such as "create two molecularly identical strawberries on a plate," is the hard part, while deciding what specific thing it should do is significantly easier.

That being said, you're right that there are a lot of people who use the term differently from how Eliezer uses it.

Vladimir_Nesov
If the initial specific thing is pivotal processes that end the acute risk period, it doesn't matter if the goodness-optimizing goalcraft is impossibly hard to figure out, since we'll have time to figure it out.
Zane

I'm not sure what the current algorithm is other than a general sense of "posts get promoted more if they're more recent," but it seems like it could be a good idea to just round it all up so that everything posted between 0 and N hours ago is treated as equally recent, so that time of day effects aren't as strong.

Not sure about the exact value of N... 6? 12? It probably depends on what the current function is, and what the current cycle of viewership by time of day looks like. Does LW keep stats on that?
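(A hedged sketch of the proposed rounding; the actual frontpage scoring function isn't known in this thread, so the decay curve and N below are placeholders.)

```python
# Hypothetical recency weight: every post from the last N hours gets the same
# weight, damping time-of-day effects. The power-law decay curve and N = 6 are
# placeholders, not LessWrong's actual scoring function.
N_HOURS = 6.0

def recency_weight(hours_old: float) -> float:
    effective_age = max(hours_old, N_HOURS)  # "round up": all young posts tie
    return (effective_age + 2.0) ** -1.5     # placeholder decay curve

print(recency_weight(1.0) == recency_weight(5.0))  # True: same recency bucket
print(recency_weight(12.0) < recency_weight(5.0))  # True: older posts rank lower
```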

Answer by Zane

Q3: $50, Q4: $33.33

The answers that immediately come to mind for me for Q1 and Q2 are 50% and 33.33%, though it depends how exactly we're defining "probability" and "you"; the answer may very well be "~1" or "ill formed question".

The entities that I selfishly care about are those who have the patterns of consciousness that make up "me," regardless of what points in time said "me"s happen to exist at. $33.33 maximizes utility across all the "me"s if they're being weighted evenly, and I don't see any particular reason to weight them differently (I think they... (read more)

Zane

It takes a lot of time for advisors to give advice, the player has to evaluate all the suggestions, and there's often some back-and-forth discussion. It takes much too long to make moves in under a minute.

[anonymous]
I'd expect the amount of time this all takes to be a function of the time-control. Like, if I have 90 mins, I can allocate more time to all of this. I can consult each of my advisors at every move. I can ask them follow-up questions. If I only have 20 mins, I need to be more selective. Maybe I only listen to my advisors during critical moves, and I evaluate their arguments more quickly. Also, this inevitably affects the kinds of arguments that the advisors give.

Both of these scenarios seem pretty interesting and AI-relevant. My all-things-considered guess would be that the 20 mins version yields high enough quality data (particularly for the parts of the game that are most critical/interesting & where the debate is most lively) that it's worth it to try with shorter time controls. (Epistemic status: Thought about this for 5 mins; just vibing; very plausibly underestimating how time pressure could make the debates meaningless.)
Zane

Conor explained some details about notation during the opening, and I explained a bit as well. (I wasn't taking part in the discussion about the actual game, of course, just there to clarify the rules.)

Zane

Agree with Bezzi. Confusion about chess notation and game rules wasn't intended to happen, and I don't think it applies very well to the real-world example. Yes, the human in the real world will be confused about which actions would achieve their goals, but I don't think they're very confused about what their goals are: create an aligned ASI, with a clear success/failure condition of whether we are alive.

You're correct that the short time control was part of the experimental design for this game. I was remarking on how this game is probably not as accurate of a model of the real-world scenario as a game with longer time controls, but "confounder" was probably not the most accurate term.

Zane

(Puzzle 1)

I'm guessing that the right move is Qc5.

At the end of the Qxb5 line (after a4), White can respond with Rac1, to which Black doesn't really have a good response. b6 gets in trouble with the d6 discovery, and Nd2 just loses a pawn after Rxc7 Nxb2 Rxb7 - Black may have a passed pawn on a4, but I doubt it's enough not to lose.

That being said, that wasn't actually what made me suspect Qc5 was right. It's just that Qxb5 feels like a much more natural, more human move than Qc5. Before I even looked at any lines, I thought, "well, this looks like Richard

... (read more)
[This comment is no longer endorsed by its author]
Richard Willis
There's definitely something to learn from the setting of the position. I actually took it from Strategic Chess Exercises, just taking one of the variations of one of the problems. There's picking a position that it makes sense to debate over, but also a meta thing that you have raised, which I didn't consider.
Zane

Because I want to keep the option of being able to make promises. This way, people can trust that, while I might not answer every question they ask, the things that I do say to them are the truth. If I sometimes lie to them, that's no longer the case, and I'm no longer able to trustworthily communicate at all.

Meta-honesty is an alternate proposed policy that could perhaps reduce some of the complication, but I think it only adds new complication because people have to ask you questions on the meta level whenever you say something for which they might suspe... (read more)

Zane

If B were the same level as A, then they wouldn't pose any challenge to A; A would be able to beat them on their own without listening to the advice of the Cs.

Zane

I saw it fine at first, but after logging out I got the same error. Looks like you need a Chess.com account to see it.

Zane

I've created a Manifold market if anyone wants to bet on what happens. If you're playing in the experiment, you are not allowed to make any bets/trades while you have private information (that is, while you are in a game, or if I haven't yet reported the details of a game you were in to the public.)

https://manifold.markets/Zane_3219/will-chess-players-win-most-of-thei

Zane

The problem is that while the human can give some rationalizations as to "ah, this is probably why the computer says it's the best move," it's not the original reasoning that generated those moves as the best option, because that took place inside the engine. Some of the time, looking ahead with computer analysis is enough to reproduce the original reasoning - particularly when it comes to tactics - but sometimes they would just have to guess.

Zane

[facepalms] Thanks! That idea did not occur to me and drastically simplifies all of the complicated logistics I was previously having trouble with.

Zane

Sounds like a good strategy! ...although, actually, I would recommend you delete it before all the potential As read it and know what to look out for.

Zane

Agreed that it could be a bit more realistic that way, but the main constraint here is that we need a game where there are three distinct levels of players who always beat each other. The element of luck in games like poker and backgammon makes that harder to guarantee (as suggested by the stats Joern_Stoller brought up). And another issue is that it'll be harder to find a lot of skilled players at different levels from any game that isn't as popular as chess is - even if we find an obscure game that would in theory be a better fit for the experiment, we won't be able to find any Cs for it.
