Also, I realize I was talking about whether (at the expected effect size) there's a trade-off at all to gaining more IQ. But to be clear, even if there is a trade-off (or uncertainty about the trade-off), it is still probably worth it up to some point - I certainly don't think we should only augment intelligence if we can prove it is literally costless.
I'm not really worried, given the expected effect size. E.g. say that by the time I'd be ready to use such tech, the best known interventions have an EV of +10 IQ points. The world already has a significant number of humans with 10 more IQ points than me (and a significant number in a range on both sides of that); obviously I haven't done real studies on this, but my vague impression from looking around at those existing people is that the extra IQ points don't, on average, trade off against other things I care about. (It's possible that I'm wrong about existing humans, or that I'm right but that tech selecting for IQ points does involve a trade-off I wouldn't accept if I knew about it.)
I haven't thought much about how worried I'd be if we're talking about much more extreme IQ gains. Certainly in the limit: if you told me my kid would have +100,000 IQ points, my first response is that you're lying, but if I believed it, then yes, I'd be worried for reasons similar to those that make one worry about ASI.
I agree that roughly, tracking 0 echoes gets you ask culture and tracking 1 gets you guess culture. But I wouldn't equate tracking 0 with ask culture[1]. Echoes are determined by cultural norms, e.g. the echoes caused by sticking your third finger straight up are different in cultures where that is or isn't an insult. So once you are tracking 2 or more echoes and you realize that a guess-like set of norms might make it impossible[2] for you to ask or even think straight about certain questions at all, one solution is to explicitly agree on ask-style norms. At which point it's not that you're not tracking echoes anymore, but rather that you've built a world with different echoes - the echo of "can my friend stay over" is not "I'm feeling pressured to say yes" but rather "I recall we agreed to ask-style norms, so..."
Good news: there is no way to opt in because you are already in. (If you want to opt out, we have a problem.)
This week's topic is Reading & Discussion.
should be changed
My guess at the truth of the matter is that almost no one is 100% guessing, but some people are extremely confident in their answer (a lot of the correct folks, and also a small number of die-hard geocentrists), and then there's a range down to people who haven't thought about it in ages and just have a vague recollection of some elementary school teacher. Which I think is also a more hopeful picture than either the 36%-clueless or the 18%-geocentrist model? Because for people who are right but not confident, I'm reasonably ok with that; ideally they'd "know" more strongly, but it's not a disaster if they don't. And for people who are wrong but not confident, there are not that many of them, and also they would happily change their minds if you just told them the correct answer.
How valid is it to assume that (approximately) everyone who got the heliocentrism question wrong got it wrong by "guessing"? If 18% got it wrong, then your model says there's 36% who had no clue, half of whom happened to guess right - but at the other extreme there's a model where everyone 'knows' the answer, and 18% 'know' the wrong answer. I'm not sure which is scarier - 36% clueless or 18% die-hard geocentrists - but I don't think we have enough information here to tell where on that spectrum it is. (In particular, if "I don't know" was an option and only 3% selected it, then I think this is some evidence against the extreme end of 36% clueless?)
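The spectrum between those two extremes can be written out as a tiny mixture model - a sketch only, assuming a binary question where the clueless guess 50/50 (the function and parameter names are mine, not from the survey):

```python
# Hypothetical mixture model for a binary survey question:
# a fraction `guessers` answer at random (50/50), a fraction
# `geocentrists` confidently answer wrong, everyone else is right.

def wrong_rate(guessers, geocentrists):
    """Observed wrong-answer rate: the confident-wrong, plus half the guessers."""
    return geocentrists + guessers / 2

# Two extremes that both produce the observed 18% wrong:
print(wrong_rate(guessers=0.36, geocentrists=0.0))   # 0.18
print(wrong_rate(guessers=0.0, geocentrists=0.18))   # 0.18

# ...and any mixture in between, e.g.:
print(wrong_rate(guessers=0.18, geocentrists=0.09))  # 0.18
```

The point being that the observed 18% alone can't distinguish these mixtures; the 3% "I don't know" responses are the extra evidence that pushes against the all-guessers end.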
Too Like the Lightning by Ada Palmer :)
Done, true to form just barely by the deadline :)