All of Marion Z.'s Comments + Replies

85 is really not that low. It's an entire standard deviation above the usual threshold for diagnosis of intellectual disability. It puts the guy in the 16th percentile. I would not expect that person, who as he says has gone to college and done well there, to have issues writing coherent sentences.
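The percentile claim is easy to check, assuming the conventional IQ norming (mean 100, standard deviation 15); a quick sketch using Python's standard library:

```python
from statistics import NormalDist

# IQ scores are conventionally normed to mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# An IQ of 85 is exactly one standard deviation below the mean,
# and one SD above the usual disability threshold of 70.
percentile = iq.cdf(85)
print(f"IQ 85 falls at roughly the {percentile:.1%} percentile")  # ~15.9%
```

So "16th percentile" is right to within rounding.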

No, around the same level as Socrates. 

We are sure with 99%+ probability that both were real people; it would be possible, but really difficult, to fake all the evidence of their existence.

We are sure with quite high but lesser probability that the broad strokes of their lives are correct: Socrates was an influential philosopher who taught Plato and was sentenced to death; Muhammad was a guy from Mecca who founded Islam and migrated to Medina, then returned to Mecca with his followers.

We think some of the specific details written about them in his... (read more)

Anecdotally, I started casually reading Less Wrong/Overcoming Bias when I was 12. I didn't really get it, obviously, but I got it enough to explain some basic things about biases and evidence and probability to an uninitiated person.

1sinitsa
I can say exactly the same about myself, even the same age. I actually think it was one of the main reasons I stopped being Christian in my early teens.

Agreed on the first point: learning about lying is good. On the parenting bit, I'll preface this by saying I don't have kids, but this seems like a great way to create a "dark rationalist". I am not perfectly or near-perfectly honest, though I admire people who are and think it's probably a good idea, but rewarding skilled lies as a training tool feels dangerous.

Neutral on the second point, Santa may in fact be a useful deception but I think there are associated downsides and I don't feel strongly either way.

Absolutism can be useful because parents ar... (read more)

The Aes Sedai have the advantage that Robert Jordan is writing books, and whenever he needs to demonstrate that Aes Sedai can capably mislead while telling the truth, he arranges the circumstances such that this is possible. In real life, seriously deceiving people about most topics on the fly - that is, in a live conversation - without making untrue statements is pretty hard, unless you've prepared ahead of time. It's not impossible, but it's hard enough that I would definitely have a higher baseline of belief in the words of someone who is committed to not telling literal lies.

1Foyle
Telling lies and discerning lies are both extremely important skills; becoming adept at them involves developing better and better cognitive models of other humans' reactions and perspectives, a chess game of sorts. Human society elevates and rewards the most adept liars: CEOs, politicians, actors, and salespeople in general. You could perhaps say that charisma is, in essence, mostly convincing lying. I take the approach with my children of punishing obvious lies, and explaining how they failed, because I want them to get better at it, and punishing less or not at all when they have been sufficiently cunning about it.

For children I think the Santa deception is potentially a useful awakening point: a rite of passage where they learn not to trust everything they are told, that deception and lies and uncertainty in the truth are a part of the adult world, and a little victory where they get to feel like they have conquered an adult conspiracy. The rituals are also a fun interlude for them and the adults in the meantime.

As a wider policy I generally don't think absolutism is a good style for parenting (in most things). There are shades of grey in almost everything; even if you are a hard-core rationalist in your beliefs, 99.9% of everyone you and your children deal with won't be, and they need to be armed for that. Discussing the grey is an endless source of useful teachable moments.

Sorry for doing such an insane necro here, and I'll delete if asked, but I don't think this is right at all. Broadly, in the real world, I accept the premise "avoiding listening to opposing positions is bad." I do not believe that "if you really don't think you could stand up to debate with a talented missionary, maybe you aren't really an atheist" because I don't think it scales up. 

I am a human, I have mechanisms for deciding what I believe that are not based on rationality. I have worked very hard to break and adapt some of those mechanisms to alig... (read more)

But you've perfectly forgotten about the hoodlum, so you will in fact one-box. Or does the hoodlum somehow show up and threaten you in the moment between the scanner filling the boxes and you making your decision? That seems to add an element of delay and environmental modification that I don't think exists in the original problem, unless I'm misinterpreting.

Also, I feel like by analyzing your brain to some arbitrarily precise standard, the scanner could see 3 things:  You are (or were at some point in the past) likely to think of this solution... (read more)

I mean I think the obvious answer is that an adult isn't universally entitled to their parents subsidizing their law school tuition. The actual concern is that people can brainwash their kids from a very early age, so they don't see the choices they actually have as legitimate, but I think that's a nearly intractable problem in any system. You could fix it, but only with absurd levels of tyranny.

Only replying to a tiny slice of your post here, but the original (weak) Pascal's wager argument actually does say you should pretend to believe even if you secretly don't, for various fuzzy reasons: societal influence, the hope that maybe God will see that you were trying, and the chance that sheer repetition might make you believe a little bit eventually.

That seems entirely reasonable, insofar as the death penalty is reasonable at all. I don't think we should be going around executing people, but if we're going to, then we might as well save a few lives by doing it.

1. What is your probability that there is a god? A: 7%. It seems very unlikely for a lot of very logical reasons, but I think there's some chance the infinite recursion argument is true, and I also have to give some chance to any belief that most humans on earth are convinced of.
2. What is your probability that psychic powers exist? A: 15%. I feel like I'm way off the community standard here, but I think if there is a field of "unscientific" research that has promise, it is parapsychology. Frankly, psychology itself seems to border on pseudoscience some of... (read more)


The most effective diet for weight loss? Seems plausible. The most effective diet for being healthy, that sounds extremely unlikely. Even if your seven foods are nutritionally complete you're not likely to be eating them in the right balances. Intuitive body regulation sounds good there but in general, our bodies are actually not so good at guessing that kind of thing.

1Portia
No idea why you were downvoted. Humans have some correct intuitions on foods they need, but they are generally overwhelmed by wrong signals. My body certainly believes that it would profit from living off sugary fatty things, and that any multicoloured food has lots of nutrients, or that anything acidic is high in vitamins, which is why acidic, multicoloured candy is obviously a complete diet. I see why it thinks that and why that would help in a natural environment, but it really does not help with typical foods.

Yes. My uncle, who is a doctor working in gastroenterology, was talking about basically the exact same topic last week. He said that they're highly confident a significant number of patients are having entirely or near-entirely psychosomatic illnesses, but it's incredibly difficult to identify when that is specifically happening, and unfortunately due to time and money constraints they have a tendency to just slap the label on difficult cases. We just do not know enough about the human body and how the brain affects it to be confident outside of extremely obvious cases. Even a lot of what we do know is being reexamined in the last two decades due to edge cases being discovered and lack of rigor in earlier testing.

1Portia
That label also just does not achieve anything. Sure, absolutely, poor mental health worsens physical health, and some debilitating conditions have no apparent physical causes. But this doesn't make them hurt less. It doesn't provide a resolution. My episode beginnings have a significant correlation with stressful events. I am perfectly aware of this. I would still, really really, like a way to interrupt the resulting destructive cascade other than going "well, it would have been better not to have been stressed".

I’m a casual climber and know a lot of former pros/serious climbers - the death rate is simply staggering. I get that these people just have the drive and can’t imagine not pushing the boundaries even further, but when a single guy can tell me three different stories about watching a fellow climber or paraglider or whatever else they do in the mountains dying in front of him, that sport is too much for me to go further into. I remember reading outdoor magazines about the exploits of the most famous climbers fifteen or twenty years ago, and I look them up now and a solid chunk of them are dead. It’s wild, but there’s something appealing about it in a primal sense.

Any individual doomsday mechanism we can think of, I would agree, is not nearly so simple for an AGI to execute as Yudkowsky implies. I do think it's quite likely that there are mechanisms we can't conceive of even theoretically that an AGI could think of, and one or more of those might actually be quite easy to do secretly and quickly. I wouldn't call it guaranteed by any means, but intuitively this seems like the sort of thing that raw cognitive power might have a significant bearing on.

5[anonymous]
I agree. One frightening mechanism I thought of is: "OK, assume the AGI can't craft the bioweapon or nanotechnology killbots without collecting vast amounts of information through carefully selected and performed experiments (basically enormous complexes full of robotics). How does it get the resources it needs?" And the answer is that it scams humans into doing it. We have many examples of humans trusting someone they shouldn't, even when the evidence was readily available that they shouldn't.

I've always quite liked Scott Alexander's answer to the problem of evil. It is absolutely useless as a defense of Abrahamic beliefs in the real world, but is relatively satisfying to an atheist wondering how that question might theoretically be answered by a true god.

In case you're not familiar, the basic idea is that God did create a perfectly good universe full of a near-infinite number of consciousnesses experiencing total bliss at all times - then decided that he wanted more net good to exist, so he made a universe which was almost exactly the same as ... (read more)

Fascinating. I can't help feeling that A escaping D's notice was rather hand-wavey, but then so is D being aligned in the first place and not a paper clip maximiser itself, so I suppose I can't complain too much about that.

Some sort of general value for life, or a preference for decreased suffering of thinking beings, or the off chance we can do something to help (which I would argue is almost exactly the same low chance that we could do something to hurt it). I didn't say there wasn't an alignment problem, just that an AGI whose goals don't perfectly align with those of humanity in general isn't necessarily catastrophic. Utility functions tend to have a lot of things they want to maximize, with different weights. Ensuring one or more of the above ideas is present in an AGI is important.

3Yitz
I think that if we can reliably incorporate that into a machine’s utility function, we’d be most of the way to alignment, right?

Fourth, seventh, and 66th out of ~200 is quite good? I agree that there are aspects of all of these nations which are objectionable, particularly China, but corruption seems like an odd example. I think there's a fair argument that the PRC has been extremely successful by many metrics given the position of the nation in 1945 - China was in extreme poverty and I wouldn't have expected it to improve so quickly. China is undemocratic in many ways in practice, particularly press freedom and freedom of speech, but on a gears level, the system of local and regional governance is a relatively effective democracy.

1Ben
I agree 66th out of 200 is pretty good. My general point is that to talk about "success" you need to already know what winning looks like. Low corruption is certainly not the #1 thing, and probably not in the top 10 for most people, but it probably makes it into the top 100. Maybe GDP per capita is in the top 10. These discussions (what is good) are sort of needed to ground any kind of discussion about whether a particular system produces good outcomes. I singled out China simply because the other two on this list would (by the kinds of metrics I would reach for) be world-leading (A/A+), while China would not be. For example, when you say that China has improved quickly since 1945, you are presumably using an economic metric (GDP)? The problem with going all the way back to 1945 is that systems change. In my weird and unscientific "how efficient do I feel different governments are" ranking, I can give the 2022 Chinese government a fair score, but I would score the 1950s and 60s Chinese governments very, very low.

I assume that any unrestrained AGI would pretty much immediately exert enough control over the mechanisms through which an AGI might take power (say, the internet, nanotech, whatever else it thinks of) to ensure that no other AI could do so without its permission. I suppose it is plausible that humanity is capable of threatening an AGI through the creation of another, but that seems rather unlikely in practice. First-mover advantage is incalculable to an AGI. 

Other people have addressed the truth/belief gap. I want to talk about existential risk.

We got EXTREMELY close to extinction with nukes, more than once.  Launch orders in the Cold War were given and ignored or overridden three separate times that I'm aware of, and probably more. That risk has declined but is still present. The experts were 100% correct and their urgency and doomsday predictions were arguably one of the reasons we are not all dead.

The same is true of global warming, and again there is still some risk. We probably got extremely lucky in... (read more)

That seems like extremely limited, human thinking. If we're assuming a super powerful AGI, capable of wiping out humanity with high likelihood, it is also almost certainly capable of accomplishing its goals despite our theoretical attempts to stop it without needing to kill humans. The issue, then, is not fully aligning AGI goals with human goals, but ensuring it has "don't wipe out humanity, don't cause extreme negative impacts to humanity" somewhere in its utility function. Probably doesn't even need to be weighted too strongly, if we're talking about a ... (read more)

5Alex Vermillion
Why would it "want" to keep humans around? How much do you care about whether or not you move dirt while you drive to work? If you don't care about something at all, it won't factor into your choice of actions.[1]

[1] I know I phrased this tautologically, but I think the idiom will be clear. If not, just press me on it more. I think this is the best way to get the message across or I wouldn't have done it.
1Leo P.
If humans are capable of building one AGI, they would certainly be capable of building a second one whose goals are unaligned with the first's.