No, around the same level as Socrates.
We are sure, with 99%+ probability, that both were real people; it would be possible, but really difficult, to fake all the evidence of their existence.
We are sure, with quite high but lesser probability, that the broad strokes of their lives are correct: Socrates was an influential philosopher who taught Plato and was sentenced to death; Muhammad was a man from Mecca who founded Islam, migrated to Medina, and then returned to Mecca with his followers.
We think some of the specific details written about them in his...
Anecdotally, I started casually reading Less Wrong/Overcoming Bias when I was 12. I didn't really get it, obviously, but I got enough of it to explain some basic things about biases, evidence, and probability to an uninitiated person.
Agreed on the first point: learning about lying is good. On the parenting bit, I'll preface this by saying I don't have kids, but this seems like a great way to create a "dark rationalist". I am not perfectly or near-perfectly honest myself, though I admire people who are and think it's probably a good idea, but rewarding skilled lies as a training tool feels dangerous.
Neutral on the second point: Santa may in fact be a useful deception, but I think there are associated downsides, and I don't feel strongly either way.
Absolutism can be useful because parents ar...
The Aes Sedai have the advantage that Robert Jordan is writing books, and whenever he needs to demonstrate that Aes Sedai can capably mislead while telling the truth, he arranges the circumstances such that this is possible. In real life, seriously deceiving people about most topics on the fly - that is, in a live conversation - without making untrue statements is pretty hard, unless you've prepared ahead of time. It's not impossible, but it's hard enough that I would definitely have a higher baseline of belief in the words of someone who is committed to not telling literal lies.
Sorry for doing such an insane necro here, and I'll delete if asked, but I don't think this is right at all. Broadly, in the real world, I accept the premise "avoiding listening to opposing positions is bad." I do not believe that "if you really don't think you could stand up to debate with a talented missionary, maybe you aren't really an atheist" because I don't think it scales up.
I am a human, I have mechanisms for deciding what I believe that are not based on rationality. I have worked very hard to break and adapt some of those mechanisms to alig...
But you've perfectly forgotten about the hoodlum, so you will in fact one-box. Or does the hoodlum somehow show up and threaten you in the moment between the scanner filling the boxes and you making your decision? That seems to add an element of delay and environmental modification that I don't think exists in the original problem, unless I'm misinterpreting.
Also, I feel like by analyzing your brain to some arbitrarily precise standard, the scanner could see three things: you are (or were at some point in the past) likely to think of this solution...
I mean, I think the obvious answer is that an adult isn't universally entitled to their parents' subsidizing their law school tuition. The actual concern is that people can brainwash their kids from a very early age, so that they don't see the choices they actually have as legitimate, but I think that's a nearly intractable problem in any system. You could fix it, but only with absurd levels of tyranny.
Only replying to a tiny slice of your post here, but the original (weak) Pascal's wager argument actually does say you should pretend to believe even if you secretly don't, for various fuzzy reasons: societal influence, the hope that God will see that you were trying, and the chance that sheer repetition might make you believe a little bit eventually.
That seems entirely reasonable, insofar as the death penalty is reasonable at all. I don't think we should be going around executing people, but if we're going to, then we might as well save a few lives by doing it.
1. What is your probability that there is a god? A: 7%. It seems very unlikely for a lot of logical reasons, but I think there's some chance the infinite recursion argument is true, and I also have to assign some probability to any belief that most humans on earth are convinced of.
2. What is your probability that psychic powers exist? A: 15%. I feel like I'm way off the community standard here, but I think if there is a field of "unscientific" research that has promise, it is parapsychology. Frankly, psychology itself seems to border on pseudoscience some of...
The most effective diet for weight loss? Seems plausible. The most effective diet for being healthy? That sounds extremely unlikely. Even if your seven foods are nutritionally complete, you're not likely to be eating them in the right balances. Intuitive body regulation sounds good in theory, but in general our bodies are actually not so good at guessing that kind of thing.
Yes. My uncle, a doctor working in gastroenterology, was talking about basically this exact topic last week. He said that they're highly confident a significant number of patients have entirely or near-entirely psychosomatic illnesses, but it's incredibly difficult to identify when that is specifically happening, and unfortunately, due to time and money constraints, they have a tendency to just slap the label on difficult cases. We just do not know enough about the human body, and how the brain affects it, to be confident outside of extremely obvious cases. Even a lot of what we do know has been reexamined over the last two decades, due to edge cases being discovered and a lack of rigor in earlier testing.
I’m a casual climber and know a lot of former pros/serious climbers - the death rate is simply staggering. I get that these people just have the drive and can’t imagine not pushing the boundaries even further, but when a single guy can tell me three different stories about watching a fellow climber or paraglider or whatever else they do in the mountains dying in front of him, that sport is too much for me to go further into. I remember reading outdoor magazines about the exploits of the most famous climbers fifteen or twenty years ago, and I look them up now and a solid chunk of them are dead. It’s wild, but there’s something appealing about it in a primal sense.
Any individual doomsday mechanism we can think of, I would agree, is not nearly so simple for an AGI to execute as Yudkowsky implies. But I do think it's quite likely there are mechanisms we're unable to think of, even theoretically, that an AGI could, and one or more of those might actually be quite easy to carry out secretly and quickly. I wouldn't call it guaranteed by any means, but intuitively this seems like the sort of thing that raw cognitive power might have a significant bearing on.
I've always quite liked Scott Alexander's answer to the problem of evil. It is absolutely useless as a defense of Abrahamic beliefs in the real world, but is relatively satisfying to an atheist wondering how that question might theoretically be answered by a true god.
In case you're not familiar, the basic idea is that God did create a perfectly good universe full of a near-infinite number of consciousnesses experiencing total bliss at all times - then decided that he wanted more net good to exist, so he made a universe which was almost exactly the same as ...
Fascinating. I can't help feeling that A escaping D's notice was rather hand-wavey, but then so is D being aligned in the first place and not a paper clip maximiser itself, so I suppose I can't complain too much about that.
some sort of general value for life, or a preference for decreased suffering of thinking beings, or the off chance we can do something to help (which I would argue is almost exactly the same low chance that we could do something to hurt it). I didn't say there wasn't an alignment problem, just that an AGI whose goals don't perfectly align with those of humanity in general isn't necessarily catastrophic. Utility functions tend to have a lot of things they want to maximize, with different weights. Ensuring one or more of the above ideas is present in an AGI is important.
4th, 7th, and 66th out of ~200 is quite good? I agree that there are aspects of all of these nations which are objectionable, particularly China, but corruption seems like an odd example. I think there's a fair argument that the PRC has been extremely successful on many metrics given the position of the nation in 1945 - China was in extreme poverty, and I wouldn't have expected it to improve so quickly. China is undemocratic in many ways in practice, particularly press freedom and freedom of speech, but on a gears level, the system of local and regional governance is a relatively effective democracy.
I assume that any unrestrained AGI would pretty much immediately exert enough control over the mechanisms through which an AGI might take power (say, the internet, nanotech, whatever else it thinks of) to ensure that no other AI could do so without its permission. I suppose it is plausible that humanity is capable of threatening an AGI through the creation of another, but that seems rather unlikely in practice. First-mover advantage is incalculable to an AGI.
Other people have addressed the truth/belief gap. I want to talk about existential risk.
We got EXTREMELY close to extinction with nukes, more than once. During the Cold War, launch orders were given and then ignored or overridden on three separate occasions that I'm aware of, and probably more. That risk has declined but is still present. The experts were 100% correct, and their urgency and doomsday predictions were arguably one of the reasons we are not all dead.
The same is true of global warming, and again there is still some risk. We probably got extremely lucky in...
That seems like extremely limited, human thinking. If we're assuming a super powerful AGI, capable of wiping out humanity with high likelihood, it is also almost certainly capable of accomplishing its goals despite our theoretical attempts to stop it without needing to kill humans. The issue, then, is not fully aligning AGI goals with human goals, but ensuring it has "don't wipe out humanity, don't cause extreme negative impacts to humanity" somewhere in its utility function. Probably doesn't even need to be weighted too strongly, if we're talking about a ...
85 is really not that low. It's an entire standard deviation above the usual threshold for diagnosing intellectual disability, and it puts the guy in the 16th percentile. I would not expect that person, who, as he says, has gone to college and done well there, to have issues writing coherent sentences.
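To make the arithmetic explicit, here's a minimal sketch assuming the conventional IQ scale (mean 100, SD 15) and the rough diagnostic cutoff of ~70; the function and variable names are mine, just for illustration:

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

MEAN, SD, THRESHOLD = 100, 15, 70  # conventional IQ scaling and rough diagnostic cutoff

z = (85 - MEAN) / SD                       # 85 is exactly one SD below the mean: z = -1.0
print(f"percentile: {normal_cdf(z):.1%}")  # ~15.9%, i.e. roughly the 16th percentile
print((85 - THRESHOLD) / SD)               # 1.0 -> one full SD above the cutoff
```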