All of SithLord13's Comments + Replies

Personally, I think the update most people should be making is the one getting the least attention: that even a 30% chance means 3 out of 10 times. Things far less likely than 3 in 10 happen every day. But because we assign such importance to the election, we assign a much greater confidence to our predictions, even when we know we're not that confident.

-2 username2
Except that the chances weren't 30%. That was a number generated by Nate Silver based on polling methodology that was not calibrated to the reality on the ground. I think you can find much deeper lessons here than that, especially given it seems to be a repeat of the Brexit phenomenon. Fool me once, fool me twice...

This discussion was about agential risks; the part I quoted was talking about extreme ecoterrorism as a result of environmental degradation. In other words, the main post was partly about stricter regulations on CO2 as a means of minimizing the risk of a potential doomsday scenario from an anti-global-warming group.

I think the issue here might be slightly different than posed. I think the real issue is that children instinctively assume they're running on corrupted hardware. All of their prior experience with math has been with solvable problems. They've had problems they couldn't solve, and then been shown it was a mistake on their part. Without good cause, why would they suddenly assume all their priors are wrong, and not just that they're failing to grasp it? Given their priors and information, it's rational to expect that they missed something.

4 James_Miller
Yes, I agree. It shows children are trying to guess the teacher's password and are not doing math. Interestingly, when I asked my son this question he said you couldn't find the answer. When I asked how he knew that, he said he had seen other math problems where you don't have enough information to solve them.

I think the best reason for him to raise that possibility is to give a clear analogy. Nukes are undoubtedly airgapped from the net, and there's no chance anyone with the capacity to penetrate them would think otherwise. It's just an easy-to-grasp way for him to present it to the public.

Citation? This is commonly asserted by AI risk proponents, but I'm not sure I believe it. My best friend's values are slightly misaligned relative to my own, but if my best friend became superintelligent, that seems to me like it'd be a pretty good outcome.

I highly recommend reading this.

4 hg00
I'm familiar with lots of the things Eliezer Yudkowsky has said about AI. That doesn't mean I agree with them. Less Wrong has an unfortunate culture of not discussing topics once the Great Teacher has made a pronouncement. Plus, I don't think philosophytorres' claim is obvious even if you accept Yudkowsky's arguments. From here.

OK, so do my best friend's values constitute a 90% match? A 99.9% match? Do they pass the satisficing threshold?

Also, Eliezer's boredom-free scenario sounds like a pretty good outcome to me, all things considered. If an AGI modified me so I could no longer get bored, and then replayed a peak experience for me for millions of years, I'd consider that a positive singularity. Certainly not a "catastrophe" in the sense that an earthquake is a catastrophe. (Well, perhaps a catastrophe of opportunity cost, but basically every outcome is a catastrophe of opportunity cost on a long enough timescale, so that's not a very interesting objection.) The utility function is not up for grabs -- I am the expert on my values, not the Great Teacher.

Here's the abstract from his 2011 paper:

It sounds to me like Eliezer's point is more about the complexity of values, not the need to prevent slight misalignment. In other words, Eliezer seems to argue here that a naively programmed definition of "positive value" constitutes a gross misalignment, NOT that a slight misalignment constitutes a catastrophic outcome. Please think critically.

Furthermore, implementing stricter regulations on CO2 emissions could decrease the probability of extreme ecoterrorism and/or apocalyptic terrorism, since environmental degradation is a “trigger” for both.

Disregarding any discussion of legitimate climate concerns, isn't this a really bad decision? Isn't it better to be unblackmailable, to disincentivize blackmail?

1 philosophytorres
What do you mean? How is mitigating climate change related to blackmail?

I also think that these scenarios usually devolve into a "would you rather..." game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.

Can you expand on this a bit? (Full disclosure: I'm still relatively new to Less Wrong, and still learning quite a bit that I think most people here have a firm grip on.) I would think they illuminate a great deal about our underlying moral values, if we assume they're honest answers and that people are actually bound by their morals (or are at least answering a...

5 username2
This is deserving of a much longer answer which I have not had the time to write and probably won't any time soon, I'm sorry to say. But in short summary, human drives and morals are more behaviorist than utilitarian. The utility function approximation is just that, an approximation.

Imagine you have a shovel, and while digging you hit a large rock and the handle breaks. Was that shovel designed to break, in the sense that its purpose was to break? No, shovels are designed to dig holes. Breakage, for the most part, is just an unintended side effect of the materials used. Now in some cases things are intended to fail early for safety reasons, e.g. to have the shovel break before your bones will. But even then this isn't some underlying root purpose. The purpose of the shovel is still to dig holes. The breakage is more a secondary consideration to prevent undesirable side effects in some failure modes. Does learning that the shovel breaks when it exceeds normal digging stresses tell you anything about the purpose / utility function of the shovel? Pedantically, a little bit, if you accept the breaking point being a designed-in safety consideration. But it doesn't enlighten us about the hole-digging nature at all.

Would you rather put dust in the eyes of 3^^^3 people, or torture one individual to death? Would you rather push one person onto the trolley tracks to save five others? These are failure-mode analyses of edge cases. The real answer is I'd rather have dust in no one's eyes, nobody tortured, and nobody hit by trolleys. Making an arbitrary what-if tradeoff between these scenarios doesn't tell us much about our underlying desires, because there isn't some consistent mathematical utility function underlying our responses. At best it just reveals how we've been wired by genetics, upbringing, and present environment to prioritize our behaviorist responses. Which is interesting, to be sure. But not very informative, to be honest.

There are a lot of conflicting aspects to consider here outside of a vacuum. Discounting the unknown unknowns, which could factor heavily here since it's an emotionally biasing topic, you've got the fact that the baby is going to be raised by a presumably attentive mother, as opposed to the five who wound up in that situation once, showing at least some increased risk of falling victim to such a situation again. Then you have the psychological damage to the mother, which is going to be even greater because she had to do the act herself. Then you've got the fa...

1 username2
Never, in my opinion. Put every other human being on the tracks (excluding other close family members, to keep this from being a Sophie's Choice "would you rather..." game); the mother should still act to protect her child. I'm not joking. You can rationalize this post facto by weighing the kind of society where mothers are ready to sacrifice their kids, and are indeed encouraged to in order to save another life, against the world where mothers simply always protect their kids no matter what. But I don't think this is necessary -- you don't need to validate it on utilitarian grounds. Rather, it is perfectly okay for one person to value some lives more than others. We shouldn't want to change this, IMHO. And I think the OP's question about donating 100% to charity, to their own detriment, is symptomatic of the problems that arise from utilitarian thinking. After all, if the OP were not having an internal conflict between internal morals and supposedly rational utilitarian thinking, he wouldn't have asked the question...

Could chewing gum serve as a suitable replacement for you?