Comments

Murska

In this situation, I would shut down the AI, examine it to figure out whether it did torture simulated copies of me, and delete it entirely if it did or if I can't determine that with high confidence. The threat of torture is bad; letting a UFAI go free is worse. Actual torture is probably even worse, but luckily I get to choose before the experience.

Murska

True. However, there is no such thing as 'impossible', or probability 0. And while in common language people do use 'impossible' for what is merely 'very improbable', there is no accepted, specific threshold there. Your earlier point about people seeing a false distinction between things that seem possible but unlikely in their model and things that seem impossible in their model contributes to that. I prefer to use 'very improbable' for things that are very improbable and 'unlikely' for things that are merely unlikely, but most people do not use these words the way I do, and to communicate accurately I need to remember that.

Okay, I just typed that and then went back and looked, and it seems that we've talked in a circle, which is a good indication that there is no disagreement in this conversation. I think I'll leave it here, unless you believe otherwise.

Murska

Assuming you mean that things you believe are merely 'unlikely' can actually, more objectively, be less likely than things you believe are outright 'impossible', then I agree.

Murska

I am confused now. Did you properly read my post? What you say here amounts to 'I disagree; what you said is correct.'

To try and restate myself, most people use 'unlikely' like you said, but some, many of whom frequent this site, use it for 'so unlikely it is as good as impossible', and this difference can cause communication issues.

Murska

If I understand you correctly, then I agree. However, to me it seems clear that human beings discount probabilities that seem to them to be very small, and it also seems to me that we must do that, because calculating them out and having them weigh our actions by tiny amounts is impossible.

The question of where we should try to set the cut-off point is a more difficult one. It is usually set too high, I think. But if, after actual consideration, something seems extremely unlikely (as in, somewhere on the order of 10^{-18} or whatever), then we treat it as if it is outright false, regardless of whether we say it is false or say that it is simply very unlikely.
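To put rough, invented numbers on why such a cut-off does little harm in practice (the stakes and probability here are purely illustrative, not taken from anywhere):

```python
# Invented numbers for illustration: even an outcome worth ten billion lives,
# at probability 1e-18, contributes almost nothing in expectation.
probability = 1e-18
stakes_in_lives = 1e10
expected_lives = probability * stakes_in_lives
print(expected_lives)  # 1e-08 -- far too small to weigh on any real decision
```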

And to me, this does not seem to be a problem so long as, when new evidence comes up, we still update, and then start considering the possibilities that now seem sufficiently probable.

Of course, there is a danger here: it is difficult for a successive series of small new pieces of evidence pointing towards a previously very unlikely conclusion to overcome our resistance to considering very unlikely conclusions. This is precisely because I don't believe we can actually use numbers to update all the possibilities, which are basically infinite in number. It is hard for me to imagine a slow, successive series of tiny nuggets of evidence that would gradually convince me that Zeus actually exists. I could read several thousand different myths about Zeus, and it still wouldn't convince me. A single piece of evidence large enough to force me to consider the hypothesis more thoroughly, to privilege it in the hypothesis-space, seems much more likely to do the job - say, Zeus speaking to me and showing off some of his powers. This is admittedly a weakness, but at least it is an admitted weakness. I haven't found a way to circumvent it yet, but I can at least try to mitigate it by consciously paying more attention than I intuitively would to small but not infinitesimal probabilities.
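A minimal sketch of that asymmetry, with entirely made-up likelihood ratios: in odds form, each piece of evidence multiplies the prior odds by its likelihood ratio, so thousands of barely-informative, heavily redundant myths move a 10^{-18} prior almost nowhere, while one overwhelming observation can move it into the range worth considering at all.

```python
# Illustrative numbers only: prior odds of 1e-18 for the hypothesis, each myth
# treated as very weak (and heavily redundant) evidence with likelihood ratio
# ~1.001, and a single dramatic demonstration with likelihood ratio ~1e15.

def posterior_odds(prior_odds, likelihood_ratios):
    """Bayes' rule in odds form: multiply the prior odds by each likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1e-18

# Several thousand weak nuggets of evidence barely move the odds...
print(posterior_odds(prior, [1.001] * 3000))  # ~2e-17, still negligible

# ...while one large push lifts the hypothesis to where it must be considered.
print(posterior_odds(prior, [1e15]))          # ~1e-03
```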

Anyway, back to the earlier point: what I'm saying is that whether you say "X is untrue" or "X is extremely unlikely", when considering the evidence you have for and against X, it is very possible that what happens in your brain when thinking about X is the same thing. The hypothetical skeptic who does not know to use the terminology of probabilities and likelihoods will simply call things he finds extremely unlikely 'untrue'. And then, when a person who is unused to this sort of terminology hears the words 'X is very unlikely', he takes that to mean 'X is not unlikely enough to be considered untrue, but it is still quite unlikely, which means X is quite possible, even if it is not the likeliest of possibilities'. And here a misunderstanding happens, because I meant to say that X is so unlikely that it is not worth considering, but he takes it as me saying that X is unlikely, yet still likely enough to be worth considering.

Of course, there are also people who actually believe in something being true or untrue, meaning their probability estimate could not possibly be altered by any evidence. But in the case of most beliefs, and most people, I think that when they say 'true' or 'false', they mean 'extremely likely' or 'extremely unlikely'.

Murska

I would say the probabilities of a Yellowstone eruption or a meteor impact are both vastly higher than that of something like the existence of a specific deity. They're in the realm of possibilities that are worth thinking about. But there are tons of other possible civilization-ending disasters that we don't, and shouldn't, consider, because they have much less evidence for them and thus are so improbable that they are not worth considering. I do not believe we as humans can function without discounting very small probabilities.

But yeah, I'm generally rather optimistic about things. Reading LW has helped me there - before, I did not know why various things seemed to be so wrong; now I have an idea, and I know there are people out there who also recognize these things and can work to fix them.

As for the note about changing their central beliefs, I agree on that. What I meant to say was that the central beliefs of this hypothetical skeptic are not actually different from yours in this particular regard; he just uses different terminology. That is, his thinking goes 'This has little evidence for it and is a very strong claim that contradicts a lot of the evidence we have' -> 'This is very unlikely to be true' -> 'This is not true', and what happens in his brain is he figures it's untrue and does not consider it any further. I would assume that your thinking goes something along the lines of 'This has little evidence for it and is a very strong claim that contradicts a lot of the evidence we have' -> 'This is very unlikely to be true', and then you skip that last step, but what still happens in your brain is that you figure it is probably untrue and don't consider it any further.

And both of you are most likely willing to reconsider should additional evidence present itself.

Murska

Yes, that is probably clear to most of us here. But in reality, I, and most likely also you, discount probabilities that are very small instead of calculating them out and changing our actions (we'll profess 'this is very unlikely' instead of 'this is not true', but what actually happens is the same thing). There's a huge number of probability-10^{-18} deities out there; we just shrug and assume they don't exist unless enough strong (or 'good', I still don't see the difference there) evidence comes up to raise that probability into the realm of probabilities worth actually spending time and effort thinking about.

This hypothetical skeptic, if pressed, would most likely concede that sure, it is /possible/ that Zeus exists. He'd even probably concede that it is more likely that Zeus exists than that a completely random other god with no myths about them exists. But he'd say that is fruitless nitpicking, because both of them are overwhelmingly unlikely to exist and the fact that they still might exist does not change our actions in any way. If you wish to argue this point, then that is fine, but if we agree here then there's no argument, just a conflict of language.

I'm trying to say that where you would say "Probability for X is very low", most people who have not learned the terminology here would normally say "X is false", even if they would concede that "X is possible but very unlikely" if pressed on it.

Murska

It seems rather clear to me that what is being said ("myths are not evidence for Zeus") translates roughly to "myths are very weak evidence for Zeus, and so my beliefs are changed very little by them". Is there still a real misunderstanding here?

Murska

Got me to register, this one. I was curious about my own reaction, here.

See, I took in the problem, thought for a moment about game theory and such, but I am not proficient in game theory. I haven't read much of it. I barely know the very basics. And many other people can do that sort of thinking much better than I can.

I took a different angle, because it should all add up to normality. I want to save human lives here. My first instinct on what to do would be to cooperate on the first iteration, cooperate on the second regardless of whether the other side defected, and then, if they cooperate, keep cooperating until the end, and if they defect, keep defecting until the end. So why does it feel so obvious to me? After some thought, I came to the conclusion that it is because the potential cost of two million lives lost by cooperating in the first two rounds against a player who will always defect weighs less in my decision-making than the potential gain of a hundred million lives if I can convince it to cooperate with me to the end.
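Written out as a first pass, that instinct is something like a grim trigger that forgives a first-round defection; the exact trigger condition below is my own reading of the prose, not something fixed in stone:

```python
def instinctive_strategy(round_number, opponent_moves):
    """A sketch of the instinct described above: cooperate unconditionally on the
    first two rounds, then cooperate for the rest of the game unless the opponent
    has defected at any point after round one, in which case defect until the end.
    (The trigger condition is an assumption; the prose leaves it a bit open.)"""
    if round_number <= 2:
        return "C"
    # Forgive a possible round-one defection, then play grim trigger.
    if "D" in opponent_moves[1:]:
        return "D"
    return "C"

# Opponent defected in round 1 but cooperated afterwards -> keep cooperating.
print(instinctive_strategy(3, ["D", "C"]))  # C
# Opponent defected in round 2 -> defect for the rest of the game.
print(instinctive_strategy(3, ["C", "D"]))  # D
```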

So, the last round. Or, similarly, the only round in a non-iterated model. At first, when reading the post on the one-shot game, I felt like I should defect. Why? Because, well, saving one billion lives or three billion compared to saving two billion or none. I can't see why the other player would cooperate in this situation, given that they only care about paperclips. I'm sure there are convincing reasons, and possibly they even would - but if they would, then that means I save three billion lives by defecting, right? Plus, I feel that /not saving any lives/ would be emotionally worse for me than saving a billion lives while potentially letting another billion die. I'm not proud of it, but it does affect my reasoning: the desire to at least get something out of it, to avoid the judgment of people who shout at me for being naive and stupid and losing out on the chance to save lives. After all, if I defect and he defects, I can just point at his choice and say he'd have done it anyway, so I saved the maximum possible lives. If I defect and he cooperates, I've saved even more. I recognize that it would be better for me, on a higher level of reasoning, to figure out why cooperating is better, in order to facilitate cooperation if I come across such a dilemma later. But my reasoning does not influence the reasoning of the other player in this case. Even if I convince myself with a great philosophical argument that cooperating is better, the fact of the matter is that Player 2 either defects or cooperates completely regardless of what I do, according to his own philosophical arguments to himself about what he should do, and in either case I should defect.

And a rationalist should win, right? I note that the difference between this and Newcomb's Problem, in which I would one-box, is that Player 2 has no magical way of knowing what I will do. In Newcomb's Problem, if I one-box I gain a million, if I two-box I gain a thousand, so I one-box to gain a million. In this case, Player 2 either defects or cooperates, and that does not depend on me and my reasoning and arguments and game-theoretic musings in any way. My choice is to defect, because that way I save the most lives possible in that situation. If I were to convince myself to cooperate, that would not change the world into one where Player 2 would also convince itself to cooperate; it would affect Player 2's decision in no way at all.

But somehow the case seems different for the last round of an iterated game (and, even more so, for all the preceding rounds). This, in turn, makes me worried, because it is a sign that some bias or another may be affecting me adversely here. One obvious factor is me being blinded to what the numbers 'billion' and 'million' actually mean, but I try to compensate for that as best I can. Anyway, by the 100th round, after 99 rounds of cooperation, I get the choice to cooperate or to defect. At this point, the other player and I have a cooperative relationship. We've gained a lot. But our mutual interaction is about to end, which means there are no repercussions for defecting here, which means I should maximize my winnings by defecting. However, it feels to me that, since I already know Player 2 is enough of a winner-type to cooperate with me for all the previous rounds, he realizes the same thing. And in that case, I should cooperate here, to maximize my gains. At which point defecting makes more sense. Repeating forever.

What tilts the instinctual decision towards cooperating in this particular case seems to me to be that, regardless of what happens, I have already saved 198 million people. Whether I now save 0, 1, 2 or 3 million more is not such a big thing in comparison (even though it obviously is, but big numbers make me blind). Because I cannot reason myself into either defecting or cooperating, and thus I am unable to assign meaningful probabilities for what Player 2 will do, I cooperate by default because I feel that, other things being equal, it's the 'right' thing to do. If I am fooled and P2 defects, one million people die that would not have died otherwise, but I can bear that burden in the knowledge that I've saved 198 million. And meanwhile, it's P2 that has to bear the label of traitor, which means that I will be better able to justify myself to both myself and society at large. Obviously this reasoning doesn't seem very good. It feels like I am convincing myself here that my reasoning about what should be done somehow influences the reasoning of Player 2, after condemning that in the one-shot case just above. But then again, I have interacted with P2 for 99 rounds now, influencing him by my reasoning on what's the best way to act.

And, of course, there's the looming problem that if either of us had reasoned that the other was likely to defect in the last round no matter what, then it would have been better for us to defect in the second-to-last round, which did not happen. By defecting on round 99, you gain +3, and then on round 100 you're pretty much guaranteed to get +1, which is exactly the same gain as cooperating twice. By defecting on round 98 or any earlier round, you lose more than you gain, assuming all the remaining rounds are defections, which seems to me like a reasonable assumption. But by being betrayed on round 99 you get 0, and gain only 1 afterwards on round 100, which means you're left with 3 less than you could've had. Still, I don't care about how many paperclips P2 gets, only about how many lives I save. I, as a human, have an innate sense of 'fair play' that makes 2+2 mutual cooperation feel better than 3+1 double defection in a void. However, does that 'fair play' count as more weighty in decision-making than the risk that P2 defects and I gain 1, as opposed to 4? After all, on round 99, if I defect, I'm guaranteed +4. If I cooperate, I'm only guaranteed +1. And even if we both cooperate on round 99, there is still the risk that I gain nothing in the last round. Fair play does not seem worth even the possibility of losing several million lives. Still, the whole basis of this is that I don't care about Player 2, I only care about lives saved, and thus giving him the opportunity to cooperate gives me the chance to save more lives (at this point, even if he defects and I cooperate for the remaining turns, I've still saved more than I would have by defecting from the beginning). So I feel, weakly, that I should cooperate until the end here after all, simply because only the kind of reasoning that would make me cooperate until the end would give me the ability to cooperate at all, and thus save the most possible lives. But I have not convinced myself of this yet, because it still feels to me that I am unsure of what I would do on that very last round, when P2's choice is already locked in and millions of lives are at stake.
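The endgame arithmetic above, spelled out under the per-round payoffs implied here (2 million lives each for mutual cooperation, 3 for defecting against a cooperator and 0 for the one betrayed, 1 each for mutual defection; this matrix is my reading of the numbers in the comment):

```python
# Per-round payoffs (in millions of lives saved for my side), as implied above:
# (my move, their move) -> my payoff.
PAYOFF = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

def my_total(my_moves, their_moves):
    """Sum my payoff over a sequence of rounds."""
    return sum(PAYOFF[(mine, theirs)] for mine, theirs in zip(my_moves, their_moves))

# Rounds 99-100, the cases weighed above:
print(my_total("CC", "CC"))  # cooperate twice against a cooperator: 2 + 2 = 4
print(my_total("DD", "CD"))  # defect on 99 against a cooperator, mutual defect on 100: 3 + 1 = 4
print(my_total("CD", "DD"))  # betrayed on 99, defect on 100: 0 + 1 = 1
```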

Now, the above is simply an analysis of my instinctual choices, and an attempt to read into why those were my instinctual choices. But I am not confident in stating that they are the correct choices; I am just trying to write my way into a better understanding of how I decide things.

Murska

Hello.

I'm 21, from Finland. Studying physics, right now. I've felt for my entire life that that has been the path I want to take, and even after significant soul-searching lately on whether I really do want to go for it or not, partially sparked by reading LW, I still haven't changed my mind thus far.

I've read quite a bit of the Sequences and various other posts, mostly because many of the topics are very interesting (though I've found that I am interested in a lot of things), some of them affirming my previous views and others disillusioning. It feels like the site has pushed me towards a change in how I think and what I think about, and although that change is not yet finished, I feel that it's starting to be possible for me to contribute in some cases. And having the option is always nice, because I am unlikely to come back to write down my insights if I am unable to post them immediately. And, well, I enjoy the spirit of the community from what I have seen, compared to various other places where I have attempted to discuss things that interest me.

I am currently struggling with some difficulties in forcing myself to put in the necessary work for the goals I wish to reach. I expect I'll succeed sooner or later, and preferably sooner, but that's what I'm currently working on at any rate.

So, again, hello. I hope every one of you has a nice life.
