shminux comments on Welcome to Less Wrong! (6th thread, July 2013) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (513)
Welcome, Avi!
It looks like I downvoted three of your previous comments. Sorry about that (not really, it had to be done). Here is my reasoning, since you asked:
Your comment on AI avoiding destruction suggested that you neither read the previous discussion of the issue first, nor thought about it in any depth, just blurted out the first or second idea that you came up with.
Your retracted FTL question indicated that you didn't bother searching online for one of the most common questions ever asked about entanglement. Not until later, anyway. So the downvote worked as intended there.
Your comment on the vague quasi-philosophical concept of superdeterminism purported to provide some sort of proof that it is not Turing-computable, yet it did not discuss why the Turing machine would fail to halt; it only gave a poorly described thought experiment.
I am sorry you got a harsher-than-average welcome to this forum; I hope your comment quality improves after these few bumps to your ego.
Good for you. Note that the Quantum sequence is one of the harder and more controversial ones; consider alternative sources, like Scott Aaronson's semi-popular Quantum Computing Since Democritus, written by an expert in the field.
That's quite wise. If you write down what you want to say and then look back at it after you finish reading, you will likely find your original thoughts naive in retrospect. But a good exercise nonetheless.
If at some point you think that after a cursory reading of some post you found a hole in Eliezer's reasoning that had not been discussed in the comments, you are probably mistaken. Consider this post of mine as a warning.
Also note that as a self-identified "Orthodox Jew", you are bound to have compartmentalized a lot, and Eliezer's and Yvain's posts tend to vaporize those barriers quite spectacularly, so be warned, young Draco. Your original identity is not likely to remain intact, either.
With these caveats, have fun! :)
Joining these forums can serve as something of a reality check to gifted young people; they may be used to most any half-baked thought still being sufficient to impress their environment. Rarely is polish needed, rarely are "proofs" thoroughly nitpicked. Getting actual feedback knocking them off of their pedestal ("the smartest one around") can be ego-bruising, since we usually define ourselves through our perceived strengths. Ego-bruising, yet really, really important for actual personal and intellectual growth.
Blessed be the ones growing up around other minds who call them out on their mistakes, intellects against which they can grow their potential.
(I don't mean this as applying specifically to Avi, but more as a general observation.)
Yep. I'll put it even more directly.
Smart people growing up in environments where most people around them are less smart tend to develop a highly convenient habit of handwaving or bullshitting their way through issues. However, when they find themselves among people who are at least as smart as they are, and some smarter, that habit often leads to problems and a need for adjustment :-)
Does that go both ways? That is, can I "nitpick" other people's comments and posts? Also, if I find a typo in a post (in the sequences so far, I've spotted at least 2), is it acceptable to comment just pointing out the typo?
Why not PM them first?
This is my own practice. My reasoning is that pointing out a typo is of no enduring interest to other readers, and renders the comments section less valuable to other readers; so if it's convenient to contact the author more quietly, one should.
Yes. I recommend using ctrl-f to ensure no one else has already pointed out that typo.
Of course you can. Whether it's wise to do so is an entirely different question :-D
I don't think I would have minded as much if there had been comments explaining why they thought I was wrong. It was the lack of response that bothered me.
(And what's with this "You are trying to submit too fast"? I'm not allowed to post too many comments in a row?)
Yes. If I remember correctly, LW also implements some form of slow-banning (the amount of time required between your comments depends on your total karma), but I may be recalling a feature request as an implemented feature.
I thought it was caused by having a lot of recent posts downvoted.
From your post that you linked: "Instead I may ask politely whether my argument is a valid one, and if not, where the flaw lies." I think that's what I did in my FTL comment. (Incidentally, I had looked online and found several different versions of an experiment that said the same thing as I did in different ways, but the answers didn't explain it well enough for me.)
I actually spent at least an hour reading through the comments on that AI post, and decided that the previous discussion wasn't enough for my idea.
I'm not very good at anticipating which parts of my arguments people will disagree with or not understand, so that may be why I don't explain fully. I was hoping for a response from which I could see what's missing and fill it in. It's usually better explained in my head than in what I write down.
I read most of the posts offline in ebooks. That means I don't see the comments unless I then go online and look. Is there a set of ebooks that includes comments? (For all I know, most of my ideas have already been said and refuted.)
And is he perfect?
I don't know, but sounds like a good idea. Would be rather Talmudic in spirit. Unfortunately, most of the comments are fluff not worth reading, and separating the few percent that aren't is not that easy. Maybe pick the threads with top 10 comments by karma or something.
Oh, far from it. I think that some of his statements are flat out wrong, but I only make this determination where either I have the relevant expertise or several experts disagree with him after considering his point in earnest.
Don't many experts disagree with him on his MWI view on quantum mechanics?
Also note that replacing "Everett branches" with "possible worlds" works in 99% of the decision-theoretic arguments Eliezer makes, so there is no need to sweat MWI vs other interpretations. I would be more interested to hear your opinion on the Trolley problem, Newcomb's problem, and the Dust specks vs Torture issue. Assuming, of course, that you have studied them in some depth and gone over the various arguments on both sides, a process you must be intimately familiar with if you have attended a yeshiva.
I've seen Newcomb and Dust specks vs Torture but not Trolley (although I've seen that one before in other places). Which sequences do I need to finish for those?
If the trolley one is the same as the "standard" version, then it's fairly trivial within the framework of Orthodox Judaism (if I'm allowed to bring that in), because of strict rules about death. I'll elaborate further when I'm up to the question. The other two are a lot more complicated for me.
I don't think there's a Lesswrong-specific take on the trolley problem, so I'm assuming shminux is just referring to the usual one.
Yes, the standard Trolley problem, sorry. For more LW-specific problems, consider Parfit's hitchhiker.
Of course you are allowed to bring it in. And, unless you insist that it is the One True Way, as opposed to just one of many religious and moral frameworks, you probably will not be judged harshly. So, by all means!
So according to Orthodox Judaism, one is not allowed to (even indirectly) cause a death, even when the alternative is considered worse. The standard example is if you're in a city and the "enemy" demands you hand over a specific person to be killed (unjustly), and says if you don't do so, they will destroy the whole city and everyone will die (including that person). The rule in that situation is that you aren't allowed to hand them over. Accepting that as an axiom, the trivial answer to the trolley situation is “don't do anything”. Maintain the status quo. You cannot cause a death, even though it will save ten other people.
Parfit's hitchhiker also appears trivial. It seems to assume I place no value on telling the truth. As I do, in fact, place a high utility on being truthful (based on Judaism), my saying "Yes" will translate into a truthful expression on my face and I will get the ride.
Note: I got the link from searching for "midvar sheker tirchak", which is the Bible's verse that says not to lie, roughly translated as "distance yourself from falsehood."
On another topic, if I think that it is the “One True Way”, but don't say that, is that OK?
Thank you, I appreciate your replies.
Hmm, I see. So, a clear and simple deontological rule. So, if you see your children being slaughtered in front of you, and all you need to do to save them and to kill the attacker is to press a button, you are not allowed to do it?
Also, does this mean that there cannot be Orthodox Jewish soldiers? If so, is this a recent development, given that ancient Hebrews fought and killed without a second thought? Or is there another reason why it was OK to kill your enemy in King David's time, but not now?
Right, ethical systems which value honesty absolutely have no difficulty with this. But
is this a utilitarian calculation or an absolute injunction, like in the previous case, where you are not allowed to kill, no matter what? Or is there some threshold of (dis)utility above which lying is OK? If so, what price demanded by the selfish driver would surely cause a good Orthodox Jewish hitchhiker to attempt to lie?
First, note that I do not represent LW in any way and often misjudge the reaction of others. But my guess would be that simply stating this is not an issue, but explicitly using this belief in an argument may result in downvoting. This community is mildly hypocritical in this regard, as people who push their transhumanist views here as "the best/objective/universal morality" (I am exaggerating) can get away with it, but what can you do.
I may not have given enough detail. The prohibition against killing is specifically innocent people. There is a death penalty for many crimes, including murder (although not as far as EY seems to think. He once said that the Bible gives the death penalty for crossdressing. Evidence suggests otherwise. But that's another topic.) So:
Assuming this attacker is the one killing or threatening to kill your kids, you are allowed to kill him (although you are supposed to try to injure them if killing isn't necessary to stop them). You wouldn't be allowed to kill someone else who is innocent, even to save many people.
I don't know if you're familiar with the current debate in Israel over the draft? It's not really related, though. Again, the fights of the “ancient Hebrews” were usually either to reclaim parts of Israel that belonged to them from the gentile nations inhabiting them, or to defend themselves against attackers. In either scenario, the “victims” weren't innocent. For some more info, see here, here, and here.
(By the way, I just saw this while looking up that last link, which (mostly) confirms what I said about the Trolley problem.)
I realized after I posted that answer yesterday that I could conceive of a case that would work for me, in the spirit of the Parfit's hitchhiker example. Namely, if I knew that when I got to town there would be someone whose life I could save, but only with $100 (also assuming that I've got only $100 cash total). That person's life would take precedence over telling the truth, and I wouldn't get the ride. There isn't any prior obligation I could take on that would override my concern for that person's life later.
What happens if instead of "causing" a death, you're doing something with some probability of causing a death? For instance, handing someone over to the enemy results in a 99% probability of them being killed by the enemy. What if it's only 10%? What if the enemy isn't going to kill him, but you need to drive through a war zone to give him the prisoner, and driving through the war zone results in a 10% chance of the person being killed? What if the enemy says that he's going to kill one person from his jail no matter what, and he puts the person in the same jail (so that instead of 1 person being killed out of 9 in the jail, 1 person is killed out of a group of 10 that includes the new person, thus increasing the chance this specific person is killed, but not increasing the number of people killed)?
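The arithmetic behind that last jail case can be made explicit. A minimal sketch, using the hypothetical numbers from the scenario (9 inmates, exactly one of whom will be killed regardless): handing the person over raises that specific person's chance of death from 0 to 1/10, while the expected number of deaths stays at exactly one.

```python
# Hypothetical numbers from the jail scenario above.
jail_before = 9   # inmates before the handover
killed = 1        # the enemy kills exactly one person, no matter what

# Before the handover: the requested person is outside the jail.
p_person_before = 0.0
p_each_inmate_before = killed / jail_before   # each inmate's risk: 1/9

# After the handover: 10 people in the jail, still exactly one killed.
jail_after = jail_before + 1
p_person_after = killed / jail_after          # the handed-over person's risk: 1/10
p_each_inmate_after = killed / jail_after     # each original inmate's risk drops to 1/10

# The expected number of deaths is unchanged in both cases.
expected_deaths_before = killed
expected_deaths_after = killed
```

So the handover increases one person's risk while slightly lowering everyone else's, with total expected deaths constant, which is exactly what makes this case harder to classify than the 99% or 10% versions.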
I think that a 99% probability would be the same as 100% for this purpose. A “doubt of death” is considered as strong as a definite death in general. In the war zone example, I think (with a little less confidence) a 10% would work the same. You simply don't take into account the potential benefits, when weighed against an action that you must do that will cause a death. On the other hand, the person being requested is allowed to sacrifice their own life (or a 10% chance of doing so) to save others. I'll have to think about your last case a little more.
Some high-profile physicists disagree, others agree. Very few believe in some sort of objective collapse these days, but some still do. This strange situation is possible because MWI is not a well-formed physical model but more of an inspirational ontological outlook.