Comment author: InhalingExhaler 31 January 2016 06:36:58PM *  4 points [-]

Hello.

I found LessWrong after reading HPMoR. I think I woke up as a rationalist when I realised that in my everyday reasoning I always judged from the bottom line without considering any third alternatives, and started to think about what to do about that. I am currently trying to stop my mind from aimlessly and uselessly wandering from one topic to another. I registered on LessWrong after I started to question why I believe rationality works, ran into a problem, and thought I could get some help here. The problem is expressed in the following text (I am ready to move it from the welcome board to any other suitable one if needed):

John was reading a book called “Rationality: From AI to Zombies” and thought: “Well, I am advised to doubt my beliefs, as some of them may turn out to be wrong.” So it occurred to John to try to doubt the following statement: “Extraordinary claims require extraordinary evidence.” But that was impossible to doubt, as the statement was a straightforward implication of theorem X of probability theory, which John, as a mathematician, knew to be correct. After a while a wild thought ran through his mind: “What if, every time a person looks at the proof of theorem X, the Dark Lords of the Matrix alter that person’s perception to make the proof look correct, when actually there is a mistake in it and the theorem is false?” But John didn’t even consider that idea seriously, because such an extraordinary claim would definitely require extraordinary evidence.
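(As an aside: the implication John has in mind is usually derived from the odds form of Bayes’ theorem. A minimal sketch, with purely illustrative numbers — the function and figures below are not from any particular text:)

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# An "extraordinary claim" is one with very low prior odds; to raise its
# posterior odds to something credible, the evidence must carry a
# correspondingly extraordinary likelihood ratio.

def posterior_odds(prior_odds, likelihood_ratio):
    """Update the odds in favor of a hypothesis given one piece of evidence."""
    return prior_odds * likelihood_ratio

# Ordinary claim: prior odds 1:1. Modest evidence (likelihood ratio 10)
# already makes it quite believable.
print(posterior_odds(1.0, 10.0))   # 10.0, i.e. odds of 10:1 in favor

# Extraordinary claim: prior odds 1:1,000,000. The same modest evidence
# leaves it overwhelmingly improbable (odds of about 1:100,000).
print(posterior_odds(1e-6, 10.0))

# Only extraordinary evidence (likelihood ratio on the order of 10^7)
# brings such a claim up to credible odds.
print(posterior_odds(1e-6, 1e7))
```

The same arithmetic is why John cannot take the Dark-Lords hypothesis seriously: its prior odds are tiny, and he has no evidence for it at all, let alone evidence with an enormous likelihood ratio.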

Fifteen minutes later, John spontaneously considered the following hypothetical situation: he visualized a religious person, Jane, who is reading a book called “Rationality: From AI to Zombies”. After reading for some time, Jane thinks that she should try to doubt her belief in Zeus. But that is definitely an impossible action, as the existence of Zeus is confirmed in the Sacred Book of Lightning, which, as Jane knows, contains only Ultimate and Absolute Truth. After a while a wild thought runs through her mind: “What if the Sacred Book of Lightning actually consists of lies?” But Jane doesn’t even consider the idea seriously, because the Book is surely written by Zeus himself, who never lies.

From this hypothetical situation John concluded that if he couldn’t doubt B because he believed A, and couldn’t doubt A because he believed B, he’d better try to doubt A and B simultaneously, as he would be cheating otherwise. So he attempted to simultaneously doubt the statements “Extraordinary claims require extraordinary evidence” and “Theorem X is proved correctly”.

Having attempted this, and succeeded, he spent some more time considering Jane’s position before settling his doubt. Jane justifies her set of beliefs by Faith. Faith is certainly an implication of her beliefs (the ones about the reliability of the Sacred Book), and Faith certainly belongs to the meta level of her thinking, affecting her ideas about the existence of Zeus at the object level.

So John generalized: if some meta-level process controlled his thoughts, and that very process was implied by the thought he was currently doubting, it would be wise to suspend the process for the duration of the doubting. Not following this rule could leave him holding on to beliefs which, from the outside, looked as ridiculous as Jane’s religion. John searched through the meta level controlling his thoughts. He was horrified to realize that Bayesian reasoning itself fit the criteria: it was definitely organizing his thought process, and its correctness was implied by the theorem X he was currently doubting. So he sat there, his belief unsettled, with no idea of how to settle it correctly. After all, even if he came up with an idea, how could he know it wasn’t the worst idea ever, intentionally given to him by the Dark Lords of the Matrix? He didn’t allow himself to dismiss this nonsense with “Extraordinary claims require extraordinary evidence”, since otherwise he would fail to doubt that very statement and there would be no point in this whole crisis of faith which he had deliberately inflicted on himself…

Jane, in whose imagination the whole story took place, yawned and closed the book called “Rationality: From AI to Zombies” lying in front of her. If learning rationality was going to make her doubt herself out of rationality, why would she even bother to try? She was comfortable with her belief in Zeus, and the only theory which could point out her mistakes had apparently ended in self-annihilation. Or, in short: who would believe anyone saying “We have evidence that considering evidence leads you to truth, therefore it is true that considering evidence leads you to truth”?

Comment author: RichardKennaway 01 February 2016 11:58:42AM 2 points [-]

Welcome to Less Wrong!

My short answer to the conundrum is that if the first thing your tool does is destroy itself, the tool is defective. That doesn't make "rationality" defective any more than crashing your first attempt at building a car implies that "The Car" is defective.

Designing foundations for human intelligence is rather like designing foundations for artificial (general) intelligence in this respect. (I don't know if you've looked at The Sequences yet, but it has a lot of material on the common fallacies the latter enterprise has often fallen into, fallacies that apply to everyday thinking as well.) That people, on the whole, do not go crazy — at least, not as crazy as the tool that blows itself up as soon as you turn it on — is a proof by example that not going crazy is possible. If your hypothetical system of thought immediately goes crazy, the design is wrong. The idea is to do better at thinking than the general run of what we can see around us. Again, we have a proof by example that this is possible: some people do think better than the general run.

Comment author: skeptical_lurker 31 January 2016 06:13:04PM *  4 points [-]

I think an interesting aspect here is that Eugine/Azeroth/Ra/Lion is a neoreactionary and believes in a strong hierarchy. Well, the mods are above you in the hierarchy, so respect their authority!

How can one worry about a mod abusing their power in an online forum, and yet not worry about a monarch abusing their far greater power in a monarchy?

Comment author: RichardKennaway 01 February 2016 11:28:36AM 1 point [-]

I think an interesting aspect here is that Eugine/Azeroth/Ra/Lion is a neoreactionary and believes in a strong hierarchy. Well, the mods are above you in the hierarchy, so respect their authority!

OTOH, an NRx might argue that the strength of the authority must be continually tested by fighting it. Their ideal society is a struggle of all against all, all the time. Respect is but the acknowledgement of another's greater power, to be granted only for so long as they actually have it, and only to their face, as a polite ritual. They would argue that this is the essential nature of all society, and that only the weak pretend otherwise, the weak being everyone but them and their heroes from history. The strong do what they will and the weak bear what they must. Strength is the only real virtue, all others being but idle amusements of the leisure that only strength can provide.

Comment author: EGarrett 31 January 2016 12:16:29PM *  1 point [-]

He didn't just mass-downvote. He purposefully attempted to remove other contributing members from the community. He also did not confess to it, indicating both dishonesty and that he was aware his actions were unacceptable. He also multi-accounted, and still does, and posts absolutely disgusting and logic-free racial comments and trolling (comparing black scientists to "dancing bears"; you're welcome to demonstrate what's rational or constructive about that).

You don't just undo those actions; you punish the person who took part in them, in order to deter such actions in the future and so that there can be civil discourse going forward. This is rational and a standard part of human social requirements.

Comment author: RichardKennaway 31 January 2016 05:12:26PM 0 points [-]

He purposefully attempted to remove other contributing members from the community. He also did not confess to it

Never publicly, but I believe that (when he was posting as "Eugine Nier") a moderator did question him privately about it and he said that was his intention.

Comment author: rpmcruz 30 January 2016 04:55:58PM *  2 points [-]

I am new here. But what about just disabling downvoting? Good comments will be voted up, bad comments will receive no votes at all and will rot at the bottom. Why remove them?

Possibly, you could have a "report" button to ask a moderator to review a very offensive comment.

Comment author: RichardKennaway 30 January 2016 09:01:07PM 2 points [-]

Possibly, you could have a "report" button to ask a moderator to review a very offensive comment.

I believe there used to be one, but it went away some years ago. I don't know why. Maybe it was being abused, or was found to just not be useful.

Comment author: IlyaShpitser 30 January 2016 05:30:33PM 0 points [-]

I think comparing Harvard to a research group is a type error, though. Research groups don't typically do this. I am not going to defend Unis shaking alums down for money, especially given what they do with it.

Comment author: RichardKennaway 30 January 2016 08:59:09PM 0 points [-]

Research groups don't typically do this.

In my experience, research groups exist inside universities or a few corporations like Google. The senior members are employed and paid for by the institution, and only the postgrads, postdocs, and equipment beyond basic infrastructure are funded by research grants. None of them fly "in orbit" by themselves but only as part of a larger entity. Where should an independent research group like MIRI seek permanent funding?

Comment author: bogus 30 January 2016 07:54:50PM *  1 point [-]

Whereas he already "knows" that the truth is NRx, and what he calls "rationality" is simply whatever path you take to go from your current beliefs to NRx.

I assume that NRx does contain some genuine insight about the real world, even though some or perhaps even most of it may be quite wrong. Anyway, there are a lot of folks out there who have made up their minds already and are not going to be convinced otherwise, much like Eugine or The_Lion or whoever. LW is resilient enough to deal with such people, and indeed this is a key requirement if it is to be successful.

Comment author: RichardKennaway 30 January 2016 08:45:16PM 0 points [-]

I assume that NRx does contain some genuine insight about the real world, even though some or perhaps even most of it may be quite wrong.

For me, that is far too low a bar for getting my interest.

Comment author: Fluttershy 28 January 2016 10:55:30AM 0 points [-]

Oops. I've tried to clarify that he's only interested in FAI research, not AI research on the whole.

Comment author: RichardKennaway 28 January 2016 11:41:56AM 1 point [-]

FAI is only a problem because of AI. The imminence of the problem depends on where AI is now and how rapidly it is progressing. To know these things, one must know how AI (real, current and past AI, not future, hypothetical AI, still less speculative, magical AI) is done, and to know this in technical terms, not fluff.

I don't know how much your friend knows already, but perhaps a crash course in Russell and Norvig, plus technical papers on developments since then (e.g. deep learning), would be appropriate.

Comment author: Fluttershy 28 January 2016 10:07:24AM *  2 points [-]

I'm trying to help a dear friend who would like to work on FAI research, to overcome a strong fear that arises when thinking about unfavorable outcomes involving AI. Thinking about either the possibility that he'll die, or the possibility that an x-risk like UFAI will wipe us out, tends to strongly trigger him, leaving him depressed, scared, and sad. Just reading the recent LW article about how a computer beat a professional Go player triggered him quite strongly.

I've suggested trying to desensitize him via gradual exposure; the approach would be similar to the way in which people who are afraid of snakes can lose their fear of snakes by handling rope (which looks like a snake) until handling rope is no longer scary, and then looking at pictures of snakes until such pictures are no longer scary, and then finally handling a snake when they are ready. However, we've been struggling to think of what a sufficiently easy and non-scary first step might be for my friend; everything I've come up with as a first step akin to handling rope has been too scary for him to want to attempt so far.

I don't think that I'll even be able to convince my friend that desensitization training will be worth it at all--he's afraid that the training might trigger him, and leave him in a depression too deep for him to climb out of. At the same time, he's so incredibly nice, and he really wants to help with FAI research, and maybe even work for MIRI in the "unlikely" (according to him) event that he is able to overcome his fears. Are there reasonable alternatives to, say, desensitization therapy? Are there any really easy and non-scary first steps he might be okay with trying if he can be convinced to try desensitization therapy? Is there any other advice that might be helpful to him?

Comment author: RichardKennaway 28 January 2016 11:27:46AM 2 points [-]

He sounds like someone with a phobia of fire wanting to be a fireman. Why does he want to work on FAI? Would not going anywhere near the subject work for him instead?

Comment author: Clarity 27 January 2016 12:20:08PM 0 points [-]

How true is the proverb "To break a habit you must make a habit"?

Comment author: RichardKennaway 27 January 2016 01:03:00PM 0 points [-]

Was that "How true?" or "How true!"?

I think it is true, with the proviso that the habit to make can be the habit of noticing when the old habit is about to happen and not letting it.

Comment author: username2 25 January 2016 07:36:31PM 0 points [-]

Given that Eugine very likely will be able to get around an IP ban, I wonder if it is legally possible for MIRI to take out a restraining order that prevents him from posting to Less Wrong? This will of course only be possible if we can discover his real identity.

Comment author: RichardKennaway 25 January 2016 07:58:33PM 1 point [-]

That would be an absurd overreaction. I can't see the law taking the matter seriously, even if anyone knew "Eugine's" real identity.
