Comment author: Lumifer 07 February 2016 07:05:45PM 0 points

Translation: it's better to be drunk.

Not sure this qualifies as a rationality quote.

Comment author: RichardKennaway 08 February 2016 04:52:03PM 0 points

Orthodox Islamic apologists rescue Khayyam by interpreting "wine" as spiritual intoxication. (How well this really fits is another matter. And the Song of Solomon is about Christ's love for His Church.) But one can as easily interpret the verse in a rationalist way. Channelling Fitzgerald for a moment...

The sot knows nothing but the tavern's wine
Rumi and Shams but ecstasy divine
The Way of Eli is not here nor there
But in the pursuit of a Fun sublime!

Great literature has as many versions as there are readers.

Comment author: Vaniver 05 February 2016 03:37:37PM 1 point

This is such a technologically good idea that it must happen within a few years.

So, early on, people were excited about machine translation--yeah, it wasn't great, but you could just have human translators start from the machine translation and fix the mess.

The human translators hated it, because it turned engaging intellectual work into painful copyediting. I think a similar thing will be true for article writers.

Comment author: RichardKennaway 05 February 2016 05:54:06PM 1 point

The human translators hated it, because it turned engaging intellectual work into painful copyediting. I think a similar thing will be true for article writers.

The talented ones, yes, but there will be a lot of temptation for the also-rans. You've got a blogging deadline and nothing is coming together, so why not fire up the bot and get topical article ideas? "It's just supplying facts and links, and the way it weaves them into a coherent structure, well, I could have done that, of course I could, but why keep a dog and bark myself? The real creative work is in the writing." That's how I see the slippery slope starting, into the Faustian pact.

Comment author: Lumifer 05 February 2016 04:12:00PM 1 point

I'm just saying it's so technologically cool, someone will do it as soon as it's possible.

Ahem. ELIZA, the chat bot, was made in the mid-1960s. And...:

Weizenbaum tells us that he was shocked by the experience of releasing ELIZA (also known as "Doctor") to the nontechnical staff at the MIT AI Lab. Secretaries and nontechnical administrative staff thought the machine was a "real" therapist, and spent hours revealing their personal problems to the program. When Weizenbaum informed his secretary that he, of course, had access to the logs of all the conversations, she reacted with outrage at this invasion of her privacy.

Comment author: RichardKennaway 05 February 2016 04:27:52PM 0 points

I'm aware of ELIZA, and of Yvain's post. ELIZA's very shallow, and the interactive setting gives it an easier job than coming up with 1000 words on "why to have goals" or "5 ways to be more productive". I do wonder whether some of the clickbait photo galleries are mechanically generated.
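
For a sense of just how shallow: the core trick is a handful of pattern-and-reflect rules. A minimal sketch in Python (illustrative only; these rules are invented, not Weizenbaum's actual DOCTOR script):

    import re

    # ELIZA-style trick: match a keyword pattern, swap first-person words
    # for second-person ones, and reflect the fragment back as a question.
    SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}

    def reflect(fragment):
        """Swap first-person words for second-person ones."""
        return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

    RULES = [
        (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]

    def respond(line):
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # stock reply when nothing matches

    print(respond("I am unhappy with my job"))
    # -> How long have you been unhappy with your job?

There is no model of the conversation anywhere in there, which is why the interactive setting does so much of the work for it.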

Comment author: Lumifer 05 February 2016 03:39:03PM * 2 points

Actually, that could be huge. Rationality blogs generated by bots! Self-improvement blogs generated by bots! Gosh-wow science writing generated by bots!

LOL. Wake up and smell the tea :-) People who want to push advertising into your eyeballs now routinely construct on-demand (as in, in response to a Google query) websites/blogs/etc. just so that you'll look at them and they'll get paid for ad impressions.

See e.g. recent Yvain:

EHealthMe’s business model is to make an automated program that runs through every single drug and every possible side effect, scrapes the FDA database for examples, then autopublishes an ad-filled web page titled “COULD $DRUG CAUSE $SIDE_EFFECT?”. It populates the page by spewing random FDA data all over it, concludes “$SIDE_EFFECT is found among people who take $DRUG”, and offers a link to a support group for $DRUG patients suffering from $SIDE_EFFECT. Needless to say, the support group is an automatically-generated forum with no posts in it.
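
For illustration, the whole pipeline described above is a few lines of templating. A hypothetical sketch (the template text, file naming, and the idea of pre-scraped report counts are my assumptions, not EHealthMe's actual code):

    from itertools import product

    # Hypothetical sketch of the autopublishing pipeline described above:
    # pair every drug with every side effect, pour scraped numbers into a
    # boilerplate template, and emit one ad-ready page per combination.
    PAGE_TEMPLATE = """\
    <html><head><title>COULD {drug} CAUSE {effect}?</title></head>
    <body>
      <!-- ad slots here -->
      <p>{effect} is found among people who take {drug}.</p>
      <p>Based on {reports} reports scraped from the FDA adverse-event data.</p>
      <a href="/forum/{drug}-{effect}">Support group for {drug} patients</a>
    </body></html>
    """

    def build_pages(drugs, effects, report_counts):
        """Yield (filename, html) for every drug/side-effect combination."""
        for drug, effect in product(drugs, effects):
            html = PAGE_TEMPLATE.format(
                drug=drug,
                effect=effect,
                reports=report_counts.get((drug, effect), 0),
            )
            # Naive filenames; real slugs would need escaping.
            yield ("could-{}-cause-{}.html".format(drug, effect), html)

Point something like that at a scraped database and a cron job, and you get tens of thousands of "pages" for the cost of one template, which is the whole business model.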

Now, you say you want to turn this to the light side..?

Comment author: RichardKennaway 05 February 2016 04:03:20PM 1 point

Now, you say you want to turn this to the light side..?

I'm just saying it's so technologically cool, someone will do it as soon as it's possible. Whether it would actually be good in the larger scheme of things is quite another matter. I can see an arms race developing between drones rewriting bot-written copy and exposers of the same, together with scandals of well-known star bloggers discovered to be using mechanical assistance from time to time. There would be a furious debate over whether using a bot is actually a legitimate form of writing. All very much like drugs in sport.

Bot-assisted writing may make the traditional essay useless as a way of assessing students, perhaps to be replaced by oral exams in a Faraday cage. On Facebook, how will you know whether your friends' witticisms are their own work, especially the ones you've never been face to face with?

In response to Upcoming LW Changes
Comment author: Lumifer 03 February 2016 08:36:30PM 1 point

On a tangential note: it would be cute for LW to acquire a collection of resident chat bots, preferably ones which could be dissected and rewired by all and sundry. Erecting defences against chat bots run amok would also be enlightening :-)

Comment author: RichardKennaway 05 February 2016 09:40:06AM 3 points

Watson can already philosophize at you from TED talks. Someone needs to develop a chat bot based on it, and have it learn from the Sequences.

Actually, that could be huge. Rationality blogs generated by bots! Self-improvement blogs generated by bots! Gosh-wow science writing generated by bots!

At present, most bot-written books are pretty obviously junk, but instead of going for volume and long tails, you could hire human editors to make the words read more as if a human being had originated them. They'd need a good command of English, though, so the stereotypical outsourcing to Bangalore wouldn't be good enough. Ideally, you'd want people who were not just native speakers, but native to American culture, smart, familiar with the general area of the ideas, and good with words. Existing bloggers, that is. Offer it to them as a research tool: it would supply a blogger with a stream of article outlines, and the blogger would polish them up. Students with essays to write could use it as well, and since every essay would be different, you wouldn't be able to detect that it wasn't the student's work by googling phrases from it.

This is such a technologically good idea that it must happen within a few years.

Comment author: RichardKennaway 02 February 2016 03:10:20PM 0 points

Suppose Alice and Bob are the same person. Alice tosses a coin a large number of times and records the results.

Should she disbelieve what she reads?

Comment author: IlyaShpitser 02 February 2016 03:57:55AM 2 points

I apologize if I caused you any distress, that was not my intention.

Comment author: RichardKennaway 02 February 2016 02:49:26PM 0 points
Comment author: InhalingExhaler 01 February 2016 05:04:52PM 0 points

Well, it sounds right. But what rationality mistake was made in the situation described, and how can it be fixed? My first idea was that there are things we shouldn't doubt... But that is kind of dogmatic and feels wrong. So maybe it should be something like: "Before doubting X, think about what you will become if you succeed, and take that into consideration before actually trying to doubt X". But this still implies "There are cases when you shouldn't doubt", which is still suspicious and doesn't sound "rational". I mean, it doesn't sound like making the map reflect the territory.

Comment author: RichardKennaway 01 February 2016 10:37:12PM 0 points

It's like repairing the foundations of a building. You can't uproot all of them, but you can uproot any of them, as long as you take care that the building doesn't fall down during renovations.

Comment author: InhalingExhaler 31 January 2016 06:36:58PM * 4 points

I found LessWrong after reading HPMoR. I think I woke up as a rationalist when I realised that in my everyday reasoning I always judged from the bottom line, never considering any third alternatives, and started to think about what to do about that. I am currently trying to stop my mind from aimlessly and uselessly wandering from one topic to another. I registered on LessWrong after I started to question why I believe rationality works, ran into a problem, and thought I could get some help here. The problem is expressed in the following text (I am ready to move it from the welcome board to any other suitable one if needed):

John was reading a book called “Rationality: From AI to Zombies” and thought: “Well, I am advised to doubt my beliefs, as some of them may turn out to be wrong.” So it occurred to John to try to doubt the following statement: “Extraordinary claims require extraordinary evidence.” But that was impossible to doubt, as the statement was a straightforward implication of theorem X of probability theory, which John, as a mathematician, knew to be correct. After a while a wild thought ran through his mind: “What if, every time a person looks at the proof of theorem X, the Dark Lords of the Matrix alter the person's perception to make the proof look correct, when actually there is a mistake in it and the theorem is incorrect?” But John didn't even consider the idea seriously, because such an extraordinary claim would definitely require extraordinary evidence.
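
(For concreteness: assuming "theorem X" stands in for something like Bayes' theorem, as the story suggests, its odds form makes the maxim explicit. A sketch, with H the hypothesis and E the evidence:

    \[
      \frac{P(H \mid E)}{P(\lnot H \mid E)}
      = \frac{P(E \mid H)}{P(E \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}
    \]

If the prior odds P(H)/P(¬H) are tiny, i.e. the claim is extraordinary, the likelihood ratio P(E|H)/P(E|¬H) must be huge, i.e. the evidence is extraordinary, before the posterior odds become appreciable.)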

Fifteen minutes later, John spontaneously considered the following hypothetical situation: he visualized a religious person, Jane, who is reading a book called “Rationality: From AI to Zombies”. After reading for some time, Jane thinks she should try to doubt her belief in Zeus. But that is definitely an impossible action, as the existence of Zeus is confirmed in the Sacred Book of Lightning, which, as Jane knows, contains only Ultimate and Absolute Truth. After a while a wild thought runs through her mind: “What if the Sacred Book of Lightning actually consists of lies?” But Jane doesn't even consider the idea seriously, because the Book was surely written by Zeus himself, who never lies.

From this hypothetical situation John concluded that if he couldn't doubt B because he believed A, and couldn't doubt A because he believed B, he had better try to doubt A and B simultaneously, as he would be cheating otherwise. So he attempted to simultaneously doubt the statements “Extraordinary claims require extraordinary evidence” and “Theorem X is proved correctly”.

As he attempted this, and succeeded, he spent some more time considering Jane's position before settling his doubt. Jane justifies her set of beliefs by Faith. Faith is certainly an implication of her beliefs (the ones about the reliability of the Sacred Book), and Faith certainly belongs to the meta-level of her thinking, affecting her ideas about the existence of Zeus at the object level.

So John generalized that if some meta-level process controlled his thoughts, and that process was implied by the very thought he was currently doubting, it would be wise to suspend the process for the duration of the doubting. Not following this rule could leave him holding beliefs which, from the outside, looked as ridiculous as Jane's religion. John searched through the meta-level controlling his thoughts. He was horrified to realize that Bayesian reasoning itself fit the criteria: it was definitely organizing his thought process, and its correctness was implied by the theorem X he was currently doubting. So he sat there, with his belief unsettled and no idea how to settle it correctly. After all, even if he came up with an idea, how could he know it wasn't the worst idea ever, intentionally given to him by the Dark Lords of the Matrix? He didn't allow himself to dismiss this nonsense with “Extraordinary claims require extraordinary evidence” – otherwise he would fail to doubt that very statement, and there would be no point in this whole crisis of faith which he had deliberately inflicted on himself…

Jane, in whose imagination the whole story took place, yawned and closed the book called “Rationality: From AI to Zombies” lying in front of her. If learning rationality was going to make her doubt herself out of rationality, why would she even bother to try? She was comfortable with her belief in Zeus, and the only theory which could point out her mistakes apparently ended in self-annihilation. Or, in short: who would believe anyone saying “We have evidence that considering evidence leads you to truth, therefore it is true that considering evidence leads you to truth”?

Comment author: RichardKennaway 01 February 2016 11:58:42AM 2 points

Welcome to Less Wrong!

My short answer to the conundrum is that if the first thing your tool does is destroy itself, the tool is defective. That doesn't make "rationality" defective any more than crashing your first attempt at building a car implies that "The Car" is defective.

Designing foundations for human intelligence is rather like designing foundations for artificial (general) intelligence in this respect. (I don't know if you've looked at The Sequences yet, but it has a lot of material on the common fallacies the latter enterprise has often fallen into, fallacies that apply to everyday thinking as well.) That people, on the whole, do not go crazy — at least, not as crazy as the tool that blows itself up as soon as you turn it on — is a proof by example that not going crazy is possible. If your hypothetical system of thought immediately goes crazy, the design is wrong. The idea is to do better at thinking than the general run of what we can see around us. Again, we have a proof by example that this is possible: some people do think better than the general run.

Comment author: EGarrett 31 January 2016 12:16:29PM * 1 point

He didn't just mass-downvote. He purposefully attempted to remove other contributing members from the community. He also did not confess to it, indicating both dishonesty and an awareness that his actions were unacceptable. He also used multiple accounts, and still does, posting absolutely disgusting and logic-free racial comments and trolling (referring to black scientists as "dancing bears"; you're welcome to demonstrate what's rational or constructive about that).

You don't just undo those actions; you punish the person who takes part in them in order to deter such actions in the future, so that there can be civil discourse going forward. This is rational, and a standard part of human social requirements.

Comment author: RichardKennaway 31 January 2016 05:12:26PM 0 points

He purposefully attempted to remove other contributing members from the community. He also did not confess to it

Never publicly, but I believe that (when he was posting as "Eugine Nier") a moderator did question him privately about it and he said that was his intention.
