Comment author: RichardKennaway 08 February 2016 06:27:06PM -3 points [-]

Well, that wraps it up. This post, and some of the asinine comments to it, have persuaded me that I have no further use for this site.

Comment author: Lumifer 07 February 2016 07:05:45PM 0 points [-]

Translation: it's better to be drunk.

Not sure this qualifies as a rationality quote.

Comment author: RichardKennaway 08 February 2016 04:52:03PM 0 points [-]

Orthodox Islamic apologists rescue Khayyam by interpreting "wine" as spiritual intoxication. (How well this really fits is another matter. And the Song of Solomon is about Christ's love for His Church.) But one can as easily interpret the verse in a rationalist way. Channelling Fitzgerald for a moment...

The sot knows nothing but the tavern's wine
Rumi and Shams but ecstasy divine
The Way of Eli is not here nor there
But in the pursuit of a Fun sublime!

Great literature has as many versions as there are readers.

Comment author: Vaniver 05 February 2016 03:37:37PM 1 point [-]

This is such a technologically good idea that it must happen within a few years.

So, early on people were excited about machine translation--yeah, it wasn't great, but you could just have human translators start from the machine translation and fix the mess.

The human translators hated it, because it moved from engaging intellectual work to painful copyediting. I think a similar thing will be true for article writers.

Comment author: RichardKennaway 05 February 2016 05:54:06PM 1 point [-]

The human translators hated it, because it moved from engaging intellectual work to painful copyediting. I think a similar thing will be true for article writers.

The talented ones, yes, but there will be a lot of temptation for the also-rans. You've got a blogging deadline and nothing is coming together; why not fire up the bot and get topical article ideas? "It's just supplying facts and links, and the way it weaves them into a coherent structure, well I could have done that, of course I could, but why keep a dog and bark myself? The real creative work is in the writing." That's how I see the slippery slope starting, into the Faustian pact.

Comment author: Lumifer 05 February 2016 04:12:00PM 1 point [-]

I'm just saying it's so technologically cool, someone will do it as soon as it's possible.

Ahem. ELIZA, the chat bot, was made in the mid-1960s. And...:

Weizenbaum tells us that he was shocked by the experience of releasing ELIZA (also known as "Doctor") to the nontechnical staff at the MIT AI Lab. Secretaries and nontechnical administrative staff thought the machine was a "real" therapist, and spent hours revealing their personal problems to the program. When Weizenbaum informed his secretary that he, of course, had access to the logs of all the conversations, she reacted with outrage at this invasion of her privacy.
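For readers who haven't seen how little machinery was behind it: ELIZA worked by shallow keyword matching and canned response templates. A toy sketch of the idea in Python (the rules and wording here are invented for illustration, not Weizenbaum's actual DOCTOR script, which used a more elaborate keyword-ranking scheme):

```python
import random
import re

# Ordered (pattern, response-templates) rules; first match wins.
# The catch-all at the end guarantees some reply.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r".*", ["Please go on.", "I see."]),
]

# Swap first and second person so echoed fragments read naturally.
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

def respond(line: str, rng=random.Random(0)) -> str:
    text = line.lower().strip().rstrip(".!?")
    for pattern, responses in RULES:
        m = re.match(pattern, text)
        if m:
            template = rng.choice(responses)
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."
```

So `respond("I feel sad about my job")` echoes back a question containing "sad about your job" with no understanding involved, which is the whole trick the secretaries fell for.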

Comment author: RichardKennaway 05 February 2016 04:27:52PM 0 points [-]

I'm aware of ELIZA, and of Yvain's post. ELIZA's very shallow, and the interactive setting gives it an easier job than coming up with 1000 words on "why to have goals" or "5 ways to be more productive". I do wonder whether some of the clickbait photo galleries are mechanically generated.

Comment author: Lumifer 05 February 2016 03:39:03PM *  2 points [-]

Actually, that could be huge. Rationality blogs generated by bots! Self-improvement blogs generated by bots! Gosh-wow science writing generated by bots!

LOL. Wake up and smell the tea :-) People who want to push advertising into your eyeballs now routinely construct on-demand (as in, in response to a Google query) websites/blogs/etc. just so that you'd look at them and they get paid for ad impressions.

See e.g. recent Yvain:

EHealthMe’s business model is to make an automated program that runs through every single drug and every possible side effect, scrapes the FDA database for examples, then autopublishes an ad-filled web page titled “COULD $DRUG CAUSE $SIDEEFFECT?”. It populates the page by spewing random FDA data all over it, concludes “$SIDEEFFECT is found among people who take $DRUG”, and offers a link to a support group for $DRUG patients suffering from $SIDE_EFFECT. Needless to say, the support group is an automatically-generated forum with no posts in it.
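The process Yvain describes is mechanically trivial: a cross-product over drugs and side effects, a lookup in the scraped database, and a fill-in-the-blanks template. A minimal sketch (all names and counts here are invented stand-ins, not real FDA data):

```python
from itertools import product

# Stand-in for the scraped FDA adverse-event counts: (drug, effect) -> reports.
FAKE_FDA_REPORTS = {
    ("ExampleDrug", "headache"): 12,
    ("ExampleDrug", "insomnia"): 3,
}

PAGE_TEMPLATE = (
    "COULD {drug} CAUSE {effect}?\n"
    "{count} people reported {effect} while taking {drug}.\n"
    "{effect} is found among people who take {drug}.\n"
    "Join our {drug} support group for {effect} sufferers."
)

def autopublish(drugs, effects):
    """Yield one templated page per (drug, effect) pair, EHealthMe-style."""
    for drug, effect in product(drugs, effects):
        count = FAKE_FDA_REPORTS.get((drug, effect), 0)
        yield PAGE_TEMPLATE.format(drug=drug, effect=effect, count=count)

pages = list(autopublish(["ExampleDrug"], ["headache", "insomnia"]))
```

A few dozen lines, and every drug-effect pair in the database becomes an ad-bearing page, empty support forum included.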

Now, you say you want to turn this to the light side..?

Comment author: RichardKennaway 05 February 2016 04:03:20PM 1 point [-]

Now, you say you want to turn this to the light side..?

I'm just saying it's so technologically cool, someone will do it as soon as it's possible. Whether it would actually be good in the larger scheme of things is quite another matter. I can see an arms race developing between drones rewriting bot-written copy and exposers of the same, together with scandals of well-known star bloggers discovered to be using mechanical assistance from time to time. There would be a furious debate over whether using a bot is actually a legitimate form of writing. All very much like drugs and sport.

Bot-assisted writing may make the traditional essay useless as a way of assessing students, perhaps to be replaced by oral exams in a Faraday cage. On Facebook, how will you know whether your friends' witticisms are their own work, especially the ones you've never been face to face with?

In response to Upcoming LW Changes
Comment author: Lumifer 03 February 2016 08:36:30PM 1 point [-]

On a tangential note: it would be cute for LW to acquire a collection of resident chat bots, preferably ones which could be dissected and rewired by all and sundry. Erecting defences against chat bots run amok would also be enlightening :-)

Comment author: RichardKennaway 05 February 2016 09:40:06AM 3 points [-]

Watson can already philosophize at you from TED talks. Someone needs to develop a chat bot based on it, and have it learn from the Sequences.

Actually, that could be huge. Rationality blogs generated by bots! Self-improvement blogs generated by bots! Gosh-wow science writing generated by bots!

At present, most bot-written books are pretty obviously junk, but instead of going for volume and long tails, you could hire human editors to make the words read more as if they were originated by a human being. They'd have to have a good command of English, though, so the stereotypical outsourcing to Bangalore wouldn't be good enough. Ideally, you'd want people who were not just native speakers, but native to American culture, smart, familiar with the general area of the ideas, and good with words. Existing bloggers, that is. Offer this to them as a research tool. It would supply a blogger with a stream of article outlines and the blogger would polish them up. Students with essays to write could use it as well, and since every essay would be different, you wouldn't be able to detect it wasn't the student's work by googling phrases from it.

This is such a technologically good idea that it must happen within a few years.

Comment author: RichardKennaway 02 February 2016 03:10:20PM 0 points [-]

Suppose Alice and Bob are the same person. Alice tosses a coin a large number of times and records the results.

Should she disbelieve what she reads?

Comment author: IlyaShpitser 02 February 2016 03:57:55AM 2 points [-]

I apologize if I caused you any distress, that was not my intention.

Comment author: RichardKennaway 02 February 2016 02:49:26PM 0 points [-]

Comment author: InhalingExhaler 01 February 2016 05:04:52PM 0 points [-]

Well, it sounds right. But what rationality mistake was made in the situation described, and how could it be avoided? My first idea was that there are things we shouldn't doubt... but that is kind of dogmatic and feels wrong. So maybe it should be: "Before doubting X, think about what you will become if you succeed, and take that into consideration before actually trying to doubt X." But this still implies "there are cases when you shouldn't doubt", which is still suspicious and doesn't sound "rational" — I mean, it doesn't sound like making the map reflect the territory.

Comment author: RichardKennaway 01 February 2016 10:37:12PM 0 points [-]

It's like repairing the foundations of a building. You can't uproot all of them, but you can uproot any of them, as long as you take care that the building doesn't fall down during renovations.

Comment author: skeptical_lurker 01 February 2016 05:36:59PM 0 points [-]

I don't think most nrxers do believe this, and one who did certainly would be a hypocrite to accuse a mod of abusing their power - if there is no morality but the will to power, then how could a mod, or anyone else, abuse their power?

Comment author: RichardKennaway 01 February 2016 10:33:20PM 1 point [-]

if there is no morality but the will to power, then how could a mod, or anyone else, abuse their power?

Accusations of abuse would simply be a move in the power struggle. Nothing is true, all is a lie.

I don't think most nrxers do believe this

I am extrapolating outrageously, of course. Or, to continue in this vein, those that don't believe this are merely fellow-travellers and wannabe nrxs, beta foot-soldiers to be exploited by Those Who Know the truths that lesser beings fear, hide from, and hide from themselves the fact that they are hiding.
