
Comment author: mindreadings 21 March 2017 07:03:31PM 1 point [-]

Good. The experiment is, however, very good evidence for the hypothesis that R.S. Marken is a crank, and explains the quote from his farewell speech that didn't make sense to me before:

I can be a pretty cranky fellow, but I think there might be better evidence of that than the model-fitting effort you refer to. The "experiment" that you find to be poor evidence for PCT comes from a paper published in the journal Ergonomics that describes a control theory model that can be used as a framework for understanding the causes of error in skilled performance, such as writing prescriptions. The fit of the model to the error data in Table 1 is meant to show that such a control model can produce results that mimic some existing data on error rates (and without using more free parameters than data points; there are 4 free parameters and 4 data points; the fit of the model is, indeed, very good but not perfect).
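
(For readers wondering why the parameter count matters at all: a model with as many free parameters as data points can usually be tuned to reproduce them exactly, which is why a good fit by itself is weak evidence. A generic illustration in Python, with made-up numbers and nothing to do with the control model itself:)

```python
import numpy as np

# Any 4 data points (made-up numbers, purely illustrative).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.003, 0.011, 0.020, 0.041])   # e.g. error rates

# A cubic polynomial has 4 coefficients -- 4 free parameters for
# 4 data points -- and therefore interpolates them exactly.
coeffs = np.polyfit(x, y, deg=3)
print(np.allclose(np.polyval(coeffs, x), y))  # True: a "perfect" fit
```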

But the point of the model-fitting exercise was simply to show that the control model provides a plausible explanation of why errors in skilled performance might occur at particular (very low) rates. The model-fitting exercise was not done to impress people with how well the control model fits the data relative to other models since, to my knowledge, there are no comparable models of error against which to compare the fit. As I said in the introduction to the paper, existing models of error (which are really just verbal descriptions of why error occurs) "tell us the factors that might lead to error, but they do not tell us why these factors produce an error only rarely."

So if it's the degree of fit to the data that you are looking for as evidence of the merits of PCT, then this paper is not necessarily a good reference for that. Actually, a good example of the kind of fit to data you can get with PCT can be gleaned from doing one of the on-line control demos at my Mind Readings site, particularly the Tracking Task. When you become skilled at doing this task you will find that the correlation between the PCT model (called "Model" in the graphic display at the end of each trial) and your behavior will be close to one. And this is achieved using a model with no free parameters at all; the parameters are ones that have worked for many different individuals, and they are now simply constants in the model.
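
The idea behind the model is a simple negative-feedback loop: the controller acts to keep a perceived quantity (the cursor-target distance) at a reference value. Here is a rough sketch of it in Python (a toy version for illustration only, not the demo's actual code; the gain, disturbance, and target values are made up):

```python
import numpy as np

dt = 0.01
k = 50.0                                   # loop gain (illustrative)
rng = np.random.default_rng(0)

t = np.arange(0.0, 60.0, dt)
target = np.sin(0.5 * t)                   # target movement
disturbance = np.cumsum(rng.normal(0.0, 0.02, t.size))  # drifting push on the cursor

output = np.zeros(t.size)                  # the model's "mouse" position
cursor = np.zeros(t.size)
for i in range(1, t.size):
    cursor[i - 1] = output[i - 1] + disturbance[i - 1]  # cursor = output + disturbance
    perception = cursor[i - 1] - target[i - 1]          # perceived distance from target
    error = 0.0 - perception                            # reference is zero distance
    output[i] = output[i - 1] + k * error * dt          # integrating output function

# In the real demo, `output` would be compared with a person's recorded
# mouse trajectory; correlations near 1.0 are the claim at issue.
```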

Oh, and if you are looking for examples of things PCT can do that other models can't do, try the Mind Reading demo, where the computer uses a methodology based on PCT, called the Test for the Controlled Variable, to tell which of three avatars -- all three of which are being moved by your mouse movements -- is the one being moved intentionally.
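
The logic of the test is easy to sketch: each avatar's position is your mouse position plus its own independent disturbance, and the avatar you are moving intentionally is the one whose disturbance your mouse movements end up cancelling, so its position is the stable one. A toy illustration in Python (again, not the demo's actual code; the numbers are arbitrary):

```python
import numpy as np

def find_controlled_avatar(positions):
    """positions: array of shape (3, n_samples). Returns the index of
    the avatar whose position is most stable, i.e. the one whose
    disturbance is being resisted -- the controlled variable."""
    return int(np.argmin(positions.var(axis=1)))

rng = np.random.default_rng(1)
n = 5000
d = np.cumsum(rng.normal(0.0, 0.05, (3, n)), axis=1)  # three drifting disturbances
mouse = -d[1]              # the user acts to cancel the disturbance on avatar 1
positions = mouse + d      # each avatar = mouse position + its own disturbance
print(find_controlled_avatar(positions))   # -> 1
```

In the real demo the compensation is imperfect, so the test looks for the avatar least affected by its disturbance rather than one held perfectly still.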

The fact that Marken was repeatedly told this, interpreted it to mean that others were jealous of his precision, and continued to produce experimental "results" of the same sort along with bold claims of their predictive power, makes him a crank.

I don't recall ever being told (by reviewers or other critics) that the goodness of fit of my (and my mentor Bill Powers') PCT models to data was a result of having more free parameters than data points. And had I ever been told that, I would certainly not have thought it was because others were jealous of the precision of our results. And the main reason I have continued to produce experimental results -- available in my books Mind Readings, More Mind Readings and Doing Research on Purpose -- is not to make bold claims about the predictive power of the PCT model but to emphasize the point that PCT is a model of control, the process of consistently producing pre-selected results in a disturbance-prone world. The precision of PCT comes only from the fact that it recognizes that behavior is not a caused result of input or a cognitively planned output but a process of control of input. So if I’m a crank, it’s not because I imagine that my model of behavior fits the data better than other models; it’s because I think my concept of what behavior is is better than other concepts of what behavior is.

I believe Richard Kennaway, who is on this blog, can attest to the fact that, while I may not be the sharpest crayon in the box, I’m not really a crank; at least, no more of a crank than the person who is responsible for all this PCT stuff, the late (great) William T. Powers.

I hope all the formatting comes out ok on this; I can't seem to find a way to preview it.

Best regards

Rick Marken

Comment author: RichardKennaway 23 March 2017 01:52:54PM 2 points [-]

Actually, I left LessWrong about a year ago, as I judged it to have declined into a ghost town since the people most worth reading had mostly left. I've been reading it now and then since, and might be moved to being more active here if it seems worth it. I don't think I have enough original content to post to be a part of its revival myself.

As Rick says, he can be pretty cranky, but is not a crank.

Comment author: Lumifer 07 February 2016 07:05:45PM 0 points [-]

Translation: it's better to be drunk.

Not sure this qualifies as a rationality quote.

Comment author: RichardKennaway 08 February 2016 04:52:03PM 0 points [-]

Orthodox Islamic apologists rescue Khayyam by interpreting "wine" as spiritual intoxication. (How well this really fits is another matter. And the Song of Solomon is about Christ's love for His Church.) But one can as easily interpret the verse in a rationalist way. Channelling Fitzgerald for a moment...

The sot knows nothing but the tavern's wine
Rumi and Shams but ecstasy divine
The Way of Eli is not here nor there
But in the pursuit of a Fun sublime!

Great literature has as many versions as there are readers.

Comment author: Vaniver 05 February 2016 03:37:37PM 1 point [-]

This is such a technologically good idea that it must happen within a few years.

So, early on people were excited about machine translation--yeah, it wasn't great, but you could just have human translators start from the machine translation and fix the mess.

The human translators hated it, because it moved from engaging intellectual work to painful copyediting. I think a similar thing will be true for article writers.

Comment author: RichardKennaway 05 February 2016 05:54:06PM 1 point [-]

The human translators hated it, because it moved from engaging intellectual work to painful copyediting. I think a similar thing will be true for article writers.

The talented ones, yes, but there will be a lot of temptation for the also-rans. You've got a blogging deadline and nothing is coming together; why not fire up the bot and get topical article ideas? "It's just supplying facts and links, and the way it weaves them into a coherent structure, well I could have done that, of course I could, but why keep a dog and bark myself? The real creative work is in the writing." That's how I see the slippery slope starting, into the Faustian pact.

Comment author: Lumifer 05 February 2016 04:12:00PM 1 point [-]

I'm just saying it's so technologically cool, someone will do it as soon as it's possible.

Ahem. ELIZA, the chat bot, was made in the mid-1960s. And...:

Weizenbaum tells us that he was shocked by the experience of releasing ELIZA (also known as "Doctor") to the nontechnical staff at the MIT AI Lab. Secretaries and nontechnical administrative staff thought the machine was a "real" therapist, and spent hours revealing their personal problems to the program. When Weizenbaum informed his secretary that he, of course, had access to the logs of all the conversations, she reacted with outrage at this invasion of her privacy.

Comment author: RichardKennaway 05 February 2016 04:27:52PM 0 points [-]

I'm aware of ELIZA, and of Yvain's post. ELIZA's very shallow, and the interactive setting gives it an easier job than coming up with 1000 words on "why to have goals" or "5 ways to be more productive". I do wonder whether some of the clickbait photo galleries are mechanically generated.

Comment author: Lumifer 05 February 2016 03:39:03PM *  2 points [-]

Actually, that could be huge. Rationality blogs generated by bots! Self-improvement blogs generated by bots! Gosh-wow science writing generated by bots!

LOL. Wake up and smell the tea :-) People who want to push advertising into your eyeballs now routinely construct on-demand (as in, in response to a Google query) websites/blogs/etc. just so that you'd look at them and they get paid for ad impressions.

See e.g. recent Yvain:

EHealthMe’s business model is to make an automated program that runs through every single drug and every possible side effect, scrapes the FDA database for examples, then autopublishes an ad-filled web page titled “COULD $DRUG CAUSE $SIDEEFFECT?”. It populates the page by spewing random FDA data all over it, concludes “$SIDEEFFECT is found among people who take $DRUG”, and offers a link to a support group for $DRUG patients suffering from $SIDE_EFFECT. Needless to say, the support group is an automatically-generated forum with no posts in it.

Now, you say you want to turn this to the light side...?

Comment author: RichardKennaway 05 February 2016 04:03:20PM 1 point [-]

Now, you say you want to turn this to the light side...?

I'm just saying it's so technologically cool, someone will do it as soon as it's possible. Whether it would actually be good in the larger scheme of things is quite another matter. I can see an arms race developing between drones rewriting bot-written copy and exposers of the same, together with scandals of well-known star bloggers discovered to be using mechanical assistance from time to time. There would be a furious debate over whether using a bot is actually a legitimate form of writing. All very much like drugs and sport.

Bot-assisted writing may make the traditional essay useless as a way of assessing students, perhaps to be replaced by oral exams in a Faraday cage. On Facebook, how will you know whether your friends' witticisms are their own work, especially the ones you've never been face to face with?

In response to Upcoming LW Changes
Comment author: Lumifer 03 February 2016 08:36:30PM 1 point [-]

On a tangential note: it would be cute for LW to acquire a collection of resident chat bots, preferably ones which could be dissected and rewired by all and sundry. Erecting defences against chat bots run amok would also be enlightening :-)

Comment author: RichardKennaway 05 February 2016 09:40:06AM 3 points [-]

Watson can already philosophize at you from TED talks. Someone needs to develop a chat bot based on it, and have it learn from the Sequences.

Actually, that could be huge. Rationality blogs generated by bots! Self-improvement blogs generated by bots! Gosh-wow science writing generated by bots!

At present, most bot-written books are pretty obviously junk, but instead of going for volume and long tails, you could hire human editors to make the words read more as if they had originated with a human being. They'd have to have a good command of English, though, so the stereotypical outsourcing to Bangalore wouldn't be good enough. Ideally, you'd want people who were not just native speakers, but native to American culture, smart, familiar with the general area of the ideas, and good with words. Existing bloggers, that is. Offer this to them as a research tool. It would supply a blogger with a stream of article outlines and the blogger would polish them up. Students with essays to write could use it as well, and since every essay would be different, you wouldn't be able to detect it wasn't the student's work by googling phrases from it.

This is such a technologically good idea that it must happen within a few years.

Comment author: RichardKennaway 02 February 2016 03:10:20PM 0 points [-]

Suppose Alice and Bob are the same person. Alice tosses a coin a large number of times and records the results.

Should she disbelieve what she reads?

In response to comment by [deleted] on Open thread, Feb. 01 - Feb. 07, 2016
Comment author: IlyaShpitser 02 February 2016 03:57:55AM 2 points [-]

I apologize if I caused you any distress, that was not my intention.

Comment author: RichardKennaway 02 February 2016 02:49:26PM 0 points [-]
Comment author: InhalingExhaler 01 February 2016 05:04:52PM 0 points [-]

Well, it sounds right. But which mistake in rationality was made in the described situation, and how can it be corrected? My first idea was that there are things we shouldn't doubt... But that is kind of dogmatic and feels wrong. So should it maybe be like "Before doubting X, think of what you will become if you succeed, and take that into consideration before actually trying to doubt X"? But this still implies "There are cases when you shouldn't doubt", which is still suspicious and doesn't sound "rational". I mean, doesn't sound like making the map reflect the territory.

Comment author: RichardKennaway 01 February 2016 10:37:12PM 0 points [-]

It's like repairing the foundations of a building. You can't uproot all of them, but you can uproot any of them, as long as you take care that the building doesn't fall down during renovations.

Comment author: InhalingExhaler 31 January 2016 06:36:58PM *  4 points [-]

Hello.

I found LessWrong after reading HPMoR. I think I woke up as a rationalist when I realised that in my everyday reasoning I always judged from the bottom line, not considering any third alternatives, and started to think about what to do about that. I am currently trying to stop my mind from always aimlessly and uselessly wandering from one topic to another. I registered on LessWrong after I started to question why I believe rationality works, and ran into a problem, and thought I could get some help here. The problem is expressed in the following text (I am ready to move it from the welcome board to any other suitable one if needed):

John was reading a book called “Rationality: From AI to Zombies” and thought: “Well, I am advised to doubt my beliefs, as some of them may turn out to be wrong”. So, it occurred to John to try to doubt the following statement: “Extraordinary claims require extraordinary evidence”. But that was impossible to doubt, as this statement was a straightforward implication of the theorem X of probability theory, which John, as a mathematician, knew to be correct. After a while a wild thought ran through his mind: “What if every time a person looks at the proof of the theorem X, the Dark Lords of the Matrix alter the perception of this person to make the proof look correct, but actually there is a mistake in it, and the theorem is actually incorrect?” But John didn’t even consider that idea seriously, because such an extraordinary claim would definitely require extraordinary evidence.

Fifteen minutes later, John spontaneously considered the following hypothetical situation: He visualized a religious person, Jane, who is reading a book called “Rationality: From AI to Zombies”. After reading for some time, Jane thinks that she should try to doubt her belief in Zeus. But it is definitely an impossible action, as the existence of Zeus is confirmed in the Sacred Book of Lightning, which, as Jane knows, contains only Ultimate and Absolute Truth. After a while a wild thought runs through her mind: “What if the Sacred Book of Lightning actually consists of lies?” But Jane doesn’t even consider the idea seriously, because the Book is surely written by Zeus himself, who doesn’t ever lie.

From this hypothetical situation John concluded that if he couldn’t doubt B because he believed A, and couldn’t doubt A because he believed B, he’d better try to doubt A and B simultaneously, as he would be cheating otherwise. So, he attempted to simultaneously doubt the facts that “Extraordinary claims require extraordinary evidence” and that “Theorem X is proved correctly”.

As he attempted to do it, and succeeded, he spent some more time considering Jane’s position before settling his doubt. Jane justifies her set of beliefs by Faith. Faith is certainly an implication of her beliefs (the ones about reliability of the Sacred Book), and Faith certainly belongs to the meta-level of her thinking, affecting her ideas about existence of Zeus located at the object level.

So, John generalized that if he had some meta-level process controlling his thoughts and this process was implied by the very thought he was currently doubting, it would be wise to suspend the process for the time of doubting, because not following this rule could leave him holding beliefs which, from the outside perspective, looked as ridiculous as Jane’s religion. John searched through the meta-level controlling his thoughts. He was horrified to realize that Bayesian reasoning itself fitted the criteria: it was definitely organizing his thought process, and its correctness was implied by the theorem X he was currently doubting. So he was sitting, with his belief unsettled and with no idea of how to settle it correctly. After all, even if he came up with any idea, how could he know that it wasn’t the worst idea ever, intentionally given to him by the Dark Lords of the Matrix? He didn’t allow himself to disregard this nonsense with “Extraordinary claims require extraordinary evidence” – otherwise he would fail at doubting this very statement and there would be no point in this whole crisis of faith which he deliberately inflicted on himself…

Jane, in whose imagination the whole story took place, yawned and closed a book called “Rationality: From AI to Zombies” lying in front of her. If learning rationality was going to make her doubt herself out of rationality, why would she even bother to try it? She was comfortable with her belief in Zeus, and the only theory which could point out her mistakes apparently ended up annihilating itself. Or, in short, who would believe anyone saying “We have evidence that considering evidence leads you to truth, therefore it is true that considering evidence leads you to truth”?

Comment author: RichardKennaway 01 February 2016 11:58:42AM 2 points [-]

Welcome to Less Wrong!

My short answer to the conundrum is that if the first thing your tool does is destroy itself, the tool is defective. That doesn't make "rationality" defective any more than crashing your first attempt at building a car implies that "The Car" is defective.

Designing foundations for human intelligence is rather like designing foundations for artificial (general) intelligence in this respect. (I don't know if you've looked at The Sequences yet, but it has a lot of material on the common fallacies the latter enterprise has often fallen into, fallacies that apply to everyday thinking as well.) That people, on the whole, do not go crazy — at least, not as crazy as the tool that blows itself up as soon as you turn it on — is a proof by example that not going crazy is possible. If your hypothetical system of thought immediately goes crazy, the design is wrong. The idea is to do better at thinking than the general run of what we can see around us. Again, we have a proof by example that this is possible: some people do think better than the general run.
