In response to Closet survey #1
Comment author: anonymous259 15 March 2009 05:57:24PM 10 points [-]

With probability 50% or greater, the long-term benefits of the invasion of Iraq will outweigh the costs suffered in the short term.

Comment author: hargup 23 March 2015 10:40:59AM 1 point [-]

Do you still maintain this statement, now in 2015 with the ISIL attacks?

Comment author: JoshuaZ 11 March 2015 11:02:17PM 0 points [-]

I'm not sure I understand. Can you expand on what the point is?

Comment author: hargup 13 March 2015 11:38:10AM 0 points [-]

Postman said this in the context of television and new-age media, where even "news" and other relevant information is shown for its entertainment value, not because it can help us make better decisions.

Comment author: hargup 11 March 2015 08:55:43AM 3 points [-]

Facts push other facts into and then out of consciousness at speeds that neither permit nor require evaluation.

Neil Postman, Amusing Ourselves to Death, p. 70

Comment author: lindagert 05 August 2011 02:08:16AM 18 points [-]

I am 100% bereft of mental imagery in a waking state of consciousness (I have fully sensory dreams when I sleep). It is dark and quiet in my mind all the time. Thoughts take the form of silently talking to myself. There are only words. No visual memory, no imagination -- I don't know what these things are, they are only words. Seeing things in the mind, hearing things, re-experiencing, exploring non-physical possibilities via imagination: these all sound like paranormal or supernatural experiences to me, literally, because what is normal and natural for me is the dark and quiet mind.

I find it fascinating how the Typical Mind Fallacy works both ways here: many mentally blind people say that they had no idea that other people could actually see pictures in the mind -- this sounds so preposterous to us that, until some point when we break through our denial, we believe that people are speaking metaphorically about the "mind's eye" or "picturing" something... because obviously it's impossible! And the scientific community is largely unaware of the existence of non-imagers, because whenever they show up as research subjects, their self-reports of mental blindness tend to get discounted or ignored -- again, because the researchers are committing the Typical Mind Fallacy -- that can't be true!

So I am writing a book about mental blindness. The book, tentatively titled "Mental Blindness and the Typical Mind Fallacy" will present the history of the non-study of non-imagery (due to the TMF), and characteristics of non-imagers, including some of the emotional and psychological aspects of living with this kind of cognition.

I’ve created a research survey to collect information from others who are non-imagers, or nearly so. To take the survey, click on this link: http://www.surveymonkey.com/s/RQXHZZQ

You are invited to participate in this survey if you fall into one of the following categories:

  1. Non-Imager: you never experience any visual mental imagery in a waking state of consciousness; your mind is always dark, there is nothing picture-like that happens in your mind, either willed or unwilled. You have no sense of having a “mind’s eye.”

  2. Weak-Imager: if there is any visual imagery, it is so vague or fleeting that you do not make use of it in purposeful, constructive thought processes: you do not use imagery for problem solving, memories are not visual, there is no visual component to imagining or daydreaming or planning. You experience a “mind’s eye,” but yours is more or less “legally blind.”

many thanks Linda

Comment author: hargup 19 January 2015 06:33:31PM 2 points [-]

It has been more than three years since you commented. What is the status of the book? Is it in print now?

Comment author: pjeby 27 April 2009 12:24:58AM 12 points [-]

he keeps offering pronouncements about how the mind works and how to make it work better

...much of which has come attached with things that are actually possible to investigate and test on your own, and a few people have actually posted comments describing their results, positive or negative. I've even pointed to bits of research that support various aspects of my models.

But if you're allergic to self-experimentation, have a strong aversion to considering the possibility that your actions aren't as rational as you'd like to think, or just don't want to stop and pay attention to what goes on in your head, non-verbally... then you really won't have anything useful to say about the validity or lack thereof of the model.

I think it's very interesting that so far, nobody has opposed anything I've said on the grounds that they tested it, and it didn't work.

What they've actually been saying is that they don't think it's right, or they don't think it will work, or that NLP has been invalidated, or ANYTHING at all other than: "I tried thus-and-such using so-and-so procedure, and it appears that my results falsify this-or-that portion of the model you are proposing."

In a community of self-professed rationalists, I find that very interesting. Not as interesting, mind you, as I would an actual result falsifying a portion of my model, though.

Because that, I would actually LEARN something from. I could try and replicate the person's result, offer other things to try, or maybe even update my model. It does happen, pretty regularly -- and the updates are almost equally likely to come from:

  1. more-or-less mainstream psych and popularizations thereof,
  2. pop, new age, or NLP stuff,
  3. self-experimentation, and
  4. unexpected events in client work

A recent mainstream psych example would be Dweck's fixed/growth mindsets model, which I've now converted to a more specific model for change work that I call "or"/"more" thinking.

That is, a belief that "either I do this OR I fail" -- a digital control variable of avoidance -- is less useful than one where "the MORE I do this the more/closer I get": an analog variable under your control.

This is a much finer-grained distinction than my older notion that didn't include discrete/continuous, but focused strictly on the approach/avoidance aspect of the variables. It's also a more narrowly-focused understanding of the difference than Dweck's work, which speaks more about the effects of these mindsets than the mechanism of them, or how to change that mechanism in practice.

So now that I have this distinction, I've gone back and reviewed other things I've read that tie into this idea in one way or another, giving it more depth. That is, I can look at other discussions of "naturally successful" behavior, hypnotic techniques or NLP submodality techniques that link an increase in one thing to an increase in another, and so on.

In particular, I've found various techniques by Richard Bandler that describe how certain successful athletes and entertainers he worked with transformed "or" variables into "more" variables (although he didn't use those terms).

I'm now in the process of self-experimenting with some of those techniques, preparatory to selecting ones to add to my personal and training repertoire.

That, more or less, is my method for model refinement: read about ideas, try ideas, figure out what works, update models, find relevant techniques, try techniques w/self, w/clients, get ideas about what other ideas might be worth investigating, rinse and repeat.

Is it "the scientific method"? Probably not. Is it closer to the scientific method than the "I read something or believe something that means that won't work, but can't be bothered to tell whether it's the same thing" approach favored by some folks? Hell yeah.

Btw, that attitude is why every new self-help author or guru has to come up with new names for every damn thing: the old names get worn out by people who conclude they already "know" what that thing is, because their brother told them something about something like that once and it sounded kind of like something else they tried that didn't work.

Yet century-old techniques work fine, if you actually know how to do them, and you actually DO them. But surprisingly few people ever actually try, let alone try with all their might, in the "shut up and do the impossible" sense.

Comment author: hargup 25 December 2014 06:51:26AM 0 points [-]

So basically, are you saying Eliezer, gjm, and others are falling for the fallacy fallacy?

Comment author: hargup 18 December 2014 04:12:53PM *  9 points [-]

Hi, I'm Harsh Gupta. I'm an undergraduate student studying Mathematics and Computing at IIT Kharagpur, India. I became interested in Rationality when I came across the Wikipedia article for Confirmation Bias around 2 years ago. That was pretty intriguing, so I searched more and read Dan Ariely's book Predictably Irrational. Then I also read his other book The Upside of Irrationality, and now I'm reading HPMOR and Kahneman's Thinking, Fast and Slow. I also read The Art of Strategy around the same time as Ariely's book, and that was a life changer too. The basic background in Game Theory that I got from The Art of Strategy helped me learn to analyze complex real-life situations from a mathematical perspective. I came to know about LessWrong from gwern.net, which was suggested by a friend who is learning functional programming. I want to get more involved with the community, and I would like to contribute some articles in the future. BTW, is there any community todo list?

Comment author: RichardKennaway 02 November 2014 06:49:35PM 3 points [-]

All of the discussion here has been based on the assumption that heroic responsibility is advocated by HPMOR as a fundamental moral virtue. But it is advocated by Harry Potter. Eliezer wrote somewhere about what in HPMOR can and what cannot be taken as the author's own views. I forget the exact criterion, but I'm sure it did not include "everything said by HP".

Heroic responsibility is a moral tool. That not everyone is able to use the tool, that the tool should not always be employed, that the tool exacts its own costs: these are all true. The tool itself is still a thing of usefulness and value, to be taken out and used when appropriate, and kept sharp the rest of the time.

Scaled down from heroic levels, it is what on LW has been called agentiness, or being a PC. I called it initiative in another comment in this thread.

A footnote:

I just looked up "initiative" on Google. Does it no longer mean what it used to? The first page of hits gives good definitions and examples from dictionary sites ("the ability to assess and initiate things independently"), but the rest of the hits are to brand names and actions taken by organisations, not individuals. I went down to the 20th page of hits, and apart from a few media companies using the word as a brand name and one more dictionary entry, it was all activities by organisations. I didn't find a single example of the word used in the sense of agentiness.

What does "initiative" mean to people who learned it in the last 20 years?

Comment author: hargup 15 December 2014 09:15:50PM 3 points [-]

Eliezer wrote somewhere about what in HPMOR can and what cannot be taken as the author's own views. I forget the exact criterion, but I'm sure it did not include "everything said by HP".

This is mentioned at the beginning of the book:

"Please keep in mind that, beyond the realm of science, the views of the characters may not be those of the author. Not everything the protagonist does is a lesson in wisdom, and advice offered by darker characters may be untrustworthy or dangerously double-edged."