Comment author: Yvain2 02 March 2008 04:11:30AM 12 points [-]

I had a professor, David Berman, who believed some people could image well and other people couldn't. He cited studies by Galton and James in which some people completely denied they had imaginative ability, and other people were near-perfect "eidetic" imagers. Then he suggested psychological theories denying imagination were mostly developed by those who could not themselves imagine. The only online work of his I can find on the subject is http://books.google.co.jp/books?id=fZXoM80K9qgC&pg=PA13&lpg=PA13&ots=Zs03EkNZ-B&sig=2eVzzMmK7WBQnblNx2KMVpUWBnk&hl=en#PPA4,M1 pages 4-14.

My favorite thought experiment of his: Imagine a tiger. Imagine it clearly and distinctly. Got it? Now, how many black stripes does it have? (Some people thought the question was ridiculous. One person responded "Seven. Now what?")

He never formally tested his theory because he was in philosophy instead of the sciences, which is a shame. Does anyone know of any modern psychology experiment that tests variations in imaging ability?

Comment author: DilGreen 12 July 2016 11:50:22AM *  0 points [-]

It's been a few years, but the answer is now - yes. Here's a link to a New Scientist article from earlier this year. I'm afraid there's a pay barrier: https://www.newscientist.com/article/2083706-my-minds-eye-is-blind-so-whats-going-on-in-my-brain/ The article documents recent experiments and thinking about people who are poor at, or incapable of, forming mental pictures (as opposed to manipulating concepts); about 2 to 3% of people report this. Key quote:

To find out how MX’s brain worked, Zeman put him into an MRI scanner and showed him pictures of people he was likely to recognise, including former UK prime minister Tony Blair. The visual areas towards the back of his brain lit up in distinctive patterns as expected. However, when MX was asked to picture Blair’s face in his mind’s eye, those areas were silent. In other words, the visual circuits worked when they had a signal from the outside world, but MX couldn’t switch them on at will (Neuropsychologia, vol 48, p 145).

Test yourself here: http://socrates.berkeley.edu/~kihlstrm/MarksVVIQ.htm

Comment author: Caledonian2 13 September 2008 05:58:37PM 0 points [-]

means something else that is hard to define, certainly hard to define in the context of a blog flamewar, but does not contradict the findings of science.

The findings of science are almost irrelevant. The means justify the ends. The usage of concepts that are not clearly and properly defined is incompatible with scientific methodology, and thus incompatible with science.

No sane, rational, and sufficiently-educated person puts forward arguments incompatible with science.

Comment author: DilGreen 13 October 2010 12:49:26AM 0 points [-]

No sane, rational, and sufficiently-educated person puts forward arguments incompatible with science.

The problem with this statement is that it puts 99.999% of everyone 'beyond the pale'. It disallows meaningful conversations about things which have huge functional impacts on all humans, but about which science has little that is useful or coherent to say. It cripples conversation about things which our current science deems impossible, without allowing for the certainty that key aspects of what is currently accepted science will be superseded in the future.

In other words, it is an example of a reasonable sounding thing to say that is almost perfectly useless. You have argued yourself into a box.

I would suggest that no sane, rational, and sufficiently-educated person ascribes zero probability to irrational-seeming propositions.

In response to Planning Fallacy
Comment author: David 25 August 2008 09:13:41AM 3 points [-]

What a wonderful blog, I just discovered it. This is an old post so I am not sure if anyone is still following it. While I think the article raises some excellent points, I think it may be missing the forest for the trees. Perhaps due to bias :-).

For instance, the article states:

  • 13% of subjects finished their project by the time they had assigned a 50% probability level;

  • 19% finished by the time assigned a 75% probability level;

  • and only 45% (less than half!) finished by the time of their 99% probability level.
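
A minimal sketch of the calibration gap in those figures (the three data points are simply the ones quoted above; the snippet is only an illustration, not something from the original study):

```python
# Stated confidence levels vs. observed completion rates, as quoted above.
stated_confidence = [0.50, 0.75, 0.99]
observed_completion = [0.13, 0.19, 0.45]

for stated, observed in zip(stated_confidence, observed_completion):
    gap = stated - observed  # how far short reality fell of the stated confidence
    print(f"stated {stated:.0%} confident -> {observed:.0%} finished (gap: {gap:.0%})")
```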

The conclusion then seems to be that everyone did a poor job of estimating. Maybe, maybe not. Why not instead ask whether there were other cognitive/behavioral factors at play? For example:

1. Procrastinating until the last moment to actually do the work (you have never heard of students doing that, have you?) :-). This is a common reason that, no matter how long people are given to complete a task, they do not complete it on time, or only do so at the last minute.

2. Parkinson's law (work expands to fill the time available). The more time the students have, the more they will expand the scope of the work to make it impressive (it will turn into a longer paper, or they will obsess more over details).

These are just a few thoughts. I submit the opposite of the article's conclusion (without invalidating it): most projects take longer than they have to because of cognitive/behavioral issues. And here I will quote the blog mission statement: "If we know the common patterns of error or self-deception, maybe we can work around them ourselves, or build social structures for smarter groups."

And *that* is also the key to achieving faster and on-time projects - not just accepting that our planning is faulty and looking at past projects; many of those past projects likely took longer than they needed to because of cognitive bias.

In response to comment by David on Planning Fallacy
Comment author: DilGreen 11 October 2010 01:09:00PM 3 points [-]

As an architect and sometime builder, and as an excellent procrastinator, I heartily concur with this comment.

The range of biases and of psychological and 'structural' factors at work is wide. Here are a few:

  • 'tactical optimism': David Bohm's term for the way in which humans overcome the (so far) inescapable assessment that 'in the long run, we're all dead'. Specifically, the building industry is rife with non-optimal ingrained conditions; you wouldn't come to work if you weren't an optimist. Builders who cease to have an optimistic outlook go and find other things to do.

  • maintaining flexibility has benefits: non-trivial projects have hidden detail. It often happens that spending longer working around the project - at the expense of straight-ahead progress - can lead to higher quality at the end, as delayed completion has allowed a more elegant/efficient response to inherent, but unforeseen problems.

  • self-application of pressure: as someone who tends to procrastinate, I know that I sometimes use ambitious deadlines to try to manage myself - especially if I can advertise that deadline - as in the study.

  • deadline/sanction fatigue: if the loss incurred for missing deadlines is small, or alternatively if it is purely psychological, then the 'weight' of time pressure is diminished with each failure.

I'm going to stop now, before I lose the will to live.

In response to Magical Categories
Comment author: DilGreen 11 October 2010 12:23:53PM *  0 points [-]

So many of the comments here seem designed to illustrate how extremely difficult it is, even for intelligent humans interested in rationality and trying hard to participate usefully in a conversation about hard-edged situations of perceived non-trivial import, to avoid fairly simplistic anthropomorphisms of one kind or another.

Saying, of a supposed super-intelligent AI - one that works by being able to parallel, somehow, the 'might as well be magic' bits of intelligence that we currently have at best a crude assembly of speculative guesses for - any version of "of course, it would do X", seems - well - foolish.

In response to Magical Categories
Comment author: Dan_Burfoot 25 August 2008 03:45:58AM 0 points [-]

@Eliezer - I think Shane is right. "Good" abstractions do exist, and are independent of the observer. The value of an abstraction relates to its ability to allow you to predict the future. For example, "mass" is a good abstraction, because when coupled with a physical law it allows you to make good predictions.

If we assume a superintelligent AI, we have to assume that the AI has the ability to discover abstractions. Human happiness is one such abstraction. Understanding the abstraction "happiness" allows one to predict certain events related to human activity. Thus a superintelligent AI will necessarily develop the concept of happiness in order to allow it to predict human events, in much the same way that it will develop a concept of mass in order to predict physical events.

Plato had a concept of "forms". Forms are ideal shapes or abstractions: every dog is an imperfect instantiation of the "dog" form that exists only in our brains. If we can accept the existence of a "dog" form or a "house" form or a "face" form, then it is not difficult to believe in the existence of a "good" form. Plato called this the Form of the Good. If we assume an AI that can develop its own forms, then it should be able to discover the Form of the Good.

http://en.wikipedia.org/wiki/Form_of_the_Good

Comment author: DilGreen 11 October 2010 12:16:02PM 1 point [-]

Whether the AI finds the abstraction of human happiness pertinent, and whether it considers increasing it to be worth sacrificing other possible benefits for, are unpredictable unless we have succeeded in achieving EY's goal of predestining the AI to be Friendly.

In response to Magical Categories
Comment author: Shane_Legg 24 August 2008 11:53:35PM 2 points [-]

I mean differentiation in the sense of differentiating between the abstract categories. Is half a face that appears to be smiling, while the other half is burned off, still a "smiley face"? Even I'm not sure.

I'm certainly not arguing that training an AGI to maximise smiling faces is a good idea. It's simply a case of giving the AGI the wrong goal.

My point is that a super intelligence will form very good abstractions, and based on these it will learn to classify very well. The problem with the famous tank example you cite is that they were training the system from scratch on a limited number of examples that all contained a clear bias. That's a problem for inductive inference systems in general. A super intelligent machine will be able to process vast amounts of information, ideally from a wide range of sources and thus avoid these types of problems for common categories, such as happiness and smiley faces.

If what I'm saying is correct, this is great news, as it means that a sufficiently intelligent machine that has been exposed to a wide range of input will form good models of happiness, wisdom, kindness, etc. - things that, as you like to point out, even we can't define all that well. Hooking the machine up to then take these as its goals won't, I suspect, be all that hard, as we can open up its "brain" and work this out.

Comment author: DilGreen 11 October 2010 12:08:04PM 0 points [-]

Surely the discussion is not about the issue of whether an AI will be able to be sophisticated in forming abstractions - if it is of interest, then presumably it will be.

But the concern discussed here is how to determine beforehand that those abstractions will be formed in a context characterised here as Friendly AI. The concern is to pre-ordain that context before the AI achieves superintelligence.

Thus the limitations of communicating desirable concepts apply.

In response to Magical Categories
Comment author: Tim_Tyler 24 August 2008 08:55:03PM -1 points [-]

Early AIs are far more likely to be built to maximise the worth of the company that made them than anything to do with human happiness. E.g. see: Artificial intelligence applied heavily to picking stocks

A utility function measured in dollars seems fairly unambiguous.

Comment author: DilGreen 11 October 2010 11:48:29AM 14 points [-]

A utility function measured in dollars seems fairly unambiguously to lead to decisions that are non-optimal for humans, without a sophisticated understanding of what dollars are.

Dollars mean something for humans because they are tokens in a vast, partly consensual and partly reified game. Economics, which is our approach to developing dollar-maximising strategies, is non-trivial.

Training an AI to understand dollars as something more than data points would be similarly non-trivial to training an AI to faultlessly assess human happiness.

In response to Surprised by Brains
Comment author: Tim_Tyler 26 November 2008 09:03:03AM 1 point [-]

Oh, and I suppose evolution is trivial? [...] By comparison... yeah, actually.

Nature was compressing the search space long before large brains came along.

For example, bilateral symmetry is based partly on the observation that an even number of legs works best. Nature doesn't need to search the space of centipedes with an odd number of legs. It has thus compressed the search space by a factor of two. There are very many such economies - explored by those who study the evolution of evolvability.

Comment author: DilGreen 11 October 2010 02:42:05AM *  3 points [-]

Surely this is not an example of search-space compression, but an example of local islands of fitness within the space? Evolution does not 'make observations', or proceed on the basis of abstractions.

An even number of legs 'works best' precisely for the creatures who have evolved in the curtailed (as opposed to compressed) practical search space of a local maximum. This is not a proof that an even number of legs works best, period.

Once bilateral symmetry has evolved, the journey from bilateralism to any other viable body plan is simply too difficult to traverse. Nature DOES search the fringes of the space of centipedes with an odd number of legs - all the time.

http://www.wired.com/magazine/2010/04/pl_arts_mutantbugs/

That space just turns out to be inhospitable, time and time again. One day, under different conditions, it might not.

BTW, I am not claiming, either, that it is untrue that an even number of legs works best - simply that the evolution of creatures with even numbers of legs and any experimental study showing that even numbers of legs are optimal are two different things. Mutually reinforcing, but distinct.

In response to Surprised by Brains
Comment author: RobinHanson 23 November 2008 07:52:00PM 4 points [-]

Eliezer, it may seem obvious to you, but this is the key point on which we've been waiting for you to clearly argue. In a society like ours, but also with one or more AIs, and perhaps ems, why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?

Comment author: DilGreen 11 October 2010 02:20:21AM *  6 points [-]

This comment crystallised for me the weirdness of this whole debate (I'm not picking sides, or even imagining that I have the capacity to do so intelligently).

In the spirit of the originating post, imagine two worms are discussing the likely characteristics of intelligent life, some time before it appears (I'm using worms as early creatures with brains, allowing for the possibility that intelligence is a continuum - that worms are as far from humans as humans are from some imagined AI that has foomed for a day or two);

Worm1: I tell you it's really important to consider the possibility that these "intelligent beings" might want all the dead leaf matter for themselves, and wriggle much faster than us, with better sensory equipment.....

Worm2: But why can't you see that, as super intelligent beings, they will understand the cycle of life, from dead leaves, to humus, to plants, and back again. It is hard to imagine that they won't understand that disrupting this flow will be sub-optimal....

I cannot imagine how, should effective AI come into existence, these debates will not seem as quaint as those 'how many angels would fit onto the head of a pin' ones that we fondly ridicule.

The problem is that the same people who were talking about such ridiculous notions were also laying the foundation stones of western philosophical thinking, preserving and transmitting classical texts, and developing methodologies that would eventually underpin the scientific method - and they didn't distinguish between them!

In response to Surprised by Brains
Comment author: Will_Pearson 23 November 2008 10:11:24AM 3 points [-]

Believer: The search space is compressible -

The space of behaviors of Turing Machines is not compressible; subspaces are, but not the whole lot. What space do you expect the SeedAIs to be searching? If you could show that it is compressible and bound to contain an uncountable number of better versions of the SeedAI, then you could convince me that I should worry about Fooming.

As such when I think of self-modifiers I think of them searching the space of Turing Machines, which just seems hard.

Comment author: DilGreen 11 October 2010 02:02:43AM 2 points [-]

The space of possible gene combinations is not compressible - under the evolutionary mechanism.

The space of behaviours of Turing machines is not compressible, in the terms in which that compression has been envisaged.

The mechanism for compressing the search space that Believer posits is something to do with brains, something to do with intelligence. And it works - we know it does: Kekulé works on the structure of benzene without success; sleeps, dreams of a serpent biting its own tail, and, waking, conceives of the benzene ring.

The mechanism (and everyone here believes that it is a mechanism) is currently mysterious. AI must possess this mechanism, or it will not be AI.
