khafra comments on Rationality Quotes June 2013 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
While I agree with the general point that it's important to consider impossibilities in fact, I'm not quite sure I agree where he's drawing the line between fact and principle. Does the compressive strength of stainless steel, and the implied limit on the height of a ladder constructed of it, not count as a restriction in principle?
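As a rough sanity check of the compressive-strength limit mentioned here, a uniform column fails at its base when the stress rho * g * h reaches the material's strength, giving a maximum height of sigma / (rho * g). This is a minimal back-of-the-envelope sketch with assumed typical values for stainless steel (not figures from the thread), ignoring buckling and the falloff of gravity with altitude:

```python
# Back-of-the-envelope: maximum height of a uniform stainless steel
# column before its base yields under its own weight.
# Assumed typical values (not from the thread):
YIELD_STRENGTH = 250e6   # Pa, roughly typical for stainless steel
DENSITY = 8000.0         # kg/m^3
G = 9.81                 # m/s^2, surface gravity (assumed constant)

# Stress at the base of a column of height h is rho * g * h,
# so the limiting height is sigma / (rho * g).
max_height_m = YIELD_STRENGTH / (DENSITY * G)
print(f"{max_height_m / 1000:.1f} km")  # ~3.2 km
```

A solid stainless ladder tops out at a few kilometers, which is why the replies below reach for hollowed planets, nanotube filaments, and thrusters.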
It just takes some imagination. Hollow out both the Earth and the Moon to reduce their gravitational pull; support the ladder with carbon nanotube filaments; stave off collapse by pushing it around with high-efficiency ion impulse engines; etc.
I agree, though, that philosophers often make too much of the distinction between "logically impossible" and "physically impossible." There's probably no in principle possible way to hollow out the Earth significantly while retaining its structure; etc.
So basically, build a second ladder out of some other material that's feasible (unlike steel), and then just tie the steel ladder to it so it doesn't have to bear any weight.
I think that often "logically possible" means "possible if you don't think too hard about it". Which is exactly Dennett's point in context: the idea that you are a brain in a vat is only conceivable if you don't think about the computing power that would be necessary for a convincing simulation.
Dreams can be quite convincing simulations that don't need that much computing power.
The worlds that people who do astral traveling perceive can be quite complex. Complex enough to convince people who engage in that practice that they really are on an astral plane. Does that mean that the people are really on an astral plane and aren't just imagining it?
The way I like to think about it is that convincingness is a 2-place function - a simulation is convincing to a particular mind/brain. If there's a reasonably well-defined interface between the mind and the simulation (e.g. the 5 senses and maybe a couple more) then it's cheating to bypass that interface and make the brain more gullible than normal, for example by introducing chemicals into the vat for that purpose.
From that perspective, dreams are not especially convincing compared to waking experience; rather, dreamers are especially convincible.
Dennett's point seems to be that a lot of computing power would be needed to make a convincing simulation for a mind as clear-thinking as a reader who was awake. Later in the chapter he talks about other types of hallucinations.
The 5 senses are brain events; there are no clean input channels to the brain. Take taste: how many different tastes of food can you perceive? More than five. Why? Your brain takes data from your nose, your tongue, and your memory and fits them together into something you perceive as taste.
You have no direct conscious access to the raw data that your nose or tongue sends to your brain; your qualia are the assembled result.
If someone is open to hypnotic suggestion, you can give him the suggestion that an apple tastes like an orange and then wake him. If he eats the apple, he will tell you it is an orange. He might even get angry when someone insists it isn't an orange, because it obviously tastes like one.
You don't need to introduce any chemicals. Millions of years of evolution have trained brains to have an extremely high prior for thinking that they aren't "brains in a vat".
Doubting your own perception is an incredibly hard cognitive task.
There are experiments where an experimenter uses a single electrode to trigger a subject to perform a particular action, like raising his arm. If the experimenter afterwards asks the subject why he raised the arm, the subject makes up a story and believes it. It takes effort for the experimenter to convince the subject that he made the story up and that there was no reason he raised his arm.
I suggest you read the opening chapter of Consciousness Explained. Someone's posted it online here.
Dennett quotes no actual scientific paper in the paragraph, nor does he otherwise show that he really knows what the brain does.
You don't need to provide detailed feedback to the brain. Dennett should be well aware that humans have a blind spot in their eyes, and that the brain makes up information to fill it in.
It's the same with suggesting a brain in the vat that it's acting in the real world. The brain makes up the information that's missing to provide for an experience of being in the real world.
To produce a strong hallucination (as I understand him, Dennett equates strong hallucinations with complex ones) you might need a channel through which you can insert information into the brain, but you don't need to provide every detail. Missing details get made up by the brain.
No, Dennett explicitly denies that the brain makes up information to fill the blind spot. This is central to his thesis. He creates a whole concept called 'figment' to mock this notion.
His position is that nothing within the brain's narrative generators expects, requires, or needs data from the blind spot; hence, in consciousness, the blind spot doesn't exist. No gaps need to be filled in, any more than HJPEV can be aware that Eliezer has removed a line that he might, counterfactually, have spoken.
For a hallucination to be strong does not require it to have great internal complexity. It suffices that the brain happens not to ask too many questions.
That's a question of how "strong" is defined. But it seems I read Dennett too charitably for that purpose. He defines it as:
Given that definition, Dennett just seems wrong.
He continues saying:
I know multiple people in real life who report hallucinations of that strength. If you want an online source, the Tulpa forum has plenty of people who manage to have strong hallucinations of tulpas.
The tulpa way seems to take months or a year. With a strongly hypnotically suggestible person, a good hypnotist can create such a hallucination in less than an hour.
I think I must be misreading you. I'm puzzled that you believe this about hallucinations - that it's possible for the brain to devote enough processing power to create a "strong" hallucination in the Dennettian sense - but upthread, you seemed to be saying that dreams did not require such processing power. Dreams are surely the canonical example, for people who believe that whole swaths of world-geometry are actually being modelled, rendered and lit inside of their heads? After all, there is nothing else to be occupying the brain's horsepower; no conflicting signal source.
If I may share with you my own anecdote; when asleep, I often believe myself to be experiencing a fully sensory, qualia-rich environment. But often as I wake, there is an interim moment when I realise - it seems to be revealed - that there never was a dream. There was only a little voice making language-like statements to itself - "now I am over here now I am talking to Bob the scenery is so beautiful how rich my qualia are".
I think Dennett's position is just this; that there never was a dream, only a series of answers to spurious questions, which don't have to be consistent because nothing was awake to demand consistency.
Do you think he's wrong about dreams, too, or are you saying that waking hallucinations are importantly different? I had a quick look at the Tulpa forum and am unimpressed so far. Could you point to any examples you find particularly compelling?
Ok, so I flat out don't believe that. If waking consciousness was that unstable, a couple of hours of immersive video gaming would leave me psychotic; and all it would take to see angels would be a mildly-well-delivered Latin Mass, rather than weeks of fasting and self-flagellation.
I'll go read about it, though.
I don't think I've ever had an experience quite like that. I've perhaps had experiences that are transitional between images and propositions -- I'm thinking by visualizing a little story to myself, and the images themselves are seamlessly semantic, like I'm on the inside of a novel and the narration is a deep component of the concrete flow of events. But to my knowledge I've never felt a sudden revelation that my mental images were 'only a little voice making language-like statements to itself', à la Dennett's suggestion that all experiences are just judgments.
Perhaps we're conceptualizing the same experience after-the-fact in different ways. Or perhaps we just have different phenomenologies. A lot of people have suggested (sometimes tongue-in-cheek) that Dennett finds his own wilder hypotheses credible because he has an unusually linguistic, abstract, qualitatively impoverished phenomenology. (Personally, I wouldn't be surprised if that's a little bit true, but I think it's a small factor compared to Dennett's philosophical commitments.)
It is perfectly consistent to both believe that (some people) can have fully realistic mental imagery, and that (most people's) dreams tend to exhibit sub-realistic mental imagery.
I have one friend who claims to have eidetic mental imagery, and I have no reason to doubt her. Thomas Metzinger discusses in Being No-One the notion of whether the brain can generate fully realistic imagery, and holds that it usually cannot, but notes the existence of eidetic imaginers as an exception to the rule.
Agreed
My tulpa, which belongs to a Kardashev 3b civilization (but has its own penpal tulpas higher up) disagrees.
For example, you can construct a gravitational shell around the earth to guard against collapse by compensating the gravity. Use superglue so the wabbits and stones don't start floating. Edit: This is incorrect, stupid Tulpa. More like Kardashev F!
I think your tulpa is playing tricks on you. A shell around the Earth will have no effect on the interactions of bodies within it, or their interactions with everything outside the shell.
It could counteract the gravitational pull which would cause the surface of a hollow Earth to collapse otherwise. Edit: It would not :-(
A spherically symmetric shell has no effect on the gravitational field inside. It will not pull the surface of a hollow Earth outwards.
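The shell theorem invoked here can be checked numerically: decompose a thin spherical shell into rings along the axis through the test point and sum their axial pulls. A minimal sketch (function name and discretization are my own), working in units where G, M, and R are all 1:

```python
import math

def axial_shell_field(d_over_R, n=20000):
    """Net axial gravitational field (in units of G*M/R^2) at a point
    a fraction d_over_R of the radius from the center of a thin,
    uniform spherical shell, computed by summing rings (midpoint rule)."""
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        z = math.cos(theta)                   # ring position on the axis (units of R)
        r = math.sin(theta)                   # ring radius (units of R)
        dm = 0.5 * math.sin(theta) * dtheta   # ring's fraction of the shell mass
        dist_sq = (z - d_over_R) ** 2 + r ** 2
        total += dm * (z - d_over_R) / dist_sq ** 1.5
    return total

# Inside the shell, the ring contributions cancel exactly:
print(axial_shell_field(0.5))   # ~0
# Outside, the shell pulls like a point mass at its center:
print(axial_shell_field(2.0))   # ~ -0.25, i.e. -G*M/d^2 with d = 2R
```

The nearby pieces of shell pull harder per unit mass, but the far side contains proportionally more mass; the two effects cancel exactly at every interior point, so the shell cannot hold a hollow Earth's surface up.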
You're correct. There are other ways to guard against the collapse of an empty shell; it's a similar scenario to guarding against the collapse of a Dyson sphere.
Hey, that's a great idea--lots of little black hole-fueled satellites in low-earth orbit, suspending the crust so it doesn't collapse in on itself. I think we can build this ladder, after all!
edit: I think this falls prey to the shell theorem if they're in a geodesic orbit, but not if they're using constant acceleration to maintain their altitude, and vectoring their exhaust so it doesn't touch the Earth.