
In response to comment by on The Irrationality Game
Comment author: 05 October 2010 05:11:56PM 0 points [-]

True. I would estimate that our universe resembles the parent universe with probability ~50%.

In response to comment by on The Irrationality Game
Comment author: 07 October 2010 01:33:58PM 1 point [-]

Considering how much stuff like Conway's Game of Life, which bears no resemblance to our universe, gets simulated, I'd put the probability much lower.

Whenever you run anything that simulates anything Turing-complete (OK, a finite state machine is actually enough, given the finite amount of information storage even in our universe), there is a chance for practically anything to happen.
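To make the Game of Life reference concrete, here is a minimal sketch of one simulation step (the rules are the standard ones: a live cell survives with two or three live neighbours, a dead cell with exactly three becomes alive):

```python
from collections import Counter

def step(live):
    """Advance a set of live (x, y) cells by one Game of Life generation."""
    # Count, for every cell, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2, so two steps return it to itself.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True
```

Despite rules this simple, the Game of Life is Turing-complete, which is what gives the comment its force.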

Comment author: 28 July 2010 09:17:30PM *  4 points [-]

My point was that Go and chess are not actually understood. We don't actually know how they're played. There are hacks that allow programs to get good at those games without actually understanding the patterns involved, but recognizing the patterns involved is what humans actually find interesting about the games.

To clarify, "understanding chess" is an interesting problem. It turns out that "writing a program to be very good at chess" isn't, because it can be solved by brute force in an uninteresting way.

Another example: suppose computer program X and computer program Y are both capable of writing great novels, and human reviewers can't tell the difference between X's novels, Y's novels, and a human's. However, X uses statistical analysis at the word and sentence level to fill in a hard-coded "novel template," whereas Y creates characters, simulates their personality and emotions, and simulates interactions between them. Both have solved the (uninteresting) problem of writing great novels, but Y has solved the (interesting) problem of understanding how people write novels.

(ETA: I suspect that program X wouldn't actually be able to write great novels, and I suspect that writing great novels is therefore actually an interesting problem, but I could be wrong. People used to think that about chess.)

What's happened in AI research is that Y (which is actually AI) is too difficult, so people successfully solve problems the way program X (which is not AI) does. But don't let this confuse you into thinking that AI has been successful.

Comment author: 29 July 2010 04:38:39PM 2 points [-]

but Y has solved the (interesting) problem of understanding how people write novels.

I think the whole point of AI research is to do something, not to find out how humans do it. You personally might find psychology (how humans work) far more interesting than AI research (how to do things traditionally classified as 'intelligence', regardless of the actual method), but please don't generalize that preference and slap the label "uninteresting" onto problems.

What's happened in AI research is that Y (which is actually AI) is too difficult, so people successfully solve problems the way program X (which is not AI) does. But don't let this confuse you into thinking that AI has been successful.

When mysterious things cease to be mysterious, they tend to end up looking like program X.

Consider the advent of powered flight. By that line of argument, one could write: "We don't actually understand how flight works; there are hacks that allow machines to fly without actually understanding how birds fly." Or we could compare cars with legs and say that transportation in general is just an ugly, uninteresting hack.

Comment author: [deleted] 27 July 2010 05:22:14PM *  9 points [-]

The best one-sentence description I've read of how we think is "humans, if given the choice, would prefer to act as context specific pattern recognizers rather than attempting to calculate or optimize."

We're free to coast on simple pattern matching and automatic processing 90% of the time. Consciousness is only there because it's monitoring any deviation of action from intention. If something goes wrong and our learned rules and basic instincts aren't working, consciousness has to step in and try to cobble a solution together on the fly (usually badly).

Consciousness is a failure mode. He was trusted with root access, and he's spent the last hundred thousand years or so abusing it.

In response to comment by [deleted] on Alien parasite technical guy
Comment author: 28 July 2010 01:14:10PM 4 points [-]

If something goes wrong and our learned rules and basic instincts aren't working, consciousness has to step in and try to cobble a solution together on the fly (usually badly).

Considering that we've so completely outcompeted every other species that we haven't even been on the same playing field for thousands of years, I'd say consciousness has done rather well for itself.

Of course, this is just relative to other species; on an absolute scale we're probably not that good.

Comment author: 27 July 2010 06:28:52PM *  9 points [-]

While people say this sometimes, I don't think this is accurate. Most of the "AI" advances, as far as I know, haven't shed a lot of light on intelligence. They may have solved problems traditionally classified as AI, but that doesn't make the solutions AI; it means we were actually wrong about what the problems required. I'm thinking specifically of statistical natural language processing, which is essentially based on finding algorithms to analyze a corpus, and then using the results on novel text. It's a useful hack, and it does give good results, but it just tells us that those problems are less interesting than we thought.
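As an illustration of the kind of corpus statistics described above (a toy sketch, not any particular NLP system): a bigram model "learns" from a corpus purely by counting adjacent word pairs, then uses those counts on novel text.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word in the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # 'cat' (follows 'the' twice, 'mat' once)
```

Nothing in the model reflects what the words mean; it is exactly the sort of "useful hack" the comment is pointing at.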

Another example is chess and Go computing, where chess programs have gotten very very good just based on pruning and brute-force computation; the advances in chess computer ability were brought on by computing power, not some kind of AI advance. It's looking like the same will be true of Go programs in the next 10 to 20 years, based on Monte Carlo techniques, but this just means that chess and Go are less interesting games than we thought. You can't brute-force a traditional "AI" problem with a really fast computer and say that you've achieved AI.
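The brute-force approach mentioned above can be sketched in a few lines (illustrative only, nothing like a real chess engine): plain minimax over a game tree given as nested lists, where leaves are static evaluations for the maximizing player. Real engines add alpha-beta pruning and enormous computing power on top of exactly this recursion.

```python
def minimax(node, maximizing=True):
    """Exhaustively evaluate a game tree of nested lists with numeric leaves."""
    if isinstance(node, (int, float)):  # leaf: a static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two candidate moves, each answered by two possible replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # 3: the opponent picks the reply worst for us
```

The point stands: no pattern recognition or "understanding" appears anywhere, only search.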

Comment author: 28 July 2010 09:09:52AM 6 points [-]

but it just tells us that those problems are less interesting than we thought.

Extrapolating from the trend, it would not surprise me greatly if we eventually found out that intelligence in general is not as interesting as we thought.

When something is actually understood, the problem suffers from the "rainbow effect": "Oh, it's just reflected light from water droplets; how boring and not interesting at all." It becomes a common thing, and thus boring to some. I, for one, think Go and chess are much more interesting games now that we actually know how they are played, not just how to play them.

Comment author: 27 July 2010 02:12:35PM 0 points [-]

Whoever said that this conversation was about understanding consciousness?

Personally I think that that topic is a tarpit, which I prefer to ignore until we know how the brain works.

Comment author: 27 July 2010 02:23:47PM 1 point [-]

I merely wished to clarify the difference between consciousness and how it is implemented in the brain. I had no intention of implying that it was part of the discussion. In retrospect, the clarification was not required.

It's just way too common for the two issues to get mixed up, as can be seen on the various threads.

Comment author: 27 July 2010 12:55:24PM *  0 points [-]

I think the brain is probably ultimately computable by a classical computer, and yet quantum computing in the brain might still be significant. Here are a couple of the potential problems we'll have if the brain relies on quantum effects.

1) Difficulty in replacing bits of the brain functionally. If consciousness is some strange transitory gestalt quantum field, then you would need to make a brain prosthesis that has the same electromagnetic properties as a neuron. Which might be quite hard.

2) A harder time simulating brains/doing AI: you might have to push back the date you expect Whole Brain Emulations to become available (depending on when we expect quantum computers to be useful).

Comment author: 27 July 2010 01:12:05PM 0 points [-]

Quantum computing in the brain might be happening, but if we want to understand consciousness it is irrelevant (unless consciousness is noncomputable, in which case it becomes a claim about quantum physics yet again). It's as relevant as the details of transistors or vacuum tubes are to understanding sorting algorithms.

Naturally, when considering brain prostheses or simulating a brain, the actual method by which the brain computes is relevant.

Comment author: 22 July 2010 10:05:27AM *  2 points [-]

Thanks for the interesting article.

and regulation of blood flow: all important, but mostly things only a biologist could love.

I'd argue that people who like designing computer architectures should be interested in this as well.

Ignoring glia seems to me to have been a (mis)application of assuming the simplest explanation consistent with the facts, made when people weren't in a position to fully explain the brain. That is, people knew that you needed neurons to explain brain function, but because they couldn't predict how the brain functioned, they didn't know that a neural explanation was insufficient.

It is why I am hesitant to argue that there are no quantum effects of any sort in the brain (although the quantum effects people have suggested so far haven't been convincing).

Comment author: 27 July 2010 12:30:31PM 1 point [-]

It is why I am hesitant to argue that there are no quantum effects of any sort in the brain (although the quantum effects people have suggested so far haven't been convincing).

Considering that quantum physics is Turing-computable (unless it turns out to be nonlinear, etc.), any quantum effects could be reproduced with classical computation. Therefore the assumption that cognition must involve quantum effects implicitly assumes that quantum physics is nonlinear, or satisfies one of the other requirements for hypercomputation.
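A toy illustration of the classical-simulability point (a sketch, not a claim about how brains work): a small quantum state is just a vector of complex amplitudes, and gates are linear maps, so an ordinary classical program can reproduce the computation exactly, at the cost of memory exponential in the number of qubits.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a, b) of amplitudes."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

qubit = (1 + 0j, 0 + 0j)      # the |0> state
superposed = hadamard(qubit)  # equal superposition of |0> and |1>
back = hadamard(superposed)   # H is its own inverse
print(abs(back[0]) ** 2)      # ~1.0: probability of measuring |0> again
```

The exponential cost is why quantum computers may be *faster*, but it does not make them compute anything a Turing machine cannot.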

In this light, the first question to ask of anyone claiming quantum effects in the brain is: what computation performed in the brain would require, in effect, infinite loops completed in finite time, and on what physics experiment do they base the belief that quantum effects are more than Turing-complete?

In response to comment by on Consciousness
Comment author: 08 January 2010 08:47:51PM *  3 points [-]

I believe you are confused about what Dennett asserts. Quining Qualia would probably be the most obviously relevant essay easily located online, if you want to read him in his own words.

If you don't, the key point is that Dennett maintains that qualia, as commonly described, are necessarily:

1. ineffable
2. intrinsic
3. private
4. directly or immediately apprehensible in consciousness

...and that nothing actually exists with these properties. You see blue things, but there is no pure experience of blue behind your seeing blue things.

Edit: Allow me to emphasize that I do not consider the confusion to reflect poorly upon yourself - yours was a reasonable reading of Mitchell_Porter's characterization of Dennett's remarks. A better wording for the opening of my reply would be: "I think the quote doesn't reflect what Dennett believes."

In response to comment by on Consciousness
Comment author: 08 January 2010 09:04:52PM 3 points [-]

It seems I was wrong about Dennett's claims and misinterpreted the relevant sentence.

However, the original question remains and can be rephrased: what predictions follow from a world containing some intrinsic blueness?

The topmost cached thought I have is that this is exactly the same kind of confusion as the one presented in Excluding the Supernatural. Basically, qualia are assumed to be ontologically basic, instead of a neural firing pattern.

The big question is therefore (as already presented in this thread in various forms): what would you predict differently if you found yourself in a world with distinct blueness, compared to a world without it?

In response to Consciousness
Comment author: 08 January 2010 08:27:39PM -1 points [-]

You can do a Dennett and deny that anything is really blue.

I'd like to see what he'd do if presented with a blue ball and a red ball and given the task: "Pick up the blue ball and you'll receive 3^^^3 dollars."

Even though many claim to be confused about these common words, their actual behaviour betrays them. Which raises the question: what is the benefit of this wondering about "blueness"? What does it help anyone actually do?

Comment author: 13 December 2009 10:56:07PM 4 points [-]

My understanding is that it's possible to have a uniform distribution over a finite set, or an interval of the reals, but not over all integers or all reals, which is why I said, in the sentence before the one you quoted, "suppose there is one possible world for each integer in the set of all integers."
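The reason there is no uniform distribution over all integers is a one-line normalization argument:

```latex
\text{If } P(n) = c \text{ for every } n \in \mathbb{Z}, \text{ then }
\sum_{n \in \mathbb{Z}} P(n) =
\begin{cases}
0 & \text{if } c = 0,\\
\infty & \text{if } c > 0,
\end{cases}
```

so the total probability can never equal 1, whatever constant you pick.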

Comment author: 14 December 2009 07:37:56AM 0 points [-]

Since there is a 1:1 mapping between the set of all reals and the unit interval, we can just use the unit interval and define a uniform distribution there. Whatever distribution over the reals you choose, we can map it into the unit interval, as Pengvado said.
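For concreteness, one explicit such bijection between the reals and the open unit interval (any strictly increasing map onto $(0,1)$ would do):

```latex
f : \mathbb{R} \to (0,1), \qquad
f(x) = \frac{1}{2} + \frac{1}{\pi}\arctan(x), \qquad
f^{-1}(y) = \tan\!\left(\pi\left(y - \tfrac{1}{2}\right)\right).
```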

In the case of the set of all integers I'm not completely certain. But I'd look at the set of computable reals, which suffices for much of mathematics: normal calculus can be done with just the computable reals (the numbers for which there is an algorithm that produces any given decimal digit in finite time). So basically we have a mapping from the computable reals in the unit interval onto the set of all integers.

Another question: is the uniform distribution the entropy-maximizing distribution when we consider the set of all integers?

From a physical standpoint, why are you interested in countably infinite probability distributions? If we assume discrete physical laws, we'd have a finite number of possible worlds; on the other hand, if we assume continuous laws, we'd have an uncountably infinite number, which can be mapped into the unit interval.

Off the top of my head, I can imagine the set of discrete worlds of all sizes, which would be countably infinite. What other kinds of worlds could there be where this would be relevant?
