Comment author: kalla724 17 May 2012 01:11:41AM 7 points [-]

Hm. I must be missing something. No, I haven't read all the sequences in detail, so if these are silly, basic questions, please just point me to the specific articles that answer them.

You have an Oracle AI that is, say, a trillionfold better at taking existing data and producing inferences.

1) This Oracle AI produces inferences. It still needs to test those inferences (i.e., perform experiments) and get data that allow the next inferential cycle to commence. Without experimental feedback, the inferential chain will quickly either expand into an infinity of possibilities (i.e., beyond anything that any physically possible intelligence can consider) or deviate from reality. A general intelligence is only as good as the data its inferences are based upon.

Experiments take time, and data analysis takes time. No matter how efficient the inferential step becomes, this puts an absolute limit on how quickly its capability to actually change things can grow.

2) An Oracle AI that "goes FOOM" while confined to a server cloud would somehow have to create servitors capable of acting out its desires in the material world. Otherwise, you have a very angry and very impotent AI. If you increase a person's intelligence a trillionfold and then enclose them in a sealed concrete cell, they will never get out; their intelligence can calculate all possible escape solutions, but none will actually work.

Do you have a plausible scenario for how a "FOOM"-ing AI could - no matter how intelligent - minimize the oxygen content of our planet's atmosphere, or do anything of the sort? After all, it's not like we have any fully automated nanobot production factories that could be hijacked.

Comment author: dlthomas 17 May 2012 01:26:18AM *  2 points [-]

The answer from the sequences is that yes, there is a limit to how much an AI can infer based on limited sensory data, but you should be careful not to assume that just because it is limited, it's limited to something near our expectations. Until you've demonstrated that FOOM cannot lie below that limit, you have to assume that it might (if you're trying to carefully avoid FOOMing).

Comment author: mistercow 07 May 2010 04:06:07AM *  5 points [-]

I think it was on This American Life that I heard the guy's story. They even contacted a physicist to look at his "theory"; the physicist tried to explain to him that the units didn't work out. The guy's response was "OK, but besides that …"

He really seemed to think that this was just a minor nitpick that scientists were using as an excuse to dismiss him.

Comment author: dlthomas 16 May 2012 10:32:45PM *  1 point [-]

Why isn't it a minor nitpick? I mean, we use dimensioned constants in other areas; why, in principle, couldn't the equation be E = mc * (1 m/s)? If that were the only objection, and the theory made better predictions (which, obviously, it didn't, but bear with me), then I don't see any reason not to adopt it. Given that, I'm not sure why it should be a significant objection.

Edited to add: Although I suppose that would privilege the meter and second (actually, the ratio between them) in a universal law, which would be very surprising. I'm just saying that there are trivial ways to make the units check out without tossing out the theory. Likewise, of course, the fact that the units do check out shouldn't be taken too strongly in a theory's favor. Not that anyone here hasn't seen the XKCD, but I still need to link it, lest I lose my nerd license.
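To make the dimensional point concrete (my own illustration, not part of the original comment): energy has units of kg·m²/s², while mc has units of momentum, so a bare E = mc cannot balance unless a dimensioned constant such as 1 m/s is smuggled in:

\[
[E] = \mathrm{kg\,m^2\,s^{-2}}, \qquad
[mc] = \mathrm{kg\,m\,s^{-1}}, \qquad
[mc \cdot (1\ \mathrm{m/s})] = \mathrm{kg\,m^2\,s^{-2}} = [mc^2].
\]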

In response to GAZP vs. GLUT
Comment author: Monkeymind 16 May 2012 07:42:35PM *  1 point [-]

How can you be 100% confident that a look up table has zero consciousness when you don't even know for sure what consciousness is?

Why not just define consciousness in a rational, unambiguous, non-contradictory way and then use it consistently throughout? If we are talking about thought experiments here, it is up to us to make the assumption(s) in our hypothesis. I don't recall EY giving HIS definition of consciousness for his thought experiment.

However, if the GLUT behaves exactly like a human, and humans are conscious, then by definition the GLUT is conscious, whatever that means.

In response to comment by Monkeymind on GAZP vs. GLUT
Comment author: dlthomas 16 May 2012 08:58:39PM *  3 points [-]

Things that are true "by definition" are generally not very interesting.

If consciousness is defined by referring solely to behavior (which may well be reasonable, but is itself an assumption) then yes, it is true that something that behaves exactly like a human will be conscious IFF humans are conscious.

But what we are trying to ask, at the high level, is whether there is something coherent in conceptspace that partitions objects into "conscious" and "unconscious" in a way that resembles what we understand when we talk about "consciousness," and then whether it applies to the GLUT. Demonstrating that it holds for a particular set of definitions only matters if we are convinced that one of the definitions in that set accurately captures what we are actually discussing.

Comment author: JoshuaZ 15 May 2012 11:25:56PM 1 point [-]

Yes, insofar as the output is larger than the set of observations. Take MWI, for example: the output includes all the parts of the wavefunction's branches that we can't see. In contrast, Copenhagen only has outputs that we by and large do see. So the key issue here is that outputs and observable outputs aren't the same thing.

Comment author: dlthomas 16 May 2012 12:43:52AM 1 point [-]

Ah, fair. So in this case, we are imagining a sequence of additional observations (from a privileged position we cannot occupy) to explain.

Comment author: dlthomas 15 May 2012 09:44:52PM 13 points [-]

I think this might be the most strongly contrarian post here in a while...

Comment author: JoshuaZ 15 May 2012 08:13:55PM 1 point [-]

No, it is exactly as complicated. As demonstrated by its utilization of exactly the same mathematics.

Not all formalizations that give the same observed predictions have the same Kolmogorov complexity, and this is true even for much less rigorous notions of complexity. For example, consider a computer program that, when given a positive integer n, outputs the nth prime number. One simple implementation could just use trial division. But another could use a more complicated process, like, say, brute-force searching for a generator of (Z/pZ)*.

In this case, the math being used is pretty similar, so the complexity shouldn't be that different. But that's a more subtle and weaker claim.
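A minimal sketch of the two-programs point above (the function names and details are my own illustration, not from the comment): both primality tests compute the same function, so a program built on either one makes identical predictions, yet the two descriptions differ considerably in length and intricacy.

```python
# Two primality tests that compute the same function but have very different
# "description lengths" (illustrative code, not anyone's actual formalization).

def is_prime_trial_division(n):
    """Simple trial division."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_prime_via_generator(n):
    """n > 1 is prime iff some g has multiplicative order exactly n - 1 mod n,
    i.e. iff (Z/nZ)* is cyclic of order n - 1; search for such a g by brute force."""
    if n < 2:
        return False
    if n == 2:
        return True
    for g in range(2, n):
        x, order = g, 1
        while x != 1:
            x = (x * g) % n
            order += 1
            if order > n - 1:  # no element of order n - 1 along this orbit
                break
        if order == n - 1:
            return True
    return False

def nth_prime(n, is_prime):
    """Return the nth prime (1-indexed) using the supplied primality test."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if is_prime(candidate):
            count += 1
    return candidate

# Same outputs ("predictions"), different programs:
assert all(nth_prime(k, is_prime_trial_division) == nth_prime(k, is_prime_via_generator)
           for k in range(1, 15))
```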

Comment author: dlthomas 15 May 2012 09:43:56PM 2 points [-]

Not all formalizations that give the same observed predictions have the same Kolmogorov complexity[.]

Is that true? I thought Kolmogorov complexity was "the length of the shortest program that produces the observations" - how can that not be a one-place function of the observations?

Comment author: khafra 15 May 2012 02:13:02PM 2 points [-]

Whoops. I wasn't counting the sub-bullet as a power-of-two position; gotcha. FWIW, I still think the agreement bitmask is a fun perspective, even though I got it wrong (and there's the whole big-endian/little-endian question).

Comment author: dlthomas 15 May 2012 09:37:09PM 1 point [-]

(and there's the whole big-endian/little-endian question).

That's cleared up by:

I am number 25 school member, since I agree with the last and two more.
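Spelling out the arithmetic (my own reading, under the assumption that the lowest-order bit corresponds to the last listed position): 25 = 16 + 8 + 1 = 0b11001, so bit 0 marks agreement with the last item and bits 3 and 4 mark the "two more", which is what pins down the endianness.

```python
# Hypothetical decoding of "school number" 25 as an agreement bitmask.
# Assumption (mine): bit 0 (value 1) corresponds to the *last* listed position.
n = 25
agreed = [i for i in range(n.bit_length()) if (n >> i) & 1]
print(bin(n), agreed)  # 0b11001 [0, 3, 4] -> the last item plus two more
```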

Comment author: Bugmaster 15 May 2012 01:01:37AM 1 point [-]

IMO it would be enough to translate the original text in such a fashion that some large proportion (say, 90%) of humans who are fluent in both languages would look at both texts and say, "meh... close enough".

Comment author: dlthomas 15 May 2012 02:23:47AM 0 points [-]

My point was just that there are a whole lot of little issues that pull in various directions if you're striving for the ideal. What is or isn't close enough can depend very much on context. Certainly, for any particular purpose something less than that will be acceptable; how gracefully it degrades no doubt depends on context, and likely won't be uniform across various types of difference.

Comment author: NancyLebovitz 13 May 2012 04:35:51AM 1 point [-]

Is matching the vagueness of the original a reasonable goal?

Comment author: dlthomas 15 May 2012 12:59:21AM 0 points [-]

One complication here is that you ideally want it to be vague in the same ways the original was vague; I am not convinced this is always possible while still having the results feel natural/idiomatic.

Comment author: JGWeissman 14 May 2012 11:36:03PM 1 point [-]

Fair enough. By "not interesting", I meant it is not the sort of future that I want to achieve. Which is a somewhat idiosyncratic usage, but I think in line with the context.

Comment author: dlthomas 14 May 2012 11:50:32PM *  2 points [-]

What if we added a module that sat around and was really interested in everything going on?
