In response to Surprised by Brains
Comment author: Will_Pearson 23 November 2008 10:11:24AM 3 points [-]

Believer: The search space is compressible -

The space of behaviors of Turing Machines is not compressible; subspaces are, but not the whole lot. What space do you expect the SeedAIs to be searching? If you could show that it is compressible and bound to contain a vast number of better versions of the SeedAI, then you could convince me that I should worry about Fooming.

As such when I think of self-modifiers I think of them searching the space of Turing Machines, which just seems hard.

Comment author: DilGreen 11 October 2010 02:02:43AM 2 points [-]

The space of possible gene combinations is not compressible - under the evolutionary mechanism.

The space of behaviours of Turing machines is not compressible, in the terms in which that compression has been envisaged.

The mechanism that compresses search space that Believer posits is something to do with brains; something to do with intelligence. And it works - we know it does: Kekulé works on the structure of benzene without success; sleeps, dreams of a serpent biting its own tail, and, waking, conceives of the benzene ring.

The mechanism (and everyone here believes that it is a mechanism) is currently mysterious. AI must possess this mechanism, or it will not be AI.

In response to Surprised by Brains
Comment author: DilGreen 11 October 2010 01:52:36AM 0 points [-]

If nothing else, hugely entertaining, and covers a great deal of ground too.

Comment author: Will_Pearson 19 November 2008 10:06:34PM 0 points [-]

to say nothing of a whole multicellular C. elegans earthworm

It would probably be better if you had said nothing of it. It eats dead, rotting vegetable matter, which you have just hypothetically removed. Plants are replicators too! So it would die in short order and time would be reversed.

But the significant thing was not how much material was recruited into the world of replication; the significant thing was the search, and the material just carried out that search.

Search is significant, but it was not the only significant thing: what was searched was also significant. If a brain that searched the space of good chess strategies had spontaneously appeared by chance, it would not be important. What was searched was "which patterns are good for survival to date", not "which patterns are expected to be good for survival". This is important: it is real, first-hand information. We cannot exist because of some delusive part of nature's mind that merely thinks we are good at surviving so far; we have to be!

Our existence is first-order information about the world. Our mental models are only second-order; they are one remove from reality. We try to update them by testing our reality against what the models predict, but there might always be black swans the models don't see. The machinery that creates them has to have been useful for surviving to date and for carrying out many tasks that help that survival, but the models themselves do not necessarily have to be useful, right, correct or true. I think there will always be a flow of information from the first-order bodies to the second-order models.

Comment author: DilGreen 11 October 2010 12:35:29AM 0 points [-]

Even a very small step forward in evolution, taken as a 'short-cut', would result in failure - life changed the chemistry around it; the headline example is the relative abundance and influence of free oxygen relative to CO2.

The point is that the search is ALWAYS for near neighbour variants, and even then, the huge majority of these are failures.

The (seemingly) vastly improbable success of variants that are not near neighbours has, I think, to do with complexity and the concomitant law of unintended (in this context, 'unwelcome' would be a better word, since no intention is involved) consequences. The larger the step, the exponentially larger the probability of catastrophic corollary consequences.
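The point about near-neighbour search can be caricatured in a few lines of code: a mutation-and-selection loop where a "step" of 1 flips a single bit, while a larger step jumps further through the search space. This is an illustrative sketch only - the bit-string genome, the toy fitness function, and the step sizes are all invented for the example, not anything from the thread.

```python
import random

random.seed(0)

GENOME_LEN = 64

def fitness(genome):
    # Toy fitness: the count of 1-bits, standing in for
    # "patterns good for survival to date".
    return sum(genome)

def mutate(genome, step):
    # Flip `step` randomly chosen bits. step=1 produces a near-neighbour
    # variant; larger steps are bigger jumps through the search space.
    child = genome[:]
    for i in random.sample(range(GENOME_LEN), step):
        child[i] ^= 1
    return child

def evolve(step, generations=2000):
    # Blind variation plus selection: keep a variant only if it
    # survives at least as well as its parent.
    current = [random.randint(0, 1) for _ in range(GENOME_LEN)]
    for _ in range(generations):
        child = mutate(current, step)
        if fitness(child) >= fitness(current):
            current = child
    return fitness(current)

print(evolve(step=1))   # near-neighbour search climbs steadily to the optimum
print(evolve(step=16))  # once fitness is high, big jumps almost always fail
```

With single-bit steps the search reaches the optimum, because there is always some near neighbour that is a strict improvement; with 16-bit jumps the search stalls well short of it, since a large step almost inevitably breaks more than it fixes - the 'unwelcome consequences' above.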

Comment author: aausch 08 October 2010 04:30:07PM *  0 points [-]

I may not be smart enough to debate you point-for-point on this, but I have the feeling about 60% of what you say is crap.

David Letterman, To Bill O'Reilly, in discussion about the supposed War on Christmas, as quoted in "In Letterman appearance, O'Reilly repeated false claim that school changed 'Silent Night' lyrics", Media Matters for America, (2006-01-04) (From Wikiquote)

Comment author: DilGreen 09 October 2010 11:34:13PM 1 point [-]

I imagine this is getting up-voted here in response to the sentiment, and I'm not going to vote it down. But this approach is more often used by deists against rationalists, and the next step is book-burning.

Comment author: Jonathan_Graehl 07 October 2010 08:00:01PM *  1 point [-]

I see your point. I was presuming a human mind w/ the typical range of experiences available to it.

Comment author: DilGreen 09 October 2010 11:17:00PM *  0 points [-]

The interesting thing about minds is that they are able to produce interesting conjunctions of, and inferences from, seemingly unrelated data/experiences. Minds appear to be more than the sum of their experiences. This ability appears to defy the best efforts of coders to parallel it.

EDIT: This got voted down, perhaps because of the above; it may be worth me stating that I am not posing a 'mysterious question' - the key words are 'appears to' - in other words, this is an aspect which needs significant further work.

I consider almost all code 'banal', in that almost all code 'performs little computation of interest'. Pavitra clearly distinguishes between 'interest' and 'value'.

Surely one way of looking at AI research is that it is an attempt to produce code that is not banal?

Comment author: Jonathan_Graehl 05 October 2010 08:11:52PM 1 point [-]

I'm having an extremely hard time understanding this quote. Its premises seem to contradict themselves.

How can a mind be original (not banal) if everything has been said and all knowledge is banal?

Only the set of beliefs that are actually routinely expressed can be considered banal; no matter if someone else has already said something, if it occurs to me organically, then it's probably useful.

Comment author: DilGreen 09 October 2010 11:08:48PM 0 points [-]

The implication is that connections between data are made by minds, and that minds that are not banal can make new and interesting connections between data.

Comment author: NihilCredo 07 October 2010 02:52:19AM 1 point [-]

Wouldn't surprise me if he'd been home-schooled.

Comment author: DilGreen 09 October 2010 10:14:44PM 4 points [-]

From a European perspective, and simultaneously from the perspective of one who sees most state-sanctioned educational approaches as almost comically counter-productive, the idea that appears common in the US - that home-schooled = fundamentalist Christian parents - is confusing. Many home educators in Europe are specifically atheist.

Comment author: magfrump 07 October 2010 04:59:45PM 0 points [-]

So when you say "speculative" you mean "generations-away speculation"?

I agree that I didn't really understand what your intent was from your post. If you were to say something along the lines of "AI is far enough away (on the tech-tree) that the predictions of current researchers shouldn't be taken into account by those who eventually design it" then I would disagree because it seems substantially overconfident. Is that about right?

Comment author: DilGreen 09 October 2010 09:47:49PM -2 points [-]

Um. I've still failed to be clear.

The nature of AI is that it is inherently so complex that, although we may well be able to get better at predicting the kinds of characteristics that might result from implementation, the actuality of implementation will likely not just surprise us, but confound us.

I'm saying that any attempt to develop approaches that lead to Friendly AI, while they are surely interesting and as worthwhile as any other attempts to push understanding, cannot be relied on by implementers of AI as more than hopeful pointers.

It's the relationship between the inevitable surprise and the attitude of researchers that is at the core of what I was trying to say, but having started out attempting to be contrarian, I've ended up risking appearing mean. I'm going to stop here.

Comment author: magfrump 06 October 2010 11:51:43PM 0 points [-]

Either you're saying "we can't say anything about AI" which seems clearly false, or you're saying "an AI will surprise us" which seems clearly true.

Depending on what you mean by speculative, you're either overconfident or underconfident, but I can't imagine a proposition that is "in between" enough to be 80% likely.

Comment author: DilGreen 07 October 2010 02:40:41PM 0 points [-]

I accept this analysis of what I wrote. In the attempt to be concise, I haven't really said what I meant very clearly.

I don't mean that "we can't say anything about AI", but what I am saying is that we are a very long way from being able to say anything particularly worth saying about AI.

By which I mean to say that we are in some situation analogous to that of a 19th century weather forecaster, trying to predict the following week's weather. It's worth pushing the quality of the tools and the analysis, but don't expect any useful, real-world applicable information for a few lifetimes. And my confidence goes up the more I think about it.

Which, in the context of the audience of LW, I hoped would be seen as more contrarian than it has been! Perhaps this clarification will help.

Comment author: Will_Newsome 07 October 2010 10:13:39AM 6 points [-]

My own brief and mostly ignorant thoughts: Climate change is probably anthropogenic. Climate change is possibly very dangerous, with, say, as a wild guess, a 10% chance of having severe socioeconomic worldwide repercussions, conditional on no AGI and no nanotech. There seem to be various easy ways to solve or ameliorate the problem (pumping stuff into the atmosphere or oceans), and if it came down to it, I think we'd implement those. The relevant nanotech doesn't seem incredibly difficult. Trying to cut down on carbon emissions seems obviously insane. Moralizing about the virtues of being green sounds obviously insane unless you're a politician or liberal socialite. If you find yourself caring deeply about climate change, your time would probably be better spent caring about bigger, more urgent, and less well-funded problems, like aging/death, or existential risks.

Comment author: DilGreen 07 October 2010 02:28:31PM 1 point [-]

Well, I promised I wouldn't, so I won't, but there are lots of broad statements without justification here, that I would love to expand.

Nevertheless, my actual question is being answered, if you and SarahC are at all representative (obviously, I understand that many other opinions will exist). Climate change IS seen as a serious threat, but the idea of changing lifestyles/the direction of industrialisation is seen variously as difficult/impossible/not objectively worthwhile - so we'll deal with the consequences as they arise.
