Comment author: bambi 13 June 2008 02:23:51PM 0 points [-]

Re your moral dilemma: you've stated that you think your approach needs a half-dozen or so supergeniuses (on the level of the titans of physics). Unless they have already been found -- and only history can judge that -- some recruitment seems necessary. Whether these essays capture supergeniuses is the question.

Demonstrated (published) tangible and rigorous progress on your AI theory seems more likely to attract brilliant, productive people to your cause.

In response to Timeless Control
Comment author: bambi 07 June 2008 02:52:18PM 0 points [-]

Unknown, your comment strikes me as a good way of looking at it.

The "me of now" as a region of configuration space contains residue of causal relationships to other regions of configuration space ("the past" and my memories of it). And the timeless probability field on configuration space causally connects the "me of now" to the "future" (other regions of configuration space). Just because this is true, and -- even more profoundly -- even though the "me of now" configuration space region has no special status (no shining "moment in the sun" as the privileged focus of a global clock ticking a path through configuration space), I am still what I am and I do what I do (from a local perspective which is all I have detailed information about), which includes making decisions.

Our decisions are based on what we know and believe, so accepting the viewpoint Eliezer has been putting forth is likely to have *some* impact on the decisions we make... I wonder what that impact is, and what it should be.

Comment author: bambi 05 June 2008 03:38:01AM 0 points [-]

So what tools do all you self-improving rationalists use to help with the "multiply" part of "shut up and multiply"? A development environment for a programming/scripting language? Mathematica? A desk calculator? Mathcad? Spreadsheet? Pen and paper?
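
For concreteness, here is the flavor of computation I have in mind -- a toy expected-value calculation in Python (the options and numbers are invented purely for illustration):

    # Toy "shut up and multiply": expected utility of two made-up options.
    # Each option maps to (probability, utility) pairs for its outcomes.
    options = {
        "intervention A": [(0.90, 10.0), (0.10, 0.0)],
        "intervention B": [(0.05, 500.0), (0.95, 0.0)],
    }

    for name, outcomes in options.items():
        expected = sum(p * u for p, u in outcomes)
        print(f"{name}: expected utility = {expected:.2f}")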

Comment author: bambi 31 May 2008 07:23:41PM 0 points [-]

Eliezer, your observers would hopefully have noticed hundreds of millions of years of increasing world-modeling cognitive capability, eventually producing a species with sufficient capacity to serve as a substrate for memetic progress, followed by a hundred thousand years and a hundred billion individual lives leading up to now.

Looking at a trilobite, the conclusion would not be that such future development is "impossible", but perhaps "unlikely to occur while I'm eating lunch today".

Comment author: bambi 31 May 2008 06:41:12PM 3 points [-]

Ok, sure. Maybe Bayesianism is much more broadly applicable than it seems. And maybe there are fewer fundamental breakthroughs still needed for a sufficient RSI-AI theory than it seems. And maybe the fundamentals could be elaborated into a full framework more easily than it seems. And maybe such a framework could be implemented in computer code more easily than it seems. And maybe the computing power required to execute the program at non-glacial speeds is less than it seems. And maybe the efficiency of the program can be automatically increased more than seems reasonable. And maybe "self improvement" can progress further into the unknown than it seems reasonable to guess. And maybe out there in the undiscovered territory there are ways of reasoning about and subsequently controlling matter that are more effective than seems likely, and maybe as these things are revealed we will be too stupid to recognize them.

Maybe.

To keep most people from rolling their eyes at the prospect, though, they'll have to be shown *something* more concrete than a "Maybe AI is like a nuclear reaction" metaphor.

In response to Timeless Beauty
Comment author: bambi 28 May 2008 11:55:30PM 0 points [-]

Ok, it looks to me like these answers (invoking the future over and over after accepting that there is no 't') are admissions that this type of physics thinking is just playfulness -- no consequences whatsoever for our own actions or for any observable aspect of the universe.

That's cool; I misunderstood, is all. Maybe life is just a dream, eh?

In response to Timeless Beauty
Comment author: bambi 28 May 2008 07:24:45PM -1 points [-]

Eliezer, if you believe all of this, why do you care so much about saving the world from "future" ravenous AIs? The paperclip universes just are, and the non-paperclip universes just are. Go to the beach, man! Chill out. You can't change anything; there is nothing to change.

Comment author: bambi 23 May 2008 05:41:57PM 0 points [-]

If arguing from fictional evidence is okay as long as you admit you're doing it, somebody should write the novelization.

Bayesian Ninja Army contacted by secret government agency due to imminent detonation of Logic Bomb* in evil corporate laboratory buried deep beneath some exotic location. Hijinks ensue; they fail to stop Logic Bomb detonation but do manage to stuff in a Friendliness supergoal at the last minute. Singularity ensues, with lots of blinky lights and earth-rending. Commentary on the human condition follows, ending in a sequel-preparing twist.

* see commentary on yesterday's post

In response to That Alien Message
Comment author: bambi 23 May 2008 05:13:00PM 1 point [-]

Ok, the phrase was just an evocative alternative to "scary optimization process" or whatever term the secret society is using these days to avoid saying "AI" -- because "AI" raises all sorts of (purportedly) irrelevant associations, like consciousness and other anthropomorphisms. The thing feared here is really just the brute power of Bayesian modeling and reasoning applied to self improvement (through self modeling) and world control (through world modeling).

If an already existing type of malware has claimed the term, invent your own colorful name. How about "Master"?


In response to That Alien Message
Comment author: bambi 23 May 2008 02:47:00PM 2 points [-]

Phillip Huggan: bambi, IDK anything about hacking culture, but I doubt kids need to read a decision theory blog to learn what a logic bomb is (whatever that is). Posting specific software code, on the other hand...

A Logic Bomb is the thing that Yudkowsky is trying to warn us about. Ice-nine might be a more apropos analogy, though -- the start of a catalytic chain reaction that transforms everything. Homo sapiens is one such logically exothermic, self-sustaining chain reaction, but it's a slow burn because brains suck.

A Logic Bomb has the following components: a modeling language and model-transformation operators based on Bayesian logic; a decision system (including goals and reasoning methods) that decides which operators to apply; a sufficiently complete self-model described in the modeling language; and similar built-in models of truth, efficiency, the nature of the physical universe (say, QM), and (hopefully) ethics.

Flip the switch and watch the wavefront expand at the speed of light.
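
To make that component list concrete, here is a purely hypothetical Python skeleton -- every name in it is my own invention and every method body is a stub, a sketch of the decomposition above rather than anyone's actual design:

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    Model = Dict[str, object]            # stand-in for a model in the modeling language
    Operator = Callable[[Model], Model]  # a model-transformation operator

    @dataclass
    class LogicBomb:
        self_model: Model                 # sufficiently complete self-model
        world_model: Model                # physical universe (say, QM), truth, efficiency
        ethics_model: Model               # hopefully present
        operators: List[Operator] = field(default_factory=list)

        def decide(self) -> Operator:
            # Decision system (goals and reasoning methods): choose which
            # operator to apply next. Left as a stub; this is the hard part.
            raise NotImplementedError

        def step(self) -> None:
            op = self.decide()
            self.self_model = op(self.self_model)  # "self improvement" via the self-model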

I assume that the purpose here is not so much to teach humanity to think and behave rationally, but rather to teach a few people to do so, or attract some who already do, and then recruit them into the Bayesian Ninja Army, whose purpose is to make sure that the imminent, inevitable construction and detonation of a Logic Bomb has results we like.

