Comment author: outlawpoet 19 April 2010 02:46:15AM 5 points [-]

hi

Comment author: Wei_Dai 16 January 2010 04:35:13AM *  30 points [-]

This article (which I happened across today) written by Ben Goertzel should make interesting reading for a would-be AI maker. It details Ben's experience trying to build an AGI during the dot-com bubble. His startup company, Webmind, Inc., apparently had up to 130 (!) employees at its peak.

According to the article, the AGI was almost completed, and the main reason his effort failed was that the company ran out of money due to the bursting of the bubble. Together with the anthropic principle, this seems to imply that Ben is the person responsible for the stock market crash of 2000.

I was always puzzled why SIAI hired Ben Goertzel to be its research director, and this article only deepens the mystery. If Ben has done an Eliezer-style mind-change since writing that article, I think I've missed it.

ETA: Apparently Ben has recently been helping his friend Hugo de Garis build an AI at Xiamen University under a grant from the Chinese government. How do you convince someone to give up building an AGI when your own research director is essentially helping the Chinese government build one?

Comment author: outlawpoet 16 January 2010 10:20:05PM 1 point [-]

That is an excellent question.

Comment author: alyssavance 01 December 2009 01:53:57AM 7 points [-]

"Getting good popular writing and videos on the web, of sorts that improve AI risks understanding for key groups;"

Though good popular writing is, of course, very important, I think we sometimes overestimate the value of producing summaries/rehashings of earlier writing by Vinge, Kurzweil, Eliezer, Michael Vassar and Anissimov, etc.

Comment author: outlawpoet 01 December 2009 01:58:03AM *  1 point [-]

I must agree with this, although video and most writing OTHER than short essays and polemics would be mostly novel and interesting.

Comment author: Eliezer_Yudkowsky 30 November 2009 12:10:31AM 3 points [-]

Why yes, as a matter of fact, I previously came up with a very simple one-sentence test along these lines which I am not going to post here for obvious reasons.

Here's a different test that would also work, if I'd previously memorized the answer: "5 decimal digits of pi starting at the 243rd digit!" Although it might be too obvious, and now that I've posted it here, it wouldn't work in any case.
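(For anyone who wants to check a passphrase like this without trusting a lookup table, the digits are cheap to compute. Below is a minimal sketch using Gibbons' unbounded spigot algorithm, which generates decimal digits of pi with plain integer arithmetic; the function name and digit count are just for illustration.)

```python
def pi_digits(n):
    """Return the first n decimal digits of pi (the first entry is the
    leading 3), via Gibbons' unbounded spigot algorithm (integers only)."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    out = []
    while len(out) < n:
        if 4 * q + r - t < m * t:
            # The next digit is now stable no matter how many more
            # series terms are consumed: emit it.
            out.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # Consume one more term of the underlying series.
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return out

digits = pi_digits(248)
print(digits[243:248])  # 5 decimal digits of pi starting at the 243rd
```

Since digits[0] is the integer part (3), the Nth decimal digit is digits[N], so the slice [243:248] is exactly the five digits the test would use.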

Comment author: outlawpoet 01 December 2009 01:43:37AM 2 points [-]

If every snide, unhelpful jokey reply you post is secretly a knowing reference to something only one other person in the world can recognize, I retract every bad thing I ever said about you.

Comment author: Eliezer_Yudkowsky 29 September 2009 03:12:34AM 1 point [-]

This problem is also hidden in a great many AI decision systems within the 'hypothesis generation' system

At that point we're dealing with a full-fledged artificial heuristic and bias - the generation system is the heuristic, and the bias is the overly limited collection of hypotheses it manages to formulate for explicit attention at a given point.

I'd reserve "fallacy" for motivated or egregious cases, the sort that humans try to get away with.

Comment author: outlawpoet 29 September 2009 08:40:08PM 0 points [-]

Is the ability to explicitly (at a high, abstract level) reach down to the initial hypothesis generation and include, raise, or add hypotheses for consideration then always a pathology?

I can imagine a system where extremely low-probability hypotheses, by virtue of their complexity or the special evidence they require, might need to be formulated or added by high-level processes. But you could simply view that as another failure of the generation system, and require that even extremely rare or novel structures of hypotheses go through channels, to avoid this kind of disturbance of natural frequencies, as it were.

Comment author: Furcas 29 September 2009 02:35:39AM *  1 point [-]

Good post.

I'm not sure that 'privileging the hypothesis' deserves to be called a fallacy, though. It's only a bad idea because of the biases that humans happen to have. It can lead to misconceptions for us primates, but it's not a logical error in itself, is it?

Comment author: outlawpoet 29 September 2009 03:02:48AM 5 points [-]

It may not be a completely generic bias or fallacy, but it certainly can affect more than just human decision processes. There are a number of primitive systems that exhibit pathologies similar to what Eliezer is describing; speech recognition systems, for example, have a huge issue almost exactly isomorphic to this. Once some interpretation of an audio wave becomes a hypothesis, it is chosen in great excess of its real probability or confidence. This is the primary weakness of rule-based voice grammars: their pre-determined set of possible interpretations leads to unexpected inputs being slotted into the nearest pre-existing hypothesis, rather than producing a novel interpretation. The use of statistical grammars to try to pound interpretations back to their 'natural' probabilistic initial weight is an attempt to avoid this issue.

This problem is also hidden in a great many AI decision systems, within the 'hypothesis generation' system or equivalent. However elegant the ranking and updating system, if your initial list of candidates is weak, you distort your whole decision process.
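A toy illustration of that weakness (everything here, the three-word grammar, the inputs, and the 0.6 cutoff, is invented for the example, not taken from any real recognizer): a fixed-grammar system must slot every utterance into its nearest pre-existing hypothesis, while even a crude rejection threshold lets novel input register as novel.

```python
from difflib import SequenceMatcher

GRAMMAR = ["yes", "no", "maybe"]  # the only hypotheses the system can entertain

def similarity(a, b):
    # Rough string similarity in [0, 1]; a stand-in for acoustic match score.
    return SequenceMatcher(None, a, b).ratio()

def rule_based(utterance):
    """Always returns the nearest hypothesis, however poor the match."""
    return max(GRAMMAR, key=lambda h: similarity(utterance, h))

def with_rejection(utterance, threshold=0.6):
    """Returns the nearest hypothesis only if it actually fits; else None."""
    best = max(GRAMMAR, key=lambda h: similarity(utterance, h))
    return best if similarity(utterance, best) >= threshold else None

print(rule_based("yes"))      # "yes"  - a fair match
print(rule_based("nah"))      # "no"   - forced into the nearest hypothesis
print(with_rejection("nah"))  # None   - flagged as outside the grammar
```

The rule-based version is the pathology in miniature: once "no" is in the hypothesis list and "nah" is not, "nah" gets recognized as "no" no matter how weak the evidence.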

Comment author: wedrifid 28 September 2009 09:22:33AM *  4 points [-]

Geddes-standard? I don't understand the reference. An unflattering comparison to the biologist?

Edit: Thanks for the explanation EY. (And my expressed contempt for whoever thinks a query trying to understand the reference must be punished.)

Comment author: outlawpoet 28 September 2009 05:19:54PM 0 points [-]

Marc Geddes was a participant in the SL4 list back when it was a bit more active, and kind of proverbial for degenerating into posting very incoherent abstract theory. He's basically carrying the torch for Mentifex and Archimedes Plutonium and other such celebrities of incommunicable genius.

Comment author: Eliezer_Yudkowsky 21 September 2009 12:57:59AM 5 points [-]

Interesting; I confess I hadn't thought of that at all! Now I wonder whether using this rule along with the underlying anthropic premise would cause subjective experience to dissolve into chaos, or make no discernible difference (i.e. reality still ends up looking just as ordered for the most part), or whether it argues against the underlying anthropic premise by showing how easy it is to make probabilities refuse to converge to a timeless limit.

(And yes, it's that Rogers - you can tell because he's the closest thing the group has to a leader. One wonders how the blood got on his sweater. Surely it's not the blood of an enemy, as the original song implies. Perhaps it's the blood of Big Bird, who died fighting for Amber, or something along those lines.)

Comment author: outlawpoet 21 September 2009 05:36:46PM 3 points [-]

The bloodstained sweater in the original song refers to an urban legend that Mr. Rogers was a Marine Sniper in real life.

Comment author: DanArmak 11 August 2009 09:32:43AM *  1 point [-]

Like Douglas_Knight, I don't think current utilons are a useful unit.

Suppose your utility function behaves as you describe. If you play once (and win, with 90% probability), Omega will modify the universe in such a way that all the concrete things you derive utility from bring you twice as much utility, over the course of the infinite future. You'll live out your life with twice as much of all the things you value. So it makes sense to play this once, by the terms of your utility function.

You don't know, when you play your first game, whether or not you'll ever play again; your future includes both options. You can decide, for yourself, that you'll play once but never again. It's a free decision both now and later.

And now a second has passed and Omega is offering a second game. You remember your decision. But what place do decisions have in a utility function? You're free to choose to play again if you wish, and the logic for playing is the same as the first time around...

Now, you could bind yourself to your promise (after the first game). Maybe you have a way to hardwire your own decision procedure to force something like this. But how do you decide (in advance) after how many games to stop? Why one and not, say, ten?

OTOH, if you decide not to play at all - would you really forgo a one-time 90% chance of doubling your lifelong future utility? How about a 99.999% chance? The probability of death in any one round of the game can be made as small as you like, as long as it's nonzero and fixed for all future rounds. Is there no probability at which you'd take the risk for one round?
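The trap is easy to make concrete with the thread's numbers (90% survival per round, doubling of lifetime utility on each survival; the utility scale is an arbitrary placeholder): the expected utility of pre-committing to n draws grows without bound in n, while the probability of actually surviving those draws goes to zero.

```python
def expected_utility(n_draws, p_survive=0.9, base_utility=1.0):
    """EU of committing in advance to exactly n_draws draws: with probability
    p_survive**n_draws you survive them all and your utility doubles each
    time; otherwise you die and get nothing."""
    return (p_survive ** n_draws) * (2 ** n_draws) * base_utility

for n in [0, 1, 2, 10, 50]:
    print(n, expected_utility(n), 0.9 ** n)
```

Each extra draw multiplies expected utility by 0.9 * 2 = 1.8 > 1, so a naive maximizer never finds a round at which stopping looks better, even though the survival probability 0.9^n tends to zero; that is exactly the difficulty of deciding, in advance, after how many games to stop.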

Comment author: outlawpoet 11 August 2009 06:58:41PM 2 points [-]

Why on earth wouldn't I consider whether or not I would play again? Am I barred from doing so?

If I know that the card game will continue to be available, and that Omega can truly double my expected utility with every draw, then there are two possibilities. Either it's a relatively insignificant increase of expected utility over the next few minutes it takes me to die, in which case it's a foolish bet compared to my expected utility over the decades I conservatively have left; or Omega can somehow change the whole world in the radical fashion needed for my expected utility over those few minutes to dwarf my expected utility right now.

This paradox seems to depend on the idea that the card game is somehow excepted from the 90% likely doubling of expected utility. As I mentioned before, my expected utility certainly includes the decisions I'm likely to make, and it's easy to see that continuing to draw cards will result in my death. So, it depends on what you mean. If it's just doubling expected utility over my expected life IF I don't die in the card game, then it's a foolish decision to draw the first or any number of cards. If it's doubling expected utility in all cases, then I draw cards until I die, happily forcing Omega to make verifiable changes to the universe and myself.

Now, there are terms at which I would take the one round, IF you don't die in the card game version of the gamble, but it would probably depend on how it's implemented. I don't have a way of accessing my utility function directly, and my ability to appreciate maximizing it is indirect at best. So I would be very concerned about the way Omega plans to double my expected utility, and how I'm meant to experience it.

In practice, of course, any possible doubt that it's not Omega giving you this gamble far outweighs any possibility of such lofty returns, but the thought experiment has some interesting complexities.

Comment author: DanArmak 11 August 2009 03:12:16AM 1 point [-]

If you accept that you're maximizing expected utility, then you should draw the first card, and all future cards. It doesn't matter what terms your utility function includes. The logic for the first step is the same as for any other step.

If you don't accept this, then what precisely do you mean when you talk about your utility function?

Comment author: outlawpoet 11 August 2009 03:39:20AM 0 points [-]

I see; I misparsed the terms of the argument. I thought it was doubling my current utilons, whereas you're positing I have a 90% chance of doubling my currently expected utility over my entire life.

The reason I bring up the terms in my utility function is that they reference concrete objects, people, time passing, and so on. So measuring expected utility, for me, involves projecting the course of the world, and my place in it.

So, assuming I follow the suggested course of action, and keep drawing cards until I die, to fulfill the terms, Omega must either give me all the utilons before I die, or somehow compress the things I value into something that can be achieved in between drawing cards as fast as I can. This either involves massive changes to reality, which I can verify instantly, or some sort of orthogonal life I get to lead while simultaneously drawing cards, so I guess that's fine.

Otherwise, given the certainty that I will die essentially immediately, I certainly don't recognize that I'm getting a 90% chance of doubled expected utility, as my expectations certainly include whether or not I will draw a card.
