Comment author: komponisto 19 January 2010 04:28:33PM 2 points [-]

Personally, I was a fan of the previous title. The perils of not speaking out, alas.

Comment author: Technologos 19 January 2010 04:59:35PM 3 points [-]

I didn't mind the old one, but I do like the "sticky brains" label that we can use for this concept in the future.

Comment author: PhilGoetz 19 January 2010 05:54:49AM 5 points [-]

Why, when you consider the case where you calculated the odds of winning the lottery incorrectly, do you increase rather than decrease the odds?

In any case, with a lottery, you do know the odds of winning; they're stated on the ticket.

Comment author: Technologos 19 January 2010 09:01:36AM 1 point [-]

Agreed--the trick is that being wrong "only once" is deceptive. I may be wrong more than once on a one-in-forty-million chance. But I may also be wrong zero times in 100 million tries, on a problem as frequent and well-understood as the lottery, and I'm hesitant to say that any reading problems I may have would bias the test toward more lucrative mistakes.
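The exchange above (why a chance of having miscalculated should *increase* your estimated odds of winning) can be illustrated with a quick marginalization. All numbers here are illustrative assumptions, not figures from the discussion: a hypothetical error rate and a hypothetical win probability conditional on having erred.

```python
# Sketch: a small chance of having miscalculated can swamp a tiny stated probability.
# p_error and p_if_error are assumed, illustrative values.

p_stated = 1 / 40_000_000   # odds as printed on the ticket
p_error = 1e-4              # assumed chance I misread or miscomputed the odds
p_if_error = 1 / 1_000      # assumed (much larger) win chance given that I erred

# Total win probability, marginalizing over whether I erred:
p_win = (1 - p_error) * p_stated + p_error * p_if_error

print(p_win)  # larger than p_stated: uncertainty about the calculation raises the estimate
print(p_error * p_if_error / p_win)  # fraction of the probability mass from the error branch
```

Because a miscalculation is far more likely to hide a *larger* win probability than a smaller one (the stated odds are already near zero), folding in the error term can only push the estimate up, which is the answer to PhilGoetz's question.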

In response to comment by Roko on Advice for AI makers
Comment author: timtyler 17 January 2010 09:56:32AM *  1 point [-]

I figure a fair amount of modern heritable information (such as morals) will not be lost. Civilization seems to be getting better at keeping and passing on records. You pretty much have to hypothesize a breakdown of civilization for much of genuine value to be lost - an unprecedented and unlikely phenomenon.

However, I expect increasing amounts of it to be preserved mostly in history books and museums as time passes. Over time, that will probably include most DNA-based creatures - including humans.

Evolution is rather like a rope. Just as no strand in a rope goes from one end to the other, most genes don't tend to do that either. That doesn't mean the rope is weak, or that future creatures are not - partly - our descendants.

Comment author: Technologos 17 January 2010 09:16:23PM 1 point [-]

an unprecedented and unlikely phenomenon

Possible precedents: the Library of Alexandria and the Dark Ages.

Comment author: RobinZ 16 January 2010 10:19:19PM 2 points [-]

Certainly they can; what I am emphasizing is that "transhuman" is an overly strong criterion.

Comment author: Technologos 16 January 2010 10:21:25PM 3 points [-]

Definitely. Eliezer reflects perhaps a maximum lower bound on the amount of intelligence necessary to pull that off.

Comment author: RobinZ 16 January 2010 09:03:28PM 1 point [-]

Wait, since when is Eliezer transhuman?

Comment author: Technologos 16 January 2010 09:23:54PM 6 points [-]

Who said he was? If Eliezer can convince somebody to let him out of the box--for a financial loss no less--then certainly a transhuman AI can, right?

Comment author: Alicorn 16 January 2010 08:35:56PM 7 points [-]

I don't think the publicly available details establish "how", merely "that".

Comment author: Technologos 16 January 2010 08:56:23PM 4 points [-]

Sure, though the mechanism I was referring to is "it can convince its handler(s) to let it out of the box through some transhuman method(s)."

In response to Advice for AI makers
Comment author: zero_call 16 January 2010 08:29:05PM 2 points [-]

For solving the Friendly AI problem, I suggest the following constraints for your initial hardware system:

1.) All outside input (and input libraries) is explicitly user selected.
2.) No means for the system to establish physical action (e.g., no robotic arms).
3.) No means for the system to establish unexpected communication (e.g., no radio transmitters).

Once this closed system has reached a suitable level of AI, the problem of making it friendly can be worked on much more easily and practically, and without risk of the world ending.

Setting out from the beginning to make a GAI friendly through some other means seems rather ambitious to me. Why not just work on AI now, make sure the AI is suitably restricted as you approach the goal, and then finally use the AI itself as an experimental testbed for "personality certification"?

(Can someone explain/link me to why this isn't currently espoused?)

Comment author: Technologos 16 January 2010 08:32:26PM 8 points [-]

This is essentially the AI box experiment. Check out the link to see how even an AI that can only communicate with its handler(s) might be lethal without guaranteed Friendliness.

Comment author: orthonormal 16 January 2010 03:06:07AM 7 points [-]

Best I can tell, it's not so much what you believe that matters here as what you say and do

I agree with the rest of your comment, but this seems very wrong to me. I'd say rather that the unity we (should) look for on LW is usually more meta-level than object-level, more about pursuing correct processes of changing belief than about holding the right conclusions. Object-level understanding, if not agreement, will usually emerge on its own if the meta-level is in good shape.

Comment author: Technologos 16 January 2010 07:03:27PM 1 point [-]

Indeed, I agree--I meant that it doesn't matter what conclusions you hold as much as how you interact with people as you search for them.

In response to comment by Kevin on The Wannabe Rational
Comment author: MrHen 15 January 2010 10:04:28PM *  5 points [-]

Are you saying that the difference between your examples is enough to include me or exclude me from LessWrong? Or is the difference in how you in particular relate to me here? What actions revolve around the differences you see in those examples?

In response to comment by MrHen on The Wannabe Rational
Comment author: Technologos 16 January 2010 12:45:27AM 1 point [-]

I agree with Kevin that belief is insufficient for exclusion/rejection. Best I can tell, it's not so much what you believe that matters here as what you say and do: if you sincerely seek to improve yourself and make this clear without hostility, you will be accepted no matter the gap (as you have found with this post and previous comments).

The difference between the beliefs Kevin cited lies in the effect they may have on the perspective from which you can contribute ideas. Jefferson's deism had essentially no effect on his political and moral philosophizing (at least, his work could easily have been produced by an atheist). Pat Robertson's religiosity has a great deal of effect on what he says and does, and that would cause a problem.

The fact that you wrote this post suggests you are in the former category, and I for one am glad you're here.

Comment author: Technologos 15 January 2010 04:48:46PM -1 points [-]

To be clear, I wasn't arguing against applying the outside view--just against the belief that the outside view gives AGI a prior/outside view expected chance of success of (effectively) zero. The outside view should incorporate the fact that some material number of technologies not originally anticipated or even conceived do indeed materialize: we expected flying cars, but we got the internet. Even a 5% chance of Singularity seems more in line with the outside view than the 0% claimed in the reference class article, no?

I agree with your comment on the previous post, incidentally, that the probability of the Singularity as conceived by any individual or even LW in general is low; the possible types of Singularity are so great that it would be rather shocking if we could get it right from our current perspective. Again, I was responding only to the assertion that the outside view shows no successes for the class of breakthroughs containing AGI/cryo/Singularity.

I should note too that the entirety of the quotation you ascribe to me is originally from Eliezer, as the omitted beginning of the quoted sentence indicates.
