Why, when you consider the case where you calculated the odds of winning the lottery incorrectly, do you increase rather than decrease the odds?
In any case, with a lottery, you do know the odds of winning; they're stated on the ticket.
Agreed--the trick is that being wrong "only once" is deceptive. I may be wrong more than once on a one-in-forty-million chance. But I may also be wrong zero times in 100 million tries, on a problem as frequent and well-understood as the lottery, and I'm hesitant to say that any reading problems I may have would bias the test toward more lucrative mistakes.
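A small worked sketch of the asymmetry under discussion; the error rate and the "odds if I'm wrong" below are hypothetical numbers chosen purely for illustration, not figures anyone in the thread has claimed:

```python
# Hypothetical numbers for illustration only.
p_stated = 1 / 40_000_000      # odds printed on the ticket
p_calc_wrong = 1e-4            # assumed chance my reading/calculation is wrong
p_if_wrong = 1e-3              # assumed average odds in the worlds where I'm wrong

# Blend the two cases. Because p_stated is already near zero, the possibility
# of error can barely lower the estimate but can raise it substantially.
p_effective = (1 - p_calc_wrong) * p_stated + p_calc_wrong * p_if_wrong
print(p_stated, p_effective)   # ~2.5e-08 vs ~1.25e-07: uncertainty raises the effective odds
```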
I figure a fair amount of modern heritable information (such as morals) will not be lost. Civilization seems to be getting better at keeping and passing on records. You pretty much have to hypothesize a breakdown of civilization for much of genuine value to be lost - an unprecedented and unlikely phenomenon.
However, I expect increasing amounts of it to be preserved mostly in history books and museums as time passes. Over time, that will probably include most DNA-based creatures - including humans.
Evolution is rather like a rope. Just as no strand in a rope goes from one end to the other, most genes don't tend to do that either. That doesn't mean the rope is weak, or that future creatures are not - partly - our descendants.
an unprecedented and unlikely phenomenon
Possible precedents: the Library of Alexandria and the Dark Ages.
Certainly they can; what I am emphasizing is that "transhuman" is an overly strong criterion.
Definitely. Eliezer's success suggests something like an upper bound on the minimum intelligence necessary to pull that off.
Wait, since when is Eliezer transhuman?
Who said he was? If Eliezer can convince somebody to let him out of the box--for a financial loss no less--then certainly a transhuman AI can, right?
I don't think the publicly available details establish "how", merely "that".
Sure, though the mechanism I was referring to is "it can convince its handler(s) to let it out of the box through some transhuman method(s)."
For solving the Friendly AI problem, I suggest the following constraints for your initial hardware system:
1.) All outside inputs (and input libraries) are explicitly user-selected.
2.) No means for the system to take physical action (e.g., no robotic arms).
3.) No means for the system to establish unexpected communication (e.g., no radio transmitters).
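A minimal sketch, purely illustrative, of how such a closed configuration might be written down; the class and field names here are hypothetical and not part of the proposal above:

```python
# Illustrative sketch only; names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class BoxedSystemConfig:
    # 1) Only inputs the user has explicitly selected are allowed.
    user_selected_inputs: set[str] = field(default_factory=set)
    # 2) No physical actuators (robotic arms, etc.).
    actuators_enabled: bool = False
    # 3) No unexpected communication channels (radio transmitters, etc.).
    transmitters_enabled: bool = False

    def allow_input(self, source: str) -> bool:
        """An input source is admitted only if the user explicitly picked it."""
        return source in self.user_selected_inputs

config = BoxedSystemConfig(user_selected_inputs={"training_corpus_v1"})
assert not config.actuators_enabled and not config.transmitters_enabled
print(config.allow_input("training_corpus_v1"), config.allow_input("open_internet"))
```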
Once this closed system has reached a suitable level of AI, then the problem of making it friendly can be worked on much more easily and practically, and without risk of the world ending.
Setting out from the beginning to make a GAI friendly through some other means seems rather ambitious to me. Why not just work on AI now, make sure the AI is suitably restricted as you get close to the goal, and then finally use the AI itself as an experimental testbed for "personality certification"?
(Can someone explain/link me to why this isn't currently espoused?)
Best I can tell, it's not so much what you believe that matters here as what you say and do
I agree with the rest of your comment, but this seems very wrong to me. I'd say rather that the unity we (should) look for on LW is usually more meta-level than object-level, more about pursuing correct processes of changing belief than about holding the right conclusions. Object-level understanding, if not agreement, will usually emerge on its own if the meta-level is in good shape.
Indeed, I agree--I meant that it doesn't matter what conclusions you hold as much as how you interact with people as you search for them.
Are you saying that the difference between your examples is enough to include me or exclude me from LessWrong? Or is the difference in how you in particular relate to me here? What actions revolve around the differences you see in those examples?
I agree with Kevin that belief is insufficient for exclusion/rejection. Best I can tell, it's not so much what you believe that matters here as what you say and do: if you sincerely seek to improve yourself and make this clear without hostility, you will be accepted no matter the gap (as you have found with this post and previous comments).
The difference between the beliefs Kevin cited lies in the effect they may have on the perspective from which you can contribute ideas. Jefferson's deism had essentially no effect on his political and moral philosophizing (at least, his work could easily have been produced by an atheist). Pat Robertson's religiosity has a great deal of effect on what he says and does, and that would cause a problem.
The fact that you wrote this post suggests you are in the former category, and I for one am glad you're here.
To be clear, I wasn't arguing against applying the outside view--just against the belief that the outside view gives AGI a prior/outside view expected chance of success of (effectively) zero. The outside view should incorporate the fact that some material number of technologies not originally anticipated or even conceived do indeed materialize: we expected flying cars, but we got the internet. Even a 5% chance of Singularity seems more in line with the outside view than the 0% claimed in the reference class article, no?
I agree with your comment on the previous post, incidentally, that the probability of the Singularity as conceived by any individual or even LW in general is low; the possible types of Singularity are so great that it would be rather shocking if we could get it right from our current perspective. Again, I was responding only to the assertion that the outside view shows no successes for the class of breakthroughs containing AGI/cryo/Singularity.
I should note too that the entirety of the quotation you ascribe to me is originally from Eliezer, as the omitted beginning of the quoted sentence indicates.
Personally, I was a fan of the previous title. The perils of not speaking out, alas.
I didn't mind the old one, but I do like the "sticky brains" label that we can use for this concept in the future.