In response to Imaginary Positions
Comment author: michael_vassar3 24 December 2008 04:14:24AM 6 points

Eliezer: The distinction between direct observation and deduction is pretty ambiguous for a Bayesian, is it not? Also, MANY rationalists advocate "giving people the benefit of the doubt," which for them implies "behaving as if all people are reasonable and fair." Furthermore, almost all rationalists, you for instance, advocate stating literally true beliefs to people rather than stating the beliefs one has most reason to expect to be most informative or to produce the best results. MANY people refrain from becoming more rational out of fear that they would have to do the same, and out of a justified belief that doing so would cripple their efficacy in life.

James Miller: Good call! That point about non-lawyers deserves a post of its own somewhere. I seriously wonder where they got that idea. Strangest of all, they seem to have generalized that misconception to invent the "laws of nature," which really are literal.

Paul Crowley: Both my wife and I have had brief phases when we were atheists of the type whose existence you question.

Comment author: michael_vassar3 23 December 2008 05:52:21PM 2 points

I'd like to believe General Kurt, but I'm pretty sure he's a fictional character and that the line was invented for the pleasure of clever lazy people. DAMN! He's real?! Where can I find such an employer in a position of great power today?

Comment author: michael_vassar3 22 December 2008 08:13:56AM 8 points

"Borrowing someone else's knowledge really doesn't give you anything remotely like the same power level required to discover that knowledge for yourself." Hmmm. This doesn't seem to me to be the way it works in domains of cumulatively developed competitive expertise such as chess, go, gymnastics, and the like. In those domains the depth with which a technique penetrates you when you invent it is far less than the depth with which it penetrates the students you teach it to as children, or at least that's my impression. Of course, if we could alternately raise and lower our neoteny, gaining adult insights and then returning to childhood to truly learn them, our minds might grow beyond what humans have yet experienced.

Comment author: michael_vassar3 18 December 2008 09:39:31AM 3 points

Eliezer: Isn't your possible future self's disapproval one highly plausible reason for not spending lots of resources developing slowly?

Honestly, the long-recognized awfulness of classic descriptions of heaven seems like counter-evidence to the thesis of "Stumbling on Happiness". I can't be confident about how good I am at knowing what would make me happy: if the evidence says that people in general are bad at knowing what will make them happy, I should expect to be bad at it too. But if I know that people in general are comically awful at knowing what will make them happy *compared to myself and to most people whose judgment I respect*, then that fact basically screens off the standard empirical evidence of bad judgment as it applies to me.

Phil: Eliezer has repeatedly said that ems (formerly uploads) are people. Eliezer, can you please clarify this point in a simple direct comment aimed at Phil?

Komponisto: "Moral progress takes work, just like technological and intellectual progress. Indeed we should expect some correlation among these modes of progress, should we not?" Honestly, this seemed obvious before the 20th century, when the Germans showed that it was possible to be plausibly the world's most scientifically advanced culture while remaining morally backward. Our civilization still doesn't know what to make of that. We obviously see correlation, but also outliers.

Comment author: michael_vassar3 15 December 2008 02:19:40AM 8 points

I'm just incredibly skeptical of attempts to do moral reasoning by invoking exotic metaphysical considerations such as anthropics, even if one is confident that ultimately one will have to do so. Human rationality has enough trouble dealing with science. It's nice that we seem to be able to do better than that, but THIS MUCH better? REALLY? I think that there are terribly strong biases here toward deciding that "it all adds up to normality," even when it's not clear what 'normality' means. When one doesn't decide that, the tendency seems to be to decide that it all adds up to some cliche, which seems VERY unlikely. I'm also not at all sure how certain we should be of a big universe; personally I don't feel very confident of it. I'd say it's the way to bet, but I don't know at what odds it remains the way to bet. I rarely find myself in practical situations where my actions would differ if I held one particular metaphysical belief rather than another, though it does come up and have some influence on e.g. my thoughts on vegetarianism.

In response to You Only Live Twice
Comment author: michael_vassar3 13 December 2008 01:26:08AM 2 points

I would really like a full poll of this blog listing how many people are signed up for cryonics. Personally, I'm not, but I would consider it if existential risk were significantly lower OR my income were >$70K, and would definitely do it if both were the case AND SIAI had $15M of pledged endowment.

Comment author: michael_vassar3 07 December 2008 09:21:30PM 1 point

4 seems important to me. I wouldn't expect intelligence to come via that route, but that route does seem to put a fairly credible (e.g. I would bet 4:1 on claims that credible and expect to win in the long term), though high, soft upper bound on how long we can go on at roughly the current rate of scientific progress without achieving AI. I'd say it suggests such a soft upper bound in the 2070s. That said, I wouldn't be at all surprised by science ceasing to advance at something like the current rate long before then, accelerating or decelerating a lot even without a singularity.

Comment author: michael_vassar3 06 December 2008 05:38:46PM 5 points

Can't do basic derivatives? Seriously?!? I'm for kicking the troll out. His bragging about mediocre mathematical accomplishments isn't informative or entertaining to us readers.

In response to Hard Takeoff
Comment author: michael_vassar3 03 December 2008 05:52:23AM 1 point

Phil: It seems to me that the above qualitative analysis is sufficient to strongly suggest that six months is an unlikely high-end estimate for the time required for take-off, but even if take-off took six months I still wouldn't expect that humans would be able to react. The AGI would probably be able to remain hidden until it was in a position to create a singleton extremely suddenly.

Aron: It's rational to plan for the most dangerous survivable situations. However, it doesn't really make sense to claim that we can build computers superior to ourselves but that they can't improve themselves, since making them superior to us blatantly involves improving them. That said, yes, it is possible that some other path to the singularity could produce transhuman minds that can't quickly self-improve and which we can't quickly improve, for instance drug-enhanced humans, in which case hopefully those transhumans would share our values well enough to solve Friendliness for us.

Comment author: michael_vassar3 02 December 2008 06:16:35AM 3 points

Phil: Anthropic pressures should by default be expected to be spread uniformly through our evolutionary history, accelerating the evolutionary and pre-evolutionary record of events leading to us rather than merely accelerating the last stretch.

Exponential inputs into computer-chip manufacture seem to produce exponential returns with a doubling time significantly less than that of the inputs, implying increasing returns per unit input, at least if one measures in terms of feature count. Obviously returns are exponentially diminishing if one measures in time to finish some particular calculation. More interestingly, returns per unit of hardware design effort will be diminishing in terms of the depth to which problems in NP and exponential complexity classes can be searched, e.g. the number of moves ahead a chess program can look. OTOH, it bizarrely appears to be the case that over a large range of chess ranks, human players gain effective chess skill, measured by rating, with roughly linear training, while chess programs gain it only via exponential speed-up.
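The search-depth point can be made concrete with a rough sketch (the branching factor of 35 and the node counts below are illustrative assumptions, not figures from the comment): minimax-style search visits roughly b^d nodes at depth d, so effective depth grows only logarithmically with compute, and each hardware doubling buys a constant, fractional number of extra plies.

```python
import math

def effective_depth(nodes, branching_factor=35.0):
    """Depth d reachable by a search visiting `nodes` positions,
    assuming roughly branching_factor**d nodes at depth d."""
    return math.log(nodes) / math.log(branching_factor)

# Doubling the hardware (nodes searched) adds only log(2)/log(b) plies,
# regardless of how many nodes you started with.
base = effective_depth(1e9)
doubled = effective_depth(2e9)
print(round(doubled - base, 3))  # prints 0.195
```

With a branching factor of 35, each doubling yields about 0.195 extra plies of lookahead, which is one way to see why chess programs need exponential speed-ups to gain skill at a roughly linear rate.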

Society seems, in aggregate, to get roughly zero returns on efforts to cure cancer, though one can't rule out exponential returns starting from zero. OTOH, this seems consistent with the general inefficacy of medicine in aggregate as shown by the RAND study, which doesn't overturn the individual impacts, as shown by FDA testing, of many individual medical procedures. Life expectancy in the US has grown linearly while GDP per capita has grown exponentially, but among nations in the modern world life expectancy clearly has a different relationship to income: not linear, not logarithmic, more plausibly asymptotic, moving toward something in the early 80s.

I'm glad that you consider the claim about turning object-level knowledge metacognitive to be the most important and controversial claim. This seems like a much more substantial and precise criticism of Eliezer's position than anything Robin has made so far. It would be very interesting to see you and Eliezer discuss the evidence for or against sufficient negative feedback mechanisms, i.e. whether Eliezer's "just the right law of diminishing returns" exists.
