wedrifid comments on Goals for which Less Wrong does (and doesn't) help

Post author: AnnaSalamon, 18 November 2010 10:37PM

Comment author: XiXiDu, 19 November 2010 11:40:13AM

The traditional alternative is to deem subjects on which one cannot gather empirical data "unscientific", subjects on which respectable people should not speak...

There are some distinctions to be made here. Cryonics obviously provides a better chance of seeing the future after death than rotting six feet under does. Regarding retirement investment, just ask your parents or grandparents. Yet this argument against the necessity of empirical data breaks down at some point. Shaping the Singularity is not on a par with having a positive impact on the distant poor. If you claim that predictions and falsifiability are unrelated concepts, that's fine. But believing some predictions - e.g. a technological Singularity spawned by AGI seeds capable of superhuman recursive self-improvement - is not the same as believing others - e.g. that a retirement plan will provide for old age.

"I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified." Eliezer Yudkowsky, So You Want To Be A Seed AI Programmer

How should I interpret the above quote? If someone needs an advanced education to follow the arguments on Less Wrong, yet those arguments say that an advanced education is disadvantageous, how does Less Wrong help in deciding what to do? This is just an example of what I experience regarding Less Wrong: I'm unable to follow much of it, yet I'm told it can help me decide what to do.

The basic problem here is that the education necessary to follow Less Wrong would not only teach me to be wary of the arguments on Less Wrong but would also preclude me from acting on its suggestions. How so? The main consensus here seems to be cryonics and the dangers of AGI research. If it isn't, then at least the top rationalist on Less Wrong isn't as rational as suggested, which undermines the whole intention of the original post. So for now I'll assume that those two conclusions are the most important ones you can arrive at by learning from Less Wrong. Consequently, someone like me should aim to earn enough money to support friendly AI research and to buy a cryonics contract. But this is directly opposed to what I would have to do to arrive at those conclusions and be reasonably sure of their correctness. Amongst other things, I would have to study, which would not allow me to earn enough money for many years.

Comment author: wedrifid, 19 November 2010 05:29:33PM

"I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified." Eliezer Yudkowsky, So You Want To Be A Seed AI Programmer

How should I interpret the above quote? If someone needs an advanced education to follow the arguments on Less Wrong, yet those arguments say that an advanced education is disadvantageous, how does Less Wrong help in deciding what to do?

Your line of questioning here just seems strange in the context of the quote. The quote seems straightforward and not even all that relevant to whether Less Wrong is useful for people who struggle to understand it. To the kind of people who have even a remote possibility of doing useful work on a seed AI, it is just a trivial statement of Eliezer's personal opinion. While many don't agree with him, Eliezer has written elsewhere about his opinion of academic orthodoxy, as well as his own development with respect to his approach to AI. Such opinions can be taken with a grain of salt, as they would be coming from anyone else.

This is just an example of what I experience regarding Less Wrong: I'm unable to follow much of it, yet I'm told it can help me decide what to do.

There is value in making things as accessible as possible where this can be done without sacrificing the depth of the content. At the same time, there are always going to be people who are not capable of following content on complex topics, whether on rationality or anything else. Ultimately, all communities, whether online or off, have a target demographic and are not for everyone.