Comment author: dxu 15 December 2014 05:25:27AM 0 points

> LW seems to have a rather mixed reputation

This interests me. I haven't been around here for very long, so if there are any particular incidents that have occurred in the past, I wouldn't be aware of them (sans the basilisk, of course, because that whole thing just blew up). Why does LW have such a mixed reputation? I would chalk it up to the "Internet forum" effect, because most mainstream researchers probably don't trust Internet forums, but MIRI seems to have the same thing going on, so it can't (just) be that. Is it just due to the weirdness, possibly causing LW/MIRI to be viewed as crankish? Or something else?

Comment author: ctintera 16 December 2014 11:05:29AM 2 points

Many people (specifically, people over at RationalWiki, and probably elsewhere as well) see the community as insular, or as a Yudkowsky Personality Cult, or think that some of the weirder-sounding ideas widely espoused here (cryonics, FAI, etc.) "might benefit from a better grounding in reality".

Still others reflexively write LW off based on the use of fanfiction (a word of dread and derision in many circles) to recruit members.

Even the jargon derived from the Sequences may put some people off. Despite the Sequences' staunch avoidance of hot-button politics, they still import a few lesser controversies. For example, there still exist people who outright reject Bayesian probability, and many more who see Bayes' theorem as a tool that is valid only in a very narrow domain. Brazenly disregarding their opinion can be seen as haughty, even if the math is on your side.

Comment author: DanielLC 09 December 2014 06:18:25PM 0 points

> And I doubt whether that is ever truly possible.

It's possible. We're an example of that. The question is whether it's humanly possible.

There's a common idea of an AI being able to make another AI twice as smart as itself, which could in turn make another twice as smart as itself, and so on, causing an exponential increase in intelligence. But it seems just as likely that an intelligence could only make one half as smart as itself, and if that limit applies to us too, we'll never even be able to get the first human-level AI.
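A minimal sketch of the two regimes (the growth ratio r, the starting level, and the function name here are illustrative assumptions, not anything from the thread):

    # Toy model: each generation builds a successor r times as smart as itself,
    # so after n generations intelligence is initial * r**n. Everything hinges
    # on whether r is above or below 1.
    def intelligence_after(n: int, r: float, initial: float = 1.0) -> float:
        return initial * r ** n

    for r in (2.0, 0.5):
        print(f"r={r}:", [round(intelligence_after(n, r), 3) for n in range(5)])
    # r=2.0 yields 1, 2, 4, 8, 16, ...: an exponential takeoff.
    # r=0.5 yields 1, 0.5, 0.25, ...: the lineage never passes its starting point.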

Comment author: ctintera 10 December 2014 11:40:00AM 0 points

The example you give to prove plausibility is also a counterexample to the argument you make immediately afterwards. We know that less-intelligent or even non-intelligent things can produce greater intelligence because humans evolved, and evolution is not intelligent.

It's more a matter of whether we have enough time to dredge something reasonable out of the problem space. If we were smarter, we could search it faster.
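A minimal sketch of that contrast between blind and guided search (the target function, ranges, seed, and budgets are all illustrative assumptions):

    import random

    def score(x: float) -> float:
        # An arbitrary one-dimensional "problem space" whose peak sits at 3.14.
        return -(x - 3.14) ** 2

    def blind_search(budget: int) -> float:
        # Sample random points and keep the best: a crude stand-in for evolution.
        return max((random.uniform(-10, 10) for _ in range(budget)), key=score)

    def hill_climb(budget: int, step: float = 1.0) -> float:
        # Locally improve a single candidate: a crude stand-in for a smarter searcher.
        x = random.uniform(-10, 10)
        for _ in range(budget):
            candidate = x + random.uniform(-step, step)
            if score(candidate) > score(x):
                x = candidate
        return x

    random.seed(0)
    print("blind search found:", round(blind_search(100), 2))
    print("hill climbing found:", round(hill_climb(100), 2))
    # Both land near 3.14 eventually; the guided search typically closes in with
    # far fewer evaluations, which is the "smarter searches faster" point.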
