Comment author: shrink 01 May 2012 05:34:40AM *  -1 points [-]

So much can go wrong with such reasoning, given that an intelligence (even at the scale of a galaxy of Dyson spheres) is not a perfect God, that the arguments become irrelevant and entirely worthless. Furthermore, there are plenty of ways non-orthogonality could hold that are not covered by 'converges' - e.g. almost all intelligences with wrong moral systems crashing or failing to improve.

meta: The tendency to talk seriously about the products of very bad reasoning really puts an upper bracket on the sanity of newcomers to LW. So does the idea that a very bad argument trumps authority (when it comes to the whole topic).

Comment author: shrink 01 May 2012 05:31:04AM *  3 points [-]

You can represent any form of agency with a utility function that is 0 for doing what the agency does not want to do, and 1 for doing what it wants to do. This looks like a special case of that triviality: as true as it is irrelevant. Generally, one of the problems with insufficient training in math is the lack of training in not reading extra purpose into mathematical definitions.
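For concreteness, the trivial construction in question can be written down in one line (a minimal sketch; W_A is just shorthand for 'whatever the agency in fact wants to do'):

    u_A(a) = \begin{cases} 1 & \text{if } a \in W_A \\ 0 & \text{otherwise} \end{cases}

Every behavior of the agency then 'maximizes' u_A, which is exactly why the representation is always true and tells you nothing.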

Comment author: Sarokrae 29 April 2012 08:14:37PM *  13 points [-]

I have no grounding in cogsci/popular rationality, but my initial impression of LW was along the lines of "hmm, this seems interesting, but nothing seems that new to me..." I stuck around for a while and eventually found the parts that interested me (hitting rocky ground around the time I reached the /weird/ parts), but for a long while my impression was that this site had too high a ratio of rhetoric to actual content, and presented itself as more revolutionary than its content justified.

My (better at rationality than me) OH had a more extreme first impression of approximately "These people are telling me nothing new, or vaguely new things that aren't actually useful, in a tone that suggests that it's going to change my life. They sound like a bunch of pompous idiots." He also stuck around though, and enjoyed reading the sequences as consolidating his existing ideas into concrete lumps of usefulness.

From these two limited points of evidence, I timidly suggest that although LW is pitched at generic rational folk, and contains lots of good ideas about rationality, the way things are written over-represents the novelty and importance of some of the ideas here, and may actively put off people who have good ideas about philosophy and rationality but treat them as "nothing big".

Another note - jumping straight into the articles helped neither of us, so it's probably a good idea to simplify navigation, as has already been mentioned, and make the "About" page more prominent, since that gives a good idea to someone new as to what actually happens on this site - something that is quite non-obvious.

Comment author: shrink 30 April 2012 07:47:50AM *  4 points [-]

I think you hit the nail on the head. It seems to me that LW exhibits bracketing by rationality - i.e. there's a lower limit below which you don't find the site interesting, a range in which you see it as a rationality community, and an upper limit above which you see it as self-important pompous fools who are very wrong on a few topics and not interesting on the others.

Dangerously wrong, even. Progress in computing technology leads to new cures for diseases, and misguided advocacy of the great harm of such progress, done by people with no understanding of the limitations of computational processes in general (let alone 'intelligent' processes), is not unlike anti-vaccination campaigning by people with no solid background in biochemistry. Donating for vaccine-safety research performed by someone without a solid background in biochemistry is not only stupid, it will kill people. Computer science is no different, now that it is used for biochemical research. No honest, moral individual can go ahead and speak of the great harms of medically relevant technologies without first obtaining a very solid background, with a solid understanding of the boring fundamentals and with independent testing of oneself - to avoid self-delusion - by doing something competitive in the field. Especially so when those concerns are not shared by more educated, knowledgeable, or accomplished individuals. The only way it could be honest is if one honestly believes oneself to be a lot, lot smarter than the smartest people on Earth, and one can't honestly believe such a thing without either accomplishing something impressive that a great number of the smartest people failed to accomplish, or being a fool.

Comment author: fubarobfusco 29 April 2012 09:09:15PM 2 points [-]

Popularizing ideas from contemporary cognitive science and naturalized philosophy seems like a pretty worthy goal in and of itself. I wonder to what extent the "Less Wrong" identity helps this (by providing a convenient label and reference point), and to what extent it hurts (by providing an opportunity to dismiss ideas as "that Less Wrong phyg"). I suspect the former dominates, but the latter might be heard from more.

Comment author: shrink 30 April 2012 07:44:28AM *  3 points [-]

Popularization is better without novel jargon though.

Comment author: asr 29 April 2012 09:56:05PM *  12 points [-]

honest people can't stay self deluded for very long.

This is surely not true. Lots of wrong ideas last a long time beyond when they are, in theory, recognizably wrong. Humans have tremendous inertia to stick with familiar delusions rather than replace them with new notions.

Consider any long-lived superstition, pseudoscience, etc. To pick an uncontroversial example, astrology. There were very powerful arguments against it going back to antiquity, and there are believers down to the present. There are certainly also conscious con artists propping up these belief structures -- but they are necessarily the minority of purported believers. You need more victims than con artists for the system to be stable.

People like Newton and Kepler -- and many eminent scientists since -- were serious, sincere believers in all sorts of mystical nonsense -- alchemy, numerology, and so forth. It's possible for smart, careful people to persistently delude themselves -- even when the same people, in other contexts, are able to evaluate evidence accurately and form correct conclusions.

Comment author: shrink 30 April 2012 07:34:21AM *  3 points [-]

That's why I said 'self deluded', rather than just 'deluded'. There is a big difference between believing something incorrect that's believed by default, and coming up, yourself, with a very convenient incorrect belief that makes you feel good and pays the bills, and then actively working to avoid any challenges to that belief. Honest people are those who put such beliefs to good scrutiny (not just talk about putting such beliefs to scrutiny).

Honesty is an elusive matter when the belief works like the dragon in the garage. When you are lying, you have to deceive computational processes that are roughly your equals. That excludes all straightforward approaches to lying, such as waking up in the morning and thinking 'how can I be really bad and evil today?'. Lying is a complicated process, with many shortcuts when it comes to the truth. I define lying as the successful generation of convincing untruths - a black-box definition that doesn't get into which parts of the cortex are processing the truth and which are processing the falsehoods. (I exclude the occasional accidental generation of such untruths by mistake, unless the mistakes are being chosen.)

Comment author: IlyaShpitser 29 April 2012 03:22:24PM *  22 points [-]

I am a counterexample. I think Eliezer is a self-important wanker, but I have a favorable view of LW as a whole. I agree that I might be rare. I also wouldn't describe myself as a "part of the LW community." I think I attended a total of 1 meetup.

Comment author: shrink 29 April 2012 07:44:30PM *  5 points [-]

Well, the issue is that LW is heavily biased towards agreement with the rationalizations of the self-important wankery in question (the whole FAI/uFAI thing)...

With the AI, basically, you can see folks who have no understanding whatsoever of how to build practical software, and whose idea of AI is 'predict outcomes of actions, choose actions that give the best outcome' (an entirely impractical model, given the enormous number of actions when innovating), accusing the folks in the industry who do understand these things of anthropomorphizing the AI - and taking it as an operating assumption that they somehow know better, on the basis of thinking about some impractical abstract mathematical model. It is like futurists in 1900 accusing engineers of bird-morphizing the future modes of transportation when the engineers speak of wings. Then you see widespread agreement with various irrational nonsense, mostly cases of reverse stupidity, like 'not anthropomorphizing' the AI far past the point of actually not anthropomorphizing, into negative-anthropomorphizing land: whenever a human does some sort of efficient but imperfect trick, the AI supposedly performs the terribly inefficient perfect solution, to the point of utter ridiculousness where the inefficiency may be too big for a galaxy of Dyson spheres to handle, even given quantum computing.
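To put a rough number on the 'enormous number of actions' point (a toy calculation of my own, purely illustrative): with b primitive actions available at each step and plans of depth d, naive enumerate-and-evaluate has to consider

    b^{d} \text{ candidate plans}, \qquad \text{e.g. } b = 100,\ d = 10 \;\Rightarrow\; 100^{10} = 10^{20}

evaluations, which is why nobody building practical software treats 'predict every outcome, pick the best' as a usable design.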

Then there's this association with a bunch of folk who basically talk other people into giving them money. That puts a very sharp divide - either you agree they are geniuses saving the world, or they are sociopaths; not a lot of middle road here, as honest people can't stay self deluded for very long.

Comment author: siodine 28 April 2012 04:56:02PM *  16 points [-]

My first impression of lesswrong was of a community devoted to pop science, sci-fi, and futurism. Also, around that time singularitarianism was getting a bad name for good reasons (though it was the Kurzweil kind, d'oh), so I closed the tab thinking I wasn't missing anything interesting. It wasn't until years later, when I was getting tired of the ignorance and arrogance of the skeptic community, that I found my way back to lesswrong via some linked post that showed careful, honest thinking.

It would be a good idea to put up a highly visible link on the front page addressing new visitors' immediate criticisms. For example:

  • Is this place a Kurzweil fanclub?
  • Are you guys pulling most of the content on this site out of your ass?
  • Why should I care what you people have to say? The people here seem weird.
  • I think most of what you people believe is bullshit, am I not welcome here?

Another thing: the layout of this site will take people more than ten seconds to grok, which is enough to make most people just leave. For instance, I'd rename 'discussion' to 'forum' and 'main' to 'rationality blog' or just 'blog'.

Comment author: shrink 28 April 2012 07:23:17PM *  -6 points [-]

Is this place a Kurzweil fanclub?

TBH, I'd rather listen to Kurzweil... I mean, he did create OCR reading software, and other cool stuff. Here we have:

http://lesswrong.com/lw/6dr/discussion_yudowskys_actual_accomplishments/

http://lesswrong.com/lw/bvg/a_question_about_eliezer/

Looks like he went straight for the hardest problems in the world (I can't see successful practice on easier problems that are not trivial).

This site has a captcha, a challenge that people solve easily but bots don't - despite the possibility that some blind guy would fail to post a world-changing insight because of it, the FAI effort would go the wrong way, and we would all die. That is not seen as irrational. Many smart people, likewise, usually implement an 'arrogant newbie filter'; a genius can rather easily solve things that other smart people can't...

It is kind of hypocritical (and irrational) to assume 'stupid bot' when the captcha is not answered, but to expect others to assume 'genius' when no challenges were solved. Of course not everyone filters, and via the internet you can reach plenty of people who won't filter for one reason or another, or who will only look at superficial signals, but exploiting this is not good.

Comment author: khafra 27 April 2012 02:40:53PM 8 points [-]

You know, an uncharitable reading of this would almost sort-of kinda maybe construe it as a rebuke of the LW community. Almost.

Comment author: shrink 28 April 2012 08:34:25AM *  2 points [-]

It's more a question of how charitably you read LW, maybe? The phenomenon I am speaking of is quite generic. About 1% of people are clinical narcissists (probably more); that's a lot of people, and narcissists dedicate more resources to self-promotion and take on projects that no well-calibrated person of the same expertise would attempt - such as, e.g., building a free-energy generator without having studied physics or invented anything less grandiose first.

Comment author: shrink 28 April 2012 07:54:17AM *  6 points [-]

Some of the rationality here may, to a significant extent, be a subset of the standard kind, but it has important omissions - in the area of game theory, for instance - and, much more importantly, significant misapplications, such as taking approaches that are theoretically ideal given infinite computing power as the ideal, and treating approximations to them as the best attempt, when those approximations are grossly sub-optimal on limited hardware where different algorithms have to be employed instead. One also has to understand that in practice computation has costs, and that any form of fuzzy reasoning (anything other than very well verified mathematical proof) accumulates errors with each step, regardless of whether it is 'biased' or not.
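A toy illustration of that accumulation (my own numbers, purely for the sake of example): if each step of an informal argument is independently right with probability p, a chain of n such steps is right with probability

    p^{n}, \qquad \text{e.g. } p = 0.95,\ n = 20 \;\Rightarrow\; 0.95^{20} \approx 0.36

so even quite reliable individual steps give you worse-than-coin-flip conclusions once the chain gets long, bias or no bias.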

Choosing such a source for self-education is definitely not common. Nor is the undue focus on what is 'wrong' with thinking (e.g. lists of biases) rather than on more effective alternatives to the biases; removing the biases won't in itself give you extra powers of rational thinking - your reasoning will be as sloppy as before and you'll simply be wrong in an unusual way (for instance, you'll end up believing in unfalsifiable, unjustified propositions other than God; it seems to me this has occurred in practice).

edit: Note: he asked a question, and I'm answering why it is seen as fringe; it may sound like unfair critique, but I am just explaining what it looks like from outside. The world is not fair; if you use dense, non-standard jargon, that raises the cost and lowers the expected utility of reading what you wrote (because most people using non-standard jargon don't really have anything new to say). Processing has a non-zero cost - that must be understood. If mainstream rationalists don't instantly see you as worth reading, they won't read you, and that's only rational on their part. You must allow for other agents to act rationally. It is not always rational even to read an argument.

Actually, given that one can only read some small fraction of rationality-related material, it is irrational to read anything but material known to be good, where you have some assurance that the authors understand the topic well - including those parts that are not exciting, seem too elementary, or run counter to the optimism - the sort of assurance you get when the authors of the material have advanced degrees.

edit: formatting, somewhat expanded.

Comment author: shminux 27 April 2012 09:16:11PM *  1 point [-]

large objects tend to violate 'laws of quantum mechanics' as we know them (the violation is known as gravity)

I cannot agree with this assertion. Except for the mysterious "measurement" thing, where only a single outcome is seen where many were possible (I'm intentionally using the word "seen" to describe our perception, as opposed to "occurs", which may irk the MWI crowd), the quantum world gracefully turns classical as objects get larger (the energy levels bunch tighter together, the tunneling probabilities vanish exponentially, the interaction with the environment, resulting in decoherence, gets stronger, etc.).
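For a concrete sense of the "vanish exponentially" part (the standard WKB estimate for tunneling through a barrier of width L and height V, left symbolic here):

    T \sim \exp\!\left(-\frac{2L}{\hbar}\sqrt{2m\,(V-E)}\right)

The mass sits under the square root in the exponent, so scaling an object up suppresses tunneling astronomically fast.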

This has not been shown to have anything to do with gravity, though Roger Penrose thinks that gravity may limit the mass of quantum objects, and I am aware of some research trying to test this assertion.

For all I know, someone might be writing a numerical code to trace through decoherence all the way to the microscopic level as we speak, based on the standard QM/QFT laws.

Comment author: shrink 27 April 2012 09:30:38PM -1 points [-]

Look up quantum gravity (or rather, the lack of a unified theory covering both QM and GR). It is a very complex issue, and many basics have to be learnt before it can be discussed at all. The way we do physics right now is by applying inconsistent rules: we can't get QM to work out to GR at large scales. It may gracefully turn 'classical', but that is precisely the problem, because the world is not classical at large scales (GR).
