Rationality and intelligence are not precisely the same thing. You can pick, e.g., anti-vaccination campaigners who have measured IQs above 120, put them in a room, and call that a very intelligent community that can discuss a variety of topics besides vaccines. Then some less insane people who are interested in vaccine safety will come in and get terribly misinformed, which is just not a good thing. You can do that with almost any belief, especially using the internet to draw cases from a pool of a billion or so.
It was definitely important to make animals come, or to make it rain, tens of thousands of years ago. I get the feeling that when I tell you your rain-making method doesn't work, you aren't going to give up trying unless I provide you with an airplane, a supply of silver iodide, flight training, a runway, fuel, and so on (and even then the method will only be applicable on some days, while praying for rain is applicable any time).
As for the best guess, if you suddenly need a best guess on a topic because someone told you of something and you couldn't rea...
I think you have a somewhat simplistic idea of justice... there is 'voluntary manslaughter', there is 'gross negligence', and so on. I think SIAI falls under the latter category.
How are they worse than any scientist fighting for a grant based on shaky evidence?
Quantitatively, and by a huge amount. edit: Also, the beliefs that they claim to hold, when held honestly, result in massive losses of resources, such as moving to a cheaper country to save money, etc. I dread to imagine what would happen to me if I honestly were this...
You are declaring everything gray here so that, verbally, everything comes out equal.
There are people with no knowledge of physics and no inventions to their name, whose first 'invention' is a perpetual motion device. You really don't see anything dishonest about holding an unfounded belief that you're this smart? You really see nothing dishonest about accepting money under this premise without doing due diligence, such as trying your hand at something testable, even if you think you're this smart?
There are scientists who are trying very hard to follow processes th...
That's how religions were created, you know: people could not actually answer why lightning thunders, why the sun moves through the sky, etc. So they looked way 'beyond' non-faulty reasoning in search of answers now (being impatient), and got answers that were much, much worse than no answers at all. I feel LW is doing precisely the same thing with AIs. Ultimately, when you can't compute the right answer in the given time, you will either have no answer or compute a wrong one.
On the orthogonality thesis, it is the case that you can't answer this qu...
Did they make a living out of those beliefs?
See, what we have here is a belief cluster that makes the belief-generator feel very good (saving the world, the other smart people being less smart, etc.) and pays his bills. That is awfully convenient for a reasoning error. I'm not saying that it is entirely impossible to have a serendipitously useful reasoning error, but it doesn't seem likely.
edit: note, I'm not speaking about some inconsequential honesty in idle thought, or anything similarly philosophical. I'm speaking of not exploiting others for money. There'...
Would you take criticism if it is not 'positive' and doesn't give you an alternative method for talking about the same topic? Faulty reasoning has an unlimited domain of application: you can 'reason' about the purpose of the universe, the number of angels that fit on the tip of a pin, what superintelligences would do, etc. In those areas, non-faulty reasoning cannot compete in terms of providing a certain pleasure from reasoning, or in terms of interesting-sounding 'results' that can be obtained with little effort and knowledge.
You can reason what particular cogn...
There's so much that can go wrong with such reasoning, given that an intelligence (even at the size of a galaxy of Dyson spheres) is not a perfect God, as to render such arguments irrelevant and entirely worthless. Furthermore, there are enough ways non-orthogonality could hold, e.g. almost all intelligences with wrong moral systems crashing or failing to improve, that are not covered by 'converges'.
meta: The tendency to talk seriously about products of very bad reasoning really puts an upper bracket on the sanity of newcomers to LW. So does the idea that a very bad argument trumps authority (when it comes to this whole topic).
You can represent any form of agency with a utility function that is 0 for doing what the agency does not want to do, and 1 for doing what it wants to do. This looks like a special case of that triviality: as true as it is irrelevant. Generally, one of the problems with insufficient training in math is the lack of training in not reading extra purpose into mathematical definitions.
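For concreteness, here is a minimal sketch of that triviality (in Python, with hypothetical names): any policy at all can be wrapped so that it 'maximizes a utility function', which is exactly why the framing adds no information.

```python
# A minimal sketch: any agent's behavior can be recast as maximizing a
# 0/1 utility function, so "it has a utility function" alone says
# nothing extra about the agent.

def as_utility_maximizer(policy):
    """Wrap an arbitrary policy as a 0/1 utility function over actions."""
    def utility(state, action):
        # 1 for whatever the agency "wants" (what the policy picks), 0 otherwise.
        return 1 if action == policy(state) else 0
    return utility

def act(utility, state, actions):
    """A 'utility maximizer' that just picks the argmax action."""
    return max(actions, key=lambda a: utility(state, a))

# A thermostat-like rule, recast as utility maximization:
policy = lambda temp: "heat" if temp < 20 else "idle"
u = as_utility_maximizer(policy)
assert act(u, 15, ["heat", "idle"]) == "heat"
assert act(u, 25, ["heat", "idle"]) == "idle"
```

The wrapper reproduces the original behavior exactly, which is the point: calling something a utility maximizer constrains nothing unless you say more about the utility function.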
I think you hit the nail on the head. It seems to me that LW represents bracketing by rationality, i.e. there's a lower limit below which you don't find the site interesting, there's a range in which you see it as a rationality community, and there's an upper limit above which you would see it as self-important pompous fools being very wrong on a few topics and not interesting on the others.
Dangerously wrong, even; progress in computing technology leads to new cures for diseases, and misguided advocacy of the great harm of such progress, done by people with no under...
Are you aware of another online community where people more rational than LWers gather? If not, any ideas about how to create such a community?
Also, if someone was worried about the possibility of a bad singularity, but didn't think that supporting SIAI was a good way to address that concern, what should they do instead?
Popularization is better without novel jargon though.
That's why I said 'self-deluded' rather than just 'deluded'. There is a big difference between believing something incorrect that's believed by default, and coming up, yourself, with a very convenient incorrect belief that makes you feel good and pays the bills, then actively working to avoid any challenges to that belief. Honest people are those who put such beliefs under proper scrutiny (not just talk about putting such beliefs under scrutiny).
Honesty is an elusive matter when the belief works like that dragon in the garage. When you are lying, you have to ...
Well, the issue is that LW is heavily biased towards agreement with the rationalizations of the self-important wankery in question (the whole FAI/uFAI thing)...
With AI, basically, you can see folks who have no understanding whatsoever of how to build practical software, and whose idea of AI is 'predict outcomes of actions, choose the actions that give the best outcome' (an entirely impractical model, given the enormous number of candidate actions when innovating), accusing the folks in the industry who do understand, of anthropomorphizing the AI - and taking it as an operating assum...
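As a rough illustration (a toy sketch, not anyone's actual design), here is why 'predict outcomes, pick the best' is impractical as stated: the number of candidate plans grows as |actions|^horizon.

```python
from itertools import product

# Toy sketch of the naive "predict outcomes, pick the best" model:
# exhaustively score every action sequence of a given length.

def brute_force_plan(actions, horizon, predict, score):
    """Return the best of the |actions|**horizon candidate plans."""
    return max(product(actions, repeat=horizon),
               key=lambda plan: score(predict(plan)))

# A trivial world where the outcome is just the sum of the actions;
# the best 4-step plan over actions 0..9 is (9, 9, 9, 9).
print(brute_force_plan(range(10), 4, predict=sum, score=lambda o: o))

# The candidate count explodes with any realistic repertoire or horizon:
for horizon in (4, 10, 20):
    print(f"horizon {horizon}: {10 ** horizon:,} candidate plans")
```

Even a toy repertoire of 10 actions over 20 steps gives 10^20 plans to evaluate, which is why practical systems use entirely different algorithms rather than approximations of this exhaustive search.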
honest people can't stay self-deluded for very long.
This is surely not true. Lots of wrong ideas last a long time beyond when they are, in theory, recognizably wrong. Humans have tremendous inertia to stick with familiar delusions rather than replace them with new notions.
Consider any long-lived superstition, pseudoscience, etc. To pick an uncontroversial example, astrology. There were very powerful arguments against it going back to antiquity, and there are believers down to the present. There are certainly also conscious con artists propping up the...
It's more a question of how charitably you read LW, maybe? The phenomenon I am speaking of is quite generic. About 1% of people are clinical narcissists (probably more); that's a lot of people, and narcissists dedicate more resources to self-promotion and take on projects that no well-calibrated person of the same expertise would attempt, e.g. making a free-energy generator without having studied physics or invented anything less grandiose first.
Some of the rationality may, to a significant extent, be a subset of the standard kind, but it has important omissions (in the areas of game theory, for instance) and, much more importantly, significant misapplications, such as taking the theoretically ideal approaches given infinite computing power as the ideal, and regarding the approximations to them as the best attempt, even though those approximations are grossly suboptimal on limited hardware, where different algorithms have to be employed instead. One also has to understand that in practice computations have costs, and any form of fuzzy reasoni...
Look up quantum gravity (or rather, the lack of a unified theory covering both QM and GR). It is a very complex issue, and many basics have to be learnt before it can be discussed at all. The way we do physics right now is by applying inconsistent rules. We can't get QM to work out to GR at large scales. It may gracefully turn 'classical', but this is precisely the problem, because the world is not classical at large scales (GR).
One basic thing about MWI: it is a matter of physical fact that large objects tend to violate the 'laws of quantum mechanics' as we know them (the violation is known as gravity), and actual physicists know that we simply do not know what quantum mechanics works out to at large scales. To actually have a case for MWI, one would need to develop a good quantum gravity theory in which many worlds naturally arise, but that is very difficult (and many worlds may well not naturally arise).
Various cases of NPD online. The NPD-afflicted individuals are usually too arrogant to study, or to do anything difficult where they can measurably fail, and instead opt to blog on topics where they don't know the fundamentals, promoting misinformed opinions. Some even live on donations for performing work that they never studied to do. It's unclear what attracts normal people to such individuals, but I guess if you don't think yourself a supergenius, you can still think yourself clever for following a genius whom you can detect without relying o...
You know, an uncharitable reading of this would almost sort-of kinda maybe construe it as a rebuke of the LW community. Almost.
If you want to maximize your win, it is a relevant answer.
For the risk estimate per se, I think one needs not so much methods as a better understanding of the topic, which is attained by studying the field of artificial intelligence (in a non-cherry-picked manner) and takes a long time. If you want an easier estimate right now, you could try to estimate how privileged the hypothesis that there is a risk is. (There is no method that would let you calculate the gravitational wave from the spin-down and collision of orbiting black holes without spending a lot of time studying ...