Hmm. I remember being non-reflective in first grade but not in second grade. One consequence was that I couldn't re-write explicit beliefs in response to new information and I saw general injunctions and commands as relatively binding and automatic. Conflicting commands couldn't be accommodated, nor could common sense. I don't think that my emotions were any more intense. I never re-wrote myself, or noticed a change at the time, but I notice it in my memories. Early ones don't include the question "why am I doing this?" or "why do this...
Actually RU, that's a good approximation for many/most professions, but not all that good an approximation.
http://www.vanderbilt.edu/Peabody/SMPY/DoingPsychScience2006.pdf
gives more detail, showing a significant marginal impact from, at the least, 99.99th percentile math achievement at age 12 relative to merely 99.8th percentile math achievement at age 12.
Phil: Your estimate rewards precision and penalizes one's self-estimate of precision. A person of a given level of precision should be rewarded for believing their precision to be what it is, not for believing it to be low. If you had the self-estimate of precision in the numerator, that would negate Nick's claim, but then you could drop the term from both sides.
Eliezer: I'm pretty sure that MANY very smart people learn more from working on hard problems and failing quite frequently than from reading textbooks and practicing easy problems. Both should be part of an intellectual diet.
Maksym: We actually very badly need someone to translate all this OB stuff, though maybe it's desirable to wait for the book. Still, someone should be presenting it. As for convincing smart college students, there are three fairly separate barriers here: barriers to rationality, barriers of information, and barriers to action. I recommend working on the barriers to rationality and action first and in conjunction, belief second, and letting people find the info themselves. Politics is the natural subject to frame as rationality. Simply turn every conversation where ...
You pretty much said it. Hypotheses suggested by mind-projection priors turning out to be true would pretty much refute Occam and, consequently, science.
It's not on that level, that's the level which I respond to with the forbidden bet, e.g. p = 0, along with all the other stuff that implies strongly that our concepts of probability are simply broken.
Reason is a mistake for less extreme reasons too, such as "I'm dreaming" or "I'm a Boltzmann brain" or some forms of "my life is not merely a simulation but a psychological experiment".
I took psi seriously back when I thought that the scientific method defined rationality. Once I learned about Bayes I realized that the sort of reports of psi that science turns up would be expected if psi isn't real, while much more blatant things would be expected if real psi had inspired the investigation. I also noticed that priors matter, and that psi really should be ignored absent very large effects, given its low prior. Somewhat earlier, pre-Bayes, psi had blended somewhat into the category "Everything you know is wrong" and lost its specific identity as 'psi'. Post-Bayes, the "Everything you know is wrong" category itself split into a few categories, and psi went into the extreme "reason is a mistake" category.
Thinking about the future as today + diff is another serious problem with similar roots.
Robin: Great Point! Eliezer: I'm awaiting that too.
Shane Legg: I don't generally like sf, film or otherwise, but try "Primer". Best movie ever made for <$6000 AND arguably best sf movie. The Truman Show was good too in the last decade or so. That's probably it though. Minority Report was OK.
Dennis Bider: A BASIC and ESSENTIAL, though these days largely forgotten, principle of liberal society is that it can be the case, and often is, that behavior X is NOT OK but that banning behavior X would also be NOT OK.
Pete, if you do that then being a causal decision theorist won't, you know, actually Win in the one-shot case. Note that evolution doesn't produce organisms that cooperate in one-shot prisoner's dilemmas.
But Eliezer, you can't assume that Clippy uses the same decision-making process that you do unless you know that you both unfold from the same program with different utility functions, or something like that. If you had the code that unfolds into Clippy, and Clippy had the code that unfolds into you, you might look at Clippy's code and see that Clippy defects if his model of you defects regardless of what he does, and cooperates if his model of you cooperates whenever your model of him cooperates; but you don't have his code. You can't say much about all possible minds, or even about all possible paperclip-maximizing minds.
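The "read the other agent's code" idea above can be sketched as a toy program-equilibrium model of the one-shot prisoner's dilemma. The agent names here and the trick of passing callables around as stand-ins for "source code" are illustrative assumptions, not anyone's actual formalism:

```python
# Toy sketch: agents receive the opponent's "source code" (here, simply a
# callable) and decide "C" (cooperate) or "D" (defect). This is a crude
# simplification of program-equilibrium reasoning, for illustration only.

def defect_bot(opponent):
    """Defects no matter what its model of the opponent would do."""
    return "D"

def mirror_bot(opponent):
    """Cooperates iff the opponent, run against an unconditional
    cooperator, would itself cooperate."""
    # "Inspect" the opponent by simulating it against a cooperator.
    return "C" if opponent(lambda _: "C") == "C" else "D"
```

Under this toy setup, mirror_bot cooperates with a copy of itself but defects against defect_bot, which is the asymmetry the comment points at: without Clippy's code, you can't run this kind of check at all.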
I would say that, in my experience, evangelicals and traditional Christians max out in the low 140s, IQ-wise.
Nick: Do you imagine that they would tell you so? Also, you a) are young, and b) haven't been in any setting where people come from a large variety of social backgrounds.
Highly intelligent Christians, Dyson for instance, are likely to believe roughly the same things you do but frame them differently. Tegmark 4 = Spinoza's god, for instance.
Hopefully: You and many others. I will if I ever pull together the emotional resources to do so; the required resources seem unusually high for me. It's very demanding of effort for me to address a large group, most of whom will fail to "get it" whatever I do.
Seconding Hopefully Anonymous on this point. Also emphasizing that, not infrequently, when the accuracy of the beliefs that a nerd can promote is limited by inferential distance or by gaps in intelligence, nerds tend to give true statements without bridging the inferential distance, predictably promoting less than maximally accurate beliefs. This is motivated primarily, I would say, by the self-righteous feeling they get from telling the truth and the superior righteous indignation they feel when their perceived inferiors show their ignorance by not understandi...
Seconding Scott Aaronson (especially on the tragedy of the US getting nukes too late) and Michael G R, though with the suggestion that the "demo" bomb could have been Trinity.
Eliezer: I think that the hypocrisy of the US is mostly in our maintaining a large arsenal but telling others that they can't have any, not in having used nukes. I don't see a case for .1% probability increase for nuclear war without using the bomb which is stronger than the case for the inverse, but I do see a million dead Japanese. Also, in terms of catastrophic risks, ...
Eliezer: I think that you misunderstand Roko, but that doesn't really matter, as he seems to understand you fairly well right now and to be learning effectively.
Unknown: Not at all. Utility maximization is very likely to lead to counterintuitive actions, and might even lead to humanly useless ones, but the particular actions it leads to are NOT whatever salient actions you wish to justify but are rather some very specific set of actions that have to be discovered. Seriously, you NEED to stop reasoning with rough verbal approximations of the math and act...
Great comments thread! Thanks all!
Seconding Roko, Carl, HA, Nick T, etc.
Eliezer or Robin: Can you cite evidence for "we can more persuasively argue, for what we honestly believe"? My impression is that this has been widely assumed in evolutionary psychology and fairly soundly refuted in the general psychology of deception, which tells us that the large majority of people detect lies at about chance, and that similar effort seems to be required to develop the fairly rare skills of detecting lies and of evading such detection.
Carl: Unknow...
I second Hopefully on criticism of the strawman postmodernist. Honestly, I think that academic disciplines, or even schools, where everyone is completely full of it are extremely rare. There are thoughtful, intelligent, and honest people who frame important and fairly novel ideas in the terminology of sociology, academic feminism, Freudianism, behaviorism, even, math-help-us, Jung. Perfect intellectual honesty and intelligence among humans are chimeras, and different disciplines aspire to different approximations thereof by establishing different sorts ...
I agree that deriving morality from stated human values is MUCH more ethically questionable than deriving it from human values, stated or not, and suggest that it is also more likely to converge. This creates a probable difficulty for CEV.
It seems to me that if it's worth destroying Huygens to stop the Superhappies it's plausibly worth destroying Earth instead to fragment humanity so that some branch experiences an infinite future so long as fragmentation frequency exceeds first contact frequency. Without mankind fragmented, the normal ending seems ine...