If you are solving an equation, debugging a software system, designing an algorithm, or doing any number of other cognitive tasks, understanding the methods of rationality involved in interacting with other people will be of no use to you (unless some of the material happens to apply across domains). These are things that have to be done on your own, inside your own head.
It appears that the majority of the activities, and the primary focus of this boot camp, are on rationality when interacting with others... social rationality training. While some of this may apply across domains, my interest is strictly in "selfish" rationality... the kind of rationality that one uses internally and entirely on one's own. So I don't really know if this would be worth the considerable expense of 10 "all-consuming" weeks. Maybe it would help if I had more information on the exact curriculum you are proposing.
Where the hell would you find a group like that!?
Well, in that case, can you explain that emoticon (:3)? I have yet to hear any explanation that makes sense :)
Is this really relevant ...
Does anyone know if Blink: The Power of Thinking Without Thinking is a good book?
http://www.amazon.com/Blink-Power-Thinking-Without/dp/0316172324
Amazon.com Review
Blink is about the first two seconds of looking--the decisive glance that knows in an instant. Gladwell, the best-selling author of The Tipping Point, campaigns for snap judgments and mind reading with a gift for translating research into splendid storytelling. Building his case with scenes from a marriage, heart attack triage, speed dating, choking on the golf course, selling cars, and military m...
If you actually look a little deeper into cryonics you can find some more useful reference classes than "things promising eternal (or very long) life"
http://www.alcor.org/FAQs/faq01.html#evidence
Cells and organisms need not operate continuously to remain alive. Many living things, including human embryos, can be successfully cryopreserved and revived. Adult humans can survive cardiac arrest and cessation of brain activity during hypothermia for up to an hour without lasting harm. Other large animals have survived three hours of cardiac arrest...
As a question for everyone (and as a counter argument to CEV),
Is it okay to take an individual human's rights of life and property by force as opposed to volitionally through a signed contract?
And the use of force does include imposing on them, without their signed volitional consent, such optimizations as the coherent extrapolated volition of humanity; it could maybe(?) exclude their individual extrapolated volition.
A) Yes B) No
I would tentatively categorize this as one possible empirical test for Friendly AI. If the AI chooses A, this could point to an Unfriendly AI which stomps on human rights, which would be Really, Really Bad.
Whatever happened to Nick Hay, wasn't he doing some kind of FAI related research?
Sure, but it's also reasonable for him to think that contributing something much harder would be that much more of a contribution to his goal (whatever his goals are, selfish or not). After all, something hard for him would be much harder, or impossible, for someone less capable.
I don't see how this reveals his motive at all. He could easily be a person motivated to make the best contributions to science that he can, for entirely altruistic reasons. His reasoning was that he could make better contributions elsewhere, and it's entirely plausible for him to have left the field for ultimately altruistic reasons.
And what is it about selfishness exactly that is so bad?
"And what is it about selfishness exactly that is so bad?"
It's fine and dandy in me, but I tend to discourage it in other people. I find that I get what I want faster that way.
Now give me some cash.
If making a major contribution seemed so easy, and would be harder in some other field, it sure would suggest that his comparative advantage in the easy field is much greater; wouldn't that suggest that he ought to devote his efforts there, since other people have proven relatively capable in the harder fields?
And this is a great follow up:
..."Very recently - in just the last few decades - the human species has acquired a great deal of new knowledge about human rationality. The most salient example would be the heuristics and biases program in experimental psychology. There is also the Bayesian systematization of probability theory and statistics; evolutionary psychology; social psychology. Experimental investigations of empirical human psychology; and theoretical probability theory to interpret what our experiments tell us; and evolutionary theory to exp
"But goodness alone is never enough. A hard, cold wisdom is required for goodness to accomplish good. Goodness without wisdom always accomplishes evil." - Robert Heinlein (SISL)
That reminds me of "counting doubles" from Ender's Game: 2, 4, 8, 16 ... etc until you lose track.
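For what it's worth, that sequence is just successive powers of two; here's a minimal Python sketch of the game (the function name `count_doubles` is mine, purely for illustration):

```python
def count_doubles(limit):
    """Yield 2, 4, 8, 16, ... while the value stays below `limit`."""
    value = 2
    while value < limit:
        yield value
        value *= 2

print(list(count_doubles(100)))  # -> [2, 4, 8, 16, 32, 64]
```

The point of the exercise in the book, of course, is that the doubling outruns your working memory long before it outruns the integers.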
==Re comments on "Singularity Paper"== Re comments, I had been given to understand that the point of the page was to summarize and cite Eliezer's arguments for the audience of ''Minds and Machines''. Do you think this was just a bad idea from the start? (That's a serious question; it might very well be.) Or do you think the endeavor is a good one, but the writing on the page is just lame? --User:Zack M. Davis 20:19, 21 November 2009 (UTC)
(this is about my opinion on the writing in the wiki page)
No, just use his writing as much as possible - direct...
Eliezer is arguing about one view of the Singularity, though there are others. This is one reason I thought to include http://yudkowsky.net/singularity/schools on the wiki. If leaders/proponents of the other two schools could acknowledge this model Eliezer has described of there being three schools of the Singularity, I think that might lend it more authority as you are describing.
I found the two SIAI introductory pages very compelling the first time I read them. This was back before I knew what SIAI or the Singularity really was, as soon as I read through those I just had to find out more.
I thought similarly about LOGI part 3 (Seed AI). I actually thought of that immediately and put a link up to that on the wiki page.
http://news.ycombinator.com/item?id=195959
"Oh, dear. Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...
All right, this much of a hint:
There's no super-clever special trick to it. I just did it the hard way.
Something of an entrepreneurial lesson there, I guess."
Really?
I mean, come on, that's a cheap, weak analogy. I haven't finished yet, but I'm compiling all of the good quotes from Atlas Shrugged. The book is full of awesome quotes and truths that carry over to many other areas of rationality.
It is far more real and relevant than you are giving it credit for.
what the hell?
What does the cultish behavior of followers have to do with the actual content? Affective death spirals can characterize virtually any group. Idiots and crazies are everywhere.
Why is this so heavily downvoted??
I realize that you didn't vote it down, but using this logic to vote it down would be something like a reverse affective death spiral: you let the visibly obvious ADS cast a negative halo on the entire philosophy, and thus become irrationally biased against the legitimate value at the center of the ADS that got blown up by the over-zealous crazies and idiots.
Reading Ayn Rand to learn about rationality is like reading Aristotle to learn about physics.
"Walking on the moon is power! Being a great wizard is power! There are kinds of power that don't require me to spend the rest of my life pandering to morons!"