"If cryonics is super-far and altruism is seen as more important in far mode, why isn’t buying cryonics for others seen as especially praiseworthy? Your list of ways in which cryo is far-mode seems too much of a coincidence unless cryo was somehow optimized for distance."
A week ago I made a pitch for the Singularity Institute to a crowd of interested potential donors, along with a number of leaders of other non-profit organizations with relatively radical and innovative goals. The videos are here, and should be a good introduction to a significant part of the noosphere for people not yet familiar with it.
Call for Essays:<http://singularityhypothesis.blogspot.com/p/submit.html>
The Singularity Hypothesis
A Scientific and Philosophical Assessment
Edited volume, to appear in The Frontiers Collection<http://www.springer.com/series/5342>, Springer
Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions 'straight from Cloud Cuckooland'? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and 'carbon chauvinism'? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.
To promote this debate, this edited, peer-reviewed volume will be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.
* Extended abstracts (500–1,000 words): 15 January 2011
* Full essays: (around 7,000 words): 30 September 2011
* Notifications: 30 February 2012 (tentative)
* Proofs: 30 April 2012 (tentative)
We aim to get this volume published by the end of 2012.
Purpose of this volume
· Please read: Purpose of This Volume<http://singularityhypothesis.blogspot.com/p/theme.html>
· Please read: Central Questions<http://singularityhypothesis.blogspot.com/p/central-questions.html>:
Extended abstracts are ideally short (3 pages, 500 to 1000 words), focused (!), relating directly to specific central questions<http://singularityhypothesis.blogspot.com/p/central-questions.html> and indicating how they will be treated in the full essay.
Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions<http://singularityhypothesis.blogspot.com/p/central-questions.html>. Essays longer than 15 pages will be proportionally more difficult to fit into the volume; essays three times this size or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language free of speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submission available for commentary (see below).
Thank you for reading this call. Please forward it to individuals who may wish to contribute.
Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University
Katja's recent post on cryonics elicited this comment from Steven Kaas:
"If cryonics is super-far and altruism is seen as more important in far mode, why isn’t buying cryonics for others seen as especially praiseworthy? Your list of ways in which cryo is far-mode seems too much of a coincidence unless cryo was somehow optimized for distance."
That comment finally caused the following hypothesis to click into sharp resolution for me.
My guess is that it's cryonics advocates who are optimized for distance. Most people are basically natives of near mode, using far mode only casually and occasionally for signaling, and never reasoning about its contents. Even those who reason about its contents usually do so and then ignore their reasoning, acting on near mode motivations and against their explicit beliefs. Children, however, actually need to use far mode to guide their actions because they lack the rich tacit knowledge that makes near mode functional.
I am far from convinced that people in general wish to be seen as caring more about morality than they actually do. If this were the case, why would the persistent claim that people are -- and, logically, must be -- egoists have so long survived strong counter-arguments? That claim appears to me to be a way of signaling a lack of excessive, low-status moral scruples.
It seems to me that the desire to signal as much morality as possible is held by a minority of women and by a small minority of men. Those people are also the main people who talk about morality. This is commonly a problem in the development of thought. People with an interest in verbally discussing a subject may have systematically atypical attitudes towards that subject. Of course, this issue is further complicated by the fact that people don't agree on what broad type of thing morality is.
The conflict within philosophy between Utilitarians and Kantians is among the most famous examples of this disagreement. Haidt’s views on conservative vs. liberal morality<http://people.virginia.edu/~jdh6n/moraljudgment.html> are another example. Major, usually implicit, disagreements concern whether morality is supposed to serve as a decision system, a set of constraints on a decision system, or a set of reasons that should influence a person alongside prudential reasons, honor, spontaneity, authenticity, and other such considerations.
It seems to me that people usually want to signal whatever gives others the most reason to respect their interests. Roughly, this amounts to wanting to signal what Haidt calls conservative morality. Basically, people would like to signal "I am slightly more committed to the group’s welfare, particularly to that of its weakest members (caring), than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to the group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like (loyal) and substantially more vindictive and aggressive towards those I dislike (honorable, ignored by Haidt). I am generally stable in who I like (loyalty and identity, implying low cognitive cost for allies, low variance long term investment). I am much more capable and popular than most members of the group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself (status/hierarchy). I adhere to simple taboos (not disgusting) so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences."
An interesting point here is that this suggests the existence of a trade-off in the level of intelligence that a person wishes to signal. In this model, intelligence is necessary to distinguish between costly/genuine and cheap/fake signals of affiliation and to be effective as a friend or as an enemy. For these reasons, people want to be seen as somewhat more intelligent than the average member of the group. People also want to appear slightly less intelligent than whoever they are addressing, in order to avoid appearing unpredictable.
This is plausibly a multiple equilibrium model. You can appear slightly more or less intelligent with effort, confidence and affectations. Trying to appear much less intelligent than you are is difficult, as you must essentially simulate one system with another system, which implies an overhead cost. If you can't appear to be a little more intelligent than the higher status members of the group, who typically have modestly above average intelligence, you can't easily be a trusted ally of the people you most need to ally with. If you can't effectively show yourself to be a predictable ally for individuals, you may want to show yourself to be a predictable ally of the group by predictably following rules (justice) and by predictably serving its collective interests (caring). That allows less intelligent individuals in the group to outsource the task of scrutinizing your loyalty. People can more easily communicate indicators of group disloyalty by asserting that you have broken a rule, so people who can't be conservatively moral will attend more closely to rules. On this model, Haidt's liberalism (which I believe includes libertarianism) is a consequence of difficulty credibly signaling personal loyalties and thus having to overemphasize caring and what he calls justice, by which he means following rules.
In America, the explicit rules that people are given are descended from a frontier setting where independence was very practically important and where morality with very strong acts/omissions distinctions was sufficient to satisfy collective needs with low administrative costs and with easy cheater detection. Leaving others alone (and implicitly, tolerance) rather than enforcing purity works well when large distances make good neighbors. As a result, the explicit rules that people are taught de-emphasize status/hierarchy, disgust, to a lesser degree loyalty and identity, and to a still lesser extent caring. When the influence of justice, i.e. rules, is emphasized by difficulty in behaving predictably, liberal morality, or ultimately libertarian morality, is the result.
I don’t want to be too dogmatic about this claim, but Godzilla is unrealistic. I don’t want to be too non-dogmatic about this claim either. OK then, just how dogmatic should I be? I have all sorts of reasons for thinking that skyscraper sized lizards or dinosaurs don’t actually exist. Honestly, the most important of these is probably that none of the people who I imagine would know if they did exist seem to believe in them. I never hear any mention of them in the news, in history books, etc, and I don’t see their effects in the national death statistics. No industries seem to exist to deal with their rampages, and no oil or shipping companies lose stock value from lizard attacks. Casually, at least, Godzilla attacks don’t seem like the sort of basic fact about the world that people could just overlook. How confident should I be that Godzilla type creatures don't exist?
I can also fairly easily recognize good biological reasons not to expect there to be giant rampaging lizards. The square/cube law, in its many manifestations, is the most basic of these, but by itself is not completely decisive. I can imagine physical workarounds that would allow sequoia giganticus sized reptiles, but not without novel bio-machinery that would take a long time to evolve and would surely be found in many other organisms. I can even vaguely imagine ways in which biology might prove resistant to conventional military weaponry and ecological niches and lifestyles that might support both such biology and such size, though much of my knowledge of Earth’s ecosystems would have to be re-written. For all that, if I lived in a world where essentially all authorities did refer to the activities of godzilla giganticus I would probably accept that they were probably correct regarding its existence. What should a hypothetical person who lived in a world where the existence of Godzilla type creatures was common knowledge and was regarded as an ordinary non-numinous fact about the world believe?
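As a rough illustration of the square/cube point (my own back-of-the-envelope sketch, not anything from the original discussion): scaling a creature's linear dimensions by a factor $k$ multiplies its weight by roughly $k^3$ but the load-bearing cross-section of its bones by only $k^2$, so the stress on its skeleton grows in proportion to $k$:

$$W \propto k^{3}, \qquad A \propto k^{2}, \qquad \text{stress} = \frac{W}{A} \propto k$$

On these assumptions, a lizard ten times the linear scale of the largest crocodiles would put roughly ten times the stress on bone and muscle of ordinary composition, which is why anything skyscraper-sized seems to demand genuinely novel bio-machinery rather than a simple scaling up.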
Godzilla would be considerably more perplexing than thunderstones, and would have to be considerably better documented to be credible. Even with the strongest documentation I would have substantial unresolved questions, inferring that Godzilla’s native ecosystem must be quite different from any known (possibly inferring that the details are classified), and even wondering whether Godzilla was a biological creature at all as opposed to, for instance, a giant robot left behind by an advanced and forgotten civilization, a line of inquiry that would greatly increase my credence in secret histories of all kinds. For the most part though, I would probably go about life as normal. Even Natural Selection, the most damaged part of my world-view, would endure as a great intellectual triumph explaining the origins of almost all of Earth’s life forms. Only peripheral facts, such as distant history and the nature of some exotic ecosystems, would be deeply called into question, and such facts are not tightly integrated with the broader edifice of science. In a conversation with a hypothetical Michael Vassar who believed in Godzilla, the issue would typically not come up. Science in general would not be called into question in my mind, but should it be?
This recent blog post strikes me as an interesting instance of a common phenomenon. The phenomenon looks like the following: an intellectual, working within the assumption that the world is not mad (an assumption not generally found outside of the Anglo-American Enlightenment intellectual tradition), notices that some feature of the world would only make sense if the world were mad. This intellectual responds by denouncing as silly one of the few features of this vale of tears to be, while not intelligently designed, at least structured by generalized evolution rather than by entropy. The key line in the post is
"Conversely in all those disciplines where we have reliable quantatative measurements of progress (with the obvious exception of history) returning to the original works of past great thinkers is decidedly unhelpful."
I agree with the above statement, and find that the post makes a compelling argument for it. My only caveat is that we essentially never have quantitative measures of progress. Even in physics, when one regards not the theory but the technique of actually doing physics, tools and modes of thought rise and fall for reasons of fashion, and once-widespread techniques that remain useful fall into disuse.
I would like to propose this as a thread for people to write in their predictions for the next year and the next decade, when practical with probabilities attached. I'll probably make some in the comments.
I would like to propose this as a thread for people to write in their New Year’s Resolutions (goals and sub-goals) as instrumental rationalists.
So far, only one of the Less Wrong meet-ups that were discussed has been scheduled. The Boston meet-up is scheduled for:
Carberry's at 74 Prospect St Cambridge, MA
(1.5 blocks northeast from the Central Square T station)
Sunday November 15th at 2pm
though it may move after an hour or two to the Clear Conscience Cafe a couple blocks away if things get too crowded.
My cell number is (610) 213 2487 so you can contact me if there is a problem.
Regarding Philly, Florida, and New Orleans, the schedule still needs more detail. I'm leaving New Orleans at 5:10 on the 14th, so the 13th is probably better, but I can do early on the 14th if people want. There has been some interest in an event there, but I would appreciate more interested people saying so, and possibly contacting me via phone or email. If several people are interested we will have a meet-up; with just one or two I can meet less formally.
Hi from Michael Vassar, the president of SIAI. With my wife and Singularity Summit co-director Aruna, I'll be traveling over the next few months to meet with rationality and singularity enthusiasts throughout the U.S. Specifically, I'll be in Boston from November 14-18, Philadelphia from December 1st to 10th and December 15th through January 4th, New Orleans from December 11th to 14th, Orlando on January 5th and 6th, Sarasota on January 7th through 12th, and in Tampa on the 11th if there is substantial interest in a meet-up there.
Please comment if you are interested in attending a meet-up in any of the cities in question and we can start planning.
I hope to find that there are thriving communities of rationalists in each of those cities already, but I'm traveling there to try to seed their precipitation from the local populace. If things go really well the groups and their respective cities will be on SIAI's radar and, who knows, maybe eventually there will be a Singularity Summit near Tomorrowland, a global catastrophic risk conference by the New Orleans levees, or an FAI extrapolation dynamics exploratory workshop near Independence Hall.