One Medical? Expansion of MIRI?

9 rhollerith_dot_com 18 March 2014 02:38PM

It has been 5.5 days since the MIRI Expansion party. Could someone, anyone who attended please describe briefly what was announced?

(I attempted unsuccessfully to satisfy my curiosity by reading the context around all occurrences of "expansion" and "one medical" in /r/all/comments and scanning all the titles in /r/all/recentposts.)

Computer-mediated communication and the sense of social connectedness

4 rhollerith_dot_com 18 March 2011 05:13PM

IMHO there is little chance that an online-only community could replicate the successes (many friendships among the members, very high levels of enjoyment, motivation and engagement) of LW NYC.

Why not? Well, at the risk of putting off those readers who dislike explanations from evolutionary psychology, friendship relies on complex functional adaptations that were "tuned" or "designed" by natural selection for an environment in which every friendship had significant costs. By "costs" I mean that either the friends had to pay the social cost (which was significant in the ancestral environment) of being seen talking to each other or they had to go to significant trouble to talk without being observed. Even after the rise of the city (where, unlike in the ancestral environment, most observers do not care who you talk to), maintaining a friendship had costs: the friends had to commit to being at a particular location at a particular time, incur transportation costs, and so on.

My theory is that there is important information in whether (and how readily) a friend continues to choose to incur the costs of maintaining an offline friendship, and that when that source of information is lost, most people have trouble accurately assessing the value of the relationship and start making bad decisions about how much time and mental energy to invest in it.

IMHO the same argument from evolutionary psychology holds to a lesser extent for the sense of belonging that people feel toward various groups and communities. There was, for example, probably nothing like a lurker in any community before the online communities enabled by BBSes, the (now defunct) proprietary computer networks, and the global internet.

Online-only communities and using the internet to keep up with friends can be extremely useful, of course, but a person should watch out for the common failure mode in which online participation lulls one into a false sense of belonging or connectedness that prevents one from deriving the benefits people can get from things like NYC LW and the Visiting Fellows program in the Bay Area -- benefits that most people here should pursue and that are not available without face-to-face interaction.

LW was started to help altruists

-6 rhollerith_dot_com 19 February 2011 09:13PM

The following excerpt from a recent post, Recursively Self-Improving Human Intelligence, suggests to me that it is time for a reminder of the reason LW was started.

"[C]an anyone think of specific ways in which we can improve ourselves via iterative cycles? Is there a limit to how far we can currently improve our abilities by improving our abilities to improve our abilities? Or are these not the right questions; the concept a mere semantic illusion[?]"

These are not the right questions -- not because the concept is a semantic illusion, but because the questions are a little too selfish. I hope the author of the above words does not mind my saying that. It is the hope of the people who started this site (and my hope) that the readers of LW will eventually turn from the desire to improve their selves to the desire to improve the world. How the world (i.e., human civilization) can recursively self-improve has been extensively discussed on LW.

Eliezer started devoting a significant portion of his time and energy to non-selfish pursuits when he was still a teenager, and in the 12 years since then, he has definitely spent more of his time and energy improving the world than improving his self (where "self" is defined to include his income, status, access to important people, and other elements of his situation). About 3 years ago, when she was 28 or 29, Anna Salamon started spending most of her waking hours trying to improve the world. Both will almost certainly devote the majority of the rest of their lives to altruistic goals.

Self-improvement cannot be ignored or neglected even by pure altruists, because the vast majority of people are not rational enough to cooperate with an Eliezer or an Anna without just slowing them down, and the vast majority are not rational enough to avoid catastrophic mistakes were they to try, without supervision, to wield the most potent methods for improving the world. In other words, self-improvement cannot be ignored because, now that we have modern science and technology, it takes more rationality than most people have just to be able to tell good from evil, where "good" is defined as the actions that actually improve the world.

One of the main reasons Eliezer started LW is to increase the rationality of altruists and of people who will become altruists -- in other words, of people committed to improving the world. (The other main reason is recruitment for Eliezer's altruistic FAI project and altruistic organization.) If the only people whose rationality they could hope to increase through LW were completely selfish, Eliezer and Anna would probably have put a lot less time and energy into posting rationality clues on LW and a lot more into other altruistic plans.

Most altruists who are sufficiently strategic about their altruism come to believe that improving the effectiveness of other altruists is an extremely potent way to improve the world. Anna, for example, spends vastly more of her time and energy improving the rationality of other altruists than she spends improving her own rationality, because that is the allotment of her resources that maximizes her altruistic goal of improving the world. Even the staff of the Singularity Institute who do not have Anna's teaching and helping skills, and who consequently specialize in math, science, and computers, spend a significant fraction of their resources trying to improve the rationality of other altruists.

In summary, although no one (that I know of) is opposed to self-improvement's being the focus of most of the posts on LW, and no one is opposed to non-altruists' using the site for self-improvement, this site was founded in the hope of increasing the rationality of altruists.