Meetup : Less Wrong Israel Meetup: Social and Board Games
This time we're going to have a social meetup! It's going to be a game night full of people talking about physics, friendly AI, and how to effectively save the world. Please bring any games you'd like to play.

Contact: If you can't find us, call Anatoly, who is graciously hosting us, at 054-245-1060; or Joshua at 054-569-1165.

There's a Facebook event for the meetup. Please RSVP if you have Facebook, to help us get a sense of what to expect. You can also join the LessWrong Israel Facebook group, where we publish new events.
Meetup : Israel Less Wrong Meetup - Social, Board Games
We're going to have a meetup on Thursday, August 7th at Google Israel's offices, Electra Tower, 98 Yigal Alon st., Tel Aviv.
Note: the meetups are being moved from Thursday nights to Tuesday nights; we'll probably finalize the schedule at this meetup. We're still meeting every two weeks.
This time we're going to have a social meetup! We'll be socializing and playing games.
We'll start the meetup at 19:00, and we'll go on for as long as we like. Feel free to come a little later, as there is no agenda.
We'll meet on the 29th floor of the building (note: not the 34th, where Google Campus is). If you arrive and can't find your way around, call Anatoly, who is very graciously hosting us, at 054-245-1060, or Daniel at 054-7576480.
Things that might happen:
- You'll trade cool ideas with cool people from the Israel LW community.
- You'll discover kindred spirits who agree with you about one-boxing vs. two-boxing.
- You'll kick someone's ass (and teach them how you did it) at some awesome board game.
- You'll discover how to build a friendly AGI running on cold fusion (well, probably not).
- You'll discuss interesting AI topics with new friends!

Things that will happen for sure:
- You'll get to hang out with awesome people and have fun!
If you have any questions feel free to email the LW-IL mailing list. See you there!
Meetup : Less Wrong Israel Meetup (Herzliya): Social and Board Games
This time we're going to have a social meetup! We'll be socializing and playing games.
We'll start the meetup at 19:00 and finish at 22:00 (we may move the discussion to a nearby cafe afterwards). Feel free to come a little bit later, as there is no agenda.
We'll meet on the 4th floor of the building. Our gracious host is Yonatan Cale, and if you have trouble finding us, you can reach him at 052-5563141.
Things that might happen:
- You'll trade cool ideas with cool people from the Israel LW community.
- You'll discover kindred spirits who agree with you about one-boxing vs. two-boxing.
- You'll kick someone's ass (and teach them how you did it) at some awesome board game.
- You'll discover how to build a friendly AGI running on cold fusion (well, probably not).
- You'll discuss interesting AI topics with new friends!

Things that will happen for sure:
- You'll get to hang out with awesome people and have fun!
Parking: There's a parking lot under the building, and several others nearby. I'll post an update about free parking (I don't have a car myself).
Public transport: Buses: along Namir Road, get off at צומת הרצליה/אקדיה (Herzliya/Acadia Junction), not הסירה (HaSira); it's a slow five-minute walk from there. Herzliya train station: about a 15-minute walk. Use your GPS!
[LINK] Behind the Shock Machine: book reexamining Milgram obedience experiments
There's a book called Behind the Shock Machine by psychologist Gina Perry, published just a week ago, which investigates the original Milgram obedience experiments. I haven't read it, but I've read a summary / editorial published in the Pacific Standard.
Of course, the editorial is in some measure designed to provoke outrage, generate click-throughs, and leave readers biased against Milgram. I don't trust the editorial to report unbiased truth. If anyone has read the book, what do you think about it?
Key quote from the editorial:
Perry also caught Milgram cooking his data. In his articles, Milgram stressed the uniformity of his procedures, hoping to appear as scientific as possible. By his account, each time a subject protested or expressed doubt about continuing, the experimenter would employ a set series of four counter-prompts. If, after the fourth prompt (“You have no other choice, teacher; you must go on”), the subject still refused to continue, the experiment would be called to a halt, and the subject counted as “disobedient.” But on the audiotapes in the Yale archives, Perry heard Milgram’s experimenter improvising, roaming further and further off script, coaxing or, depending on your point of view, coercing participants into continuing. Inconsistency in the standards meant that the line between obedience and disobedience was shifting from subject to subject, and from variation to variation—and that the famous 65 percent compliance rate had less to do with human nature than with arbitrary semantic distinctions.
The wrinkles in Milgram’s research kept revealing themselves. Perhaps most damningly, after Perry tracked down one of Milgram’s research analysts, she found reason to believe that most of his subjects had actually seen through the deception. They knew, in other words, that they were taking part in a low-stakes charade.
Meetup : LessWrong Israel September meetup
* NOTE: THE LOCATION HAS CHANGED TO PARK HAYARQON!! *
Call 0545330678 (Gal) for details.
We're going to have a meetup on Thursday, September 12 at VisionMap's offices, Gibor Sport House, 15th floor, 7 Menachem Begin st., Ramat-Gan.

Our program is:
- 20:00-20:15: Assembly
- 20:15-21:00: Main Talk
- 21:00-22:00: Dinner & Discussion
- 22:00-23:00: Rump Session (minitalks)
- 23:00-: End of official programming

Main Talk: Mirrors and Sunlight: Dealing with Emotional Vampires / Guy Banay. We will learn how to recognize the dramatic personality disorders (antisocial, borderline, narcissistic, and histrionic) and some strategies to minimize the damage they can do.

Backup Talk: Solomonoff Induction / Benjamin Fox. Solomonoff induction is essentially the most generalized form of intelligence, treated on the most basic mathematical level. If we wish to infer facts from the information provided, Solomonoff induction is the most fundamental algorithm for doing so.

Rump Session: each participant will give a 4-minute talk (+3-minute encore if we applaud hard enough). Giving a talk isn't mandatory, but it's highly recommended. Not confident that what you have to say is relevant to our interests? Unsure about your public speaking skills? Doesn't matter - in the rump session, anything goes.

(Posted for Guy Banay, the organizer, who doesn't have enough LW karma yet.)
Meetup : Israel LW meetup
The Israel Less Wrong group is meeting again. We haven't chosen the subject of the meeting yet (I will update the post when we do). The discussion is ongoing in our Google Group.
I want to say thank you again to Cat from CFAR, and Gal Hochberg, for organizing the previous meetup six weeks ago and catalyzing the reboot of LW Israel!
The location is in the offices of a company called Visionmap.
The exact time given is provisional. Please follow the discussion in the group for updates.
Does evolution select for mortality?
At a recent Reddit AMA, Eric Lander, a professor of biology who played an important part in the Human Genome Project, answered this question:
Do you think immortality is technically possible for human beings?
I don't think immortality is technically possible -- evolution has installed many many mechanisms to ensure that organisms die and make room for the next generation. I bet it is going to be very hard to completely overcome all these mechanisms.
This seems to me, at first blush, to exhibit the Evolution of Species Fairy fallacy. Evolution doesn't work to benefit species, populations, or the "next generation". If a mutation arises that increases longevity, and has no other downsides, then animals with that mutation should become more common in the gene pool, because they die less often. I remember reading that the effect would not be very strong, because most animals don't die of old age. But why would there be the opposite effect?
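The selection argument above can be illustrated with a toy simulation (my own sketch, not part of the original discussion; the population size, survival probabilities, and starting frequency are all made up): an allele that only increases survival, with no other fitness cost, rises in frequency until it dominates the gene pool.

```python
import random

def simulate(generations=200, pop_size=1000, seed=0):
    """Toy haploid model: allele 'L' (longevity) confers higher
    per-generation survival (0.6 vs 0.5) and no other cost.
    Survivors reproduce; offspring inherit the parent's allele.
    Returns the final frequency of 'L', starting from 10%."""
    rng = random.Random(seed)
    pop = ['L'] * (pop_size // 10) + ['s'] * (pop_size - pop_size // 10)
    survival = {'L': 0.6, 's': 0.5}
    for _ in range(generations):
        survivors = [a for a in pop if rng.random() < survival[a]]
        if not survivors:
            return 0.0
        # Refill the population by sampling parents uniformly
        # among survivors, so fitness differences come only
        # from differential survival.
        pop = [rng.choice(survivors) for _ in range(pop_size)]
    return pop.count('L') / pop_size

# With these (hypothetical) parameters, 'L' typically approaches fixation.
print(simulate())
```

Since each generation multiplies the odds of drawing an 'L' parent by roughly 0.6/0.5 = 1.2, the longevity allele sweeps through the population despite never helping reproduction directly.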
I am loath to attribute a very basic error to a distinguished professor of biology. Is there another explanation? Is the claim that evolution selects for mortality true?
Note: Eric went on to add:
I'm also not convinced immortality is such a good idea. A lot of human progress depends on having a new generation with new ideas. Immortality may equal stagnation.
This seems to be blatant rationalization of a preconceived idea that death is good. (I doubt he truly believes that extra progress is worth everybody dying.) So perhaps his first statement is also a form of rationalization. But it seems improbable to me that he would make such a statement about biology if he didn't think it well-founded. More likely there's something I'm misunderstanding.
ETA: one of the first Google results is this page at nature.com, The Evolution of Aging by Daniel Fabian, which goes into some depth on the subject. The bottom line is that it agrees with my expectation that evolution does not select for mortality. Choice quotes:
The Roman poet and philosopher Lucretius, for example, argued in his De Rerum Natura (On the Nature of Things) that aging and death are beneficial because they make room for the next generation (Bailey 1947), a view that persisted among biologists well into the 20th century. [...]
A more parsimonious evolutionary explanation for the existence of aging therefore requires an explanation that is based on individual fitness and selection, not on group selection. This was understood in the 1940's and 1950's by three evolutionary biologists, J.B.S. Haldane, Peter B. Medawar and George C. Williams, who realized that aging does not evolve for the "good of the species". Instead, they argued, aging evolves because natural selection becomes inefficient at maintaining function (and fitness) at old age. Their ideas were later mathematically formalized by William D. Hamilton and Brian Charlesworth in the 1960's and 1970's, and today they are empirically well supported. Below we review these major evolutionary insights and the empirical evidence for why we grow old and die.
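The Medawar/Hamilton point in the quote above - that selection weakens with age even without any "good of the species" reasoning - can be sketched numerically. This is my own toy illustration with made-up survival and fecundity numbers, using a simplified (growth rate r = 0) version of Hamilton's indicator:

```python
def selection_force(survival, fecundity):
    """Simplified Hamilton-style indicator: the fitness impact of a
    small survival change at age a is proportional to the expected
    reproduction still remaining, sum over x > a of l(x) * m(x),
    where l(x) is cumulative survivorship and m(x) is fecundity.
    Because every term is nonnegative, this can only decline with
    age, so late-acting deleterious mutations are nearly invisible
    to selection."""
    ages = range(len(fecundity))
    l = [1.0]  # l(0) = 1; l(x) = product of survival up to age x
    for s in survival[:-1]:
        l.append(l[-1] * s)
    return [sum(l[x] * fecundity[x] for x in ages if x > a) for a in ages]

# Hypothetical life table: survive each year with probability 0.9,
# reproduce equally at every age.
forces = selection_force([0.9] * 10, [1.0] * 10)
print(forces)  # monotonically declining with age
```

This is why aging can evolve without selection ever favoring death itself: a mutation that harms only the old is removed much more slowly than one that harms the young.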
How could a distinguished professor of biology, a leader of the HGP and an advisor to the US President, get something so elementary wrong, when even a biology undergrad dropout like me notices that it seems wrong?
ETA #2: Gwern points to the Wikipedia article on Evolution of Ageing, which lists several competing theories of the evolution of aging (and therefore mortality). This shows the subject is more complex than I had thought and there may be good reason to believe mortality is selected for by evolution (or at least is reliably linked to something else that is selected).
I should be glad that I didn't discover an obvious error being committed by a distinguished professional, even if he may be ultimately wrong!
I want to save myself
Related to: People who want to save the world
I have recently been diagnosed with cancer, for which I am currently being treated with good prognosis. I've been reevaluating my life plans and priorities in response. To be clear, I estimate that the cancer is responsible for much less than half the total danger to my life. The universals - X-risks, diseases I don't have yet, traffic accidents, etc. - are worse.
I would like to affirm my desire to Save Myself (and Save The World For Myself). Saving the world is a prerequisite simply because the world is in danger. I believe my values are well aligned with those of the LW community; wanting to Save The World is a good applause light but I believe most people want to do so for selfish reasons.
I would also like to ask LW members: why do you prefer to contribute (in part) towards humankind-wide X-risk problems rather than more narrow but personally important issues? How do you determine the time and risk tradeoffs between things like saving money for healthcare and investing money in preventing an unfriendly AI FOOM?
It is common advice here to focus on earning money and donating it to research, rather than donating in kind. How do you decide what portion of income to donate to SIAI, which to SENS, and which to keep as money for purely personal problems that others won't invest in? There's no conceptual difficulty here, but I have no idea how to quantify the risks involved.
Choose To Be Happy
Related to: I'm Scared; Purchase utilons and fuzzies separately
Expanded from this comment.
You have awakened as a rationalist, discarded your false beliefs, and updated on new evidence. You understand the dangers of UFAI, you do not look away from death or justify it. You realize your own weakness, and the Vast space of possible failures.
And understanding all this, you feel bad about it. Very bad, in fact. You are afraid of the dangers of the future, and you are horrified by the huge amounts of suffering. You have shut up and calculated, and the calculation output that you should feel 3^^^3 times as bad as you do over a stubbed toe. And a stubbed toe can be pretty bad.
But this reaction of yours is not rational. You should consider the options of choosing not to feel bad about bad things happening, and choosing to feel good no matter what.
Proposal: Anti-Akrasia Alliance
Related to: Kicking Akrasia: now or never; Tsuioku Naritai
The situation
I am greatly afflicted by akrasia, and in all probability, so are you. Akrasia is a destroyer of worlds.1
I have come to the conclusion that akrasia is the single biggest problem I have in life. It is greater than my impending biological death, my imperfect enjoyment of life, or the danger of a car accident.
For if I could solve the problem of akrasia, I would work on these other problems, and I believe I would solve them too. Even a big problem like physical mortality can be meaningfully challenged if I spend a lifetime tackling it. But until I solve the problem of akrasia, I will sit around and do nothing about my mortality.
(Edited here) Without solving akrasia, we are relatively inefficient in attacking the other problems that matter to us. However, if LW readers - typically smart, rational, luminous, and relatively rich people - were to defeat akrasia and become highly productive, I think we would possess real world-changing power2.
Some people have either solved this problem or never had it. Thus, we know it is possible to vanquish akrasia. However, it is a unique problem that fights its own cure: because of akrasia, we don't spend as much effort as we'd like fighting akrasia.
I propose forming a community dedicated to fighting akrasia.