Meetup : Giving What We Can at LessWrong Tel Aviv

0 Squark 01 July 2016 02:07PM

Discussion article for the meetup : Giving What We Can at LessWrong Tel Aviv

WHEN: 05 July 2016 05:06:58PM (+0300)

WHERE: Cluster - Disruptive Technologies hub, Yigal Alon 118 Tel Aviv

This Tuesday, LessWrong Tel Aviv is proud to host a talk by Erwan Atcheson from Giving What We Can. GWWC is an organization based in the United Kingdom whose mission is promoting donations to effective charities in the global poverty domain. GWWC is associated with the Centre for Effective Altruism and is one of the best-known organizations in the effective altruism movement, famous among other things for its pledge to give 10% of one's income to charity, which anyone can take to become a member. Erwan will talk about GWWC's mission and work and will take questions from the audience.

As usual, the meetup begins at 19:00 but the talk will only begin around 19:30-19:45. Entrance to the Cluster is from Totseret Ha'aretz street, through a brown door with a doorbell.

See you all there!

Comment author: cousin_it 09 March 2016 12:56:20PM *  12 points [-]

When I started hearing about the latest wave of results from neural networks, I thought to myself that Eliezer was probably wrong to bet against them. Should MIRI rethink its approach to friendliness?

Comment author: Squark 15 April 2016 04:41:30PM *  0 points [-]

"Neural networks" vs. "Not neural networks" is a completely wrong way to look at the problem.

For one thing, very different algorithms are lumped under the title "neural networks". For example, Boltzmann machines and feedforward networks are both called "neural networks", but IMO that's more because the name is fashionable than because of any actual similarity in how they work.
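To make the contrast concrete, here is a minimal NumPy sketch (illustrative toy code, not from the comment): a feedforward network is a deterministic function from input to output, while a (restricted) Boltzmann machine is an energy-based model that you "run" by stochastic sampling, e.g. one step of block Gibbs sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Feedforward network: a deterministic input -> output mapping.
def feedforward(x, W1, b1, W2, b2):
    h = sigmoid(W1 @ x + b1)      # hidden-layer activations
    return sigmoid(W2 @ h + b2)   # output activations

# Restricted Boltzmann machine: "running" it means sampling.
# One step of block Gibbs sampling: sample hidden units given
# visible units, then resample the visible units given the hiddens.
def rbm_gibbs_step(v, W, b_h, b_v, rng):
    p_h = sigmoid(W @ v + b_h)                          # P(h=1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)     # sample h
    p_v = sigmoid(W.T @ h + b_v)                        # P(v=1 | h)
    return (rng.random(p_v.shape) < p_v).astype(float)  # sample v'

# Same input to a feedforward net always gives the same output...
x = rng.random(4)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)
y1 = feedforward(x, W1, b1, W2, b2)
y2 = feedforward(x, W1, b1, W2, b2)

# ...whereas the RBM produces a random binary sample each step.
W, b_h, b_v = rng.normal(size=(3, 4)), np.zeros(3), np.zeros(4)
v = (rng.random(4) < 0.5).astype(float)
v_next = rbm_gibbs_step(v, W, b_h, b_v, rng)
```

The point of the sketch: despite the shared name, one is a pure function you evaluate, the other a probability distribution you sample from, which is why lumping them together obscures more than it reveals.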

More importantly, the really significant distinction is making progress by trial and error vs. making progress through theoretical understanding. The goal of AI safety research should be shifting the balance towards the second option, since it is much more likely to yield results that are predictable and come with provable guarantees. In this context I believe MIRI has correctly identified multiple important problems (logical uncertainty, decision theory, naturalized induction, Vingean reflection). I am mildly skeptical about the attempts to attack these problems using formal logic, but the approaches based on complexity theory and statistical learning theory that I'm pursuing seem completely compatible with various machine learning techniques, including ANNs.

Comment author: aarongertler 02 April 2016 04:47:09AM 6 points [-]

I have taken the survey.

Comment: "90% of humanity" seems a little high for "minimum viable existential risk". I'd think that 75% or so would likely be enough to stop us from getting back out of the hole (though the nature of the destruction could make a major difference here).

Comment author: Squark 04 April 2016 06:45:02PM *  3 points [-]

What makes you think so? The main reason I can see why the death of less than 100% of the population would stop us from coming back is if it were followed by a natural event that finishes off the rest. However, 25% of current humanity seems much more than enough to survive all natural disasters likely to happen in the following 10,000 years. The Black Death killed about half the population of Europe, and even that wasn't enough to destroy the pre-existing social institutions.

In response to comment by Squark on Why CFAR's Mission?
Comment author: pcm 22 January 2016 07:56:59PM 0 points [-]

I disagree. My impression is that SPARC is important to CFAR's strategy, and that aiming at younger people than that would have less long-term impact on how rational the participants become.

In response to comment by pcm on Why CFAR's Mission?
Comment author: Squark 25 January 2016 09:38:39AM *  0 points [-]

Hi Peter! I am Vadim, we met in a LW meetup in CFAR's office last May.

You might be right that SPARC is important, but I really want to hear from the horse's mouth what their strategy is in this regard. I'm inclined to disagree with you regarding younger people; what makes you think so? Regardless of age, I would guess that establishing a continuous education program would have much more impact than a two-week summer workshop. It's not obvious what the optimal distribution of resources is (many two-week workshops for many people, or one long program for fewer people), but I haven't seen such an analysis by CFAR.

In response to Dying Outside
Comment author: Squark 25 January 2016 08:47:45AM *  6 points [-]

The body of this worthy man died in August 2014, but his brain is preserved by Alcor. May a day come when he lives again and death is banished forever.

In response to Why CFAR's Mission?
Comment author: Squark 17 January 2016 06:50:19AM 0 points [-]

It feels like there is an implicit assumption in CFAR's agenda that most of the important things are going to happen within the next decade or two. Otherwise it would make sense to place more emphasis on creating educational programs for children, where the long-term impact can be larger (I think). Do you agree with this assessment? If so, how do you justify the short-term assumption?

Meetup : Tel Aviv: Nick Lane's Vital Question

1 Squark 15 January 2016 06:54PM

Discussion article for the meetup : Tel Aviv: Nick Lane's Vital Question

WHEN: 19 January 2016 07:00:00PM (+0200)

WHERE: Yigal Alon 118, Tel Aviv

This time in LessWrong Tel Aviv, Daniel Armak will review Nick Lane's books on evolutionary biology. The abstract of the talk:

"The largest scale of biology contains many unexplained facts. Many important traits are exclusive to eukaryotes: large cell and genome size, internal complexity, multicellularity, sex and the haploid-diploid cell cycle, mitochondria, phagocytosis, and hundreds more. What is the relation between them? How did they evolve, why do all eukaryotes have them (or once had them), and why don't any other cells have any of them? What are some of them even for? I will present a summary of Nick Lane's books, themselves a popular summary of other researchers' work, that attempts to answer many questions with one big answer."

The event will take place in the Cluster. Entrance is from Totzeret Haaretz street. Ring the doorbell on the brown door.

Facebook event: https://www.facebook.com/events/435804276616336/

Contact number: +972542600919 (Vadim Kosoy)

Comment author: Squark 06 January 2016 03:49:43PM *  1 point [-]

Link to "Limited intelligence AIs evaluated on their mathematical ability", and link to "AIs locked in cryptographic boxes".

In response to comment by lmm on LessWrong 2.0
Comment author: Vaniver 05 December 2015 05:58:09PM 2 points [-]

The social proof effect of physically attending a workshop and spending a weekend around similarly inclined people is not to be underestimated. In-person instruction also provides better feedback for the instructors, allowing for more rapid iteration.

In response to comment by Vaniver on LessWrong 2.0
Comment author: Squark 23 December 2015 04:44:30PM 0 points [-]

On the other hand, articles and books can reach a much larger number of people (case in point: the Sequences). I would really want to see a more detailed explanation by CFAR of the rationale behind their strategy.

Comment author: Squark 18 December 2015 07:31:00PM *  5 points [-]

Thank you for writing this. Several questions.

  • How do you see CFAR in the long term? Are workshops going to remain in the center? Are you planning some entirely new approaches to promoting rationality?

  • How much do you plan to upscale? Are the workshops intended to produce a rationality elite or eventually become more of a mass phenomenon?

  • It seems possible that revolutionizing the school system would have a much higher impact on rationality than providing workshops for adults. SPARC might be one step in this direction. What are your thoughts / plans regarding this approach?
