
Comment author: cousin_it 09 March 2016 12:56:20PM *  12 points [-]

When I started hearing about the latest wave of results from neural networks, I thought to myself that Eliezer was probably wrong to bet against them. Should MIRI rethink its approach to friendliness?

Comment author: Squark 15 April 2016 04:41:30PM *  0 points [-]

"Neural networks" vs. "Not neural networks" is a completely wrong way to look at the problem.

For one thing, very different algorithms are lumped under the title "neural networks". For example, Boltzmann machines and feedforward networks are both called "neural networks", but IMO that is more because the name is fashionable than because of any actual similarity in how they work.
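
To make the contrast concrete, here is a minimal, purely illustrative sketch (not from the original comment; the function names and the NumPy-based setup are my own assumptions): a feedforward network computes its output in a single deterministic pass, while a Boltzmann machine is a stochastic system whose binary units are repeatedly resampled so that the network settles toward an equilibrium distribution.

    # Illustrative sketch only: a feedforward pass vs. a Boltzmann
    # machine's stochastic Gibbs update. Names and shapes are
    # hypothetical, not taken from any particular library.
    import numpy as np

    rng = np.random.default_rng(0)

    def feedforward_pass(x, W1, W2):
        """Deterministic single pass: input -> hidden -> output."""
        h = np.tanh(W1 @ x)      # hidden layer activations
        return np.tanh(W2 @ h)   # output layer activations

    def boltzmann_gibbs_sweep(s, W, b):
        """One stochastic sweep over a Boltzmann machine's binary units.
        W is symmetric with a zero diagonal; repeated sweeps sample from
        the machine's equilibrium distribution rather than computing a
        fixed input-to-output function."""
        for i in range(len(s)):
            net_input = W[i] @ s + b[i]                # signed drive to unit i
            p_on = 1.0 / (1.0 + np.exp(-net_input))    # sigmoid probability
            s[i] = 1.0 if rng.random() < p_on else 0.0
        return s

    # Tiny usage example with random parameters.
    x = rng.normal(size=4)
    W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(2, 3))
    print(feedforward_pass(x, W1, W2))

    W = rng.normal(size=(5, 5)); W = (W + W.T) / 2
    np.fill_diagonal(W, 0.0)                  # no self-connections
    s = rng.integers(0, 2, size=5).astype(float)
    for _ in range(10):                       # let the network settle
        s = boltzmann_gibbs_sweep(s, W, b=np.zeros(5))
    print(s)

The point of the sketch: one model is a fixed function, the other defines a probability distribution and "computes" by sampling, so lumping them under one name says little about how either works.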

More importantly, the really significant distinction is between making progress by trial and error and making progress by theoretical understanding. The goal of AI safety research should be to shift the balance towards the latter, since it is much more likely to yield results that are predictable and satisfy provable guarantees. In this context I believe MIRI correctly identified multiple important problems (logical uncertainty, decision theory, naturalized induction, Vingean reflection). I am mildly skeptical about the attempts to attack these problems using formal logic, but the approaches based on complexity theory and statistical learning theory that I'm pursuing seem completely compatible with various machine learning techniques, including ANNs.

Comment author: aarongertler 02 April 2016 04:47:09AM 6 points [-]

I have taken the survey.

Comment: "90% of humanity" seems a little high for "minimum viable existential risk". I'd think that 75% or so would likely be enough to stop us from getting back out of the hole (though the nature of the destruction could make a major difference here).

Comment author: Squark 04 April 2016 06:45:02PM *  3 points [-]

What makes you think so? The main reason I can see why the death of less than 100% of the population would stop us from getting back out of the hole is if it were followed by a natural event that finishes off the rest. However, 25% of current humanity seems far more than enough to survive any natural disaster likely to happen in the following 10,000 years. The Black Death killed about half the population of Europe, and even that wasn't enough to destroy the pre-existing social institutions.

In response to comment by Squark on Why CFAR's Mission?
Comment author: pcm 22 January 2016 07:56:59PM 0 points [-]

I disagree. My impression is that SPARC is important to CFAR's strategy, and that aiming at younger people than that would have less long-term impact on how rational the participants become.

In response to comment by pcm on Why CFAR's Mission?
Comment author: Squark 25 January 2016 09:38:39AM *  0 points [-]

Hi Peter! I am Vadim; we met at a LW meetup at CFAR's office last May.

You might be right that SPARC is important, but I really want to hear from the horse's mouth what their strategy is in this regard. I'm inclined to disagree with you regarding younger people; what makes you think so? Regardless of age, I would guess that establishing a continuous education program would have much more impact than a two-week summer workshop. It's not obvious what the optimal distribution of resources is (many two-week workshops for many people, or one long program for fewer people), but I haven't seen such an analysis by CFAR.

In response to Dying Outside
Comment author: Squark 25 January 2016 08:47:45AM *  6 points [-]

The body of this worthy man died in August 2014, but his brain is preserved by Alcor. May a day come when he lives again and death is banished forever.

In response to Why CFAR's Mission?
Comment author: Squark 17 January 2016 06:50:19AM 0 points [-]

It feels like there is an implicit assumption in CFAR's agenda that most of the important things are going to happen within the next decade or two. Otherwise it would make sense to place more emphasis on creating educational programs for children, where the long-term impact can be larger (I think). Do you agree with this assessment? If so, how do you justify the short-term assumption?

Comment author: Squark 06 January 2016 03:49:43PM *  1 point [-]

Link to "Limited intelligence AIs evaluated on their mathematical ability", and link to "AIs locked in cryptographic boxes".

In response to comment by lmm on LessWrong 2.0
Comment author: Vaniver 05 December 2015 05:58:09PM 2 points [-]

The social proof effect of physically attending a workshop and spending a weekend around similarly inclined people is not to be underestimated. In-person instruction also provides better feedback for the instructors, allowing for more rapid iteration.

In response to comment by Vaniver on LessWrong 2.0
Comment author: Squark 23 December 2015 04:44:30PM 0 points [-]

On the other hand, articles and books can reach a much larger number of people (case in point: the Sequences). I would really want to see a more detailed explanation by CFAR of the rationale behind their strategy.

Comment author: Squark 18 December 2015 07:31:00PM *  5 points [-]

Thank you for writing this. Several questions:

  • How do you see CFAR in the long term? Are workshops going to remain central? Are you planning some entirely new approaches to promoting rationality?

  • How much do you plan to scale up? Are the workshops intended to produce a rationality elite, or to eventually become more of a mass phenomenon?

  • It seems possible that revolutionizing the school system would have a much higher impact on rationality than providing workshops for adults. SPARC might be one step in this direction. What are your thoughts / plans regarding this approach?

Comment author: Squark 01 October 2015 09:02:02PM *  0 points [-]

!!! It is October 27, not 28 !!!

Also, it's at 19:00

Sorry, but it's impossible to edit the post.
