
Comment author: whpearson 27 May 2017 08:10:55PM *  1 point [-]

I didn't organise this one so well, so it was a wash. No concrete plans for next time yet. Other priorities may interfere.

Comment author: SoerenE 29 May 2017 06:30:51AM 0 points [-]

My apologies for not being present. I did not put it into my calendar, and it slipped my mind. :(

Comment author: SoerenE 26 May 2017 11:53:30AM 3 points [-]

You might also be interested in this article by Kaj Sotala: http://kajsotala.fi/2016/04/decisive-strategic-advantage-without-a-hard-takeoff/

Even though you are writing about the exact same subject, there is (as far as I can tell) no substantial overlap with the points you highlight. Kaj Sotala titled his blog post "(Part 1)" but never wrote a subsequent part.

Comment author: whpearson 22 May 2017 03:26:12PM 1 point [-]

Thanks! Fixing now.

Comment author: SoerenE 23 May 2017 05:49:02AM 0 points [-]

Also, it looks like the last time slot is 2200 UTC. I can participate from 1900 onward.

I will promote this in the AI Safety reading group tomorrow evening.

Comment author: SoerenE 22 May 2017 10:30:32AM 0 points [-]

The title says 2017/6/27. Should it be 2017-05-27?

Comment author: SoerenE 16 March 2017 10:10:18AM 0 points [-]

Good luck with the meetup!

In the Skype-based reading group, we followed the "Ambitious" plan from MIRI's reading guide: https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf We liked the plan. Among other things, the guide recommended splitting chapter 9 into two parts, and that was good advice.

Starting from chapter 7, I made slides appropriate for a 30 minute summary: http://airca.dk/reading_group.htm

Be sure to check out the comments from the Lesswrong reading group by Katja Grace: http://lesswrong.com/lw/kw4/superintelligence_reading_group/

Comment author: RedMan 30 January 2017 04:49:38PM 1 point [-]

Addressing your question: Szilard's political action (https://en.m.wikipedia.org/wiki/Einstein–Szilárd_letter) directly led to the construction of the A-bomb and the nuclear arms race. The jury is still out on whether that wipes out the human race.

I assert that at present, the number of AGIs capable of doing as much damage as the two human figures you named is zero. I further assert that the number of humans capable of doing tremendous damage to the earth or the human race is likely to increase, not decrease.

I assert that the risk of AGI, acting without human influence, destroying the human race will never exceed the risk of humans, making use of technology (including AGI), destroying the human race through malice or incompetence.

Therefore, I assert that your If-Then statement is more likely to become true in the future than the opposite (if no humans have the capability to kill all humans, then long-term AI Safety is probably a good priority).

Comment author: SoerenE 30 January 2017 08:40:27PM 0 points [-]

I think I agree with all your assertions :).

(Please forgive me for a nitpick: the opposite statement would be "Many humans have the ability to kill all humans AND AI Safety is a good priority", since NOT (A IMPLIES B) is equivalent to A AND NOT B.)
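As an aside (not part of the original exchange), the equivalence invoked here is easy to check mechanically. Below is a minimal Python truth-table sketch; the helper name `implies` is chosen purely for illustration:

    # Hypothetical illustration: enumerate all truth assignments to check
    # that NOT (A IMPLIES B) is equivalent to (A AND NOT B).
    from itertools import product

    def implies(a: bool, b: bool) -> bool:
        # material implication: A -> B is false only when A is true and B is false
        return (not a) or b

    for a, b in product([False, True], repeat=2):
        lhs = not implies(a, b)   # NOT (A IMPLIES B)
        rhs = a and (not b)       # A AND NOT B
        assert lhs == rhs, (a, b)
        print(f"A={a!s:5} B={b!s:5}  NOT(A->B)={lhs!s:5}  A AND NOT B={rhs!s:5}")

Reading A as "many humans have the capability to kill all humans" and B as "long-term AI Safety is a bad priority", the single row where NOT (A IMPLIES B) holds is exactly the conjunction stated in the nitpick above.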

Comment author: LawrenceC 30 January 2017 05:03:31PM *  1 point [-]

Thanks Søren! Could I ask what you're planning on covering in the future? Is this mainly going to be a technical or non-technical reading group?

I noticed that your group seems to have covered a lot of the basic readings on AI Safety, but I'm curious what your future plans are.

Comment author: SoerenE 30 January 2017 08:10:51PM 0 points [-]

There are no specific plans - at the end of each session we discuss briefly what we should read for next time. I expect it will remain a mostly non-technical reading group.

Comment author: RedMan 30 January 2017 02:56:25PM *  0 points [-]

He produced a then-novel scenario for a technological development which could potentially have that consequence: https://en.m.wikipedia.org/wiki/Cobalt_bomb

He also worked in the field of nuclear weapons development, and may have had access to the necessary material, equipment, and personnel required to construct such a device, or modify an existing device intended for use in a nuclear test.

I assert that my use of 'sufficiently' in this context is appropriate: the intellectual threshold for humanity-destroying action is fairly low, and certainly within the capacity of many humans today.

Comment author: SoerenE 30 January 2017 03:55:40PM 0 points [-]

Do you think Leo Szilard would have had more success through overt means (political campaigning to end the human race) or by surreptitiously adding kilotons of cobalt to a device intended for use in a nuclear test? I think both strategies would be unsuccessful (p<0.001 conditional on Szilard wishing to kill all humans).

I fully accept the following proposition: IF many humans currently have the capability to kill all humans THEN worrying about long-term AI Safety is probably a bad priority. I strongly deny the antecedent.

I guess the two most plausible candidates would be Trump and Putin, and I believe they are exceedingly likely to leave survivors (p=0.9999).

Comment author: RedMan 30 January 2017 02:33:59PM 0 points [-]

What evil can be perpetrated by AGI that cannot be perpetrated by a sufficiently capable human or group of colluding humans?

Leo Szilard could probably have built a bomb that would wipe out the human race; we are still here, and we do not credit that to the success of developing a 'Friendly Hungarian' or the success of the 'Hungarian Safety' research community. Arguably, Edward Teller was a 'slightly unfriendly' Hungarian, and we did OK with him too.

Comment author: SoerenE 30 January 2017 02:50:51PM 0 points [-]

The word 'sufficiently' makes your claim a tautology. A 'sufficiently' capable human is capable of anything, by definition.

Your claim that Leo Szilard probably could have wiped out the human race seems very far from the historical consensus.

Comment author: ignoranceprior 28 January 2017 11:19:02PM 4 points [-]

You could advertise this on /r/ControlProblem too.

Comment author: SoerenE 29 January 2017 07:57:21PM 0 points [-]

Good idea. I will do so.
