Comment author: Bryan-san 24 June 2015 04:39:43PM *  2 points [-]

The Rejection Game using Beeminder can be a good start for social skills development in general.

If you're interested in a specific area of social interaction, then finding a partner or two in that area could help. Toastmasters, PUA groups, book clubs, and improv groups fall into this category.

Alternatively, obtaining a job in sales can take you far.

Comment author: William_S 24 June 2015 04:55:12PM 0 points [-]

My impression of Toastmasters is that it might be similar to what I'm looking for, but only covers public speaking.

Comment author: ChristianKl 24 June 2015 04:31:56PM 1 point [-]

Advice about picking in-person training is location-dependent. Without knowing what's available where you live, it's impossible to give good recommendations.

Comment author: William_S 24 June 2015 04:48:44PM *  0 points [-]

Recommendations for in-person training around the Bay Area would be useful (as I'm likely to end up there).

Comment author: William_S 24 June 2015 02:49:26PM *  1 point [-]

Does anyone know of any programs for improving confidence and skill in social situations that involve lots of practice (in real-world situations or in something scripted/roleplayed)? Reading through books on social skills (e.g., How to Win Friends and Influence People) provides a lot of tips that would be useful to implement in real life, but they don't seem to stick without actual practice. The traditional advice of finding a situation in your own life that you would already be involved in hasn't worked well for me, because such situations lack features that are good for learning: they are sporadic and not repeatable, you can't get feedback on your performance from someone who knows what they're doing, a lot is going on beyond the aspects you want to focus on, and things move on without giving you time to think. For example, what I'm looking for might be a workshop that involved a significant amount of time pairing up with other participants and practicing small talk, with breaks in between to cool down, get feedback, and learn new tips to practice in later rounds.

Comment author: William_S 08 May 2015 01:38:43PM 2 points [-]

This post is mentioned in a Slate Star Codex blog post, which gives some examples of blocking groups of people on social media (less sophisticated than what's proposed here) and raises different scenarios of how this trend could play out: http://slatestarcodex.com/2015/05/06/the-future-is-filters/

Comment author: William_S 09 April 2015 01:52:56PM 3 points [-]

Negativity bias (from Wikipedia: the "psychological phenomenon by which humans have a greater recall of unpleasant memories compared with positive memories"), or some other modification reducing the negativity in my perception of the world, for myself.

Maybe toning down scope insensitivity for the whole human race? (I'm not sure completely eliminating it is a good idea - I don't know whether having a brain that feels the correct emotions about arbitrary amounts of suffering would be too traumatic and lead to paralysis. Whatever level of modification leads to correct actions would be best.)

Comment author: William_S 05 April 2015 02:31:17AM 1 point [-]

I think Andrew Ng's position is somewhat reasonable, especially applied to technical work - it does seem that human-level AI would require some things we don't understand, which makes the technical work harder before those things are known (though I don't agree that there's no value in technical work today). However, the analogy to "overpopulation on Mars" leaves open the question of at what point the problem transitions from "something we can't make much progress on today" to "something we can make progress on today". Martian overpopulation would show pretty clear signs once it became a problem, whereas it's quite plausible that the point where technical AI work becomes tractable will not be obvious, and may occur after the point where it's too late to do anything.

I wonder if it would be worth developing and promoting a position that is consistent with technical work seeming intractable and non-urgent today, but with a more clearly defined point at which it becomes something worth working on (e.g., AI passes some test of human-like performance, or some well-defined measure of expert opinion says human-level AI is X years off). In principle, it seems like it would be low-cost for an AI researcher to adopt this sort of position (though in practice, it might be rejected if AI researchers really believe that dangerous AI is too weird and will never happen).

Comment author: KatjaGrace 31 March 2015 04:27:58AM 5 points [-]

Do you agree with Bostrom that humanity should defer non-urgent scientific questions, and work on time-sensitive issues such as AI safety?

Comment author: William_S 04 April 2015 09:38:49PM 1 point [-]

I agree in general, but there are many more things than just AI safety that ought to be worked on more (e.g., research on neglected diseases), and today's AI safety research might reach diminishing returns quickly because we are likely some time away from reaching human-level AI. There's a funding level for AI safety research at which I'd want to think about whether it was too much. I don't think we've reached that point quite yet, but it's probably worth tracking the marginal impact of new AI research dollars/researchers to see if it falls off.

Comment author: William_S 11 March 2015 01:13:13AM 1 point [-]

It seems like building a group of people who have some interest in reducing x-risk (like the EA movement) is a strategy that is less likely to backfire and more likely to produce positive outcomes than the technology pathway interventions discussed in this chapter. Does anyone think this is not the case?

Comment author: KatjaGrace 10 March 2015 02:09:36AM 2 points [-]

What was your favorite part of this section?

Comment author: William_S 11 March 2015 12:08:10AM 1 point [-]

The definition of state risk vs. step risk, and how to define "levers" that one might pull on development (like the macro-structural development accelerator) - these start to break down the technology strategy problem, which previously seemed like a big goopy mess. (It still seems like a big goopy mess, but maybe we can find things in it that are useful.)

Comment author: shminux 08 March 2015 10:12:33PM 1 point [-]

"Lies!" shrieked a tall Slytherin, who'd risen up from that table. "Lies! Lies! The Dark Lord will return, and he'll, he'll teach you all the meaning of -"

What did he mean to say before Snape interrupted him?

Comment author: William_S 09 March 2015 01:26:54AM 23 points [-]

Christmas.
