Comment author: Bryan-san 24 June 2015 04:39:43PM 2 points

The Rejection Game using Beeminder can be a good start for social skills development in general.

If you're interested in a specific area of social interaction, then finding a partner or two in that area could help out. Toastmasters, PUA groups, book clubs, and improv groups fall into this category.

Alternatively, obtaining a job in sales can take you far.

Comment author: William_S 24 June 2015 04:55:12PM 0 points

My impression of Toastmasters is that it might be similar to what I'm looking for, but only covers public speaking.

Comment author: ChristianKl 24 June 2015 04:31:56PM 1 point

Advice about picking in-person training is location-dependent. Without knowing what's available where you live, it's impossible to give good recommendations.

Comment author: William_S 24 June 2015 04:48:44PM 0 points

Recommendations for in-person training around the Bay Area would be useful (as I'm likely to end up there).

Comment author: William_S 24 June 2015 02:49:26PM 1 point

Does anyone know of any programs for improving confidence in social situations and social skills that involve lots of practice (in real-world situations or in something scripted/roleplayed)? Reading books on social skills (e.g. How to Win Friends and Influence People) provides a lot of tips that would be useful to implement in real life, but they don't seem to stick without actual practice. The traditional advice to find a situation in your own life that you would already be involved in hasn't worked well for me, because such situations lack features that would be good for learning: they're sporadic, not repeatable, you can't get feedback on your performance from someone who knows what they're doing, there's a lot going on beyond the aspects you want to focus on, things move on without giving you time to think, etc. For example, what I'm looking for might look like a workshop involving a significant amount of time pairing up with other participants and practicing small talk, with breaks in between to cool down, get feedback, and learn new tips to practice in later rounds.

Comment author: William_S 08 May 2015 01:38:43PM 2 points

This post is mentioned in a Slate Star Codex blog post, which has some examples of blocking groups of people on social media (less sophisticated than proposed here) and raises different scenarios of how this trend could play out: http://slatestarcodex.com/2015/05/06/the-future-is-filters/

Comment author: VoiceOfRa 01 May 2015 02:23:18AM 9 points

What definition of "hate speech" did you use? For example, does mentioning that members of group X have lower IQs and are more likely to commit violent crimes count as "hate speech"? Does it matter if all the relevant statistics indicate this is indeed the case? Does it matter if it's true?

Comment author: William_S 03 May 2015 04:32:49PM -3 points

Sorry, I wasn't meaning to get into the question of how accurate you could be; I just wanted to clarify the technical feasibility of data collection and website modification. The project in question was just for a university course, not intended for anything like the system described in this post. I just used a bag-of-words model, pulling only tweets that contained words typically used as slurs against a particular group. Obviously, accuracy wasn't very good for individual tweet classification: the model only worked well when specific terms were used, and it missed a lot of nuance (e.g. quoted song lyrics). It wasn't what you would need for troll classification.

For anything to work for this application, you'd probably need to limit automatic classification to suggesting that a tweet might be trolling, subject to later manual reclassification, or to identifying users who are frequent and blatant trolls, and it's an open question whether you could actually make something useful from the available data.
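For concreteness, here's a minimal sketch of the kind of two-stage pipeline I mean: pre-filter tweets by specific terms, score them with a bag-of-words model, and only flag for manual review rather than auto-block. The filter terms, word weights, and threshold below are all made-up placeholders; a real model would learn the weights from labelled data.

```typescript
type Verdict = "likely_ok" | "flag_for_review";

// Placeholder filter terms (the real project used known slurs as search terms).
const FILTER_TERMS = new Set(["slur1", "slur2"]);

// Per-word weights that a trained bag-of-words model would supply.
const WORD_WEIGHTS = new Map<string, number>([
  ["slur1", 2.0],
  ["slur2", 2.5],
  ["lyrics", -1.0], // quoted song lyrics were a common false positive
]);

const FLAG_THRESHOLD = 2.0;

function tokenize(text: string): string[] {
  return text.toLowerCase().split(/\W+/).filter((w) => w.length > 0);
}

// Mirrors the data-collection step: only tweets containing a filter term
// were pulled from the API in the first place.
function matchesFilter(tokens: string[]): boolean {
  return tokens.some((t) => FILTER_TERMS.has(t));
}

// Sum the weights of known words; anything over the threshold is only
// *suggested* as possible trolling, for later manual review.
function classify(tweet: string): Verdict {
  const tokens = tokenize(tweet);
  if (!matchesFilter(tokens)) return "likely_ok";
  const score = tokens.reduce((sum, t) => sum + (WORD_WEIGHTS.get(t) ?? 0), 0);
  return score >= FLAG_THRESHOLD ? "flag_for_review" : "likely_ok";
}

console.log(classify("some tweet containing slur1")); // "flag_for_review"
```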

Comment author: William_S 30 April 2015 02:50:51PM -3 points

I think the second problem is solvable. I've extracted suitable data for a project classifying tweets as hate speech from the Twitter APIs (you can at the very least get all tweets containing certain search terms). As for integration with a site, I think it should be possible to create a browser addon that deletes content identified as trolling from a page as you load it (or perhaps replaces it with a trollface image?). I don't know of an example of something that does this offhand, but I know there are addons that, e.g., remove the entire newsfeed from a Facebook page. There might be some problems relative to Facebook or Twitter implementing it themselves, but it would be possible to at least get a start on it.
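As a rough sketch of what the addon's content script might look like (the ".tweet" selector and the isTrolling check are placeholders standing in for a real site's DOM structure and a trained classifier):

```typescript
// Placeholder: a real addon would call the trained classifier here.
function isTrolling(text: string): boolean {
  return /troll-term/i.test(text);
}

// Hide an element whose text is flagged as trolling.
function hideIfTrolling(el: Element): void {
  if (el instanceof HTMLElement && isTrolling(el.innerText)) {
    el.style.display = "none"; // or swap in a trollface image instead
  }
}

// Handle content already on the page at load time.
document.querySelectorAll(".tweet").forEach(hideIfTrolling);

// Sites like Twitter and Facebook load content dynamically, so also
// watch for posts added after the initial load.
new MutationObserver((mutations) => {
  for (const m of mutations) {
    m.addedNodes.forEach((node) => {
      if (node instanceof Element) {
        if (node.matches(".tweet")) hideIfTrolling(node);
        node.querySelectorAll(".tweet").forEach(hideIfTrolling);
      }
    });
  }
}).observe(document.body, { childList: true, subtree: true });
```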

Comment author: William_S 09 April 2015 01:52:56PM 3 points

Negativity bias (from Wikipedia: the "psychological phenomenon by which humans have a greater recall of unpleasant memories compared with positive memories"), or some other modification reducing negativity in my perception of the world, for myself.

Maybe toning down scope insensitivity for the whole human race? (I'm not sure if completely eliminating it is a good idea; I don't know whether having a brain which feels the correct emotions about arbitrary amounts of suffering would be too traumatic and lead to paralysis. Whatever level of modification leads to correct actions.)

Comment author: William_S 05 April 2015 02:31:17AM 1 point

I think that Andrew Ng's position is somewhat reasonable, especially applied to technical work: it does seem like human-level AI would require some things we don't understand, which makes the technical work harder before those things are known (though I don't agree that there's no value in technical work today). However, the "overpopulation on Mars" analogy leaves open the question of at what point the problem transitions from "something we can't make much progress on today" to "something we can make progress on today". Martian overpopulation would have pretty clear signs once it became a problem, whereas it's quite plausible that the point where technical AI work becomes tractable will not be obvious, and may occur after the point where it's too late to do anything.

I wonder if it would be worth developing and promoting a position that is consistent with technical work seeming intractable and non-urgent today, but with a more clearly defined point where it becomes something worth working on (e.g. AI passes some test of human-like performance, or some well-defined measure of expert opinion says human-level AI is X years off). In principle, this seems like it would be low-cost for an AI researcher to adopt (though in practice, it might be rejected if AI researchers really believe that dangerous AI is too weird and will never happen).

Comment author: KatjaGrace 31 March 2015 04:27:58AM 5 points

Do you agree with Bostrom that humanity should defer non-urgent scientific questions, and work on time-sensitive issues such as AI safety?

Comment author: William_S 04 April 2015 09:38:49PM 1 point

I agree in general, but there are a lot of things besides AI safety that ought to be worked on more (e.g. research on neglected diseases), and today's AI safety research might reach diminishing returns quickly because we are likely some time away from reaching human-level AI. There's some funding level for AI safety research at which I'd want to think about whether it was too much. I don't think we've reached that point quite yet, but it's probably worth keeping track of the marginal impact of new AI safety dollars/researchers to see if it falls off.

Comment author: William_S 11 March 2015 01:13:13AM 1 point

It seems like building a group of people who have some interest in reducing x-risk (like the EA movement) is a strategy that is less likely to backfire and more likely to produce positive outcomes than the technology pathway interventions discussed in this chapter. Does anyone think this is not the case?
