Comment author: [deleted] 12 May 2012 12:06:16PM -1 points [-]

without expressing any apparent remorse about your behavior

huh?

I did obtain some measure of revenge later that night by spanking her rear end hard, though I do not advise doing such things. She was not amused and her brother threatened me, though as I had apologized, that was the end of it.

(emphasis added)

In response to comment by [deleted] on The Social Coprocessor Model
Comment author: Airedale 12 May 2012 01:11:13PM 3 points [-]

Read in the context of the entire thread, I take this as a non-apology apology, not an expression of remorse or contrition. In the thread, Mallah continued to take the position that the woman “deserved” the spanking, and it appears to me that the apology was made in order to avoid future confrontation/trouble, not out of remorse. Moreover, Mallah also commented:

It was a mistake. Why? It exposed me to more risk than was worthwhile, and while I might have hoped that (aside from simple punishment) it would teach her the lesson that she ought to follow the Golden Rule, or at least should not pull the same tricks on guys, in retrospect it was unlikely to do so.

Remorse involves some genuine feeling of regret that one's actions had been wrong in some ethical or moral sense, not merely reconsideration because they had been ill-advised in a practical sense.

[Link] Atlantic Interview with Nick Bostrom - "We're Underestimating the Risk of Human Extinction"

11 Airedale 07 March 2012 04:25PM

http://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/

My apologies if I've missed this posted anywhere else (google and my scanning the sidebar didn't turn it up).  I'm not sure that there's much that will be new to those who have been following existential risk elsewhere, but it's nice to see an article like this in a relatively mainstream publication.  Bostrom discusses issues such as the concept of existential risk and certain specific types of existential risk, why we as humans underestimate that risk, possible strategies to address existential risk, the simulation argument, how Hollywood and literature don't generally portray existential risk helpfully, and other issues.


Comment author: Swimmer963 06 March 2012 01:12:28AM *  5 points [-]

You're probably right. I think that the underlying thought running through my head was "it would be weird to put Part I in discussion but Part II in main." (I was originally planning to post everything together, but decided that a) it would be too long, and b) I wanted feedback in order to continue with my research.)

Do you think it would be a good idea to move it to discussion at this point? I think you can do that by going back to 'edit'.

Comment author: Airedale 06 March 2012 03:35:40PM *  3 points [-]

I don't know that moving it is necessary at this point, but it's something to keep in mind for the future. It's not like there's a bright-line rule; it just struck me as more appropriate for Discussion.

Also, on substance, one possible book to take a look at is The Inner Game of Tennis. Since you have a background in sports, and sports competition seems to be one of the areas where you've had this problem most often, that and/or other sports psychology books might be an interesting way for you to get into the issues. I haven't read it in years, and I'm not sure that it addresses your situation precisely, but I recall it as an interesting and useful read, even if some of it was a little psychobabbly.

edited to add - I also wonder if bringing in some anecdotes from pro/elite athletes who have struggled with emotional issues on the court/field, etc. might be a good way to add interest and also make it somewhat more universal beyond your particular situation.

Comment author: Airedale 06 March 2012 12:54:37AM 13 points [-]

I would personally prefer to see this in Discussion. Your personal story is interesting (and I recognize some of it in myself), but I don't think the personal background (plus your brief request for recommended literature, plan for Part II, etc.) is a sufficiently fleshed out idea at this point given that you aren't yet at the point of offering any guidance on solving the problem. Of course, Part II's literature review/recommendations may be of more benefit to the community and be a better fit for Main.

Comment author: Airedale 27 February 2012 10:29:23PM 2 points [-]

Is 1:46 a typo?

Comment author: steven0461 06 November 2011 08:42:49PM *  12 points [-]

To whoever is upvoting this, it seems like you must be taking one of the following positions:

  1. It is safe to post any view on LessWrong. Doing so will not get you in trouble, or cause blowups.
  2. It is unsafe to post certain views on LessWrong, but if you hold such a view, you are morally obliged to argue for it and suffer the punishment (possibly at the hands of me or my allies).
  3. It is unsafe to post certain views on LessWrong, and you are allowed not to argue for them, but you are not allowed to suggest that this unsafety has any sort of distorting effect on the resulting discussion.

Could you guys clarify?

Comment author: Airedale 07 November 2011 10:30:36PM 9 points [-]

Isn't it possible that Prismattic's comment could be receiving so many upvotes because other people also find comments of the sort described irritating and are embracing the opportunity to signal that irritation? Like Prismattic, I don't generally downvote comments on this basis alone. But I'm definitely tired of seeing the types of comments described, especially in those instances when, at least to my eyes, the commenters seem to be affecting a certain world-weary sorrow and wisdom while hinting at the profound truths that could be freely discussed but for -alas!- the terrible tyranny of modern social norms. But because the commenters are hiding the exact substance of their own views, there's no basis on which to judge whether these views are, as Prismattic suggests, actually more correct than the mainstream view, or perhaps equally or even more wrong in some different direction.

Comment author: loswinter 25 August 2011 04:42:08AM 2 points [-]

I am interested in attending but won't be in Chicago until after Sept. 6th. Do you expect to have a recurring meetup?

Thanks!

Comment author: Airedale 25 August 2011 10:21:40PM 2 points [-]

Yes, we have somewhat irregularly occurring meetups that are announced here on LW and on our Google Group e-mail list (which can be accessed from Nic's Discussion Group Post link).

Comment author: Will_Newsome 21 August 2011 05:47:06PM 6 points [-]

Unless you would be much less involved in this potential program than the comment indicates, this seems like an inappropriate request.

I was more interested in Less Wrong's interest in new FAI-focused organizations generally than in anything particularly tied to me.

Comment author: Airedale 21 August 2011 10:11:13PM 6 points [-]

Fair enough, but in light of your phrasing in both the original comment ("If I [did the following things]") and your comment immediately following it (quoted below; emphasis added), it certainly appeared to me that you seemed to be describing a significant role for yourself, even though your proposal was general overall.

(Some people, including me, would really like it if a competent and FAI-focused uber-rationalist non-profit existed. I know people who will soon have enough momentum to make this happen. I am significantly more familiar with the specifics of FAI (and of hardcore SingInst-style rationality) than many of those people and almost anyone else in the world, so it'd be necessary that I put a lot of hours into working with those who are higher status than me and better at getting things done but less familiar with technical Friendliness. But I have many other things I could be doing. Hence the question.)

Comment author: Airedale 21 August 2011 04:33:44PM 2 points [-]

Steven0461 and I will probably be able to make it. Thanks for taking the initiative!

Comment author: Will_Newsome 19 August 2011 06:03:55PM *  14 points [-]

(Please don't upvote this comment till you've read it fully; I'm interpreting upvotes in a specific way.) Question for anyone on LW: If I had a viable preliminary Friendly AI research program, aimed largely at doing the technical analysis necessary to determine as well as possible the feasibility and difficulty of Friendly AI for various values of "Friendly", and wrote clearly and concretely about the necessary steps in pursuing this analysis, and listed and described a small number of people (less than 5, but how many could actually be convinced to focus on doing the analysis would depend on funds) who I know of who could usefully work on such an analysis, and committed to have certain summaries published online at various points (after actually considering concrete possibilities for failure, planning fallacy, etc., like real rationalists should), and associated with a few (roughly 5) high status people (people like Anders Sandberg or Max Tegmark, e.g. by convincing them to be on an advisory board), would this have a decent chance of causing you or someone you know to donate $100 or more to support this research program? (I have a weird rather mixed reputation among the greater LW community, so if that affects you negatively please pretend that someone with a more solid reputation but without super high karma is asking this question, like Steven Kaas.) You can upvote for "yes" and comment about any details, e.g. if you know someone who would possibly donate significantly more than $100. (Please don't downvote for "no", 'cuz that's the default answer and will drown out any "yes" ones.)

Comment author: Airedale 21 August 2011 04:22:15PM 7 points [-]

I have a weird rather mixed reputation among the greater LW community, so if that affects you negatively please pretend that someone with a more solid reputation but without super high karma is asking this question, like Steven Kaas.

Unless you would be much less involved in this potential program than the comment indicates, this seems like an inappropriate request. If people view you negatively due to your posting history, they should absolutely take that information into account in assessing how likely they would be to provide financial support to such a program (assuming that the negative view is based on relevant considerations such as your apparent communication or reasoning skills as demonstrated in your comments).
