Comment author: Aleksei_Riikonen 17 November 2011 11:29:12AM 4 points

so you decide to only do the least prestigeful work available, in order to prove that you are the kind of person who doesn't care about the prestige of the task!

Another variant is to minimize how much you directly inform your comrades of the work you're doing. You tend to get more prestige when people find out about your work through accidental-seeming ways instead of through you telling them. Also, you have aces up your sleeve with which you can play the martyr ("Well, I have been doing such and such and you didn't even know about it!").

Comment author: lukeprog 09 November 2011 03:02:00AM 8 points

This is not much about Singularity Institute as an organization, so I'll just answer it here in the comments.

  • I do not regulate my information diet.
  • I do not have a reading schedule.
  • I do not follow the news.
  • I haven't read fiction in years. This is not because I'm avoiding "fun stuff," but because my brain complains when I'm reading fiction. I can't even read HPMOR. I don't need to consciously "limit" my consumption of "fun stuff" because reading scientific review articles on subjects I'm researching and writing about is the fun stuff.
  • What I'm trying to learn at this moment almost entirely dictates my reading habits.
  • The only thing beyond this scope is my RSS feed, which I skim through in about 15 minutes per day.
Comment author: Aleksei_Riikonen 10 November 2011 02:55:56AM 0 points

I'm glad to hear I'm not the only fan of Eliezer who isn't reading HPMOR.

In general, like you, I also don't tend to get any fiction read these days (unlike earlier). For years I haven't made progress on several books I've started, even though I enjoy reading them and consider them very smart in a semi-useful way. It's rather weird, really, since at the same time I watch some fictional movies and TV series with great enthusiasm, even repeatedly. (And I do read a considerable amount of non-fiction.)

And I follow the news. A lot. The number one fun thing for me, it seems.

Comment author: Eliezer_Yudkowsky 05 October 2011 08:46:13AM 4 points

I don't think you're a Christian. I do think you want Christianity to have a chance in hell, because... well, I'm not going to speculate. Meta-contrarianism would be one reason. Everyone voting down shminux, please note that they never said they thought Goetz was a Christian.

Comment author: Aleksei_Riikonen 05 October 2011 04:50:58PM 0 points

Everyone voting down shminux, please also note that they did say:

You clearly want Christianity to have a chance in hell

it is pointless to argue about it with you, since you have already written your bottom line and will not budge

I'll downvote for those. While I don't claim Goetz's treatment of the topic was perfect, I don't see evidence that it was necessarily motivated by anything other than an honest, curious interest in the topic. Claims that he clearly wants Christianity to have a chance, or that he wouldn't be able to change his mind on the topic, seem to me just as uncalled for as claims that he is a Christian.

Comment author: wedrifid 02 October 2011 08:17:26AM 19 points

Wow! A 20 page essay on "why I'm breaking up with you"? That's just... brutal!

Comment author: Aleksei_Riikonen 04 October 2011 02:11:17AM *  18 points

Wow! A 20 page essay on "why I'm breaking up with you"? That's just... brutal!

And obviously the title should have been:

"In Which I Explain How Natural Selection Has Built Me To Be Attracted To Certain Features That You Lack"

:D

Comment author: Aleksei_Riikonen 04 October 2011 01:57:18AM 1 point

So I broke up with Alice over a long conversation that included an hour-long primer on evolutionary psychology in which I explained how natural selection had built me to be attracted to certain features that she lacked.

LOL

(Just couldn't resist posting my reaction, even though there's already an essentially identical comment.)

It seems this was made a lot more amusing by the fact that you apparently have great social skills these days.

(And it makes me all the more glad I've never broken up with anyone, even though that requirement made it kinda hard to get into a relationship in the first place.)

In response to Polyhacking
Comment author: Aleksei_Riikonen 27 August 2011 09:07:42AM 10 points

Since moving back to the Bay Area I've been out with four other people too, one of whom he's also seeing; I've been in my primary's presence while he kissed one girl, and when he asked another for her phone number; I've gossiped with a secondary about other persons of romantic interest and accepted his offer to hint to a guy I like that this is the case; I hit on someone at a party right in front of my primary. I haven't suffered a hiccup of drama or a twinge of jealousy to speak of and all evidence (including verbal confirmation) indicates that I've been managing my primary's feelings satisfactorily too.</bragging> Does this sort of thing appeal to you?

No.

But I do expect that if humans become immortal superbeings, then given enough time, most people currently in fairytale monogamous relationships will switch to poly. (Though when people are immortal superbeings, I also expect it to become common that they'll spend a very long time if necessary searching for an instance of fairytale monogamy to be their first relationship.)

I guess my philosophy is that fairytale monogamy is optimal for the young (say under 200 years or so), while poly and other non-traditional arrangements are the choice of the adult.

Comment author: Kaj_Sotala 26 August 2011 07:12:57AM 3 points

Relax, I doubt anyone with the ability to produce high-quality thinking is so insecure that (s)he'd be scared of getting a few downvotes on a website.

I wouldn't be surprised to hear that the ability to produce high-quality thinking actually correlated with insecurity. People who spend time developing intellectual skills often neglect developing social skills, and a lack of friends/real social contact then makes them feel insecure.

Comment author: Aleksei_Riikonen 26 August 2011 09:27:43AM 0 points

I think you're probably right if we count more things as "high-quality thinking" than I meant to. But if we're rather strict about what counts as high-quality, I think I'm right.

(Also, I'll emphasize that I wasn't talking about insecurity in general, but about being insecure to such an extent that one refrains from posting high-quality stuff to an anonymity-enabling website for fear of getting downvoted.)

Comment author: XiXiDu 21 August 2011 02:04:03PM 4 points

I would love to see a post on the rationale behind the reputation system on this site.

Imagine a thousand professional philosophers joining lesswrong, or worse, a thousand creationists. If that happened, would someone's karma score still reflect that person's rationality? I'm not saying that this is the case right now, since most people who don't agree with lesswrong won't join or bother to stay around for very long. But technically the lesswrong reputation system is susceptible to failure; it would take just one call by someone like P.Z. Myers to have thousands of mediocre rationalists, or trolls, join and start messing up the voting system.

But that's just the most obvious problem. The availability of a reputation system also discourages people from actually explaining themselves, since they can let off steam or ignore cognitive dissonance by downvoting someone with a single mouse click. If people had to actually write a comment to voice their disagreement, everyone would benefit. The person who is wrong would benefit from being given an actual explanation of why someone disagrees, and therefore couldn't easily believe that the person who disagrees simply dislikes their opinion for irrational reasons. The person who disagrees would have to be more specific and maybe name some concrete reasons for their disagreement, and might thereby notice that it is they who are wrong, or that their disagreement with the other person isn't as strong as they thought. Everyone else reading the conversation would be able to discern whether all parties involved actually understand each other or are talking past each other.

Another problem is that a reputation system might drive away people with valuable insights about certain agreed-upon topics. The initial population of a community might have been biased about something, and the reputation system might then provide a positive incentive to keep the bias and a negative incentive for those who disagree.

Comment author: Aleksei_Riikonen 24 August 2011 09:17:42AM 1 point

Another problem is that a reputation system might drive away people with valuable insights about certain agreed-upon topics.

Relax, I doubt anyone with the ability to produce high-quality thinking is so insecure that (s)he'd be scared of getting a few downvotes on a website. (Myself, I once got an article submission voted to oblivion, but it just felt good in a feeling-of-superiority kind of way, since I thought the LW community was the party being more wrong there -- though I think discovering that I was more wrong than I thought would have felt good too.)

In general, I find it weird how some people manage to take the karma system so seriously. I thought it was acknowledged all along by the community that it's a very crude thing with only very limited usefulness (though still worth having).

Comment author: Eliezer_Yudkowsky 02 July 2011 05:38:23PM 15 points

That is among the reasons why I keep telling SIAI people to never reply to "AI risks are small" with "but a small probability is still worth addressing". Reason 2: It sounds like an attempt to shut down debate over probability. Reason 3: It sounds like the sort of thing people are more likely to say when defending a weak argument than a strong one. Listeners may instinctively recognize that as well.

Existential risks from AI are not under 5%. If anyone claims they are, that is, in emotional practice, an instant-win knockdown argument unless countered; it should be countered directly and aggressively, not weakly deflected.

Comment author: Aleksei_Riikonen 03 July 2011 06:50:55AM 2 points

I enjoyed reading this comment rather a lot, since it allowed me to find myself in the not-too-common circumstance of noticing that I disagree with Eliezer to a significant (for me) degree.

Insofar as I'm able to put a number on my estimate of existential risks from AI, I also think that they're not under 5%. But I'm not really in the habit of getting into debates on this matter with anyone. The case I make for myself (or others) for supporting SIAI is rather of the following kind:

  1. If there are any noticeable existential risks, it's extremely important to spend resources on addressing them.

  2. Looking at the various existential risks, most are somewhat simple to understand (at least after one has expended some effort on them), and are either already receiving a somewhat satisfactory amount of attention or are likely to receive such attention before too long. (This doesn't necessarily mean that they are of small probability, but rather that what can be done already seems like it's mostly gonna get done.)

  3. AI risks stand out as a special case that seems really difficult to understand. There's an exceptionally high degree of uncertainty in the estimates I'm able to make of their probability; in fact, I find it very difficult to make any satisfactorily rigorous estimates at all. Such a lack of understanding is a potentially very dangerous thing. I want to support more research into this.

The key point in my attitude that I would emphasize is the interest in existential risks in general. I wouldn't try to seriously talk about AI risks to anyone who couldn't first be stimulated to find within themselves such a more general serious interest. And then, if people have that general interest, they're interested in going over the various existential risks there are, and it seems to me that sufficiently smart ones realize that AI risks are a more difficult topic than the others (at least after reading e.g. SIAI stuff; things might seem deceptively simple before one has a minimum threshold level of understanding).

So, my disagreement is that I would indeed, to a degree, avoid debates over probability. Once a general interest in existential risks is present, I would argue not about probabilities but about the difficulty of the AI topic, and about how such a lack of understanding is a very dangerous thing.

(I'm not really expressing a view on whether my approach is better or worse, though; I haven't reflected on the matter sufficiently to form a real opinion on that. For the time being, I do continue to cling to my view rather than what Eliezer advocated.)

Comment author: Duke 19 May 2011 07:17:52AM *  2 points

I tend to treat anger and frustration as resulting from my map not matching the territory somewhere. I suspect that your frustration is rooted in inaccurate mapping concerning the prior commitment that prevented you from meeting Patri. My guess is that you correctly assumed there was a small chance that something “better” than your commitment would pop up and that you would have to miss it; but you failed to properly assess the emotional impact this unlikely scenario would have on you. Now you can update your priors, do some re-mapping, and be better prepared emotionally to deal with low-probability/high-annoyingness events.

Also, how similar is the present Patri-hysteria in Finland to the Beatles-hysteria in the 60's?

Comment author: Aleksei_Riikonen 19 May 2011 08:08:32AM *  1 point

Also, how similar is the present Patri-hysteria in Finland to the Beatles-hysteria in the 60's?

One difference is that I'm aware that the former happened, but not that the latter has.

(edit: by "former" and "latter" I mean the chronological order of events, not the order in which they were mentioned in the quoted comment :)
