LucasSloan comments on Best career models for doing research? - Less Wrong

27 Post author: Kaj_Sotala 07 December 2010 04:25PM


Comment author: LucasSloan 07 December 2010 04:27:59PM 1 point [-]

How hard is it to live off the dole in Finland? Also, non-academic research positions in think tanks and the like (including, of course, SIAI).

Comment author: Kaj_Sotala 07 December 2010 04:42:20PM 5 points [-]

Not very hard in principle, but I gather it tends to be rather stressful, with things like payments failing to arrive on time happening every now and then. Also, I couldn't shake the feeling of being a leech, justified or not.

Non-academic think tanks are a possibility, but for Singularity-related matters I can't think of any besides SIAI, and its resources are limited.

Comment author: [deleted] 07 December 2010 06:00:59PM 2 points [-]

Many people would steal food to save lives of the starving, and that's illegal.

Working within the national support system to increase the chance of saving everybody/everything? If you would do the first, you should probably do the second. But you need to weigh the plausibility of the get-rich-and-fund-institute option, including the positive contributions of the others you could potentially hire.

Comment deleted 07 December 2010 07:39:36PM *  [-]
Comment author: wedrifid 07 December 2010 08:08:29PM 3 points [-]

I was once chastised by a senior singinst member for not being prepared to be tortured or raped for the cause.

Forget the 'the cause' nonsense entirely. How far would you go just to avoid getting killed yourself? How much torture, per unit of chance that your personal contribution at the margin will prevent your near-term death?

Comment author: Eugine_Nier 08 December 2010 05:56:23AM 2 points [-]

Could we move this discussion somewhere where we don't have to constantly worry about it getting deleted?

Comment author: Nick_Tarleton 08 December 2010 06:55:25AM *  9 points [-]

I'm not aware that LW moderators have ever deleted content merely for being critical of or potentially bad PR for SIAI, and I don't think they're naive enough to believe deletion would help. (Roko's infamous post was considered harmful for other reasons.)

Comment author: waitingforgodel 08 December 2010 07:04:07AM 0 points [-]

"Harmful for other reasons" still has a chilling effect on free speech... and given that those reasons were vague but had something to do with torture, it's not unreasonable to worry about deletion of replies to the above question.

Comment author: Bongo 08 December 2010 02:43:38PM *  2 points [-]

The reasons weren't vague.

Of course this is just your assertion against mine since we're not going to actually discuss the reasons here.

Comment deleted 09 December 2010 05:23:32PM [-]
Comment author: wedrifid 08 December 2010 06:15:13AM *  2 points [-]

There doesn't seem to be anything censor-relevant in my question, and for my part I tend to let big brother worry about his own paranoia and just go about my business. In any case, while the question is an interesting one to me, it doesn't seem important enough to create a discussion somewhere else. At least not until I make a post. Putting aside presumptions of extreme altruism, just how much contribution to FAI development is rational? To what extent does said rational contribution rely on Newcomblike reasoning? How much would a CDT agent contribute on the expectation that his personal contribution will make the difference and save his life?

On second thoughts maybe the discussion does seem to interest me sufficiently. If you are particularly interested in answering me feel free to copy and paste my questions elsewhere and leave a back-link. ;)

Comment author: waitingforgodel 08 December 2010 06:40:49AM -2 points [-]

I think you/we're fine -- just alternate between two tabs when replying, and paste it to the rationalwiki if it gets deleted.

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

Besides, it's looking like after the Roko thing they've decided to cut back on such silliness.

Comment author: Vladimir_Nesov 08 December 2010 11:22:39AM *  9 points [-]

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

You are compartmentalizing. What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech. That the decision conflicts with freedom of speech doesn't necessarily mean that it's incorrect, and if the correct decision conflicts with freedom of speech, or has you kill a thousand children (estimation of its correctness must of course take this consequence into account), it's still correct and should be taken.

(There is only one proper criterion for anyone's actions, goodness of consequences, and if any normally useful heuristic stands in the way, it has to be put down, not because one is opposed to that heuristic, but because in the given situation it doesn't yield the correct decision.)

(This is a note about a problem in your argument, not an argument for correctness of EY's decision. My argument for correctness of EY's decision is here and here.)

Comment author: wedrifid 08 December 2010 11:52:53AM *  4 points [-]

You are compartmentalizing.

This is possible but by no means assured. It is also possible that he simply didn't choose to write a full evaluation of consequences in this particular comment.

Comment author: xamdam 08 December 2010 08:37:17PM *  2 points [-]

whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech.

Sounds like a good argument for the WikiLeaks dilemma (which is of course confused by the possibility that the government is lying their asses off about potential harm).

Comment author: Vladimir_Nesov 08 December 2010 08:43:34PM *  0 points [-]

The question with WikiLeaks is about long-term consequences. As I understand it, the (sane) arguments in favor can be summarized as stating that expected long-term good outweighs expected short-term harm. It's difficult (for me) to estimate whether it's so.

Comment author: Vladimir_Golovin 08 December 2010 12:01:04PM 2 points [-]

What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech.

Upvoted. This just helped me get unstuck on a problem I've been procrastinating on.

Comment author: waitingforgodel 08 December 2010 11:56:28AM 2 points [-]

(There is only one proper criterion for anyone's actions, goodness of consequences, and if any normally useful heuristic stands in the way, it has to be put down, not because one is opposed to that heuristic, but because in the given situation it doesn't yield the correct decision.)

Very much agree btw

Comment author: red75 08 December 2010 03:21:12PM -1 points [-]

Shouldn't AI researchers precommit to not build AI capable of this kind of acausal self-creation? This will lower chances of disaster both causally and acausally.

And please, define how you tell moral heuristics and moral values apart. E.g., which is "don't change moral values of humans by wireheading"?

Comment author: Eugine_Nier 08 December 2010 07:29:20AM 2 points [-]

Besides, it's looking like after the Roko thing they've decided to cut back on such silliness.

I believe EY takes this issue very seriously.

Comment author: waitingforgodel 08 December 2010 07:35:24AM 2 points [-]

Ahh. Are you aware of any other deletions?

Comment author: Eugine_Nier 08 December 2010 07:52:30AM 3 points [-]

Yes, several times other posters have brought up the subject and had their comments deleted.

Comment author: XiXiDu 08 December 2010 08:38:41PM *  3 points [-]

Are you aware of any other deletions?

Here...

I'd like to ask you the following. How would you, as an editor (moderator), handle dangerous information that is more harmful the more people know about it? Just imagine a detailed description of how to code an AGI or create bioweapons. Would you refrain from censoring such information in favor of free speech?

The subject matter here has a somewhat different nature, one that fits a "more people, more probable" pattern. The question is whether it is better to discuss it so as to possibly resolve it, or to censor it and thereby impede it. The problem is that this very question cannot be discussed without deciding not to censor it. That doesn't mean that people cannot work on it, but rather that just a few people can, in private. It is very likely that those people who already know about it are the most likely to solve the issue anyway. The general public would probably only add noise and make it much more likely to happen simply by knowing about it.

Comment author: Vladimir_Nesov 08 December 2010 12:48:25PM *  4 points [-]

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

Following is another analysis.

Consider a die that was tossed 20 times, and each time it fell even side up. It's not surprising because it's a low-probability event: you wouldn't be surprised if you observed most other combinations equally improbable under the hypothesis that the die is fair. You are surprised because a pattern you see suggests that there is an explanation for your observations that you've missed. You notice your own confusion.

In this case, you look at the event of censoring a post (topic), and you're surprised, you don't understand why that happened. And then your brain pattern matches all sorts of hypotheses that are not just improbable, but probably meaningless cached phrases, like "It's convenient", or "To oppose freedom of speech", or "To manifest dictatorial power".

Instead of leaving the choice of a hypothesis to the stupid intuitive processes, you should notice your own confusion, and recognize that you don't know the answer. Acknowledging that you don't know the answer is better than suggesting an obviously incorrect theory, if much more probability is concentrated outside that theory, where you can't suggest a hypothesis.

Comment author: waitingforgodel 08 December 2010 12:56:57PM 3 points [-]

Since we're playing the condescension game, following is another analysis:

You read a (well written) slogan, and assumed that the writer must be irrational. You didn't read the thread he linked you to, you focused on your first impression and held to it.

Comment author: Vladimir_Nesov 08 December 2010 01:21:24PM 1 point [-]

Since we're playing the condescension game

I'm not. Seriously. "Whenever convenient" is a very weak theory, and thus using it is a more serious flaw, but I missed that on first reading and addressed a different problem.

You read a (well written) slogan, and assumed that the writer must be irrational. You didn't read the thread he linked you to, you focused on your first impression and held to it.

Please unpack the references. I don't understand.

Comment deleted 07 December 2010 09:19:55PM [-]
Comment deleted 07 December 2010 09:21:07PM *  [-]
Comment deleted 07 December 2010 09:21:41PM *  [-]
Comment deleted 07 December 2010 09:22:13PM [-]
Comment deleted 07 December 2010 09:23:31PM *  [-]
Comment author: waitingforgodel 08 December 2010 06:56:45AM -1 points [-]

Am I the only one who can honestly say that it would depend on the day?

There's a TED talk I once watched about how Republicans reason on five moral channels and Democrats on only two.

They were (roughly):

  1. harm/care
  2. fairness/reciprocity
  3. in-group/out-group
  4. authority
  5. purity/sanctity

According to the talk, Democrats reason with primarily the first two and Republicans with all of them.

I took this to mean that Republicans were allowed to do moral calculus that Democrats could not... for instance, if I can only reason with the first two, then punching a baby is always wrong (it causes harm, and isn't fair)... If, on the other hand, I'm allowed to reason with all five, it might be okay to punch a baby because my Leader said to do it, or because the baby isn't from my home town, or because my religion says to.

Republicans therefore have it much easier in rationalizing self-serving motives.

(As an aside, it's interesting to note that Democrats must have started with more than just the two when they were young. "Mommy said not to" is a very good reason not to do something when you're young. It seems that they must have grown out of it.)

After watching the TED talk, I was reflecting on how it seems that smart people (myself sadly included) let relatively minor moral problems stop them from doing great things... and on how if I were just a little more Republican (in the five channel moral reasoning sense) I might be able to be significantly more successful.

The result is a WFG that cycles in and out of 2-channel/5-channel reasoning.

On my 2-channel days, I'd have a very hard time hurting another person to save myself. If I saw them, and could feel that human connection, I doubt I could do much more than I myself would be willing to endure to save another's life (perhaps two hours assuming hand-over-a-candle level of pain -- permanent disfigurement would be harder to justify, but if it was relatively minor).

On my 5-channel days, I'm (surprisingly) not so embarrassed to say I'd probably go arbitrarily high... after all, what's their life compared to mine?

Probably a bit more than you were looking to hear.

What's your answer?

Comment author: Eugine_Nier 08 December 2010 07:25:45AM 2 points [-]

I took this to mean that Republicans were allowed to do moral calculus that Democrats could not... for instance, if I can only reason with the first two, then punching a baby is always wrong (it causes harm, and isn't fair)... If, on the other hand, I'm allowed to reason with all five, it might be okay to punch a baby because my Leader said to do it, or because the baby isn't from my home town, or because my religion says to.

First let me say that as a Republican/libertarian I don't entirely agree with Haidt's analysis.

In any case, the above is not quite how I understand Haidt's analysis. My understanding is that Democrats have no way to categorically say that punching (or even killing) a baby is wrong. While they can say it's wrong because, as you said, it causes harm and isn't fair, they can always override that judgement by coming up with a reason why not punching and/or killing the baby would also cause harm. (See the philosophy of Peter Singer for an example.)

Republicans on the other hand can invoke sanctity of life.

Comment author: waitingforgodel 08 December 2010 07:32:29AM 2 points [-]

Sure, agreed. The way I presented it only showed very simplistic reasoning.

Let's just say that, if you imagine a Democrat that desperately wants to do x but can't justify it morally (punch a baby, start a somewhat shady business, not return a lost wallet full of cash), one way to resolve this conflict is to add Republican channels to his reasoning.

It doesn't always work (sanctity of life, etc), but I think for a large number of situations where we Democrats-at-heart get cold feet it works like a champ :)

Comment author: Eugine_Nier 08 December 2010 07:49:26AM 1 point [-]

It doesn't always work (sanctity of life, etc), but I think for a large number of situations where we Democrats-at-heart get cold feet it works like a champ :)

So I've noticed. See the discussion following this comment for an example.

On the other hand, Democrats sometimes take positions that Republicans find horrific, e.g., euthanasia, abortion, Peter Singer's position on infanticide.

Comment author: David_Gerard 08 December 2010 08:27:51AM *  5 points [-]

Peter Singer's media-touted "position on infanticide" is an excellent example of why even philosophers might shy away from talking about hypotheticals in public. You appear to have just become Desrtopa's nightmare.

Comment author: waitingforgodel 08 December 2010 10:48:36AM 2 points [-]

Thanks for the link -- very interesting reading :)

Comment deleted 07 December 2010 09:04:26PM [-]
Comment deleted 07 December 2010 09:21:10PM [-]
Comment author: Bongo 08 December 2010 12:14:46PM *  8 points [-]

(I would have liked to reply to the deleted comment, but you can't reply to deleted comments so I'll reply to the repost.)

  • EDIT: Roko reveals that he was actually never asked to delete his comment! Disregard parts of the rest of this comment accordingly.

I don't think Roko should have been requested to delete his comment. I don't think Roko should have conceded to deleting his comment.

The correct reaction when someone posts something scandalous like

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for the cause

is not to attempt to erase it, even if that were possible, but to reveal the context. The context, supposedly, would make it seem less scandalous - for example, maybe it was a private discussion about philosophical hypotheticals. If it wouldn't, that's a bad sign about SIAI.

The fact that that erasure was the reaction suggests that there is no redeeming context!

That someone asked Roko to erase his comment isn't a very bad sign, since it's enough that one person didn't understand the reasoning above for that to happen. The fact that Roko conceded is a bad sign, though.

Now SIAI should save face not by asking a moderator to delete wfg's reposts, but by revealing the redeeming context in which the scandalous remarks that Roko alluded to were made.

Comment author: CarlShulman 08 December 2010 06:43:02PM *  27 points [-]

Roko may have been thinking of [just called him, he was thinking of it] a conversation we had when he and I were roommates in Oxford while I was visiting the Future of Humanity Institute, and frequently discussed philosophical problems and thought experiments. Here's the (redeeming?) context:

As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.

Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.

Instead, I typically advocate careful introspection and the use of something like Nick Bostrom's parliamentary model:

The idea here is that moral theories get more influence the more probable they are; yet even a relatively weak theory can still get its way on some issues that the theory think are extremely important by sacrificing its influence on other issues that other theories deem more important. For example, suppose you assign 10% probability to total utilitarianism and 90% to moral egoism (just to illustrate the principle). Then the Parliament would mostly take actions that maximize egoistic satisfaction; however it would make some concessions to utilitarianism on issues that utilitarianism thinks is especially important. In this example, the person might donate some portion of their income to existential risks research and otherwise live completely selfishly.
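The parliamentary idea quoted above can be sketched as a toy calculation. This is only an illustration of the weighting logic: the credences, stakes, issue names, and the winner-take-more decision rule below are all assumptions for the sake of the example, not part of Bostrom's model itself.

```ruby
# Toy sketch of Bostrom's parliamentary model (illustrative only; the
# numbers and the decision rule here are assumptions, not Bostrom's).
# Each moral theory gets influence proportional to its probability, and
# each theory reports how much it cares about a given issue. An issue is
# decided by whichever side musters more (influence x stake).

THEORIES = {
  egoism:         0.9,  # credence in moral egoism
  utilitarianism: 0.1,  # credence in total utilitarianism
}

# STAKES[issue][theory] = [how much the theory cares, :for or :against]
STAKES = {
  donate_to_xrisk: { utilitarianism: [50, :for], egoism: [1, :against] },
  lavish_vacation: { utilitarianism: [2, :against], egoism: [10, :for] },
}

def decide(issue)
  votes = Hash.new(0.0)
  STAKES[issue].each do |theory, (stake, side)|
    votes[side] += THEORIES[theory] * stake
  end
  votes[:for] > votes[:against] ? :yes : :no
end

p decide(:donate_to_xrisk)  # utilitarianism's high stake outweighs its low credence
p decide(:lavish_vacation)  # egoism's credence dominates here
```

Both come out `:yes`, which matches the quoted example: the person lives mostly selfishly, but utilitarianism wins the concession it cares most about (some donation to existential-risk research).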

In the conversation with Roko, we were discussing philosophical thought experiments (trolley problem style, which may indeed be foolish) to get at 'real' preferences and values for such an exercise. To do that, one often does best to adopt the device of the True Prisoner's Dilemma and select positive and negative payoffs that actually have emotional valence (as opposed to abstract tokens). For positive payoffs, we used indefinite lifespans of steady "peak experiences" involving discovery, health, status, and elite mates. For negative payoffs we used probabilities of personal risk of death (which comes along with almost any effort, e.g. driving to places) and harms that involved pain and/or a decline in status (since these are separate drives). Since we were friends and roommates without excessive squeamishness, hanging out at home, we used less euphemistic language.

Neither of us was keen on huge sacrifices in Pascal's-Mugging-like situations, viewing altruism as only one part of our respective motivational coalitions, or one term in bounded utility functions. I criticized his past "cheap talk" of world-saving as a primary motivation, given that in less convenient possible worlds, it was more easily overcome than his phrasing signaled. I said he should scale back his claims of altruism to match the reality, in the way that I explicitly note my bounded do-gooding impulses.

We also differed in our personal views on the relative badness of torture, humiliation and death. For me, risk of death was the worst, which I was least willing to trade off in trolley-problem type cases to save others. Roko placed relatively more value on the other two, which I jokingly ribbed and teased him about.

In retrospect, I was probably a bit of a jerk in pushing (normative) Hansonian transparency. I wish I had been more careful to distinguish between critiquing a gap between talk and values, and critiquing the underlying values, and probably should just take wedrifid's advice on trolley-problem-type scenarios generally.

Comment author: waitingforgodel 09 December 2010 03:16:21AM *  2 points [-]

First off, great comment -- interesting, and complex.

But, some things still don't make sense to me...

Assuming that what you described led to:

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for the cause. I mean not actually, but, you know, in theory. Precommitting to being prepared to make a sacrifice that big. shrugs

  1. How did precommitting enter in to it?

  2. Are you prepared to be tortured or raped for the cause? Have you precommitted to it?

  3. Have other SIAI people you know of talked about this with you, have other SIAI people precommitted to it?

  4. What do you think of others who do not want to be tortured or raped for the cause?

Thanks, wfg

Comment author: CarlShulman 09 December 2010 09:03:49AM *  18 points [-]

I find this whole line of conversation fairly ludicrous, but here goes:

Number 1. Time-inconsistency: we have different reactions about an immediate certainty of some bad than a future probability of it. So many people might be willing to go be a health worker in a poor country where aid workers are commonly (1 in 10,000) raped or killed, even though they would not be willing to be certainly attacked in exchange for 10,000 times the benefits to others. In the actual instant of being tortured anyone would break, but people do choose courses of action that carry risk (every action does, to some extent), so the latter is more meaningful for such hypotheticals.

Number 2. I have driven and flown thousands of kilometers in relation to existential risk, increasing my chance of untimely death in a car accident or plane crash, so obviously I am willing to take some increased probability of death. I think I would prefer a given chance of being tortured to a given chance of death, so obviously I care enough to take at least some tiny risk from what I said above. As I also said above, I'm not willing to make very big sacrifices (big probabilities of such nasty personal outcomes) for tiny shifts in probabilities of big impersonal payoffs (like existential risk reduction). In realistic scenarios, that's what "the cause" would refer to. I haven't made any verbal or explicit "precommitment" or promises or anything like that.

In sufficiently extreme (and ludicrously improbable) trolley-problem style examples, e.g. "if you push this button you'll be tortured for a week, but if you don't then the Earth will be destroyed (including all your loved ones) if this fair coin comes up heads, and you have incredibly (impossibly?) good evidence that this really is the setup" I hope I would push the button, but in a real world of profound uncertainty, limited evidence, limited personal power (I am not Barack Obama or Bill Gates), and cognitive biases, I don't expect that to ever happen. I also haven't made any promises or oaths about that.

I am willing to give of my time and effort, and forgo the financial rewards of a more lucrative career, in exchange for a chance for efficient do-gooding, interaction with interesting people who share my values, and a meaningful project. Given diminishing returns to money in rich countries today, and the ease of obtaining money for folk with high human capital, those aren't big sacrifices, if they are sacrifices at all.

Number 3. SIAIers love to be precise and analytical and consider philosophical thought experiments, including ethical ones. I think most have views pretty similar to mine, with somewhat varying margins. Certainly Michael Vassar, the head of the organization, is also keen on recognizing one's various motives and living a balanced life, and avoiding fanatics. Like me, he actively advocates Bostrom-like parliamentary model approaches to combining self-concern with parochial and universalist altruistic feelings.

I have never heard anyone making oaths or promises to make severe sacrifices.

Number 4. This is a pretty ridiculous question. I think that's fine and normal, and I feel more comfortable with such folk than the alternative. I think people should not exaggerate that do-gooding is the most important thing in their life lest they deceive themselves and others about their willingness to make such choices, which I criticized Roko for.

Comment author: multifoliaterose 08 December 2010 08:35:04PM 0 points [-]

Great comment Carl!

Comment author: Bongo 08 December 2010 06:51:16PM 1 point [-]

Thanks!

Comment author: Nick_Tarleton 08 December 2010 04:54:34PM *  6 points [-]

I don't think Roko should have been requested to delete his comment. I don't think Roko should have conceded to deleting his comment.

Roko was not requested to delete his comment. See this parallel thread. (I would appreciate it if you would edit your comment to note this, so readers who miss this comment don't have a false belief reinforced.) (ETA: thanks)

The correct reaction when someone posts something scandalous like

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for the cause

is not to attempt to erase it, even if that was possible, but to reveal the context.... Now SIAI should save face not by asking a moderator to delete wfg's reposts....

Agreed (and I think the chance of wfg's reposts being deleted is very low, because most people get this). Unfortunately, I know nothing about the alleged event (Roko may be misdescribing it, as he misdescribed my message to him) or its context.

Comment author: Bongo 08 December 2010 05:28:22PM *  1 point [-]

Roko said he was asked. You didn't ask him but maybe someone else did?

Comment author: Nick_Tarleton 08 December 2010 05:59:11PM *  4 points [-]

Roko's reply to me strongly suggested that he interpreted my message as requesting deletion, and that I was the cause of him deleting it. I doubt anyone at SIAI would have explicitly requested deletion.

Comment author: waitingforgodel 08 December 2010 02:16:52PM 3 points [-]

If it wouldn't, that's a bad sign about SIAI.

I wish I could upvote twice

Comment author: wedrifid 08 December 2010 02:44:44AM 1 point [-]

Restoring comment above this, for posterity:

How? That is, what tool allowed you to restore the now deleted comments? Browser cache or something more impressive?

Comment author: waitingforgodel 08 December 2010 04:36:19AM 2 points [-]

To be more specific, when I saw that comment I assumed Roko was about to delete it and opened up a second browser window.

I caught your comment with the script, because I've been half sure that EY would delete this thread all day...

Comment author: wedrifid 08 December 2010 04:53:29AM 1 point [-]

Ahh, gotcha.

I like the script by the way... ruby! That is my weapon of choice these days. What is the nokogiri library like? I do a fair bit of work with html automation but haven't used that particular package.

Comment author: waitingforgodel 08 December 2010 05:08:17AM 1 point [-]

It's pretty nice, just a faster version of _why's Hpricot... or a ruby version of jQuery if you're in to that :)

What tools do you use for html automation?

Comment author: waitingforgodel 08 December 2010 04:23:47AM 1 point [-]

A bit of both. I don't maintain a mirror of lesswrong or anything, but I do use a script to make checking for such things easier.

I'd be interested to know what you were hoping for in the way of "more impressive" though :)
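The "script to make checking for such things easier" presumably follows a cache-and-diff pattern. wfg's real script used nokogiri; the stdlib-only sketch below only illustrates the idea, and every name in it (and the `id="comment-N"` markup it matches) is a made-up assumption, not wfg's actual code.

```ruby
# A guess at the shape of wfg's deletion-checker (his real script used
# nokogiri; this stdlib-only sketch just illustrates the cache-and-diff
# idea, and every name here is made up).
require 'set'

# Extract comment ids from raw page HTML. For the sketch we assume
# comment divs carry id="comment-N" attributes; a crude regex suffices.
def comment_ids(html)
  html.scan(/id="comment-(\d+)"/).flatten.to_set
end

# Compare a fresh snapshot against the cached one; anything present
# before but gone now has presumably been deleted.
def deleted_since(cached_html, fresh_html)
  comment_ids(cached_html) - comment_ids(fresh_html)
end

old_page = '<div id="comment-1"></div><div id="comment-2"></div>'
new_page = '<div id="comment-1"></div>'
p deleted_since(old_page, new_page).to_a  # => ["2"]
```

As the follow-up comment notes, this kind of check produces false positives (an edit that blanks a comment looks identical to a deletion unless the cached content is also compared).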

Comment author: waitingforgodel 08 December 2010 04:28:30AM *  0 points [-]

Note that the script is pretty rough -- some false positives, and it wouldn't count a "[redacted]" edit as a deletion (though it would cache the content).

It's more to avoid rescanning the page while working, etc.

Comment author: ata 08 December 2010 02:51:03AM *  1 point [-]

Most likely either browser cache or a left-open browser tab containing the comments, being that the formatting of the line "FormallyknownasRoko | 07 December 2010 07:39:36PM* | 1 point[-]" suggests it was just copied and pasted.

Comment author: waitingforgodel 08 December 2010 04:24:56AM 1 point [-]

Pretty much

Comment deleted 08 December 2010 02:53:59AM [-]
Comment author: waitingforgodel 08 December 2010 04:27:21AM 1 point [-]

See, that doesn't make sense to me. It sounds more like an initiation rite or something... not a thought experiment about quantum billionaires...

I can't picture EY picking up the phone and saying "delete that comment! wouldn't you willingly be tortured to decrease existential risk?"

... but maybe that's a fact about my imagination, and not about the world :p

Comment author: FormallyknownasRoko 08 December 2010 06:10:38PM 0 points [-]

Context was discussing hypothetical sacrifices one would make for utilitarian humanitarian gain, not just from one but from several different conversations.

Comment author: wedrifid 08 December 2010 07:16:32PM 2 points [-]

They actually had multiple conversations about hypothetical sacrifices they would make for utilitarian humanitarian gain? That's... adorable!

Comment author: waitingforgodel 09 December 2010 02:59:10AM 4 points [-]

Care to share a more concrete context?

Comment author: FormallyknownasRoko 09 December 2010 12:25:24PM *  1 point [-]

That is the context in as concrete a way as is possible - discussing what people would really be prepared to sacrifice, versus making signallingly-useful statements. I responded that I wasn't even prepared to say that I would make {sacrifice=rape, being tortured, forgoing many years of good life, being humiliated etc}.

Comment author: waitingforgodel 09 December 2010 03:56:36PM 4 points [-]

Okay, you can leave it abstract. Here's what I was hoping to have explained: why were you discussing what people would really be prepared to sacrifice?

... and not just the surface level of "just for fun," but also considering how these "just for fun" games get started, and what they do to enforce cohesion in a group.

Comment author: FormallyknownasRoko 08 December 2010 06:09:44PM 0 points [-]

Nothing to do with "the ugly".

Comment deleted 07 December 2010 07:59:32PM [-]
Comment deleted 07 December 2010 09:03:43PM [-]
Comment author: Eugine_Nier 08 December 2010 05:54:11AM *  3 points [-]

How hard is it to live off the dole in Finland?

Given the current economic situation in Europe, I'm not sure that's a good long-term strategy.

Also, I suspect spending too long on the dole may cause you to develop habits that'll make it harder to work a paying job.