Comment author: JenniferRM 15 June 2012 11:49:38PM *  -1 points [-]

That's a surprising conclusion to me, one I hadn't seen before but that also doesn't seem too hard to come up with, so I'm curious where I've gone off the rails. This argument has a very Will_Newsomey flavor to me.

Perhaps it is not wise to speculate out loud in this area until you've worked through three rounds of "ok, so what are the implications of that idea" and decided that it would help people to hear about the conclusions you've developed three steps back. You can frequently find interesting things when you wander around, but there are certain neighborhoods you should not explore with children along for the ride until you've been there before and made sure it's reasonably safe.

Perhaps you could send a PM to Will?

Comment author: tenlier 17 June 2012 04:25:12PM 2 points [-]

Not just going meta for the sake of it: I assert you have not sufficiently thought through the implications of promoting that sort of non-openness publicly on the board. Perhaps you could PM jsalvatier.

I'm lying, of course. But interesting to register points of strongest divergence between LW and conventional morality (JenniferRM's post, I mean; jsalvatier's is fine and interesting).

Comment author: VincentYu 14 June 2012 02:12:57PM *  8 points [-]

Establish a scholarship to collect information on young talent

Related: Reaching young math/compsci talent

Create a merit scholarship for the type of young talent that SI wants to attract – this can reveal valuable information about this group of people, and can potentially be used as a targeted publicity tool if handled well.

Information that could be collected from applications

  • Basic personal details (age, location, contact methods, etc.)
  • Education (past and future)
  • Academic interests
  • Career goals
  • Awards and competition results
  • Third-party reviews (i.e., letters of recommendation)
  • Basic personality assessment (see previous LW discussion on correlates with Big Five personality traits: [1], [2], [3])
  • Ideas about and attitudes toward x-risks/FAI/SI/FHI (these could be responses to prompts – as a bonus, applicants are introduced to the content in the prompts)
  • ... Pretty much anything else (personal anecdote: I've revealed things about myself in college and scholarship applications that I have never expressed to anyone else)

Uses of this information

  • Check whether SI is effectively reaching the right people with its current plans.
  • The compiled list of young talent could be directly used to advertise things like SPARC to the right people.
  • General survey tool.

Potential problems and difficulties

  • Its use as an information gathering tool could be seen negatively.
  • Legal issues?
  • Publicity. The scholarship has to be made known to the relevant people, and this has to be done in such a way that SI is seen as a reputable institute. However, a scholarship does open up new avenues for publicity.
  • Cost and manpower.

Is anyone else doing this?

As with many ideas, we ought to be cautious if we see no one else doing something similar. Indeed, I cannot think of any high school scholarship that is used primarily to collect information for the sponsoring organization (is this really the case?). However, there is good reason for this – no one else is interested in reaching the same group of high school students. SI is the only organization I know of who wants to reach high school students for their research group.

FHI had a competition that could be an attempt to collect information, but I'm not sure.

High school scholarships

It would be wise to consult current high school scholarships, and AoPS has a good list.

Comment author: tenlier 16 June 2012 03:15:16PM -1 points [-]

"Indeed, I cannot think of any high school scholarship that is used primarily to collect information for the sponsoring organization (is this really the case?). However, there is good reason for this – no one else is interested in reaching the same group of high school students. SI is the only organization I know of who wants to reach high school students for their research group."

I find this place persistently surprising, which is nice. Try to imagine what you would think if a religious organization did this and how you would feel. It's alright to hold a scholarship to encourage kids to be interested in a topic; not so to garner information for your own purposes, unless that is incredibly clear upfront. Very Gwernian.

Comment author: magfrump 05 June 2012 06:43:38AM 0 points [-]

The reason I thought you didn't understand what I was talking about was that I was calling on examples from day to day life (this is what I took "instrumentalist" to mean), and you started talking about killing people, which is not an event from day to day life.

If you are interested in continuing this discussion (which if not I won't object) let's take this one step at a time; does that difference seem reasonable to you?

Comment author: tenlier 05 June 2012 12:28:33PM 0 points [-]

The day to day life bit is irrelevant. The volitional aspect is not at all. Take the exact sacrifice you described but make it non-volitional. "torturing yourself working at a startup" becomes slavery when non-volitional. Presumably you find that trade-off less acceptable.

The volitional aspect is the key difference. The fact that your life is rich with examples of volitional sacrifice and poor in examples of forced sacrifice of this type is not some magic result that has something to do with how we treat "real" examples in day to day life. It is entirely because "we" (humans) have tried to minimize the non-volitional sacrifices because they are what we find immoral!

Comment author: magfrump 05 June 2012 05:38:52AM 0 points [-]

Certainly there's a difference between what I said and the traditional phrasing of the dilemma; certainly the idea of sacrificing oneself versus another is a big one.

But the OP was asking for an instrumentalist reason to choose torture over dust specks. It is pretty far-fetched to imagine that literally torturing someone will actually accomplish... well, almost anything, unless they're a supervillain creating a contrived scenario in which you have to torture them.

One case where you will actually be trading quality of life for barely-tangible benefit on a large scale is torturing yourself working at a startup. This is an actual decision that people make: making their lives miserable in exchange for minor but widespread public goods. And I fully support the actual trades of this sort that people actually make.

That's my instrumentalist argument for, as a human being, accepting the metaphor of dust specks versus torture, not my philosophical argument for a decision theory that selects it.

Comment author: tenlier 05 June 2012 06:18:51AM 0 points [-]

Was there any reason to think I didn't understand exactly what you said the first time? You agree with me and then restate. Fine, but pointless. Additionally, unimaginative re: potential value of torture. Defending lack of imagination in that statement by claiming torture defined in part by primary intent would be inconsistent.

Comment author: magfrump 05 June 2012 12:38:07AM 1 point [-]

If I recall correctly Alicorn made a reference to reversing the utilities in this argument... would you think it better for someone to give up a life of the purest and truest happiness, if in exchange they created all of the 10 second or less cat videos that will ever be on youtube throughout all of history and the future?

My intuitions here say yes; it can be worth sacrificing your life (i.e. torturing yourself working at a startup) to create a public good which will do a small amount for a lot of people (i.e. making standard immunization injections also give people immunity to dust specks in their eyes)

Comment author: tenlier 05 June 2012 04:39:43AM 5 points [-]

Manipulative phrasing. Of course, it will always seem worth torturing yourself, yadda yadda, when framed as a volitional sacrifice. Does your intuition equally answer yes when asked if it is worth killing somebody to do etc etc? Doubt it (and not a deontological phrasing issue)

Comment author: shminux 04 June 2012 09:54:11PM *  2 points [-]

I expected that my intuitive preference for any number of dust specks over torture would be easy to formalize without stretching it too far. Does not seem like it.

On the other hand, given the preference for realism over instrumentalism on this forum, I'm still waiting for a convincing (for an instrumentalist) argument for this preference.

Comment author: tenlier 05 June 2012 04:25:24AM 1 point [-]

Do you really have that preference?

For example, if all but one of trillions of humans were being tortured and had dust specks, would you feel like trading the torture-free human's freedom from torture for the removal of specks from the tortured? If so, then you are just showing a fairly usual preference (inequality is bad!), which is probably fine as an approximation of stuff you could formalize consequentially.

But that's just an example. Often there's some context in which your moral intuition is reversed, which is a useful probe.

(usual caveat: haven't read the sequences)

Topic for discussion: Less Wrongians are frequentists to a greater extent than most folk who are intuitively Bayesian. The phrase "I must update on" is half code for (p<0.05) and half signalling, since presumably you're "updating" a lot, just like regular humans.

Comment author: Suryc11 12 May 2012 01:51:22AM *  5 points [-]

I don't think it was strictly speaking "persuasive" to anyone

Just as a data point, I was rather greatly persuaded by Karnofsky's argument here. As someone who reads LW more often for the cognitive science/philosophy stuff and not so much for the FAI/Singularity stuff, I did not have a very coherent opinion of the SI, particularly one that incorporated objective critiques (such as Karnofsky's).

Furthermore, I certainly did not, as you assert, know that it is a bad idea to donate to the Singularity Institute. In fact, I had often heard the opposite here.

Comment author: tenlier 12 May 2012 02:52:54AM 2 points [-]

Thanks. That's very interesting to me, even as an anecdote. I've heard the opposite here too; that's why I made it a normative statement ("everyone already should know"). Between the missing money and the publication record, I can't imagine what would make SI look worth investing in to me. Yes, that would sometimes lead you astray. But even posts like, oh: http://lesswrong.com/lw/43m/optimal_employment/?sort=top

are pretty much the norm around here (I picked that since Luke helped write it). Basically, an insufficient attempt to engage with the conventional wisdom.

How much should you like this place just because they're hardliners on issues you believe in? (generic you). There are lots of compatibilists, materialists, consequentialists, MWIers, or whatever in the world. Less Wrong seems unusual in being rather hardline on these issues, but that's usually more a sign that people have turned it into a social issue than a matter of intellectual conviction (or better, competence). Anyway, probably I've become inappropriately off topic for this page; I'm just rambling. To say at least something on topic: A few months back there was an issue of Nature talking about philanthropy in science (cover article and a few other pieces as I recall); easily searchable I'm sure, and may have some relevance (both as SI tries to get money or "commission" pieces).

Comment author: David_Gerard 11 May 2012 08:19:32PM 7 points [-]

Apart from the value of having a smart, sincere person who likes and has seriously tried to appreciate you give you their opinion of you ... Holden's post directly addresses "why the hell should people give money to you?" Particularly as his answer - as a staff member of a charity directory - is "to support your goals, they should not give money to you." That's about as harsh an answer as anyone could give a charity: "you are a net negative."

My small experience is on the fringes of Wikimedia. We get money mostly in the form of lots of small donations from readers. We have a few large donations (and we are very grateful indeed for them!) but we actively look for more small donations (a) to make ourselves less susceptible to the influence of large donors (b) to recruit co-conspirators: if people donate even a small amount, they feel like part of the team, and that's worth quite a lot to us.

The thing is that Wikimedia has never been very good at playing the game. We run this website and we run programmes associated with it. Getting money out of people has been a matter of shoving a banner up. We do A/B testing on the banners! But if we wanted to get rabid about money, there's a lot more we could be doing. (At possible expense of the actual mission.)

SIAI doesn't have the same wide reader base to get donations from. But the goal of a charity that cares about its objectives should be independence. I wonder how far they can go in this direction: to be able to say "well, we don't care what you say about us, us and our readers are enough." I wonder how far the CMR will go.

Comment author: tenlier 11 May 2012 08:44:59PM 2 points [-]

Sorry, I'm not quite understanding your first paragraph. The subsequent piece I agree with completely, and I think it applies to a lot of SI activities in principle (even if they're not looking for small donors). The same idea could roughly guide their outlook on "academic outreach", except it's a donation of time rather than money. For example, gaining credibility from a few big names is probably a bad idea, as is trying to play the game of seeking credibility.

On the first paragraph, apologies for repeating, but just clarifying: I'm assuming that everyone already should know that even if you're sympathetic to SI goals, it's a bad idea to donate to them. Maybe it was a useful article for SI, to better understand why people might feel that way. I'm just saying I don't think it was strictly speaking "persuasive" to anyone. Except, I was initially somewhat persuaded that Karnofsky is worth listening to in evaluating SI. I'm just claiming, I guess, that I was way more persuaded that it was worth listening to Karnofsky on this topic than I should have been, since I think everything he says is too obvious to imply shared values with me. So, in a few years, if he changes his mind on SI, I've now decided that I won't weight that as very important in my own evaluation. I don't mean that as a criticism of Karnofsky (his write-up was obviously fantastic). I'm just explicating my own thought process.