Comment author: red75 11 January 2012 01:37:17AM *  1 point [-]

What bothers me in The Basic AI Drives is a complete lack of quantitative analysis.

The temporal discount rate isn't even mentioned. There's no analysis of the self-improvement / getting-things-done tradeoff. The influence of the explicit/implicit utility function dichotomy on self-improvement isn't considered.

Comment author: multifoliaterose 11 January 2012 02:54:01AM 1 point [-]

I find some of your issues with the piece legitimate, but I stand by my characterization of the most serious existential threat from AI as being of the type described therein.

Comment author: multifoliaterose 10 January 2012 03:38:26PM *  4 points [-]

The whole of question 3 seems problematic to me.

Concerning parts (a) and (b), I doubt that researchers will know what you have in mind by "provably friendly." For that matter, I myself don't know what you have in mind by "provably friendly," despite having read a number of relevant posts on Less Wrong.

Concerning part (c): I doubt that experts are thinking in terms of the money needed to possibly mitigate AI risks at all; presumably in most cases, if they saw this as a high-priority and tractable issue, they would have written about it already.

Comment author: James_Miller 10 January 2012 04:23:05AM 1 point [-]

Sorry.

Fine, I think. It happens very quickly, unlike later in the semester when I insist that a student trade me her jewelry for a glass of water.

Comment author: multifoliaterose 10 January 2012 04:36:38AM 1 point [-]

To illustrate the fact that the value of goods is determined by their scarcity/abundance relative to demand?

Comment author: James_Miller 10 January 2012 02:26:46AM 1 point [-]

I ask if anyone is wearing gold jewelry.

Comment author: multifoliaterose 10 January 2012 03:23:42AM *  1 point [-]

I don't see the relevance of your response to my question; care to elaborate?

Comment author: multifoliaterose 10 January 2012 01:41:30AM 2 points [-]

I generally agree with paulfchristiano here. Regarding Q2, Q5 and Q6, I'll note that aside from Nils Nilsson, the researchers in question do not appear to be familiar with the most serious existential risk from AGI: the one discussed in Omohundro's The Basic AI Drives. Researchers without this background context are unlikely to deliver informative answers on Q2, Q5 and Q6.

Comment author: Dmytry 10 January 2012 12:22:21AM *  2 points [-]

Well, I was more commenting on the choice of dead children as the currency. I do think that it is possible to improve the world; it's just that the issue is quite complicated.

edit: with regard to AI I do plan to contribute directly... I currently earn my money doing independent game development, but my main talents lie elsewhere (engineering). I was thinking over a dramatically cheap mosquito-zapping laser (putting as much of the complexity into software rather than high-precision hardware). High IQ is similar to being that strong AI: I can solve problems that only a few people can, and there's a shortage of such people and an abundance of problems to solve.

I can't say I care a whole ton though - it's not my fault the world is naturally a hell-hole. Think about it: the condition of suffering evolved because it is very useful for prodding you forward. In natural conditions you suffer a lot; the pain circuitry gets a lot of use. No species can just live happily; evolution will make such a species work harder at reproducing, and suffer.

In response to comment by Dmytry on Dead Child Currency
Comment author: multifoliaterose 10 January 2012 01:27:20AM *  3 points [-]

I was thinking over a dramatically cheap mosquito-zapping laser (putting as much of the complexity into software rather than high-precision hardware).

I don't understand this sentence. Is this something that you were contemplating doing personally? The Gates Foundation has already funded such a project.

I can't say I care a whole ton though - it's not my fault the world is naturally a hell-hole.

I agree with the second clause, but I don't think it has much to do with the first. Most people, upon being confronted by a sabertooth tiger, would care about not being maimed by it, despite the fact that it's not their fault that such a possibility exists. A sense of bearing responsibility for a problem is one route toward caring about fixing it, but there are other routes.

Nevertheless, sadly I can relate to not caring very much.

Think about it: the condition of suffering evolved because it is very useful for prodding you forward. In natural conditions you suffer a lot; the pain circuitry gets a lot of use. No species can just live happily; evolution will make such a species work harder at reproducing, and suffer.

Any reason to think that negative feelings are a more effective motivator than positive feelings? If not, is there any reason to doubt that it's in principle possible for a species to have motivational mechanisms consisting exclusively of rewards?

Comment author: James_Miller 09 January 2012 08:15:54PM 5 points [-]

Do you really do this?

Yes.

They find it amusing. We then continue to discuss opportunity costs.

Comment author: multifoliaterose 10 January 2012 01:23:23AM 2 points [-]

How does the person singled out react?

Comment author: shminux 09 January 2012 09:26:55PM *  0 points [-]

Hmm, I wonder what all these silent downvotes indicate.

Comment author: multifoliaterose 10 January 2012 01:12:08AM *  4 points [-]

I didn't downvote you but I suspect that the reason for the downvotes is the combination of your claim appearing dubious and the absence of a supporting argument.

Last chance to donate for 2011

4 multifoliaterose 30 December 2011 06:25PM

Many LW readers choose to direct their charitable donations to SingInst with a view toward reducing existential risk. Others do not, whether because they feel they lack an understanding of the relevant issues, because they value present-day humans more than future humans, or because they have concerns about the incentive effects that would be created by donating to SingInst at present. I personally feel that there's a strong case for saving money to donate later, on account of better information being available in the future.

However, I feel cognitive dissonance attached to saving to donate later rather than donating now. If you are in this camp, you might consider donating to GiveWell's top-ranked charities. Also note that spreading the word about GiveWell promotes a culture of effective philanthropy, which is likely to have the spin-off effect of interesting people in x-risk reduction, thereby reducing x-risk.

See Holden's article on last-minute donations, http://blog.givewell.org/2011/12/30/last-minute-donations/ :

"Of the money moved to our top charities through our website in 2010, 25% was on December 31st alone. We know that lots of people will be looking to make last-minute donations.

If you only have five minutes but you want to take advantage of the thousands of hours of work we put into finding the best giving opportunities, consider giving to our top charities. They have strong track records, accomplish a lot of good per dollar spent, and have good concrete plans for how to use additional donations.

A couple of things to keep in mind:

  • After you give, spread the word. This is the perfect time to remind people (via Facebook sharing, tweeting, etc.) to give before the year ends. And people making last-minute gifts are likely to be receptive to suggestions.
  • If you have any questions, we’re here to help. We should be available by phone for most of the day, and responding to email when we’re not. (See our contact page). Our research FAQ may also be a good resource."
Comment author: Yvain 25 December 2011 09:59:02PM *  21 points [-]

I do worry sometimes that the pendulum has swung too far in the other direction, and that people are starting to use correlation-causation as an I'm-smarter-than-you sort of status signal - that is, once people pass a certain intelligence level I worry less about them claiming Facebook causes the Greek debt crisis because they're correlated, and more about them hearing a very well-conducted study showing an r = .98 correlation between some disease and some risk factor, and instead of agreeing we should investigate further they just say "HA! GOTCHA! CORRELATION'S NOT THE SAME THING AS CAUSATION!"

I mean, I admit it's an important lesson, as long as people remember it's just a caution against being too certain of a causal relationship, and not a guarantee that a correlation provides absolutely no evidence.

Comment author: multifoliaterose 26 December 2011 03:00:24PM 5 points [-]

once people pass a certain intelligence level

This seems crucial to me; you're really talking about a few percent of the population, right?

Also, I'll note that when (even very smart) people are motivated to believe in the existence of a phenomenon, they're apt to attribute causal structure to correlated data.

For example: It's common wisdom among math teachers that precalculus is important preparation for calculus. Surely taking precalculus has some positive impact on calculus performance, but I would guess that this impact is swamped by preexisting variance in mathematical ability/preparation.
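The confounding worry here can be made concrete with a toy simulation (all variable names and numbers below are mine, invented for illustration, not taken from the thread): let a latent "ability" variable drive both whether a student takes precalculus and how well they do in calculus, with no causal link from precalculus to calculus at all, and the two still come out substantially correlated.

```python
import random

random.seed(0)

# Toy confounding model: latent "ability" drives both whether a student
# takes precalculus and their calculus score. The calculus score has no
# precalc term, so any precalc -> calculus correlation is pure confounding.
n = 10_000
ability = [random.gauss(0, 1) for _ in range(n)]
took_precalc = [1 if a + random.gauss(0, 0.5) > 0 else 0 for a in ability]
calc_score = [a + random.gauss(0, 0.5) for a in ability]  # ability + noise only

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(took_precalc, calc_score)
print(f"precalc/calculus correlation: {r:.2f}")  # clearly positive, zero causal effect
```

Under these made-up parameters the precalc/calculus correlation comes out well above zero even though the data-generating process gives precalculus no effect whatsoever, which is exactly the pattern that would tempt a motivated observer to read causation into it.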
