Comment author: gjm 30 June 2015 12:25:52PM 2 points [-]

because that is the most effective way to satisfy almost any possible goal function

Perhaps more accurate: because that is a likely side effect of the most effective way (etc.).

Comment author: ciphergoth 03 July 2015 04:36:15PM 1 point [-]

Not a side effect. The most effective way to pursue almost any goal is to consume the entire cosmic commons, on the chance that all that computation turns up an even better strategy. We have our own ideas about what we'd like to do with the cosmic commons, and we might not like the AI doing that; we might even act to try to prevent it or slow it down in some way. Therefore killing us all ASAP is a convergent instrumental goal.

Comment author: ciphergoth 03 July 2015 02:32:32PM 0 points [-]

Discovery Institute Fellow Erik J Larson

He has held the titles of Chief Scientist at an AI-based startup whose first customer was Dell (Dell Legal) and Senior Research Engineer at the AI company 21st Century Technologies in Austin, has worked as an NLP consultant for Knowledge Based Systems, Inc., and has consulted with other companies in Austin, helping to design AI systems that solve problems in natural language understanding.

Larson's been writing plenty of stuff critical of AI risk discussion lately; apparently even The Atlantic is keen to hear the creationist viewpoint.

Comment author: Wei_Dai 02 July 2015 04:03:43AM *  10 points [-]

See this 1998 discussion between Eliezer and Nick. Some relevant quotes from the thread:

Nick: For example, if it is morally preferred that the people who are currently alive get the chance to survive into the postsingularity world, then we would have to take this desideratum into account when deciding when and how hard to push for the singularity.

Eliezer: Not at all! If that is really and truly and objectively the moral thing to do, then we can rely on the Post-Singularity Entities to be bound by the same reasoning. If the reasoning is wrong, the PSEs won't be bound by it. If the PSEs aren't bound by morality, we have a REAL problem, but I don't see any way of finding this out short of trying it.

Nick: Indeed. And this is another point where I seem to disagree with you. I am not at all certain that being superintelligent implies being moral. Certainly there are very intelligent humans that are also very wicked; I don't see why once you pass a certain threshold of intelligence then it is no longer possible to be morally bad. What I might agree with, is that once you are sufficiently intelligent then you should be able to recognize what's good and what's bad. But whether you are motivated to act in accordance with these moral convictions is a different question.

Eliezer: Do you really know all the logical consequences of placing a large value on human survival? Would you care to define "human" for me? Oops! Thanks to your overly rigid definition, you will live for billions and trillions and googolplexes of years, prohibited from uploading, prohibited even from ameliorating your own boredom, endlessly screaming, until the soul burns out of your mind, after which you will continue to scream.

Nick: I think the risk of this happening is pretty slim and it can be made smaller through building smart safeguards into the moral system. For example, rather than rigidly prescribing a certain treatment for humans, we could add a clause allowing for democratic decisions by humans or human descendants to overrule other laws. I bet you could think of some good safety-measures if you put your mind to it.

Nick: How to control a superintelligence? An interesting topic. I hope to write a paper on that during the Christmas holiday. [Unfortunately it looks like this paper was never written?]

I assume Bostrom called it something else.

He used "control", which is apparently still his preferred word for the problem today, as in "AI control".

Comment author: ciphergoth 03 July 2015 06:52:07AM 6 points [-]

This is fascinating, thank you! It feels like, while Nick was pointing in the right direction and Eliezer in the wrong one here, this was from a time before either of them had had the insights that bring us to seeing the problem in anything like the way we see it today. Large strides had been made by the time CFAI was published three years later, but as Eliezer tells it in his "coming of age" story, his "naturalistic awakening" didn't come until another couple of years after that.

Comment author: tanagrabeast 03 July 2015 05:55:41AM 8 points [-]

This is probably my dream job... the job I would do for free if I had the means. But any idea of the salary range? Could someone with a family (and a spouse's teacher salary) possibly hope to live close enough to Berkeley to be effective?

Comment author: ciphergoth 03 July 2015 06:09:26AM 9 points [-]

MIRI are pretty flush right now. Obviously they have to get the most out of every dollar they can to give the world the best chance, but if they found just the right person and the only issue was paying them enough to get them, my guess is they'd be pretty generous.

Comment author: ciphergoth 30 June 2015 09:51:57AM 8 points [-]

Three more myths, from Luke Muehlhauser:

  • We don’t think AI progress is “exponential,” nor that human-level AI is likely ~20 years away.
  • We don’t think AIs will want to wipe us out. Rather, we worry they’ll wipe us out because that is the most effective way to satisfy almost any possible goal function one could have.
  • AI self-improvement and protection against external modification isn’t just one of many scenarios. Like resource acquisition, self-improvement and protection against external modification are useful for the satisfaction of almost any final goal function.

A similar list by Rob Bensinger:

  • Worrying about AGI means worrying about narrow AI
  • Worrying about AGI means being confident it’s near
  • Worrying about AGI means worrying about “malevolent” AI

Comment author: viuuiuvy 03 June 2015 06:54:34PM -11 points [-]

MIRI doesn't have influence in its field and shows no progress towards what it believes in. That is what the data shows.

Comment author: ciphergoth 10 June 2015 07:21:36PM 1 point [-]

Netcraft confirms it!

Comment author: MileyCyrus 11 May 2012 01:54:01PM 8 points [-]

Can you please add links to the other objections in each of these posts? Just to make the articles a little stickier.

Comment author: ciphergoth 26 May 2015 12:28:15PM 0 points [-]

I've now created a summary page on the Wiki: Holden Karnofsky.

Comment author: ciphergoth 24 May 2015 09:13:12PM *  6 points [-]

In scientific thought we adopt the simplest theory which will explain all the facts under consideration and enable us to predict new facts of the same kind. The catch in this criterion lies in the word "simplest." It is really an aesthetic canon such as we find implicit in our criticisms of poetry or painting. The layman finds such a law as dx/dt = K(d²x/dy²) much less simple than "it oozes," of which it is the mathematical statement. The physicist reverses this judgment, and his statement is certainly the more fruitful of the two, so far as prediction is concerned. It is, however, a statement about something very unfamiliar to the plain man, namely, the rate of change of a rate of change.

-- J.B.S. Haldane, Possible Worlds
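A minimal unpacking of Haldane's formula, for anyone who wants the notation spelled out (the variable names are an assumption on my part; Haldane doesn't define his symbols, but the standard diffusion reading takes x as the diffusing quantity, t as time, and y as position):

```latex
% One-dimensional diffusion: the mathematical statement of "it oozes".
% dx/dt, the rate of change in time, is proportional to d^2x/dy^2,
% the second derivative in space: "the rate of change of a rate of change".
\frac{dx}{dt} = K \, \frac{d^{2}x}{dy^{2}}
```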

Comment author: Lumifer 08 May 2015 08:33:25PM *  22 points [-]

The, ahem, money quote:

Interestingly, those who invested their own money in forecasting the outcome performed a lot better in predicting what would happen than did the pollsters. The betting markets had the Conservatives well ahead in the number of seats they would win right through the campaign and were unmoved in this belief throughout. Polls went up, polls went down, but the betting markets had made their mind up. The Tories, they were convinced, were going to win significantly more seats than Labour.

A bet is a tax on bullshit :-)

Comment author: ciphergoth 09 May 2015 07:33:12AM *  7 points [-]

FWIW, betting odds on who would be prime minister after the election were almost exactly 50:50 right up until the BBC exit poll was announced.

Comment author: lukeprog 29 April 2015 11:57:55PM 5 points [-]

I think it's partly not doing enough far-advance planning, but also partly just a greater-than-usual willingness to Try Things that seem like good ideas even if the timeline is a bit rushed. That's how the original minicamp happened, which ended up going so well that it inspired us to develop and launch CFAR.

Comment author: ciphergoth 30 April 2015 05:48:24AM 5 points [-]

I know, but something seems not-quite-right about this. If you had all the same events at the same times, but thought of them earlier and so had longer to plan them, you'd be strictly better off. I can think of two constraints that can make rushed timelines like this make sense:

  • you're ideas-bound, not resources-bound: there's little you can do to have ideas any earlier than you already do.
  • the ideas only make sense to implement in the light of information you didn't have earlier, so you couldn't have started acting on them before.

If you're happy that you're already pushing these constraints as far as it makes sense to, then I'll stop moaning :)
