
Comment author: lukeprog 21 July 2015 01:56:19AM 24 points

Donated $300.

Comment author: Mirzhan_Irkegulov 14 July 2015 12:57:03AM 0 points

Historical Jesus studies

Does anyone know whether Tim O'Neill is legit when he talks about the historical Jesus? He claims to have studied Jesus for 25 years, but he is also an amateur historian. (He's also an atheist.)

Comment author: lukeprog 17 July 2015 01:07:17PM 1 point

Never heard of him.

Comment author: Wei_Dai 02 July 2015 04:03:43AM 10 points

See this 1998 discussion between Eliezer Yudkowsky and Nick Bostrom. Some relevant quotes from the thread:

Nick: For example, if it is morally preferred that the people who are currently alive get the chance to survive into the postsingularity world, then we would have to take this desideratum into account when deciding when and how hard to push for the singularity.

Eliezer: Not at all! If that is really and truly and objectively the moral thing to do, then we can rely on the Post-Singularity Entities to be bound by the same reasoning. If the reasoning is wrong, the PSEs won't be bound by it. If the PSEs aren't bound by morality, we have a REAL problem, but I don't see any way of finding this out short of trying it.

Nick: Indeed. And this is another point where I seem to disagree with you. I am not at all certain that being superintelligent implies being moral. Certainly there are very intelligent humans that are also very wicked; I don't see why once you pass a certain threshold of intelligence then it is no longer possible to be morally bad. What I might agree with, is that once you are sufficiently intelligent then you should be able to recognize what's good and what's bad. But whether you are motivated to act in accordance with these moral convictions is a different question.

Eliezer: Do you really know all the logical consequences of placing a large value on human survival? Would you care to define "human" for me? Oops! Thanks to your overly rigid definition, you will live for billions and trillions and googolplexes of years, prohibited from uploading, prohibited even from ameliorating your own boredom, endlessly screaming, until the soul burns out of your mind, after which you will continue to scream.

Nick: I think the risk of this happening is pretty slim and it can be made smaller through building smart safeguards into the moral system. For example, rather than rigidly prescribing a certain treatment for humans, we could add a clause allowing for democratic decisions by humans or human descendants to overrule other laws. I bet you could think of some good safety-measures if you put your mind to it.

Nick: How to control a superintelligence? An interesting topic. I hope to write a paper on that during the Christmas holiday. [Unfortunately it looks like this paper was never written?]

I assume Bostrom called it something else.

He used "control", which is apparently still his preferred word for the problem today, as in "AI control".

Comment author: lukeprog 02 July 2015 07:08:14AM 4 points

For those who haven't been around as long as Wei Dai…

Eliezer tells the story of coming around to a more Bostromian view, circa 2003, in his Coming of Age sequence.

Comment author: lukeprog 25 June 2015 08:42:51PM 4 points

Just FYI, I plan to be there.

Comment author: lukeprog 24 June 2015 05:43:12PM 1 point

Any idea when the book is coming out?

Comment author: Houshalter 08 June 2015 01:32:53AM 1 point

Take a look at this image.

Stuart Russell said recently, "The commercial investment in AI in the last five years has exceeded the entire worldwide government investment in AI research since its beginnings in the 1950s."

Comment author: lukeprog 08 June 2015 01:57:10AM 2 points

Just FYI to readers: the source of the first image is here.

Comment author: lukeprog 30 May 2015 12:17:06AM 5 points

I don't know if this is commercially feasible, but I do like this idea from the perspective of building civilizational competence at getting things right on the first try.

Comment author: lukeprog 29 May 2015 06:17:43PM 19 points

Might you be able to slightly retrain so as to become an expert on medium-term and long-term biosecurity risks? Biological engineering presents a serious global catastrophic risk (GCR) over the next 50 years (and of course after that as well), and very few people are trying to think through the issues on more than a 10-year time horizon. FHI, CSER, GiveWell, and perhaps others each have a decent chance of wanting to hire people into such research positions over the next few years. (GiveWell is looking to hire a biosecurity program manager right now, but I assume you can't acquire the requisite training and background immediately.)

Comment author: ciphergoth 29 April 2015 04:32:32PM 8 points

This isn't a first for CFAR or MIRI - I hope you guys are putting lots of thought into how to have your last-minute ideas earlier :-)

Comment author: lukeprog 29 April 2015 11:57:55PM 5 points

I think it's partly a matter of not planning far enough in advance, but also partly a greater-than-usual willingness to Try Things that seem like good ideas even if the timeline is a bit rushed. That's how the original minicamp happened; it ended up going so well that it inspired us to develop and launch CFAR.

Comment author: RyanCarey 24 April 2015 10:01:29PM 1 point

For what it's worth, I used xelatex and some of Alex Vermeer's code, but I can't see why either would affect the links, and I can't find any explanation for why this would occur in Sumatra. I'll just sit on this for now, but if more people report a similar issue, I'll look further. Thanks.

Comment author: lukeprog 26 April 2015 03:02:30AM 1 point

People have complained about Sumatra not working with MIRI's PDF ebooks, too. It was hard enough already to get our process to output the links we want on most readers, so we decided not to make the extra effort to additionally support Sumatra. I'm not sure what it would take.
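
As an aside for readers wondering what "outputting the links" involves: in a xelatex workflow, internal PDF links are typically generated by the hyperref package. The following is only an illustrative sketch under that assumption (it is not the actual MIRI or Alex Vermeer build), showing the kind of document whose link annotations some viewers reportedly handle inconsistently.

    % Illustrative sketch only: a minimal xelatex document whose internal links
    % come from hyperref's link annotations. Whether a given viewer (e.g. Sumatra)
    % follows these annotations is up to the viewer, not the source.
    \documentclass{book}
    \usepackage{fontspec}            % font handling under xelatex
    \usepackage[hidelinks]{hyperref} % turns \ref, \cite, and TOC entries into PDF links

    \begin{document}
    \tableofcontents

    \chapter{Introduction}\label{ch:intro}
    With hyperref loaded, the table-of-contents entry above and this
    cross-reference to Chapter~\ref{ch:intro} are emitted as clickable
    link annotations in the output PDF.
    \end{document}

Compiled with "xelatex file.tex" (run twice so the table of contents and references resolve), the resulting PDF has working internal links in most viewers; if they fail in only one reader, the problem is likely on the viewer's side rather than in the source.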
