Comment author: Manfred 27 January 2015 10:18:16PM *  0 points [-]

I dunno if the universe can read, jmmcd. ;P

Comment author: jmmcd 28 January 2015 07:50:08AM *  0 points [-]

It can, but it doesn't have the time...

Comment author: jmmcd 27 January 2015 12:34:42PM 0 points [-]

So how can the universe "enjoy itself" as much as possible before the big crunch (or before and during the heat death)?

Maybe read the Fun Theory sequence?

Comment author: jmmcd 17 December 2014 08:16:29AM 2 points [-]

It might be useful to look at Pareto dominance and related ideas, and the way they are used to define concrete algorithms for multi-objective optimisation, e.g. NSGA-II, which is probably the most widely used.
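For readers unfamiliar with the term: a solution Pareto-dominates another if it is at least as good on every objective and strictly better on at least one. A minimal sketch in Python (illustrative only; NSGA-II itself adds non-dominated sorting and crowding distance on top of this relation, and the function names here are my own):

```python
def dominates(a, b):
    """True if a Pareto-dominates b, with all objectives minimised:
    a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

points = [(1, 5), (2, 2), (4, 1), (3, 3)]
print(pareto_front(points))  # (3, 3) drops out: it is dominated by (2, 2)
```

The point relevant to value trade-offs: the front is a *set* of incomparable optima, not a single winner, which is what makes the machinery interesting for multi-objective questions.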

Comment author: [deleted] 02 August 2014 04:15:25PM 1 point [-]

But that's so generic that it misses the point. Saving the world is extraordinary. What things are you doing towards extraordinary outcomes?

In response to comment by [deleted] on Saving the World - Progress Report
Comment author: jmmcd 02 August 2014 10:07:40PM 2 points [-]

OP mentions "I used less water in the shower", so is obviously not only looking for extraordinary outcomes. So "saving the world" does indeed sound silly.

Comment author: jmmcd 16 January 2014 10:55:53PM 2 points [-]

Any AI that would do this is unFriendly. The vast majority of uFAIs have goals incompatible with human life but not in any way concerned with it. [...] Therefore there is little to fear in the way of being tortured by an AI.

That makes no sense. The uFAIs most likely to be created are not drawn uniformly from the space of possible uFAIs. You need to argue that none of the uFAIs which are likely to be created will be interested in humans, not that few of all possible uFAIs will.

Comment author: jmmcd 29 November 2013 02:56:43PM 0 points [-]

Off-topic:

I'm not talking about a basic vocabulary, but a vocabulary beyond that of the average, white, English-as-a-first-language adult.

Why white?

In response to comment by [deleted] on Open Thread, November 15-22, 2013
Comment author: gwern 19 November 2013 12:38:41AM 7 points [-]

He ducked the question, I think, in simply saying that non-marriage was superior, and/or that in heaven no one is married, maybe: Luke 20:27-38:

Some of the Sadducees, who say there is no resurrection, came to Jesus with a question. "Teacher," they said, "Moses wrote for us that if a man's brother dies and leaves a wife but no children, the man must marry the widow and have children for his brother. Now there were seven brothers. The first one married a woman and died childless. The second and then the third married her, and in the same way the seven died, leaving no children. Finally, the woman died too. Now then, at the resurrection whose wife will she be, since the seven were married to her?"

Jesus replied, "The people of this age marry and are given in marriage. But those who are considered worthy of taking part in that age and in the resurrection from the dead will neither marry nor be given in marriage, and they can no longer die; for they are like the angels. They are God's children, since they are children of the resurrection."

Comment author: jmmcd 22 November 2013 01:37:12AM 1 point [-]

Golly, that sounds to me as if the people of this age don't go to heaven!

Comment author: jmmcd 10 November 2013 07:35:12PM 2 points [-]

it's unclear to me how the category of "evolutionary restrictions" could apply to rationality techniques. Suggestions?

Not sure if this simple example is what you had in mind, but -- evolution wasn't capable of making us grow nice smooth erasable surfaces on our bodies, together with ink-secreting glands in our index fingers, so we couldn't evolve the excellent rationality technique of writing things down to remember them. So when writing was invented, the inventor was entitled to say "my invention passes the EOC because of the 'evolutionary restrictions' clause".

Comment author: Kaj_Sotala 05 November 2013 07:30:38AM *  2 points [-]

Humans seem pretty good at making correct predictions even if they have made incorrect predictions in the past. More generally, any agent for whom a single wrong prediction throws everything into disarray will probably not continue to function for very long.

That's basically my point. A human has to predict the answer to questions of the type "what would I do in situation X", and their overall behavior is the sum of their actions over all situations, so they can still get the overall result roughly correct as long as they are correct on average. An AI that's capable of self-modification also has to predict the answer to questions of the type "how would my behavior be affected if I modified my decision-making algorithm in this way", where the answer doesn't just influence the behavior in one situation but all the ones that follow. The effects of individual decisions become global rather than local. It needs to be able to make much more reliable predictions if it wants to have a chance of even remaining basically operational over the long term.

Fair enough. This is an admirable habit that is all too rare, so have an upvote :).

Thanks. :)

Comment author: jmmcd 08 November 2013 09:09:20PM 0 points [-]

And more important, its creators want to be sure that it will be very reliable before they switch it on.

Comment author: Coscott 19 October 2013 08:33:38PM 2 points [-]

This will never catch on unless someone can read the statement on its own, and through google know what it means. (logit=2)

How certain am I of the above statement?


Comment author: jmmcd 19 October 2013 10:43:16PM 2 points [-]

can read the statement on its own

I like the principle behind Markdown: if it renders, fine, but if it doesn't, it degrades to perfectly readable plain-text.

A percentage is just fine.
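Converting between the two notations is mechanical, assuming "logit=2" above denotes natural log-odds (the comment doesn't define it, so that reading is my assumption). A minimal sketch:

```python
import math

def logit_to_probability(logit):
    """Convert natural log-odds to a probability via the logistic function."""
    return 1.0 / (1.0 + math.exp(-logit))

def probability_to_logit(p):
    """Inverse: probability back to natural log-odds."""
    return math.log(p / (1.0 - p))

print(round(logit_to_probability(2), 3))  # 0.881, i.e. about 88%
```

So under that assumption the two formats carry the same information, and the readability argument is just about which one degrades to readable plain text for a passer-by.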
