Comment author: gRR 27 April 2012 01:24:23AM 2 points

After reading the article, I thought I understood it, but from reading the comments, this appears to be an illusion. Yet I think I should be able to understand it; it doesn't seem to require any special math or radically new concepts... My understanding is below. Could someone check it and tell me where I'm wrong?

The proposal is to define a utility function U(), which takes as input some kind of description of the universe and returns an evaluation of this description: a number between 0 and 1.

The function U is defined in terms of two other functions, H and T: H is a mathematical description of a specific human brain, and T is an infinitely powerful computing environment.

Although the U-maximizing AGI will not be able to actually calculate U, it will be able to reason about it (that is, prove theorems about it), which should allow it to take at least some actions that are provably friendly.
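
To pin down the type structure I have in mind, here is a minimal runnable sketch; H, T, and their stub bodies are placeholders I invented so the example executes, not the actual construction from the article:

```python
def H(description):
    # Stand-in for the mathematical description of a specific human
    # brain, reduced here to a trivial evaluator so the sketch runs.
    return 0.5

def T(brain, description):
    # Stand-in for the infinitely powerful computing environment: it
    # lets the brain model deliberate about the description.
    return brain(description)

def U(description):
    """Evaluate a description of the universe as a number in [0, 1]."""
    score = T(H, description)
    assert 0.0 <= score <= 1.0
    return score

print(U("some description of the universe"))  # -> 0.5
```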

Comment author: paulfchristiano 26 April 2012 05:10:50AM * 2 points

H is used to start off the process. H is then able to interact with a hypothetical unbounded computer, which may eventually run many (potentially quite exotic) simulations, among them simulations of the sorts of minds humans self-modify into.
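
A toy sketch of that bootstrapping step (the successor mind and all stub bodies below are invented purely for illustration):

```python
def run_simulation(mind, task):
    # Stand-in for the hypothetical unbounded computer.
    return mind(task)

def successor_mind(task):
    # Stand-in for one of the exotic, self-modified minds that the
    # process might eventually simulate.
    return 0.9

def H(task):
    # H starts the process, then delegates to a mind it trusts more.
    return run_simulation(successor_mind, task)

print(run_simulation(H, "evaluate this outcome"))  # -> 0.9
```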

Comment author: gRR 26 April 2012 11:09:16AM -1 points

But your point (as I understood it) is that all these exotic simulations don't actually get run; they mostly just get reasoned about. If this is so, then as we go further into the future, U becomes increasingly obsolete.

Comment author: gRR 26 April 2012 03:23:47AM 1 point

Possible objection: the proposal appears to fix U in terms of a mathematical description H of some current human brain. What happens in the future, when humans significantly self-modify?

Comment author: Stuart_Armstrong 25 April 2012 12:25:46PM 0 points

EY does not think that modal logic is wrong as a mathematical theory, but that if we interpret it philosophically as its creators seem to intend, we will be led astray and believe we have gained an understanding of "necessity" and "possibility" when we haven't actually done so.

I tend to agree with him on this. His comment seemed to have the subtext "you shouldn't be bringing up these ideas on Less Wrong without some strong justification; it will lead people down fruitless avenues". I was bringing them up to illustrate semantics for relevance logics, but that wasn't clear in the post.

Comment author: gRR 25 April 2012 01:14:21PM 0 points

I'm not sure modal logic's creators ever intended it as an explanation of "necessity" and "possibility". It was always a description of how those two notions (and other similar modalities) should behave. Kripke semantics has more of an 'explanation' flavor, but it too is a description.
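
For instance, a Kripke model "describes" the modalities by reducing them to quantification over accessible worlds. A toy model, with all specifics invented for illustration:

```python
# Three worlds, an accessibility relation, and a valuation saying
# at which worlds the atom p holds.
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
true_at = {"p": {"w2", "w3"}}

def necessarily(atom, world):
    # Box: the atom holds at every world accessible from `world`.
    return all(w in true_at[atom] for w in access[world])

def possibly(atom, world):
    # Diamond: the atom holds at some world accessible from `world`.
    return any(w in true_at[atom] for w in access[world])

print(necessarily("p", "w1"))  # True: p holds at w2 and w3
print(possibly("p", "w3"))     # False: w3 accesses no worlds
```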

The thought that we (LW participants interested in these things) will be led astray by a slight exposure to the forbidden topic is kind of offensive. I mean, we already have a satisfactory explanation and understanding of "possibility", don't we?

Comment author: Dorikka 25 April 2012 02:50:38AM 1 point

False dichotomy?

Comment author: gRR 25 April 2012 11:41:22AM * -1 points

Not quite a dichotomy. I'm thinking in terms of the evolution of a painful topic in the noosphere. Something like the 'five stages of grief': denial, anger, bargaining, etc. :)

Comment author: Stuart_Armstrong 25 April 2012 09:36:12AM 0 points

I don't get the impression he was annoyed by the mathematical theory; rather, he was annoyed at me for bringing it up on Less Wrong, and wanted to know if I had any justification for doing so.

Comment author: gRR 25 April 2012 11:28:49AM 0 points

I'm not sure what you mean. Do you frequently bring up modal logic in conversations? Do you think he wouldn't be annoyed if someone else brought it up? Do you think he'd be similarly annoyed if you brought up normal conformal Cartan connections instead?

Comment author: JoshuaZ 24 April 2012 11:24:48PM 3 points

The politicization of existential risk is not something to be happy about. Existential risk has higher stakes (arguably the highest stakes), so we need to be more careful about failures of rationality, not happy that politics has infected this area as well.

Comment author: gRR 25 April 2012 12:29:55AM 0 points

'Politicization' seems to be an unavoidable stage. And it's much better than total unconcern.

Comment author: JoshuaZ 24 April 2012 02:04:40PM * 12 points

As far as I can tell, some of the most recent conversations to feature the most uncivil remarks are conversations about whether AI risk is a serious problem and, if so, what should be done about it. The thread on Luke's discussion with Pei Wang seems to be the most recent example. This also appears to be more common in threads that discuss mainstream attitudes about AI risk and where they disagree with common LW opinion. Given that, I'm becoming worried that AI risk estimates may be becoming a tribalized belief category. Should we worry that AI risk is becoming, or has become, a mindkiller?

Comment author: gRR 24 April 2012 10:17:42PM -1 points

Isn't it something to celebrate? If the idea of AI risk is to be taken seriously, it can't not become political.

Comment author: DSimon 18 April 2012 10:52:19PM * 10 points

"It needs to be a drawn out and painful and embarrassing process."

Oh, you want a Quest, not a goal. :-)

In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.

Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.

Comment author: gRR 22 April 2012 07:16:39PM 1 point

And now I realize I just did exactly that, and your prediction is absolutely correct. No bonus points for me, though.

Comment author: shminux 21 April 2012 09:09:06PM 1 point

How often do you find that your intuitive conclusion was faulty? And what do you do with your intuition in that case?

Comment author: gRR 21 April 2012 10:15:05PM 0 points

Too frequently for comfort... I update down my estimate of its reliability.
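
One way to picture that update, with a made-up prior and track record:

```python
# Hedged illustration: track intuition's hit rate as a Beta posterior
# and watch the estimate drop after one more faulty conclusion.
# The Beta(1,1) prior and the counts are invented for this example.

hits, misses = 8, 2                      # hypothetical track record
mean = (1 + hits) / (2 + hits + misses)  # posterior mean reliability
print(round(mean, 2))                    # 0.75

misses += 1                              # intuition misfires again
mean = (1 + hits) / (2 + hits + misses)
print(round(mean, 2))                    # 0.69 -- updated down
```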
