paulfchristiano comments on Reflection in Probabilistic Logic - Less Wrong

63 Post author: Eliezer_Yudkowsky 24 March 2013 04:37PM


Comment author: Richard_Loosemore 26 March 2013 07:30:00PM 2 points [-]

This is only meaningful under the assumption that the intelligence of an AI depends on the strength of its proof system. Since the space of intelligent AI systems is not limited to those that depend on proof systems, the entire argument has narrow scope and importance. And since, arguably, the majority of AIs capable of human-level intelligence in the real world will not be dependent on proof systems (but will, instead, be complex systems), the argument's importance diminishes to a vanishingly small level.

Comment author: paulfchristiano 31 March 2013 05:49:34AM 8 points [-]

I can never quite tell to what extent you are being deliberately inflammatory; is there a history I'm missing?

I agree that this work is only relevant to systems of a certain kind, i.e. those which rely on formal logical manipulations. It seems unjustifiably strong to say that the work is therefore of vanishing importance, mostly because you can't justify such confident statements without a much deeper understanding of AGI than anyone can realistically claim to have right now.

But moreover, we don't yet have any alternatives to first order logic for formalizing and understanding general reasoning, and the only possibilities seem to be: (1) make significant new developments that mathematicians as a whole don't expect us to make, or (2) build systems whose reasoning we don't understand except as an empirical fact (e.g. human brains).

I don't deny that (1) and (2) are plausible, but I think that, if those are the best bets we have, we should think that FOL has a good chance of being the right formalism for understanding general reasoning. Which part of this picture do you disagree with? The claim that (1) and (2) are all we have, the inference that FOL is therefore a plausible formalism, or the claim that the OP is relevant to a generic system whose reasoning is characterized by FOL?

I'm also generally skeptical of the sentiment "build an intelligence which mimics a human as closely as possible." This is competing with the principle "build things you understand," and I think the arguments in favor of the second are currently much stronger than those in favor of the first (though this situation could change with more evidence or argument). I think that work that improves our ability to understand reasoning (of which the OP is a tiny piece) is a good investment, for that reason.

Your characterization of the OP is also not quite fair; we don't necessarily care about the strength of the underlying proof system, we are talking about the semantic issue: can you think thoughts like "Everything I think is pretty likely to be true" or can't you? We would like to have a formalism in which you can articulate such thoughts, but in which the normal facts about FOL, completeness theorems, etc. still apply. That is a much more general issue than the problem of boosting a system's proof-theoretic strength in order to boost its reasoning power.
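For concreteness, here is my paraphrase of the semantic issue and the schema the paper proposes (see the OP and linked paper for the precise statement; the notation below is mine):

```latex
% Tarski's obstacle: no consistent theory can contain an exact truth
% predicate for its own language, i.e. a formula True(x) with
%   True(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
% for every sentence \varphi.
%
% The probabilistic workaround assigns each sentence a probability
% \mathbb{P} and demands only a reflection schema with open bounds:
\forall \varphi \;\; \forall a, b \in \mathbb{Q}:\quad
  a < \mathbb{P}(\varphi) < b
  \;\Longrightarrow\;
  \mathbb{P}\bigl(\, a < \mathbb{P}(\ulcorner \varphi \urcorner) < b \,\bigr) = 1
```

If I understand the construction correctly, the strict inequalities are what let the system evade the diagonalization that blocks an exact truth predicate, while still letting it entertain thoughts like "everything I believe is pretty likely to be true."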

Comment author: Kaj_Sotala 31 March 2013 03:44:18PM 4 points [-]

The papers that Richard mentioned in the downstream comment are useful for understanding his view: [1, 2].

Comment author: lukeprog 31 March 2013 06:20:38AM *  4 points [-]

is there a history I'm missing?

IIRC, Eliezer banned Richard from SL4 several years ago. I can't find the thread in which Eliezer banned him, but here is a thread in which Eliezer writes (to Richard) "I am wondering whether to ban you from SL4..."

After a few counter-productive discussions with Richard, I've personally stopped communicating with him.

Comment author: jklsemicolon 31 March 2013 07:48:57AM *  8 points [-]

The "bannination" is here.

EDIT: and here is Eliezer's explanation.

Comment author: Kaj_Sotala 31 March 2013 03:47:21PM 6 points [-]

Note that I've personally had many productive discussions with Richard: he does have a bit of a temper, which is compounded by a history of bad experiences with communities such as SL4 and this one, but he's a very reasonable debate partner when treated with courtesy and respect.

Comment author: Richard_Loosemore 09 May 2013 04:56:29PM 3 points [-]

It says something profound about the LessWrong community that:

(a) Whenever I post a remark, no matter how innocent, Luke Muehlhauser makes a point of coming to the thread to make defamatory remarks against me personally ..... and his comment is upvoted;

(b) When I, the victim of Muehlhauser's attack, point out that there is objective evidence to show that the defamation is baseless and unjustified .... my comment is immediately downvoted.

Comment author: [deleted] 09 May 2013 05:08:38PM 1 point [-]

It says something profound about the ten or so people who have voted on your recent comments, assuming none of the votes come from the same person.

Comment author: shminux 09 May 2013 07:07:18PM *  12 points [-]

I was not aware of the prior history, but I tend to downvote anyone coming across as a bitter asshole with an ax to grind.

Comment author: wedrifid 10 May 2013 10:42:41AM 2 points [-]

I was not aware of the prior history, but I tend to downvote anyone coming across as a bitter asshole with an ax to grind.

Ditto. I hypothesise that if Richard had used a few different words here and there to take out the overt barbs, he might have far more effectively achieved his objective of gaining the moral high ground and making his adversaries look bad.

Comment author: JoshuaZ 10 May 2013 05:44:34PM 0 points [-]

I'd rather not phrase it in terms of adversaries, but the basic point that people would be more inclined to listen to Richard if he were less combative is probably accurate.

Comment deleted 09 May 2013 02:53:26PM [-]
Comment author: ThrustVectoring 09 May 2013 06:04:50PM *  2 points [-]

The above is an extraordinarily good example of why lukeprog no longer talks to you.

Comment author: shminux 31 March 2013 08:05:15AM 1 point [-]

I'm also generally skeptical of the sentiment "build an intelligence which mimics a human as closely as possible." This is competing with the principle "build things you understand,"

Do you really think that one can build an AGI without first getting a good understanding of human intelligence, to the degree where one can be reproduced (but possibly shouldn't be)?

Comment author: Eugine_Nier 02 April 2013 04:46:24AM 9 points [-]

Do you really think that one can build an AGI without first getting a good understanding of human intelligence, to the degree where one can be reproduced

It was possible to achieve heavier-than-air flight without reproducing the flexible wings of birds.

Comment author: shminux 02 April 2013 06:52:31AM 5 points [-]

Right, an excellent point. Biology can be unnecessarily messy.

Comment author: Kawoomba 31 March 2013 09:18:15AM 2 points [-]

Good understanding of the design principles may be enough, or of the organisation into cortical columns and the like. The rest is partly a mess of evolutionary hacks, such as "let's put the primary visual cortex in the back of the brain" (excuse the personification), and probably not essential to a sufficient understanding. So I guess my question would be what granularity of "understanding" you're referring to. 'So that it can be reproduced' seems too low a bar: consider that we found some alien technology we could reproduce strictly by copying it, without having any idea how it actually worked.

Do you 'understand' large RNNs that exhibit strange behavior because you understand the underlying mechanisms and could use them to create other RNNs?

There is a sort of trade-off: you can't go too basic and still consider yourself to understand the higher-level abstractions in a meaningful way, just as the physical layer of the TCP/IP stack in principle carries all the necessary information but is still ... user-unfriendly. Otherwise we could say we understand a human brain perfectly just because we know the laws that govern it on a physical level.

I shouldn't comment when sleep deprived ... ignore at your leisure.