Vaniver comments on Our Phyg Is Not Exclusive Enough - Less Wrong

25 [deleted] 14 April 2012 09:08PM


Comment author: Vaniver 16 April 2012 08:52:06AM 1 point

You seem to be modeling the AGW disputant's decision policy as if he is internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes, as opposed to having "actual belief about AGW" as a latent node that isn't introspectively accessible.

I'm describing it that way, but I don't think the introspection is necessary; it's just easier to talk about as if he had full access to his mind. (Private beliefs don't have to be beliefs that the mind's narrator has access to, and oftentimes they are kept out of its reach for security purposes!)

But if you want to model actual people's policies in complex situations then the naive Bayesian approach (e.g. with influence diagrams) doesn't work or is way too cumbersome. Does your experience differ from mine?

I don't think I've seen any Bayesian modeling of that sort of thing, but I haven't gone looking for it.

Bayes nets in general are difficult for people, rather than computers, to manipulate, and so it's hard to decide what makes them too cumbersome. (Bayes nets in industrial use, like for fault diagnostics, tend to have hundreds if not thousands of nodes, but you wouldn't have a person traverse them unaided.)

If you wanted to code a narrow AI that determined someone's mood by, say, webcam footage of them, I think putting your perception data into a Bayes net would be a common approach.
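To make the shape of that approach concrete, here is a minimal sketch of the kind of Bayes net being gestured at: a hidden "mood" node whose children are observed perception features (say, outputs of a smile detector and a frown detector run on the webcam frames). All node names and probability numbers here are hypothetical, invented purely for illustration; the posterior over mood is computed by straightforward enumeration.

```python
# A toy two-level Bayes net: hidden "mood" -> observed binary features.
# All probabilities are made-up illustrative numbers, not from any real model.

# Prior over the hidden mood node
prior = {"happy": 0.5, "sad": 0.5}

# P(feature present | mood): one conditional probability table per feature
likelihood = {
    "smile": {"happy": 0.8, "sad": 0.1},
    "frown": {"happy": 0.1, "sad": 0.7},
}

def posterior(observations):
    """P(mood | observations), where observations maps feature name -> bool."""
    scores = {}
    for mood, p in prior.items():
        # Features are conditionally independent given mood in this net,
        # so the joint likelihood is a simple product.
        for feature, present in observations.items():
            p_feature = likelihood[feature][mood]
            p *= p_feature if present else (1.0 - p_feature)
        scores[mood] = p
    total = sum(scores.values())
    return {mood: score / total for mood, score in scores.items()}

print(posterior({"smile": True, "frown": False}))
```

A real perception system would have far more nodes and learned (not hand-set) probability tables, but the structure, noisy observed features hanging off a latent state you want to infer, is the same.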

Political positions / psychology seem tough. I could see someone doing belief-mapping and correlation in a useful way, but I don't see analysis on the level of Steve_Rayhawk's post coming out of a computer-run Bayes net anytime soon, and I don't think drawing out a Bayes net would help significantly with that sort of analysis. Possible, but unlikely; we've got pretty sophisticated dedicated hardware for very similar things.

Another more theoretical reason I encourage caution about the "belief as anticipation" idea is that I don't think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination

Hmm. I'm going to need to sleep on this, but this sort of coordination still smells to me like anticipation.

(A general comment: this conversation has moved me towards thinking that it's useful for the LW norm to be tabooing "belief" and using "anticipation" instead when appropriate, rather than trying to equate the two terms. I don't know if you're advocating for tabooing "belief", though.)