homunq comments on Reply to Holden on The Singularity Institute - Less Wrong

Post author: lukeprog 10 July 2012 11:20PM




Comment author: homunq 24 July 2012 02:13:25AM · 2 points

But in this case, "more likely to be true" means something like "a good enough argument to move my priors by roughly an order of magnitude, or two at the outside". Since, given our ignorance of the future, reasonable priors could differ by several orders of magnitude, even the best arguments I've seen aren't enough to dismiss any "side" as silly or unworthy of further consideration (except positions that were obviously silly to begin with).
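The arithmetic behind this point can be sketched in odds form. This is a minimal illustration (the specific numbers are hypothetical, chosen only to match the "one order of magnitude of evidence versus several orders of magnitude of prior disagreement" framing above):

```python
import math

def update_odds(prior_odds, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Two reasonable priors that differ by three orders of magnitude,
# e.g. odds of 1:1000 vs 1:1 for some future scenario.
prior_a = 1e-3
prior_b = 1.0

# An unusually strong argument: a likelihood ratio of 10
# (one order of magnitude, as in the comment above).
lr = 10.0

posterior_a = update_odds(prior_a, lr)
posterior_b = update_odds(prior_b, lr)

# The gap between the two posteriors, in log-odds, is unchanged:
# both parties moved one order of magnitude, but still disagree by three.
gap = math.log10(posterior_b / posterior_a)
```

Because a shared argument multiplies everyone's odds by the same likelihood ratio, it shifts all posteriors equally in log-odds and leaves the disagreement between differing priors intact — which is why even a strong argument can't settle which "side" is silly.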

Comment author: DaFranker 24 July 2012 01:56:31PM · 1 point

That's a very good point.

I was intuitively tempted to retort with a bunch of points about the likelihood of exceptions and the information taken into consideration, but I realized before posting that I was falling victim to several biases in that train of thought. You've actually given me a new way to think about the issue. I still have the intuition, though, that any new way of thinking about it will only reinforce my beliefs and support the SI over time.

For now, I'm content to concede that I was leaning too heavily on my priors and on my confidence in my own knowledge of the universe (on which my posteriors for AI issues inevitably depend, one way or another), among possibly other mistakes. However, this seems at first glance to be even more evidence of the need for a new mathematical or logical language in which to discuss these questions in more depth, detail, and formality.