wedrifid comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu 12 August 2010 02:33PM


Comment author: XiXiDu 14 August 2010 06:01:10PM 4 points

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down.

Where are the formulas? What are the variables? Where is this method worked through so that it reflects the decision process of someone who is already convinced, preferably someone within the SIAI?

That is part of what I call transparency: a foundational, reproducible corroboration of one's first principles.
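To make clear what I am asking for, here is my own guess at a minimal version of such a formula (my notation and placeholder variables, not anything the SIAI has published):

\[
\mathrm{EU}_i(x_i) = p_i \, U_i(x_i), \qquad i^{*} = \arg\max_i \frac{d\,\mathrm{EU}_i}{d x_i},
\]

where $p_i$ is the probability that the claims behind option $i$ are true, $U_i(x_i)$ is the utility of putting $x_i$ dollars into that option, and $i^{*}$ is the basket that gets all the eggs as long as its marginal expected utility per dollar stays the highest. What I want to see is how the SIAI fills in those variables.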

Read the Yudkowsky-Hanson AI Foom Debate.

Awesome, I never came across this until now. Is it not widely mentioned? In any case, what I notice from the wiki entry is that one of the most important ideas, recursive improvement, which might directly support the claims of existential risk posed by AI, is still missing. All of this might be covered in the debate itself, hopefully with references to substantial third-party research papers; I don't know yet.

Read Eric Drexler's Nanosystems.

The whole point of the grey goo example was to illustrate the speed and sophistication of the nanotechnology that would have to be around either to allow an AI to be built in the first place or for it to pose considerable danger. That is, I do not see how an encapsulated AI, even a superhuman AI, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question for the SIAI regarding advanced nanotechnology is whether superhuman AI is possible at all without it.

This is an open question, and I am asking how exactly the uncertainties regarding these problems are accounted for in your probability estimates of the dangers posed by AI.

Exponentials are Kurzweil's thing. They aren't dangerous.

What I was asking about is the likelihood of slow versus fast development of AI. That is, how soon after we get AGI will we see the rise of superhuman AI? The means by which a quick transcendence might happen are incidental to my question.

Where are your probability estimates that account for these uncertainties? Where are the variables and references that allow you to make any kind of estimate weighing the risks of a hard rapture against a somewhat controllable development?

Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility.

You misinterpreted my question. What I meant by asking whether it is even worth the effort is, as exemplified in my link, the question of why to choose the future over the present. That is: “What do we actually do all day, if things turn out well?”, “How much fun is there in the universe?”, “Will we ever run out of fun?”.

Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense.

When I said that I already cannot follow the chain of reasoning depicted on this site, I didn't mean that I was unable to because of intelligence or education. I believe I am intelligent enough, and I am trying to close the education gap. What I meant is that the chain of reasoning is not transparent.

Take the case of evolution: there you are more likely to be able to follow the chain of subsequent conclusions. In the case of evolution the evidence isn't far away; it isn't buried beneath 14 years of ideas built on some hypothesis. In the case of the SIAI it rather seems that there are hypotheses built on other hypotheses that have not yet been tested.

Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you're looking for supporting arguments on FOOM they're in the referenced debate.

What if someone came along making coherent arguments for some existential risk, say that some sort of particle collider might destroy the universe? I would ask what the experts think who are not associated with the person making the claims. What would you think if he simply said, "Do you have better data than me?" Or, "I have a bunch of good arguments"?

Nobody's claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)

I'm not sure what you are trying to say here. What I said was simply that if you claim that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask how you came up with that estimate. I'll ask you to provide more than consistent internal logic: some evidence-based prior.
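To put the request in terms of a formula (again my notation, not anything the SIAI has offered): I am asking for a prior grounded in outside evidence which the internal argument then updates, roughly

\[
P(\text{catastrophe} \mid \text{argument}) = \frac{P(\text{argument} \mid \text{catastrophe}) \, P(\text{catastrophe})}{P(\text{argument})},
\]

where $P(\text{catastrophe})$ comes from something like the track record of comparable predictions or the judgement of unaffiliated experts, not from the very argument whose conclusion is in question.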

...realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.

If your antiprediction is not as well informed as the original prediction, how does it not merely weaken the original prediction but actually overthrow it, to the extent on which the SIAI bases its risk estimates?

Comment author: wedrifid 15 August 2010 03:50:41AM 3 points

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down.

Where are the formulas? What are the variables? Where is this method worked through so that it reflects the decision process of someone who is already convinced, preferably someone within the SIAI?

That is part of what I call transparency: a foundational, reproducible corroboration of one's first principles.

Leave aside SIAI-specific claims here. The point Eliezer was making was about 'all your eggs in one basket' claims in general. In situations like this (where your contribution doesn't drastically change the payoff at the margin, etc.), putting all your eggs in the best basket is the right thing to do.

You can understand that insight completely independently of your position on existential risk mitigation.
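A toy sketch of that insight, with hypothetical causes and made-up numbers that have nothing to do with anyone's actual estimates: discount each option's utility per dollar by the probability that its claims are true, and if your donation is too small to move the margins, the whole budget goes to whichever option comes out on top.

    # Toy illustration only: hypothetical causes and made-up numbers.
    causes = {
        # name: (probability the cause's claims are true,
        #        utility gained per marginal dollar if they are)
        "cause_a": (0.01, 1000.0),
        "cause_b": (0.30, 20.0),
        "cause_c": (0.90, 5.0),
    }

    def discounted_marginal_eu(prob_true, utility_per_dollar):
        """Marginal expected utility per dollar, discounted by the
        probability that the underlying claims are true."""
        return prob_true * utility_per_dollar

    budget = 100.0  # small enough that marginal utilities stay roughly constant
    best = max(causes, key=lambda c: discounted_marginal_eu(*causes[c]))
    allocation = {c: (budget if c == best else 0.0) for c in causes}
    print(allocation)  # {'cause_a': 100.0, 'cause_b': 0.0, 'cause_c': 0.0}

If the budget were large enough that the best option's marginal utility per dollar dropped below the next option's, you would start splitting the allocation, which is exactly the 'unless the marginal utility goes down' clause.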