wedrifid comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM


Comment author: XiXiDu 14 August 2010 06:01:10PM 4 points [-]

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down.

Where are the formulas? What are the variables? Where is this method worked through so as to reflect the decision process of someone who is already convinced, preferably someone within the SIAI?

That is part of what I call transparency: a foundational and reproducible corroboration of one's first principles.
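As a rough illustration only, the allocation rule quoted above can be sketched in code. The causes, probabilities, and diminishing-returns curves below are invented for the example; this is a minimal sketch of "discount by probability, fund the highest marginal expected utility per dollar," not anyone's actual decision procedure:

```python
import math

# Hypothetical causes: (probability the claims are true,
#                       utility of the first dollar,
#                       exponential decay rate per dollar spent)
causes = {
    "cause_a": (0.01, 1000.0, 0.001),
    "cause_b": (0.30, 10.0, 0.0001),
}

def marginal_eu(name, spent):
    """Expected utility of the next dollar given to `name`,
    discounted by the probability the cause's claims are true,
    with exponentially diminishing returns."""
    p, u0, decay = causes[name]
    return p * u0 * math.exp(-decay * spent)

def allocate(budget):
    """Give each dollar to whichever cause currently has the
    highest marginal expected utility per dollar."""
    spent = {name: 0 for name in causes}
    for _ in range(budget):
        best = max(causes, key=lambda n: marginal_eu(n, spent[n]))
        spent[best] += 1
    return spent
```

With these made-up numbers, every dollar goes to the cause with the highest probability-discounted payoff until diminishing returns drive its marginal value below the alternative, at which point spending diversifies, which is the "unless the marginal utility goes down" clause in the quote.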

Read the Yudkowsky-Hanson AI Foom Debate.

Awesome, I never came across this until now. Why isn't it mentioned more widely? In any case, what I notice from the wiki entry is that one of the most important ideas, recursive improvement, which might directly support the claims of existential risk posed by AI, is still missing. All of this might be covered in the debate, hopefully with reference to substantial third-party research papers; I don't know yet.

Read Eric Drexler's Nanosystems.

The whole point of the grey goo example was to illustrate the speed and sophistication of the nanotechnology that would have to exist to either allow an AI to be built in the first place or to be of considerable danger. That is, I do not see how an encapsulated AI, even a superhuman AI, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question related to the SIAI, regarding advanced nanotechnology, is whether, without advanced nanotechnology, superhuman AI is possible at all.

This is an open question, and I am inquiring how exactly the uncertainties regarding these problems are accounted for in your probability estimates of the dangers posed by AI.

Exponentials are Kurzweil's thing. They aren't dangerous.

What I was inquiring about is the likelihood of slow versus fast development of AI. That is, how soon after we get AGI will we see the rise of superhuman AI? The means by which a quick transcendence might happen are incidental to my question.

Where are your probability estimates that account for these uncertainties? Where are your variables and references that allow you to make any kind of estimate balancing the risks of a hard rapture against a somewhat controllable development?

Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility.

You misinterpreted my question. What I meant by asking if it is even worth the effort is, as exemplified in my link, the question of why to choose the future over the present. That is: “What do we actually do all day, if things turn out well?”, “How much fun is there in the universe?”, “Will we ever run out of fun?”

Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense.

When I said that I already cannot follow the chain of reasoning depicted on this site, I didn't mean that I was unable to due to intelligence or education. I believe I am intelligent enough, and I am trying to close the education gap. What I meant is that the chain of reasoning is not transparent.

Take the case of evolution: there you are more likely to be able to follow the chain of subsequent conclusions. With evolution the evidence isn't far away; it isn't buried beneath years of ideas built on top of some hypothesis. In the case of the SIAI, it rather seems that there are hypotheses based on other hypotheses that have not yet been tested.

Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you're looking for supporting arguments on FOOM they're in the referenced debate.

What if someone came along making coherent arguments that some sort of particle collider might destroy the universe? I would ask what the experts think who are not associated with the person making the claims. What would you think if he simply said, "do you have better data than me"? Or, "I have a bunch of good arguments"?

Nobody's claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)

I'm not sure what you are trying to say here. What I said was simply that if you claim that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask you how you came up with that estimate. I'll ask you to provide more than a consistent internal logic: some evidence-based prior.

...realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.

If your antiprediction is not as informed as the original prediction, how can it do more than merely weaken the original prediction, let alone overthrow it to the extent on which the SIAI is basing its risk estimates?

Comment author: wedrifid 15 August 2010 03:52:19AM *  7 points [-]

Another question related to the SIAI, regarding advanced nanotechnology, is whether, without advanced nanotechnology, superhuman AI is possible at all.

Um... yes? Superhuman is a low bar and, more importantly, a completely arbitrary bar.

I'm not sure what you are trying to say here. What I said was simply that if you claim that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask you how you came up with that estimate. I'll ask you to provide more than a consistent internal logic: some evidence-based prior.

Evidence based? By which you seem to mean 'some sort of experiment'? Who would be insane enough to experiment with destroying the world? This situation is exactly where you must understand that evidence is not limited to 'reference to historical experimental outcomes'. You actually will need to look at 'consistent internal logic'... just make sure the consistent internal logic is well grounded on known physics.

What if someone came along making coherent arguments about some existential risk about how some sort of particle collider might destroy the universe? I would ask what the experts think who are not associated with the person who makes the claims. What would you think if he simply said, "do you have better data than me"? Or, "I have a bunch of good arguments"?

And that, well, that is actually a reasonable point. You have been given some links (regarding human behaviour) that are a good answer to the question, but it is nevertheless non-trivial. Unfortunately you are now actually going to have to do the work and read them.

Comment author: XiXiDu 15 August 2010 08:49:46AM 1 point [-]

Um... yes? Superhuman is a low bar...

Uhm...yes? It's just something I would expect to be integrated into any probability estimates of suspected risks. More here.

Who would be insane enough to experiment with destroying the world?

Check the point that you said was a reasonable one. And I have read a lot without coming across any evidence yet. I do expect an organisation like the SIAI to have detailed references and summaries of its decision procedures and probability estimates transparently available, not hidden beneath thousands of posts and comments. "It's somewhere in there, line 10020035, +/- a million lines..." is not transparency! That is, for an organisation that is concerned with something taking over the universe and asks for your money. An organisation, I'm told, of which some members get nightmares just from reading about evil AI...

Comment author: XiXiDu 08 June 2011 02:02:56PM 0 points [-]

...... just make sure the consistent internal logic is well grounded on known physics.

Is it? That smarter(faster)-than-human intelligence is possible is well grounded on known physics? If that is the case, how does it follow that intelligence can be applied to itself effectively, to the extent that one could realistically talk about "explosive" recursive self-improvement?

Comment author: wedrifid 09 June 2011 07:37:58PM 1 point [-]

Not only is there evidence that smarter-than-human intelligence is possible; it is something that should be trivial given a vaguely sane reductionist model. Moreover, you specifically have been given evidence on previous occasions when you have asked similar questions.

What you have not been given, and what are not available, are empirical observations of smarter-than-human intelligences existing now. That is evidence to which you would not be entitled.

Comment author: [deleted] 09 June 2011 08:05:08PM *  2 points [-]

Moreover you specifically have been given evidence on previous occasions when you have asked similar questions.

Please provide a link to this effect? (Going off topic, I would suggest that a "show all threads with one or more comments by users X, Y and Z" or "show conversations between users X and Y" feature on LW might be useful.)

(First reply below)

Comment deleted 09 June 2011 07:59:57PM *  [-]
Comment author: wedrifid 09 June 2011 08:06:58PM 0 points [-]

It is currently not possible for me to either link or quote. I do not own a computer in this hemisphere, and my Android does not seem to have keys for brackets or greater-than symbols. Workarounds welcome.

Comment author: jimrandomh 09 June 2011 08:17:02PM 1 point [-]

The solution varies by model, but on mine, alt-shift-letter physical key combinations produce special characters that aren't labelled. You can also use the on-screen keyboard, and there are more on-screen keyboards available for download if the one you're currently using is badly broken.

Comment author: wedrifid 09 June 2011 10:37:49PM 0 points [-]

SwiftKey X beta. Brilliant!

Comment author: timtyler 09 June 2011 11:14:55PM *  0 points [-]

That smarter(faster)-than-human intelligence is possible is well grounded on known physics?

Some still seem sceptical - and you probably also need some math, compsci and philosophy to best understand the case for superhuman intelligence being possible.