Rain comments on Should I believe what the SIAI claims? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Where are the formulas? What are the variables? Where is this method exemplified so as to reflect the decision process of someone who's already convinced, preferably someone within the SIAI?
That is part of what I call transparency: a foundational and reproducible corroboration of one's first principles.
Awesome, I never came across this until now. Is it not widely mentioned? Anyway, what I notice from the Wiki entry is that one of the most important ideas, recursive self-improvement, which might directly support the claims of existential risk posed by AI, is still missing. All this might be featured in the debate, hopefully with reference to substantial third-party research papers; I don't know yet.
The whole point of the grey goo example was to exemplify the speed and sophistication of nanotechnology that would have to be around to either allow an AI to be built in the first place or to be of considerable danger. That is, I do not see how an encapsulated AI, even a superhuman AI, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is at all possible without it.
This is an open question, and I'm inquiring how exactly the uncertainties regarding these problems are accounted for in your probability estimates of the dangers posed by AI.
What I was inquiring about is the likelihood of slow versus fast development of AI. That is, how soon after we get AGI will we see the rise of superhuman AI? The means by which a quick transcendence might happen are incidental to my question.
Where are your probability estimates that account for these uncertainties? Where are your variables and references that allow you to make any kind of estimate to balance the risks of a hard rapture against a somewhat controllable development?
You misinterpreted my question. What I meant by asking if it is even worth the effort is, as exemplified in my link, the question of why to choose the future over the present. That is: “What do we actually do all day, if things turn out well?”, “How much fun is there in the universe?”, “Will we ever run out of fun?”
When I said that I already cannot follow the chain of reasoning depicted on this site, I didn't mean that I was unable to due to intelligence or education. I believe I am intelligent enough and am trying to close the education gap. What I meant is that the chain of reasoning is not transparent.
Take the case of evolution: you are more likely to be able to follow the chain of subsequent conclusions. In the case of evolution the evidence isn't far away; it isn't buried beneath years of ideas built on some hypothesis. In the case of the SIAI it rather seems that there are hypotheses based on other hypotheses that have not yet been tested.
What if someone came along making coherent arguments about some existential risk, say that some sort of particle collider might destroy the universe? I would ask what experts who are not associated with the person making the claims think. What would you think if he simply said, "Do you have better data than me?" Or, "I have a bunch of good arguments"?
I'm not sure what you are trying to say here. What I said was simply that if you claim that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask how you came up with that estimate. I'll ask you to provide more than a consistent internal logic: some evidence-based prior.
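To make the demand concrete: asking for an "evidence-based prior" means asking that a headline figure like 75% be decomposable into a prior and likelihoods that others can inspect and dispute. A minimal sketch, with entirely invented numbers for illustration:

```python
# Toy Bayesian update (all numbers are hypothetical, chosen only to
# illustrate the structure of the demand, not any actual risk estimate).

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem, for hypothesis H and evidence E."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical: a low prior for "collider destroys world" and only
# weakly discriminating evidence yield a small posterior, not 75%.
p = posterior(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.3)
```

The point is that without stating inputs like these, a "75%" figure cannot be checked or challenged, only accepted or rejected wholesale.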
If your antiprediction is not as well informed as the original prediction, how can it not merely reduce the original prediction but actually overthrow it, to the extent on which the SIAI is basing its risk estimates?
I think you just want a brochure. We keep telling you to read the archived articles explaining many of the positions, and you only read the comment where we gave the pointers, pretending that's all that's contained in our answers. It'd be more like him saying, "I have a bunch of good arguments right over there," and then you ignoring the second half of the sentence.
I'm not asking for arguments. I know them. I donate. I'm asking for more now. I'm using the same kind of anti-argumentation that academics would use against your arguments, which I've encountered myself a few times while trying to convince them to take a look at the inscrutable archive of posts and comments that is LW. What do they say? "I skimmed it, but there were no references besides some sound argumentation, an internal logic." "You make strong claims; mere arguments and conclusions extrapolated from a few premises are insufficient to get what you ask for."
Pardon my bluntness, but I don't believe you, and that disbelief reflects positively on you. Basically, if you do know the arguments then a not insignificant proportion of your discussion here would amount to mere logical rudeness.
For example, if you already understood the arguments for, or the basic explanation of, why 'putting all your eggs in one basket' is often the rational thing to do despite intuitions to the contrary, then why on earth would you act as if you didn't?
Oh crap, the SIAI was just a punching bag. Of course I understand the arguments for why it makes sense not to split your donations. If you have a hundred babies but only food for 10, you are not going to portion it out among all hundred babies but feed the strongest 10. Otherwise you'd end up with a hundred dead babies, in which case you might as well have eaten the food yourself before wasting it like that. It's obvious; I don't see how someone wouldn't get this.
I used that idiom to illustrate that, given my preferences and the current state of evidence, I might as well eat all the food myself rather than waste it on something I don't care to save, or that doesn't need to be saved in the first place because I missed the fact that all the babies are puppets and not real.
I asked, are the babies real babies that need food and is the expected utility payoff of feeding them higher than eating the food myself right now?
I'm starting to doubt that anyone actually read my OP...
I know this is just a tangent... but that isn't actually the reason.
Just to be clear, I'm not objecting to this. That's a reasonable point.
Ok. Is there a paper, article, post or comment that states the reason or is it spread all over LW? I've missed the reason then. Seriously, I'd love to read up on it now.
Here is an example of what I want:
Good question. If not, there should be. It is just basic maths when handling expected utilities, but it crops up often enough. Eliezer gave you a partial answer:
... but unfortunately only asked for a link to the 'scope insensitivity' part, not to a 'marginal utility' tutorial. I've had a look and I actually can't find such a reference on LW. A good coverage of the subject can be found in an external paper, Heuristics and Biases in Charity; section 1.1.3, Diversification, covers the issue well.
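The "basic maths" in question can be sketched in a few lines. This is my own toy model, not taken from the paper: if each charity's expected impact is roughly linear in the amount donated (no diminishing marginal utility over the size of your budget), an expected-utility maximizer gives everything to the single best option, and any split does strictly worse.

```python
# Toy model (hypothetical numbers): donation allocation under linear impact.

def expected_good(allocation, impact_per_dollar):
    """Total expected good when each charity's impact is linear in dollars."""
    return sum(a * i for a, i in zip(allocation, impact_per_dollar))

budget = 100.0
impact = [1.0, 0.8]  # assumed expected good per dollar for two charities

split = expected_good([50.0, 50.0], impact)          # diversified: 90.0
concentrated = expected_good([budget, 0.0], impact)  # all to the best: 100.0

assert concentrated > split
```

Diversification only starts to win when marginal utility actually diminishes, e.g. when a charity's room for more funding is exhausted, which is exactly the distinction the cited section discusses.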
That's another point. As I asked, what are the variables, where do I find the data? How can I calculate this probability based on arguments to be found on LW?
This IS NOT sufficient to scare people to the point of having nightmares and then ask them for most of their money.
I'm not trying to be a nuisance here, but it is the only point I'm making right now, and the one that can be traced right back through the context. It is extremely difficult to make progress in a conversation if I cannot make a point about a specific argument without being expected to argue against an overall position that I may or may not even disagree with. It makes me feel like my arguments must come armed as soldiers.