Would a (hypothetically) pure altruist have children (in our current situation)?
I'm curious about your personal experiences with physical pain. What is the most painful thing you've experienced and what was the duration?
I'm sympathetic to your preference in the abstract; I just think you might be surprised at how little pain you're actually willing to endure once it's happening. That's not a slight against you: based largely on anecdotal and second-hand experience from my time in the military, I think people in general overestimate what degree of physical pain they can handle as a function of the stakes involved.
At the risk of being overly morbid, I have high confidence (>95%) that I could have you begging for death inside of an hour if that were my goal (don't worry, it's certainly not). An unfriendly AI capable of keeping you alive for eternity just to torture you would be capable of making you experience worse pain than anyone in the history of our species has experienced so far. I believe you that you might sign a piece of paper to pre-commit to an eternity of torture rather than simple death. I just think you'd be very, very upset about that decision, probably less than 5 minutes into it.
I would definitely pre-commit to immortality.
So D "wins" the bid, and B pays him $15 to go get the kids from their grandma's.
Shouldn't it be something more like 15 + (100 - 15)/2 dollars? That way both win (about) the same amount of utility. Otherwise, the one who was ready to pay $100 saved ("won") $85 and the other won nothing (s/he was indifferent between paying $15 and doing it for $15).
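To spell that out with the figures from the example above (the $15 bid and the $100 valuation):

payment = 15 + (100 - 15)/2 = 57.5
B's surplus = 100 - 57.5 = 42.5
D's surplus = 57.5 - 15 = 42.5

so both parties would end up gaining the same $42.50 of surplus.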
Nice post, by the way. Such techniques seem useful if you trust that the other person will make a bid that really reflects the amount s/he's ready to pay.
If doubting is (or was) accepted in our current society, and we wanted Archimedes to doubt his beliefs, would we have to doubt the value of doubting, or be certain about the value of doubting?
It's a joke. As Eliezer said "to get nonobvious output, you need nonobvious input", so obviously, we'd just have to find something nonobvious. :-)
I wonder if we will ever come up with something that is as nonobvious to us right now as Bayesian thinking was to Archimedes.
You should ask the greatest mathematician of the ancient world to work on FAI theory. If he solves the analogous problem, then when he explains his solution to you over the Chronophone, it'll come out on your end as a design for an AI.
Nice :-) But in fact, the chronophone will transmit a problem just as hard for Archimedes as FAI is for us. So he'll probably solve it in about the same amount of time as us (so it won't help us). I wonder what that problem would be. Is going to the Moon as hard for Archimedes as building an FAI is for us?
Eliezer sometimes asks something that I would now like to ask him: what would the world look like if mathematics didn't precede mathematicians? And if it did?
But why would your distribution be two deltas at 10^-4 and 10^-16 and not more continuous?
Because it's a toy example and it's easier to work out the math this way. You can get similar results with more continuous distributions, the math is simply more complicated.
I don't think I'm rationalizing an answer; I'm not even presenting an answer. I meant only to present a (very simplified) example of how such a conclusion might arise.
I'm totally willing to chalk the survey results up to scale insensitivity, but such results aren't necessarily nonsensical. It could just mean somebody started with "what credence do I assign that aliens exist and the Fermi Paradox is/isn't an illusion" and worked backwards from there, rather than pulling a number out of thin air for "chance of life developing in a single galaxy" and then exponentiating.
Since the latter method gives sharply differing results depending on whether you make up a probability a few orders of magnitude above or below 10^-11, I'm not sure working backwards is even a worse idea. At least working backwards won't give someone 99.99999% credence in something merely because their brain is bad at intuitively telling apart 10^-8 and 10^-14.
Edit: I think some degree of dichotomy is plausible here. A lot of intermediate estimates are ruled out by us not seeing aliens everywhere.
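To make the "working forwards" failure mode concrete, here is a quick sketch; the figure of 10^11 galaxies and the exact probabilities are illustrative assumptions, not numbers from the survey:

```python
# How "working forwards" exponentiates a made-up per-galaxy probability of
# life into a near-certain (or near-negligible) credence in aliens.
# N = 1e11 galaxies is an illustrative assumption.
import math

N = 1e11  # assumed number of galaxies

for p in (1e-8, 1e-11, 1e-14):
    # P(life arises in at least one other galaxy) = 1 - (1 - p)^N
    credence = 1 - math.exp(N * math.log1p(-p))
    print(f"p = {p:.0e} per galaxy -> credence in aliens ≈ {credence:.7f}")
```

A few orders of magnitude either side of 10^-11 swings the answer from virtually certain to virtually negligible, which is exactly the instability described above.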
Would some people be interested in answering 10 such questions every month and giving their confidence in each answer? That would provide better statistics and a way to see if we're improving.
There's both PredictionBook and the Good Judgment Project as venues for this sort of thing.
Thank you.
EDIT: I just made my first (meta)prediction, which is that I'm 50% sure that "I will make good predictions in 2014. (ie. 40 to 60% of my predictions with an estimate between 40 and 60% will be true.)"
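For what it's worth, checking that meta-prediction at the end of the year takes only a few lines. A minimal sketch, assuming predictions are recorded as (stated probability, outcome) pairs; the names and the sample data are hypothetical, and PredictionBook already reports similar statistics on its own:

```python
# Check the 40-60% calibration bucket from (probability, outcome) pairs.
# The sample data below is made up purely for illustration.
predictions = [(0.5, True), (0.45, False), (0.6, True), (0.55, False)]

bucket = [outcome for p, outcome in predictions if 0.4 <= p <= 0.6]
hit_rate = sum(bucket) / len(bucket)

# The meta-prediction: 40% to 60% of these should come true.
print(f"hit rate in the 40-60% bucket: {hit_rate:.0%}",
      "(calibrated)" if 0.4 <= hit_rate <= 0.6 else "(miscalibrated)")
```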
My position is already sorted, I assure you. I cooperate with the Paperclipper if I think it will one-box on Newcomb's Problem with myself as Omega.
As someone who rejects defection as the inevitable rational solution to both the one-shot PD and the iterated PD, I'm interested in the inconsistency of those who accept defection as the rational equilibrium in the one-shot PD, but find excuses to reject it in the finitely iterated known-horizon PD.
True, the iteration does present the possibility of "exploiting" an "irrational" opponent whose "irrationality" you can probe and detect, if there's any doubt about it in your mind. But that doesn't resolve the fundamental issue of rationality; it's like saying that you'll one-box on Newcomb's Problem if you think there's even a slight chance that Omega is hanging around and will secretly manipulate box B after you make your choice. What if neither party to the IPD thinks there's a realistic chance that the other party is stupid - if they're both superintelligences, say? Do they automatically defect against each other for 100 rounds?
And are you really "exploiting" an "irrational" opponent, if the party "exploited" ends up better off? Wouldn't you end up wishing you were stupider, so you could be exploited - wishing to be unilaterally stupider, regardless of the other party's intelligence? Hence the phrase "regret of rationality"...
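For readers who haven't seen the textbook argument spelled out, here is a minimal sketch of the backward-induction reasoning that makes all-defect the standard equilibrium of the known-horizon iterated PD; the payoff values are illustrative assumptions, not anything from this exchange:

```python
# Backward induction in the 100-round, known-horizon iterated Prisoner's
# Dilemma. Payoff values (T, R, P, S) are illustrative assumptions.
ROUNDS = 100
T, R, P, S = 5, 3, 1, 0  # temptation, mutual reward, mutual punishment, sucker

def induced_move(rounds_left: int) -> str:
    """Move prescribed by backward induction for a pure payoff-maximizer."""
    if rounds_left == 1:
        # The last round is a one-shot PD: defection strictly dominates.
        assert T > R and P > S
        return "D"
    # Both players will defect in every later round no matter what happens
    # now, so cooperating today buys no future goodwill and the current
    # round collapses into a one-shot PD as well.
    assert induced_move(rounds_left - 1) == "D"
    return "D"

moves = [induced_move(r) for r in range(ROUNDS, 0, -1)]
print(moves.count("D"), "defections out of", ROUNDS)  # -> 100 defections out of 100
```

The questions above are, of course, about whether this chain of reasoning deserves to be called rational in the first place; the sketch only shows what the chain is.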
Do you mean "I cooperate with the Paperclipper if AND ONLY IF I think it will one-box on Newcomb's Problem with myself as Omega AND I think it thinks I'm Omega AND I think it thinks I think it thinks I'm Omega, etc." ? This seems to require an infinite amount of knowledge, no?
Edit: and you said "We have never interacted with the paperclip maximizer before", so do you think it would one-box?