Would you then permit homosexual incest, which doesn't produce children?
Also notice that as formulated ("You are given an initial stake of $1") you don't have any of your own money at risk, so... And if the game only ends when TAILS is flipped, there is no way to lose, is there?
If the first $1 comes from you, you are basically asking about the "double till you win" strategy. You might be interested in reading about the St. Petersburg paradox.
The money that's "at stake" is the amount you spend to play the game. Once the game begins, you get 2^(n) dollars, where n is the number of successive heads you flip.
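As a sanity check on the payout structure, here is a quick simulation (the function name and sample size are my own choices, not from the thread): flip a fair coin until tails, and collect 2^n dollars for n successive heads. The sample mean tends to creep upward as the sample grows, which is the St. Petersburg divergence in action, since the expected value is the sum over n of 2^n * 2^-(n+1), i.e. infinite.

```python
import random

def play_once():
    """Flip a fair coin until tails; payout is 2^n for n successive heads."""
    n = 0
    while random.random() < 0.5:
        n += 1
    return 2 ** n

# Every payout is a power of two (n = 0 heads still pays $1).
payouts = [play_once() for _ in range(100_000)]
print(sum(payouts) / len(payouts))
```

Run it a few times and the average jumps around wildly, driven by rare long streaks, which is exactly why the "expected value" of this game is a poor guide to how much you should pay to play it.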
I say,
99.8% likely this is an upset outsider baiting for reactions in order to gauge our degree of cultishness.
0.1% likely this a sincere believer.
0.1% likely this is Eliezer messing with our heads.
That adds up to 100%. You need to leave room for other things, like they're trolling us for the fun of it.
I greatly dislike the term "friendly AI". The mechanisms behind "friendly AI" have nothing to do with friendship or mutual benefit. It would be more accurate to call it "slave AI".
"Slave" makes it sound like we're making it do something against its will. "Benevolent AI" would be better.
I have thought about something similar with respect to an oracle AI. You program it to try to answer the question assuming no new inputs and everything works to spec. Since spec doesn't include things like the AI escaping and converting the world to computronium to deliver the answer to the box, it won't bother trying that.
I kind of feel like anything short of friendly AI is living on borrowed time. Sure, the AI won't take over the world to convert it to paperclips, but that won't stop some idiot from asking it how to make paperclips. I suppose it could still be helpful. It could at the very least confirm that AIs are dangerous and get people to worry about them. But people might be too quick to ask for something just because the oracle would endorse it as a good idea, or something like that.
Nope. The laws of physics are the same in all branches.
The laws of physics are the same (in MWI, not necessarily in other multiverse theories). But there could be a branch where there are advanced aliens with nanotech who, for some reason, decide to mimic magic exactly. Or one where, mysteriously, every time someone says "wingardium leviosa", objects happen to levitate, just by chance of random quantum effects.
I do think that both of these universes are so unlikely we shouldn't worry about ever being in them. But I think that is what OP is getting at.
I think that the first universe is sufficiently more likely than the second that you shouldn't assume it's a coincidence, and you should expect wingardium leviosa to keep working.
Let me make a simpler form of this problem. Suppose I flip a fair coin a thousand times, and it just happens to land on heads every time. How do I find out that this is a fair coin, and that I don't actually have a trick coin that always lands on heads? The answer is that I can't. Any algorithm that tells me that it's fair is going to fail in the much more likely circumstance that I have a coin that always lands on heads. The best I can do is show that I have 1000 bits of evidence in favor of a trick coin, update my priors accordingly, and use this information when betting.
The good news is that a fair coin will land on heads a thousand times in a row only about 9.33×10^-300% of the time (that's 2^-1000), so you won't be this wrong by chance very often. In general, you can calculate how likely you are to be wrong, and hedge your bets accordingly.
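The arithmetic in the comment above can be checked directly. Here is a minimal sketch (the million-to-one prior is an arbitrary number I picked for illustration): 1000 heads is exactly 1000 bits of evidence for the trick-coin hypothesis, and it swamps any reasonable prior in favor of fairness.

```python
from math import log2

# Hypotheses: a fair coin vs. a trick coin that always lands heads.
p_data_fair = 0.5 ** 1000   # P(1000 heads | fair) ~ 9.33e-302
p_data_trick = 1.0          # P(1000 heads | trick)

# Evidence in bits for the trick-coin hypothesis: the log likelihood ratio.
bits = log2(p_data_trick / p_data_fair)

# Posterior odds = prior odds * likelihood ratio. Even a strong prior
# for fairness (say, a million to one) is utterly overwhelmed.
prior_odds_fair = 1e6
posterior_odds_fair = prior_odds_fair * (p_data_fair / p_data_trick)
print(bits, posterior_odds_fair)
```

Both 2^-1000 and 2^1000 fit comfortably inside a double, so the computation is exact here; for longer streaks you would work in log space throughout.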
I've seen a number of brief mentions like that, but never anything giving it more than passing consideration. In addition, I haven't seen anyone postulate that this could be distorting our view of other physical laws.
If you've come across something more, I would love to see it!
Obviously it would distort our view of how quickly the universe decays into a true vacuum. There's also the mangled worlds idea to explain the Born rule.
I'm pretty sure I've seen this before, with the example of our universe being a false vacuum with a short half-life.
Not sure if this is obvious or just wrong, but isn't it possible (even likely?) that there is no way of representing a complex mind compactly enough to allow an AI to usefully modify itself? For instance, if you gave me complete access to my source code, I don't think I could use it to achieve any goals, as such code would be billions of lines long. Presumably there is a logical limit on how far one can usefully compress one's own mind in order to reason about it, and it seems reasonably likely that such compression will be too limited to allow a singularity.
There are certainly ways you can usefully modify yourself. For example, giving yourself a heads-up display. However, I'm not sure how much that would end up increasing your intelligence. You could get runaway super-intelligence if every improvement increases the best mind current!you can make by at least that much, but if each improvement buys less than that, the process won't run away.
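The runs-away-or-doesn't distinction can be made concrete with a toy multiplicative model (entirely my own illustration, with made-up numbers, not anything from the thread): if each self-modification multiplies capability by a constant factor above 1, growth is exponential; if each improvement buys a strictly smaller next improvement, the infinite product converges and capability plateaus.

```python
def run(factor_for_step, steps=100):
    """Apply a sequence of multiplicative self-improvements."""
    capability = 1.0
    for i in range(steps):
        capability *= factor_for_step(i)
    return capability

# Constant returns: each improvement is as big as the last -> runaway.
runaway = run(lambda i: 1.1)

# Diminishing returns: improvement i multiplies by 1 + 2^-(i+1).
# The product converges (it is bounded by e^1), so growth stalls out.
plateau = run(lambda i: 1 + 0.5 ** (i + 1))

print(runaway, plateau)
```

The crossover is sharp: whether you get a singularity in this toy model depends entirely on whether the per-step gains shrink fast enough for the series to converge, which mirrors the "by at least that much" condition in the comment above.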