When their VM runs your VM running their VM... it times out and everybody loses.
Unless one of the contestants has time limits on their VM (or on their simulations in general). You can clearly implement a VM where time goes faster, simply by pretending they have a slower processor than the one you really run on.
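A toy sketch of the "slower processor" trick (Python; the class and names are mine, not from any actual contest harness): bill each simulated instruction at a multiple of its real cost, so the simulated program exhausts its apparent 10^10-instruction allowance long before the outer program does.

```python
# Hypothetical sketch: a VM that "pretends" to be a slower processor
# by charging each real instruction as `slowdown` simulated ones.
class BudgetVM:
    def __init__(self, budget_instructions, slowdown=100):
        self.remaining = budget_instructions
        self.slowdown = slowdown

    def step(self, instruction):
        # Each real instruction is billed as `slowdown` simulated ones.
        self.remaining -= self.slowdown
        if self.remaining <= 0:
            raise TimeoutError("simulated time limit exceeded")
        instruction()

vm = BudgetVM(budget_instructions=10**10, slowdown=100)
# The simulated contestant believes it has 10^10 instructions, but
# can really execute only 10^8 before the VM calls time.
```

The simulated program can't easily tell the difference, since it only sees the budget the VM reports, not the real clock.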
If you can tell where in the stack you are (like you could with C), you could tell if you were being run by the main program, or by another contestant. Can you?
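In Python, at least, a naive version of this check is possible with the `inspect` module (a minimal sketch; a careful simulator could of course sanitize or spoof the stack, so this is an illustration, not a reliable defense):

```python
import inspect

def stack_depth():
    # Count the frames between here and the interpreter's entry point.
    # If the main program calls you directly, this is small; if another
    # contestant's simulation wraps your code, extra frames appear.
    return len(inspect.stack())
```

Each level of wrapping adds at least one frame, so an unusually deep stack is weak evidence that you are being run inside someone else's code rather than by the tournament harness.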
Unless the other contestant wrote a virtual machine in which they are running you. Something which I think would be quite doable considering the ridiculously large time you've got (10s gives ~10^10 instructions).
If you have a late but fairly consistent bedtime, you can set your location several time zones to the west. f.lux kicks in at sunset in your reported location.
Hasn't been very consistent lately. Might try this later.
Huh. You are right; I had neglected such a cyclical god structure. That would appear to require time travel, at least once, to get the cycle started.
Not strictly speaking. Warning: what follows is pure speculation about possibilities which may have little to no relation to how a computational multiverse would actually work. It could be possible that there are three computable universes A, B & C, such that the beings in A run a simulation of B, appearing as gods to the intelligences therein; the beings in B do the same with C; and finally the beings in C do the same with A. It would probably be very hard to recognize such a structure if you were in it, because of the enormous slowdowns in the simulation inside your simulation. Though it might have a comparatively short description as the solution to an equation relating a number of universes cyclically.
In case that wasn't clear, I imagine these universes to have a common, quite high-level specification, with minds being primitive objects and so on. I don't think this would work at all if the universes had physics similar to our own: you'd need planets to form from elementary particles, and evolution to run on those planets to get any minds at all, to say nothing of the computational capability to simulate similar universes.
So, I've been on this site for awhile. When I first came here, I had never had a formal introduction to Bayes' theorem, but it sounded a lot like ideas that I had independently worked out in my high school and college days (I was something of an amateur mathematician and game theorist).
A few days ago I was reading through one of your articles - I don't remember which one - and it suddenly struck me that I may not actually understand priors as well as I think I do.
After re-reading some of the series, and then working through the math, I'm now reasonably convinced that I don't properly understand priors at all - at least, not intuitively, which seems to be an important aspect of actually using them.
I have a few weird questions that I'm hoping someone can answer, that will help point me back towards the correct quadrant of domain space. I'll start with a single question, and then see if I can claw my way towards understanding from there based on the answers:
Imagine there is a rational, Bayesian AI named B9 which has been programmed to visually identify and manipulate geometric objects. B9's favorite object is a blue ball, but B9 has no idea that it is blue: B9 sees the world through a black and white camera, and has always seen the world through a black and white camera. Until now, B9 has never heard of "colors" - no one has mentioned "colors" to B9, and B9 has certainly never experienced them. Today, unbeknownst to B9, B9's creator is going to upgrade its camera to a full-color system, and see how long it takes B9 to adapt to the new inputs.
The camera gets switched in 5 seconds. Before the camera gets switched, what prior probability does B9 assign to the possibility that its favorite ball is blue?
Your question is not well specified. Even though you might think that the proposition "its favorite ball is blue" has a clear meaning, it depends heavily on the precision with which B9 will be able to see colours, how wide the interval defined as blue is, and how it considers multicoloured objects. If we suppose it would categorise the observed wavelength into one of 27 possible colours (one of those being blue), further suppose that it knew the ball to be of a single colour and not patterned, and further suppose it had no background information about the relative frequencies of different colours of balls or other useful prior knowledge, the prior probability would be 1/27. If we suppose that it had access to the internet and had read this discussion on LW about the colourblind AI, it would increase its probability by doing an update based on the probability of this discussion affecting the colour of its own ball.
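The numbers under those assumptions can be made concrete (a toy sketch; the 27-colour discretisation and the function name are illustrative, not anything B9 "really" computes):

```python
# Uniform prior over 27 distinguishable colours of a single-coloured
# ball, with no background information favouring any colour.
NUM_COLOURS = 27
prior_blue = 1 / NUM_COLOURS  # 1/27, roughly 0.037

# Reading the LW discussion is then just evidence E, handled by Bayes:
#   P(blue | E) = P(E | blue) * P(blue) / P(E)
def posterior(prior, likelihood, evidence_prob):
    return likelihood * prior / evidence_prob
```

If the discussion were more likely to exist in worlds where the ball is blue, P(E | blue) > P(E) and the posterior rises above 1/27; otherwise it stays put.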
That's not clear. There is presumably something like that in Tegmark's level IV.
Assume that P(god_x | god_x+1) = Q, where Q < 1.0 for all x. Consider an infinite chain; what is P(god_1 | god_n) in the limit as n goes to infinity? This would be P(god_1 | god_n) = Q^(n-1). Since Q < 1.0, this limit is equal to zero.
...hmmm. Now that I think about it, that applies for any constant Q. It may be possible to craft a function Q(x) such that the limit as x approaches infinity is non-zero; for example, if I set Q(1)=0.75 and then Q(x) for x>1 such that, when multiplied by the product of all the Q(x)s so far, the distance between the previous product and 0.5 is halved (thus Q(2)=5/6, Q(3)=9/10, Q(4)=17/18, and so on); then Q(x) asymptotically approaches 1, while the limiting probability P(god_1 | god_n) approaches 0.5.
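That construction is easy to check numerically (a small sketch; `q_sequence` is just a name I made up for the halving-the-distance rule described above):

```python
# Q(1) = 0.75, and each later Q(x) is chosen so that the running
# product of the Q's halves its remaining distance to 0.5.
def q_sequence(n):
    qs = [0.75]
    products = [0.75]
    for _ in range(n - 1):
        prev = products[-1]
        target = 0.5 + (prev - 0.5) / 2  # halve the distance to 0.5
        qs.append(target / prev)
        products.append(target)
    return qs, products

qs, products = q_sequence(5)
# qs       -> [0.75, 5/6, 9/10, 17/18, 33/34]
# products -> [0.75, 0.625, 0.5625, 0.53125, 0.515625]
```

The products converge to 0.5 from above while the individual Q(x) climb toward 1, exactly as claimed.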
You haven't established the 'has to' (p==1.0)
You're right, and thank you for pointing that out. I've now shown that p<1.0 (it's still pretty high, I'd think, but it's not quite 1).
You seem to be neglecting the possibility of a cyclical god structure. Something which might very well be possible in Tegmark level IV if all the gods are computable.
I haven't. I did switch to a pipe, however, which works marvelously at delivering nicotine, in addition to smelling better, and carrying better social connotations. (Like snuff, it does carry a higher risk of oral cancer, but that's not -quite- as deadly.)
Note: according to my 30-second Google Scholar search, it is dipping/oral snuff that carries a higher risk of oral cancer. Nasal snuff seems safer (or perhaps just less well researched).
But it requires active, exclusive use of time to go to a library, loan out a book, and bring it back (and additional time to return it), whereas I can do whatever while the book is en route.
That is true. However according to my experience you don't need to spend much time in the library itself if you know what you're looking for (you can always stay for the atmosphere). What takes time is going to and from the library. The value of this time obviously depends on a lot of parameters: is the library close to your route to/from some other place, are you currently very busy, do you enjoy city walks/bike-rides, etc.
Either way, I'm not putting the gum in my mouth. Teaching myself to ignore the warning signs that could lead to my throat closing up doesn't seem like a good idea. :-)
Lozenges are okay, albeit expensive. My favorite nicotine delivery system - although hard to find - is actually dissolving strips that stick to the roof of your mouth. (The only brand I've found thus far is NicoSpan.) Significantly cheaper the way I buy them, around five cents apiece compared to forty for the lozenges - I grab them on discount when they're near the expiration date. Only issue is that the supply is very irregular. (Speaking of which, I should probably order more now, since Amazon actually has a couple of boxes right now.)
It's not uncommon, actually - part of the issue is that you have to smoke them differently. (The draw is slower and softer.)
Have you tried snuff? It smells quite nice and can help clear your nose as well as deliver nicotine.
Hrm - is it different if they run my function from within theirs instead of constructing a full VM? I was considering ways to signal to a copy of myself - one being run by my simulation of my opponent - that it was a simulation, but couldn't find any good ideas.
If they run your function from within theirs, they simply tell the computer to start reading those instructions, possibly with a timer for stopping (as detailed elsewhere in the comments). If they implement a VM from scratch, they can mess with how the library functions work - for instance, giving you a clock that moves much faster, so that your simulation must stop within 0.1s instead of 10, letting them run your code 100 different times to deal with randomness. Implementing your own VM is probably not the optimal way to do this, though; you probably just want to do a transformation of the source code to use your own secret functions instead of the standard time ones.