I probably have obstructive sleep apnea. I exhibit symptoms (i.e., feeling sleepy despite getting normal or above-average amounts of sleep, dry mouth when I wake up), and a sleep specialist just told me that the geometry of my mouth and sinuses puts me at high risk. I have an appointment for a sleep study a month from now. Based on what I've read, this means it will probably take at least two months before I can start using a CPAP machine if I go through the standard procedure. That seems like an insane amount of time to wait for something that has a good chance of significantly improving my quality of life immediately. Is there any good reason why I can't just buy a CPAP machine and start using it?
So if you want the other party to cooperate, should you attempt to give that party the impression it has been relatively unsuccessful, at least if that party is human?
I don't think so. It seems more likely to me that the common factor between increased defection rate and self-perceived success is more consequentialist thinking. This leads to perceived success via actual success, and to defection via thinking "defection is the dominant strategy, so I'll do that".
Me: Actually no. I'm just not sure I care about your 3↑↑↑3 simulated people as much as you think I do.
Mugger: So then, you think the probability that you should care as much about my 3↑↑↑3 simulated people as I thought you did is on the order of 1/3↑↑↑3?
After thinking about it a bit more, I decided that I actually do care about simulated people almost exactly as much as the mugger thought I did.
Mugger: Give me five dollars, and I'll save 3↑↑↑3 lives using my Matrix Powers.
Me: I'm not sure about that.
Mugger: So then, you think the probability I'm telling the truth is on the order of 1/3↑↑↑3?
Me: Actually no. I'm just not sure I care about your 3↑↑↑3 simulated people as much as you think I do.
Mugger: "This should be good."
Me: There are only something like n = 10^10 neurons in a human brain, and the number of possible states of a human brain is exponential in n. This is stupidly tiny compared to 3↑↑↑3, so most of the lives you're saving will be heavily duplicated. I'm not really sure that I care about duplicates that much. (A back-of-envelope version of this counting argument follows the dialogue.)
Mugger: Well I didn't say they would all be humans. Haven't you read enough Sci-Fi to know that you should care about all possible sentient life?
Me: Of course. But the same sort of reasoning implies that, either there are a lot of duplicates, or else most of the people you are talking about are incomprehensibly large, since there aren't that many small Turing machines to go around. And it's not at all obvious to me that you can describe arbitrarily large minds whose existence I should care about without using up a lot of complexity. More generally, I can't see any way to describe worlds which I care about to a degree that vastly outgrows their complexity. My values are complicated.
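To make the counting argument in the dialogue concrete, here is a back-of-envelope version in LaTeX. The constant k (bits of state per neuron) is an assumption; only the orders of growth matter:

    % With n \approx 10^{10} neurons and k bits of state per neuron
    % (k an assumed constant), the number of distinguishable brain states is
    N_{\text{brains}} \le 2^{kn} = 2^{k \cdot 10^{10}}.
    % By contrast, 3^^^3 is a power tower of 3s of height 3^^3:
    3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3),
    \qquad 3\uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} \approx 7.6 \times 10^{12}.
    % So among 3^^^3 saved human-scale lives, the average multiplicity
    % of each distinct mind is at least
    3\uparrow\uparrow\uparrow 3 \; / \; 2^{k \cdot 10^{10}}.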
How does your proposed solution for Game 1 stack up against the brute-force metastrategy?
Game 2 is a bit tricky. An answer to your described strategy would be to write a large-number generator f(1), which produces some R that does not depend on your opponents' programs; create a virtual machine that runs your opponents' programs for R steps and, if they haven't halted, swaps the final recursive entry on the call stack with some number (say R, for simplicity) and iterates upwards to produce actual numbers for their function values. Then you just return the max of all three values. This strategy wins against any naive strategy, wins if your opponents are stuck in infinite loops, and, if taken by all players symmetrically, reduces the game to who has the larger R - i.e. the game simplifies to Game 1, and there is still (probably) one survivor. A sketch follows.
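Here is a minimal Python sketch of that strategy, under simplifying assumptions: each program is modelled as a zero-argument generator that yields once per computation step and returns its number, and the call-stack surgery is reduced to substituting R for any program that hasn't halted within R steps:

    # Model: a "program" is a generator function that yields once per step
    # and finally returns its number (delivered via StopIteration.value).
    def step_limited_run(program, step_budget, default):
        gen = program()
        try:
            for _ in range(step_budget):
                next(gen)
        except StopIteration as done:
            return done.value if done.value is not None else default
        return default  # didn't halt within the budget: substitute R

    def my_move(opponents):
        R = 10 ** 6  # stand-in for f(1); chosen without looking at opponents
        scores = [step_limited_run(p, step_budget=R, default=R) for p in opponents]
        return max([R] + scores)

    def looper():   # an opponent stuck in an infinite loop
        while True:
            yield

    def small():    # an opponent that halts quickly with a small value
        yield
        return 42

    print(my_move([looper, small]))  # prints 1000000: R beats both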
How does your proposed solution for Game 1 stack up against the brute-force metastrategy?
Well, the brute-force strategy is going to do a lot better, because it's pretty easy to come up with a number bigger than the length of the longest program anyone has ever thought to write, and plugging that into your brute-force strategy automatically beats any specific program anyone has ever thought to write. On the other hand, the metastrategy isn't actually computable (you need to be able to decide whether a program produces large outputs, which requires a halting oracle, or at least a way of coming up with large stopping times to test against). So it doesn't really make sense to compare them.
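For concreteness, a sketch of the brute-force metastrategy with the uncomputable part made explicit: halts below is an assumed halting oracle passed in as a parameter, not something you can actually write, and run is whatever interpreter the contest uses:

    import itertools

    def all_programs_of_length(length, alphabet=range(256)):
        # Every byte string of the given length, viewed as a program.
        return itertools.product(alphabet, repeat=length)

    def brute_force(max_length, halts, run):
        # Beat every program up to max_length: take the max output of
        # the ones the (assumed) oracle says halt, and add 1.
        best = 0
        for length in range(1, max_length + 1):
            for program in all_programs_of_length(length):
                if halts(program):  # assumed halting oracle
                    best = max(best, run(program))
        return best + 1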
I think I can win Game 1 against almost anyone - in other words, I think I have a larger computable number than any sort of computable number I've seen anyone describe in these sorts of contests, where the top entries typically use the fast-growing hierarchy for large recursive ordinals, in contests where Busy Beaver and beyond aren't allowed.
Game 2 is interesting. My first thought was that running the other person's program and adding 1 to the result guarantees that they die - either their program doesn't halt, or your output is larger. So it seemed the game just reduces to 3 players who can choose whether to kill each other or not, at least 2 of whom have to die, with no solution except via TDT-type correlations. But suppose I output a large number without looking at my opponents' code, and my opponents both try to run the other two programs and add the outputs together, plus 1. They both go into an infinite loop and I win. There may be some nontrivial Nash-style equilibrium to be found here. A toy illustration follows.
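A toy illustration of that failure mode, in a simplified two-player model where each player is a Python function receiving its opponent's function (the third, constant player is the one left standing):

    import sys

    def add_one_player(opponent):
        # "Run the other program and add 1 to the result."
        return opponent(add_one_player) + 1

    def constant_player(_opponent):
        return 10 ** 100  # outputs a large number without looking at anyone

    print(add_one_player(constant_player))  # 10**100 + 1: add-one wins this pairing

    # But two add-one players simulate each other forever; Python surfaces
    # the non-termination as a RecursionError.
    sys.setrecursionlimit(100)
    try:
        add_one_player(add_one_player)
    except RecursionError:
        print("mutual simulation never halts")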
I think I can win Game 1 against almost anyone - in other words, I think I have a larger computable number than any sort of computable number I've seen anyone describe in these sorts of contests, where the top entries typically use the fast-growing hierarchy for large recursive ordinals, in contests where Busy Beaver and beyond aren't allowed.
Okay I have to ask. Care to provide a brief description? You can assume familiarity with all the standard tricks if that helps.
So I guess I should have specified which model of hypercomputation Omega is using. Omega's computer can resolve ANY infinite trawl in constant time (assume time travel and an enormous bucket of phlebotinum are involved) - including programs which generate programs. So the players also have the power to resolve any infinite computation in constant time. Were they feeling charitable, in an average-utilitarian sense, they could add a parasitic clause to their program that simply created a few million copies of themselves which would work together to implement FAI, allow the FAI to reverse-engineer humanity by talking to all three of the contestants, then create arbitrarily large numbers of value-fulfilled people and simulate them forever. But I digress.
In short, take it as a given that anyone, on any level, has a halting oracle for arbitrary programs, subprograms, and metaprograms, and that non-returning programs are treated as producing no output.
In this case, I have no desire to escape from the room.
Hello and goodbye.
I'm a 30-year-old software engineer with a "traditional rationalist" science background, a lot of prior exposure to Singularitarian ideas like Kurzweil's, and a big network of other scientist friends, since I'm a Caltech alum. It would be fair to describe me as a cryocrastinator. I was already an atheist and utilitarian. I found the Sequences through Harry Potter and the Methods of Rationality.
I thought it would be polite, and perhaps helpful to Less Wrong, to explain why I, despite being pretty squarely in the target demographic, have decided to avoid joining the community, and why I would recommend the same to my friends or to anyone I hear discussing it elsewhere on the net.
I read through the entire Sequences and was informed and entertained; I think there are definitely things I took from it that will be valuable ("taboo" this word; the concept of trying to update your probability estimates instead of waiting for absolute proof; etc.).
However, there were serious sexist attitudes that hit me like a bucket of cold water to the face - assertions that understanding anyone of the other gender is like trying to understand an alien, for example.
Coming here to Less Wrong, I posted a little bit about that, but I was immediately struck in the "sequence rerun" by people talking about what a great utopia the gender-segregated "Failed Utopia 4-2" would be.
Looking around the site even further, I find that it is over 90% male as of the last survey, and just a lot of gender essentialist, women-are-objects-not-people-like-us crap getting plenty of upvotes.
I'm not really willing to put up with that, and still less am I enthused about identifying myself as part of a community where that's so widespread.
So, despite what I think could be a lot of interesting stuff going on, I think this will be my last comment and I would recommend against joining Less Wrong to my friends. I think it has fallen very squarely into the "nothing more than sexism, the especially virulent type espoused by male techies who sincerely believe that they are too smart to be sexists" cognitive failure mode.
If you're interested in one problem that is causing at least one rationalist to bounce off your site (and I think the odds are not unreasonable that where one person writes a long heartfelt post, there are multiple others who just click away), here you go. If not, go ahead and downvote this into oblivion.
Perhaps I'll see you folks in some years if this problem here gets solved, or some more years after that when we're all unfrozen and immortal and so forth.
Sincerely,
Sam
Why not stay around and try to help fix the problem?
I am on vacation in Japan until the end of August, and I might be interested in attending a meetup. Judging from the lack of comments here, this never took off, but I might as well leave this here just in case.