I probably have obstructive sleep apnea. I exhibit symptoms (i.e. feeling sleepy despite getting normal or above-average amounts of sleep, dry mouth when I wake up), and I just had a sleep specialist tell me that the geometry of my mouth and sinuses puts me at high risk. I have an appointment for a sleep study a month from now. Based on what I've read, this means it will probably take at least two months before I can start using a CPAP machine if I go through the standard procedure. This seems like an insane amount of time to wait for...
So if you want the other party to cooperate, should you attempt to give that party the impression it has been relatively unsuccessful, at least if that party is human?
I don't think so. It seems more likely to me that the common factor between increased defection rate and self-perceived success is more consequentialist thinking. This leads to perceived success via actual success, and to defection via thinking "defection is the dominant strategy, so I'll do that".
Mugger: Give me five dollars, and I'll save 3↑↑↑3 lives using my Matrix Powers.
Me: I'm not sure about that.
Mugger: So then, you think the probability I'm telling the truth is on the order of 1/3↑↑↑3?
Me: Actually no. I'm just not sure I care about your 3↑↑↑3 simulated people as much as you think I do.
Mugger: "This should be good."
Me: There are only something like n = 10^10 neurons in a human brain, and the number of possible states of a human brain is exponential in n. This is stupidly tiny compared to 3↑↑↑3, so most of the lives you're saving...
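(To spell out the counting step behind that: rough numbers only, and the bits-per-neuron constant c is a placeholder of mine, not anything from the dialogue.)

```latex
\text{distinct brain states} \;\lesssim\; 2^{\,c \cdot 10^{10}} \;\approx\; 10^{\,3 \times 10^{9} c},
\qquad
3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow\!\left(3^{3^{3}}\right)
 \;=\; \text{a tower of 3s of height } 3^{27} \approx 7.6 \times 10^{12},
```

so by pigeonhole almost all of the 3↑↑↑3 simulated people would have to be exact duplicates of one another.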
I'm not really sure that I care about duplicates that much.
Bostrom would probably try to argue that you do. See Bostrom (2006).
I think it not unlikely that if we have a successful intelligence explosion and subsequently discover a way to build something 4^^^^4-sized, then we will figure out a way to grow into it, one step at a time. This 4^^^^4-sized supertranshuman mind then should be able to discriminate "interesting" from "boring" 3^^^3-sized things. If you could convince the 4^^^^4-sized thing to write down a list of all nonboring 3^^^3-sized things in its spare time, then you would have a formal way to say what an "interesting 3^^^3-sized thing" ...
How does your proposed solution for Game 1 stack up against the brute-force metastrategy?
Well, the brute-force strategy is going to do a lot better, because it's pretty easy to come up with a number bigger than the length of the longest program anyone has ever thought to write, and plugging that into the brute-force strategy automatically beats any specific program anyone has ever thought to write. On the other hand, the meta-strategy isn't actually computable (you need to be able to decide whether a program produces large outputs, which requires a halting oracle, or at least a way of coming up with large stopping times to test against). So it doesn't really make sense to compare them.
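To make that concrete, here is a minimal sketch in Python of the computable stand-in, with "programs" modelled as generators that yield once per step (that representation and the toy candidates are my own illustration, not anything from the thread). The true metastrategy would drop the step budget, and that is exactly where a halting oracle, or at least very large stopping times, would be needed.

```python
# A sketch of the computable stand-in for the brute-force metastrategy: cap
# every candidate at max_steps instead of consulting a halting oracle.
# "Programs" are modelled as Python generator functions that yield once per
# simulated step and finally return their output -- a convention chosen just
# for this illustration.

def run_bounded(program, max_steps):
    """Return the program's output if it halts within max_steps, else None."""
    gen = program()
    try:
        for _ in range(max_steps):
            next(gen)
    except StopIteration as done:
        return done.value
    return None  # did not halt within the budget

def bounded_metastrategy(candidates, max_steps):
    """Largest output of any candidate that halts within max_steps.
    The real metastrategy drops max_steps, which is exactly the step that
    needs a halting oracle (or arbitrarily large stopping times)."""
    outputs = (run_bounded(p, max_steps) for p in candidates)
    return max((o for o in outputs if o is not None), default=None)

# Toy candidates: halts at once, halts slowly, never halts.
def fast():
    return 3 ** 3 ** 3   # 3^27
    yield                # unreachable; marks this as a generator function

def slow():
    total = 0
    for i in range(10_000):
        total += i
        yield
    return total

def forever():
    while True:
        yield

print(bounded_metastrategy([fast, slow, forever], max_steps=20_000))  # 7625597484987
```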
I think I can win Game 1 against almost anyone - in other words, I think I can describe a larger computable number than any computable number I've seen anyone else describe in these sorts of contests (the ones where Busy Beaver and beyond aren't allowed), in which the top entries typically use the fast-growing hierarchy for large recursive ordinals.
Okay I have to ask. Care to provide a brief description? You can assume familiarity with all the standard tricks if that helps.
Ordinarily I'd leave this for SamLL to respond to, but I'd say the chances of getting a response in this context are fairly low, so hopefully it won't be too presumptuous for me to speculate.
First of all, we as a community suck at handling gender issues without bias. The reasons for this could span several top-level posts and in any case I'm not sure of all the details; but I think a big one is the unusually blurry lines between research and activism in that field and consequent lack of a good outside view to fall back on. I don't think we're methodologi...
I'll give you a second data point to consider. I am a soon-to-be-graduated pure math undergraduate. I have no idea what you are asking, beyond very vague guesses. Nothing in your post or the preceding discussion is of a "rather mathematical nature", let alone a precise specification of a mathematical problem.
If you think that you are communicating clearly, then you are wrong. Try again.
That line always bugged me, even when I was a little kid. It seems obviously false (especially in the in-game context).
I don't understand why this is a rationality quote at all; am I missing something, or is it just because of the superficial similarity to some of EY's quotes about apathetic uFAIs?
This is largely a matter of keeping track of the distinction between "first order logic: the mathematical construct" and "first order logic: the form of reasoning I sometimes use when thinking about math". The former is an idealized model of the latter, but they are distinct and belong in distinct mental buckets.
It may help to write a proof checker for first order logic. Or alternatively, if you are able to read higher math, study some mathematical logic/model theory.
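As a rough illustration of what that exercise looks like, here is a minimal sketch in Python. To stay short it only handles the propositional (implication/negation) core of a Hilbert-style system with modus ponens, and the formula encoding and proof format are choices made for this sketch; a genuine first-order checker would add quantifier and equality rules in the same style.

```python
# A toy Hilbert-style proof checker: formulas are nested tuples, proofs are
# explicit lists of steps, and checking is a single pass over the steps.

def imp(a, b):
    """The formula a -> b."""
    return ('->', a, b)

def neg(a):
    """The formula not-a."""
    return ('not', a)

# The three standard axiom schemes, instantiated with concrete formulas.
def ax1(p, q):    return imp(p, imp(q, p))
def ax2(p, q, r): return imp(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r)))
def ax3(p, q):    return imp(imp(neg(p), neg(q)), imp(q, p))

AXIOMS = {'A1': ax1, 'A2': ax2, 'A3': ax3}

def check(proof, hypotheses=()):
    """Check a proof given as a list of steps and return the proved formulas.
    A step is ('axiom', name, args), ('hyp', formula), or ('mp', i, j), where
    modus ponens uses line i as the implication and line j as its antecedent
    (0-indexed)."""
    lines = []
    for step in proof:
        if step[0] == 'axiom':
            _, name, args = step
            lines.append(AXIOMS[name](*args))
        elif step[0] == 'hyp':
            if step[1] not in hypotheses:
                raise ValueError(f"not a hypothesis: {step[1]}")
            lines.append(step[1])
        elif step[0] == 'mp':
            _, i, j = step
            major, minor = lines[i], lines[j]
            if major[0] != '->' or major[1] != minor:
                raise ValueError(f"modus ponens does not apply at step {len(lines)}")
            lines.append(major[2])
        else:
            raise ValueError(f"unknown step: {step}")
    return lines

# Usage: the classic five-line derivation of p -> p.
p = 'p'
proof = [
    ('axiom', 'A1', (p, imp(p, p))),    # p -> ((p->p) -> p)
    ('axiom', 'A2', (p, imp(p, p), p)), # (p -> ((p->p)->p)) -> ((p->(p->p)) -> (p->p))
    ('mp', 1, 0),                       # (p -> (p->p)) -> (p -> p)
    ('axiom', 'A1', (p, p)),            # p -> (p -> p)
    ('mp', 2, 3),                       # p -> p
]
assert check(proof)[-1] == imp(p, p)
```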
Note that this actually has very little to do with most of the seemingly hard parts of FAI theory. Much of it would be just as important if we wanted to create a recursively self modifying paper-clip maximizer, and be sure that it wouldn't accidentally end up with the goal of "do the right thing".
The actual implementation is probably far enough away that these issues aren't even on the radar screen yet.
Sorry I didn't answer this before; I didn't see it. To the extent that the analogy applies, you should think of non-standard numbers and standard numbers as having the same type - specifically, the type of things that are being quantified over in whatever first order logic you are using. And you're right that you can't prove that statement in first order logic; worse, you can't even say it in first order logic (see the next post, on Gödel's theorems and Compactness/Löwenheim-Skolem, for why).
I am well versed in most of this math, and a fair portion of the CS (mostly the more theoretical parts, not so much the applied bits). Should I contact you now, or should I study the rest of that stuff first?
In any case, this post has caused me to update significantly in the direction of "I should go into FAI research". Thanks.
If you take this objection seriously, then you should also take issue with predictions like "nobody will ever transmit information faster than the speed of light", or things like it. After all, you can never actually observe the laws of physics to have been stable and universal for all time.
If nothing else, you can consider each as being a compact specification of an infinite sequence of testable predictions: "doesn't halt after one step", "doesn't halt after two steps",... "doesn't halt after n steps".
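A toy version of that reading, in Python (the "one yield per step" convention for programs and the Collatz example are mine, just to have something concrete to test):

```python
# Prediction n is "still running after n steps"; each such prediction is
# individually testable by brute force, even though the full claim
# "never halts" is not.

from itertools import islice

def still_running_after(program, n):
    """Test prediction n: the program has not halted within n steps."""
    return sum(1 for _ in islice(program(), n)) == n

def collatz_from(k):
    """Halts iff the Collatz sequence starting at k reaches 1."""
    while k != 1:
        k = k // 2 if k % 2 == 0 else 3 * k + 1
        yield

# Each prediction "collatz_from(27) hasn't halted after n steps" is checkable;
# here the sequence of predictions happens to fail somewhere before n = 200.
print([still_running_after(lambda: collatz_from(27), n) for n in (10, 50, 200)])
```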
I don't think ZFC can prove the consistency of ZF either, but I'm not a set theorist.
Also not a set theorist, but I'm pretty sure this is correct. ZF+Con(ZF) proves Con(ZFC) (see http://en.wikipedia.org/wiki/Constructible_universe), so if ZFC could prove Con(ZF) then it would also prove Con(ZFC), which the second incompleteness theorem rules out (assuming ZFC is consistent).
< is defined in terms of plus by saying x<y iff there exists a nonzero z such that y=z+x. + is supposed to be provided as a primitive operation as part of the data of a model of PA. It's not actually possible to give a concrete description of what + looks like in general for non-standard models, because of Tenenbaum's Theorem, but at least when one of x or y (say x) is a standard number it's exactly what you'd expect: x+y is what you get by starting at y and going x steps to the right.
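(In symbols, the definition being used, with strict inequality requiring a nonzero witness:)

```latex
x < y \;\iff\; \exists z \,\bigl( z \neq 0 \;\wedge\; y = z + x \bigr)
```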
To see that x<y whenever x is a standard number and y isn't, ...
you can write a formula in the language of arithmetic saying "the Turing machine m halts on input i"
You get a formula which is true of the standard numbers m and i if and only if the m'th Turing machine halts on input i. Is there really any meaningful sense in which this formula is still talking about Turing machines when you substitute elements of some non-standard model?
In a sense, no. Eliezer's point is this: Given the actual Turing machine with number m = 4 = SSSS0 and input i = 2 = SS0, you can substitute these in to get a closed formula φ whose meaning is "the Turing machine SSSS0 halts on input SS0". The actual formula is something li...
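For concreteness, one standard shape such a formula takes (the textbook Σ1 form via Kleene's T predicate; this is offered as an illustration, not necessarily the exact formula the comment was about to spell out):

```latex
\mathrm{Halts}(m, i) \;\equiv\; \exists t \; T(m, i, t),
```

where T(m, i, t) is a primitive recursive (hence arithmetically definable) predicate saying that t codes a complete halting computation of machine m on input i. In a non-standard model the existential quantifier can be satisfied by a non-standard witness t, which is one way of seeing the sense in which the formula is no longer "about" actual Turing machines.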
Nice post, but I think you got something wrong. Your structure with a single two-sided infinite chain isn't actually a model of first order PA. If x is an element of the two-sided chain, then y=2x=x+x is another non-standard number, and y necessarily lies in a different chain, since y-x=x is a non-standard number and two elements can only lie in the same chain if they differ by a standard number. Of course, you need to be a little bit careful to be sure that this argument can be expressed in first order language, but I'm pretty sure it can. So, as soon as there is one chain of non-standard numbers, that forces the existence of infinitely many.
I bet a large portion of the readership would have been disappointed if that didn't happen.
And in this particular case, that was the only fast way for Alastor to gain enough respect for Harry's competence that they could cooperate in the future. It wouldn't have been consistent with his already established paranoia if he just believed Dumbledore & co.
I can imagine this getting old eventually, but imo it hasn't happened yet.
You can account for a theory where neurons cause consciousness, and where consciousness has no further effects, by drawing a causal graph like
(universe)->(consciousness)
where each bracketed bit is short-hand for a possibly large number of nodes, and likewise the -> is short for a possibly large number of arrows, and then you can certainly trace forward along causal links from "you" to "consciousness", so it's meaningful. And indeed for the same reason that "the ship doesn't disappear when it crosses the horizon" is meani...
In a universe without causal structure, I would expect an intelligent agent that uses an internal causal model of the universe to never work.
Of course you can't really have an intelligent agent with an internal causal model in a universe with no causal structure, so this might seem like a vacuous claim. But it still has the consequence that
P(intelligence is possible | causal universe) > P(intelligence is possible | acausal universe).
My cousin is psychic - if you draw a card from his deck of cards, he can tell you the name of your card before he looks at it. There's no mechanism for it - it's not a causal thing that scientists could study - he just does it.
I believe that your cousin can, under the right circumstances, reliably guess which card you picked. There are all sorts of card tricks that let one do exactly that if the setup is right. But I confidently predict that his guess is not causally separated from the choice of card.
To turn this into a concrete empirical prediction...
Did you mean to write "for all programs that halt in less than (some constant multiple of) N steps"? Because what you wrote doesn't make sense.
Yes. Edited.
What if I give you a program that enumerates all proofs under PA and halts if it ever finds a proof of a contradiction? There is no proof under PA that this program doesn't halt (assuming PA is in fact consistent), so your fake oracle will return HALT, and then I will have reasonable grounds to believe that your oracle is fake.
That's cool. Can you do something similar if I change my program to output NOT-HALT when it doesn't find a proof?
If I am allowed to use only exponentially more computing power than you (a far cry from a halting oracle), then I can produce outputs that you cannot distinguish from a halting oracle.
Consider the following program: Take some program P as input, and search over all proofs of length at most N, in some formal system that can describe the behaviour of arbitrary programs (e.g. first order PA), for a proof that P either does or does not halt. If you find a proof one way or the other, return that answer. Otherwise, return HALT.
This will return the correct answer ...
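A minimal sketch of that procedure in Python. The proof machinery is hidden behind two hypothetical helpers, enumerate_proofs(N) and conclusion(proof), which stand in for a real PA proof enumerator and are not implemented here.

```python
def fake_halting_oracle(P, N, enumerate_proofs, conclusion):
    """Answer HALT or NOT-HALT for program P using only proofs of length <= N.
    enumerate_proofs(N) is assumed to yield every proof of length at most N in
    the chosen formal system, and conclusion(proof) the statement it proves."""
    for proof in enumerate_proofs(N):
        if conclusion(proof) == ('halts', P):
            return 'HALT'
        if conclusion(proof) == ('does not halt', P):
            return 'NOT-HALT'
    # No short proof either way: default to HALT, as in the description above.
    return 'HALT'
```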
Two suggestions, sort of on opposite ends of the spectrum.
First: Practice doing "contest style" math problems. This helps your general math skills, and also helps get you used to thinking creatively and learning to gain some confidence in exploring your good ideas to their limit, while also encouraging you to quickly relinquish lousy approaches.
Second: Exercise. A lot. Whether or not you're already in good shape, you will almost inevitably find it hard to keep a healthy exercise routine when starting in college. So start building some good habits right away.
Re exercise: Good point, but I'd emphasize making a strong habit over doing it a lot. Spending a lot of time is easier during summer, but harder to carry over. Sure, do that, but also make sure you have a 15 minute routine, say, that you do every morning. Even a five minute routine isn't to be sneezed at, if you're doing bodyweight exercises like pushups.
Doing a stretch and 5 minutes of exercise during study breaks is worth a try. Could help avoid some of the physical problems with long hours of computer use. (Press down with your whole hand during pushups - strong fingers, hands and arms will help avoid RSI.)
The standard answer is that there is such a strong "first mover advantage" for self-improving AIs that it only matters which comes first: if an FAI comes first, it would be enough to stop the creation of uFAIs (and also vice versa). This is addressed at some length in Eliezer's paper Artificial Intelligence as a Positive and Negative Factor in Global Risk.
I don't find this answer totally satisfying. It seems like an awfully detailed prediction to make in the absence of a technical theory of AGI.
I'm not saying the apparent object level claim (i.e. intelligence implies benevolence) is wrong - just that it does in fact require further examination, whereas here it looks like an invisible background assumption.
Did my phrasing not make it clear that this is what I meant, or did you interpret me as I intended and still think it sounds condescending?
I am on vacation in Japan until the end of August and might be interested in attending a meetup. Judging from the lack of comments here this never took off, but I might as well leave this here just in case.