All of Quill_McGee's Comments + Replies

Well, it does say '2016', so that seems... Yeah, that isn't plausible, but the fact that it says 2016 makes it more plausible than it would be otherwise.

After a bit of thought, I believe I've found a basically permanent solution for this. I use word replacer (not sure how to add links without just posting them, you can google it, it is in the Chrome Web Store) with a bunch of rules to enforce 'they' as the default. If you put rules for longer strings at the top, they match first ('he is' to 'they are' at the top, with 'he' to 'they' lower down, for example).

You will have to put up with some number mismatch unless you want to add a rule for every verb in English ('they puts'), but I feel that that is an acceptabl... (read more)
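
A minimal sketch of that longest-rules-first idea in Python (the rule list, function name, and example sentence are hypothetical illustrations, not the extension's actual configuration):

    import re

    # Hypothetical rule list, longest patterns first so that "he is" is
    # rewritten before the bare "he" rule gets a chance to fire.
    RULES = [
        (r"\bhe is\b", "they are"),
        (r"\bshe is\b", "they are"),
        (r"\bhe\b", "they"),
        (r"\bshe\b", "they"),
    ]

    def degender(text):
        # Apply the rules in order; earlier (longer) rules take priority.
        for pattern, replacement in RULES:
            text = re.sub(pattern, replacement, text)
        return text

    print(degender("he is sure he puts it there"))
    # -> "they are sure they puts it there" (the number mismatch mentioned above)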

Whereas, if I am interpreting them correctly, what they are saying is

(1) People say that high IQ is the reason Newton invented calculus.

(2) High processing speed and copious amounts of RAM don't themselves suffice to invent calculus.

(3) Therefore, "High processing speed and copious amounts of RAM" is not a good description of high IQ.

Personally, I'd say that 'high IQ' is probably most useful when just used to refer to whatever it is that enables people to do stuff like invent calculus, and that 'working memory' already suffices for RAM, and that ... (read more)

"[[ My favorite "other" referral was someone who checked the URL on tinychat entirely be coincidence, before it was passworded. ]]"

Yep, that was surprisingly successful. I also had success with that tactic on fimfiction.net, though that produced fewer useful results.

(also, unless there's another 15-year-old, I look to be the youngest.)

The system for generating new fields of research? After all, if it generates other areas that are no longer philosophy reasonably regularly, then that actually creates value.

1Lumifer
Is it a system for generating new fields of research, or is it just a catch-all bin where all the nebulous, hazy, and vague things are kept until they firm up enough to become fields of research?
6dxu
Does it (still) do so, though? I'm aware that most of what is now science used to be called "natural philosophy", but nowadays it doesn't really seem like there's anything left.

A way to communicate Exists(N) and not Exists(S) in a way that doesn't depend on the context of the current conversation might be ""Santa" exists but Santa does not." Of course, the existence of "Santa" is granted when "Santa does not exist" is understood by the other person, so this is really just a slightly less ambiguous way of saying "Santa does not exist".

0TheOtherDave
Slightly.

I was thinking of the "feeling bad and reconsider" meaning. That is, you don't want regret to occur, so if you are systematically regretting your actions it might be time to try something new. Now, perhaps you were acting optimally already and when you changed you got even /more/ regret, but in that case you just switch back.

2Kindly
That's true, but I think I agree with TheOtherDave that the things that should make you start reconsidering your strategy are not bad outcomes but surprising outcomes. In many cases, of course, bad outcomes should be surprising. But not always: sometimes you choose options you expect to lose, because the payoff is sufficiently high. Plus, of course, you should reconsider your strategy when it succeeds for reasons you did not expect: if I make a bad move in chess, and my opponent does not notice, I still need to work on not making such a move again. I also worry that relying on regret to change your strategy is vulnerable to loss aversion and similar bugs in human reasoning. Betting and losing $100 feels much more bad than betting and winning $100 feels good, to the extent that we can compare them. If you let your regret of the outcome decide your strategy, then you end up teaching yourself to use this buggy feeling when you make decisions.

In my opinion, one should always regret choices with bad outcomes and never regret choices with good outcomes. For Lo It Is Written "If you fail to achieve a correct answer, it is futile to protest that you acted with propriety." As well It Is Written "If it's stupid but it works, it isn't stupid." More explicitly, if you don't regret bad outcomes just because you 'did the right thing,' you will never notice a flaw in your conception of 'the right thing.' This results in a lot of unavoidable regret, and so might not be a good algorithm in practice, but at least in principle it seems to be better.

3Epictetus
Take care to avoid hindsight bias. Outcomes are not always direct consequences of choices. There's usually a chance element to any major decision. The smart bet that works 99.99% of the time can still fail. It doesn't mean you made the wrong decision.
3TheOtherDave
It not only results in unavoidable regret, it sometimes results in regretting the correct choice. Given a choice between "$5000 if I roll a 6, $0 if I roll between 1 and 5" and "$5000 if I roll between 1 and 5, $0 if I roll a 6," the correct choice is the latter. If I regret my choice simply because the die came up 6, I run the risk of not noticing that my conception of "the right thing" was correct, and making the wrong choice next time around.
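
For concreteness, a quick expected-value check of that die example (a minimal sketch in Python; the $5000 stake and fair six-sided die are as stated above):

    payoff = 5000  # as in the example above

    ev_win_on_six = payoff * (1 / 6)          # "$5000 if I roll a 6"
    ev_win_on_one_to_five = payoff * (5 / 6)  # "$5000 if I roll between 1 and 5"

    print(ev_win_on_six)           # ~833.33
    print(ev_win_on_one_to_five)   # ~4166.67

The second option is worth about five times as much in expectation, so regretting it after an unlucky 6 really would mean regretting the correct choice.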

On the contrary, this is what the Litany of Tarski states.

0James_Miller
But by the Litany of Tarski, I want to desire the truth, I want the truth to be desirable.

Exactly! No knock-on effects. Perhaps you meant to comment on the grandparent (great-grandparent? do I measure from this post or your post?) instead?

0private_messaging
yeah, clicked wrong button.

In the Least Convenient Possible World of this hypothetical, each and every dust speck causes a small constant amount of harm, with no knock-on effects (no increasing one's appreciation of the moments when one does not have dust in one's eye, no preventing a 'boring painless existence,' nothing of the sort). Now it may be argued whether this would occur with actual dust, but that is not really the question at hand. Dust was just chosen as being a 'seemingly trivial bad thing,' and if you prefer some other trivial bad thing, just replace that in the problem and the question remains the same.

In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm with no knock-on effects (no avoiding buses, no crashing cars...)

0private_messaging
I thought the original point was to focus just on the inconvenience of the dust, rather than simply positing that out of 3^^^3 people who were dustspecked, one person would've gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma; it's merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in the eyes.

"if I can prove that if a version of me with unbounded computational resources is consistent then this is good, do it"

In this formalism we generally assume infinite resources anyway. And even if this is not the case, consistent/inconsistent doesn't depend on resources, only on the axioms and rules for deduction. So this still doesn't let you increase in proof strength, although again it should help avoid losing it.

1V_V
If we are already assuming infinite resources, then do we really need anything stronger than PA? A formal system may be inconsistent, but a resource-bounded theorem prover working on it might never be able to prove any contradiction for a given resource bound. If you increase the resource bound, contradictions may become provable.

I don't think he was talking about self-PA, but rather an altered decision criterion, such that rather than "if I can prove this is good, do it" it is "if I can prove that if I am consistent then this is good, do it", which I think doesn't have this particular problem, though it does have others, and it still can't /increase/ in proof strength.

1V_V
Yes. Mmm, I think I can see it. What about "if I can prove that if a version of me with unbounded computational resources is consistent then this is good, do it". (*) It seems to me that this allows increase in proof strength up to the proof strength of that particular ideal reference agent. (* there should be probably additional constraints that specify that the current agent, and the successor if present, must be provably approximations of the unbounded agent in some conservative way)

That AI doesn't drop an anvil on its head(I think...), but it also doesn't self-improve.

I think that what Joshua was talking about by 'infinite loop' is 'passing through the same state an infinite number of times.' That is, a /loop/, rather than just a line with no endpoint. Although this would rule out (some arbitrary-size int type) x = 0; while(true){ x++; } on a machine with infinite memory, as it would never pass through the same state twice. So maybe I'm still misunderstanding.
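
A minimal sketch of that distinction in Python (hypothetical stand-ins for the C-style snippet above; both functions run forever if called):

    def cycles_forever():
        # An "infinite loop" in the state-repetition sense: the program
        # revisits the states x == 0 and x == 1 over and over.
        x = 0
        while True:
            x = (x + 1) % 2

    def counts_forever():
        # Runs without end but never passes through the same state twice,
        # given unbounded integers (which Python's ints approximate).
        x = 0
        while True:
            x += 1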

Wasn't Löb's theorem ∀ A (Provable(Provable(A) → A) → Provable(A))? So you get Provable(⊥) directly, rather than passing through ⊥ first. (Taking A to be ⊥, the statement reads Provable(Provable(⊥) → ⊥) → Provable(⊥): the conclusion is that ⊥ is provable, not ⊥ itself.) This is good, as, of course, ⊥ is always false, even if it is provable.

3Kutta
You're right, I mixed it up. Edited the comment.

Darn it, and I counted like five times to make sure there really were 10 visible before I said anything. I didn't realize that the stone the middle-top stone was on top of was one stone, not two.

There might be one more stone not visible?

1[anonymous]
10 would still be incorrect.
1Transfuturist
I see nine stones, not ten.

It should be noted that if measured IQ is fat-tailed, this is because there is something wrong with IQ tests. IQ is defined to be normally distributed with a mean of 100 and a standard deviation of either 15 or 16, depending on which definition you're using. So if measured IQ is fat-tailed, then the tests aren't calibrated properly (of course, if your test goes all the way up to 160, it is almost inevitably miscalibrated, because there just aren't enough people to calibrate it with).

4JonahS
You don't want to force a normal distribution on the data. You're free to do so if you'd like, e.g. by asking takers millions of questions so as to get very fine levels of granularity, and then mapping people at the 84th percentile of "questions answered correctly" to IQ 115, people at the 98th percentile to IQ 130, etc. But what you really want is a situation where you have a (log)-linear relationship between standard deviations and other things that IQ correlates with, and if you force the data to obey a normal distribution, you'll lose this. The rationale for using a normal distribution is the central limit theorem, but that holds only when the summands are uncorrelated: assortative mating can induce correlations between e.g. having gene A that increases IQ and having gene B that increases IQ.
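
A minimal sketch of the norming procedure described above (assuming SciPy is available; the percentile ranks and function name are illustrative):

    from scipy.stats import norm

    def percentile_to_iq(percentile, mean=100.0, sd=15.0):
        # Force the scores onto a normal distribution: map a percentile
        # rank to the corresponding point on an N(100, 15) curve.
        return mean + sd * norm.ppf(percentile / 100.0)

    for p in (50, 84, 98, 99.9):
        print(p, round(percentile_to_iq(p)))
    # 50 -> 100, 84 -> 115, 98 -> 131, 99.9 -> 146

By construction the scaled scores come out normal whatever the raw score distribution looked like, which is the step JonahS is pointing at.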

I would disagree with the phrasing you use regarding 'human terminal values.' Now, I don't disagree that evolution optimized humans according to those criteria, but I am not evolution, and evolution's values are not my values. I would expect that only a tiny fraction of humans would say that evolution's values should be our values (I'd like to say 'none,' but radical neo-Darwinians might exist). Now, if you were just saying that those are the values of the optimization process that produced humanity, I agree, but that was not what I interpreted you as saying.

I assume you either linked to this in the post, or it has been mentioned in the comments, but I did not catch it in either location if it was present, so I'm linking to it anyway: http://intelligence.org/files/Non-Omniscience.pdf contains a not merely computable but tractable algorithm for assigning probabilities to a given set of first-order sentences.

0[anonymous]
Buggery. I meant to read that paper over the summer and never got around to it.

"S proves that A()=1 ⇒ U()=42. But S also proves that A()=1 ⇒ U()=1000000, therefore S proves that A()≠1" I don't see how this follows. Perhaps it is because, if the system was sound, it would never prove more than one value for U() for a given a, therefore by the principle of explosion it proves A()≠1? But that doesn't seem to actually follow. I'm aware that this is an old post, but on the off chance that anyone ever actually sees this comment, help would be appreciated.

Personally, I fall on the 'all of the above(except idea A)' side of the fence. I primarily use LessWrong for the Main board, as it is an excellent source of well-edited, well-considered articles, containing interesting or useful ideas. I want the remainder of the site to thrive because if there is not a large, active userbase and new users being attracted, then I would expect to see the types of content I want to see become less frequent. All of these ideas seem like good things to do, keeping in mind that if these do not actually support the goal of making good Main articles more frequent, then they are not good things, and it seems possible that some of these could backfire.

Well, this comes up different ways under different interpretations. If there is a chance that I am being simulated, that is, this is part of his determining my choice, then I give him $100. If the coin is quantum, that is, there will exist other mes getting the money, I give him $100. If there is a chance that I will encounter similar situations again, I give him $100. If I were informed of the deal beforehand, I give him $100. Given that I am not simulated, given that the coin is deterministic, and given that I will never again encounter Omega, I don't thin... (read more)

My resolution to this, without changing my intuitions to pick things that I currently perceive as 'simply wrong', would be that I value certainty. A 9/10 chance of winning x dollars is worth much less to me than a 10/10 chance of winning 9x/10 dollars. However, a 2/10 chance of winning x dollars is worth only barely less than a 4/10 chance of winning x/2 dollars, because as far as I can tell the added utility of the lack of worrying increases massively as the more certain option approaches 100%. Now, this becomes less powerful the closer the odds are, but... (read more)
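
For reference, a quick check that both pairs above have identical expected dollar value (a sketch; the $100 stake is a hypothetical stand-in for x):

    x = 100.0  # hypothetical stake; the comparison is the same for any x

    # Near-certainty pair: same expected value.
    print(0.9 * x)          # 9/10 chance of x   -> 90.0
    print(1.0 * (0.9 * x))  # certain 9x/10      -> 90.0

    # Low-probability pair: also the same expected value.
    print(0.2 * x)          # 2/10 chance of x   -> 20.0
    print(0.4 * (x / 2))    # 4/10 chance of x/2 -> 20.0

So the stated preference for the certain option is a premium on certainty itself, not on expected winnings.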

http://www.fungible.com/respect/index.html This looks to be very related to the idea of "Observe someone's actions. Assume they are trying to accomplish something. Work out what they are trying to accomplish." That seems to be what you are talking about.

1[anonymous]
That looks very similar to what I was writing about, though I've tried to be rather more formal/mathematical about it instead of coming up with ad-hoc notions of "human", "behavior", "perception", "belief", etc. I would want the learning algorithm to have uncertain/probabilistic beliefs about the learned utility function, and if I was going to reason about individual human minds I would rather just model those minds directly (as done in Indirect Normativity).

(aware that this is 2 years late, just decided to post) I find that I work, on average, somewhere between 2 and 3 times as fast when I am right up next to a deadline as when I have plenty of time.

Does it count if the state of trying lasted for a long (but now ended) time? Because if so, I kept on trying to create a bijection between the reals and the wholes until I was about 13 and found an actual number that I could actually write down that none of my obvious ideas could reach, and for all the non-obvious ones I could find an equivalent. (0.21111111..., by the way)