All of smoofra's Comments + Replies

smoofra50

What about all the angst people had over things like irrational numbers, infinitesimals, non-smooth functions, infinite cardinalities, and non-Euclidean geometries?

I think what you're saying about needing some way to change our minds is a good point though. And I certainly wouldn't say that every single object-level belief I hold is more secure than every meta belief. I'll even grant you that for certain decisions, like how to set public health policy, some sort of QALY-based shut up and calculate approach is the right way to go.

But I don't think that... (read more)

smoofra50

I think you've pretty much stated the exact opposite of my own moral-epistemological worldview.

I don't like the analogy with physics. Physical theories get tested against external reality in a way that makes them fundamentally different from ethical theories.

If you want to analogize between ethics and science, I want to compare it to the foundations of mathematics. So utilitarianism isn't relativity, it's ZFC. Even though ZFC proves PA is a consistent and true theory of the natural numbers, it's a huge mistake for a human to base their trust in PA o... (read more)

0maxikov
PA has a big advantage over object-level ethics: it never suggested things like "every tenth or so number should be considered impure and treated as zero in calculations", while object-level ethics did. The closest thing I can think of in mathematics, where everyone believed X and then it turned out not-X at all, was the idea that it's impossible to algorithmically take every elementary integral or prove it non-elementary. But even that was a within-system statement, not a meta-statement, and it has an objective truth value. Systems as a whole, however, don't necessarily have one. Thus, in ethics either individual humans or society as a whole need a mechanism for discarding ethical systems for good, which isn't that big of an issue for math. And the solution to this problem seems to be meta-ethics.
smoofra10

I haven't. I'll see if I can show up for the next one.

smoofra00

This was also the part of Dalliard's critique I found most convincing. Shalizi's argument seems to be a refutation of a straw man.

smoofra40

One thing Dalliard mentions is that the 'g' factors derived from different studies are 'statistically indistinguishable'. What's the technical content of this statement?

2Deleet
There is a test to see how similar two factors are. When that test gives results in the >.95 area, the factors are usually taken to be indistinguishable. It's called the congruence coefficient. See e.g. Jensen, Arthur R., and Li-Jen Weng. "What is a good g?" Intelligence 18.3 (1994): 231-258.
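For readers wondering what that statistic is: the usual measure is Tucker's congruence coefficient, the cosine of the angle between two factor-loading vectors. A minimal sketch, with made-up subtest loadings purely for illustration:

```python
import numpy as np

def congruence_coefficient(loadings_a, loadings_b):
    """Tucker's congruence coefficient between two factor-loading vectors.

    Like a Pearson correlation, but computed about the origin rather than
    the mean; values above roughly .95 are conventionally read as
    "the same factor".
    """
    a = np.asarray(loadings_a, dtype=float)
    b = np.asarray(loadings_b, dtype=float)
    return a @ b / np.sqrt((a @ a) * (b @ b))

# Hypothetical g loadings of the same five subtests from two different studies.
g_study1 = [0.81, 0.74, 0.66, 0.58, 0.70]
g_study2 = [0.79, 0.77, 0.61, 0.63, 0.68]
print(round(congruence_coefficient(g_study1, g_study2), 3))  # ~0.999
```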
smoofra10

Thanks for the link.

Not that I feel particularly qualified to judge, but I'd say Dalliard has a way better argument. I wonder if Shalizi has written a response.

3gwern
It only just came out, but given that in his earlier posts he expressed disgust with the entire field and regretted writing anything on the topic, I wouldn't expect him to.
smoofra10

Wow, that's a neat service.

smoofra20

It looks like we may have enough people interested in Probability Theory, though I doubt we all live in the same city. I live near DC.

Depending on how many people are interested/where they live, it might make sense to meet over video chat instead.

0maia
Have you come to any DC meetups? They're pretty good. Though sadly, I think most people in the DC group who might be interested in doing this (including me) are already signed up to learn all the math they can handle in a formal program.
0Rixie
Yeah, I agree. I think that we should make a list of everyone who wants to join, split them into groups of not more than 10 based on age, and every mini-group will decide what they want to learn and go at a pace that matches their background and ability.
smoofra30

So you are assuming that it will want to prove the soundness of any successors? Even though it can't even prove the soundness of itself? But it can believe in its own soundness in a Bayesian sense without being able to prove it. There is not (as far as I know) any Gödelian obstacle to that. I guess that was your point in the first place.

smoofra00

OK, forget about F for a second. Isn't the huge difficulty finding the right deductions to make, not formalizing them and verifying them?

smoofra50

This is all nifty and interesting, as mathematics, but I feel like you are probably barking up the wrong tree when it comes to applying this stuff to AI. I say this for a couple of reasons:

First, ZFC itself is already comically overpowered. Have you read about reverse mathematics? Stephen Simpson edited a good book on the topic. Anyway, my point is that there's a whole spectrum of systems a lot weaker than ZFC that are sufficient for a large fraction of theorems, and probably all the reasoning that you would ever need to do physics or make real wo... (read more)

3paulfchristiano
As others have said, doesn't much matter whether you use ZFC or any other system. On the second point: one general danger is that your ability to build systems to do X will outpace your understanding of systems that do X. In this case, you might mess up and not quite get what you want. One way to try and remedy this is to develop a better formal understanding of "systems that do X." AGI seems like an important case of this pattern, because there is some chance of serious negative consequences from failing, from building systems whose behavior you don't quite understand. (I tend to think this probability is smaller than Eliezer, but I think most everyone who has thought about it seriously agrees that this is a possible problem.) At the moment we have no such formal understanding for AGI, so it might be worth thinking about.

The problem is if your mathematical power has to go down each time you create a successor or equivalently self-modify. If PA could prove itself sound that might well be enough for many purposes. The problem is if you need a system that proves another system sound and in this case the system strength has to be stepped down each time. That is the Lob obstacle.

0lukeprog
Both are huge difficulties, but most of the work in FAI is probably in the AI part, not the F part.
Nisan110

The result works for theories as simple as Peano arithmetic.

smoofra10

I don't think you've chosen your examples particularly well.

Abortion certainly can be a 'central' case of murder. Imagine aborting a fetus 10 minutes before it would have been born. It can also be totally 'noncentral': the morning-after pill. Where an abortion falls on that spectrum is a grey area, depending on how far the fetus's neural development has progressed.

Affirmative action really IS a central case of racism. It's bad for the same reason segregation was bad: it's not fair to judge people based on their race. The only difference is that it's not nearly AS bad. Segregation was brutal and oppressive, while affirmative action doesn't really affect most people enough for them to notice.

smoofra30

What do you think you're adding to the discussion by trotting out this sort of pedantic literalism?

Unless someone explicitly says they know something with absolute 100% mathematical certainty, why don't you just use your common sense and figure that when they say they "know" something, they mean they assign it a very high probability, and believe they have epistemologically sound reasons for doing so.

smoofra50

"Trust your intuitions, but don't waste too much time arguing for them"

This is an excellent point. Intuition plays an absolutely crucial role in human thought, but there's no point in debating an opinion that (by definition, even) you're incapable of verbalizing your reasons for. Let me suggest another maxim:

Intuitions tell you where to look, not what you'll find.

smoofra40

Wait, so are you agreeing with me or disagreeing?

-1ChrisHibbert
You didn't state a point of view. I'm surprised that MatthewB was willing to guess at what side you were taking.
4MatthewB
I think I am agreeing with you. I am saying that Hitler did think about how to yell at crowds, and then took action to that effect (learned to yell at them effectively, and then yelled at them; to great effect, so it would seem).
smoofra100

What makes you think Hitler didn't deliberately think about how to yell at crowds?

5MatthewB
From reading histories of him. He took classes on how to yell at crowds; studied it in great detail.
smoofra50

You're confusing "reason" with inappropriate confidence in models and formalism.

smoofra40

I vote for the meta-thread convention, or for any other mechanism that keeps meta off the front page.

smoofra20

I think the main problem with mormon2's submission was not where it was posted, but that it was pointless and uninformed.

4PeterS
Eliezer's main problem with it was where it was posted (or that's all he let on to anyway).
smoofra30

I suggest you run an experiment. Go try to eat at a restaurant and explicitly state your intention not to tip. I predict the waiter will tell you to fuck off, and if the manager gets called out, he'll tell you to fuck off too.

3Cyan
Upvoted. I'll have to try to find time to do this, although I have qualms over the jerkiness of subjecting unsuspecting waitstaff to this experiment. Oh, well -- I guess I'll just have to leave a big tip. ETA: If my waiter does tell me to fuck off, I won't ask for the manager -- if I'm right, then that would get the waiter fired, and I'm not up for that.
smoofra00

I basically agree with you, though I'm not sure the legal distinction between "theft" and "breach of contract" is meaningful in this context. As far as I know there's no law that says you have to tip at all. So from a technical legal perspective, failing to tip is neither theft nor breach of contract nor any other offense.

smoofra-10

It may not be legal theft, but it's still moral theft. You sat down and ate with the mutual understanding that you would tip. The only reason the waiter is bringing you food is the expectation that you will tip. If you announced your intention not to tip, he would not serve you; he would tell you to fuck off. The tip is a payment for a service, not a gift. The fact that the agreement to pay is implicit, and that the precise amount is left partially unspecified, are merely technicalities that do not change the basic fact that the tip is a payment, not a gift.

3Cyan
Wacky. The waiter brings food because that's the job description. And then the manager would fire him or her. I tip, often generously, never at less than the standard 15%, but I have no illusions about the enforceability of the tipping folkway.
smoofra80

You don't tip in order to be altruistic, you tip because you informally agreed to tip by eating in a restaurant in the first place. If you don't tip (assuming the service was acceptable), you aren't being virtuous, you're being a thief.

Perhaps you should say the correct moral move is to tip exactly 15%.

3mattnewport
I see the implicit contract slightly differently. When you eat at a restaurant in North America you enter into (at least) two implicit contracts. The first is to pay for the food and drink you consume before you leave (not to do so would certainly be theft). The second is to pay a service charge that you feel is appropriate to the quality of service you received, with 15-20% generally considered appropriate for service that is of average quality and lower or higher tips appropriate for below or above average quality service. You are in breach of the second implicit contract if you tip below 15% despite being satisfied with the service but I think calling that theft is a little strong. Bad faith or breach of contract would be closer to describing the offence. The system would be pointless if you never raised or lowered your tip to reflect the quality of the service as you perceive it however. The system works to the extent that the cultural norm persists. If something caused the cultural norm to break down then new informal contracts would have to arise, perhaps more like the ones found in Europe where tipping is not the expected norm. Tipping is not particularly unusual in relying on widespread adherence to an implicit/informal contract however - paying for your meal after you eat it is just as reliant on cultural norms.
2Alicorn
It's only theft not to tip if they actually include the tip in the bill as a "service charge". Otherwise, the tip is technically a gift. Withholding a customary gift might be mean (compare the likely outrage if you don't get your children gifts on their birthdays) but it's not stealing.
smoofra30

I believe EY has already explained that he's trying to make more rationalists, so they can go and solve FAI.

smoofra30

If I think I know a more efficient way to make a widget, I still need to convince somebody to put up the capital for my new widget factory.

smoofra50

But if results depend on my ability to convince rich people, that's not prediction market!

What!? Why not?

1gwern
Suppose the situation were that taw could make bets on the terms he wishes for - but only if he can convince 5 out of 9 rich people. How is this a market, and not some sort of bizarre committee or bureaucracy?
smoofra10

I guess it depends on how you define bullet-biting. Let me be more specific: voted up for accepting an ugly truth instead of rationalizing or making excuses.

3Alicorn
Is bullet-biting an inherently good thing? Is it even reliably correlated with good things?
smoofra20

Arbitrage, in the broadest sense, means picking up free money - money that is free because of other people's preferences

Except that finding exploitable inconsistencies in other people's preferences that haven't yet been destroyed by some other arbitrageur actually requires a fair bit of work and/or risk.

4LauraABJ
My husband's law professor described arbitrage as grabbing at nickels from in front of a bulldozer... The point being that you really need to know what you're doing as an arbitrageur to make any money at all, and if you don't, you stand to lose quite a bit.
0Stuart_Armstrong
From the point of view of the person being arbitraged, this makes no difference...
0Technologos
Answered in this comment.
1John_Maxwell
There are a few things--voting, lotteries, the viability of picking up pennies off the ground--that draw way too much attention from rationalists. Not criticizing you here, I'm interested in them too!
smoofra30

Well, no.

Status is an informal, social concept. The legal system doesn't have much to do with "awarding" it.

smoofra90

In my experience, children are cruel, immoral, egotistical, and utterly selfish. The last thing they need is to have their inflated sense of self worth and entitlement stroked by the sort of parenting you seem to be advocating. Children ought to have fundamentally lower status, not just because they're children per se, but because they're stupid and useless. They should indeed be grateful that anyone would take the trouble to feed and care for someone as stupid and useless as they, and repay the favor by becoming stronger.

2akshatrathi
I am not a parent myself but I've been told a lot of times by my parents and others that they have learnt a great deal from children. Thus, calling them useless is not fair. Also, even now children in rural India are treated as future bread-earners. Thus, taking care of them and helping them grow is seen as an advantage to the parents. Stupid, yes they may be but then weren't we all?
dclayh110

Children ought to have fundamentally lower status, not just because they're children per se, but because they're stupid and useless.

So then the legal system should award status based on usefulness and intelligence, not age as in the present system.

Alicorn210

Children are ignorant and powerless; that's not the same as stupid and useless.

3MBlume
I don't know that I agree with you, but
* I also don't know whether I disagree with you
* no one else came close to this point
* and you made it well
so upvoted.
smoofra00

Another example: Cox's theorem.

smoofra10

"The truly fast way to produce a human-relative ideal moral agent is to create an AI with the interim goal of inferring the "human utility function" (but with a few safeguards built in, so it doesn't, e.g., kill off humanity while it solves that sub-problem),"

That is three-laws-of-robotics-ism, and it won't work. There's no such thing as a safe superintelligence that doesn't already share our values.

1bogdanb
Surely there can be such super-intelligences: Imagine a (perhaps autistic) IQ-200 guy who just wants to stay in his room and play with his paperclips. He doesn't really care about the rest of the world, he doesn't care about extending his intelligence further, and the rest of the world doesn't quite care about his paperclips. Now replace the guy with an AI with the same values: it's quite super-intelligent already, but it's still safe (in the sense that objectively it poses no threat, other than the fact that the resources it uses playing with its paperclips could be used for something else); I have no problem scaling its intelligence much further and leaving it just as benign. Of course, once it's super-intelligent (quite a bit earlier, in fact), it may be very hard or impossible for us to determine that it's safe — but then again, the same is true for humans, and quite a few of the billions of existing and past humans are or have been very dangerous. The difference between "X can't be safe" and "X can't be determined to be safe" is important; the first means "probability we live, given X, is zero", and the other means "probability we live, given X, is strictly less than one".
smoofra00

It's perfectly possible for one twin to get fat while the other doesn't. If it doesn't happen often, it's because features like willpower are more controlled by genes than we think, not because staying thin doesn't depend on willpower.

0NancyLebovitz
I've also read that gene expression diverges in twins over time-- so if a lot of the difference in body composition is about gene expression, there might be a few pairs of twins where, just by chance, either the willpower or the fat storage changes kick in earlier or more strongly. "Willpower" is not just one thing-- there are people who can't stick to diets who show a lot of will power in other parts of their lives, and vice versa.
2MichaelGR
Indeed. This is why I wrote:
0[anonymous]
In fact, let me draw a picture. What most people think the causal graph is:

    genes ----\
               willpower ---> fatness

What looking at twins is supposed to convince us:

    genes ----> fatness
    willpower

What's really going on:

    genes ----------------\
    willpower ----->
smoofra10

I figured it out! Roger Penrose is right about the nature of the brain!

Just kidding.

smoofra00

Yes, I think it will change the decision. You need a very large number of minuscule steps to go from specks to torture, and at each stage you need to decimate the number of people affected to justify inflicting the extra suffering on the few. It's probably fair to assume the universe can't support more than, say, 2^250 people, which doesn't seem nearly enough.
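For a sense of the scale gap (a back-of-the-envelope sketch; 3^^^3 here is Knuth up-arrow notation, as in the original Torture vs. Dust Specks post):

```python
# Compare the 2^250 population bound guessed above with 3^^^3.
universe_population_bound = 2 ** 250       # about 1.8e75

three_up_3 = 3 ** 3                        # 3^3  = 27
three_upup_3 = 3 ** three_up_3             # 3^^3 = 3^27 = 7,625,597,484,987

print(f"2^250 is about {universe_population_bound:.3e}")
print(f"3^^3  = {three_upup_3:,}")

# 3^^^3 = 3^^(3^^3): a power tower of 3s roughly 7.6 trillion levels tall.
# Even the next rung, 3^(3^27), already dwarfs 2^250, so a 2^250-person
# universe falls absurdly short of the hypothetical's 3^^^3 speck victims.
```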

2RolfAndreassen
You can increase the severity of the specking accordingly, though. Call it PINPRICKS, maybe?
smoofra50

These thought experiments all seem to require vastly more resources than the physical universe contains. Does that mean they don't matter?

3RolfAndreassen
It seems to me that you can rephrase them in terms of the resources the universe actually does contain, without changing the problem. Take SPECKS: Suppose that instead of the 3^^^^3 potential SPECKing victims, we instead make as many humans as possible given the size of the universe, and take that as the victim population. Should we expect this to change the decision?
4Eliezer Yudkowsky
What if we're wrong about the size of the universe?

As with Torture vs. Specks, the point of this is to expose your decision procedure in a context where you don't have to compare remotely commensurable utilities. Learning about the behavior of your preferences at such an extreme can help illuminate the right thing to do in more plausible contexts. (Thinking through Torture vs. Dust Specks helped mold my thinking on public policy, where it's very tempting to weigh the salience of a large benefit to a few people against a small cost to everyone.)

EDIT: It's the same heuristic that mathematicians often use when we're pondering a conjecture— we try it in extreme or limiting cases to see if it breaks.

smoofra10

Seems to me that ESR is basically right, except I'm not sure Dennett would even disagree. Maybe he'll reply in a comment?

0thomblake
Yes, by now ESR was corrected in the comments and has mentioned that he probably misread Dennett.
0billswift
I'm not sure either, but at the least his explanation of what he thinks qualia is is clearer than any other that I have seen.
smoofra-10

Yup. I get all that. I still want to go for the specks.

Perhaps it has to do with the fact that 3^^^3 is way more people than could possibly exist. Perhaps the specks vs. torture hypothetical doesn't actually matter. I don't know. But I'm just not convinced.

7Eliezer Yudkowsky
Just give up already! Intuition isn't always right!
smoofra10

Actually, I think you're right. The escalation argument has caught me in a contradiction. I wonder why I didn't see it last time around.

I still prefer the specks though. My prior in favor of the specks is strong enough that I have to conclude that there's something wrong with the escalation argument that I'm not presently clever enough to find. It's a bit like reading a proof that 2+2 = 5. You know you've just read a proof, and you checked each step, but you still, justifiably, don't believe it. It's far more likely that the proof fooled you in some subtle way than it is that arithmetic is actually inconsistent.

2orthonormal
Well, we have better reasons to believe that arithmetic is consistent than we have to believe that human beings' strong moral impulses are coherent in cases outside of everyday experience. I think much of the point of the SPECKS vs. TORTURE debate was to emphasize that our moral intuitions aren't perceptions of a consistent world of values, but instead a thousand shards of moral desire which originated in a thousand different aspects of primate social life. For one thing, our moral intuitions don't shut up and multiply. When we start making decisions that affect large numbers of people (3^^^3 isn't necessary; a million is enough to take us far outside of our usual domain), it's important to be aware that the actual best action might sometimes trigger a wave of moral disgust, if the harm to a few seems more salient than the benefit to the many, etc. Keep in mind that this isn't arguing for implementing Utilitarianism of the "kill a healthy traveler and harvest his organs to save 10 other people" variety; among its faults, that kind of Utilitarianism fails to consider its probable consequences on human behavior if people know it's being implemented. The circularity of "SPECKS" just serves to point out one more domain in which Eliezer's Maxim applies:
smoofra00

The right answer is |U(3^^^3 + 1 dust specks) - U(3^^^3 dust specks)| < |U(1 dust speck) - U(0 dust specks)|, and U(any number of dust specks) < U(torture).

There is no additivity axiom for utility.
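One way to see how such preferences can be coherent: any bounded disutility scale for specks has exactly this shape. A minimal sketch under that assumption (the functional form and constants below are purely illustrative, not something smoofra specified):

```python
import math

TORTURE_DISUTILITY = 2.0   # illustrative: torture sits above the speck bound
SPECK_BOUND = 1.0          # least upper bound on total speck disutility

def speck_disutility(n):
    """Total disutility of n dust specks: increasing in n,
    but bounded above by SPECK_BOUND, so it never reaches torture."""
    return SPECK_BOUND * (1.0 - math.exp(-1e-9 * n))

# No number of specks ever adds up to torture on this scale.
for n in (1, 10**6, 10**12, 10**100):
    assert speck_disutility(n) < TORTURE_DISUTILITY

# The marginal disutility also shrinks: speck number 3^^^3 + 1 adds far
# less than the very first speck did, matching the inequality above.
```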

0Dan_Moore
What smoofra said (although I would reverse the signs and assign torture and dust specks negative utility). Say there is a singularity in the utility function for torture (goes to negative infinity). The utility of many dust specks (finite negative) cannot add up to the utility for torture.
0orthonormal
This was confronted in the Escalation Argument. Would you prefer 1000 people being tortured for 49 years to 1 person being tortured for 50 years? (If you would, take 1000 to 1000000 and 49 to 49.99, etc.) Is there any step of the argument where your projected utility function isn't additive enough to prefer that a much smaller number of people suffer a little bit more?
1cousin_it
This is called the "proximity argument" in the post. I've no idea how we're managing to have this discussion under a deleted submission. It shouldn't have even been posted to LW! It was live for about 30 seconds until I realized I clicked the wrong button.
smoofra10

I don't think it's an exact quote of anything on OB or LW. If it is then my subconscious has a much better memory than I do. I was just attempting to relate the Bourdain quote to OBLW terminology.

smoofra20

Yeah, but then it wouldn't be a quote anymore!

smoofra80

"I don't, I've come to believe, have to agree with you to like you, or respect you."

--Anthony Bourdain.

Never forget that your opponents are not evil mutants. They are the heroes of their own stories, and if you can't fathom why they do what they do, or why they believe what they believe, that's your failing not theirs.

1Eliezer Yudkowsky
Last paragraph is an OBLW quote, no? Those don't go here...
2Psychohistorian
Removing the second and either the third or fourth clauses would make this a much stronger quote, i.e.
6Z_M_Davis
Interestingly though, by accepting this symmetry between you and your enemy, you potentially thereby break it. If you can understand why they believe what they believe, but they don't understand why you believe what you do, then you can justifiably consider yourself in a superior epistemic position.
smoofra00

If anyone guesses above 0, anyone guessing 0 will be beaten by someone with a guess between 0 and the average.

If the average is less than 3/4, then the zeros will still win.

0lavalamp
It depends where in the range of 0 to the average the guess is. But of course I see what you mean; I meant between 0 and (average * 3/4), sorry. EDIT: (average * 3/4 + average * 3/8) is the upper bound, unless I forgot something or you're not allowed to go over. EDIT 2: The point being, there's a lot more winning non-zero answers than zero answers.
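For anyone who wants to poke at the combinatorics, here is a tiny sketch of the game as I read this exchange (target is 3/4 of the mean of all guesses, closest guess wins; the exact rules are my reconstruction, not a quote of the original post):

```python
def winners(guesses):
    """Winning guesses and target for one round of guess-3/4-of-the-average."""
    target = 0.75 * sum(guesses) / len(guesses)
    best = min(abs(g - target) for g in guesses)
    return [g for g in guesses if abs(g - target) == best], target

# A lone 0 among four people guessing 1: mean = 0.8, target = 0.6, so the
# 1-guessers win; 0 only pays off if enough others reason their way down too.
print(winners([0, 1, 1, 1, 1]))   # ([1, 1, 1, 1], 0.6)
```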
smoofra00

you are confusing wanting "truth" with wanting the beliefs you consider to be true.

What a presumptuous, useless thing to say. Why don't you explain how you've deduced my confusion from that one sentence.

Apparently you think I've got a particular truth in mind and I'm accusing those who disagree with me of deprioritizing truth. Even if I was, why does that indicate confusion on my part? If I wanted to accuse them of being wrong because they were stupid, or of being wrong because they lacked the evidence, I would have said so. I'm accusing... (read more)

0byrnema
OK. I thought I was arguing with another version of "if you're not rational, then you don't value truth". That was presumptuous. And you're right, there is this other category of being rather indifferent or careless with respect to the truth, especially if the truth may be unpleasant or require work. I observe I have a knee jerk reaction to defend the "them" group whenever there is any kind of anti-"other-people" argument... and it is not my intention to be an indiscriminate bleeding-heart defender, so I need to consider this.
smoofra00

Thanks! I haven't seen that one before.

I'm working on a post on this topic, but I don't think I can really adequately address what I don't like about how Jaynes presents the foundations of probability theory without presenting it myself the way I think it ought to be. And to do that I need to actually learn some things I don't know yet, so it's going to be a bit of a project.

smoofra10

Interestingly, those goals I described us in terms of -- wanting truth, wanting to avoid deluding ourselves -- are not really what separates "us" from "them".

I'm not sure if that's true. Everyone says they want the truth, but people often reveal through their actions that it's pretty low on the priority list. Perhaps we should say that we want truth more than most people do. Or that we don't believe we can get away with deceiving ourselves without paying a terrible price.

0byrnema
I disagree with this. Of course, not everyone places seeking truth as their highest priority. (A certain kind of mindless hedonist, perhaps.) But when you say, "everyone says they want the truth, [...], but it's pretty low on the priority list" you are confusing wanting "truth" with wanting the beliefs you consider to be true. In other words, your version of the truth is low on their priority list. You don't have to be relativistic about what truth is, but I think it is a false belief to think that people don't believe their beliefs are true. I would also like to add that confusion about beliefs seems to be a common human state, at least transiently. It is too negative to call the state when a person has conflicting, inconsistent beliefs "delusional". Sometimes life teaches a person that this state is impossible to get out of -- I hope this is a false belief -- and they become complacent about having some subset of beliefs that they know are false. This complacence (really, a form of despair) is the closest example I can think of for a person not wanting truth.