All of CalmCanary's Comments + Replies

So if I spouted 100 billion true statements at you, then said, "It would be good for you to give me $100,000," you'd pay up?

9Houshalter
If you just said a bunch of trivial statements 1 billion times, and then demanded that I give you money, it would seem extremely suspicious. It would not fit with your pattern of behavior. If, on the other hand, you gave useful and non-obvious advice, I would do it, because the demand to give you money wouldn't seem any different from all the other things you told me to do that worked out. I mean, that's the essence of the human concepts of earning trust and betrayal.
1Unknowns
Yes, I would, assuming you don't mean statements like "1+1 = 2", but rather true statements spread over a variety of contexts such that I would reasonably believe that you would be trustworthy to that degree over random situations (and thus including situations such as whether I should give you money). (Also, the 100 billion true statements themselves would probably be much more valuable than $100,000.)
2faul_sname
If those 100 billion true statements were all (or even mostly) useful and better calibrated than my own priors, then I'd be likely to believe you, so yes. On the other hand, if you replace $100,000 with $100,000,000,000, I don't think that would still hold. I think you found an important caveat, which is that the fact that an agent will benefit from you believing a statement weakens the evidence that the statement is true, to the point that it's literally zero for an agent that you don't trust at all. And whether or not an AI has a human-like architecture, I think that would still hold.
-2TheAncientGeek
You may already be doing this, giving money to people whose claims you believe yourself.

Very interesting post, but your conclusion seems too strong. Presumably, if instead of messing around with artificial experiencers, we just fill the universe with humans being wireheaded, we should be able to get large quantities of real pleasure with fairly few actually worthwhile experiences; we might even be able to get away with just disembodied human brains. Given this, it seems highly implausible that if we try to transfer this process to a computer, we are forced to create agents so rich and sophisticated that their lives are actually worth living.

2Stuart_Armstrong
By this argument, we might not. If the wireheaded human beings never have experiences and never access their memories, in what way do they remain human beings? I.e. if we could lobotomise them without changing anything, are they not already lobotomised?

Correspondence (matching theories to observations) is a subset of coherence (matching everything with everything)

Correspondence is not just matching theories to observation. It is matching theories to reality. Since we don't have pure transcendent access to reality, this involves a lot of matching theories to observation and to each other, and rejecting the occasional observation as erroneous; however, the ultimate goal is different from that of coherence, since perfectly coherent sets of statements can still be wrong.

If your point is that "realit... (read more)

0TheAncientGeek
The correspondence theory of truth is a theory of truth, not a theory of justification. Correspondentists don't match theories to reality, since they don't have direct ways of detecting a mismatch; they use proxies like observation sentences and predictions. Having justified a theory as being true, they then use correspondence to explain what its truth consists of.
1[anonymous]
As far as I can tell, most coherentists want to match theories with reality too, because truth doesn't really have any other useful definition. The goal is not to be coherent within a random and reality-detached set of sentences: the goal is to be coherent with the whole of science. When a scientist rejects (assigns very low probability to) the observation of a perpetuum mobile on the basis that it contradicts the laws of physics, that is a standard coherentist move. This is another one. The goal is to avoid having to waste time and costs on non-fruitful data gathering. Ultimately the only thing that is rejected is the blind data-only approach that may be considered a straw-manning of the correspondentist position, except that it is unfortunately used too much in practice. A coherentist will simply not spend money buying an airplane ticket to check if someone's garage has a dragon; the proposition contradicts so much of what we already know that the very low prior probability does not justify the cost. You may as well call this a wiser version of correspondentism; the barriers are not exactly black and white here. This is unfortunately philosophy, so fairly muddy :)

Those are entirely valid points, but they only show that human desires are harder to satiate than you might think, not that satiating them would be insufficient to eliminate scarcity. And in fact, that could not possibly be the case even granting economics' definition of scarcity, because if you have no unmet desires, you do not need to make choices about what uses to put things to. If, once you digitize your books, you want nothing in life except to read all the books you can now store, you don't need to put the shelf space to another use; you can just leave it empty.

0Xerographica
Human desires are harder to satiate than I might think? Well...I think that human desire is insatiable. Let's juxtapose a couple wonderful passages... ...and... Human desire is limitless. The challenge is to figure out how to help people understand that we sabotage progress when we prevent each and every individual from having the freedom to prioritize their desires. And prioritizing desires doesn't mean making a list... it means sacrificing accordingly. This effectively communicates to others what's really important to us. Without this essential information... how can other people make informed decisions about how to put society's limited resources to more valuable uses?

Right now we can't choose where our taxes go. Instead, we elect a small group of people to represent humanity's powerful desires for a better world. The point of understanding concepts such as "scarcity" and "opportunity cost" is to effectively evaluate the efficacy of our current system. If it's beneficial to block your valuations from the public sector... then why isn't it beneficial to block your valuations from the private sector? Why do your unique private priorities matter but your unique public priorities do not? How can your freedom to sacrifice a fancy dinner for 10 books be more important than your freedom to sacrifice the drug war for cancer research?

I've given this article a thumbs up. Does this effectively communicate how much I value it? Does the number of thumbs up that this article has received effectively communicate how much we value it? Does it really matter how much everybody on this forum values this article? Does it really matter how much we'd be willing to sacrifice/forego/give-up for this article? The point that I'm trying to make is that these economic tools aren't fancy paperweights. Whether our institutions are the forums that we participate in... or the governments that we pay taxes to... these economic tools serve a fundamentally important purpose of helping us to

You should add a link to the previous post at the top, so people who come across this don't get confused by the sand metaphor.

This will hopefully be addressed in later posts, but on its own, this reads like an attempt to legislate a definition of the word 'scarcity' without a sufficient justification for why we should use the word in that way. (It could also be an explanation of how 'scarcity' is used as a technical term in economics, but it is not obvious to me that the alternate uses/unsatiated desires distinction is relevant to what most economists spen... (read more)

1[anonymous]
Evidence would be any textbook, I hope. Problems arise when economists speak lazily to laypeople and students, or when they use "want" in a particular way so that the definition comes out to the one presented here. Indeed, I have used "use" in such a way, to some confusion, I see. The definition presented here is, as far as I know, the universally agreed-upon definition in economics. No game is being played, but one needs to understand how it differs from normal usage and from a common misconception. Most economists, it is true, do not spend most of their time on the most basic material....
2Xerographica
Is everybody going to have their own replicators on a spaceship? If so, just how big are they going to be? And, if they are big enough to replicate an elephant... then... it seems like it's useful to think of the alternative uses of the space that all the large replicators and their produced items take up on a ship with limited space. So at no point do you ever truly get away from the benefit maximization problem.

Right now I have a bunch of bookcases filled with books. In theory I could free up a bunch of space by digitizing all my books. But then I'd still have to figure out what to do with the free space. Basically, it's a continual process of freeing up resources for more valuable uses. Being free to valuate the alternatives is integral to this process. You can't free up resources for more valuable uses when individuals aren't free to weigh and forego/sacrifice/give-up the less valuable alternatives.

Some good reading material...
* Simon–Ehrlich wager
* Running Out of Everything
* Economists and Scarcity

From Yudkowsky's Epistle to the New York Less Wrongians:

Knowing about scope insensitivity and diminishing marginal returns doesn't just mean that you donate charitable dollars to "existential risks that few other people are working on", instead of "The Society For Curing Rare Diseases In Cute Puppies". It means you know that eating half a chocolate brownie appears as essentially the same pleasurable memory in retrospect as eating a whole brownie, so long as the other half isn't in front of you and you don't have the unpleasant memory

... (read more)

OrphanWilde appears to be talking about morality, not decision theory. The moral Utility Function of utilitarianism is not necessarily the decision-theoretic utility function of any agent, unless you happen to have a morally perfect agent lying around, so your procedure would not work.

The most obvious explanation for this is that utility is not a linear function of response time: the algorithm taking 20 s is very, very bad, and losing 25 ms on average is worthwhile to ensure that this never happens. Consider that if the algorithm is just doing something immediately profitable with no interactions with anything else (e.g. producing some cryptocurrency), the first algorithm is clearly better (assuming you are just trying to maximize expected profit), since on the rare occasions when it takes 20 s, you just have to wait almost 200 times a... (read more)
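A minimal sketch of that nonlinearity, with purely hypothetical numbers (the spike probability, baseline latency, deadline, and penalty below are illustrative assumptions, not part of the original scenario): a large penalty on responses that blow past a deadline makes the slightly-slower-on-average but low-variance algorithm the better choice in expected utility.

```python
# Hypothetical utility of a single response: a small linear cost per second of
# latency, plus a large extra penalty if the response misses a 1 s deadline.
def utility(response_time_s: float) -> float:
    penalty = -response_time_s
    if response_time_s > 1.0:
        penalty -= 100.0  # catastrophic case: everything downstream stalls
    return penalty

# Algorithm A: usually 80 ms, but 1 call in 1000 takes 20 s.
mean_latency_a = 0.999 * 0.08 + 0.001 * 20.0                        # ~0.10 s
expected_utility_a = 0.999 * utility(0.08) + 0.001 * utility(20.0)  # ~-0.20

# Algorithm B: a flat 125 ms (roughly 25 ms slower on average, never spikes).
mean_latency_b = 0.125
expected_utility_b = utility(0.125)                                 # -0.125

# A wins on average latency, but B wins on expected utility.
print(mean_latency_a, mean_latency_b)
print(expected_utility_a, expected_utility_b)
```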

1VAuroch
True, but even in cases where it won't break everything, this is still valued. Consistency is a virtue even if inconsistency won't break anything. And it clearly breaks down in the extreme case where it becomes Caul, but I can't come up with a compelling reason why it should break down. My best guess: the factor that is being valued here is the variance. Low variance increases utility generally, because predictability is valuable in enabling better expected utility calculations for other connected decisions. There is no hard limit on how much this can matter relative to the average case, but as the discrepancy between the average cases grows, so that the low-variance version becomes worse than a greater and greater fraction of the high-variance cases, it remains technically rational, but its implicit prior approaches an insane prior such as that of Caul or Perry. I think this would imply that for an unbounded perfect Bayesian, there is no value to low variance outside of nonlinear utility dependence, but that for bounded reasoners, there is some cutoff where making concessions to predictability despite loss of average-case utility is useful on balance.

Part of the issue is that you are not subject to the principle of explosion. You can assert contradictory things without also asserting that 2+2=3, so you can be confident that you will never tell anyone that 2+2=3 without being confident that you will never contradict yourself. Formal systems using classical logic can't do this: if they prove any contradiction at all, they also prove that 2+2=3, so proving that they don't prove 2+2=3 is exactly the same thing as proving that they are perfectly consistent, which they can't consistently do.
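For reference, a sketch of the principle of explosion in classical (and intuitionistic, though not minimal) logic: given any contradiction $P \wedge \neg P$, an arbitrary statement $Q$, such as $2+2=3$, follows.

$$
\begin{array}{lll}
1. & P & \text{from the contradiction}\\
2. & P \vee Q & \text{disjunction introduction, from 1}\\
3. & \neg P & \text{from the contradiction}\\
4. & Q & \text{disjunctive syllogism, from 2 and 3}
\end{array}
$$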

4Adele_L
Unless I'm missing something, Löb's theorem is still a theorem of minimal logic, which does not have the principle of explosion.

You cannot possibly gain new knowledge about physics by doing moral philosophy. At best, you have shown that any version of utilitarianism which adheres to your assumptions must specify a privileged reference frame in order to be coherent, but this does not imply that this reference frame is the true one in any physical sense.

1bryjnar
This seems untrue. If you have high credence in the two premisses:
* If X were a correct physical theory, then Y.
* Not Y.

then that should decrease your credence in X. It doesn't matter whether Y is a proposition about the behaviour of gases or about moral philosophy (although the implication is likely to be weaker in the latter case).
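A minimal sketch of that update in probabilistic form: by Bayes' theorem,

$$
\frac{P(X \mid \neg Y)}{P(X)} = \frac{P(\neg Y \mid X)}{P(\neg Y)},
$$

so observing $\neg Y$ lowers your credence in $X$ exactly when $X$ makes $\neg Y$ less likely than it is unconditionally; the weaker the implication from $X$ to $Y$, the closer this ratio is to 1 and the smaller the update.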

Strictly speaking, Löb's theorem doesn't show that PA doesn't prove that the provability of any statement implies that statement. It just shows that if you have a statement in PA of the form (If S is provable, then S), you can use this to prove S. The part about PA not proving any implications of that form for a false S only follows if we assume that PA is sound.

Therefore, replacing PA with a stronger system or adding primitive concepts of provability in place of PA's complicated arithmetical construction won't help. As long as it can do everything PA can do (for example, prove that it can prove things it can prove), it will always be able to get from (If S is provable, then S) to S, even if S is 3*5=56.
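For reference, the standard form of Löb's theorem being used here: for any sentence $S$,

$$
\text{if } \mathrm{PA} \vdash \big(\mathrm{Prov}_{\mathrm{PA}}(\ulcorner S \urcorner) \rightarrow S\big), \text{ then } \mathrm{PA} \vdash S,
$$

and the same holds for any extension of PA satisfying the usual derivability conditions; in modal notation, $\vdash \Box(\Box S \rightarrow S) \rightarrow \Box S$.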

0RolfAndreassen
Let me see what happens if I put in a specific example. Suppose that "if 3*5=35 is provable, then 3*5=35" is a theorem of PA. Let me refer to "3*5=35 is a theorem" as sentence 1; "3*5=35" as sentence 2, and the implication 1->2 as sentence 3. Now, if 3 is a theorem, then you can use PA to prove 2 even without actually showing 1; and then PA has proven a falsehood, and is inconsistent. Is that a correct statement of the problem? If so... I seem to have lost track of the original difficulty, sorry. Why is it a worry that PA will assert that it can prove something false, but not a worry that it will assert something false? If you're going to worry that sentence 3 is a theorem, why not go straight to worrying that sentence 2 is a theorem?

Presumably, if you use E to decide in Newcomb's soda, the decisions of agents not using E are screened off, so you should only calculate the relevant probabilities using data from agents using E. If we assume E does in fact recommend to eat the chocolate ice cream, 50% of E agents will drink chocolate soda, 50% will drink the vanilla soda (assuming reasonable experimental design), and 100% will eat the chocolate ice cream. Therefore, given that you use E, there is no correlation between your decision and receiving the $1,000,000, so you might as well eat t... (read more)
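A minimal sketch of the conditional probabilities described above, assuming the usual Newcomb's soda setup in which the $1,000,000 goes to those who drank the chocolate soda, and that all agents using E follow its recommendation to eat the chocolate ice cream:

$$
P(\$1\text{M} \mid \text{uses } E) = P(\text{chocolate soda} \mid \text{uses } E) = 0.5, \qquad
P(\$1\text{M} \mid \text{eats chocolate ice cream},\ \text{uses } E) = 0.5,
$$

so once we condition on using E, the ice-cream choice provides no additional evidence about which soda was assigned.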

1pallas
Can you show where the screening off would apply (like A screens off B from C)?

Are you saying we should maximize the average utility of all humans, or of all sentient beings? The first one is incredibly parochial, but the second one implies that how many children we should have depends on the happiness of aliens on the other side of the universe, which is, at the very least, pretty weird.

Not having an ethical mandate to create new life might or might not be a good idea, but average utilitarianism doesn't get you there. It just changes the criteria in bizarre ways.

0Stuart_Armstrong
I'm not saying anything, at this point. I believe that the best population ethics is likely to be complicated, just as standard ethics are, and I haven't fully settled on either yet.