JonahSinick comments on Common sense as a prior - LessWrong
Hi Brian :-)
How do you know this? It's true that their utility functions aren't linear, but it doesn't follow that that's why they don't take such efforts seriously. Near-Earth Objects: Finding Them Before They Find Us reports on concerted efforts to prevent extinction-level asteroids from colliding with Earth. This shows that people are (sometimes) willing to act on small probabilities of human extinction.
Dovetailing from my comment above, I think there's a risk in following the line of thought "I'm doing X because it fulfills certain values that I have. Other people don't have these values. So the fact that they don't engage in X, and don't think that doing X is a good idea, isn't evidence against X being a good idea for me." This neglects the possibility that, even though they don't share your values, doing X or something analogous to X would fulfill their (different) values conditional on your factual beliefs being right, so that the fact that they don't do or endorse X is evidence against the factual beliefs connected with X. In any given instance, how much weight to give this possibility is a subtle judgment call, but I think it should always be considered.
Fair enough. :) Yes, from the fact that probability * utility is small, we can't tell whether the probability is small, the utility is, or both. In the case of shaping AI specifically, I haven't heard good arguments against assigning it a non-negligible probability of success. I also know that many people refuse Pascalian wagers at least partly because they dislike Pascalian wagers as such, rather than because they disagree with the premises. Combining these suggests the probability side isn't so much the issue, though that suggestion stands to be verified. Also, people will often feign having ridiculously small probabilities to get out of Pascalian wagers, but they usually make these proclamations after the fact, or else they're the kind of people who say "any probability less than 0.01 is set to 0" (except when wearing seat belts to protect against a car accident or the like, highlighting what Nick said about people potentially being more rational for important near-range decisions).
Anyway, not accepting a Pascalian wager does not mean you don't agree with the probability and utility estimates; maybe you think the wager is missing the forest for the trees and ignoring bigger-picture issues. I think most Pascalian wagers can be defused by saying, "If that were true, this other thing would be even more important, so you should focus on that other thing instead." But then you should actually focus on that other thing instead rather than focusing on neither, which most people tend to do. :P
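To make the probability-vs-utility ambiguity above concrete, here's a minimal sketch (every number is made up purely for illustration): two stances that decline the same wager for entirely different reasons can yield the same small expected value, so the product alone can't tell us which factor someone rejects.

```python
# Toy expected-value comparison; every number here is illustrative, not an estimate.
def expected_value(probability: float, utility: float) -> float:
    return probability * utility

# Two hypothetical stances that both decline the same Pascalian wager:
skeptic = expected_value(1e-12, 1e10)  # rejects the probability premise
bounded = expected_value(1e-2, 1e0)    # accepts the probability, caps the utility

print(skeptic, bounded)  # both ~0.01: the small product alone can't tell us
                         # whether p was tiny or U was bounded
```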
You are also correct that differences in moral values don't completely shield off an update to probabilities when I find my actions divergent from those of others. However, in cases where people do make their probabilities explicit, I don't normally diverge substantially (or if I do, I tend to update somewhat), and in these particular cases, divergent values comprise the remainder of the gap (usually most of it). Of course, I may have already updated the most in those cases where people have made their probabilities explicit, so maybe there's bigger latent epistemic divergence when we're distant from the lamp post.
If you restrict yourself to thoughtful, intelligent people who care about having a big positive impact on global welfare (which is a group substantially larger than the EA community), I think that a large part of what's going on is that people recognize that they have a substantial comparative advantage in a given domain, and think that they can have the biggest impact by doing what they're best at, and so don't try to optimize between causes. I think that their reasoning is a lot closer to the mark than initially meets the eye, for reasons that I gave in my posts Robustness of Cost-Effectiveness Estimates and Philanthropy and Earning to Give vs. Altruistic Career Choice Revisited.
Of course, this is relative to more conventional values than utilitarianism, and so lots of their efforts go into things that aren't utilitarian. But because of the number of people, and the diversity of comparative advantages, some of them will by chance be working on problems that matter on utilitarian grounds, and will learn a lot about how best to address those problems. You may argue that the problems they're working on are different from the problems you're interested in addressing, but there may be strong analogies between the situations, and so their knowledge may be transferable.
As for people not working to shape AI, I think that the utilitarian expected value of working to shape AI is lower than it may initially appear. Some points:
Viewing all of these things in juxtaposition, I wouldn't take people's low focus on AI risk reduction as very strong evidence that people don't care about astronomical waste. See also my post Many Weak Arguments and the Typical Mind: the absence of an attempt to isolate the highest expected value activities may be adaptive rather than an indication of lack of seriousness of purpose.
Thanks, Jonah. :)
But it's a smaller group than the set of elites used for the common-sense prior. Hence, many elites don't share our values even by this basic measure.
Yes, this was my point.
Definitely. I wouldn't claim otherwise.
In isolation, their not working on astronomical waste is not sufficient proof that their utility functions are not linear. However, combined with everything else I know about people's psychology, it seems very plausible that they in fact don't have linear utility functions.
Compare with behavioral economics. You can explain away any given deviation from the behavior that classical microeconomics predicts for rational agents with an epicycle in the theory, but combined with all that we know about people's psychology, we have reason to think that psychological biases themselves are playing a role in the deviations.
Not dismissed out of hand, but downweighted a fair amount. I think Carl is more likely to be right than Thiel on an arbitrary question where Carl has studied it and Thiel has not. Famous people are busy. Comments they make in an offhand way may be circulated in the media. Thiel has some good general intuition, sure, but his speculations on a given social trend don't compare with more systematic research done by someone like Carl.
But a lot of the people within this group use an elite common-sense prior despite having disjoint values, which is a signal that the elite common-sense prior is right.
I was acknowledging it :-)
Elite common sense says that voting is important for altruistic reasons. It's not clear that this is contingent on the number of people in America not being too big. One could imagine an intergalactic empire with 10^50 people where voting was considered important. So it's not clear that people have bounded utility functions. (For what it's worth, I no longer consider myself to have a bounded utility function.)
People's moral intuitions do deviate from utilitarianism, e.g. probably most people don't subscribe to the view that bringing a life into existence is equivalent to saving a life. But the ways in which their intuitions differ from utilitarianism may cancel each other out. For example, having read about climate change tail risk, I have the impression that climate change reduction advocates are often (in operational terms) valuing future people more than they value present people.
So I think it's best to remain agnostic as to the degree to which variance in the humanitarian endeavors that people engage in is driven by variance in their values.
I've been extremely impressed by Peter Thiel based on reading notes on his course about startups. He has extremely broad and penetrating knowledge. He may have the highest crystallized intelligence of anybody I've ever encountered. I would not be surprised if he's studied the possibility of stagnation and societal collapse in more detail than Carl has.
This is because they're deontologists, not because they're consequentialists with a linear utility function. So rather than suggesting more similarity in values, it suggests less. (That said, there's more overlap between deontology and consequentialism than meets the eye.)
It may be best to examine on a case-by-case basis. We don't need to just look at what people are doing and make inferences; we can also look at other psychological hints about how they feel regarding a given issue. Nick did suggest giving greater weight to what people believe (or, in this case, what they do) than their stated reasons for those beliefs (or actions), but he acknowledges this recommendation is controversial (e.g., Ray Dalio disagrees), and on some issues it seems like there's enough other information to outweigh whatever inferences we might draw from actions alone. For example, we know people tend to be irrational in the religious domain based on other facts and so can somewhat discount the observed behavior there.
Points taken on the other issues we discussed.
How do you know this? Do you think that these people would describe their reason for voting as deontological?
Oh, definitely. The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.
Just ask people why they vote, and most of them will say things like "It's a civic duty," "Our forefathers died for this, so we shouldn't waste it," "If everyone didn't vote, things would be bad," ...
I Googled the question and found similar responses in this article:
Interestingly, the author also says: "Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election)." This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations "magical thinking."
These views reflect the endorsements of various trusted political figures and groups, the active promotion of voting by those with more individual influence, and the raw observation of outcomes affected by bulk political behavior.
In other words, the common sense or deontological rules of thumb are shaped by the consequences, as the consequences drive moralizing activity. Joshua Greene has some cute discussion of this in his dissertation:
Explicitly yes, but implicitly...?
Do you have in mind average people, or, e.g., top 10% Ivy Leaguers ... ?
These reasons aren't obviously deontological (even though they might sound like they are on first hearing). As you say in your comment, timeless decision theory is relevant (transparently so in the last two of the three reasons that you cite).
Even if people did explicitly describe their reasons as deontological, one still wouldn't know whether their reasons actually are deontological, because people's stated reasons are often different from their actual reasons.
One would want to probe here to try to tell whether these things reflect terminal values or instrumental values.
Both. Remember that many Ivy Leaguers are liberal-arts majors. Even many who are quantitatively oriented, I suspect, aren't familiar with the literature. I guess it takes a certain level of sophistication to think that voting doesn't make a difference in expectation, so maybe most people fall into the bucket of those who haven't really thought about the matter rigorously at all. (Remember, we're including English and Art majors here.)
You could say, "If they knew the arguments, they would be persuaded," which may be true, but that doesn't explain why they already vote without knowing the arguments. Explaining that suggests deontology as a candidate hypothesis.
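For concreteness, here's a rough sketch of the "difference in expectation" calculation, under a deliberately unrealistic toy model in which every other voter flips a fair coin; the electorate size, per-person benefit, and population are all made-up numbers.

```python
from math import exp, lgamma, log, pi, sqrt

def log_p_tie(n: int) -> float:
    """log P(n other 50/50 voters split exactly evenly) = log(C(n, n/2) * 0.5**n),
    computed in log space to avoid floating-point underflow."""
    return lgamma(n + 1) - 2 * lgamma(n / 2 + 1) + n * log(0.5)

n_other_voters = 10_000_000      # made-up electorate size
p_pivotal = exp(log_p_tie(n_other_voters))
# Sanity check against the standard large-n approximation sqrt(2 / (pi * n)):
assert abs(p_pivotal - sqrt(2 / (pi * n_other_voters))) < 1e-6

benefit_per_person = 100.0       # made-up value of the better outcome, per person
people_affected = 300_000_000    # made-up population

print(f"P(pivotal) ~ {p_pivotal:.2e}")                              # ~2.5e-4
print(f"EV of voting ~ ${p_pivotal * benefit_per_person * people_affected:,.0f}")
```

The knife-edge 50/50 assumption does nearly all the work here; with any realistic uncertainty about the margin, the pivotal probability falls by orders of magnitude, which is why "voting makes no difference in expectation" is a defensible (if sophisticated) position rather than an obvious error.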
At some point it may become a debate about the teleological level at which you assess their "reasons." For individuals, it's very likely that the value of voting is terminal in some sense, instilled by cultural acclimation. Taking a broader view of why society itself developed this tendency, you might say that it did so for more consequentialist / instrumental reasons.
It's similar to assessing the "reason" why a mother cares for her child. At an individual / neural level it's based on reward circuitry. At a broader evolutionary level, it's based on bequeathing genes.
He may also be a two-boxer who thinks that one-boxing is magical thinking. However, this instance doesn't demonstrate that. Acting as if other agents will conditionally cooperate when they in fact will not is an error. In fact, it will prompt actual timeless decision theorists to defect against you.
Thanks! I'm not sure I understood your comment. Did you mean that if the other agents aren't similar enough to you, it's an error to assume that your cooperating will cause them to cooperate?
I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.
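For readers unfamiliar with the jargon, here's a toy version of the Newcomb's-problem arithmetic that "one-boxing" and "two-boxing" refer to, using the standard payoffs; the predictor accuracies are made up for illustration.

```python
# Standard Newcomb setup: the opaque box contains $1,000,000 iff the predictor
# predicted you'd take only that box; the transparent box always holds $1,000.
BIG, SMALL = 1_000_000, 1_000

def ev_one_box(accuracy: float) -> float:
    # With probability `accuracy` the predictor foresaw one-boxing and filled the box.
    return accuracy * BIG

def ev_two_box(accuracy: float) -> float:
    # With probability `accuracy` the predictor foresaw two-boxing and left it empty.
    return (1 - accuracy) * BIG + SMALL

for acc in (0.5, 0.9, 0.99):  # made-up accuracies; 0.5 = no correlation at all
    print(f"accuracy {acc}: one-box EV {ev_one_box(acc):>9,.0f}, "
          f"two-box EV {ev_two_box(acc):>9,.0f}")
```

At accuracy 0.5 (no correlation between your choice and the prediction), two-boxing comes out ahead, which is the parent comment's point: treating uncorrelated agents as conditional cooperators is an error. One-boxing only pays once the correlation is actually there.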
You're assuming that people work by probabilities and Bayes each time. Nobody can do that for all of their beliefs, and many people don't do it much at all. Typically a statement like "any probability less than 0.01 is set to 0" really means "I have this set of preferences, but I think I can derive a statement about probabilities from that set of preferences." Pointing out that they don't actually ignore a probability of 0.01 when wearing a seatbelt should then lead to a response of "I guess my derivation isn't quite right" and lead them to revise the statement, but it's not a reason why they should change their preferences in the cases from which they originally derived the statement.
Yep, that's right. In my top-level comment, I said, "In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions." Still, on big government-policy questions that affect society (rather than personal actions, relationships, etc.), elites tend to be (relatively) more interested in utilitarian calculations.
Unfortunately, it's a mixed case: there were motives besides pure altruism/self-interest. For example, Edward Teller was an advocate of asteroid defense... no doubt in part because it was a great excuse for using atomic bombs and keeping space and laser-related research going.
It's pretty easy to accept the possibility that an asteroid impact could wipe out humanity, given that something very similar has happened before. You have to overcome a much larger inferential distance to explain the risks from an intelligence explosion.