Comment author: Eliezer_Yudkowsky 15 August 2013 11:05:11PM 2 points

My impression is that the most trustworthy people are more likely to be at the front of good social movements than the general public

That sounds reeeeaaally suspicious in terms of potentially post-facto assignments. (Though defeasibly so - I can totally imagine a case being made for, "Yes, this really was generally visible to the person on the street at the time without benefit of hindsight.")

Can you use elite common sense to generate a near-term testable prediction that would sound bold relative to my probability assignments or LW generally? The last obvious point on which you could have thus been victorious would have been my skepticism of the now-confirmed Higgs boson, and Holden is apparently impressed by the retrospective applicability of this heuristic to predict that interventions much better than the Gates Foundation's best interventions would not be found. But still, an advance prediction would be pretty cool.

Comment author: Nick_Beckstead 16 August 2013 12:01:56AM 0 points

That sounds reeeeaaally suspicious in terms of potentially post-facto assignments. (Though defeasibly so - I can totally imagine a case being made for, "Yes, this really was generally visible to the person on the street at the time without benefit of hindsight.")

This isn't something I've looked into closely, though from looking at it for a few minutes I think it is something I would like to look into more. Anyway, from the Wikipedia page on diffusion of innovations:

This is the second fastest category of individuals who adopt an innovation. These individuals have the highest degree of opinion leadership among the other adopter categories. Early adopters are typically younger in age, have a higher social status, have more financial lucidity, advanced education, and are more socially forward than late adopters. More discreet in adoption choices than innovators. Realize judicious choice of adoption will help them maintain central communication position (Rogers 1962 5th ed, p. 283).

I think this supports my claim that elite common sense is quicker to join and support new good social movements, though as I said I haven't looked at it closely at all.

Can you use elite common sense to generate a near-term testable prediction that would sound bold relative to my probability assignments or LW generally?

I can't think of anything very good, but I'll keep it in the back of my mind. Can you think of something that would sound bold relative to my perspective?

Comment author: Ustice 15 August 2013 08:04:32PM 0 points

How do you think this would apply to social issues? It seems like this would be a poor way to be at the front of social change. If this strategy were widely applied, would we ever have seen the 15th and 19th amendments to the Constitution here in the US?

On a more personal basis, I'm polyamorous, but if I followed your framework, I would have to reject polyamory as a viable relationship model. Yes, the elite don't have a lot of data on polyamory, and although I have researched the good and the bad, and how it can work compared to monogamy, I don't think that I would be able to convince the elite of my opinions.

Comment author: Nick_Beckstead 15 August 2013 10:22:47PM 0 points

How do you think this would apply to social issues? It seems like this would be a poor way to be at the front of social change. If this strategy were widely applied, would we ever have seen the 15th and 19th amendments to the Constitution here in the US?

My impression is that the most trustworthy people are more likely to be at the front of good social movements than the general public, so that if people generally adopted the framework, many of the promising social movements would progress more quickly than they actually did. I am not sufficiently aware of the specific history of the 15th and 19th amendments to say more than that at this point.

There is a general question about how the framework is related to innovation. Aren't innovators generally going against elite common sense? I think that innovators are often overconfident about the quality of their ideas, and have significantly more confidence in their ideas than they need for their projects to be worthwhile by the standards of elite common sense. E.g., I don't think you need to have high confidence that Facebook is going to pan out for it to be worthwhile to try to make Facebook. Elite common sense may see most attempts at innovation as unlikely to succeed, but I think it would judge many as worthwhile in cases where we'll get to find out whether the innovation was any good or not. This might point somewhat in the direction of less innovation.

However, I think that the most trustworthy people tend to innovate more, are more in favor of innovation than the general population, and are less risk-averse than the general population. These factors might point in favor of more innovation. It is unclear to me whether we would have more or less innovation if the framework were widely adopted, but I suspect we would have more.

On a more personal basis, I'm polyamorous, but if I followed your framework, I would have to reject polyamory as a viable relationship model. Yes, the elite don't have a lot of data on polyamory, and although I have researched the good and the bad, and how it can work compared to monogamy, I don't think that I would be able to convince the elite of my opinions.

My impression is that elite common sense is not highly discriminating against polyamory as a relationship model. It would probably be skeptical of polyamory for the general person, but say that it might work for some people, and that it could make sense for certain interested people to try it out.

If your opinion is that polyamory should be the norm, I agree that you wouldn't be able to convince elite common sense of this. My personal take is that it is far from clear that polyamory should be the norm. In any event, this doesn't seem like a great test case for taking down the framework because the idea that polyamory should be the norm does not seem like a robustly supported claim.

Comment author: Lumifer 13 August 2013 07:18:27PM 3 points

an unusual approach to dealing with moral questions

Why do you think it's unusual? I would strongly suspect that the majority of people have never examined their moral beliefs carefully and so their moral responses are "intuitive" -- they go by gut feeling, basically. I think that's the normal mode in which most of humanity operates most of the time.

Comment author: Nick_Beckstead 13 August 2013 07:32:41PM 3 points

I think other people are significantly more responsive to values disagreements than Brian is, and that this suggests they are significantly more open to the possibility that their idiosyncratic personal values judgments are mistaken. You can get a sense of how unusual Brian's perspectives are by examining his website, where his discussions of negative utilitarianism and insect suffering stand out.

Comment author: Brian_Tomasik 13 August 2013 06:45:28PM *  1 point

When you say "I want to do what I want to do" I think it mostly just serves as a conversation-stopper, rather than something that contributes to a valuable process of reflection and exchange of ideas.

I'm not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it's not dependent on a controversial theory of meta-ethics. It's just that I intuitively don't like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.

On some questions, my emotions are too strong, and it feels like it would be bad to budge my current stance.

I think it is a missed opportunity to engage in a process of reflection and exchange of ideas that I don't fully understand but seems to deliver valuable results.

Fair enough. :) I'll buy that way of putting it.

Anyway, if I were really as unreasonable as it sounds, I wouldn't be talking here and putting at risk the preservation of my current goals.

Comment author: Nick_Beckstead 13 August 2013 07:09:01PM 2 points

I'm not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it's not dependent on a controversial theory of meta-ethics. It's just that I intuitively don't like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.

Whether you want to call it a theory of meta-ethics or not, and whether it is a factual error or not, you have an unusual approach to dealing with moral questions that places an unusual amount of emphasis on Brian Tomasik's present concerns. Maybe this is because there is something very different about you that justifies it, or maybe it is some idiosyncratic blind spot or bias of yours. I think you should put weight on both possibilities, and that this pushes in favor of more moderation in the face of values disagreements. Hope that helps articulate where I'm coming from in your language. This is hard to write and think about.

Comment author: Brian_Tomasik 13 August 2013 12:56:48AM *  1 point

In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on.

I think it's fair to say that concepts like libertarian free will and dualism in philosophy of mind are either incoherent or extremely implausible, though maybe the elite-common-sense prior would make us less certain of that than most on LessWrong seem to be.

Like Luke Muehlhauser, I believe that we don't even know what we're asking when we ask ethical questions

Yes, I think most of the confusion on this subject comes from disputing definitions. Luke says: "Within 20 seconds of arguing about the definition of 'desire', someone will say, 'Screw it. Taboo 'desire' so we can argue about facts and anticipations, not definitions.'"

Here I would say, "Screw ethics and meta-ethics. All I'm saying is I want to do what I feel like doing, even if you and other elites don't agree with it."

I personally suspect your error lies in not considering the problem from perspectives other than "what does Brian Tomasik care about right now?".

Sure, but this is not a factual error, just an error in being a reasonable person or something. :)


I should point out that "doing what I feel like doing" doesn't necessarily mean running roughshod over other people's values. I think it's generally better to seek compromise and remain friendly to those with whom you want to cooperate. It's just that this is an instrumental concession, not because I actually agree with the values that I'm willing to be nice to.

Comment author: Nick_Beckstead 13 August 2013 12:00:44PM 1 point

Here I would say, "Screw ethics and meta-ethics. All I'm saying is I want to do what I feel like doing, even if you and other elites don't agree with it."

I think that there is a genuine concern that many people have when they try to ask ethical questions and discuss them with others, and that this process can lead to doing better in terms of that concern. I am speaking vaguely because, as I said earlier, I don't think that I or others really understand what is going on. This has been an important process for many of the people I know who are trying to make a large positive impact on the world. I believe it was part of the process for you as well. When you say "I want to do what I want to do" I think it mostly just serves as a conversation-stopper, rather than something that contributes to a valuable process of reflection and exchange of ideas.

I personally suspect your error lies in not considering the problem from perspectives other than "what does Brian Tomasik care about right now?".

Sure, but this is not a factual error, just an error in being a reasonable person or something. :)

I think it is a missed opportunity to engage in a process of reflection and exchange of ideas that I don't fully understand but seems to deliver valuable results.

Comment author: Brian_Tomasik 13 August 2013 01:27:26AM *  1 point

Nick, what do you do about the Pope getting extremely high PageRank by your measure? You could say that most people who trust his judgment aren't elites themselves, but some certainly are (e.g., heads of state, CEOs, celebrities). Every president in US history has given very high credence to the moral teachings of Jesus, and some have even given high credence to his factual teachings. Hitler had very high PageRank during the 1930s, though I guess he doesn't now, and you could say that any algorithm makes mistakes some of the time.

ETA: I guess you did say in your post that we should be less reliant on elite common sense in areas like religion and politics where rationality is less prized. But I feel like a similar thing could be said to some extent of debates about moral conclusions. The cleanest area of application for elite common sense is with respect to verifiable factual claims.
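[Editor's note: the PageRank analogy above can be made concrete with a small sketch. In a trust graph, a node endorsed by already-trusted nodes scores highly regardless of whether its object-level claims are accurate, which is the failure mode Brian describes. The graph below is purely hypothetical, and this is a minimal power-iteration implementation rather than any particular library's.]

```python
# Hypothetical "elite trust" graph: edges point from a truster
# to the people they trust. Names and edges are illustrative only.
trust = {
    "head_of_state": ["pope"],
    "ceo": ["pope", "scientist"],
    "scientist": ["ceo"],
    "pope": ["head_of_state"],
}

def pagerank(graph, damping=0.85, iterations=100):
    """Basic PageRank by power iteration (no dangling nodes here)."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Each node keeps a baseline score, plus shares passed
        # along from everyone who endorses it.
        new_rank = {node: (1 - damping) / n for node in nodes}
        for node, endorsed in graph.items():
            share = rank[node] / len(endorsed)
            for target in endorsed:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

ranks = pagerank(trust)
# "pope" ends up with the highest score purely because trusted
# nodes endorse it -- nothing in the computation checks whether
# its object-level claims are correct.
```

The design point of the analogy: the algorithm aggregates endorsements, not accuracy, so a figure widely trusted by other trusted figures dominates the ranking even in domains where that trust is not truth-tracking.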

Comment author: Nick_Beckstead 13 August 2013 11:44:40AM *  2 points

I don't have a lot to add to my comments on religious authorities, apart from what I said in the post and what I said in response to Luke's Muslim theology case here.

One thing I'd say is that many of the Christian moral teachings that are most celebrated are actually pretty good, though I'd admit that many others are not. Examples of good ones include:

  • Love your neighbor as yourself (I'd translate this as "treat others as you would like to be treated")

  • Focus on identifying and managing your own personal weaknesses rather than criticizing others for their weaknesses

  • Prioritize helping poor and disenfranchised people

  • Don't let your acts of charity be motivated by finding approval from others

These are all drawn from Jesus's Sermon on the Mount, which is arguably his most celebrated set of moral teachings.

Comment author: Brian_Tomasik 13 August 2013 01:04:32AM 1 point

I disagree with the claim that the argument for shaping the far future is a Pascalian wager.

I thought some of our disagreement might stem from misunderstanding what each other meant, and that seems to have been true here. Even if the probability of humanity surviving a long time is large, entropy in our influence and butterfly effects remain, such that it seems extremely unlikely that what we do now will actually make a pivotal difference in the long term, and we could easily be getting the sign wrong. This makes the probabilities small enough to seem Pascalian for most people.

It's very common for people to say, "Predictions are hard, especially about the future, so let's focus on the short term where we can be more confident we're at least making a small positive impact."

Comment author: Nick_Beckstead 13 August 2013 09:29:16AM *  2 points

It's very common for people to say, "Predictions are hard, especially about the future, so let's focus on the short term where we can be more confident we're at least making a small positive impact."

If by short-term you mean "what happens in the next 100 years or so," I think there is something to this idea, even for people who care primarily about very long-term considerations. I suspect it is true that the expected value of very long-run outcomes is primarily dominated by totally unforeseeable weird stuff that could happen in the distant future. But I believe that the best way to deal with this challenge is to empower humanity to deal with the relatively foreseeable and unforeseeable challenges and opportunities that it will face over the next few generations. This doesn't mean "let's just look only at short-run well-being boosts," but something more like "let's broadly improve cooperation, motives, access to certain types of information, narrow and broad technological capabilities, and intelligence and rationality to deal with the problems we can't foresee, and let's rely on the best evidence we can to prepare for the problems we can foresee." I say a few things about this issue here. I hope to say more about it in the future.

An analogy would be that if you were a 5-year-old kid and you primarily cared about how successful you were later in life, you should focus on self-improvement activities (like developing good habits, gaining knowledge, and learning how to interact with other people) and health and safety issues (like getting adequate nutrition, not getting hit by cars, not poisoning yourself, not falling off of tall objects, and not eating lead-based paint). You should not try to anticipate fine-grained challenges in the labor market when you graduate from college or disputes you might have with your spouse. I realize that this analogy may not be compelling, but perhaps it illuminates my perspective.

Comment author: RobinHanson 12 August 2013 07:10:35PM 1 point

The overall framework is sensible, but I have trouble applying it to the most vexing cases: where the respected elites mostly just giggle at a claim and seem to refuse to even think about reasons for or against it, but instead just confidently reject it. It might seem to me that their usual intellectual standards would require that they engage in such reasoning, but the fact that they do not in fact think that appropriate in this case is evidence of something. But what?

Comment author: Nick_Beckstead 12 August 2013 11:52:38PM 2 points

I think it is evidence that thinking about it carefully wouldn't advance their current concerns, so they don't bother or use the thinking/talking for other purposes. Here are some possibilities that come to mind:

  • they might not care about the outcomes that you think are decision-relevant and associated with your claim

  • they may care about the outcomes, but your claim may not actually be decision-relevant if you were to find out the truth about the claim

  • it may not be a claim which, if thought about carefully, would contribute enough additional evidence to change your probability in the claim enough to change decisions

  • it may be that you haven't framed your arguments in a way that suggests to people that there is a promising enough path to getting info that would become decision-relevant

  • it may be because of a signalling hypothesis that you would come up with; if you're talking about the distant future, maybe people mostly talk about such stuff as part of a system of behavior that signals support for certain perspectives. If this is happening more in this kind of case, it may be in part because of the other considerations.

Comment author: Brian_Tomasik 12 August 2013 07:30:57PM 1 point

I don’t endorse biting Pascalian bullets, in part for reasons argued in this post, which I think give further support to some considerations identified by GiveWell.

As far as the GiveWell point, I meant "proper Pascalian bullets" where the probabilities are computed after constraining by some reasonable priors (keeping in mind that a normal distribution with mean 0 and variance 1 is not a reasonable prior in general).

In Pascalian cases, we have claims that people in general aren’t good at thinking about and which people generally assign low weight when they are acquainted with the arguments.

Low probability, yes, but not necessarily low probability*impact.

I believe that Pascalian estimates of expected value that differ greatly from elite common sense and aren’t persuasive to elite common sense should be treated with great caution.

As I mentioned in another comment, I think most Pascalian wagers that one comes across are fallacious because they miss even bigger Pascalian wagers that should be pursued instead. However, there are some Pascalian wagers that seem genuinely compelling even after looking for alternatives, like "the Overwhelming Importance of Shaping the Far Future." My impression is that most elites do not agree that the far future is overwhelmingly important even after hearing your arguments because they don't have linear utility functions and/or don't like Pascalian wagers. Do you think most elites would agree with you about shaping the far future?

This highlights a meta-point in this discussion: Often what's under debate here is not the framework but instead claims about (1) whether elites would or would not agree with a given position upon hearing it defended and (2) whether their sustained disagreement even after hearing it defended results from divergent facts, values, or methodologies (e.g., not being consequentialist). It can take time to assess these, so in the short term, disagreements about what elites would come to believe are a main bottleneck for using elite common sense to reach conclusions.

Comment author: Nick_Beckstead 12 August 2013 11:14:23PM 1 point

However, there are some Pascalian wagers that seem genuinely compelling even after looking for alternatives, like "the Overwhelming Importance of Shaping the Far Future." My impression is that most elites do not agree that the far future is overwhelmingly important even after hearing your arguments because they don't have linear utility functions and/or don't like Pascalian wagers. Do you think most elites would agree with you about shaping the far future?

I disagree with the claim that the argument for shaping the far future is a Pascalian wager. In my opinion, there is a reasonably high, reasonably non-idiosyncratic probability that humanity will survive for a very long time, that there will be a lot of future people, and/or that future people will have a very high quality of life. Though I have not yet defended this claim as well as I would like, I also believe that many conventionally good things people can do push toward future generations facing future challenges and opportunities better than they otherwise would, which with a high enough and conventional enough probability makes the future go better. I think that these are claims which elite common sense would be convinced of, if in possession of my evidence. If elite common sense would not be so convinced, I would consider abandoning these assumptions.

Regarding the more purely moral claims, I suspect there are a wide variety of considerations which elite common sense would give weight to, and that very long-term considerations are one type of important consideration which would get weight according to elite common sense. It may also be, in part, a fundamental difference of values, where I am part of a not-too-small contingent of people who have distinctive concerns. However, in genuinely altruistic contexts, I think many people would give these considerations substantially more weight if they thought about the issue carefully.

Near the beginning of my dissertation, I actually speak about the level of confidence I have in my thesis quite tentatively:

How convinced should you be by the arguments I'm going to give? I'm defending an unconventional thesis and my support for that thesis comes from highly speculative arguments. I don't have great confidence in my thesis, or claim that others should. But I am convinced that it could well be true, that the vast majority of thoughtful people give the claim less credence than they should, and that it is worth thinking about more carefully. I aim to make the reader justified in taking a similar attitude. (p. 3, Beckstead 2013)

I stand by this tentative stance.

Comment author: Brian_Tomasik 12 August 2013 07:39:23PM *  0 points

I don't think your specific version of anti-realism, or your philosophical perspective which says there is no real question here, are views which can command the assent of a broad coalition of trustworthy people.

My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One's choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn't much concern me. Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don't think there's something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this. Even with moral realists, I've never heard someone argue that it's a factual mistake not to care about moral truth (what could that even mean?), just that it would be a moral mistake or an error of reasonableness or something like that.

Comment author: Nick_Beckstead 12 August 2013 10:57:00PM *  3 points

My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One's choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn't much concern me.

I'm a bit flabbergasted by the confidence with which you speak about this issue. In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on. As far as I can tell, you are another one of these people.

Like Luke Muehlhauser, I believe that we don't even know what we're asking when we ask ethical questions, and I suspect we don't really know what we're asking when we ask meta-ethical questions either. As far as I can tell, you've picked one possible candidate thing we could be asking--"what do I care about right now?"--among a broad class of possible questions, and then you are claiming that whatever you want right now is right because that's what you're asking.

Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don't think there's something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this.

I think most people would just think you had made an error somewhere and not be able to say where it was, and add that you were talking about a completely murky issue that people aren't good at thinking about.

I personally suspect your error lies in not considering the problem from perspectives other than "what does Brian Tomasik care about right now?".

[Edited to reduce rhetoric.]
