
Comment author: paulfchristiano 20 June 2017 04:11:08PM *  19 points [-]

I don't buy the "million times worse," at least not if we talk about the relevant E(s-risk moral value) / E(x-risk moral value) rather than the irrelevant E(s-risk moral value / x-risk moral value). See this post by Carl and this post by Brian. I think that responsible use of moral uncertainty will tend to push you away from this kind of fanatical view.
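
To make the distinction concrete, here is a toy numerical sketch (the numbers are invented purely for illustration, not anyone's actual estimates): the expectation of a ratio can be blown up by scenarios where the denominator is tiny, while the ratio of expectations stays moderate.

    # Two equally likely scenarios for the moral value at stake from s-risk (A)
    # and from x-risk (B). All numbers are made up for illustration.
    ps = [0.5, 0.5]
    A = [1.0, 1.0]          # s-risk disvalue in each scenario
    B = [1000.0, 0.001]     # x-risk value: huge in one scenario, tiny in the other

    E_A = sum(p * a for p, a in zip(ps, A))
    E_B = sum(p * b for p, b in zip(ps, B))
    E_ratio = sum(p * a / b for p, a, b in zip(ps, A, B))

    print(E_A / E_B)   # E(A)/E(B) ~ 0.002: the "relevant" comparison
    print(E_ratio)     # E(A/B) ~ 500: dominated by the scenario where B is tiny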

I agree that if you are at million-to-1 then you should be predominantly concerned with s-risk; I think s-risks are somewhat improbable/intractable, but not that improbable+intractable. I'd guess the probability is ~100x lower, and the available object-level interventions are perhaps 10x less effective. The particular scenarios discussed here seem unlikely to lead to optimized suffering; only "conflict" and "???" really make any sense to me. Even on the negative utilitarian view, it seems like you shouldn't care about anything other than optimized suffering.

The best object-level intervention I can think of is reducing our civilization's expected vulnerability to extortion, which seems poorly-leveraged relative to alignment because it is much less time-sensitive (unless we fail at alignment and so end up committing to a particular and probably mistaken decision-theoretic perspective). From the perspective of s-riskers, it's possible that spreading strong emotional commitments to extortion-resistance (e.g. along the lines of UDT or this heuristic) looks somewhat better than spreading concern for suffering.

The meta-level intervention of "think about s-risk and understand it better / look for new interventions" seems much more attractive than any object-level intervention we yet know of, and probably worth investing some resources in even if you take a more normal suffering vs. pleasure tradeoff. If this is the best intervention and is much more likely to be implemented by people who endorse suffering-focused ethical views, it may be the strongest incentive to spread suffering-focused views.

I think that higher adoption of suffering-focused views is relatively bad for people with a more traditional suffering vs. pleasure tradeoff, so this is something I'd like to avoid (especially given that suffering-focused ethics seems to somehow be connected with distrust of philosophical deliberation). Ironically, that gives some extra reason for conventional EAs to think about s-risk, so that the suffering-focused EAs have less incentive to focus on value-spreading.

This also seems like an attractive compromise more broadly: we all spend a bit of time thinking about s-risk reduction and taking the low-hanging fruit, and suffering-focused EAs do less stuff that tends to lead to the destruction of the world. (Though here the non-s-riskers should also err on the side of extortion-resistance, e.g. trading with the position of rational non-extorting s-riskers rather than whatever views/plans the s-riskers happen to have.)

An obvious first question is whether the existence of suffering-hating civilizations on balance increases s-risk (mostly by introducing game-theoretic incentives) or decreases s-risk (by exerting their influence to prevent suffering, esp. via acausal trade). If the former, then x-risk and s-risk reduction may end up being aligned. If the latter, then at best the s-riskers are indifferent to survival and need to resort to more speculative interventions. Interestingly, in this case it may also be counterproductive for s-riskers to expand their influence or acquire resources. My guess is that mature suffering-hating civilizations reduce s-risk, since immature suffering-hating civilizations probably provide a significant part of the game-theoretic incentive yet have almost no influence, and sane suffering-hating civilizations will provide minimal additional incentives to create suffering. But I haven't thought about this issue very much.

Comment author: ESRogs 22 July 2017 05:08:23PM 0 points [-]

An obvious first question is whether the existence of suffering-hating civilizations on balance increases s-risk (mostly by introducing game-theoretic incentives) or decreases s-risk (by exerting their influence to prevent suffering, esp. via acausal trade). If the former, then x-risk and s-risk reduction may end up being aligned.

Did you mean to say, "if the latter" (such that x-risk and s-risk reduction are aligned when suffering-hating civilizations decrease s-risk), rather than "if the former"?

Comment author: VAuroch 15 June 2017 05:53:42AM 0 points [-]

That would make it terrible as a medium of exchange or a store of value, though, wouldn't it? No one knows how much it's worth, and you have to acquire some, pass it off, and then (on their side) turn it into currency every time you use it.

Comment author: ESRogs 16 June 2017 12:43:23AM 0 points [-]

That depends on how volatile it is. On the timescale of a single transaction, a certain level of volatility might not matter very much even if the same level of volatility would prevent you from wanting to set prices in BTC.

Comment author: Raemon 01 May 2017 11:54:19PM *  10 points [-]

I don't actually know how not to play the same old game yet, but I am trying to construct a way.

I see you aiming to construct a way and making credible progress, but I worry that you're trying to do too many things at once and are going to cause lasting damage by the time you figure it out.

Specifically, the "confidence game" framing of the previous post moved it from "making an earnest good faith effort to talk about things" to "the majority of the post's content is making a status move"[1] (particularly in the context of your other recent posts, and exacerbated by this one), and if I were using the framing of this current post I'd say both the previous post and this one have bad intent.

I don't think that's a good framing - I think it's important that you (and folk at OpenPhil and at CEA) do not just have an internally positive narrative but are actually trying to do things that cash out to "help each other" (in a broad sense of "help each other"). But I'm worried that this will not remain the case much longer if you continue on your current trajectory.

A year ago, I was extremely impressed with the work you were doing and points you were making, and frustrated that those points were not having much impact.

My perception was that "EA Has A Lying Problem" was an inflection point where, yeah, people started actually paying attention to the class of criticism you're making, but the mechanism by which they started paying attention was critics invoking rhetoric and courting controversy, which was approximately as bad as the problem it was trying to solve (or at least within an order of magnitude as bad).

[1] I realize there was a whole lot of other content of the Confidence Game post that was quite good. But, like, the confidence game part is the part I remember easily. Which is the problem.

Comment author: ESRogs 02 May 2017 08:36:53AM 2 points [-]

I agree that the "confidence game" framing, and particularly the comparison to a Ponzi scheme, seemed to me like surprisingly charged language, and not the kind of thing you would do if you wanted a productive dialogue with someone.

I'm not sure whether Benquo means for it to come across that way or not. (Pro: maybe he has in fact given up on direct communication with OpenPhil, and thinks his only method of influence is riling up their base. Con: maybe he just thought it was an apt metaphor and didn't model it as a slap-in-the-face, like I did. Or maybe something else I'm missing.)

Comment author: ESRogs 23 April 2017 07:50:30AM 1 point [-]

Then, as the Open Philanthropy Project explored active funding in more areas, its estimate of its own effectiveness grew. After all, it was funding more speculative, hard-to-measure programs...

If I start funding a speculative project because I think it has higher EV than what I'm funding now, then isn't it rational for me to think my effectiveness has gone up? It seems like you're implying it's wrong of them to think that.

but a multi-billion-dollar donor, which was largely relying on the Open Philanthropy Project's opinions to assess efficacy (including its own efficacy), continued to trust it.

I worry that this might paint a misleading picture to readers who aren't aware of the close relationship between Good Ventures and GiveWell. This reads to me like the multi-billion-dollar donor is at arm's length, blindly trusting Open Phil, when in reality Open Phil is a joint venture of GiveWell and Good Ventures (the donor), and they share an office.

[Link] Neuralink and the Brain’s Magical Future

6 ESRogs 23 April 2017 07:27AM
Comment author: Blackened 03 December 2012 10:24:50AM *  -2 points [-]

You are either not understanding, or not wanting to understand, the difference between the score on a reliable IQ test and the SAT scores of just the LWers who took the SAT. Obviously, an IQ test is a much better indicator; also, the SAT is only available to people in the US. Also, the responses I'm getting are already very different from the survey.

JCTI's reliability is verifiable from the link, even though the other test's is not.

sitting around talking about how smart we are doesn't send signals to onlookers that I think are in the best interests of LessWrong.

Investigating a phenomenon is what we are about. I don't see a logically valid reason not to investigate this one, especially if previous data suggests an abnormally high level. This holds true even if the concept of IQ is invalid, as long as it is measurable.

Comment author: ESRogs 26 January 2017 06:42:46PM 1 point [-]

Obviously, an IQ test is a much better indicator

I don't think that's true. It's my impression that the SAT correlates with IQ tests about as much as IQ tests correlate with each other.

On IQ and SAT correlations:

The short answer is that IQ and SAT scores are very highly correlated, with a range between .53 and .82

Meanwhile, the correlation between the Stanford-Binet test and Raven's is about .72.

Comment author: BarbaraB 21 December 2016 05:57:40AM 0 points [-]

Are the uniforms at US schools reasonably practical and comfortable, and do they have a reasonable colour, e.g. not green? As a girl who grew up under socialism, I experienced pioneer uniforms, which were not well designed. They forced short skirts on girls, which are impractical in some weather. The upper part, the shirt, needed to be ironed. There was no sweater or coat to unify kids in winter. My mother once had to stand coatless in winter in a welcome line for some event. I can also imagine some girls having aesthetic issues with the exposed legs or an unflattering color. But what are the uniforms in the US usually like?

Comment author: ESRogs 11 January 2017 07:24:32AM 0 points [-]

What's wrong with green?

Comment author: ESRogs 18 December 2016 09:18:37AM *  0 points [-]

Rather than relying on the moderator to actually moderate, use the model to predict what the moderator would do. I’ll tentatively call this arrangement “virtual moderation.”

...

Note that if the community can’t do the work of moderating, i.e. if the moderator was the only source of signal about what content is worth showing, then this can’t work.

Does the "this" in "this can't work" refer to something other than the virtual moderation proposal, or are you saying that even virtual moderation can't work w/o the community doing work? If so, I'm confused, because I thought I was supposed to understand virtual moderation as moderation-by-machine.

Comment author: ESRogs 18 December 2016 09:41:24AM 0 points [-]

Oh, did you mean that the community has to interact with a post/comment (by e.g. upvoting it) enough for the ML system to have some data to base its judgments on?

I had been imagining that the system could form an opinion w/o the benefit of any reader responses, just from some analysis of the content (character count, words used, or even NLP), as well as who wrote it and in what context.
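
For concreteness, here is a minimal sketch of the kind of content-only predictor I have in mind; the features, model choice, and training examples are all assumptions for illustration, and a real system could also fold in author and context features.

    # Hypothetical "virtual moderation" sketch: train on comments the moderator
    # has already ruled on, then predict the verdict for new comments from text
    # alone. Examples and labels below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    comments = [
        "Here is a careful argument with sources.",
        "You are all idiots.",
        "Interesting point; here is a counterexample.",
        "Buy cheap watches at my site!!!",
    ]
    labels = [1, 0, 1, 0]  # 1 = moderator kept it, 0 = moderator removed it

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(comments, labels)

    # Estimate what the moderator would do with a comment they never saw.
    new_comment = ["This post misreads the original claim; see section 2."]
    print(model.predict_proba(new_comment)[0][1])  # estimated P(keep)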

Comment author: SatvikBeri 27 November 2016 05:18:43PM 27 points [-]

On the idea of a vision for a future, if I were starting a site from scratch, I would love to see it focus on something like "discussions on any topic, but with extremely high intellectual standards". Some ideas:

  • In addition to allowing self-posts, a major type of post would be a link to a piece of content with an initial seed for discussion
  • Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone".
  • A fairly strict and clearly stated set of site norms, with regular updates, and a process for proposing changes
  • Site erring on the side of being over-opinionated. It doesn't necessarily need to be the community hub
  • Votes from highly-voted users count for more (see the sketch after this list).
  • Integration with predictionbook or something similar, to show a user's track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
  • A very strong bent on applications of rationality/clear thought, as opposed to a focus on rationality itself. I would love to see more posts on "here is how I solved a problem I or other people were struggling with"
  • No main/discussion split. There are probably other divisions that make sense (e.g. by topic), but this mostly causes a lot of confusion
  • Better notifications around new posts, or new comments in a thread. E.g. I usually want to see all replies to a comment I've made, not just the top level
  • Built-in argument mapping tools for comments
  • Shadowbanning, a la Hacker News
  • Initially restricted growth, e.g. by invitation only
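
A rough sketch of one way "votes from highly-voted users count for more" could work; the particular weighting (one plus the log of the voter's karma) is purely an illustrative assumption, not a specific proposal.

    import math

    def vote_weight(voter_karma):
        # Assumed scheme: weight grows with the log of the voter's karma, so
        # established users count for more but can't dominate outright.
        return 1.0 + math.log10(max(voter_karma, 1))

    def net_score(votes):
        # votes: list of (voter_karma, direction) pairs, direction in {+1, -1}
        return sum(d * vote_weight(k) for k, d in votes)

    print(net_score([(5000, +1), (10, +1), (300, -1)]))  # roughly 3.2
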
Comment author: ESRogs 28 November 2016 10:19:44PM 1 point [-]

Built-in argument mapping tools for comments

Could you say more about what you have in mind here?
