All of Eric Rogstad's Comments + Replies

If Wisconsin is trading cheese with Ohio, and then Michigan becomes much better at producing cheese, this can harm the economy of Wisconsin. It should not be possible for Wisconsin to be harmed by trading with Michigan unless something weird is going on.

Was "Wisconsin" supposed to be "Ohio" in the second sentence? Or are you contrasting between Wisconsin trading with Ohio and Wisconsin trading with Michigan?

1Eric Rogstad
Nm, the longer explanation later in the page answered my question.

This is silly

Perhaps

then you ought to focus predominantly on something else

This does not seem inconsistent with the post. (Contributing $1 per day towards something hardly seems to preclude focusing predominantly on other things.) Do you disagree with that?

It seems that classifiers trained on adversarial examples may be finding (more) conservative concept boundaries:

We also found that the weights of the learned model changed significantly, with the weights of the adversarially trained model being significantly more localized and interpretable

Explaining and Harnessing Adversarial Examples
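
For readers who want to see concretely how such adversarial examples are constructed, here is a minimal sketch of the fast gradient sign method (FGSM) from the cited paper, applied to a hand-rolled logistic model. The toy data, weights, and epsilon value are invented for illustration; this is not code from the paper or from the quoted experiment.

```python
# Minimal FGSM sketch on a toy logistic model (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: p(y = 1 | x) = sigmoid(w . x + b)
w = rng.normal(size=8)
b = 0.1
x = rng.normal(size=8)  # one "clean" input
y = 1.0                 # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss with respect to the *input*:
# dL/dx = (sigmoid(w . x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: step in the sign of the gradient to increase the loss.
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)

print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```

Adversarial training, which the quoted result refers to, then mixes such perturbed inputs back into the training set.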

Benquo, given your analysis, I'm surprised by your vote of 50%. You took what was given as a conservative estimate, added in additional moderating factors, and still got a 10x margin of safety. Is this just because of a strong prior towards discounting cost-effectiveness estimates?

How much would one have to donate for you to be 90% sure that it would offset the cost of eating meat?

For reference, Lewis Bollard estimates that recent corporate cage-free campaigns "will spare about 250 hens a year of cage confinement per dollar spent."

4Benquo
You named two charities, and I ended up deciding that the case for one of them was plausible (CiWF), so for any given dollar there's a 50-50 chance ;)

More seriously, I do have what you might summarize as a strong prior against cost-effectiveness estimates. In particular, I didn't address these issues:

Bollard's picking post-hoc an animal charity and intervention with especially clear positive track records. This has the following problems:

  • Regression to the mean (I mentioned this but didn't properly account for it).
  • Even if you earmarked the money for such programs, I expect there's some elasticity of substitution between different programs within a charity. (Of course some programs could be secretly better than the cage-free egg campaigns, too.)

Here are some other costs I didn't account for:

  • I didn't account for costs imposed on humans at all (see Jim's comment).
  • To work properly, offsets require an allocation of credit that doesn't overcount. Bollard's conservative estimate tries to account for this, but this is pretty hard to do. To some extent we have to count all the prior work done on promoting compassion for animals, and account for compounding opportunity cost.
  • In general I expect my environment to be marketing to me in non-truth-tracking ways. OPP is better than many but not perfect. In particular, I expect marketing to tend towards exaggerating the benefits of things that want my money.

If OPP or someone else claiming this impact had a relevant track record of publicly registered predictions of impact, and had actually gone back and checked and found that they were well-calibrated, then that would go a long way towards getting me to update.

I'm not sure what amount I'd put at 90% - my thinking on this is pretty bimodal; most of the 50% probability that the number's off comes from it being way off, not from it being a little off. For way off, I basically shouldn't anchor on public cost-effectiveness estimates at all.

9Benquo
On the pro side: The whole "farmed animal welfare" field in the US gets less than $100MM per year, and makes material changes to how Americans eat.

If all ~200M adult Americans gave $1/day to animal welfare charities instead of changing their diets further, that would fund about $7 billion of annual activism on this. That's huge. That's more than 100X what it is now. That's all US federal election expenditures during a presidential election year (including house and senate races). That's more than twice MIT's entire budget. In present value terms, at a 5% annual discount rate, that's equivalent to a one-time expenditure of $140 billion. That's way more than the Manhattan Project cost, adjusted for inflation.

Seems plausible that if you could scale up current efforts to that size at current levels of cost-effectiveness, it would be equivalent in welfare impact to getting Americans off factory-farmed meat and eggs altogether. This isn't a room-for-more-funding argument - you'd hit diminishing returns way before that point - but it does suggest that the $1/day offset argument is not crazy on current margins, so long as you're willing to switch strategies if and when these orgs stop seeming cash-constrained (or if you're willing to give a larger lump sum now, while such opportunities are still available).
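
As a sanity check on the present-value figure above, treating the stated $7 billion/year as a perpetuity (the standard simplification for a constant annual flow):

$$\mathrm{PV} = \frac{C}{r} = \frac{\$7\,\mathrm{B/yr}}{0.05\,/\mathrm{yr}} = \$140\,\mathrm{B}$$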
9Benquo
On the other hand, I think we should be skeptical of these estimates.

ACE has a history of biasing its public info towards creating the impression that animal charities are more cost-effective than they likely are. If this leads to people reallocating their money towards net-worse things, your dollars given to ACE could easily cause net harm relative to setting the dollars on fire. This isn't obviously the case for an animal charity like CiWF, and perhaps most of ACE's research is better, or ACE will do better in the future. But I don't have strong reason to think they're unusually trustworthy relative to other organizations marketing to my demographic. As far as I know, ACE's recommended orgs are actually fairly good, so perhaps ACE is net positive just by raising their profile - but, given that a fair amount of my positive impression comes from ACE and people strongly influenced by ACE, it's unclear to me how reliable that impression is.

Bollard works for an organization that has repeatedly cautioned us not to take its expected value estimates literally. I'm not sure how we are supposed to take them, but it seems like a mistake to go ahead and take them literally anyway despite the ample disclaimers against exactly this use.
Benquo*240

Bollard's more conservative estimate is 38 hen-years per dollar, if you include other expenditures on farm animal welfare. I think we need to include those because we didn't know in advance which efforts would be effective, and probably there will be some regression to the mean.

If you're thinking of this as an offset (instead of just directly comparing it to other charitable expenditures) then you need to credit other inputs - especially, the time of the people working at these places. Labor captures about 60% of national income. People working at a cha... (read more)
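
For concreteness, here is what $1/day buys under each of the two estimates quoted above, taking the figures at face value (they are rough estimates, not exact rates):

$$365 \times 250 \approx 91{,}000 \text{ hen-years of cage confinement spared per year (headline estimate)}$$
$$365 \times 38 \approx 13{,}900 \text{ hen-years per year (the more conservative estimate)}$$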

So I suppose I should attempt a real reply.

I think:

  • information hazards should be avoided
  • people should be allowed to develop opinions in private so that they can think freely
  • there's tremendous value in public discussions (where ideas can be evaluated by and/or spread to many people)

A probability doesn't seem like the right way to measure this.

1Eric Rogstad
But I do think it is a good question.

Note that PredictIt currently thinks there's a 7% chance Trump will be impeached within the first 100 days.

That seems high to me for the first 100 days, since Republicans control both the House and the Senate. However, things could change at the midterm elections in 2018.

Overall I'm going with a 1 in 6 chance during the first term.

4orthonormal
Two ways impeachment could happen:

  • Trump becomes an albatross on the GOP, to the degree that they lose the House in 2018 despite their geographic advantage (the Democrats would basically have to win the House popular vote by 6-8 points in order to get a majority of seats, as per FiveThirtyEight). In this case, the Democrat-controlled House would be quite likely to initiate impeachment proceedings, both because Trump is a worse President than Pence would be, and because it would put GOP Senators in a serious bind.
  • The GOP preemptively impeaches Trump, both to prevent a bad election cycle for them and because they would prefer Pence as President.

These aren't that unlikely over the next four years, though it won't happen in the first 100 days (barring some bigger bombshell or some real civic virtue from the GOP House leadership).
2Eric Rogstad
See also: http://www.metaculus.com/questions/377/will-donald-trump-be-the-president-of-the-united-states-in-2018/.

Fair to paraphrase as: donor-as-silent-partner?

Current thinking is that we should allow claims to be edited, but that users' past votes will appear grayed out (so it's clear that they voted on a previous version of the claim). As of today, this hasn't been implemented yet.

The question of tradeoffs between X and Y and winners' curses reminds me of Bostrom's paper, The Unilateralist's Curse.

From the abstract:

In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will move forward more often than is optimal

... (read more)
2RyanCarey
Nice! That's exactly what I have in mind. The hope is to flesh out how this would and should be addressed in practice.

Is the idea that a single organization should pursue X or Y and not worry about the fact that any given donors will value both X and Y to varying degrees?

(If so, I might have called this organization-independence, or single-focus.)

1Eric Rogstad
Fair to paraphrase as: donor-as-silent-partner?
2RyanCarey
I got the idea from someone who suggested that if donors would fund some organization-leaders to do task A, and those leaders think B is more valuable, then the donors should usually fund them to do B. In one version of the claim, the donors' role then is to make some global assessment of how worthy they are of funds, and not to argue much about strategy. This kind of thing could apply if the organization is focused on X only, half X and half Y et cetera.

I'm not sure what you mean about an exchange rate. Isn't a Pareto improvement something that makes everyone better off (or rather: someone better off and no one worse off)?

3RyanCarey
Say we value article views and user signups. If I'm taking actions that achieve n views for each lost signup and you're taking actions that achieve m signups per lost view, where m < 1/n, then we could get a Pareto improvement by doing less of both of these things.
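
To make the condition concrete, with invented numbers: suppose $n = 10$ views per lost signup and $m = 0.05$ signups per lost view, so $m < 1/n = 0.1$. If the first agent undoes one unit of their trade and the second undoes ten units of theirs:

$$\Delta\,\text{views} = -10 + 10 = 0, \qquad \Delta\,\text{signups} = +1 - 10 \times 0.05 = +0.5$$

Signups rise with views unchanged, which is a Pareto improvement; such a rebalancing exists whenever $m < 1/n$.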

This is one of the claims that Benquo made in his post, so I think we should leave the wording as is, unless he wants to change it.

(I've added a note explaining where the claim comes from.)

I agree that there are some x-risks (like global warming) that are helped by a colony, but most aren't.

Alexei, what are some of the ones (besides AI x-risk) that you think are not?

From the FB thread:

Nathan Bouscal: Note that I haven't heard significant disagreement about a colony being useless-ish against AI x-risk. The argument is that it helps with (almost) every other x-risk.

Robert Wiblin: Even then the disagreement isn't that a Mars colony couldn't help, it's that you can get something similarly valuable on Earth for a fraction of the price and difficulty.

Paul Crowley: The proper disagreement to measure is something like "A permanent, self-sustaining off-Earth colony would be a much more effective mitigation of x-risk than even an equally well funded system of disaster shelters on Earth."

"has some resistance to Eternal September" -> "is resistant to Eternal September" ?

I agree that "For mitigating AI x-risk, an off-Earth colony would be about as useful as a warm scarf."

Otherwise, I think this does seem like the kind of thing you would do to mitigate a broad class of risks. Namely, those that arise on Earth and don't lend themselves to interplanetary travel (e.g. pandemics, nukes, and some of the unknown unknowns).

1alexei
I feel like Paul Crowley's version is basically the same as this one. And yes, I agree that there are some x-risks (like global warming) that are helped by a colony, but most aren't.

First use of "we" should indicate who "we" are, e.g. "We at Arbital..."

is utility the donor gets from donating money "smooth" with respect to the amount raised?

Ideally the utility the donor gets (on reflection) is closely related to the utility the charity gets :-) But I agree that it's important to take donor "hedonics" into account.

I'd be interested to know if you find yourself having that feeling a lot while interacting with claims.

If it's a small minority of the time, I think the solution is a "wrong question" button. If it happens a lot, we might need another object type -- something like a prompt-for-discussion rather than a claim-to-be-agreed-with.

In other words, promoting this claim as worded is misleading?

Maybe "gradual" would be a better term. I mean that there aren't sharp transitions where e.g. raising 48k is not very valuable, but 50k+ is valuable.

Alexei can you say more about why you endorse this proposal? In particular, would you change your mind if you believed this claim?

1Eric Rogstad
Ideally the utility the donor gets (on reflection) is closely related to the utility the charity gets :-) But I agree that it's important to take donor "hedonics" into account.
1alexei
I updated my vote in response to Rob Bensinger's comment. Regarding that claim, there is a related question of: is utility the donor gets from donating money "smooth" with respect to the amount raised?

I don't usually have this concern because I assume that the utility from extra money for an organization grows smoothly as the amount of money increases, and that there are not sharp cutoffs or thresholds (even if the fundraiser declares "milestone" amounts).

alexei*100

You should make a claim, because I think I disagree. For example, there is a threshold at which they can afford to keep everyone at the same salary. Getting less money than that would mean firing someone or cutting salaries, which makes for an irregular utility function.
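
One way to formalize this point (the threshold $T$ and penalty $c$ below are illustrative, not claims about any particular organization): if $T$ is the amount needed to keep everyone at current salaries, the organization's utility from raising $m$ might look like

$$U(m) = \begin{cases} m & \text{if } m \ge T \\ m - c & \text{if } m < T \end{cases}$$

where $c$ is the one-time cost of firing someone or cutting salaries. The jump of size $c$ at $m = T$ is exactly the kind of irregularity that breaks the smoothness assumption above.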

Even if we expect to implement an indirect approach to specifying ethics for our AI systems, it's still valuable to gain a better understanding of our own ethical reasoning (and ethics in general), because:

  1. The better we understand ethics, the less likely we are to take some mistaken assumption for granted when designing the process of extrapolating our ethics.
  2. The better we understand ethics, the more confidently we'll be able to generate test cases for the AI's ethical reasoning.

I think this should be a claim.

I would add an "I assume" here in parentheses, so you're not putting words in their mouth, or projecting feelings into their heads.

I would like to see an operationalization.

Who is our community? How many of us should move?

Overall, I think the post covers most of the important points, but I think I'd want to cut some parts.

I'll try making an outline of what I think the key points are.

I might rephrase this to "initial target" so it's clear that it was intended as a step along the path, not that it was our entire vision.

I think this section conflates two things: 1) the role LW used to play, and 2) the role ultimate-Arbital will play.

I think 1 is a subset of 2.

In particular, I don't think LW had solved the problem you describe here: "If someone wants to catch up on the state of the debate, they either need to get a summary from someone, figure it out as they go along, or catch up by reading the entire discussion."

Not sure if it makes sense for this one to be a probability bar.

Here's an alternate version with an Agreement bar.

1alexei
Agree that it doesn't make sense for this to be a probability bar.

Here's another comment.

Testing out replies.

I am a real comment. Don't delete me please!

This page is an outline for the Universal Property project.

Progress on the project will be measured by tracking the state of the pages linked below, as they transition from redlinks to stubs, etc.

We're going to feature whatever we choose as the current project on the front page, and I want to include some intro text. What do you think of the following (adapted from the first paragraph above):

Help us build an intuitive explanation of this fundamental concept in category theory!

Category theory is famously very difficult to understand, even for people with a relatively high level of mathematical maturity. With this project, we want to produce an explanation that will clearly communicate a core concept in category theory, the universal property, to a wide audience of learners.

See below for our current progress on the project, as well as how you can contribute.

2Patrick Stevens
Looks good to me!

If these are included I think it would be good to also include explanations of why each one is wrong.

This is a clear explanation, but I think some formatting changes could enable readers to grok it even more quickly.

Suppose a reader understands two of the three requirements and just needs an explanation of the third. It would be cool if they could find the sentences they're looking for w/o having to scan a whole paragraph looking for the words, "first", "second", or "third".

I think we can achieve this by A) moving each explanation right under the equation/inequality it's talking about, or B) putting the three explanations in a second numbered list, or C) leaving the three explanations in a paragraph, but using the numerals 1, 2, and 3 within the paragraph. Might require some experimentation to see what looks best.

2Bryce Woodworth
Thanks for the feedback! I'd prefer to have the explanations underneath the requirement they refer to, but I haven't been able to get the spacing to look good. I added numbers into the paragraph to make it visually easy to find where each requirement is discussed. If I get the spacing to work well, I'll switch to that.

Did you just swap the pronouns here? In the previous sentences the speaker was the seller and the listener was the buyer, but now it sounds like it's the other way around.
