If Wisconsin is trading cheese with Ohio, and then Michigan becomes much better at producing cheese, this can harm the economy of Wisconsin. It should not be possible for Wisconsin to be harmed by trading with Michigan unless something weird is going on.
Was "Wisconsin" supposed to be "Ohio" in the second sentence? Or are you contrasting between Wisconsin trading with Ohio and Wisconsin trading with Michigan?
This is silly
Perhaps
then you ought to focus predominantly on something else
This does not seem inconsistent with the post. (Contributing $1 per day towards something hardly seems to preclude focusing predominantly on other things.) Do you disagree with that?
It seems that classifiers trained on adversarial examples may be finding (more) conservative concept boundaries:
We also found that the weights of the learned model changed significantly, with the weights of the adversarially trained model being significantly more localized and interpretable
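For concreteness, here is a minimal FGSM-style adversarial-training sketch in PyTorch; the model, data, and epsilon are placeholders, not the setup from the quoted work.

```python
# Minimal FGSM adversarial-training sketch (PyTorch). Everything here
# (model, data, epsilon) is a stand-in for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
eps = 0.1  # perturbation budget (assumed)

x = torch.randn(64, 2)    # stand-in for real training data
y = (x[:, 0] > 0).long()  # stand-in labels

for step in range(100):
    # Build adversarial examples with the fast gradient sign method.
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

    # Train on the perturbed points; this tends to push the decision
    # boundary at least eps away from the training data, i.e. toward a
    # more "conservative" concept boundary.
    opt.zero_grad()
    loss_fn(model(x_adv), y).backward()
    opt.step()
```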
Benquo Given your analysis, I'm surprised by your vote of 50%. You took what was given as a conservative estimate, added in additional moderating factors, and still got a 10x margin of safety. Is this just because of a strong prior towards discounting cost effectiveness estimates?
How much would one have to donate for you to be 90% sure that it would offset the cost of eating meat?
For counterpoint, see: http://effective-altruism.com/ea/ry/ethical_offsetting_is_antithetical_to_ea/.
For reference, Lewis Bollard estimates that recent corporate cage-free campaigns "will spare about 250 hens a year of cage confinement per dollar spent."
Bollard's more conservative estimate is 38 hen-years per dollar, if you include other expenditures on farm animal welfare. I think we need to include those, because we didn't know in advance which efforts would be effective, and there will probably be some regression to the mean.
If you're thinking of this as an offset (instead of just directly comparing it to other charitable expenditures) then you need to credit other inputs - especially, the time of the people working at these places. Labor captures about 60% of national income. People working at a cha...
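To make the offset arithmetic concrete, here's a back-of-the-envelope sketch using Bollard's conservative figure from above; the consumption number and safety factor are placeholder assumptions, not figures from this thread.

```python
# Back-of-the-envelope offset arithmetic. The 38 hen-years/$ figure is
# Bollard's conservative estimate quoted above; the other numbers are
# ASSUMED placeholders, not numbers from the thread.
hen_years_spared_per_dollar = 38

hen_years_caused_per_year = 1.0  # ASSUMED: hen-years of confinement
                                 # attributable to one person's annual
                                 # egg consumption

naive_offset_cost = hen_years_caused_per_year / hen_years_spared_per_dollar
print(f"${naive_offset_cost:.3f} per year")  # ~$0.03/year at face value

# A 90%-confidence offset would multiply this by a large safety factor
# to cover estimate error and regression to the mean, e.g.:
safety_factor = 100  # ASSUMED
print(f"${naive_offset_cost * safety_factor:.2f} per year with margin")
```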
So I suppose I should attempt a real reply.
I think a probability doesn't seem like the right way to measure this. But I do think it is a good question.
Note that PredictIt currently thinks there's a 7% chance Trump will be impeached within the first 100 days.
That seems high to me for the first 100 days, since Republicans control both the House and the Senate. However, things could change at the midterm elections in 2018.
Overall I'm going with a 1 in 6 chance during the first term.
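For what it's worth, here's a toy consistency check under a constant-hazard assumption (surely wrong in detail, since impeachment risk isn't uniform in time, but useful for orders of magnitude):

```python
# Toy consistency check under a constant-hazard assumption (a strong
# simplification; real impeachment risk is certainly not uniform in time).
p_term = 1 / 6                   # stated probability over a 4-year term
days_in_term = 4 * 365.25

# Per-100-day probability implied by 1/6 over the whole term:
p_100 = 1 - (1 - p_term) ** (100 / days_in_term)
print(f"{p_100:.1%}")            # ~1.2%, well below PredictIt's 7%

# Conversely, 7% per 100 days sustained for a full term would imply:
p_term_implied = 1 - (1 - 0.07) ** (days_in_term / 100)
print(f"{p_term_implied:.0%}")   # ~65%
```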
Fair to paraphrase as: donor-as-silent-partner?
Current thinking is that we should allow claims to be edited, but that users' past votes appear grayed out (so it's clear that they voted on a previous version of the claim). As of today, this hasn't been implemented yet.
The question of tradeoffs between X and Y and winners' curses reminds me of Bostrom's paper, The Unilateralist's Curse.
From the abstract:
...In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will move forward more often than is optimal...
Is the idea that a single organization should pursue X or Y and not worry about the fact that any given donors will value both X and Y to varying degrees?
(If so I might have called this organization-independence, or single-focus.)
I'm not sure what you mean about an exchange rate. Isn't a Pareto improvement something that makes everyone better off (or rather: someone better off and no one worse off)?
This is one of the claims that Benquo made in his post, so I think we should leave the wording as is, unless he wants to change it.
(I've added a note explaining where the claim comes from.)
I agree that there are some x-risks (like global warming) that are helped by a colony, but most aren't.
Alexei, what are some of the ones (besides AI x-risk) that you think are not helped?
From the FB thread:
Nathan Bouscal: Note that I haven't heard significant disagreement about a colony being useless-ish against AI x-risk. The argument is that it helps with (almost) every other x-risk.
Robert Wiblin: Even then the disagreement isn't that a Mars colony couldn't help, it's that you can get something similarly valuable on Earth for a fraction of the price and difficulty.
Paul Crowley: The proper disagreement to measure is something like "A permanent, self-sustaining off-Earth colony would be a much more effective mitigation of x-risk than even ...
"has some resistance to Eternal September" -> "is resistant to Eternal September" ?
I agree that "For mitigating AI x-risk, an off-Earth colony would be about as useful as a warm scarf."
Otherwise, I think this does seem like the kind of thing you would do to mitigate a broad class of risks. Namely, those that arise on Earth and don't lend themselves to interplanetary travel (e.g. pandemics, nukes, and some of the unknown unknowns).
First use of "we" should indicate who "we" are, e.g. "We at Arbital..."
Is the utility the donor gets from donating money "smooth" with respect to the amount raised?
Ideally the utility the donor gets (on reflection) is closely related to the utility the charity gets :-) But I agree that it's important to take donor "hedonics" into account.
I'd be interested to know if you find yourself having that feeling a lot, while interacting with claims.
If it's a small minority of the time, I think the solution is a "wrong question" button. If it happens a lot, we might need another object type, something like a prompt-for-discussion rather than a claim-to-be-agreed-with.
In other words, promoting this claim as worded is misleading?
Maybe "gradual" would be a better term. I mean that there aren't sharp transitions where e.g. raising 48k is not very valuable, but 50k+ is valuable.
Alexei can you say more about why you endorse this proposal? In particular, would you change your mind if you believed this claim?
I don't usually have this concern because I assume that the utility from extra money for an organization grows smoothly as the amount of money increases, and that there are not sharp cutoffs or thresholds (even if the fundraiser declares "milestone" amounts).
You should make a claim, because I think I disagree. For example, there is a threshold at which they can afford to keep everyone at the same salary. Getting less money than that would mean firing someone or cutting salaries, which makes for an irregular utility function.
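To make the disagreement concrete, here's a toy sketch (all numbers invented) of the two utility shapes being debated:

```python
# Sketch of the two utility shapes being debated (all numbers invented).
# "payroll" is a hypothetical threshold below which the org must cut
# salaries or fire someone, as in the comment above.
payroll = 50_000

def smooth_utility(amount):
    """Utility that grows gradually with money raised (no sharp cutoffs)."""
    return amount ** 0.5  # diminishing returns, but no kink

def kinked_utility(amount):
    """Utility with a discontinuous penalty below the payroll threshold."""
    base = amount ** 0.5
    return base if amount >= payroll else base - 50  # firing/salary-cut cost

for amount in (48_000, 50_000, 52_000):
    print(amount, round(smooth_utility(amount), 1), round(kinked_utility(amount), 1))
# The smooth version changes little between $48k and $50k; the kinked one
# jumps at the threshold, which is the crux of the disagreement.
```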
Even if we expect to implement an indirect approach to specifying ethics for our AI systems, it's still valuable to gain a better understanding of our own ethical reasoning (and ethics in general), because:
Better?
I think this should be a claim.
I would add an "I assume" here in parentheses, so you're not putting words in their mouth, or projecting feelings into their heads.
I would like to see an operationalization.
Who is our community? How many of us should move?
Overall, I think the post covers most of the important points, but I think I'd want to cut some parts.
I'll try making an outline of what I think the key points are.
I might rephrase this to "initial target," so it's clear that it was intended as a step along the path, not our entire vision.
I think this section conflates two things: 1) the role LW used to play, and 2) the role ultimate-Arbital will play.
I think 1 is a subset of 2.
In particular, I don't think LW had solved the problem you describe here: "If someone wants to catch up on the state of the debate, they either need to get a summary from someone, figure it out as they go along, or catch up by reading the entire discussion."
Not sure if it makes sense for this one to be a probability bar.
Here's an alternate version with an Agreement bar.
Here's another comment.
Testing out replies.
I am a real comment. Don't delete me please!
This page is an outline for the Universal Property project.
Progress on the project will be measured by tracking the state of the pages linked below, as they transition from redlinks to stubs, etc.
We're going to feature whatever we choose as the current project on the front page, and I want to include some intro text. What do you think of the following (adapted from the first paragraph above):
Help us build an intuitive explanation of this fundamental concept in category theory!
Category theory is famously very difficult to understand, even for people with a relatively high level of mathematical maturity. With this project, we want to produce an explanation that will clearly communicate a core concept in category theory, the universal property, to a wide audience of learners.
See below for our current progress on the project, as well as how you can contribute.
Omit the 'as'
If these are included I think it would be good to also include explanations of why each one is wrong.
This is a clear explanation, but I think some formatting changes could enable readers to grok it even more quickly.
Suppose a reader understands two of the three requirements and just needs an explanation of the third. It would be cool if they could find the sentences they're looking for w/o having to scan a whole paragraph looking for the words, "first", "second", or "third".
I think we can achieve this by A) moving each explanation right under the equation or inequality it's talking about, or B) putting the three explanations in a second numbered list, or C) leaving the three explanations in a paragraph but using the numerals 1, 2, and 3 within the paragraph. It might require some experimentation to see what looks best.
Did you just swap the pronouns here? In the previous sentences the speaker was the seller and the listener was the buyer, but now it sounds like it's the other way around.
and I can toss it?
Nm, the longer explanation later in the page answered my question.