Comment author: MugaSofer 17 January 2014 04:09:57AM 0 points

Well, this is what comes immediately after the quoted paragraph, for context:

And yet without the utilitarian angle, this whole argument falls apart on exactly the “proving too much” grounds pushed by our hypothetical politician above. If you want to ban euthanasia, why not ban health care? If you want to ban prostitution, why not McJobs? If you want to ban BDSM, why not all consensual sex? If you don’t have a good quantitative argument ready, you sure can’t support it on qualitative grounds alone.

Look around you. Just look around you. Have you figured out what we’re looking for yet? That’s right. The answer is sacred values and taboo trade-offs.

So my interpretation doesn't seem entirely unreasonable. I haven't finished rereading the whole post yet, though.

Comment author: JGWeissman 17 January 2014 04:36:54AM 2 points

Arguing that the consequentialist approach is better than the deontological approach is different from skipping that step and going straight to refuting your own consequentialist argument for a position others were arguing on deontological grounds. Saying they should do some expected utility calculations is different from saying that the expected utility calculations they haven't done are wrong.

Comment author: MugaSofer 17 January 2014 01:55:24AM -1 points

Really? When I read that article, I thought he was ramming home his point that his opponents are secretly deontologists there - hence the title of the post in question. Perhaps I too have failed to apply the principle of charity.

(Insert metahumorous joke about not bothering because of the OP's topic here.)

Comment author: JGWeissman 17 January 2014 02:33:26AM 4 points

I thought he was ramming home his point that his opponents are secretly deontologists there

I think the point was that his opponents are openly deontologists, making openly deontological arguments for their openly deontological position, and therefore they are rightly confused and unmoved by Yvain's refutation of a shoehorning of their position into a consequentialist argument they never made, which Yvain now understands, and therefore he doesn't do that anymore.

Comment author: JGWeissman 14 January 2014 09:11:23PM 16 points

This seems to be overloading the term "side effects". The functional programming concept of a side effect (which functional programming says its functions shouldn't have) is a change to the global state of the invoking program made other than by returning a value. It makes no claims about these other concepts: a program being affected by analyzing the function's source code independent of invoking it, or the function running on morally relevant causal structure.
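A minimal illustration of the distinction, as a hypothetical code sketch (the function names here are invented, not taken from the discussion):

```python
counter = 0  # global state visible to the whole program


def impure_increment(x):
    """Has a side effect in the functional-programming sense:
    it mutates global state in addition to returning a value."""
    global counter
    counter += 1
    return x + 1


def pure_increment(x):
    """Side-effect free: it interacts with its caller only
    through its argument and its return value."""
    return x + 1
```

Note that whether some other program analyzes the source code of `pure_increment`, or what causal structure it happens to run on, is irrelevant to this definition; "pure" here only constrains how the function affects the state of the program that invokes it.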

Comment author: Calvin 13 January 2014 12:53:42PM 1 point

Well, this is certainly something I agree with, and after looking for the context of the quote I see that it can be interpreted that way.

I agree that my interpretation wasn't very, well... charitable, but without context it really reads like yet another chronicle of a superior debater celebrating victory over someone who dared to be wrong on the Internet.

Comment author: JGWeissman 13 January 2014 02:50:05PM 8 points

It seems to me that in the quote Yvain is admitting an error, not celebrating victory. Try taking his use of the word "reasonably" at face value.

In response to comment by Benja on Why CFAR?
Comment author: brazil84 31 December 2013 09:02:18PM -1 points

I would agree with your reasoning if CFAR claimed that they can reliably turn people into altruists free of cognitive biases within the span of their four-day workshop. If they claimed that and were correct, then it shouldn't matter whether they (a) require up-front payment and offer a refund or (b) have people decide what to pay after the workshop, since a bias-free altruist would end up paying the same in either case.

It's not so much what CFAR is claiming as what their goals are and which outcomes they prefer.

The goal is to create people who are effective, rational do-gooders. I see four main possibilities here:

First, that they succeed in doing so.

Second, that they fail and go out of business.

Third, that they become a sort of self-help cult like the Landmark Forum, i.e. they charge people money without delivering much benefit.

Fourth, they become a sort of fraternal organization, i.e. membership brings benefits, mainly from being able to network with other members.

Obviously (1) is the top choice. But if (1) does not occur, which would they prefer -- (2), or some combination of (3) and (4)? By charging money up front, they are on the path to (3) or (4) as a second choice, which goes against their stated goal.

So let's assume that they do not claim to be able to turn people into effective rational do-gooders. The fact remains that they hope to do so. And one needs to ask, what do they hope for as a second choice?

In response to comment by brazil84 on Why CFAR?
Comment author: JGWeissman 01 January 2014 02:17:11AM 0 points

CFAR can achieve its goal of creating effective, rational do-gooders by taking existing do-gooders and making them more effective and rational. This is why they offer scholarships to existing do-gooders. Their goal is not to create effective, rational do-gooders out of blank slates but to make valuable marginal increases in this combination of traits, often by making people who already rank highly in these areas even better.

They also use the same workshops to make people in general more effective and rational, which they can charge money for to fund the workshops and which gives them more data to test their training methods on. That they don't turn people in general into do-gooders does not constitute a failure of the whole mission; these activities support the mission without directly fulfilling it.

Fourth, they become a sort of fraternal organization, i.e. membership brings benefits, mainly from being able to network with other members.

CFAR is creating an alumni network to create benefits on top of increased effectiveness and rationality.

In response to Why CFAR?
Comment author: brazil84 31 December 2013 06:47:02AM -8 points

If CFAR's curriculum is good at creating people who are effective rational do-gooders, then such people will (1) correctly ascertain the value of CFAR; (2) have the means to support CFAR; and (3) act by supporting CFAR. So arguably there is no need to charge money up front for CFAR training -- just tell participants to evaluate the training after the fact and pay whatever they think is appropriate. Kind of like a tip in a restaurant.

In response to comment by brazil84 on Why CFAR?
Comment author: JGWeissman 31 December 2013 02:28:52PM 6 points

CFAR does offer to refund the workshop fee if after the fact participants evaluate that it wasn't worth it. They also solicit donations from alumni. So they are kind of telling participants to evaluate the value provided by CFAR and pay what they think is appropriate, while providing an anchor point and default which covers the cost of providing the workshop. That anchor point and default are especially important for the many workshop participants who are not selected for altruism, who probably will learn a lot of competence and epistemic rationality but not much altruism, and whose workshop fees subsidize CFAR's other activities.

Comment author: John_Maxwell_IV 27 December 2013 07:19:21AM * 22 points

I've heard that CFAR is already trying to move in the direction of being self-sustaining by charging higher fees and stuff. I went to a 4-day CFAR workshop and was relatively unimpressed; my feeling about CFAR is that they are providing a service to individuals for money and it's probably not a terrible idea to let the market determine if their services are worth the amount they charge. (In other words, if they're not able to make a sustainable business or at least a university-style alum donor base out of what they're doing, I'm skeptical that propping them up as a non-alum is an optimal use of your funds.)

FHI states that they are interested in using marginal donations to increase the amount of public outreach they do. It seems like FHI would have a comparative advantage over MIRI in doing outreach, given that they are guys with PhDs from Oxford and thus would have a higher level of baseline credibility with the media, etc. So it's kind of disappointing that MIRI seems to be the more outreach-focused of the two, but it seems like the fact that FHI gets most of its funding from grants means they're restricted in what they can spend money on. FHI strikes me as more underfunded than MIRI, given that they are having to do a collaboration with an insurance company to stay afloat, whereas MIRI has maxed out all of their fundraisers to date. (Hence my decision to give to FHI this year.)

If you do want to donate to MIRI, it seems like the obvious thing to do would be to email them and tell them that you want to be a matching funds provider for one of their fundraisers, since they're so good at maxing those out. (I think Malo would be the person to contact; you can find his email on this page.)

Comment author: JGWeissman 27 December 2013 01:42:13PM 20 points

my feeling about CFAR is that they are providing a service to individuals for money and it's probably not a terrible idea to let the market determine if their services are worth the amount they charge.

I think that CFAR's workshops are self funding and contribute to paying for organizational overhead. Donated funds allow them to offer scholarships to their workshops to budding Effective Altruists (like college students) and run the SPARC program (targeting mathematically gifted children who may be future AI researchers). So, while CFAR does provide a service to individuals for money, donated money buys more services targeted at making altruistic people more effective and getting qualified people working on important hard problems.

Comment author: JGWeissman 27 December 2013 06:08:14AM 22 points

I'm convinced AGI is much more likely to be built by a government or major corporation, which makes me more inclined to think movement-building activities are likely to be valuable, to increase the odds of the people at that government or corporation being conscious of AI safety issues, which MIRI isn't doing.

MIRI's AI workshops get outside mathematicians and AI researchers involved in FAI research, which is good for movement building within the population of people likely to be involved in creating an AGI.

Comment author: lukeprog 19 December 2013 08:00:11PM 13 points

Reproduced for convenience...

On G+, John Baez wrote about the MIRI workshop he's currently attending, in particular about Löb's Theorem.

Timothy Gowers asked:

Is it possible to give just the merest of hints about what the theorem might have to do with AI?

Qiaochu Yuan, a past MIRI workshop participant, gave a concise answer:

Suppose you want to design an AI which in turn designs its (smarter) descendants. You'd like to have some guarantee that not only the AI but its descendants will do what you want them to do; call that goal G. As a toy model, suppose the AI works by storing and proving first-order statements about a model of the environment, then performing an action A as soon as it can prove that action A accomplishes goal G. This action criterion should apply to any action the AI takes, including the production of its descendants. So it would be nice if the AI could prove that if its descendants prove that action A leads to goal G, then action A in fact leads to goal G.

The problem is that if the AI and its descendants all believe the same amount of mathematics, say PA, then by Löb's theorem this implies that the AI can already prove that action A leads to goal G. So it must already do the cognitive work that it wants its smarter descendants to do, which raises the question of why it needs to build those descendants in the first place. So in this toy model Löb's theorem appears as a barrier to an AI designing descendants which it can't simulate but can provably trust.
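For reference, a minimal sketch of the formal shape of that barrier, writing □P for "PA proves P" and using G(A) (shorthand introduced here, not Qiaochu's notation) for "action A leads to goal G":

```latex
% Löb's theorem: for any sentence P,
%   if PA proves (provability of P implies P), then PA proves P.
\[
\mathrm{PA} \vdash (\Box P \to P)
\quad\Longrightarrow\quad
\mathrm{PA} \vdash P
\]
% The parent AI would like to trust its descendant's proofs:
\[
\mathrm{PA} \vdash \bigl(\Box\, G(A) \to G(A)\bigr)
\]
% But by Löb's theorem this already yields
\[
\mathrm{PA} \vdash G(A)
\]
% so the parent proves outright the very thing it hoped to
% delegate to its descendant.
```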

Comment author: JGWeissman 19 December 2013 08:22:09PM 3 points

Qiaochu's answer seems off. The argument that the parent AI can already prove what it wants the successor AI to prove, and therefore isn't building a more powerful successor, isn't very compelling, because being able to prove things is a different problem from searching for useful things to prove. It also doesn't encompass what I understand to be the Löbian obstacle: that being able to prove "if my own mathematical system proves something, that thing is true" implies that the system is inconsistent.
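In symbols (a sketch): that second point is the special case P = ⊥ of Löb's theorem. If a theory T proves "whatever T proves is true" even just for P = ⊥ (which is the same as proving its own consistency), then T proves a falsehood:

```latex
% Taking P = ⊥ (falsity) in Löb's theorem:
\[
T \vdash (\Box \bot \to \bot)
\quad\Longrightarrow\quad
T \vdash \bot
\]
% (□⊥ → ⊥) is equivalent to Con(T), so a consistent theory cannot
% prove even this one instance of "whatever I prove is true"; this
% recovers Gödel's second incompleteness theorem as a corollary.
```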

Is there more context on this?

Comment author: CCC 13 December 2013 02:01:03PM 7 points

Sooo... Quirrell knows a stunning hex that looks like Avada Kedavra?

Then, back in Azkaban, facing that auror, when Quirrell used Avada Kedavra in an attempt to force the auror to dodge, and Harry stopped it with his patronus...

...why did Quirrell not use the green stunner, unless Quirrell actually wanted to kill the auror?

And how long will it be until Harry asks that question?

Comment author: JGWeissman 15 December 2013 06:43:03AM 4 points

Then, back in Azkaban, facing that auror, when Quirrell used Avada Kedavra in an attempt to force the auror to dodge

What do you think you know about which spell Quirrell used, and how do you think you know it?
