Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: epursimuove 11 June 2017 05:24:09AM 0 points

64/4 these

What does this mean? Google isn't helping and the only mention I see on LW is this post.

Comment author: Benquo 11 June 2017 07:47:11AM 0 points

The Pareto Principle says that you can 80:20 many things, i.e. get 80% of the value from 20% of the work. If you 80:20 the 20%, you end up with 64% of the value for 4% of the work.
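The compounding arithmetic here can be spelled out in a few lines (a minimal sketch; the function name is just for illustration):

```python
# Each nested 80:20 pass keeps 80% of the remaining value
# while spending only 20% of the remaining work.
def pareto_iterate(n):
    """Value and work fractions after n nested 80:20 passes."""
    return 0.8 ** n, 0.2 ** n

value, work = pareto_iterate(2)  # two passes: the "64/4" case
print(f"{value:.0%} of the value for {work:.0%} of the work")
```

The same pattern generalizes: a third pass would give roughly 51% of the value for 0.8% of the work, though with rapidly diminishing returns on how much value remains.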

Comment author: Duncan_Sabien 26 May 2017 03:12:35AM * 1 point

Yeah, Geoff and Leverage have a lot I would love to look at and emulate, but I haven't been running under the assumption that I'd just ... be allowed to. I'm beginning some conversations that are exciting and promising.

That being said, I do think that the overall goals are somewhat different. Leverage (as far as I can tell) is building a permanent superteam to actually do stuff. I think Dragon Army is building a temporary superteam that will do stuff in the short and medium term, but is more focused on individual leveling up and sending superhero graduates out into the world to do lots and lots of exploring and tackle a wide number of strategies. My model of Leverage is looking for the right thing to exploit on, whereas I'm looking for how to create competent people, and while there's a lot of overlap those are not the same Polaris.

I similarly think Geoff is highly competent and certainly outstrips me in some ways (and possibly is net more qualified), but I'd posit I outstrip him in a roughly similar number of ways, and that he's better matched for what Leverage is doing and I'm better matched for what DA is doing (sort of tautologically, since we're each carving out mountains the way we think makes the most sense). I think the best of all would be if Geoff and I end up in positions of mutual respect and are able to swap models and resources, but I acknowledge he's a good five years my senior and has no reason to treat me as an equal yet.

EDIT: Also note that Geoff is disqualified by virtue of already being busy, and as for "just join Leverage," well ... they've never really expressed interest in me up to this point, so I figured I wouldn't bother them unless I was no longer employed day-to-day.

Comment author: Benquo 26 May 2017 03:15:45AM 0 points

What do you think are the key advantages & disadvantages of your Polaris vs Leverage's? How does this relate to methods?

Comment author: Duncan_Sabien 26 May 2017 02:26:25AM 2 points

I like both your praise and your criticism. re: the criticism, one of the reasons I've held off a bit is a suspicion that I can't actually well-model the sorts of things the house will accomplish once fully formed (that it will be stranger/more surprising than I think). I had some thoughts, like running a talk series at prestigious universities, publishing a book or a movie, creating an org to teach rationality to middle- or high-schoolers and then doing it, building a robot car, trying to develop Veritaserum, etc. but they were all over the map.

Comment author: Benquo 26 May 2017 03:12:36AM * 5 points

I can't actually well-model the sorts of things the house will accomplish once fully formed

My best guess is that having a highly specific plan that includes steering/replanning capacity and then totally abandoning it when the wheels hit the road because it turns out to be the wrong thing is way better than having a generic plan.

I had some thoughts, like running a talk series at prestigious universities, publishing a book or a movie, creating an org to teach rationality to middle- or high-schoolers and then doing it, building a robot car, trying to develop Veritaserum

I'd love to see how you'd design a house specifically for any one of these goals. Robot car is the one that I think would give you the most feedback from your internal models during the planning stage, followed by publishing a book or movie. "Create an org" is a bit recursive, and a talk series is probably either too easy or too vague. Not sure what you mean by develop Veritaserum but it seems to strongly overlap with some of Leverage's most plausibly successful research.

I claim with moderate confidence that simply walking through how the house as currently planned might go about building a robot car would substantially improve not just your plans for particular object-level capacity, but general capacity. "How will this organization change its mind?" might be a lot harder to cash out usefully than "How will this organization change its mind about valve design for the fuel injector?".

Comment author: Benquo 26 May 2017 02:21:14AM * 17 points

This seems similar to Leverage in a lot of ways. It seems like it would be really instructive to contrast your plan with Leverage's plan - as initially intended, and as executed - to see what you plan to invest in that they aren't, what you're not doing that they are, and costs and benefits of those differences.

Other contrasting case studies might also add clarity:

  • Esalen
  • kibbutzim
  • the old Singularity Institute house
  • residential colleges
  • fraternities
  • Buddhist monasteries
  • Christian monasteries
  • actual armies
  • actual paramilitary organizations / militias
  • Sea Org

It probably makes sense to 64/4 these with rough sketches from memory/stereotypes/Wikipedia-ing before bothering to do any time-intensive research.

Comment author: Benquo 26 May 2017 02:18:58AM * 13 points

Praise: The focus on actually doing a thing is great.

Criticism: Most of this post was about methods the house will have, why these are OK, etc. Comparatively little was about what the house is going to be used to accomplish outside itself. This seems worth putting much more up-front thought into given how much of the point is to make a house that can actually do a thing. Probably your methods and selection criteria are not very well-calibrated for whatever project will turn out to be best - human coordination is much easier when you're coordinating about something in particular.

Obviously you will not know everything perfectly in advance no matter how much planning you do - but planning to accomplish a particular thing is very qualitatively different from planning to accomplish things in general.

Praise: A lot of the details on how to live together well (group exercise, food, time explicitly set aside for checking in) seem really good. If step 1 is just "learn to live well together," that is itself a respectable project, and one most of the Rationalists have failed at. Probably most attempts at this fail; we only observe the old communes that didn't fall apart.

Comment author: paulfchristiano 08 May 2017 02:13:28AM 2 points

I think this is very unlikely.

Comment author: Benquo 08 May 2017 06:56:07AM 0 points

I think this would be valuable to work out eventually, but this probably isn't the right time and place, and in the meantime I recognize that my position isn't obviously true.

Comment author: paulfchristiano 08 May 2017 02:11:54AM 1 point

a. Private discussion is nearly as efficient as public discussion for information-transmission, but has way fewer political consequences. On top of that, the political context is more collaborative between participants and so it is less epistemically destructive.

b. I really don't want to try to use collectively-enforced norms to guide epistemology and don't think there are many examples of this working out well (whereas there seem to be examples of avoiding epistemically destructive norms by moving into private).

Comment author: Benquo 08 May 2017 06:49:49AM * 0 points

Private discussion is nearly as efficient as public discussion for information-transmission, but has way fewer political consequences.

If this is a categorical claim, then what are academic journals for? Should we ban the printing press?

If your claim is just that some public forums are too corrupted to be worth fixing, not a categorical claim, then the obvious thing to do is to figure out what went wrong, coordinate to move to an uncorrupted forum, and add the new thing to the set of things we filter out of our new walled garden.

Comment author: entirelyuseless 06 May 2017 02:22:52PM * 0 points

I would be a bit surprised if that was explicitly what Nate meant, but it is what we should be concerned about when asking whether someone is a bad person.

To make my general claim clearer: "doing evil to bring about good is still doing evil" is necessarily true, for exactly the same reason that "blue objects touching white objects are still blue objects" is true.

I agree that many utilitarians understand their moral philosophy to recommend doing evil for the sake of good. To the extent that it does, their moral philosophy is mistaken. That does not necessarily mean that utilitarians are bad people, because you can be mistaken without being bad. But this is precisely the reason that when you present scenarios where you say, "would you be willing to do such and such a bad thing for the sake of good," many utilitarians will reply, "No! That's not the utilitarian thing to do!" And maybe it is the utilitarian thing, and maybe it isn't. But the real reason they feel the impulse to say no, is that they are not bad people, and therefore they do not want to do bad things, even for the sake of good.

This also implies, however, that if someone understands utilitarianism in this way and takes it too seriously, they will indeed start down the road towards becoming a bad person. And that happened even in the context of the present discussion (understood more broadly to include its antecedents) when certain people insisted, saying in effect, "What's so bad about lying and other deceitful tactics, as long as they advance my goals?"

Comment author: Benquo 06 May 2017 05:43:57PM 0 points

I agree that this exists, and claim that it ought to be legitimate discourse to claim that someone else is doing it.

Comment author: Raemon 03 May 2017 02:59:14AM 1 point

I agree that this is a big and complicated deal and "never resort to sensationalist tactics" isn't a sufficient answer for reasons close to what you describe. I'm not sure what the answer is, but I've been thinking about ideas.

Basically, I think we automatically fail if we have no way to punish defectors, and we also automatically fail if controversy/sensationalism-as-normally-practiced is our main tool for doing so.

I think the threat of sensationalist tactics needs to be real. But it needs to be more like Nuclear Deterrence than it is like tit-for-tat warfare.

We've seen where sensationalism/controversy leads - American journalism. It is a terrible race to the bottom of inducing as much outrage as you can. It is anti-epistemic, anti-instrumental, anti-everything. Once you start down the dark path, forever will it dominate your destiny.

I am very sympathetic to the fact that Ben tried NOT doing that, and it didn't work.

Comment author: Benquo 06 May 2017 02:40:08AM * 1 point

Comments like this make me want to actually go nuclear, if I'm not already getting credit for avoiding doing so.

I haven't really called anyone in the community names. I've worked hard to avoid singling people out, and instead tried to make the discussion about norms and actions, not persons. I haven't tried to organize any material opposition to the interests of the organizations I'm criticizing. I haven't talked to journalists about this. I haven't made any efforts to widely publicize my criticisms outside of the community. I've been careful to bring up the good points as well as the bad of the people and institutions I've been criticizing.

I'd really, really like it if there were a way to get sincere constructive engagement with the tactics I've been using. They're a much better fit for my personality than the other stuff. I'd like to save our community, not blow it up. But we are on a path towards enforcing norms to suppress information rather than disclose it, and if that keeps going, it's simply going to destroy the relevant value.

(On a related note, I'm aware of exactly one individual who's been accused of arguing in bad faith in the discourse around Nate's post, and that individual is me.)

Comment author: entirelyuseless 05 May 2017 01:14:00PM 1 point

This seems completely false. Most people think that Hitler and Stalin were intrinsically bad, and they would be likely to think this with or without systems of dominance.

Kant and Thomas Aquinas explain it quite well: we call someone a "bad person" when we think they have bad will. And what does bad will mean? It means being willing to do bad things to bring about good things, rather than wanting to do good things period.

Comment author: Benquo 05 May 2017 08:22:28PM * 0 points

Do you think Nate's claim was that we oughtn't so often jump to the conclusion that people are willing to do bad things in order to bring about good things? That this is the accusation that's burning the commons? I'm pretty sure many utilitarians would say that this is a fair description of their attitude at least in principle.
