Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: RobbBB 03 March 2015 04:51:38AM 6 points [-]

We may do a staggered release for the print version. Splitting up the eBook might have given people headaches because of all the clickable links between different parts of the book.

Comment author: jsalvatier 03 March 2015 05:53:53AM 1 point [-]

Ah, I didn't realize you were also doing a print version.

Comment author: jsalvatier 03 March 2015 04:32:01AM *  2 points [-]

I'm very surprised you guys are releasing them all at once rather than spreading the releases out over a year or so. That seems like it would generate more interest.

Also, I'm somewhat disappointed that they were not more substantially edited. When I show the Sequences to other people, they often complain that the examples are terrible and more offensive than necessary, even when they agree with the argument. But I get that fixing that would require a lot of work.

Comment author: Salemicus 14 January 2015 10:38:16AM 3 points [-]

I'd suggest you start with "The Three Languages of Politics" - see, e.g., the podcast here and an example review here. It's all about how to be meta-rational about politics: spotting your own biases and noticing how we frame our language in political discussion. In other words, very LessWrong stuff.

I agree he is ideological in the sense that he has priors for understanding new events. We all do. I disagree that he is ideological in the sense that he fails Ideological Turing Tests, but YMMV.

Comment author: jsalvatier 14 January 2015 07:51:36PM 2 points [-]

Two months later, he reemerged at his own domain, promising to avoid a particular kind of discourse, one aimed at closing the minds of those on one’s own side. Although Kling was never among the worst offenders on this score, one could indeed sense a shift in his tone. He prioritized framing his opponents’ positions in the most favorable light, and he developed a framework for understanding political issues from progressive, conservative, and libertarian perspectives.

Hey, that sounds pretty good! This was precisely my problem with him on EconLog. My ideology matches his a lot, but I was irritated because he seemed to make okay, but not especially good, arguments for things I agreed with, and he seemed to frame things in unnecessarily charged ways. He often framed things in a very libertarian way (in the Three Languages of Politics sense, which seems like a pretty cool idea), and I'm glad he does that a lot less now!

His book sounds interesting.

Comment author: Salemicus 12 January 2015 04:18:31PM *  5 points [-]

I would highly recommend Arnold Kling, both for his excellent blog, and his most recent book, which has been occasionally discussed on this site. Not everything he writes is necessarily "lesswrongish", mind you, but most of it is.

Comment author: jsalvatier 14 January 2015 12:45:46AM 1 point [-]

I'm surprised; I followed him on EconLog for a long, long time, but usually found him too ideological for my tastes (even though I lean pretty libertarian) and just not that interesting. What are some of your favorites?

Comment author: pianoforte611 07 January 2015 07:21:40PM 1 point [-]

How do you figure out what is best? I used the "sort by customer rating" function on Amazon when I bought my first set of household goods with decent results.

Comment author: jsalvatier 07 January 2015 08:24:35PM 4 points [-]

The standard advice for the best quality/price tradeoff seems to be Victorinox knives with the fibrox handle.

Comment author: jkaufman 10 October 2014 11:13:27AM *  2 points [-]

Perhaps one way to improve the measurement would be to structure the question in terms of preference rather than direct measurement

This is a really cool idea. But even preference has issues. For example, I like contra dance (a kind of social dancing) a lot and have a good time when I go; the in-the-moment feeling is one of my favorite things. If you asked me, "would you rather be contra dancing?" I would usually say yes. But if you look at my behavior, I don't actually go that often anymore, even when I do have free time. How do you tell the difference between me irrationally underconsuming something I enjoy and me overestimating how much I enjoy it in posed comparisons?

Comment author: jsalvatier 14 October 2014 11:37:11PM 0 points [-]

For certain formulations of this, that objection seems irrelevant. Imagine that instead of a 1-10 scale, you had a ranked list of activities (or sets of activities).

Comment author: ESRogs 26 September 2014 09:24:47PM *  8 points [-]

In order to get a better handle on the problem, I'd like to try walking through the mechanics of how a vote by a moral parliament might work. I don't claim to be doing anything new here; I just want to describe the parliament in more detail to make sure I understand it, and so that it's easier to reason about.

Here's the setup I have in mind:

  • let's suppose we've already allocated delegates to moral theories, and we've ended up with 100 members of parliament, MP_1 through MP_100
  • these MP's will vote on 10 bills B_1 through B_10 that will each either pass or fail by majority vote
  • each MP M_m has a utility score for each bill B_b passing U_m,b (and assigns zero utility to the bill failing, so if they'd rather the bill fail, U_m,b is negative)
  • the votes will take place on each bill in order from B_1 to B_10, and this order is known to all MP's
  • all MP's know each other's utility scores

Each MP wants to maximize the utility of the results according to their own scores, and they can engage in negotiation before the voting starts to accomplish this.
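For concreteness, here's a minimal sketch of that setup in Python. The random utilities and the sincere-voting rule (each MP votes yes iff their utility for the bill passing is positive) are just placeholder assumptions to make the model runnable, not part of the proposal itself:

```python
import numpy as np

rng = np.random.default_rng(0)
N_MPS, N_BILLS = 100, 10

# U[m, b] = utility MP m assigns to bill b passing (utility 0 if it fails,
# so a negative U[m, b] means MP m would rather the bill fail).
# Random scores here stand in for scores derived from each moral theory.
U = rng.normal(size=(N_MPS, N_BILLS))

def sincere_vote_outcomes(U):
    """Each MP votes yes on a bill iff they assign it positive utility.
    A bill passes on a strict majority of yes votes."""
    yes_votes = (U > 0).sum(axis=0)
    return yes_votes > U.shape[0] / 2  # boolean array: did each bill pass?

passed = sincere_vote_outcomes(U)
# Each MP's realized utility is the sum of their scores over the bills that passed.
realized = U[:, passed].sum(axis=1)
```

Negotiation and vote trading would then amount to strategies that move the vote matrix away from this sincere baseline when doing so raises an MP's realized utility.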

Does this seem to others like a reasonable description of how the parliamentary vote might work? Any suggestions for improvements to the description?

If others agree that this description is unobjectionable, I'd like to move on to discussing negotiating strategies the MP's might use, the properties these strategies might have, and whether there are restrictions that might be useful to place on negotiating strategies. But I'll wait to see if others think I'm missing any important considerations first.

Comment author: jsalvatier 26 September 2014 10:28:21PM *  2 points [-]

Remember, there's no such thing as a privileged zero of utility. You can assign an arbitrarily bad value to a bill failing, but that choice seems a bit arbitrary.

Comment author: Manfred 26 September 2014 07:55:41PM *  4 points [-]

Is there some way to rephrase this without bothering with the parliament analogy at all? For example, how about just having each moral theory assign the available actions a "goodness number" (basically expected utility). Normalize the goodness numbers somehow, then just take the weighted average across moral theories to decide what to do.

If we normalize by dividing each moral theory's answers by its biggest-magnitude answer (only closed sets of actions allowed :) ), I think this regenerates the described behavior, though I'm not sure. Obviously this cuts out the "human-ish" behavior of parliament members, but I think that's a feature, since they don't exist.
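A quick numerical sketch of that normalize-then-average procedure; the goodness scores and the credences across theories are made-up numbers for illustration:

```python
import numpy as np

# goodness[t, a]: expected-utility-like score theory t assigns to action a
goodness = np.array([
    [10.0, -5.0,  2.0],   # theory 1's scores for actions 0, 1, 2
    [ 1.0,  4.0, -2.0],   # theory 2's scores
])
credences = np.array([0.7, 0.3])  # weight placed on each theory

# Divide each theory's scores by its biggest-magnitude score,
# so every theory's answers land in [-1, 1].
normalized = goodness / np.abs(goodness).max(axis=1, keepdims=True)

# Credence-weighted average across theories; act on the highest score.
combined = credences @ normalized
best_action = int(np.argmax(combined))
```

Note that this max-magnitude normalization makes each theory's influence depend on which actions happen to be available, which is one place the analogy to bargaining parliament members might come apart.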

Comment author: jsalvatier 26 September 2014 10:26:39PM 2 points [-]

I think the key benefit of the parliamentary model is that the members will vote-trade in order to maximize their expected utility.

Comment author: Stuart_Armstrong 22 August 2014 10:52:06AM 2 points [-]

It seems the general goal could be cashed out in simple ways, with biochemistry, epidemiology, and a (potentially flawed) measure of "health".

Comment author: jsalvatier 28 August 2014 06:57:08PM 1 point [-]

I think you're sneaking in a lot with the measure of "health". As far as I can see, the only reason it's dangerous is that it cashes out in the real world, on the real broad population rather than on a simulation. Having the AI reason about a drug's effects on a real-world population definitely seems like a general skill, not a narrow one.

Comment author: jsalvatier 24 August 2014 06:17:13PM 3 points [-]

"Narrow AI can be dangerous too" is an interesting idea, but I don't think this example is very convincing. I think you've accidentally snuck in some things that aren't inside its narrow domain. In this scenario the AI has to model the actual population, including its size, which doesn't seem relevant to the narrow task. Also, it seems unlikely that people would use reducing the absolute number of deaths as the goal function, as opposed to the chance of death for those already alive.
