Comment author: John_Maxwell_IV 17 July 2014 07:02:06AM 7 points [-]

Perhaps you could see trying to think of analogies as sampling randomly in conceptspace from a reference class that the concept you are interested in belongs to.

Imagine a big book of short computer programs that simulate real-life phenomena. I'm working on a new program for a particular phenomenon I'm trying to model. I don't have much data about my phenomenon, and I'm trying to figure out if a recursive function (say) would accurately model the phenomenon. By looking through my book of programs, I can look at the frequency with which recursive functions seem to pop up when modeling reality and adjust my credence that the phenomenon can be modeled with a recursive function accordingly.

Choosing only to look at pages for phenomena that have some kind of isomorphism with the one I'm trying to model amounts to sampling fewer data points from a tighter reference class.

This suggests an obvious way to improve on reasoning by analogy: try to come up with a bunch of analogies, in a way that involves minimal motivated cognition (to ensure a representative sample), and then look at the fraction of the analogies for which a particular proposition holds (perhaps weighting more isomorphic analogies more heavily).
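The procedure above can be sketched in a few lines. This is only a hypothetical illustration: the similarity weights and true/false flags below are made-up numbers standing in for "how isomorphic each analogy is" and "whether the proposition holds for it."

```python
# Each analogy gets a similarity weight (how isomorphic it is to the
# target phenomenon) and a flag for whether the proposition holds for it.
# These values are invented for illustration.
analogies = [
    (0.9, True),
    (0.6, True),
    (0.4, False),
    (0.8, True),
    (0.2, False),
]

def weighted_credence(samples):
    """Fraction of analogies for which the proposition holds,
    weighting more isomorphic analogies more heavily."""
    total = sum(weight for weight, _ in samples)
    support = sum(weight for weight, holds in samples if holds)
    return support / total

print(weighted_credence(analogies))  # fraction of weighted support
```

With equal weights this reduces to the plain fraction of supporting analogies; the weighting just formalizes "more isomorphic analogies count for more."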

Comment author: jsalvatier 17 July 2014 10:52:27PM 1 point [-]

I like the idea of coming up with lots of analogies and averaging them or seeing if they predict things in common.

Comment author: jsalvatier 05 July 2014 10:36:38PM *  0 points [-]
  1. Human Compatible AGI
  2. Human Safe AGI
  3. Cautious AGI
  4. Secure AGI
  5. Benign AGI
Comment author: jsteinhardt 20 June 2014 09:06:17AM 1 point [-]

It's not obvious to me that Qiaochu would endorse utility functions as a standard for "ideal rationality". I, for one, do not.

Comment author: jsalvatier 20 June 2014 07:06:15PM 1 point [-]

Even if you don't think it's the ideal, utility-based decision theory does give us insights that I don't think you can naturally pick up from anything else we've discovered yet.

Comment author: jsalvatier 29 May 2014 12:07:03AM 1 point [-]

About 50% of my day to day friends are LWers. All 3 of my housemates are LWers. I've hosted Yvain and another LWer. Most of the people I know in SF are through LW. I've had a serious business opportunity through someone I know via LW. I've had a couple of romantic interests.

Comment author: jsalvatier 07 May 2014 07:42:46PM 0 points [-]

This is a good thing, but it also means that we're probably less likely than average to comment about an argument's relevance even in cases where we should comment on it.

That's my experience with myself.

In response to Channel factors
Comment author: Daniel_Burfoot 12 March 2014 05:32:19PM -1 points [-]

closing browser tabs as soon as I’m done with them

There should be a browser feature along the lines of: if a tab is deeply buried and hasn't been used in a while, it gets closed automatically.

In response to Channel factors
Comment author: jsalvatier 15 March 2014 01:28:03AM 5 points [-]

This seems quite close to Beware Trivial Inconveniences. It's good to have an outside established name for this, though.

In response to Proportional Giving
Comment author: drethelin 04 March 2014 02:58:09AM 1 point [-]

Proportional giving is good because it's a kind of giving that you can get a lot of people to do, not for weird math reasons. I agree with what you're saying given utilitarian calculations, but I don't think you're doing the right calculation.

Comment author: jsalvatier 06 March 2014 07:58:53PM 0 points [-]

Can you expand on that? What do you think would be closer to the right calculation?

In response to Proportional Giving
Comment author: jsalvatier 05 March 2014 01:24:55AM 1 point [-]

This seems obviously correct to me. In my experience this is not obvious to everyone and many people find it a bit distasteful to talk about. I'm glad you bring it up.

I haven't really tried hard, but I think I would find it pretty difficult to get myself to behave this way.

The way I "resolve" this dissonance is by thinking in terms of a parliamentary model of myself: part of me wants to be altruistic and part of me is selfish, and they sort of "vote" over the use of resources.
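One simple way to cash out that parliamentary picture is proportional allocation: each part gets a voting weight, and resources are split in proportion to those weights. The weights and budget below are purely hypothetical.

```python
# Hypothetical "parliament of parts": assumed voting weights for each
# part of me, summing to 1, and a stand-in resource budget.
parts = {"altruistic": 0.6, "selfish": 0.4}
budget = 1000.0

# Each part gets a share of the budget proportional to its weight.
allocation = {name: weight * budget for name, weight in parts.items()}
print(allocation)
```

Real parliamentary models involve bargaining and vote-trading between the parts, not just proportional splits, but this captures the basic intuition of compromise over resources.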

Comment author: NancyLebovitz 25 February 2014 08:55:04PM 3 points [-]

Not exactly. My best guess is that figuring out conscientiousness, benevolence, and loyalty is so hard that people mostly trust or mistrust without very good reasons.

And the reason loyalty is on the list is that companies don't want embezzlers, but they don't want whistleblowers, either.

Comment author: jsalvatier 26 February 2014 01:02:36AM 0 points [-]

You say "not exactly," but you seem to be agreeing and clarifying?

Also, there are some definite, strongish conscientiousness signals, such as education level and grooming/dress.

I think this post could use more context. Your point seems interesting and novel, but I'm not 100% certain what it is or what question you're trying to address.