Comment author: Stuart_Armstrong 22 August 2014 10:52:06AM 2 points [-]

It seems the general goal could be cashed out in simple ways, with biochemistry, epidemiology, and a (potentially flawed) measure of "health".

Comment author: jsalvatier 28 August 2014 06:57:08PM 1 point [-]

I think you're sneaking in a lot with the measure of health. As far as I can see, the only reason it's dangerous is that it cashes out in the real world, on the real broad population rather than a simulation. Having the AI reason about a drug's effects on a real-world population definitely seems like a general skill, not a narrow skill.

Comment author: jsalvatier 24 August 2014 06:17:13PM 3 points [-]

"Narrow AI can be dangerous too" is an interesting idea, but I don't find this example very convincing. I think you've accidentally snuck in some things that aren't inside its narrow domain. In this scenario the AI has to model the actual population, including its size, which doesn't seem relevant to the narrow task. Also, it seems unlikely that people would use reducing the absolute number of deaths as the goal function, as opposed to the chance of death for those already alive.
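
A minimal sketch (all names and numbers hypothetical) of the distinction drawn here: a goal function that counts the absolute number of deaths scales with how many people the AI models as existing, whereas a per-person chance-of-death goal does not, so only the former forces the AI to model the quantity of the real population.

    # Hypothetical sketch: "absolute number of deaths" vs. "chance of death
    # for those already alive" as goal functions over a predicted population.

    def expected_total_deaths(death_probs):
        # Scales with population size, so optimizing it requires modeling the
        # whole real-world population, not just the drug's biochemistry.
        return sum(death_probs)

    def mean_death_risk(death_probs):
        # Per existing person; population size cancels out.
        return sum(death_probs) / len(death_probs)

    # Two populations with identical per-person risk but different sizes.
    small = [0.01] * 1000
    large = [0.01] * 1000000

    print(expected_total_deaths(small), expected_total_deaths(large))  # ~10 vs. ~10000
    print(mean_death_risk(small), mean_death_risk(large))              # 0.01 vs. 0.01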

Comment author: nbouscal 01 August 2014 05:51:59PM *  9 points [-]

There have been numerous critiques of Connection Theory already, and in both the rationalist and EA communities I encounter people disavowing it far more often than people endorsing it. So I don't think we have anything to worry about in that direction. I'm more worried by the zeal with which people criticize it, given that Leverage rarely seems to mention it, all of the online material about it is quite dated, and many of the people whose criticism of it I question hardly seem to actually know anything about it.

To be extra clear: I'm not a proponent of CT; I'm very skeptical of it. It's just distressing to me how quick the LW community is to politicize the issue.

Comment author: jsalvatier 04 August 2014 10:10:58PM 3 points [-]

One part that worries me is that they put on the EA Summit (and ran it quite well), and thus had a largish presence there. Anders' talk was kind of uncomfortable for me to watch.

Comment author: John_Maxwell_IV 17 July 2014 07:02:06AM 8 points [-]

Perhaps you could see trying to think of analogies as sampling randomly in conceptspace from a reference class that the concept you are interested in belongs to.

Imagine a big book of short computer programs that simulate real-life phenomena. I'm working on a new program for a particular phenomenon I'm trying to model. I don't have much data about my phenomenon, and I'm trying to figure out if a recursive function (say) would accurately model the phenomenon. By looking through my book of programs, I can look at the frequency with which recursive functions seem to pop up when modeling reality and adjust my credence that the phenomenon can be modeled with a recursive function accordingly.

Choosing to look only at pages for phenomena that have some kind of isomorphism with the one I'm trying to model amounts to sampling a smaller set of data points from a tighter reference class.

This suggests an obvious way to improve on reasoning by analogy: try to come up with a bunch of analogies, in a way that involves minimal motivated cognition (to ensure a representative sample), and then look at the fraction of the analogies for which a particular proposition holds (perhaps weighting more isomorphic analogies more heavily).
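
A minimal sketch of that procedure, with hypothetical analogies and weights: collect analogies with as little motivated selection as possible, then take the isomorphism-weighted fraction in which the proposition of interest holds.

    # Hypothetical sketch: weighted fraction of analogies for which a
    # proposition holds, weighting more isomorphic analogies more heavily.

    analogies = [
        # (name, degree of isomorphism in [0, 1], does the proposition hold?)
        ("analogy A", 0.9, True),
        ("analogy B", 0.5, False),
        ("analogy C", 0.7, True),
        ("analogy D", 0.2, True),
    ]

    def weighted_support(items):
        total = sum(weight for _, weight, _ in items)
        supporting = sum(weight for _, weight, holds in items if holds)
        return supporting / total  # rough credence that the proposition holds

    print(weighted_support(analogies))  # ~0.78 with these toy numbers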

Comment author: jsalvatier 17 July 2014 10:52:27PM 1 point [-]

I like the idea of coming up with lots of analogies and averaging them or seeing if they predict things in common.

Comment author: jsalvatier 05 July 2014 10:36:38PM *  0 points [-]

  1. Human Compatible AGI
  2. Human Safe AGI
  3. Cautious AGI
  4. Secure AGI
  5. Benign AGI

Comment author: jsteinhardt 20 June 2014 09:06:17AM 1 point [-]

It's not obvious to me that Qiaochu would endorse utility functions as a standard for "ideal rationality". I, for one, do not.

Comment author: jsalvatier 20 June 2014 07:06:15PM 1 point [-]

Even if you don't think it's the ideal, utility-based decision theory does give us insights that I don't think you can naturally pick up from anything else we've discovered yet.

Comment author: jsalvatier 29 May 2014 12:07:03AM 1 point [-]

About 50% of my day-to-day friends are LWers. All 3 of my housemates are LWers. I've hosted Yvain and another LWer. Most of the people I know in SF are through LW. I've had a serious business opportunity through someone I know via LW. I've had a couple of romantic interests.

Comment author: jsalvatier 07 May 2014 07:42:46PM 0 points [-]

This is a good thing, but it also means that we're probably less likely than average to comment about an argument's relevance even in cases where we should comment on it.

That's my experience with myself.

In response to Channel factors
Comment author: Daniel_Burfoot 12 March 2014 05:32:19PM -1 points [-]

closing browser tabs as soon as I’m done with them

There should be a browser feature along the lines of: if a tab is deeply buried and hasn't been used in a while, it gets closed automatically.

Comment author: jsalvatier 15 March 2014 01:33:18AM 1 point [-]
In response to Channel factors
Comment author: jsalvatier 15 March 2014 01:28:03AM 5 points [-]

This seems quite close to Beware Trivial Inconveniences. It's good to have an established name for this from outside, though.
