"Narrow AI can be dangerous too" is an interesting idea, but I don't find this version of it very convincing. I think you've accidentally snuck in some capabilities that lie outside its narrow domain: in this scenario the AI has to model the actual population, including its size, which doesn't seem relevant to the narrow task. Also, it seems unlikely that people would use reducing the absolute number of deaths as the goal function, as opposed to the chance of death for those already alive.
There have been numerous critiques of Connection Theory already, and in both the rationalist and EA communities I encounter people disavowing it much more often than endorsing it. So I don't think we have anything to worry about in that direction. I'm more worried by the zeal with which people criticize it, given that Leverage rarely seems to mention it, all the online material about it is quite dated, and many of the people criticizing it don't seem to actually know much about it.
To be extra clear: I'm not a proponent of CT; I'm very skeptical of it. It's just distressing to me how quick the LW community is to politicize the issue.
One part that worries me is that they put on the EA Summit (and ran it quite well), and thus had a largish presence there. Anders's talk was kind of uncomfortable for me to watch.
Perhaps you could see trying to think of analogies as sampling randomly in conceptspace from a reference class that the concept you are interested in belongs to.
Imagine a big book of short computer programs that simulate real-life phenomena. I'm working on a new program for a particular phenomenon I'm trying to model. I don't have much data about my phenomenon, and I'm trying to figure out if a recursive function (say) would accurately model the phenomenon. By looking through my book of programs, I can look at the frequency with which recursive functions seem to pop up when modeling reality and adjust my credence that the phenomenon can be modeled with a recursive function accordingly.
Choosing only to look at pages for phenomena that have some kind of isomorphism with the one I'm trying to model amounts to sampling from a smaller set of data points from a tighter reference class.
This suggests an obvious way to improve on reasoning by analogy: try to come up with a bunch of analogies, in a way that involves minimal motivated cognition (to ensure a representative sample), and then look at the fraction of the analogies for which a particular proposition holds (perhaps weighting more isomorphic analogies more heavily).
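A minimal sketch of that weighted-fraction scheme, just to make the arithmetic concrete (the function name and the choice of similarity weights in (0, 1] are my own assumptions, not anything from the post):

```python
# Hypothetical sketch: estimate credence in a proposition from a sample of
# analogies, weighting each analogy by how isomorphic it is to the target.

def weighted_fraction(analogies):
    """analogies: list of (holds, similarity) pairs, where `holds` says
    whether the proposition holds for that analogy, and `similarity` is
    a weight in (0, 1] for how isomorphic the analogy is."""
    total = sum(sim for _, sim in analogies)
    if total == 0:
        raise ValueError("need at least one analogy with nonzero weight")
    return sum(sim for holds, sim in analogies if holds) / total

# Example: three analogies; the proposition holds in the two closer ones.
sample = [(True, 0.9), (True, 0.6), (False, 0.2)]
print(weighted_fraction(sample))  # 1.5 / 1.7, roughly 0.88
```

With all weights equal this reduces to the plain fraction of analogies for which the proposition holds; the weights are just one way to let tighter isomorphisms count for more.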
I like the idea of coming up with lots of analogies and averaging them or seeing if they predict things in common.
- Human Compatible AGI
- Human Safe AGI
- Cautious AGI
- Secure AGI
- Benign AGI
It's not obvious to me that Qiaochu would endorse utility functions as a standard for "ideal rationality". I, for one, do not.
Even if you don't think it's the ideal, utility-based decision theory does give us insights that I don't think you can naturally pick up from anything else we've discovered yet.
About 50% of my day-to-day friends are LWers. All 3 of my housemates are LWers. I've hosted Yvain and another LWer. Most of the people I know in SF are through LW. I've had a serious business opportunity through someone I know via LW. I've had a couple of romantic interests.
This is a good thing, but it also means that we're probably less likely than average to comment about an argument's relevance even in cases where we should comment on it.
That's my experience with myself.
closing browser tabs as soon as I’m done with them
There should be a browser feature along the lines of: if a tab is deeply buried and hasn't been used in a while, it gets closed automatically.
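The proposed policy is simple enough to sketch; here is one possible version in Python, where "deeply buried" means far from the active tab and the two thresholds are purely illustrative assumptions:

```python
# Hypothetical sketch of the suggested feature: close tabs that are both
# deeply buried (far from the active tab) and stale (unused for a while).
import time

STALE_SECONDS = 7 * 24 * 3600  # assumed threshold: unused for a week
BURIED_DEPTH = 20              # assumed threshold: 20+ positions from the active tab

def tabs_to_close(last_used, active_index, now=None):
    """last_used: list of last-use timestamps (seconds), one per tab, in
    tab order. Returns the indices of tabs the policy would close."""
    now = time.time() if now is None else now
    return [i for i, t in enumerate(last_used)
            if abs(i - active_index) >= BURIED_DEPTH
            and now - t >= STALE_SECONDS]
```

A real implementation would live in the browser (or an extension) rather than polling timestamps like this, but the decision rule itself is just these two conjoined thresholds.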
This seems quite close to Beware Trivial Inconveniences. It's good to have an outside established name for this, though.
It seems the general goal could be cashed out in simple ways, with biochemistry, epidemiology, and a (potentially flawed) measure of "health".
I think you're sneaking in a lot with the measure of health. As far as I can see, the only reason it's dangerous is that it cashes out in the real world, on the real broad population rather than a simulation. Having the AI reason about a drug's effects on a real-world population definitely seems like a general skill, not a narrow one.