What topics are uncontroversial and within a short inferential distance of most people? My intuition is that this is close to the definition of "boring".
Listen to actual conversation sometime: most of it is excruciatingly boring if you think about it in terms of information. But as other posters have pointed out, most conversation is about social bonding, not exchanging information.
A model is a map of the territory. For example, we could create an emulation of a light bulb using only the most basic understanding of one: you flip a switch, magic goes through a wire, and on goes the light bulb. Or, if you wished (and could), you might make the model more accurate by going down to the level of electrons, or even further. However, you wouldn't want a model at the most fundamental level if you're trying to understand, say, how artificial light affects human behavior. Models are a tool for explaining, understanding, and predicting phenomena conveniently.
Or for representing phenomena in an altered "format". For example, I have read a description of the bimetallic spring in a thermostat as a model of the room's temperature presented in a way that the furnace can make use of it.
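As a rough illustration of the abstraction-level point (the class names and numbers below are my own assumptions, not anything from the original example), here is a minimal sketch of the same light bulb modeled two ways:

```python
# Two models of the same light bulb at different levels of abstraction.
# Which one is the "right" map depends on the question you are asking.

class SwitchLevelBulb:
    """Coarse model: flip a switch, the bulb is on. Enough detail for
    studying how artificial light affects human behavior."""
    def __init__(self):
        self.on = False

    def flip_switch(self):
        self.on = not self.on


class CircuitLevelBulb:
    """Finer model: track voltage and resistance so the power dissipated
    in the filament can be computed. Useful for electrical questions,
    overkill for the behavioral one. Values are illustrative."""
    def __init__(self, voltage=120.0, resistance=240.0):
        self.voltage = voltage          # volts across the filament
        self.resistance = resistance    # ohms (hot filament)
        self.on = False

    def flip_switch(self):
        self.on = not self.on

    def power_watts(self):
        # P = V^2 / R while the circuit is closed, otherwise no power drawn.
        return (self.voltage ** 2) / self.resistance if self.on else 0.0


coarse, fine = SwitchLevelBulb(), CircuitLevelBulb()
coarse.flip_switch()
fine.flip_switch()
print(coarse.on)            # True -- all the coarse map tells you
print(fine.power_watts())   # 60.0 -- extra detail the finer map adds
```

The coarse map answers the behavioral question just as well, at a fraction of the cost; the finer map only earns its keep when you ask a finer question.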
Thinking about Eliezer's post about Doublethink: speaking of deliberate, conscious self-deception, he opines, "Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen."
This seems odd for a site devoted to the principle that most of the time, most human minds are very biased. Don't we have the brains of one species of apes that has evolved to be particularly sensitive to politics? Why wouldn't doublethink be the evolutionarily adaptive norm?
My intuition, based on my own private experience, is the opposite of Eliezer's -- I'd assume that most industrialized people practice some degree of doublethink routinely. I'd further suspect that this talent can be cultivated, and I'd think that (say) most North Koreans might be extremely skilled at deliberate self-deception, in a manner that would have been very familiar to George Orwell himself.
This seems like an empirical question. What's the evidence out there?
Humans normally get away with their biases by not examining them closely, and, when the biases are pointed out to them, by denying that they personally are biased. Willful ignorance and denial of reality seem to be two of the most common human mental traits.
Lure of the Void (Part 1) is a recent blog post on Urban Future about the culture of space travel in the West.
It has a link to a new article by Sylvia Engdahl, who has written on the importance of space for years: http://www.sylviaengdahl.com/space.htm
Idea: Rational Agreement Software
I think this would be the most useful part, even if it were only partially completed, since even a partial database would help greatly both with finding previously unrecognized biases and with the logic-checking AI. It may even make the latter possible without the natural language understanding that Nancy thinks would likely be needed for it.
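Purely as a hedged sketch of that last point (the claim format and the check below are my own assumptions about how such a database might start): if claims are entered in an already-structured form, even a very crude consistency check needs no natural language understanding at all.

```python
# A toy "rational agreement" database: claims are stored in structured form
# (person, proposition, stance), so a simple consistency check can run
# without any natural language understanding. Everything here is a sketch.

from collections import defaultdict

class ClaimDatabase:
    def __init__(self):
        # (person, proposition) -> set of stances ("for" / "against") recorded so far
        self.stances = defaultdict(set)

    def add_claim(self, person, proposition, stance):
        assert stance in ("for", "against")
        self.stances[(person, proposition)].add(stance)

    def contradictions(self):
        """Return (person, proposition) pairs that have been both affirmed and denied."""
        return [key for key, s in self.stances.items() if {"for", "against"} <= s]


db = ClaimDatabase()
db.add_claim("alice", "minimum wage raises unemployment", "for")
db.add_claim("alice", "minimum wage raises unemployment", "against")
print(db.contradictions())
# [('alice', 'minimum wage raises unemployment')]
```

The hard part, of course, is getting people to enter their claims in that structured form in the first place; the check itself is trivial once they do.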
I criticize FAI because I don't think it will work. But I am not at all unhappy that someone is working on it, because I could be wrong or their work could contribute to something else that does work even if FAI doesn't (serendipity is the inverse of Murphy's law). Nor do I think they should spread their resources excessively by trying to work on too many different ideas. I just think LessWrong should act more as a clearinghouse for other, parallel ideas, such as intelligence amplification, that may prevent a bad Singularity in the absence of FAI.
Also remember the corollary that any decision made under pressure could probably stand to be reviewed at leisure.
Everybody does that anyway; it is usually called second-guessing yourself. The best rule is not to decide under pressure unless you really have to: take the time to think things through.
While I agree with the post, I'm not sure that it's actually a strong defense of sunk costs as being more heuristic than fallacy; it depends upon your past self having more information than your current self. I believe the sunk cost fallacy is generally used to refer to the phenomenon where, having additional information that makes the original investment look like a bad idea, you proceed to additional investment instead of cutting loose.
That is, the sunk cost fallacy generally refers to a situation in which it is more or less explicitly stated that you have more information, rather than less. Starting from the assumption that you have less information in the future than in the past permits this reasoning, but the question becomes whether or not the assumption actually holds. My intuitive reaction is that it doesn't.
it depends upon your past self having more information than your current self.
Or maybe you just spent more time thinking it through before. "Never doubt under pressure what you have calculated at leisure." I think that previous states should have some influence on your current choices. As the link says:
If your evidence may be substantially incomplete you shouldn't just ignore sunk costs -- they contain valuable information about decisions you or others made in the past, perhaps after much greater thought or access to evidence than that of which you are currently capable.
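A worked toy example of the two readings (all of the numbers below are invented for illustration): the textbook rule compares only the remaining cost against the remaining expected payoff, while the quoted point says that if your current estimate is noisy, the fact that a possibly better-informed past self chose to invest is itself evidence, and should pull your estimate back up before you decide.

```python
# Toy numbers, purely illustrative: should you finish a project after new
# (but possibly incomplete) evidence lowers your payoff estimate?

sunk_cost = 100.0          # already spent -- irrelevant under the textbook rule
remaining_cost = 60.0
current_estimate = 50.0    # your new, possibly noisy, estimate of the payoff
past_estimate = 120.0      # what your (perhaps better-informed) past self believed

# Textbook sunk-cost reasoning: ignore the 100 already spent and compare
# only the marginal payoff against the marginal cost.
finish_naive = current_estimate > remaining_cost
print("ignore sunk costs, finish?", finish_naive)   # False -> abandon

# The quoted point: if your current evidence is weak, treat the past decision
# as information and blend the two estimates (the weight is an assumption).
confidence_in_current = 0.4
blended = (confidence_in_current * current_estimate
           + (1 - confidence_in_current) * past_estimate)
print("blended payoff estimate:", blended)          # 92.0
print("treat past decision as evidence, finish?", blended > remaining_cost)  # True
```

Whether the second answer is the right one depends entirely on whether your past self really did have better evidence, which is exactly the empirical question raised above.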
What's wrong with not having any more reason to live after you get the utilons?
I see you found yet another problem: with no way to get more utilons, you die when those in the box are used up. And utility theory says you need utility to live, not just to give you a reason to live.
Sounds about right. With the occasional driverless car, which is really pretty amazing.
I think a working AGI is more likely to result from expanding or generalizing from a working driverless car than from an academic program somewhere. A program to improve the "judgement" of a working narrow AI strikes me as a much more plausible route to AGI.