Comment author: David_Gerard 22 August 2012 03:28:22PM 1 point [-]

Sounds about right. With the occasional driverless car, which is really pretty amazing.

Comment author: billswift 22 August 2012 04:02:20PM 1 point [-]

I think a working AGI is more likely to result from expanding or generalizing a working driverless car than from an academic program somewhere. A program to improve the "judgement" of a working narrow AI strikes me as a much more plausible route to AGI.

Comment author: jaibot 17 August 2012 02:44:42PM 0 points [-]

What topics aren't controversial and within a short inferential distance from most people? My intuition is that this is close to the definition of "boring".

Comment author: billswift 19 August 2012 04:21:09AM 3 points [-]

Listen to actual conversation sometime; most of it is excruciatingly boring if you think about it in terms of information. But as other posters have pointed out, most conversation is about social bonding, not exchanging information.

In response to comment by [deleted] on Open Thread, August 16-31, 2012
Comment author: siodine 17 August 2012 05:22:34PM 1 point [-]

A model is a map of the territory. For example, we could create an emulation of a light bulb using only the most basic understanding of a light bulb: you flip a switch, magic goes through a wire, and on goes the light bulb. Or, if you wished (and could), you would make the model more accurate by going down to the level of electrons, or even further. However, you wouldn't want a model at the most fundamental level if you're trying to understand, for example, how artificial light affects human behavior. Models are a tool for explaining, understanding, and predicting phenomena conveniently.

Comment author: billswift 19 August 2012 04:17:46AM 1 point [-]

Or for representing phenomena in an altered "format". For example, I have read a description of the bimetallic spring in a thermostat as a model of the room's temperature presented in a way that the furnace can make use of it.

Comment author: Costanza 16 August 2012 09:34:59PM 8 points [-]

Thinking about Eliezer's post about Doublethink. Speaking of deliberate, conscious self-deception, he opines: "Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen."

This seems odd for a site devoted to the principle that most of the time, most human minds are very biased. Don't we have the brains of a species of ape that evolved to be particularly sensitive to politics? Why wouldn't doublethink be the evolutionarily adaptive norm?

My intuition, based on my own private experience, is the opposite of Eliezer's -- I'd assume that most industrialized people practice some degree of doublethink routinely. I'd further suspect that this talent can be cultivated, and I'd think that (say) most North Koreans might be extremely skilled at deliberate self-deception, in a manner that would have been very familiar to George Orwell himself.

This seems like an empirical question. What's the evidence out there?

Comment author: billswift 19 August 2012 04:03:52AM 1 point [-]

Humans normally get away with their biases by not examining them closely, and, when the biases are pointed out to them, by denying that they, personally, are biased. Willful ignorance and denial of reality seem to be two of the most common human mental traits.

Comment author: [deleted] 17 August 2012 05:07:01PM *  3 points [-]

Lure of the Void (Part 1) is a recent blog post on Urban Future on the culture of space travel in the West.

In response to comment by [deleted] on Open Thread, August 16-31, 2012
Comment author: billswift 19 August 2012 03:55:53AM 1 point [-]

That has a link to a new article by Sylvia Engdahl, who has written on the importance of space for years: http://www.sylviaengdahl.com/space.htm

Comment author: Epiphany 18 August 2012 12:32:30AM *  6 points [-]

Idea: Rational Agreement Software

Comment author: billswift 18 August 2012 01:08:21AM 1 point [-]

I think this would be the most useful, even if it were only partially completed, since even a partial database would help greatly both with finding previously unrecognized biases and with the logic-checking AI. It may even make the latter possible without the natural language understanding that Nancy thinks would likely be needed for it.

Comment author: billswift 14 August 2012 01:04:44AM 7 points [-]

I criticize FAI because I don't think it will work. But I am not at all unhappy that someone is working on it, because I could be wrong or their work could contribute to something else that does work even if FAI doesn't (serendipity is the inverse of Murphy's law). Nor do I think they should spread their resources excessively by trying to work on too many different ideas. I just think LessWrong should act more as a clearinghouse for other, parallel ideas, such as intelligence amplification, that may prevent a bad Singularity in the absence of FAI.

Comment author: drethelin 08 August 2012 07:46:12PM 1 point [-]

Also remember the corollary that any decision made under pressure could probably stand to be reviewed at leisure.

Comment author: billswift 08 August 2012 11:05:55PM *  1 point [-]

Everybody does that anyway; it is usually called second-guessing yourself. The best rule is not to decide under pressure unless you really have to: take the time to think things through.

Comment author: OrphanWilde 08 August 2012 03:12:21PM 4 points [-]

While I agree with the post, I'm not sure that it's actually a strong defense of sunk costs as being more heuristic than fallacy; it depends upon your past self having more information than your current self. I believe the sunk cost fallacy is generally used to refer to the phenomenon where, having additional information that makes the original investment look like a bad idea, you proceed to additional investment instead of cutting loose.

That is, the sunk cost fallacy generally refers to a situation in which it is more or less explicitly stated that you have more information, rather than less. Starting from the assumption that you have less information in the future than in the past permits this reasoning, but the question becomes whether or not the assumption actually holds. My intuitive reaction is that it doesn't.

Comment author: billswift 08 August 2012 05:31:05PM 6 points [-]

it depends upon your past self having more information than your current self.

Or maybe you just spent more time thinking it through before. "Never doubt under pressure what you have calculated at leisure." I think that previous states should have some influence on your current choices. As the link says:

If your evidence may be substantially incomplete you shouldn't just ignore sunk costs -- they contain valuable information about decisions you or others made in the past, perhaps after much greater thought or access to evidence than that of which you are currently capable.

In response to comment by billswift on The Doubling Box
Comment author: Mestroyer 06 August 2012 02:13:45PM 0 points [-]

What's wrong with not having any more reason to live after you get the utilons?

In response to comment by Mestroyer on The Doubling Box
Comment author: billswift 06 August 2012 05:09:54PM -1 points [-]

I see you found yet another problem: with no way to get more utilons, you die when those in the box are used up. And utility theory says you need utility to live, not just to give you a reason to live.
