"3WC would be a terrible movie. "There's too much dialogue and not enough sex and explosions", they would say, and they'd be right."

Hmm... maybe we should put together a play version of 3WC; plays can't really have sex and explosions, and dialogue is a much larger driver.

In case that wasn't a rhetorical question, you almost certainly did: your Introduction to Bayesian Reasoning is the fourth Google hit for "Bayesian", the third Google hit for "Bayes", and has a PageRank of 5, the same as the Cryonics Institute's main website.

"Would they take the next step, and try to eliminate the unbearable pain of broken hearts, when someone's lover stops loving them?"

We already have an (admittedly limited) counterexample to this, in that many Westerners choose to seek out and do somewhat painful things (e.g., climbing Everest), even when they are perfectly capable of avoiding them, and even at considerable monetary cost.

"Some ordinary young man in college suddenly decides that everyone around them is staring at them because they're part of the conspiracy."

I don't think that this is at all crazy, assuming that "they" refers to you (people are staring at me because I'm part of the conspiracy), rather than everyone else (people are staring at me because everyone in the room is part of the conspiracy). Certainly it's happened to me.

"Poetry aside, a human being isn't the seed of a god."

A human isn't, but one could certainly argue that humanity is.

"But with a sufficient surplus of power, you could start doing things the eudaimonic way. Start rethinking the life experience as a road to internalizing new strengths, instead of just trying to keep people alive efficiently."

It should be noted that this doesn't make the phenomenon of borrowed strength go away; it just outsources it to the FAI. If anything, given the kind of perfect recall and easy access to information that an FAI would have, its ratio of cached historical information to newly created information should be much higher than a human's. Of course, an FAI wouldn't suffer the problem of losing the information's deep structure like a human would, but it seems to be a fairly consistent principle that the amount of cached data grows faster than the rate of data generation.

The problem here, the thing that actually decreases utility, is humans taking actions without sufficient understanding of the potential consequences, in cases where the observation that "Humans seem to do very well at recognizing the need to check for global consequences by perceiving local features of an action" (CFAI 3.2.2) fails. I wonder, out of a sense of morbid curiosity, what the record is for the highest amount of damage caused by a single human without said human ever realizing that they did anything bad.

"By now, it's probably true that at least some people have eaten 162,329 potato chips in their lifetimes. That's even less novelty and challenge than carving 162,329 table legs."

Nitpick: it takes much less time and mental energy to eat a potato chip than to carve a table leg, so the total quantity of sphexishness is much smaller.

"Or, to make it somewhat less strong, as if I woke up one morning to find that banks were charging negative interest on loans?"

They already have, at least for a short while:

http://www.nytimes.com/2008/12/10/business/10markets.html

"We are currently living through a crisis that is in large part due to this lack of appreciation for emergent behavior. Not only people in general but trained economists, even Nobel laureates like Paul Krugman, lack the imagination to understand the emergent behavior of free monetary systems."

"Emergence", in this instance, is an empty buzzword, see http://lesswrong.com/lw/iv/the_futility_of_emergence/. "Imagination" also seems likely to be an empty buzzword, in the sense of http://lesswrong.com/lw/jb/applause_lights/.

"precisely because the emergent behavior of the market is more powerful, more intelligent, in solving the problem of resource allocation than any committee."

Markets do not allocate resources anywhere near optimally, and sometimes they do even worse than committees of bureaucrats; the bureaucrats may, for instance, increase total utility by allocating more resources to poor people, on the grounds that a marginal dollar is worth more to a poor person than to a rich one.
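
As a toy illustration of that marginal-utility argument (a sketch of my own, assuming log utility as the usual stand-in for diminishing marginal utility; the wealth and transfer figures are made up):

```python
import math

def total_log_utility(wealths):
    """Total utility under log utility, a standard model of diminishing marginal utility."""
    return sum(math.log(w) for w in wealths)

rich, poor = 100_000.0, 1_000.0
transfer = 100.0  # reallocate $100 from the rich person to the poor person

before = total_log_utility([rich, poor])
after = total_log_utility([rich - transfer, poor + transfer])

# The transfer raises total utility because a marginal dollar is worth far more
# to the poorer person (about 1/1000 utils) than to the richer one (about 1/100000 utils).
print(f"utility gain from transfer: {after - before:.4f}")  # positive
```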

"Once you understand it then it's not so amazing but it is very difficult to understand. Ben Bernanke doesn't understand and Alan Greenspan didn't understand before him."

If you think you know more than Bernanke, then why haven't you become rich by making bets that beat the market's expectations?

"It can be improved on by randomisation: randomly betting on heads with p=0.5 and tails with p=0.5 is a stochastic strategy which offers improved returns - and there is no deterministic strategy which produces superior results to it."

Eliezer has already noted that it is possible for a random strategy to be superior to a stupid deterministic strategy:

"But it is possible in theory, since you can have things that are anti-optimized. Say, the average state has utility -10, but the current state has an unusually low utility of -100. So in this case, a random jump has an expected benefit. If you happen to be standing in the middle of a lava pit, running around at random is better than staying in the same place. (Not best, but better.) A given AI algorithm can do better when randomness is injected, provided that some step of the unrandomized algorithm is doing worse than random."

The point of the post is that a random strategy is never better than the best possible deterministic strategy. And assuming that you're betting on real, physical coin flips, a random strategy is actually worse than the deterministic strategy of betting that the coin will land the same way up as it started (see http://www.npr.org/templates/story/story.php?storyId=1697475).
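
To make the "never better than the best deterministic strategy" point concrete, here is a small sketch (the +1/-1 payoff scheme is my own illustrative assumption, not from either post): for any coin bias, randomizing your bet just averages the two deterministic strategies, so it can never beat the better of the two.

```python
def expected_payoff(p_heads, p_bet_heads):
    """Expected payoff when a correct call pays +1 and a wrong call costs -1."""
    p_win = p_bet_heads * p_heads + (1 - p_bet_heads) * (1 - p_heads)
    return 2 * p_win - 1

for p_heads in (0.3, 0.5, 0.51, 0.7):
    always_heads = expected_payoff(p_heads, 1.0)
    always_tails = expected_payoff(p_heads, 0.0)
    coin_flip_bet = expected_payoff(p_heads, 0.5)  # the randomized strategy
    best_deterministic = max(always_heads, always_tails)
    # The randomized strategy is the average of the two deterministic ones,
    # so it can only tie the best deterministic strategy (when the coin is fair),
    # never beat it.
    assert coin_flip_bet <= best_deterministic + 1e-12
    print(p_heads, round(always_heads, 3), round(always_tails, 3),
          round(coin_flip_bet, 3), round(best_deterministic, 3))
```

With the roughly 51% same-side-up bias reported in the linked study, the deterministic "bet the side it started on" strategy comes out strictly ahead of the coin-flip bet.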

"It is not clear this can be shown to be true. 'Improvement' depends on what is valued, and what the context permits. In the real world, the value of an algorithm depends on not only its abstract mathematical properties but the costs of implementing it in an environment for which we have only imperfect knowledge."

Eliezer specifically noted this in the post:

"Sometimes it is too expensive to take advantage of all the knowledge that we could, in theory, acquire from previous tests. Moreover, a complete enumeration or interval-skipping algorithm would still end up being stupid. In this case, computer scientists often use a cheap pseudo-random algorithm, because the computational cost of using our knowledge exceeds the benefit to be gained from using it. This does not show the power of randomness, but, rather, the predictable stupidity of certain specific deterministic algorithms on that particular problem."

"This may not sound like a profound insight, since it is true by definition. But consider - how many comic books talk about "mutation" as if it were a source of power? Mutation is random. It's the selection part, not the mutation part, that explains the trends of evolution."

I think this is a specific case of people treating optimization power as if it just drops out of the sky at random. This is certainly true for some individual humans (e.g., winning the lottery), but as you point out, it can't be true for the system as a whole.

"These greedy algorithms work fine for some problems, but on other problems it has been found that greedy local algorithms get stuck in local minima."

Er, do you mean local maxima?
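
Either way, here is a tiny sketch of the failure mode (the landscape below is my own toy example, not anything from the article): a greedy climber stalls on the nearer, lower peak.

```python
# A one-dimensional "fitness landscape" with a local peak (height 3)
# and a higher global peak (height 7).
landscape = [0, 1, 2, 3, 2, 1, 0, 3, 5, 7, 5, 3, 0]

def greedy_hill_climb(start):
    """Move to the best neighboring cell until no neighbor is higher."""
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(landscape)]
        best = max(neighbors, key=lambda p: landscape[p])
        if landscape[best] <= landscape[pos]:
            return pos  # no uphill move available: stuck at a local optimum
        pos = best

print(greedy_hill_climb(1))   # ends at index 3 (height 3), the local peak
print(greedy_hill_climb(11))  # ends at index 9 (height 7), the global peak
```

Whether you call the trap a local minimum or a local maximum just depends on whether the objective is a cost or a fitness; the failure mode is the same.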

"When dealing with a signal that is just below the threshold, a noiseless system won’t be able to perceive it at all. But a noisy system will pick out some of it - some of the time, the noise and the weak signal will add together in such a way that the result is strong enough for the system to react to it positively."

In such a case, you can clearly affect the content of the signal, so why not just give it a blanket boost of ten points (or whatever), if the threshold is so high that you're missing desirable data?
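
As a rough sketch of both the quoted effect and the objection (the threshold, signal level, noise scale, and boost below are all made-up numbers of my own): noise lets a sub-threshold signal through occasionally, while a plain deterministic offset lets it through every time.

```python
import random

THRESHOLD = 1.0
SIGNAL = 0.9       # just below the threshold
NOISE_SCALE = 0.2
BOOST = 0.2        # a blanket deterministic offset
TRIALS = 10_000

random.seed(0)

def detection_rate(transform):
    """Fraction of trials in which the transformed signal crosses the threshold."""
    hits = sum(1 for _ in range(TRIALS) if transform(SIGNAL) >= THRESHOLD)
    return hits / TRIALS

no_noise = detection_rate(lambda s: s)
with_noise = detection_rate(lambda s: s + random.gauss(0, NOISE_SCALE))
with_boost = detection_rate(lambda s: s + BOOST)

print(f"noiseless:            {no_noise:.2f}")    # 0.00 -- never crosses the threshold
print(f"stochastic resonance: {with_noise:.2f}")  # roughly 0.31 with these numbers
print(f"deterministic boost:  {with_boost:.2f}")  # 1.00 -- always crosses it
```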
