Comment author: billswift 30 August 2012 01:54:34AM 2 points [-]

My comment from July 5, "Go Bayes! So if you just make your priors big enough, you never have to change your mind.", was rather snarky, but it illustrates a real problem: if your priors are not reasonably accurate, it takes a lot of new information and updating to get them straightened out. That is one reason many introductions to Bayes' rule use medical decision making, which has reasonably well-established base rates (priors) to begin with.
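A toy calculation makes the point concrete (the prevalence, sensitivity, and false-positive numbers below are my own illustrative assumptions, not from the comment):

```python
# Posterior probability of disease given a positive test, via Bayes' rule:
# P(d | pos) = P(pos | d) P(d) / [P(pos | d) P(d) + P(pos | ~d) P(~d)]
def posterior(prior, sensitivity, false_positive_rate):
    true_pos = sensitivity * prior
    false_pos = false_positive_rate * (1 - prior)
    return true_pos / (true_pos + false_pos)

# With a well-established base rate of 1%, a positive result from a
# 90%-sensitive test with a 9% false-positive rate is still weak evidence:
print(posterior(0.01, 0.9, 0.09))   # ~0.092

# Start from a badly wrong prior of 50% and the same test looks conclusive:
print(posterior(0.5, 0.9, 0.09))    # ~0.909
```

With a bad prior, the same evidence supports a very different conclusion, and it takes many rounds of updating to recover.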

Comment author: billswift 30 August 2012 01:45:27AM 3 points [-]

Off-topic, but in the context of "best mistake", here is John Ringo's definition of serendipity from The Last Centurion:

We were saved by serendipity. (Which is a term meaning "I fucked up but things came out better than if I hadn't.")

Comment author: billswift 30 August 2012 01:35:15AM *  3 points [-]

Derek Lowe also commented on the studies. Repeating my comment there:

So the comparison of the two experiments shows that underfeeding results in life extension over monkeys that over-eat, but not over monkeys that eat a normal diet. Where is the surprise there?

ADDED: I just noticed the paragraph here is missing a key bit of information needed to make sense of my comment. The WNPRC experiment, which found positive results from calorie restriction, fed their controls ad libitum, as much as they wanted to eat. The newer NIA experiment fed the controls a standard, healthy diet, and found no effect of diet restriction.

Comment author: kilobug 27 August 2012 09:50:21AM 29 points [-]

I'm very skeptical of reasoning like "it was like that in the ancestral environment, so it must be good". There are at least three reasons this reasoning makes me uncomfortable:

  1. Even if we consider evolution to be a perfect optimizer (which it is not), there is a huge difference between "our digestive system is optimized to make the best possible use of food X" and "food X is the best possible food for our digestive system". If you made an algorithm A optimized to transmit data on a noisy channel N, that doesn't mean the algorithm wouldn't run better on a less noisy channel C. There may be an algorithm B that works better on the clear channel C than A does, but still, A can work better on C than on N.

  2. Evolution doesn't optimize for the same purpose we do. Evolution doesn't optimize for us to live long, it has a very low pressure to make us live past ~60, for example.

  3. We have completely different lifestyles and activities than we did during the Paleolithic, and the optimal diet very likely depends on lifestyle and activity.
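The channel analogy in point 1 can be sketched as a toy simulation (the choice of a 3x repetition code and the flip probabilities are my own illustrative assumptions):

```python
import random

random.seed(0)

def send(bit, flip_prob):
    """Transmit one bit over a channel that flips it with probability flip_prob."""
    return bit ^ (random.random() < flip_prob)

def algorithm_A(bit, flip_prob):
    """Algorithm A: a 3x repetition code with majority vote, designed for noise."""
    votes = sum(send(bit, flip_prob) for _ in range(3))
    return int(votes >= 2)

def error_rate(flip_prob, trials=10000):
    return sum(algorithm_A(1, flip_prob) != 1 for _ in range(trials)) / trials

print(error_rate(0.2))   # noisy channel N: some residual errors remain (~0.10)
print(error_rate(0.0))   # clear channel C: A makes no errors at all
```

A was designed for the noisy channel, yet it performs strictly better on the clear one; being optimized *for* an environment does not make that environment optimal for you.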

That said, what would convince me to adopt a diet is not plausible-sounding reasoning, but evidence of short-term and long-term effects on a reasonable sample size, with a control group. Something that seems very rare in the diet field, sadly.

Comment author: billswift 28 August 2012 02:59:51AM 1 point [-]
  1. Evolution doesn't stop. We have continued to evolve, adapting to new environments, including new foods.
Comment author: wattsd 27 August 2012 01:21:59AM 0 points [-]

I agree that the only way to practice decisions is to make them, but I think there is more to it than that. The deliberate part of deliberate practice is that you are actively trying to get better. The deliberate performance paper I linked to touches on this a bit, in that deliberate practice is challenging for professionals and that something else might work better (they advocate the first 5 methods in that paper).

Beyond making decisions, you need to have an expectation of what will happen, otherwise hindsight bias is that much harder to overcome. It's the scientific method: hypothesis->test->new hypothesis. Without defining what you expect ahead of time, it is much easier to just say "Oh yeah, this makes sense" and normalize without actually improving understanding.

Comment author: billswift 27 August 2012 01:49:31AM 0 points [-]

I don't disagree with anything in this comment; I was just pointing out that "deliberate practice" has several requirements, including that practice be separate from execution, which make it less usable, or even totally unusable, for some areas, such as decision making and choosing. The other main requirements are that it has a specific goal, that it should not be enjoyable, and, as you pointed out, that it is challenging. Another thing, not part of the original requirements but encompassed by them, is that you are not practicing when you are in "flow".

Comment author: wattsd 26 August 2012 08:13:04PM 0 points [-]

An alternative to improving your intuition and removing your biases would be to find other and better processes and tools to rely on. And then actually use them.

I think that is part of what I was attempting to get at, though I probably didn't do a very good job. In a sense we are biased to use certain processes or tools. The only way to change those "default settings" is to deliberately practice something better, so that when the time comes, you'll be ready.

Comment author: billswift 27 August 2012 12:31:28AM 0 points [-]

Some places, the "deliberate practice" idea breaks down; choosing and decision making is one of them. There is no way to "practice" them except by actually making choices and decisions; separating practice from normal execution is not possible.

Comment author: MileyCyrus 25 August 2012 11:17:29PM *  2 points [-]

You can't go wrong with writing, as it is nearly universally required and will be among the last skills to be Turing'd. A STEM major who can write well is a scarce commodity, if the GRE scores are any indication. (I would not recommend taking a course to improve your writing, however. Just start writing a blog or something. And read Eats, Shoots & Leaves.)

If you have trouble socializing, learn how to do that. Most job opportunities come through socializing. Even the jobs that are advertised still require a competent interview.

Comment author: billswift 25 August 2012 11:43:31PM 0 points [-]

Colleges have a breadth requirement; one source I read suggested using that to take a writing heavy course in history or philosophy that requires lots of short papers in order to improve your writing.

Comment author: jimrandomh 25 August 2012 02:11:34AM 0 points [-]

"The subject is confronted with the evidence that his wife is also his mother, and additionally with the fact that this GLUT predicts he will do X". Is it clear that an accurate X exists?

You mean, he is confronted with the statement that this GLUT predicts he will do X. That statement may or may not be true, depending on his behavior. He can choose a strategy of always doing what is predicted, always doing the opposite of what is predicted, or ignoring the prediction and choosing based on unrelated criteria. A lookup table containing accurate predictions of this sort can be constructed in the first and third cases, but not in the second.
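The three strategies can be sketched as a fixed-point search over a binary choice (the reduction to {0, 1} and the particular "unrelated criterion" are my own illustrative assumptions):

```python
# A "prediction" is accurate iff the subject's response to hearing it
# matches it; we look for such a fixed point under each strategy.

def comply(prediction):    # always do what is predicted
    return prediction

def defy(prediction):      # always do the opposite
    return 1 - prediction

def ignore(prediction):    # choose on unrelated criteria (say, always 0)
    return 0

def accurate_predictions(strategy):
    return [p for p in (0, 1) if strategy(p) == p]

print(accurate_predictions(comply))  # [0, 1]: any prediction is self-fulfilling
print(accurate_predictions(ignore))  # [0]:    the GLUT can record the choice
print(accurate_predictions(defy))    # []:     no accurate prediction exists
```

The defiance strategy has no fixed point, which is why no accurate lookup table can be built for the second case.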

Comment author: billswift 25 August 2012 02:50:12AM 0 points [-]

Except that if the simulation really is accurate, his response should already have been taken into account. Reality is deterministic; an adequately accurate and detailed program should be able to predict it exactly. Human free will relies on the fact that our behavior has too many influences to be predicted by any past or current means. Currently, we can't even define all of the influences.

Comment author: Dolores1984 22 August 2012 08:18:07PM 2 points [-]

It certainly doesn't represent mine. The architectural shortcomings of narrow AI do not lend themselves to gradual improvement. At some point, you're hamstrung by your inability to solve certain crucial mathematical issues.

Comment author: billswift 23 August 2012 01:42:13PM *  1 point [-]

You add a parallel module to solve the new issue and a supervisory module to arbitrate between them. There are more elaborate systems that could likely work better for many particular situations, but even this simple system suggests there is little substance to your criticism. See Minsky's Society of Mind, or some papers on modularity in evolutionary psych, for more details.

Comment author: atucker 23 August 2012 03:45:01AM -1 points [-]

Narrow-AI driverless cars will probably not decide that they need to take over the world in order to get to their destination in the most efficient way. Even if it would be better, I would be very surprised if they decided to model the world that generally for the purposes of driving.

There's only so much modeling of the world/general capability you need in order to solve very domain-specific problems.

Comment author: billswift 23 August 2012 01:32:12PM 0 points [-]

The reason for expanding a narrow AI is the same as the reason a tool agent does not stay restricted: the narrow domain it is designed to function in is embedded in the complexity of the real world. Eventually someone is going to realize that the agent/AI could provide better service if it understood more about how its job fits into the broader concerns of its passengers/users/customers, and decide to do something about it.
