In response to An Alien God
Comment author: Jannia 02 November 2007 08:07:29AM 7 points [-]

Maybe predators are wary of rattles and don't step on the snake. Or maybe the rattle diverts attention from the snake's head.

The point of a rattle, as I understand it, is that producing poison is metabolically expensive and time-consuming. A snake that can chase off a dozen threats a day by wagging its tail is much better off, probability-of-producing-offspring-wise, than one that can only bite and poison three threats before being left defenseless for a few days.

It does leave me wondering what benefits the intermediate mutations provide, though, since going from a normal snake tail to a rattle seems like it would take more than one step.

In response to comment by Jannia on An Alien God
Comment author: sboo 30 December 2013 10:25:22PM 4 points [-]

even if poison were cheap, every fight has a risk. better to neither fight nor flee.

Comment author: sboo 07 April 2013 02:03:20AM *  2 points [-]

"we irrationally find present costs more salient than future costs"

Present Bias is not always irrational!

it can be rationalized (as in, "find rational cause" not "make up excuse") as hedging against uncertainty. the future is never certain. our predictions about the future aren't even probable. if you save your money instead of spending it, you might lose it all to madoff. if you don't use that giftcard to some restaurant, your tastes might change and it won't be worth anything.

in fact, Geometric Discounting maximizes average (undiscounted) utility if, at every moment in time, there is some probability that you will transition to a state where you won't ever be able to get more utility. i think of it as the Apocalypse. then the discount is less about preference and more about an uncertain future.

even better, let's say you know THAT there is some "Apocalypse probability", but not WHAT it is. put a beta distribution on it, a natural prior on probabilities. then every day, when you wake up (i.e. the Coin Of Fates lands heads), it's a little more likely that the daily apocalypse is less likely (e.g. think about how unlikely flipping heads 365 times in a row with a fair coin is; you'd have to be a fool not to lower your estimate of the tails odds). update by bayes, you get laplace's rule, and Hyperbolically Discounted reward. it's like the Anthropic Principle.
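here's a minimal sketch of that derivation in python (function names are mine, just for illustration). with a known daily apocalypse probability h, surviving t more days has probability (1-h)^t, i.e. geometric discounting. with the daily survival probability unknown under a Beta(a, b) prior, marginalizing it out gives a product that, for the uniform Beta(1, 1) prior, collapses to 1/(1+t), the hyperbolic discount:

```python
from math import prod

def geometric_survival(h, t):
    """P(the world survives t more days) when the daily apocalypse
    probability h is known exactly: (1 - h)^t, i.e. geometric discounting."""
    return (1 - h) ** t

def uncertain_survival(a, b, t):
    """P(the world survives t more days) when the daily survival
    probability p is unknown, with a Beta(a, b) prior on it.
    Marginalizing: E[p^t] = prod_{k=0}^{t-1} (a + k) / (a + b + k)."""
    return prod((a + k) / (a + b + k) for k in range(t))

# with a uniform Beta(1, 1) prior this collapses to hyperbolic 1 / (1 + t)
for t in range(10):
    assert abs(uncertain_survival(1, 1, t) - 1 / (1 + t)) < 1e-12
```

the product is just the t-th moment of the Beta distribution; updating by bayes after each survived day (laplace's rule) produces the same numbers day by day.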

i had to put in the math there to say that present bias can be rational and logical, and that this can be shown formally and precisely. but really, it comes from common sense. just because a behavioral economist tells you that they'll give you money tomorrow (and you know they're telling the truth, since unlike psychology journals, economics journals won't accept deceptive experiments), doesn't mean you'll get the money (the world changes, e.g. they forget or err in mailing the check), and it doesn't mean you'll want the money (you change, e.g. you win the lottery). shit happens. people change.

having said all that, it's safe to say that most of present bias is irrational. this is obvious from the frequent feelings of frustration with our present problems and anger against our past self for not solving them. at least, for me.

it's just i've been smelling this Fetish lately for hating heuristics, biases, and intuition. but really, these things work really well much of the time for many tasks. and that's often the first thing we hear in informed discussions, but i think people get caught up and forget about it (not saying lukeprog did, just making a big deal about one word he used).

(it's like Lazy Evaluation. haskell is often fast despite, not because of, it. but sometimes, you really didn't need to do something, and since everything is like a generator, you save big on computation.)
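a rough python analogy for the "everything is like a generator" point (a sketch of the idea, not haskell's actual semantics):

```python
import itertools

def squares():
    """a conceptually infinite stream of squares. like a lazy haskell
    list, each element is only computed when something demands it."""
    n = 0
    while True:
        yield n * n
        n += 1

# only three elements are ever computed, even though the stream is
# infinite; an eager version of squares() would never terminate
first_three = list(itertools.islice(squares(), 3))
print(first_three)  # [0, 1, 4]
```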

anyway, great post! (i stopped reading it halfway through because of the silliness of reading the internets to procrastinate on my chores, and finished it after :) i need to keep rereading it and thinking about it until i can figure out a way to remember and implement these things in my own mind.

ps check out "Strotz Meets Allais: Diminishing Impatience and the Certainty Effect"

Comment author: jsalvatier 07 October 2012 03:22:14AM 0 points [-]

In your verbal description it says 40 miles, but in the matrix it says 40 minutes.

Comment author: sboo 14 October 2012 06:13:00AM 0 points [-]

60mph?

Comment author: Dave_Greene 28 April 2009 12:41:00AM 3 points [-]

The example of emergence that comes to my mind most readily is a simple observation that Douglas Hofstadter made in _Godel, Escher, Bach_ -- a book which definitely does not use "emergent" as a synonym for "magical":

In a game of Go, once there are two separate open spaces -- "eyes" -- in the middle of a connected group of stones, that group becomes invincible (because the opponent can't fill both holes with one move). There's no official rule in Go that says "Patterns with two eyes can't be captured"; the rules just say that to capture a group you have to surround it completely and leave no open spaces. Thus two-eye invincibility is an emergent consequence of the rules of Go.

It's important that this new emergent rule is a significant _simplification_: once you realize that two eyes are invincible, you no longer have to do any complicated analysis about how close a two-eyed group is to being completely surrounded. It's safe, full stop (at least as long as you don't stupidly fill in the holes yourself).

The game of Go has very few rules. In practice, the two-eye invincibility "rule" is a very important and useful one, if you want to play the game well. To try to force Eliezer to talk about "emergence" or something equivalent, I would ask: where did the two-eye invincibility rule come from?

-- Okay, now, so what _isn't_ emergent? There's another Go rule, the "ko" rule, which says you can't play in such a way as to get the exact same board position back after two moves: no capture followed by immediate recapture unless it changes the board. There's nothing "emergent" about that rule that I can think of -- it helps keep Go games from going on forever, but it has no simple-but-unexpected high-level consequences.

There are a lot of strategic patterns of play in Go that are not emergent, either -- e.g., there are no simple rules of thumb that can tell you, in all cases, whether a group with one eye or no eyes can be captured or not. Often the answer depends on a single apparently unrelated stone way over on the other side of the board. No simplifications available here, therefore no emergence.

There are many other more involved examples of emergence (and non-emergence) -- gliders and spaceships in Conway's Game of Life come to mind, and Herschels and random ash densities -- but this blog comment is too narrow to contain a good summary of them all...

Two other books that do a fine job (in my opinion) of describing the concept of "emergence" as distinct from "magic" are Cohen and Stewart's _The Collapse of Chaos_ and _Figments of Reality_.

Comment author: sboo 08 October 2012 06:12:35AM 0 points [-]

i think by 'emergence' you just mean 'implication'

Comment author: sboo 20 August 2012 10:06:17AM 1 point [-]

have you succeeded in chaining these "one-inference-steps"?

that is, have you found you can take people with different beliefs / less domain knowledge, in casual conversation, and quickly explain things one inference at a time? i've found that i can only pull off a few of those before i start sounding too weird, even if they follow along and are delightfully surprised by each one.

Comment author: sboo 20 August 2012 09:59:47AM 0 points [-]

i like what you said about fiction perceived as distant reality. "long long ago in a galaxy far far away".

In response to Cached Thoughts
Comment author: [deleted] 20 January 2012 12:11:21AM 1 point [-]

"One neuropsychologist estimates that visual perception is 90 percent memory, less than 10 percent sensory [nerve signals]." Apparently, we even use cached thought to see. We're really biased, huh?

In response to comment by [deleted] on Cached Thoughts
Comment author: sboo 20 August 2012 09:37:21AM 1 point [-]

src?

In response to Cached Thoughts
Comment author: kilobug 13 September 2011 04:25:16PM 3 points [-]

Interesting article, but I'm not so sure about the "cache" analogy. A typical cache in computer science has two major differences from the effect you're pointing to:

  1. A cache stores the result of a computation. The result of a complex algorithm, of a database or external server query, of a disk read, ... but the computation is done once, and then the result is stored for later use. Very few caches in computer science hold results that came from elsewhere without ever having been computed. While in your case, it's not "I once did the complex job of thinking about love and rationality, I concluded love is not rational, so I cached that computation, and later on I reuse it" but "I heard that love is not rational, I didn't do the computation, but still I stored the result".

  2. As a consequence of 1., a cached result in computer science is (almost) never wrong. It may be obsolete (an old version of an Internet page), but not wrong (that old version was the correct one when you fetched it). In the cases described by the article, the "cached thoughts" are wrong values stored in the cache, not just obsolete values.

What you refer to sounds more like a cache poisoning attack than the normal operation of a caching system.
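A minimal Python sketch of the contrast (illustrative only, not from the article): normal memoization computes a value once and then reuses it, while a "cached thought" is a value written into the cache without ever having been computed:

```python
cache = {}

def square(x):
    """Normal caching: compute once, store, reuse. The stored value may
    go stale in general, but it was correct when it was computed."""
    if x not in cache:
        cache[x] = x * x   # the computation actually happens here
    return cache[x]

assert square(4) == 16   # computed, then cached

# A "cached thought": a value stored without ever being computed,
# so it can simply be wrong -- closer to cache poisoning.
cache[5] = 7             # repeating something merely heard
assert square(5) == 7    # the wrong value is served from the cache
```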

I don't know how to rephrase the "cached thoughts" expression into something more accurate but still as potent as an expression, so I'll stick with your "cached thoughts" for now, but I'm uncomfortable with it because of those two differences.

In response to comment by kilobug on Cached Thoughts
Comment author: sboo 20 August 2012 09:36:48AM 0 points [-]

indeed.

if we decouple the cost of caching into "was true but is false" and "was never true", it may be that one dominates the other in likelihood. so maybe, the most efficient solution to the "cached thought" problem is not rethinking things, but ignoring most things by default. this, however, has the opportunity cost of false negatives.

i've personally found that i am very dependent on cached thoughts when learning/doing something new (not necessarily bad). like breadth over depth. what i do is try to force each cached thought to have a contradictory, or at least very different, twin.

e.g. though i have never coded in it, if i hear "C++", i'll (try to) think both "not worth it, too unsafe and errorprone" and "so worth it, speed and libraries". whenever i don't have enough data to have a strong opinion, i must say that i am ok with caching thoughts, as long as i know they are cached and i try to cache "contradictory twins" together.

In response to SotW: Be Specific
Comment author: sboo 04 April 2012 05:50:53AM 1 point [-]

i'm involved with a startup. there's so much well-intentioned bullshit and it's the founders who harm themselves more than any user or any investor.

i watched the video, and felt something was wrong, and then i read your article, you dissected it mercilessly, and nailed it precisely.

In response to comment by sboo on SotW: Be Specific
Comment author: sboo 04 April 2012 05:53:17AM 2 points [-]

precision is hard. you know, until i started studying math, i didn't even know what "be precise" really meant.
