Thanks for disclosing.
I feel this should be part of this kind of post, though not knowing the details before reading is helpful too.
 

Answer by brummli

A pretty good trigger for me is whenever I ask myself: "Is that plausible?" 

How would that app work? In what way would it be similar? I am failing to see the part worth emulating in my example.

I will definitely read this. I've been trying to find these kinds of preferences in myself for some time. 

Answer by brummli
  • Polar caps and glaciers: once they are gone, the albedo change sets a high barrier against regrowth.
  • When getting to know a sect, you can start out in any social relation to them, but at some point you settle into one of two very distinct states: in or out. Entering isn't too hard; exiting has a very high 'activation energy' (see the sketch after this list).
  • Acquired taste preferences (coffee or tea) seem to be bistable. (I'd guess many habits are.)
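A minimal simulation sketch of the shared structure, with made-up parameters (the symmetric double-well potential is only an assumption for illustration): a noisy state with two attractors rarely crosses the barrier between them, which is what the high 'activation energy' cashes out to.

```python
import numpy as np

# Toy bistable system: a noisy state in a double-well potential
# V(x) = x^4/4 - x^2/2, with stable states near x = -1 and x = +1
# and a barrier at x = 0 playing the role of the 'activation energy'.
def drift(x):
    return -(x**3 - x)  # -dV/dx

rng = np.random.default_rng(0)
dt, noise = 0.01, 0.3      # step size and fluctuation strength (made-up values)
x = np.empty(200_000)
x[0] = 1.0                 # start settled in the right-hand well
for t in range(1, len(x)):
    x[t] = x[t - 1] + drift(x[t - 1]) * dt + noise * np.sqrt(dt) * rng.normal()

# The state spends long stretches near one well; small fluctuations
# almost never push it over the barrier into the other state.
print("fraction of time in the 'entered' state:", (x > 0).mean())
```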
Answer by brummli

It is harder than expected not to recycle known instances. I had to avoid physics and markets entirely to feel like I was finding examples rather than remembering them.

  • There are a lot of somewhat cyclic stochastic processes that I would call stable equilibria. My whiteboard tends to have about the same fraction of free space most of the time: sketching or erasing something is a fluctuation, while changing my usage habits could make a long-term difference (see the sketch after this list).
  • The density of social events in my calendar is surprisingly constant: less is boring, more is exhausting, so it regulates itself. Fluctuations include random clusters of important events or stretches with little spare time. To change it lastingly, I could become a recluse or more sociable.
  • The mass of the contents of most rooms is dominated by a few long-lasting things and is mostly stable. Fetching a cup of tea is a small disturbance; buying new furniture suddenly shifts the equilibrium.
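A minimal sketch of the whiteboard example as a mean-reverting process, with made-up numbers: fluctuations move the level around, but it keeps getting pulled back toward the same equilibrium until the underlying habit (the target) changes.

```python
import numpy as np

# Toy stable equilibrium with fluctuations: a mean-reverting process.
# Noise stands in for day-to-day disturbances (sketching, erasing);
# `target` stands in for the usage habit that sets the equilibrium.
rng = np.random.default_rng(1)
target = 0.5               # e.g. "about half the whiteboard is free" (made-up)
pull, noise, dt = 1.0, 0.1, 0.1
x = np.empty(50_000)
x[0] = target
for t in range(1, len(x)):
    x[t] = x[t - 1] + pull * (target - x[t - 1]) * dt + noise * np.sqrt(dt) * rng.normal()

# Individual values wander, but the long-run average stays near the target;
# only changing `target` itself (a habit change) moves it lastingly.
print("long-run mean free fraction:", round(x.mean(), 2))
```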

I regularly come up blank when thinking about the usefulness of sharing something.
Useful content tends to teach me a model or enable me to build one.

  • Unexpectedly useful content extends a model the writer didn't know the reader had, or fills a conceptual hole in the reader's model.
  • Unexpectedly useless content tries to teach about something the reader already has a good model for.

I'd love to have even a bad heuristic for this problem (in the cases that aren't totally obvious).