
Comment author: RomeoStevens 19 May 2017 08:32:37PM *  3 points [-]

I think values are confusing because they aren't a natural kind. The first decomposition that made sense to me was two axes: stated/revealed and local/global.

Stated local values are optimized for positional goods; stated global values are optimized for alliance building; revealed local values are optimized for basic needs and risk avoidance; revealed global values barely exist, and when they do they are semi-random, based on mimesis and other weak signals (humans are not automatically strategic, etc.).

Trying to build a coherent picture out of the various outputs of four semi-independent processes doesn't quite work. Even stating it this way reifies values too much. I think there are just local pattern recognizers/optimizers doing different things, and we have globally applied the label 'values' to them because of their overlapping connotations in affordance space, and because switching between levels of abstraction is highly useful for calling people out in sophisticated, hard-to-counter ways in monkey politics.

It's also useful to think of local/global as Dyson's birds and frogs, or as surveying vs. navigation.

I'm unfamiliar with existing attempts at value decomposition; pointers to papers etc. would be welcome.

On predictions: humans treating themselves and others as agents seems to lead to a lot of problems. We could also deconstruct poor predictions by which sub-system's limits they run into: availability, working memory, failure to propagate uncertainty, inconsistent time preferences... can we just invert the bullet points from Superforecasting here?

Comment author: RomeoStevens 17 May 2017 02:25:18PM 1 point [-]

The relevant term is judgmental bootstrapping in the forecasting literature, if anyone wants to dive deeper. It is extremely practically relevant in many circumstances, such as hiring, where ad hoc linear models outperformed veteran hiring managers.
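
The core move can be sketched in a few lines. This is a toy illustration of an ad hoc additive linear model of the kind the bootstrapping literature studies; the cue names, values, and unit weights are all hypothetical, not validated predictors.

```python
# Sketch of an ad hoc ("improper") additive linear model for candidate
# screening, in the spirit of judgmental bootstrapping. Features and
# weights are hypothetical illustrations.

def score_candidate(features, weights=None):
    """Additive linear score: sum of weight * feature value.

    With no fitted weights we fall back to unit weights, the usual
    "improper linear model" baseline in this literature.
    """
    if weights is None:
        weights = {name: 1.0 for name in features}  # unit weights
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical candidates, each cue already standardized to roughly 0-1.
alice = {"work_sample": 0.9, "structured_interview": 0.7, "references": 0.6}
bob = {"work_sample": 0.4, "structured_interview": 0.8, "references": 0.9}

ranked = sorted([("alice", alice), ("bob", bob)],
                key=lambda kv: score_candidate(kv[1]), reverse=True)
print([name for name, _ in ranked])  # ['alice', 'bob']
```

The point is not that these weights are right, but that even crude, explicit weights applied consistently tend to beat holistic expert judgment.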

Comment author: Raemon 14 May 2017 10:33:04AM 3 points [-]

I currently have almost zero knowledge of Naruto and I'm interested in hearing more things about the perception/action skills thing as it applies to Naruto Classic (and/or rationalist!naruto)

Comment author: RomeoStevens 15 May 2017 06:52:02AM *  5 points [-]

Time Braid and The Waves Arisen. Super fun reads, and they also seem to put me in agenty mode even better than other rationalist fics do. I haven't seen the Naruto anime and I got on just fine with both.

As for why my model works this way: it's heavily influenced by the research on deliberate practice. Essentially, that research caused me to see expert performance as the combination of several core skills, all predicated on perceptual skills.

The first is generating the correct chunkings that mirror the causal structure of the domain, which are composed of distinctions that you must learn to make. If you've ever done something like music, where you went from hearing complicated sounds to hearing specific 'phrases', that is what I'm pointing to with perception of chunks. To build these up, one also has to isolate the feedback/reward loop that allows you to zero in on your performance of that chunk: cleanly delineating the hits from the misses, and having that information arrive with the smallest time delay possible.

The other skill is navigating the chunked tree, which is predicated on perception of the cues/proxies that indicate which decision paths to take in your knowledge tree. This structure then has the ability to get activated by experiences in the real world, when you notice something that looks like a chunk you've already seen. Normal self-help techniques generally don't have these hooks that fire at specific times and places, meaning you likely just don't remember to use them.

Comment author: jsalvatier 13 May 2017 08:31:15PM 1 point [-]

John Maxwell posted this quote:

The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it.

-- Daniel Kahneman

Comment author: RomeoStevens 14 May 2017 08:12:24AM 5 points [-]

Ontology lock-in. If you have nice stuff built on top of something, you'll demand proof commensurate with the value of those things when someone questions the base layer, even if the things built on top could be supported by alternative base layers. S1 is cautious about this, which is reasonable; our environment is much safer for experimentation than it used to be.

Comment author: RomeoStevens 14 May 2017 08:08:03AM *  4 points [-]

This is why I like Naruto as a rationalist fanfic substrate: perceptual skills are explicitly upstream of action skills in the Naruto universe. I think this mirrors the real universe and explains much of the valley of bad self-help. Action skills are pointless if you don't have the cues for when, where, and why to deploy them.

Another frame on the same concept: don't keep teaching people spells when their mana pool size sucks.

Comment author: Lumifer 12 May 2017 03:48:43PM 0 points [-]

This implies that a "natural" well-rested, well-exercised, well-fed state of a human is the best he could ever hope to be and that biochem interventions (like nootropics) can compensate for non-optimality elsewhere but can't lift you above your natural best.

Would you accept this implication?

Comment author: RomeoStevens 12 May 2017 11:39:47PM *  0 points [-]

That is mostly what the research review on nootropics indicated. I've encountered a similar conclusion in many other areas, enough so that my prior in new domains is now that you can cut off the tail of bad outcomes but can't do much to the upside.
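
A toy simulation makes the shape of this prior concrete. The distribution and the intervention rule below are invented for illustration: the intervention rescues bad days toward baseline but never pushes performance above its natural ceiling.

```python
import random

# Toy model of "cut off the bad tail, can't lift the ceiling". An
# intervention pulls below-baseline days halfway back to baseline and
# leaves good days untouched. All numbers are made up.

random.seed(0)
natural = [random.gauss(100, 15) for _ in range(10_000)]  # daily performance
baseline = 100.0

treated = [x + 0.5 * (baseline - x) if x < baseline else x for x in natural]

mean_gain = sum(treated) / len(treated) - sum(natural) / len(natural)
print(mean_gain > 0)                 # average performance improves
print(max(treated) == max(natural))  # True: the best day is unchanged
```

The mean rises because the bad tail is compressed, while the maximum is untouched, which matches the "tail-cutting, not upside-lifting" pattern.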

Comment author: Valentine 12 May 2017 05:45:46AM 0 points [-]

Meta question:

How do I create links when the URL has close-parentheses in it?

E.g., I can't seem to link properly to the Wikipedia article on common knowledge in logic. I could hack around this by creating a TinyURL for this, but surely there's a nicer way of doing this within Less Wrong?

Comment author: RomeoStevens 12 May 2017 07:18:49AM *  3 points [-]

Backslash-escape special characters. Test: Common knowledge

Done by adding a '\' before the close-parenthesis in the URL, i.e. writing the link target so it ends in 'logic\)' (without the quotes), so the parenthesis doesn't terminate the link early.

Comment author: Valentine 12 May 2017 05:07:22AM 1 point [-]

I'm not familiar with factor analysis, so I have to say no, I haven't considered this. Can you recommend me a good place to start looking to get a flavor of what you mean?

Comment author: RomeoStevens 12 May 2017 06:41:39AM *  3 points [-]

The Big Five personality traits are likely the factor analysis most people have heard of. Worth reading the blurb here: https://en.wikipedia.org/wiki/Factor_analysis
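
To make the idea concrete, here is a toy version of the setup: several observed variables driven by one latent factor plus noise. To keep it self-contained I use a principal-axis shortcut (top eigenvector of the correlation matrix) rather than a full maximum-likelihood factor analysis; the loadings are invented for the example.

```python
import numpy as np

# One-factor toy model: p observed variables, each a loading times a
# latent factor plus independent noise, scaled to unit total variance.
rng = np.random.default_rng(0)
n, p = 2000, 5
factor = rng.normal(size=n)                       # latent variable
loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5])    # hypothetical loadings
noise = rng.normal(size=(n, p)) * np.sqrt(1 - loadings**2)
X = factor[:, None] * loadings + noise            # observed data

# Principal-axis shortcut: approximate loadings from the top eigenvector
# of the correlation matrix, scaled by the root of its eigenvalue.
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)           # ascending order
top = eigvecs[:, -1] * np.sqrt(eigvals[-1])
top = np.sign(top[0]) * top                       # fix sign convention

print(np.round(top, 2))  # roughly recovers the ordering of the true loadings
```

The recovered values slightly overestimate the true loadings (the PCA shortcut absorbs some uniqueness into the factor), but the dominant-variable structure comes through, which is the intuition being taught.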

Many, many models can be thought of as folk factor analyses, whereby people try to reduce a complex output variable to a human-readable model of a few dominant input variables. Why care?

Additive linear models outperform or tie expert performance in the forecasting literature: http://repository.upenn.edu/cgi/viewcontent.cgi?article=1178&context=marketing_papers

Teaching factor analysis is basically an excuse to load some additional intuitions that make Fermi estimates (linear model generation for approximate answers) more flexible at representing a broader variety of problems. Good sources on Fermi estimates (e.g. the first part of The Art of Insight in Science and Engineering) often explain some of the concepts used in factor analysis in layman's terms. For example, instead of sensitivity analysis they'll just talk about staying scope-sensitive as you go, so that you drop non-dominant terms.
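
The "drop non-dominant terms" move can be sketched directly. The cost breakdown below is a made-up example; the point is checking how little the estimate shifts when small contributions are discarded.

```python
# Sketch of dropping non-dominant terms in a Fermi estimate: keep only
# contributions within a chosen fraction of the largest term and compare
# against the full sum. The numbers are invented.

def fermi_sum(terms, cutoff_ratio=0.1):
    """Return (full sum, sum of dominant terms only)."""
    full = sum(terms)
    biggest = max(terms)
    dominant = [t for t in terms if t >= cutoff_ratio * biggest]
    return full, sum(dominant)

# Hypothetical breakdown (arbitrary units): two dominant terms, two small.
terms = [500.0, 300.0, 20.0, 5.0]
full, approx = fermi_sum(terms)
print(full, approx)               # 825.0 800.0
print(abs(full - approx) / full)  # about 3% error from dropping small terms
```

A few percent of error is usually well inside the uncertainty of the inputs themselves, which is why the simplification is safe.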

It's also handy for people to know that many 'tricky' problems become a bit more tractable if you think of them as having more degrees of freedom than the human brain is good at working with, and that this indicates what sorts of tricks you might want to employ, e.g. finding the upstream constraint or some other method of reducing the search space first. A good example is the E-M theory of John Boyd fame.

It also just generally helps in clarifying problems, since it forces you to confront your choice of proxy measure for your output variable. Clarifying this generally raises awareness of possible failures (future Goodhart's law problems, selection effects, etc.).

Basically, I think it is a fairly powerful unifying model for a lot of stuff. It also seems like it might be closer to the metal, so to speak, in that it is something a Bayesian net can implement.

Credit to Jonah Sinick for pointing out that learning this and a few other high level statistics concepts would cause a bunch of other models to simplify greatly.

Comment author: RomeoStevens 12 May 2017 03:29:53AM *  0 points [-]

Have you considered trying to teach factor analysis as a fuzzy model (very useful when used loosely, not just rigorously)? It seems strongly related to this, and it imports some nice additional connotations about hypothesis search, which I think is a common blind spot.

Comment author: RomeoStevens 12 May 2017 03:25:47AM 0 points [-]

Model-uncertainty-based discounting also.
