
RomeoStevens comments on "Flinching away from truth" is often about *protecting* the epistemology

Post author: AnnaSalamon 20 December 2016 06:39PM


Comment author: RomeoStevens 20 December 2016 06:56:05AM

Monolithic goal buckets cause a cluster of failure modes I haven't fully explored yet. Say you have a common goal like 'exercise.' The single goal bucket causes means-substitution, so you don't fully cover the dimensions of the space that are relevant to you: e.g. you run, and then morally license skipping weights because you already made progress toward the exercise bucket. Because the bucket is large and contains many dimensions of value, it induces mental flinches, makes ugh fields easier to develop, and makes catastrophizing and moralizing more likely. The single goal also causes fewer potential means to be explored or brainstormed in the first place (people seem to have some setting that tells them how many options goal-like things need, regardless of goal complexity). Lower-resolution feedback from conceptualizing it as one thing makes training toward the thing significantly harder (deliberate practice can be viewed as the process by which feedback resolution is increased).

Monolithic goals also generally sit at a greater construal distance than finer-grained goals, which induces thinking about them in that more abstract mode; this hides details that are relevant to making, say, TAPs (trigger-action plans) about the thing, which requires awareness of near-mode stumbling blocks. Since monolithic goals tend toward simplicity, they also discourage exploration: 'exercise -> run' feels like matching construal levels, but 'increase VO2 max -> go find out more about how to increase VO2 max' also matches construal levels, and the second looks closer to a construct that results in actions toward the thing.
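The bucket structure above can be sketched concretely. This is a minimal illustrative model, not anything from the original comment: the goal names, dimensions, and progress numbers are all invented. The point is only that a single coarse number hides which dimensions of value went untouched, while separate buckets surface the gaps.

```python
# Illustrative sketch (all names and numbers invented): a monolithic goal
# bucket vs. finer-grained per-dimension buckets.

monolithic = {"exercise": 0.0}  # one bucket, one coarse progress signal

# A run nudges the single bucket, licensing "I already exercised":
monolithic["exercise"] += 0.5

# Finer-grained buckets keep separate feedback per dimension of value:
fine_grained = {
    "vo2_max": 0.0,   # cardiovascular capacity
    "strength": 0.0,  # resistance training
    "mobility": 0.0,  # flexibility / joint range
}
fine_grained["vo2_max"] += 0.5  # the run only moves the dimension it serves

# Untouched dimensions are visible as explicit gaps rather than being
# averaged away inside one number:
gaps = [name for name, progress in fine_grained.items() if progress == 0.0]
print(gaps)
```

In the monolithic version the same run leaves no trace of what was skipped; in the fine-grained version `gaps` names the neglected dimensions directly.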

I think some cleaner handles for this cluster would be useful; I'm interested in ideas for making it crisper.

Meta: better tools/handles for talking about ontology problems would greatly reduce a large class of errors, IMO. Programmers deal with this most frequently and concretely, so figuring out what works for them and seeing if there is a port of some kind seems valuable. To start with a terrible version to iterate on: UML diagrams were an attempt to capture ontology more cleanly. What is better?
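As one hypothetical instance of "what programmers do", an ontology can be made explicit with plain type definitions, which play the role a UML class diagram would: every dimension of a goal must be named to exist at all. All class and field names below are invented for illustration.

```python
# Hypothetical sketch of capturing an ontology in code: the type structure
# forces each dimension of a goal to be named explicitly, like a UML class
# diagram would. All names here are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class SubGoal:
    name: str
    feedback_signal: str  # what you would measure to know you're moving


@dataclass
class Goal:
    name: str
    sub_goals: list = field(default_factory=list)

    def dimensions(self):
        """List the named dimensions this goal decomposes into."""
        return [s.name for s in self.sub_goals]


exercise = Goal("exercise", [
    SubGoal("vo2_max", "resting heart rate trend"),
    SubGoal("strength", "working weight on main lifts"),
])
print(exercise.dimensions())
```

The value here is not the code itself but that the representation makes a missing dimension a visible omission in the data model rather than an invisible one in a single word like 'exercise'.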