This is a link-post for: https://www.foretold.io/c/1bea107b-6a7f-4f39-a599-0a2d285ae101/n/5ceba5ae-60fc-4bd3-93aa-eeb333a15464

---

Epistemic status: gesturing at something that feels very important. Based on a true story. Show, don't tell. Release early.

Why are documents and spreadsheets so successful?

Why does code, which is many times more powerful than spreadsheets, have many times fewer users?

I think it's because code forces you not just to express your ideas in code, but also to think in code. It imposes constraints on your ontology for thinking.

Having spent the last year working on forecasting, I gained some experience with how ontologies can significantly constrain technology projects.

I think such constraints have...

  • heavily limited the usefulness of past forecasting efforts
  • resulted in broad misconceptions about what forecasting could be used for
  • hidden a large space of interesting work that can be unlocked if we solved them

So the link-post is an interactive essay where I attempt to show what solving them might look like in practice, using some technology which is currently not supported on LessWrong.

(Note that the link will not work well on mobile.)

---

[Thoughts on what to do if there is an ontological mismatch between one's thinking and a tool]

  • When I saw Jacob present a version of the OP in person, the discussion focused on cases where the correct response is to use a different tool, ideally one that matches the natural ontology of one's thinking. E.g. using a whiteboard rather than a Google Doc to express thoughts most naturally expressed as a mind map.
  • But I think it's important that there are other cases where it can actually be beneficial to 'learn how to think in a different ontology'. I think this is quite common in pure maths, but it also shows up in more everyday situations: e.g. I initially found it quite counterintuitive to use, say, Emacs org mode or LaTeX, but after I had paid the fixed cost of adapting to the ontologies they impose, I actually think they made me more efficient at some tasks.
  • Similarly, I think it's useful to be able to translate between different ontologies. To learn this, it can be useful to deliberately expose oneself to ontologies that seem unnatural/bad/cumbersome initially.
> I think it's useful to be able to translate between different ontologies

This is one thing that is done very well by apps like Airtable and Notion, in terms of allowing you to show the same content in different ontologies (table / kanban board / list / calendar / pinterest-style mood board).

Similarly, when you’re using Roam for documents, you don’t have to decide upfront “Do I want to have high-level bullet-points for team members, or for projects?“. The ability to embed text blocks in different places means you can change to another ontology quite seamlessly later, while preserving the same content.

Ozzie Gooen pointed out to me that this is perhaps an abuse of terminology, since "the semantic data is the same, and that typically when 'ontology' is used for code environments, it describes what the data means, not how it’s displayed."

In response: the thing I'm pointing at that seems interesting is that there is a bit of a continuum between different displays and different semantic data — two “displays” which are easily interchangeable in Roam will not be interchangeable in Docs or Workflowy, since those lack the “embed bullet-point” functionality, even though superficially they’re all just bullet-point lists.

I was very confused by the notebook interface at first. I think you need to log in for it to work?

You need to log in if you want to make predictions. You should be able to see others' predictions without logging in. (At least on Firefox and Chrome)

Note that the notebook interface is fairly new and still has some quirks that are being worked out.

It is really slick; I was mostly confused because the text itself talked about using the interface to make predictions. The only interface-specific annoyance was that there didn't seem to be a way to close the prediction sidebar once it was open.

[Thoughts on the term "ontology".]

  • In philosophy, ontology refers to the subfield aiming to answer the question "Which things exist?".
  • Perhaps as a consequence of this, hearing "ontology" makes me think of questions like: What are the primitives or building blocks here? E.g. in a spreadsheet, there would be cells; in a graph there would be vertices and edges etc.
  • But I think the important things Jacob is talking about show up as answers to the different question of which relations hold between these primitives. E.g. the important thing about a spreadsheet is that each cell is uniquely identified by a pair of numbers, and that you can perform computations/functions on the values of cells; the important thing about a (certain type of) graph is that every edge has exactly one vertex as source and exactly one vertex as target.
  • Brainstorm for alternative terms: structure, conceptual structure, conceptual apparatus, conceptual scheme, conceptual toolkit, relations, relational constraints, conceptual landscape, ...
  • (The math version of my complaint is that "ontology" makes me think of set theory, whereas I think the important bits are more naturally visible from a category theory point of view.)
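The spreadsheet/graph contrast in the bullets above can be sketched in code. This is a minimal illustrative toy, not anything from the post — the data and names are invented — but it shows how the two ontologies make different relations primary:

```python
# Spreadsheet ontology: the primitive is a cell, uniquely identified by a
# (row, column) pair, and derived cells are functions of other cells' values.
spreadsheet = {
    (0, 0): 10,  # A1
    (0, 1): 20,  # B1
}
# C1 is defined as a computation over other cells, like =A1+B1.
spreadsheet[(0, 2)] = spreadsheet[(0, 0)] + spreadsheet[(0, 1)]

# Graph ontology: the primitives are vertices and edges, and the defining
# relation is that every edge has exactly one source and one target vertex.
vertices = {"A1", "B1", "C1"}
edges = [("A1", "C1"), ("B1", "C1")]  # dependency edges feeding into C1

# Similar dependency information is present in both representations, but the
# spreadsheet makes the computation primary, while the graph makes the
# relation itself primary and directly queryable.
dependencies_of_c1 = sorted(src for (src, tgt) in edges if tgt == "C1")
```

Which relations are primitive determines which questions are cheap to ask: "what is C1's value?" is immediate in the spreadsheet, while "what feeds into C1?" is immediate in the graph.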