In the exercise of defining what we mean by a term, we explore our assumptions about the nature of the term and what it’s pointing at. Sometimes this leads us to get a better handle on a concept. Other times, it shows us our confusion.
As an example of the value of definitions, Y Combinator considers it important for founders to be able to state a company's purpose clearly and concisely, and treats the ability to do so as indicative of plausible success.
Here are two exercises:
- Write a paragraph to define the term applied rationality.
- Write a sentence to define the term applied rationality.
I encourage you to write your definitions before reading what others wrote. I suggest attempting the paragraph definition first and then the sentence, but feel free to reverse the order if that seems easier.
Please use the answer feature to respond to these exercises, and the comment function for everything else.
Paragraph:
When a bounded agent attempts a task, we observe some degree of success. But the degree of success depends on many factors that are not "part of" the agent, i.e. that lie outside the Cartesian boundary we (the observers) choose to draw for modeling purposes. These factors include things like power, luck, task difficulty, assistance, etc. If we are concerned with the agent as a learner and don't consider knowledge as part of the agent, then factors like knowledge, skills, beliefs, etc. are also externalized. Applied rationality is the result of attempting to distill this big complicated mapping from (agent, power, luck, task, knowledge, skills, beliefs, etc.) -> success down to just agent -> success. This lets us assign each agent a one-dimensional score: "how well do you achieve goals overall?" Note that for no-free-lunch reasons, this already-fuzzy notion gets fuzzier still: since no agent can do well on every possible task at once, the tasks have to be weighted by what the observer cares about.
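To make the compression concrete, here is a minimal sketch in illustrative notation (nothing here is standard; $\mathcal{T}$, $E$, and $\mathrm{success}$ are my own stand-ins): weight tasks by what the observer cares about, marginalize out the externalized factors, and score the agent by its expected success.

$$\mathrm{AR}(a) \;=\; \mathbb{E}_{t \sim \mathcal{T}}\, \mathbb{E}_{e \sim E(t)} \big[\, \mathrm{success}(a, t, e) \,\big]$$

Here $\mathcal{T}$ is the observer-weighted task distribution and $E(t)$ is a distribution over the externalized factors (power, luck, assistance, knowledge, skills, beliefs, ...) given task $t$. The one-dimensional score is exactly this expectation.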
Sentence:
Applied rationality is a property of a bounded agent that describes how successful the agent tends to be when you throw tasks at it, controlling for both "environmental" factors such as luck and "epistemic" factors such as beliefs.
Follow-up:
In this framing, it's pretty easy to define epistemic rationality analogously, by compressing the mapping (everything) -> prediction loss down to just agent -> prediction loss.
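In the same illustrative notation as above, only the scoring function changes:

$$\mathrm{ER}(a) \;=\; -\,\mathbb{E}_{t \sim \mathcal{T}}\, \mathbb{E}_{e \sim E(t)} \big[\, \mathrm{loss}(a, t, e) \,\big]$$

where $\mathrm{loss}$ is the agent's prediction loss on task $t$, so higher epistemic rationality means lower expected loss.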
However, in retrospect, I think the definition I gave here is nearly identical to how I would have defined "intelligence", just without reference to the "mapping a broad start distribution to a narrow outcome distribution" idea (optimization power) that I usually associate with that term. If anyone can pin down the specific difference between applied rationality and intelligence, I would be interested.
Maybe you also have to control for "computational factors" like raw processing power, or something? But then what's left inside the Cartesian boundary? Just the algorithm? That seems like it has potential, but still feels messy.