All of calebo's Comments + Replies

calebo10

There's a sense in which debates over the concept "perception" are purely conceptual, but they're not just a matter of using a different word. This is what I was getting at with "cognitive qualia": conceptual debates can shape how you experience the world.

1TAG
I never asserted that they were. What's the difference between conceiving differently and mapping differently? Not a lot. In the special case where two maps have identical structures, so they slice-and-dice the territory identically, it doesn't much matter how you label the components. In every other case, conceiving differently is mapping differently. "Direct perception" and "indirect perception" aren't just equivalent labels for the same structure; they indicate different structures, a two-stage one and a three-stage one. They work the same end-to-end, as black boxes ... but we are not limited to viewing perception as a black box; we can peek inside using brain scanners. Direct and indirect perception are different maps that can succeed or fail in corresponding to the territory, and we can check that. That may be the case, but it doesn't mean all the other issues are purely conceptual.
calebo60

One thing I'm confused about in this post is whether constructivism and subjectivism count as realisms. The cited realists (Enoch and Parfit) are substantive realists.

I agree that substantive realists are a minority in the rationality community, but not that constructivists + subjectivists + substantive realists are a minority.

4bmgarfinkel
Terminology definitely varies. FWIW, the breakdown of normative/meta-normative views I prefer is roughly in line with the breakdown Parfit uses in OWM (although he uses a somewhat wonkier term for "realism"). In this breakdown: "Realist" views are ones under which there are facts about what people should do or what they have reason to do. "Anti-realist" views are ones under which there are no such facts. There are different versions of "realism" that claim that facts about what people should do are either "natural" (e.g. physical) or "non-natural" facts.

If we condition on any version of realism, there's then the question of what we should actually do. If we should only act to fulfill our own preferences -- or pursue other similar goals that primarily have to do with our own mental states -- then "subjectivism" is true. If we should also pursue ends that don't directly have to do with our own mental states -- for example, if we should also try to make other people happy -- then "objectivism" is true.

It's a bit ambiguous to me how the terms in the LessWrong survey map onto these distinctions, although it seems like "subjectivism" and "constructivism" as they're defined in the survey probably would qualify as forms of "realism" on the breakdown I just sketched. I think one thing that sometimes makes discussions of normative issues especially ambiguous is that the naturalism/non-naturalism and objectivism/subjectivism axes often get blended together.
8Rob Bensinger
Sayre-McCord in SEP's "Moral Realism" article: [...]

Joyce in SEP's "Moral Anti-Realism" article: [...]

So, everyone defines "non-realism" so as to include error theory and non-cognitivism; some people define it so as to also include all or most views on which moral properties are in some sense "subjective." These ambiguities seem like good reasons to just avoid the term "realism" and talk about more specific positions, though I guess it works to think about a sliding scale where substantive realism is at one extreme, error theory and non-cognitivism are at the other extreme, and remaining views are somewhere in the middle.
calebo30

Re Neuroticism.

Let's just consider emotions. A really simple model of emotions is that they're useful because they provide information and because they have motivational power. Neurotic emotions are useful when they provide valuable info or motivate valuable actions.

If you're wondering whether a negative emotion is useful, check whether it's providing valuable info or motivating useful action. I think internal family systems might be especially useful for this.

Of course, sometimes you can get the valuable info or motivation without experiencing a negative...

Answer by calebo70

Weekly review:

  • I use Google Sheets.
  • Rate my progress on key goals (1-5), adding notes justifying each score.
  • Note how much time I spent working; review work cycle sheets for trends, insights, and things I'd like to try next week.
  • Compare my view of what a successful week would look like with what actually happened.
  • Determine what a successful week looks like for the coming week.

The ontology is key goals and weekly goals, with each goal grouped under a broad project. I like flexibility, so the ontology is as general as it sounds.
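A minimal sketch of how that sheet layout might look as structured data; this is my own illustration in plain Python rather than Google Sheets, and every field name and value below is hypothetical, not taken from the answer.

```python
# Hypothetical sketch of the weekly-review ontology: goals grouped under
# projects, a 1-5 score with notes, and the planned-vs-actual comparison.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GoalReview:
    project: str      # broad project the goal is grouped under
    goal: str         # a key goal or weekly goal
    score: int        # progress rating, 1-5
    notes: str = ""   # notes justifying the score

@dataclass
class WeeklyReview:
    week: str
    hours_worked: float
    goal_reviews: List[GoalReview] = field(default_factory=list)
    planned_success: str = ""    # what a successful week was supposed to look like
    actual_outcome: str = ""     # what actually happened
    next_week_success: str = ""  # what success looks like for the coming week

# Example entry (all values illustrative).
review = WeeklyReview(
    week="2019-W32",
    hours_worked=30.0,
    goal_reviews=[
        GoalReview(project="Research", goal="Draft literature review", score=3,
                   notes="Outline finished; writing went slower than planned."),
    ],
    planned_success="Outline plus two sections drafted",
    actual_outcome="Outline plus one section drafted",
    next_week_success="Two sections drafted and reviewed",
)
```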


Daily review:

...
calebo20

I agree that the return to "learning to navigate moods" varies by person.

It sounds to me, from your report, that you tend to be in moods conducive to learning. My sense is that there are many people who are often in unproductive moods, and many who are aware that they spend too much time in unproductive moods. These people would find learning to navigate moods valuable.

calebo*10

Awesome, thanks for the super clean summary.

I agree that the model doesn't show that AI will need both asocial and social learning. Moreover, there is a core difference in how the cost of brain size grows for humans versus AI (sublinear [EDIT: superlinear] vs. linear). But in a world where AI development faces hardware constraints, social learning will be much more useful. So AI development could involve significant social learning, as described in the post.

3Rohin Shah
Actually, I was imagining that for humans the cost of brain size grows superlinearly. The paper you linked uses a quadratic function, and also tried an exponential and found similar results.

Agreed if the AI uses social learning to learn from humans, but that only gets you to human-level AI. If you want to argue for something like fast takeoff to superintelligence, you need to talk about how the AI learns independently of humans, and in that setting social learning won't be useful given linear costs.

E.g. suppose that each unit of adaptive knowledge requires one unit of asocial learning, and every unit of learning costs $K regardless of brain size, so that everything is linear. No matter how much social learning you have, the discovery of N units of knowledge is going to cost $KN, so the best thing you can do is put N units of asocial learning in a single brain/model so that you don't have to pay any cost for social learning.

In contrast, if N units of asocial learning in a single brain costs $KN², then having N units of asocial learning in a single brain/model is very expensive. You can instead have N separate brains each with 1 unit of asocial learning, for a total cost of $KN, and that is enough to discover the N units of knowledge. You can then invest a unit or two of social learning for each brain/model so that they can all accumulate the N units of knowledge, giving a total cost that is still linear in N.

I'm claiming that AI is more like the former while this paper's model is more like the latter. Higher hardware constraints only change the value of K, which doesn't affect this analysis.
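A toy numeric restatement of that comparison may help; this is my own sketch, and K, N, and the "roughly two units of social learning per brain" overhead are illustrative assumptions rather than figures from the comment.

```python
# Compare total cost of discovering N units of knowledge under linear vs.
# quadratic cost of asocial learning capacity in a single brain/model.
K = 1.0   # dollar cost per unit of learning capacity (assumed)
N = 100   # units of adaptive knowledge to be discovered (assumed)

# Linear regime: a brain with s units of learning costs K*s, so one brain with
# N units of asocial learning discovers everything for K*N, and adding social
# learning can only increase the cost.
single_brain_linear = K * N

# Quadratic regime: a brain with s units of learning costs K*s**2, so a single
# brain with N units of asocial learning is very expensive...
single_brain_quadratic = K * N ** 2

# ...while N brains, each with 1 unit of asocial learning plus ~2 units of
# social learning (size 3), jointly discover and share the N units of
# knowledge at a cost that is still linear in N.
many_brains_quadratic = N * K * 3 ** 2

print(f"linear cost, one big brain:     {single_brain_linear:>10,.0f}")
print(f"quadratic cost, one big brain:  {single_brain_quadratic:>10,.0f}")
print(f"quadratic cost, N small brains: {many_brains_quadratic:>10,.0f}")
```

With these example numbers, the quadratic single-brain cost (10,000) dwarfs the many-brains cost (900), while in the linear regime the single brain (100) is already optimal, which is the distinction Rohin is drawing.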
calebo30

Can you say more about what you mean by "I. metaphilosophy" in relation to AI safety? Thanks.

calebo30

Have there been explicit requests for web apps that may solve an operations bottleneck at x-risk organisations? Pointers towards potential projects would be appreciated.

Lists of operations problems at x-risk orgs would also be useful.

6ozziegooen
I made a Question for this on the EA forum. https://forum.effectivealtruism.org/posts/NQR5x3rEQrgQHeevm/what-new-ea-project-or-org-would-you-like-to-see-created-in
8habryka
I am actually not a huge fan of the "operations bottleneck" framing, and so don't really have a great response to that. Maybe I can write something longer on this at some point, but the very short summary is that I've never seen the term "operations" used in any consistent way; instead I've seen it refer to a very wide range of barely-overlapping skillsets, often covering very high-skill tasks for which people hope to find someone willing to work with very little autonomy and comparably little compensation. I think many orgs have very concrete needs for specific skillsets they need to fill and for which they need good people, but I don't think there is something like a general and uniform "operations skillset" missing at EA orgs, which makes building infrastructure for this a lot harder.