Nurse who likes to read philosophy and rationalist literature
What did you end up choosing to study with data science? I'm in a bootcamp choosing a topic and I have been brainstorming ideas like crazy recently, and would be happy to discuss this more with you if you are interested.
I see a hilarious and inspiring similarity between your story and mine.
In high school, I realized that I enjoyed reflecting on topics to achieve coherence, discussing the mechanisms behind superficial phenomena, and wanting everyone to be happy on a deep level. So I created a religion, because, of course, I wanted to save the world. I thought other religions were failed attempts to incorporate modern positive psychology findings (which had "solved happiness") into moral theories, and I wanted to harness the memetic potential of social phenomena like religion and music. But I wasn't good at making a religion 'religious', I didn't like playing instruments much, and I was too arrogant to think old philosophers had anything interesting to say that I hadn't already thought of. So I settled for helping people as a nurse. I got stuck.
10 years later, I heard of and finally read some philosophy, realized what I had been grasping at wasn't religion, but something more like logical positivism, then I found rationality and EA. It is strange how much this community feels like everything I was trying to articulate as a youngling, but I didn't realize economics, math, philosophy, and computer science were where my interests really lie, not religion and music. So now I'm in the rapid learning stage.
I think the only interesting insight I have here is that I'm now very wary of getting trapped in another local optimum ("rationality/EA is the best, so I won't look at other intellectual movements") and want to stay open to seeing if there is anything better. However, it sounds like you implicitly understand this groupthink wariness, so enjoy and go save the world.
So I was brainstorming recently with a friend about this very topic: how to convince someone to support a goal of rationality (existential risk reduction, AI study), who doesn't enjoy engaging in rational reasoning. Like I'd love to be able to persuade my random gen pop friend to be vegetarian, or think about the real disparities in the world, or about the implications of our actions if they were scaled up.
2 possible strategies I came up with, both bordering on Rhetoric, Persuasion, and I guess the Dark Arts in general, are:
Inspiration: "Inspire" them to value the rational process aka philosophical reasoning and evidence based reasoning. Inspiration involves realizing that something is socially virtuous, aesthetically pleasing, or has good instrumental results.
Approach from the Side: Identities are powerful. You can emphasize that the virtuousness of one part of a person's identity is consistent with the virtuousness of another part of their identity. Eventually you can weave these syllogisms together to support an "actually good value". For example, I find that every liberal and conservative pays lip service to worries about a 'post-truth world' or 'fake news', and this is great because one can use this piece of their identity to make them more curious about what is true. When I did this with my parents, it went generally something like this:
My approach-from-the-side technique worked briefly with my parents; for maybe 2 to 4 hours they were curious about evidence for their beliefs and even some epistemology. Their belief statements went back to equilibrium (total confidence, signaling, etc.) after that, though maybe they shifted slightly in some indiscernible way. Anyways, the brief flicker of light I saw in them still feels worth it, but it might not for you.
Both these means or strategies feel like dirty and manipulative uses of the Dark Arts, but the people I talked to are immersed in a whirlpool of such tactics without even noticing it. Also, the end goal of bringing my friends and family closer to philosophy or rationality at all is of major utility to me and my posterity. If splashing a drowning man might save him, I would do it.
>!
The Pruned
Epistemically Uncertain: Gender norms. I thought about this one for a while, and it was interesting, but I just don't see many equilibria forming around "stable" femininity or masculinity throughout history. Oh well. I also removed the Lipostat.
Boring Repetitions: Temperature of the human body, other biologically important homeostasis levels, or psychologically important set-points. !<
I can't seem to put spoiler tags on this...
This bet would've paid major dividends in hindsight. Is there a way to bet on OpenAI, Anthropic, or other AI-safety-focused labs, both to give them more access to capital and to make a profit? Nvidia stock has already ballooned quite a bit, and seems to be mostly dual use. Also, I'm not confident about the safety credibility of many other AI companies. Although scoring each major foundation-model company for safety would be a useful project... (Pondering if I should do this.)
I'm asking this mostly to see if anyone else has already done their homework on this question.