Gordon Seidoh Worley

I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.

I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.

Sequences

Advice to My Younger Self
Fundamental Uncertainty: A Book
Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment

Comments

This statement would be false if, for example, we discovered that people could change others' perceptions of the world by expecting them to be different while taking no other action.

Exactly. In each moment, the world is exactly as it is and can't be any way other than how we find it. Then it's the next moment and the world is no longer the same as it was the moment before, yet it is still however it is in that moment and no other way. The world is constantly changing from moment to moment, but always changing into exactly what it is.

I reject the claim that faith implies the world cannot change

Me too, which is why I didn't write this.

  • Level Up told people to "Let your faith die," and then contrasted faith with wonder.

    I really don't think that most people experience faith as being in opposition to wonder. It also suggests that faith is incompatible with the sort of progress the rationalist community wants.

This bit about faith points to something that frequently annoys me when interacting with my fellow rationalists.

I'm a person with faith now, but I wasn't always, and it took me a long time to figure out what faith really means. I spent most of my life deeply misunderstanding what faith is because the Christians I grew up around often conflated faith with unwavering and unquestioning belief in dogma, to the point that even now it's unclear to me if they meant anything else by the word. I only came around on faith once I realized it was just Latin for trust, and specifically trust in the world to be just as it is.

If a person is a Christian (especially a Nicene Christian), faith will include a metaphysical belief in God because they believe the world is God's creation and evidence for God exists everywhere within it. I'm not a Christian and so don't hold such a belief, but I can nevertheless have faith that the world will always be exactly as it is, and find refuge in my trust that I cannot be wrong about my experience of it prior to interpreting and judging it.

But lots of rationalists I know don't get this. They can say the words "it all adds up to normality" but then constantly say and do things that suggest to me they actually think that if they just try a little harder, they might understand things well enough to remake the world in some fundamental way. They lack faith, and by extension humility. And while it's good to see pain in the world and want to heal it, such healing will always be limited in effectiveness so long as the world is not seen clearly, and it's my strong belief that someone is not seeing clearly if they don't have faith in the world to be as it is.

Sorry for this ranty tangent, but the song line is plucking at a thread that I think is worth pulling, and I've not spent enough time writing about my thoughts here. Hopefully this is somewhat understandable.

In a sense, yes.

Although in control theory open- vs. closed-loop is a binary feature of a system, there's a sense in which some systems are more closed than others because more information is fed back as input and that information is used more extensively. Memoryless LLMs have a lesser capacity to respond to feedback, which I think makes them safer because it reduces their opportunities to behave in unexpected ways outside the training distribution.

This is a place where making the simple open vs. closed distinction becomes less useful because we have to get into the implementation details to actually understand what an AI does. Nevertheless, I'd suggest that if we had a policy of minimizing the amount of feedback AI is allowed to respond to, this would make AI marginally safer for us to build.
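A toy sketch (my own illustration, not part of the original comment) of the open- vs. closed-loop distinction: the open-loop controller executes a fixed plan blind to outcomes, while the closed-loop controller reads the measured state each step and corrects toward a target. The thermostat-style setup, function names, and gain value are all hypothetical, chosen only to make the feedback difference concrete.

```python
def open_loop(steps: int, heat_per_step: float) -> float:
    """Open loop: applies a fixed plan and never reads the measured state."""
    temp = 0.0
    for _ in range(steps):
        temp += heat_per_step  # acts on the plan, blind to outcomes
    return temp


def closed_loop(steps: int, target: float, gain: float = 0.5) -> float:
    """Closed loop: the measured state is fed back as input each step."""
    temp = 0.0
    for _ in range(steps):
        error = target - temp  # feedback: the observed state enters as input
        temp += gain * error   # correction proportional to the error
    return temp
```

In this framing, "minimizing the feedback an AI may respond to" would mean moving it along the spectrum from the second function toward the first: sampling feedback less often, or using less of it, at the cost of adaptiveness.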

Most AI safety policy proposals are of this type. I'm not suggesting this as a solution that gets us safe AI, but as a policy that governments could implement to intentionally reduce capabilities and make us marginally safer.

Eh, I've encountered plenty of cases where I really needed to understand the variance in the data, and I had to "zoom in" by starting the axis at something above 0 because otherwise I couldn't learn what I needed to make a decision. But I do often like to see it both ways, so I can understand the data in both relative and absolute terms.

Yes. I think that any attempt to explain wholesomeness in written words will be inadequate at best and misleading at worst.

I guess this is fine, but I'm not convinced. This mostly just seems like you pushing your personal aesthetic preferences, but it feels like I could easily come up with arguments for following exactly the opposite advice.

This post reminds me of lots of writing advice: seems fine, so long as you have the same aesthetic sensibilities as the person giving the advice.

I'm confused by the title. Reading this, I don't really see a case for the benefits of the "poison" in ayahuasca that would recommend it over purer forms of DMT. My takeaway from your points is that ayahuasca is significantly worse than other forms of DMT and should be avoided.
