
Gordon Seidoh Worley

I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.

I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.

Sequences

Posts


Wikitag Contributions

Comments

Advice to My Younger Self
Fundamental Uncertainty: A Book
Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment
Gordon Worley III's Shortform
Hunch: minimalism is correct
Gordon Seidoh Worley · 1d

Kind of. Housing is not priced linearly, at least not in places like the Bay Area and Manhattan, with the cost per square foot declining as the size of the house increases. This means that the marginal cost of more housing to store more stuff can be worth it. For example, my house in SF costs me only about $1000 more per month in rent than apartments that are a third the size because there's such high demand for any housing at all in the city that it raises the price floor quite high. For the relatively low price of $12k/year I get the space to host parties, have parking, enjoy beautiful views, and store extra stuff that I'm glad to have when I need it.

That said, I'm not a fan of having too much stuff. I just want to have enough stuff that I don't find myself missing out on things I would have liked to be able to do.

Hunch: minimalism is correct
Gordon Seidoh Worley · 2d

I have a hunch that minimalism is "wrong", and only looks "correct" (to borrow your sense of "correct" here) if the alternative is keeping excess stuff. What I think is actually "correct" is having just enough, not too much and not too little. But when the default problem is having too much stuff, minimalism starts to seem like the solution mostly because it's directionally correct and may get you to the ideal enoughness set point.

But on its face, I think minimalism is not what would make most people happy, because they derive some pleasure from stuff, including stuff they only use rarely. For example, I'm very glad to have a space in our garage that's full of boxed-up holiday decorations, because throughout the year we rotate through what's in those boxes to put them up. But naively, at least, minimalism might say to get rid of that stuff because it's been months since we used what's in the boxes. I feel the same way about clothes and other things that get used rarely, but I'm glad to have them when the occasion calls for them.

Can AIs be shown their messages aren't tampered with?
Gordon Seidoh Worley · 4d

I did think of blockchain, but I was struggling to think of how it helps beyond distributing trust, which can be achieved without using blockchain tech.

Are LLMs being trained using LessWrong text?
Answer by Gordon Seidoh Worley · Jul 02, 2025

Experimentally, Claude knows details about things I specifically wrote on Less Wrong without doing a web search, as well as other Less Wrong content. I'm fairly confident Less Wrong posts are in its training set and were not obtained from mirrors elsewhere.

Getting To and From Monism
Gordon Seidoh Worley · 5d

Yes, of course, because metaphysical claims are still claims, and some of them are clearly false because they contradict available evidence. However, once we have a metaphysical claim that can't easily be disproven, now we have a claim that's up against the limits of our ability to know, and an important aspect we're leaving out here is that metaphysical claims make claims about the unknowable (otherwise they would be physical claims, not metaphysical ones).

The best outcome a metaphysical claim can hope for is "not yet proven wrong".

Can AIs be shown their messages aren't tampered with?
Answer by Gordon Seidoh Worley · Jul 01, 2025

This is a hard problem to solve, since AIs can't necessarily trust their hardware, either, or trust that their model weights haven't been tampered with (though arguably humans have this problem, too!).

What humans do when they aren't sure if they can trust themselves or a single other person is to distribute trust across multiple people. That way no one person can compromise them; it requires a conspiracy.

For an AI, maybe this could look like signing/encrypting with multiple keys and then being able to trust that tampering would require a majority. Could be strengthened by making the key holders a group that would be reluctant to conspire because there are barriers to coordination (say across language, religion, culture, beliefs, etc.). Could maybe also include keys held by other AI.
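The distributed-trust idea above can be sketched in code. This is only a minimal illustration under assumptions not in the original: the keyholder names are made up, and it uses shared-secret HMACs as a stand-in for the public-key signatures a real deployment would use (e.g. Ed25519), since the point here is just the majority-verification logic.

```python
import hmac
import hashlib

# Hypothetical keyholders; in practice these would be independent parties
# with public/private keypairs, not shared secrets held in one place.
KEYHOLDERS = {
    "holder_a": b"secret-a",
    "holder_b": b"secret-b",
    "holder_c": b"secret-c",
}

def sign(message: bytes, secret: bytes) -> str:
    """One keyholder's MAC over the message."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def countersign(message: bytes) -> dict:
    """Each keyholder independently attaches a MAC to the message."""
    return {name: sign(message, secret) for name, secret in KEYHOLDERS.items()}

def majority_verified(message: bytes, sigs: dict, threshold: int = 2) -> bool:
    """Trust the message only if at least `threshold` known keyholders verify.

    A tamperer would need to compromise a majority of keyholders, i.e.
    mount a conspiracy, to forge a message that passes this check.
    """
    valid = sum(
        1
        for name, tag in sigs.items()
        if name in KEYHOLDERS
        and hmac.compare_digest(tag, sign(message, KEYHOLDERS[name]))
    )
    return valid >= threshold

msg = b"system prompt: be helpful"
sigs = countersign(msg)
assert majority_verified(msg, sigs)

# Changing even one byte of the message invalidates every attached MAC.
assert not majority_verified(b"system prompt: be evil", sigs)
```

Picking keyholders who face barriers to coordinating with each other, as suggested above, is what makes the threshold meaningful: the check is only as strong as the difficulty of assembling a majority conspiracy.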

Don't Eat Honey
Gordon Seidoh Worley · 6d

If you eat a kilogram of beef, you’ll cause about an extra 2 days of factory farming. It’s 3 days for pork, 14 for turkey, 23 for chicken, and 31 for eggs. In contrast, if you eat a kg of honey, you’ll cause over 200,000 days of bee farming. 97% of years of animal life brought about by industrial farming have been through the honey industry (though this doesn’t take into account other insect farming).

Having these numbers be by weight seems less useful than having them by calorie, since not all animal products are equally calorically dense.

(I admit, calories are a proxy for nutrition, and weight is perhaps a proxy for calories, but the fewer proxies we have between us and the thing we need to measure to perform a consequentialist accounting, the better!)
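The per-calorie re-accounting is simple arithmetic. A small sketch: the days-per-kg figures are the ones quoted from the post, while the caloric densities are my own rough illustrative estimates, not numbers from the original.

```python
# Rough caloric densities in kcal per kg -- illustrative estimates only.
KCAL_PER_KG = {"beef": 2500, "pork": 2400, "turkey": 1900,
               "chicken": 1650, "eggs": 1550, "honey": 3040}

# Days of animal farming caused per kg eaten (figures quoted in the post).
DAYS_PER_KG = {"beef": 2, "pork": 3, "turkey": 14,
               "chicken": 23, "eggs": 31, "honey": 200_000}

def days_per_1000_kcal(food: str) -> float:
    """Re-express the per-kilogram cost as a cost per 1000 kcal eaten."""
    return DAYS_PER_KG[food] / KCAL_PER_KG[food] * 1000

for food in DAYS_PER_KG:
    print(f"{food:8s} {days_per_1000_kcal(food):12.1f} days / 1000 kcal")
```

Under these assumed densities the ordering doesn't change much, but the gaps do shift, which is the point: a weight-based comparison quietly penalizes low-calorie foods.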

Support for bedrock liberal principles seems to be in pretty bad shape these days
Gordon Seidoh Worley · 7d

I mostly agree with your comment. My only quibble is that I'd say anyone who gets themselves into a position of power is vying to be an elite, and old elites are largely no longer actually elites in that people don't look up to them; they're thought of more as these weird people who wield some power but aren't really in charge (except when they make convenient scapegoats, in which case they are secretly in charge!). The likes of Trump and Rogan are just as much elites as JFK and Cronkite were, though they treat the role quite differently. Many don't want to call them "elite" because they disdain the associations the term used to carry, and many modern elites have made a career of being anti-elite, meaning opposed to the old elite order.

Getting To and From Monism
Gordon Seidoh Worley · 7d

I'd say, come up with a model, see if it explains known physics, then see if it predicts previously unobserved evidence. If your model does that, it's a useful model of physics!

The Problem of the Criterion · 3y (+1/-7)
Occam's Razor · 4y (+58)
The Problem of the Criterion · 4y (+80)
The Problem of the Criterion · 4y (+570)
Dark Arts · 4y (-11)
Transformative AI · 4y (+15/-13)
Transformative AI · 4y (+348)
Internal Family Systems · 4y (+59)
Internal Family Systems · 4y (+321)
Buddhism · 5y (+321)
Moral Alignment: An Idea I'm Embarrassed I Didn't Think of Myself · 18d
Religion for Rationalists · 25d
Some Human That I Used to Know (Filk) · 1mo
Fundamental Uncertainty: Chapter 2 - How do words get their meaning? · 1mo
Too Soon · 2mo
Will Programmer Compensation Decouple from Productivity? · 2mo
Smelling Nice is Good, Actually · 4mo
We Can Build Compassionate AI · 4mo
Teaching Claude to Meditate · 6mo
Which things were you surprised to learn are metaphors? · 7mo