Gordon Seidoh Worley

I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.

I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.

Sequences

Advice to My Younger Self
Fundamental Uncertainty: A Book
Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment

Comments

I think we can more easily and generally justify the use of the intentional stance. Intentionality requires only the existence of some process (a subject) that can be said to regard things (objects). We can get this in any system that accepts input and interprets that input to generate a signal that distinguishes between object and not object (or for continuous "objects", more or less object).

For example, almost any sensor in a circuit makes the system intentional. Wire together a thermometer and a light that turns on when the temperature is over 0 degrees, off when below, and we have a system that is intentional about freezing temperatures.
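The thermometer-and-light system described above can be sketched in a few lines of code. This is a minimal illustration, not anything from the original comment: the class and function names are invented, and the 0-degree threshold is taken from the example.

```python
# Minimal sketch of the thermometer-and-light system: a process that
# accepts input (a temperature reading) and interprets it into a signal
# distinguishing "object" (freezing boundary crossed) from "not object".

def above_freezing(temperature_c: float) -> bool:
    """Interpret raw sensor input into a binary signal."""
    return temperature_c > 0.0

class ThermometerLight:
    """Wires a thermometer to a light: on above 0 degrees, off below."""

    def __init__(self) -> None:
        self.light_on = False

    def sense(self, temperature_c: float) -> None:
        # The interpretation step is what makes the system "regard"
        # temperatures, in the thin cybernetic sense used above.
        self.light_on = above_freezing(temperature_c)

system = ThermometerLight()
system.sense(5.0)    # above freezing -> light on
system.sense(-10.0)  # below freezing -> light off
```

On this view, nothing more than the sense-and-interpret loop is needed for intentionality; consciousness and the rest are left out of the picture entirely.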

Such a cybernetic argument is, to me at least, more appealing because it gets down to base reality immediately and avoids the need to sort out things people often want to lump in with intentionality, like consciousness.

Author's note: This chapter took a really long time to write. Unlike previous chapters in the book, this one covers a lot more material in less detail, but I still needed to get the details right. So it took a long time both to figure out what I really wanted to say and to make sure I wasn't saying things I would regret on reflection because they rested on facts I don't believe or had simply gotten wrong.

This likely still isn't the best version of this chapter it could be, but at this point I think I've made all the key points I wanted to make here, so I'm publishing the draft now and expect it to need a lot of love from an editor later on.

I'm somewhat confused. I may not be reading the charts you included right, but it sort of looks to me like just rinsing with saline is useful, and that seems like it should be extremely safe and low risk and just about as effective as anything else. Thoughts?

I suppose you'd agree that there are in fact tradeoffs at play here and that the real question is which direction the scale tends to lean. And I suppose you are of the opinion that it tends to lean in favor of narrower, more targeted solutions rather than broader, all-in-one solutions. Is all of that true? If so, would you mind elaborating on why you hold that view?

Scaling a business is different from getting started.

To get started, it's really useful to have a very specific problem you're trying to solve. It provides focus and lets you outperform on quality by narrowly addressing a single need better than anyone else can.

That is often the wedge to scale the business: you get in by solving a narrow, hard problem, then look for opportunities to expand by seeing what else your customers need or what else you could do given the position you're in.

To give another example from a previous employer, Plaid got their start by providing an API to access banks, and they did everything they could to make it the best-in-class experience, with special attention on making the experience great for developers so they would advocate for paying a premium price over cheaper alternatives. That's still the core business, but they've expanded into adjacent products, both as API access to banks has become easier to come by (in part thanks to Plaid's success) and as customers have come looking for more all-in-one solutions to their fintech platform needs (e.g. a money-movement product so they don't have to manage transfers on their own, alternative credit-decisioning tools, etc.).

Given your desire to build more of a lifestyle business than a high-growth startup, better examples might be similar lifestyle products. In the LW-sphere there are things like Complice and Roam, and outside LW you'll find plenty that have been quite successful or were successful in the past (Basecamp is a prime example here, though I think Slack was arguably a lifestyle business that accidentally figured out how to take off when it pivoted from MMOs to messaging, etc.).

It also matters what the experience is like. A high-prestige university allows you to get a job at a high-prestige company, while a low-prestige university makes it a lot harder to get considered for jobs at high-prestige firms. You'll have to outperform high-prestige peers by, say, 50% to get noticed if you want access to the same sort of opportunities they get via prestige.

(To be clear, I'm not in favor of this sort of thing; I just want to be realistic about it, and I wish someone had been real with me about it when I was 17 and trying to decide where to go to college. Don't rely on your ability to outperform others. Take every advantage you can get, then leverage those advantages to do even more!)

And this really makes it hard for me as an "indie hacker" to do what people often recommend: solve one very specific problem. Find a niche. Something narrow and focused. "Zoom in". This works in areas where problems have low cohesiveness, but not when they have high cohesiveness.

It's really hard to solve a lot of problems well. The value of an all-in-one product is that you really don't need anything else, so everything it doesn't do, or doesn't do well enough to meet your needs, is a ding against it. And it's relatively easy for a competitor to peel off a specific problem space from the general problem of basic business operations.

Here's a specific example from the company I'm part of (Anrok). We sell a product that aims to replace a SaaS business's need for an accounting team to stay compliant with sales tax and VAT. We compete against all-in-one offerings from billing systems that promise to also solve this problem, but the problem is hard and they're trying to solve at least two problems at once: billing and tax. Since we only try to solve tax, we can do a better job of it. We also compete against accounting firms that offer all-in-one services, but they face the same problem, plus their cost is a lot higher because they use humans to do what we do with code.

So I wouldn't worry too much about the existence of everything apps. It takes a long time to build a good everything app that actually does "everything" within its domain (think Salesforce or Jira), and even then "everything" is often achieved via outsourcing some of the everything to third parties who build plugins and integrations.

"Unconditional" makes a lot more sense if we think of it as "unconditional, conditional on my ability to think of conditions on my love". This is what I think most people mean by unconditional love: they can't think of any reasonable conditions on their love, and would discount unreasonable conditions as unusual or outside the realm of what's meant by "unconditional".

This is probably something like a shape-rotator vs. wordcel thing: shape-rotators take words literally and are uncomfortable with a word like "unconditional" unless there are literally no conditions, while wordcels are happy to say "unconditional" as long as the conditions under which they would stop loving someone fall outside their Overton Window.

Unless you are going to one of the big prestige universities, I don’t think it matters which you choose all that much. Save money.

My experience is that this is right. The list of top-tier global institutions, in terms of prestige, is short: Oxford, Cambridge, Harvard, MIT, Caltech, maybe Berkeley, maybe Stanford and Waterloo if you want to work in tech, maybe another Ivy if you want to do something non-tech. The prestige bump falls off fast as you move further down the list. Lots of universities have local prestige, but it gets lost as you talk to people with less context.

Prestige mostly matters if you want to do something that requires it as the cost of entry. If you can get in, it doesn't hurt to have the prestige of a top-tier institution, but there are lots of things you might do where the prestige will be wasted.

Sadly, it's tough to know in advance whether you will need the prestige or not. You'll have to weigh the expected value against the cost and make the best choice you can to minimize the risk of regret.
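A back-of-the-envelope version of that expected value calculation might look like the following. Every number here is made up purely for illustration; the point is the shape of the comparison, and you'd plug in your own estimates.

```python
# Hypothetical expected-value comparison of a prestigious-but-expensive
# school against a cheaper one. All numbers are invented for illustration.

p_need_prestige = 0.2          # chance your path requires top-tier prestige
value_if_needed = 500_000      # extra lifetime value when prestige pays off
extra_cost_prestige = 150_000  # additional tuition/debt for the prestige option

# Expected net value of paying for prestige, relative to the cheap baseline.
ev_prestige = p_need_prestige * value_if_needed - extra_cost_prestige
ev_cheaper = 0.0  # baseline: save the money, forgo the prestige bump

# Under these made-up numbers, paying for prestige comes out negative
# (ev_prestige == -50000.0), so the cheaper school wins.
```

A fuller version would also account for regret asymmetry (the comment's "minimize the risk of regret"), e.g. weighting outcomes where you needed the prestige and didn't have it more heavily than the symmetric dollar figures suggest.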

Perhaps, but I also feel like this is a real misunderstanding of politics being the mind killer. Rationality is critically important in dealing with real world problems, and that includes problems that have become politicized. The important-to-me thing is that, at least here on Less Wrong, we stay focused, as much as possible, on questions of evidence and reasoning. Posts about whether Israel or Palestine is good/bad should be off limits, but posts about whether Israel or Palestine are making errors in their reporting of facts in ways that can be sussed out using statistical analysis feel very much on brand.

For comparison, COVID was a hot-button issue for a long time, and Less Wrong hosted tons of great posts about various mechanical aspects of COVID while avoiding many of the political issues. Less Wrong has also stayed away from topics like abortion and racism because there's little to say on those topics that isn't a thinly veiled attempt to argue over values. So while some aspects of the current Israel/Palestine conflict are fights over values and should be off limits here, I'd be pretty sad if we couldn't talk about trying to understand the facts of the situation, like whether or not Palestinian death figures are correct, just as we've been able to talk about COVID origins and whether or not masks are effective at controlling the spread of COVID (if you can remember back to when that was a controversial topic!).

I actually don't think the problem with this post is politics, but that it's nothing more than a link post, and except in rare cases, I'd like to see people add something more than just a link.

The analysis in the linked article itself is interesting and not obviously politicized (or at least, isn't from a mistake-theory standpoint; it's definitely political if you're a conflict theorist, but then what isn't!).
