Gordon Seidoh Worley

I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.

I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently dormant AI safety org, PAISRI.

Sequences

Advice to My Younger Self
Fundamental Uncertainty: A Book
Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment

Comments

Feels like this has too much wiggle room. For example, what counts as an "easy" problem of consciousness, and what counts as "transcending" it? Generally, good definitions avoid words that either do too much work or invite judgement calls about what counts.

Answer by Gordon Seidoh Worley

It really helps if we just taboo the word "consciousness" because people have too many implicit associations wrapped up in what they want that word to mean.

On a day-to-day level, we want "conscious" to be a stand-in for something like "things that have subjective experiences like mine". This is unfortunately not very useful, as the world is not carved up into things that are like this and things that are not, except for other humans.

On the other hand, if we try to get technical about what we mean for things to be conscious, we either end up at panpsychism by deflating the notion of consciousness (I'm personally supportive of this, and think in many cases we should use "consciousness" to refer to negative-feedback control systems, because these are the smallest unit of organization that has subjective information), or we end up with convoluted definitions of consciousness that add on enough qualifiers to avoid deflation.
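To make the deflationary reading concrete, here's a minimal sketch of the sort of negative-feedback control system I mean (the thermostat framing, names, and thresholds are all just illustrative). The only "subjective information" in the system is its internal error signal: the comparison between setpoint and measurement exists nowhere but inside the controller.

```python
# A minimal sketch of a negative-feedback control system: a thermostat.
# All names and numbers here are illustrative, not a real device's API.

def thermostat_step(setpoint: float, measured_temp: float) -> str:
    """Return a control action that pushes the error back toward zero."""
    error = setpoint - measured_temp  # the system's internal "view" of the world
    if error > 0.5:
        return "heat_on"    # too cold relative to the setpoint
    if error < -0.5:
        return "heat_off"   # too warm relative to the setpoint
    return "hold"           # within the deadband; nothing to correct

# Example: a room at 17 degrees with a 20-degree setpoint.
print(thermostat_step(20.0, 17.0))  # -> heat_on
```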

"Consciousness" is a word people are really confused about and have lots of different competing intuitions about what it should mean and I really wish we'd just stop saying it and talk about what we mean directly instead.

Much of this depends on what kind of AI we get and how long we live under relatively stable conditions alongside that AI.

The unstated assumptions here seem to me to be something like:

  • AI provides relatively fixed levels of automation, getting gradually better over time
  • AI doesn't accelerate us towards some kind of singularity so that society has time to adapt to tiering

I'm pretty suspicious of accepting the second assumption here, as I think just the opposite is more likely. But, given the assumptions Acemoglu seems to be making, a split into a two-tiered society seems a likely outcome to me.

Sort of a tangent, but when I ride I technically fall into the "strong & fearless" group, even though I don't feel like it.

I'd much prefer protected bike infrastructure, but for a variety of reasons it's often unavailable. In those cases, at least in an urban setting, I generally prefer to ride in the lane with cars over using an unprotected bike lane. To me this is obvious:

  • in the lane, drivers will definitely see you
  • if you ride far enough into the lane, cars can't "squeeze by" and get dangerously close
  • you're less likely to put yourself in precarious situations at intersections

Unprotected bike lanes seem like the worst possible option, and I find it strange that anyone falls into the "enthused & confident" category.

Answer by Gordon Seidoh Worley

My guess is that there's no home economic alpha to be had in polyamory, on average. This isn't a very strong opinion, but I expect most efficiencies that can be obtained by a polycule (which are similar to those obtained by a family of the same size with kids) will be offset by increased volatility due to complex relationship dynamics. I'm not saying poly people break up more; it's just the simple math of having more pairwise relationships while holding the base rate of breakups constant.
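To spell out that math (with a made-up per-pair breakup rate, purely for illustration): holding the per-relationship breakup probability fixed, the chance that at least one relationship in the group ends in a given year grows quickly with group size, since a group of n people contains n(n-1)/2 pairwise relationships.

```python
# Toy illustration of the volatility math, assuming (purely for
# illustration) that each pairwise relationship independently ends
# in a given year with probability p. The 10% rate is made up.
from math import comb

def p_any_breakup(n_people: int, p_per_pair: float = 0.10) -> float:
    """Probability that at least one pairwise relationship ends this year."""
    pairs = comb(n_people, 2)             # pairwise relationships in the group
    return 1 - (1 - p_per_pair) ** pairs  # chance at least one of them ends

for n in (2, 3, 4, 5):
    print(n, round(p_any_breakup(n), 2))
# 2 0.1   (a couple: 1 pair)
# 3 0.27  (a triad: 3 pairs)
# 4 0.47  (6 pairs)
# 5 0.65  (10 pairs)
```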

I think there are ways to make home economics more efficient for any particular household, but they are largely orthogonal to relationship style of the people in the house.

For what it's worth, being sedated for a wisdom tooth extraction preceded my stream entry (a Buddhist term for an early stage of awakening) by about 6 months. Before that I had no real experience with severely altered states (other than accidentally robotripping, which I'd been doing unintentionally since I was a toddler taking cold medicine, because I didn't realize how sensitive I was to DXM and thought the effects were just part of being sick). The experience of seeing myself continue to operate when "I" wasn't there was eye-opening.

I think we can more easily and generally justify the use of the intentional stance. Intentionality requires only the existence of some process (a subject) that can be said to regard things (objects). We can get this in any system that accepts input and interprets that input to generate a signal that distinguishes between object and not object (or for continuous "objects", more or less object).

For example, almost any sensor in a circuit makes the system intentional. Wire together a thermometer and a light that turns on when the temperature is over 0 degrees, off when below, and we have a system that is intentional about freezing temperatures.
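As a sketch of that thermometer-and-light system (the function names and the hard-coded reading are mine, just to make the loop concrete):

```python
# A sketch of the thermometer-and-light example. The sensor read is a
# hypothetical stand-in for real hardware.

def read_thermometer() -> float:
    """Stand-in for a hardware sensor; returns degrees Celsius."""
    return -2.0  # hypothetical reading

def light_on(temp_c: float) -> bool:
    # The output signal distinguishes "above freezing" from "below":
    # this is the sense in which the system regards an object.
    return temp_c > 0.0

print(light_on(read_thermometer()))  # -> False: below freezing, light off
```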

Such a cybernetic argument, to me at least, is more appealing because it gets down to base reality immediately and avoids the need to sort out things people often want to lump in with intentionality, like consciousness.

Author's note: This chapter took a really long time to write. Unlike previous chapters in the book, this one covers a lot more material in less detail, but I still needed to get the details right, so it took a long time both to figure out what I really wanted to say and to make sure I wasn't saying things I would, upon reflection, regret having said because they rested on claims I don't believe or had simply gotten wrong.

It's likely still not the best version of this chapter that it could be, but at this point I think I've made all the key points I wanted to make, so I'm publishing the draft now and expect it to need a lot of love from an editor later on.

I'm somewhat confused. I may not be reading the charts you included right, but it sort of looks to me like just rinsing with saline is useful, and that seems like it should be extremely safe and low risk and just about as effective as anything else. Thoughts?

I suppose you'd agree that there are in fact tradeoffs at play here and that the real question is which direction the scale tends to lean. And I suppose you're of the opinion that it tends to lean in favor of narrower, more targeted solutions over broader, more all-in-one solutions. Is all of that true? If so, would you mind elaborating on why you believe that?

Scaling the business is different from getting started.

To get started, it's really useful to have a very specific problem you're trying to solve. It provides focus and lets you outperform on quality by narrowly addressing a single need better than anyone else can.

That is often the wedge to scale the business: you get in by solving a narrow, hard problem, then look for opportunities to expand by seeing what else your customers need or what else you could do given the position you're in.

To give another example from a previous employer: Plaid got their start by providing an API to access banks, and they did everything they could to make it a best-in-class experience, with special attention on making the experience great for developers so they would advocate for paying a premium price over cheaper alternatives. That's still the core business, but they've expanded into other adjacent products, both as API access to banks has become easier to come by (in part thanks to Plaid's success) and as customers have come looking for more all-in-one solutions to their fintech platform needs (e.g. a money-movement product so they don't have to manage transfers on their own, alternative credit-decisioning tools, etc.).

Given your desire to do something that's more lifestyle business than high-growth startup, better examples might be found among similar lifestyle products. In the LW-sphere there are things like Complice and Roam, and outside LW you'll find plenty that have been quite successful or were successful in the past (Basecamp is a prime example here, though I think Slack was arguably a lifestyle business that accidentally figured out how to take off when it pivoted away from MMOs to messaging).
