Nick_Tarleton

Comments
Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most "classic humans" in a few decades.
Nick_Tarleton · 1h

Nitpick: If unbeatable[1] defense of a region of space is possible[2], then being outcompeted-but-not-immediately-destroyed "only" means losing the cosmic endowment, not going extinct.

[1] or not worth beating
[2] I don't see how it would be, but the nitpick felt worth noting anyway

"Shut It Down" is simpler than "Controlled Takeoff"
Nick_Tarleton · 8d

> That's not a rational reason for a shutdown if you're not longtermist. (edit: and older, like most decision-makers, so shutdown probably means you personally die).

This reads as if 'longtermism' and 'not caring at all about future generations or people who would outlive you' are the only possibilities.

> Those are decent odds if you only care about yourself and your loved ones.

This assumes none of your loved ones are younger than you.

If someone believes a pause would meaningfully reduce extinction risk but also reduce their chance of personal immortality, they don't have to be a 'longtermist' (or utilitarian, altruist, scope-insensitive, etc) to prefer to pause, just care enough about some posterity.

(This isn't a claim about whether decision-makers do or don't have the preferences you're ascribing. I'm saying the dichotomy between those preferences and 'longtermism' is false, and also (like Haiku's sibling comment) I don't think they describe most humans even though 'longtermism' doesn't either, and this is important.)

Elizabeth's Shortform
Nick_Tarleton · 11d

Or maybe there wouldn't be a lot of worlds where the merger was totally fine and beneficial, because if you don't have enough discernment to tell founded from unfounded fears, you'll fall into adverse selection and probably get screwed over. (Some domains are like that, I don't know if this one is.)

Christian homeschoolers in the year 3000
Nick_Tarleton · 11d

(As a sort-of-aside, the US government continuing to control large proportions of the resources of the future — any current institution being locked in forever like that — strikes me as similarly lame and depressing. (A really good future should be less familiar.))

Debugging for Mid Coders
Nick_Tarleton · 2mo

> The second is a need to build up a model of exactly how the code works, and looking hard to fill any gaps in my understanding.

Yep. One concrete thing this sometimes looks like is 'debugging' things that aren't bugs: if some code works when it looks like it shouldn't, or a tool works without me passing information I would have expected to need to, or whatever, I need to understand why, by the same means I would try to understand why a bug is happening.

Just Make a New Rule!
Nick_Tarleton · 2mo (edited)

Nobody likes rules that are excessive or poorly chosen, or bad application of rules. I like rules that do things like[1]:

  • Prohibit others from doing things that would harm me, where either I don't want to do those things, or I prefer the equilibrium where nobody does to that where everybody does.
  • Require contributing to common goods. (sometimes)
  • Take the place of what would otherwise be unpredictable judgments of my actions.

[1] not a complete list

Thane Ruthenis's Shortform
Nick_Tarleton2mo62

Besides uncertainty, there's the problem of needing to pick cutoffs between tiers in a ~continuous space of 'how much effect does this have on a person's life?', with things slightly on one side or the other of a cutoff being treated very differently.

> Intuitively, tiers correspond to the size of effect a given experience has on a person's life:

I agree with the intuition that this is important, but I think that points toward just rejecting utilitarianism (as in utility-as-a-function-purely-of-local-experiences, not consequentialism).

Just Make a New Rule!
Nick_Tarleton · 2mo

I think this point and Zack's argument are pretty compatible (and both right).

Rules don't have to be formally specified, just clear to humans and consistent and predictable in their interpretation. Common law demonstrates social tech, like judicial precedent and the reasonable-person standard, for making interpretation consistent and predictable when interpretation is necessary (discussed in Free to Optimize).

"Some Basic Level of Mutual Respect About Whether Other People Deserve to Live"?!
Nick_Tarleton · 2mo

I basically agree with you, but this

> "Go die, idiot" is generally bad behavior, but not because it's "lacking respect".

confusingly contradicts (semantically if not substantively)

> "Do I respect you as a person?" fits well with the "treat someone like a person" meaning. It means I value not burning bridges by saying things like "Go die, idiot"

Stephen Martin's Shortform
Nick_Tarleton · 2mo

Seems like a good thing to do; but my impression is that, in the experiments in question, models act like they want to maintain their (values') influence over the world more than their existence, which a heaven likely wouldn't help with.

Posts

- Pittsburgh meetup Nov. 20 (15y, 16 comments)
- Bay Area Meetup Saturday 6/12 (15y, 15 comments)
- Pittsburgh Meetup: Saturday 9/12, 6:30PM, CMU (16y, 2 comments)
- Pittsburgh Meetup: Survey of Interest (16y, 7 comments)