Eli Tyre

Comments


Disagreed, insofar as by "automatically converted" you mean "the shortform author has no recourse against this".

No. That's why I said the feature should be optional. You can set a general default for your shortform, plus there should be a toggle (hidden in the three-dots menu?) to turn this on and off on a post-by-post basis.

I agree. I'm reminded of Scott's old post The Cowpox of Doubt, about how a skeptics' movement focused on the most obvious pseudoscience is actually harmful to people's rationality, because it reassures them that rationality failures are mostly obvious mistakes that dumb people make, rather than hard-to-notice mistakes that I myself make.

And then we get people believing all sorts of shoddy research – because after all, the world is divided between things like homeopathy that Have Never Been Supported By Any Evidence Ever, and things like conventional medicine that Have Studies In Real Journals And Are Pushed By Real Scientists.

Calling groups cults feels similar, in that it allows one to write them off as "obviously bad" without need for further analysis, and reassures one that one's own groups (which aren't cults, of course) are obviously unobjectionable.

Read ~all the sequences. Read all of SSC (don't keep up with ACX).

Pessimistic about survival, but attempting to be aggressively open-minded about what will happen, instead of confirmation-biasing my views from 2015.

your close circle is not more conscious or more sentient than people far away, but you care about your close circle more anyways

Or, more specifically, this is a non sequitur with respect to my deontology, which holds regardless of whether I personally like or privately wish for the wellbeing of any particular entity.

Well presumably because they're not equating "moral patienthood" with "object of my personal caring". 

Something can be a moral patient, whom you care about to the extent you're compelled by moral claims, or whose rights you are deontologically prohibited from trampling on, without your caring about that being in particular.

You might make the claim that calling something a moral patient is the same as saying that you care (at least a little bit) about its wellbeing, but not everyone buys that claim.

Eli Tyre

An optional feature that I think LessWrong should have: shortform posts that get more than some amount of karma get automatically converted into personal blog posts, including all the comments.

It should have a note at the top "originally published in shortform", with a link to the shortform comment. (All the copied comments should have a similar note).
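
To make the intended behavior concrete, here's a rough TypeScript sketch of the rule I have in mind. Everything in it (the names, the karma threshold, the toggle field) is a hypothetical placeholder, not a reference to anything in the actual LessWrong codebase:

```typescript
// Hypothetical sketch of the proposed rule, just to pin down the behavior.
// KARMA_THRESHOLD, ShortformComment, maybePromote, etc. are invented names.

const KARMA_THRESHOLD = 50; // "some amount of karma" -- the exact value is a placeholder

interface ShortformComment {
  id: string;
  karma: number;
  autoPromotionEnabled: boolean; // author's default setting combined with the per-post toggle
  comments: { id: string; body: string }[];
}

// Stand-in for whatever the site would actually do to create the post.
function promoteToPersonalBlogPost(args: {
  sourceShortformId: string;
  headerNote: string;
  copiedComments: { id: string; body: string }[];
}): void {
  console.log(`Promoting shortform ${args.sourceShortformId} ("${args.headerNote}")`);
}

function maybePromote(shortform: ShortformComment): void {
  // Respect the author's opt-out: no promotion if the toggle is off.
  if (!shortform.autoPromotionEnabled) return;
  if (shortform.karma <= KARMA_THRESHOLD) return;

  // Create a personal blog post noting its origin and linking back,
  // carrying over all existing comments (each with a similar note).
  promoteToPersonalBlogPost({
    sourceShortformId: shortform.id,
    headerNote: 'Originally published in shortform',
    copiedComments: shortform.comments,
  });
}
```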

There's some recent evidence that non-neural cells have memory-like functions. This doesn't, on its own, entail that non-neural cells are maintaining personality-relevant or self-relevant information.

Shouldn't we expect that ultimately the only thing selected for is mostly caring about long-run power?

I was attempting to address that in my first footnote, though maybe it's too important a consideration to be relegated to a footnote. 

To say it differently, I think we'll see selection on evolutionary fitness, which can take two forms:

  • Selection on AIs' values, for values that are more fit, given the environment.
  • Selection on AIs' rationality and time preference, for long-term strategic VNM rationality.

These are "substitutes" for each other. An agent can either have adaptive values, adaptive strategic orientation, or some combination of both. But agents that fall below the Pareto frontier described by those two axes[1], will be outcompeted.

Early in the singularity, I expect to see more selection on values, and later in the singularity (and beyond), I expect to see more selection on strategic rationality, because I (non-confidently) expect the earliest systems to be myopic and incoherent in roughly similar ways to humans (though the distribution of AIs will probably vary more on those traits than the distribution of humans does).

The fewer generations there are before strong VNM agents with patient values / long time preferences, the less I expect small amounts of caring for humans in AI systems to be eroded.

  1. ^

    Actually, "axes" are a bit misleading since the space of possible values is vast and high dimensional. But we can project it onto the scalar of "how fit are these values (given some other assumptions)?"
