Well, I remember a moment in BLAME! (a manga that's largely, aesthetically, about the disappearance of heirloom strains of humanity) where someone described Killy as human, even though he later turns out to (also?) be an immortal special safeguard, but they may just not have known that. It's possible the author didn't even know it at the time (I don't think the plot of BLAME! was planned in advance).
There seems to be real acrimony over whether a transhumanist future is definitionally a future where humans are more or less extinct. I've always thought we should just refer to whatever humans (voluntarily, uncoerced) choose to become as human, just as American-made or American-controlled jets are called "American", or in the same way that a human's name doesn't change after all of their cells have been replaced.
But you know, I don't think I've ever seen this depicted in science fiction. Seems bad. Humans can't imagine humanity becoming something better. Those who want humanity to become something better are pitted against those who want humanity to survive, as if these causes can't be unified. The language for the synthesis seems not to exist, or to be denied.
This is probably too complicated to explain to the general population.
I think it's workable.
No one ever internalises the exact logic of a game the first time they hear the rules (unless they've played very similar games before). A good teacher gives them several levels of approximation, then they play at the level they're comfortable with. Here's the level of approximation I'd start with, which I think is good enough.
"How much would we need to pay you for you to be happy to take the survey? Your data may really be worth that much to us, we really want to make sure we get answers that represent every type of person, including people who value their time a lot. So name your price. Note, you want to give your true price. The more you ask, the less likely it is you'll get to take the survey."
(if callee says "you wouldn't be able to afford it", say "try us.")
(if callee requests a very high amount, double-check and emphasise again that the more they ask, the less likely it is that they'll get to take the survey and receive such a payment; make sure they're sure. Maybe explain that the math is set up so that they can't benefit from overstating it; that setup is sketched below)
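For what it's worth, the "math" here is, I believe, a Becker-DeGroot-Marschak style mechanism: draw a random payment after the person states their price, and pay the draw, not the stated price. A minimal sketch; the uniform distribution and the budget cap are my assumptions, not anything specified above:

```python
import random
from typing import Optional

def bdm_offer(stated_price: float, budget_cap: float = 500.0) -> Optional[float]:
    """One Becker-DeGroot-Marschak style offer round (illustrative sketch).

    Draw a random payment threshold p, here uniform on [0, budget_cap].
    If the respondent's stated price is at or below p, they take the
    survey and are paid p, not their stated price.
    """
    p = random.uniform(0.0, budget_cap)
    if stated_price <= p:
        return p   # respondent takes the survey and is paid p
    return None    # no deal this round

# Why honesty is optimal: overstating your price only forfeits draws you
# would have been happy to accept; understating risks being paid less than
# your true cost. The payment never depends on the stated price directly.
```

Because the payment is the random draw rather than the stated price, shading your answer in either direction can only lose you money in expectation, which is exactly the "you can't benefit from overstating it" property the script promises.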
I had some things to say after that interview (he said some highly concerning things), but I ended up not commenting on this particular one, because it's probably mostly a semantic disagreement about what counts as a human versus an AI.
When a human chooses to augment themselves to the point of being entirely artificial, I believe he'd count that as an AI. He's kind of obsessed with humans merging with AI, in a way that suggests he doesn't really see that as just being what humans now are, after alignment.
Has a laser ever been fired outdoors in Japan?
Perhaps customs notices that both parties have deep pockets, and it's become a negotiation, further slowed by the fact that the negotiation has to happen entirely under the table.
I think it's not a fluke at all. Decision theory gave us a formal-seeming way of thinking about the behaviour of artificial agents long in advance of having anything like them; you have to believe you can do math about AI in order to think it's possible to arrest the problem before it arrives. And drawing this analogy between AI and idealised decision-theoretic agents smuggles in a sorcerer's apprentice frame (where the automaton arrives already strong, and follows instructions in an explosively energetic and literal way) that makes AI seem inherently dangerous.
So to be the most strident and compelling advocate of AI safety you had to be into decision theory. Eliezer exists in every timeline.
The usual thought, I guess. We could build forums that're sufficiently flexible that features like this could be added to them without any involvement from hosts (in this case I'd implement it as a Proposal/mass commitment to read 'post of the day's, sketched below, plus the introduction of a 'post of the day' tag; I don't think this even requires radical extensibility, just the tasteweb model), and we should build those instead of building more single-purpose systems that are even less flexible than the few-purpose systems we already had.
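To make "Proposal/mass commitment" concrete, here's a minimal sketch of what that user-level object might look like. Everything here (the names, the threshold semantics, the tag field) is hypothetical illustration, not an existing tasteweb API:

```python
from dataclasses import dataclass, field

@dataclass
class MassCommitment:
    """A proposal that binds its signatories only once enough people join.

    E.g. "we will each read anything tagged 'post of the day'",
    activating at a threshold so no one commits to a ritual that
    nobody else is doing.
    """
    description: str
    tag: str                   # e.g. "post-of-the-day"
    threshold: int             # signatories needed before the commitment binds
    signatories: set[str] = field(default_factory=set)

    def sign(self, user_id: str) -> bool:
        """Add a signatory; returns True once the commitment is active."""
        self.signatories.add(user_id)
        return self.active

    @property
    def active(self) -> bool:
        return len(self.signatories) >= self.threshold
```

The point is that this is plain user-created data plus a tag, so a sufficiently flexible forum could host the whole feature without the host shipping any new code.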
Do we know whether wolves really treat scent marks as boundary markers?
One confusing thing about wolf territoriality: wolves frequently signal their locations honestly through howling, while trying (and imo failing?) to obfuscate their numbers in the way that they howl.
While I do think there are many reasons pluralism isn't stable, that it becomes increasingly unstable as information technology advances, and that there might not ever meaningfully be pluralism under AGI at all (eg, there will probably be many agents working in parallel, but those agents might basically share goals, and be subject to very strong oversight in ways that humans often pretend to be but never have been), all of which I'd like to see Ngo acknowledge, the period of instability is fairly likely to be the period in which the constitution of the later stage of stability is written, so it's important that some of us try to understand it.