There seems to be real acrimony over whether a transhumanist future is definitionally a future where humans are more or less extinct. I've always thought we should just refer to whatever humans (voluntarily, uncoerced) choose to become as human, just as American-made or American-controlled jets are called "American", or in the same way that a human's name doesn't change after all of their cells have been renewed.
But you know, I don't think I've ever seen this depicted in science fiction. Seems bad. Humans can't imagine humanity becoming something better. Those who want humanity to become something better are pitted against those who want humanity to survive, as if these causes can't be unified. The language for the synthesis seems not to exist, or to be denied.
This is probably too complicated to explain to the general population
I think it's workable.
No one ever internalises the exact logic of a game the first time they hear the rules (unless they've played very similar games before). A good teacher gives them several levels of approximation, then they play at the level they're comfortable with. Here's the level of approximation I'd start with, which I think is good enough.
"How much would we need to pay you for you to be happy to take the survey? Your data may really be worth that much to us, we really want to make sure we get answers that represent every type of person, including people who value their time a lot. So name your price. Note, you want to give your true price. The more you ask, the less likely it is you'll get to take the survey."
(if callee says "you wouldn't be able to afford it", say "try us.")
(if the callee requests a very high amount, double-check: emphasise again that the more they ask, the less likely it is that they'll get to take the survey and receive such a payment, and make sure they're sure. Maybe explain that the math is set up so that they can't benefit from overstating it.)
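For concreteness, one standard way to make a "name your price" question incentive-compatible is a Becker-DeGroot-Marschak style lottery: draw a payment threshold at random after the answer is given, run the survey only if the ask is at or below the threshold, and pay the threshold rather than the ask. The sketch below is my own illustration of that idea, not anything specified in the comment above; the function name, budget cap, and uniform threshold distribution are all illustrative assumptions.

```python
import random

def offer_survey(stated_price, max_budget=100.0):
    """BDM-style sketch (assumed mechanism, not from the original post).

    A payment threshold is drawn at random, independently of the answer.
    If the stated price is at or below the threshold, the respondent takes
    the survey and is paid the threshold, not their ask. Overstating only
    loses them offers they would have been happy with; understating risks
    being paid less than their true price. So the honest answer is optimal.
    """
    threshold = random.uniform(0.0, max_budget)  # illustrative distribution
    if stated_price <= threshold:
        return {"takes_survey": True, "payment": round(threshold, 2)}
    return {"takes_survey": False, "payment": 0.0}

# Example: someone whose true price is $30 does best by just saying $30.
print(offer_survey(30.0))
```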
I had some things to say after that interview; he said some highly concerning things. But I ended up not commenting on this particular thing, because it's probably mostly a semantic disagreement about what counts as a human or an AI.
When a human chooses to augment themselves to the point of being entirely artificial, I believe he'd count that as an AI. He's kind of obsessed with humans merging with AI, in a way that suggests he doesn't really see that merged thing as just being what humans now are, after alignment.
Has a laser ever been fired outdoors in Japan?
Perhaps customs notices that both parties have deep pockets, and it's become a negotiation, further slowed by the fact that the negotiation has to happen entirely under the table.
I think it's not a fluke at all. Decision theory gave us a formal-seeming way of thinking about the behaviour of artificial agents long in advance of having anything like them; you have to believe you can do math about AI in order to think it's possible to arrest the problem before it arrives. And drawing this analogy between AI and idealised decision theory agents smuggles in a sorcerer's apprentice frame (where the automaton arrives already strong, and follows instructions in an explosively energetic and literal way) that makes AI seem inherently dangerous.
So to be the most strident and compelling advocate of AI safety you had to be into decision theory. Eliezer exists in every timeline.
The usual thought, I guess. We could build forums that're sufficiently flexible that features like this could be added without any involvement from the hosts (in this case I'd implement it as a Proposal/mass commitment to read 'post of the day' posts, plus the introduction of a 'post of the day' tag; I don't think this even requires radical extensibility, just the tasteweb model), and we should build those instead of building more single-purpose systems that are even less flexible than the few-purpose systems we already had.
Do we know whether wolves really treat scent marks as boundary markers?
One of the confusing things about wolf territoriality is that they frequently honestly signal their locations through howling, while trying (and imo failing?) to obfuscate their numbers in the way that they howl.
Not a coincidence; there are practical reasons borders end up on thresholds, a sort of quantization that happens in the relative strength calculation. Two models:
I'm pretty sure I'd predict no for 1. Cats don't seem to care about that stuff.
For 2, I'm not sure. If there were a hole in the fence, I'd expect confrontations to happen there, because that's a chokepoint where a cat could get through safely if the other one weren't standing on the other side, and maybe the chokepoint is a vulnerability threshold, too. Chokepoints are thresholds for projectile combat (because when you come through, the defender sees you immediately, but you don't spot them until they start shooting), and cats may be partly characterizable as stealth projectiles.
Also worth noting: dogs, for example, engage in "boundary aggression" at things like fences, but experiments show they're doing it for the love of the game; if you remove the fence, hostilities cease. Cats may have some of this going on as well. They may on some level enjoy yelling and acting tough while being at no risk of having to actually fight.
3: Yeah, but because it makes the relative strength calculation harder. A fence is a blessed device that allows cats to get a good look at each other without engaging. I wish humans had something like that. (A hole in a fence may also be a good device for this)
Well, I remember a moment in BLAME! (a manga that's largely, aesthetically, about the disappearance of heirloom strains of humanity) where someone described Killy as human, even though he later turns out to (also?) be an immortal special safeguard, though they may have just not known that. It's possible the author didn't even know it at that point (I don't think the plot of BLAME! was planned in advance).