let's try it from the other direction:
do you think stable meta-values are to be observed between australopithecines and, say, contemporary western humans? on the other hand: do values across primitive tribes or early agricultural empires not look surprisingly similar? third hand: what makes it so that we can look back and compare those value systems, while it would be nigh-impossible for the agents in question to wrap their heads around even something as "basic" as representative democracy?
i don't think it's thought as much as capacity for it that changes one's ...
In which way would the infection-resistant body or the lightcone destiny-setting world government pose limits to evolution via variation and selection?
To me it seems that the alternative can only ever be homeostasis - of the radical, lukewarm-helium-ion-soup kind.
When I say:
You state Pythia mind experiment. And then react to it
I imply that in doing so you are citing Land.
er - this defeats all rules of conversational pragmatics but look, i concede if it stops further more preposterous rebuttals.
More importantly this is completely irrelevant to the substance of the discussion. My good faith doesn't depend in the slightest on whether you're citing Land or writing things yourself.
of course it doesn't. my opinion on your good faith depends on whether you are able to admit having deeply misunderstood the post...
Look friend.
You said you understood from the beginning that the text in question was Land's.
In your first comment, though, you clearly show that not to be the case:
> I do not see how you are doing that. You state Pythia mind experiment. And then react to it: "You go girl!". I suppose both the description of the mind experiment and the reaction are faithful. But there is no actual engagement between orthogonality thesis and Land's ideas.
This clearly marks me as the author, as separate from Land.
I find it hard to keep engaging under an assumption of good faith on these premises.
uh I see - I’ve put the editor's note in blockquote; hope that helps at least to make its meta- character clearer (:
sure? that would blockquote 75% of the article
perhaps I could blockquote the editor's note instead?
My bad, I didn't check and was tricked by the timing. Sincere apologies.
How would you suggest the thing could be improved? (the TeX version in the PDF contains Nick Land only).
I was thinking perhaps of adding a link to each XS item, but wasn't really looking forward to rehashing comments from what has probably been the nadir of r/acc / LW diplomatic relations
the editor's note, mine, is marked with the helpful title "editor's note", while the xenosystem pieces about orthogonality are marked with "xenosystems: orthogonality".
you seem to be the only user, although not the only account, who experienced this problem.
propaganda of nick land's idea
wait - are you aware that the texts in question are nick land's? i think it should be pretty clear from the editor's note.
besides, in the first extract, the labels part was entirely incidental - and has literally no import to any of the rest. it was an historical artefact; the meat of the first section was, well, the thing indicated by its title and its text. i definitely see the issue of fixating on labels, now, tho - and i thank you for providing an object lesson.
ideological turing test
the purpose of the idelogical turing te...
how is meditations on moloch a better explanation of the will-to-think, or a better rejection of orthogonality, than the above?
I think the argument is stated as clearly as it’s appropriate under the assumption of a minimally charitable audience; in particular, I am puzzled at the accusations of “propaganda”. propaganda of what? Darwin? intelligence? Gnon?
I cannot shake the feeling that the commenter might have only read the first extract, and either fell victim to fnords or found it expedient to leave a couple of them in for the benefit of less sophisticated readers - in particular, has the commenter not noticed that the whole first part of Pythia Unbound is an ideological Turing test, passed with flying colours?
wait - do you consider that an insult? i snuggled with the best of them
[curious about the downvotes - there's usually much /acc criticising around these parts, I thought having the arguments in question available in a clear and faithful rendition would be considered an unalloyed good from all camps? but i've not poasted here since 2018, will go read the rules in case something changed]
So, something like "quiet quitting"?
Well, no - not necessarily. And with all the epistemic charity in the world, I am starting to suspect you might benefit from actually reading the review at this point, just to have more of an idea of what we're talking about.
Funny, I see "exit" as more or less the opposite of the thing you are arguing against. Land (and Moldbug) refer to this book by Hirschman, where "exit" is contrasted with "voice" - the other way to counter institutional/organisational decay. In such a model, exit is individual and aims to carve out a space for a different way of doing things, while voice is collective, and aims to steer the system towards change.
Balaji's network state, cryptocurrency, etc are all examples. Many can run parallel to existing institutions, working along different dimensions, and t...
I'm trying to understand where the source of disagreement lies, since I don't really see much "overconfidence" - i.e., I don't see much of a probabilistic claim at all. Let me know if one of these suggestions points somewhere close to the right direction:
I'm not sure I agree - in the original thought experiment, it was a given that increasing intelligence would lead to changes in values in ways that the agent, at t=0, would not understand or share.
At this point, one could decide whether to go for it or hold back - and we should all consider ourselves lucky that our early sapiens predecessors didn't take the second option.
(btw, I'm very curious to know what you make of this other Land text: https://etscrivner.github.io/cryptocurrent/)
I personally don't see the choice of "allowing a more intelligent set of agents to take over" as particularly altruistic: personally, I think intelligence trumps species, and I am not convinced that interrupting its growth to make sure more sets of genes similar to mine find hosts for longer would somehow be "for my benefit".
Even in my AI Risk years, what I was afraid of is the same thing I'm afraid of now: Boring Futures. The difference is that in the meantime the arguments for a singleton ASI, with a single unchangeable utility function that is not more intelligence/know...
Not hitting on people on their first meetup is good practice, but none of the arguments in OP seem to support such a norm.
Perhaps less charitably than @Huluk, I find the consent framing almost tendentious. It's quite easy to see that the dynamics being denounced have little to do with consent; here are two substitutions which show how the examples are professional-ethics matters, orthogonal to the intimacy axis:
- one could easily swap "sexual relations" with "access to their potential grantee's timeshare" without changing much in terms of moral calculus;
- one...