Making up something analogous to Crocker's rules but specifically for pronouns would probably be a good thing: a voluntary commitment to surrender any pronoun preferences (gender-related or otherwise) in service of communication efficiency.
Now that I think about it, a literal and expansive reading of Crocker's rules themselves includes such a surrender of the right to enforce pronoun preferences.
You're assuming that:
- There is a single AGI instance running.
- There will be a single person telling that AGI what to do.
- The AGI's obedience to this person will be total.
I can see these assumptions holding approximately true if we get really really good at corrigibility and if at the same time running inference on some discontinuously-more-capable future model is absurdly expensive. I don't find that scenario very likely, though.
As you repeatedly point out, there are multiple solutions to each issue. Assuming good enough technology, all of them are viable. Which (if any) solutions end up being illegal, incentivized, made fun of, or made mandatory becomes a matter of which values end up being normative. Thus, these people may be culture-warring because they think they're influencing "post-singularity" values. This would betray the fact that they aren't really thinking in classical singularitarian terms.
Alternatively, they just spent too much time on twitter and got caught up in dumb tribal instincts. Happens to the best of us.
Against 1.c ("Humans need at least some resources that would clearly put us in life-or-death conflict with powerful misaligned AI agents in the long run"): The doc says that "Any sufficiently advanced set of agents will monopolize all energy sources, including solar energy, fossil fuels, and geothermal energy, leaving none for others". There are two issues with that statement:
First, the qualifier "sufficiently advanced" is doing a lot of work. Future AI systems, even if superintelligent, will be subject to physical constraints and to economic concepts such as opportunity costs. The most efficient route for an unaligned ASI (or set of ASIs) to expand its energy capture may well sidestep current human energy sources, at least for a while. We don't fight ants to capture their resources.
Second, it assumes that advanced agents will want to monopolize all energy sources. While instrumental convergence is true, partial misalignment with some degree of concern for humanity's survival and autonomy is plausible. Most people in developed countries have a preference for preserving the existence of an autonomous population of chimpanzees, and our "business-as-usual-except-ignoring-AI" world seems on track to achieve that.
Taken together, both arguments paint a picture of a future ASI mostly not taking over the resources we currently use on Earth, largely because it's easier to take over other resources (for instance, getting minerals from asteroids and energy from orbital solar capture). It then takes over the lightcone except Earth, because it cares a little about preserving an independent humanity on Earth. In this scenario, we (the subset of humans who care about the lightcone) lose spectacularly to an ASI in a conflict over the lightcone, but humanity is not in a life-or-death conflict with an ASI.
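To put rough numbers on the opportunity-cost point, consider how little of the Sun's output Earth actually intercepts. Here's a back-of-envelope sketch in Python; the figures are standard astronomical estimates, not numbers from the doc under discussion:

```python
# Back-of-envelope: how much of the Sun's total output does Earth intercept?
# Both figures are standard order-of-magnitude astronomical estimates.
SOLAR_LUMINOSITY_W = 3.8e26      # total power radiated by the Sun
EARTH_INTERCEPTED_W = 1.7e17     # sunlight hitting Earth (~170,000 TW)

fraction = EARTH_INTERCEPTED_W / SOLAR_LUMINOSITY_W
print(f"Earth intercepts ~{fraction:.1e} of the Sun's output")
# -> ~4.5e-10, i.e. less than one part in two billion
```

For a resource-maximizer, everything humanity currently uses is a rounding error next to what orbital solar capture alone could provide, which is why "sidestep Earth's resources, at least for a while" is economically plausible.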
Note in particular that the Commission is recommending that Congress "Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership".
In other words, if these recommendations get implemented, a big portion of the big 3 labs' revenue will pretty soon come from big government contracts. Looks like a soft nationalization scenario to me.
I'm not sure if this preference of mine holds for most people, but I think I'd be easier to threaten with real photos than with fake ones. There's an element of guilt and shame in having taken and sent real photos to a stranger. I don't think scammers would invest in generating fake pictures for a potential victim who may well just block them after the first message. I think the deepfake-first strategy would be both less profitable and less enjoyable for these sick fucks than the "trick into sexting" strategy. Now, if the victim selection / face acquisition / deepfake generation / victim first contact pipeline were to be automated, I can see things changing.
I think people downvoted you because you buried the lede. That is, neither the post's title nor the introduction you pasted here explain that your post is about having a subdermal magnet implanted.
Instead, it provides "context info" that readers on this site are likely to already know and thus consider irrelevant. I suspect most people downvoted before clicking the link to your website. Consider making your titles and introductions more to-the-point, and including TLDRs in the case of linkposts like this.
Having said that, it's always cool to hear about people's experiences with subdermal magnets, and that kind of content definitely belongs on this site.
"creating and posting have no filter"
False. There is a filter for content submitted by new accounts.
Assuming private property as currently legally defined is respected in a transition to a good post-TAI world, I think land (especially in areas with good post-TAI industrial potential) is a pretty good investment. It's the only thing that will keep on being just as scarce. You do have to assume the risk of our future AI(-enabled?) (overlords?) being Georgists, though.
I suspect that most people whose priors have not been shaped by a libertarian outlook are not very surprised by the outcome of this experiment.