Interesting. The linked post is a bit too political for direct posting on LW, and definitely NOT about AI. To the extent that it DOES apply to AI, I worry that it applies more to how the AI can constrain/weaken humans (or at least help humans constrain each other) than the other way around.
Strong upvoted for sheer quality of discussion, not agreement.
I disagree that galaxy-brained legal writing and cultural strength are sufficient to survive changing technological eras, even for such Schelling points as constitutions. The 4th Amendment ("right of the people to be secure in their persons, houses, papers, and effects") is effectively not enforced against massive non-consensual human manipulation research, not because the law privileged legal searches over military searches, but because technological and political changes effectively made it unenforceable, pulling the rug out from underneath the constitution. The people won't rally against the NSA because the NSA is overwhelmingly well-positioned to persuade them all that such thinking is something only low-status people do. Culture is also no longer sufficient because, like legal documents, it can now be subverted by humans who have gained more than enough understanding and capability to do so.
These exact same capabilities will inescapably bury the cultural sovereignty/consistency of all of Islam, unless countries start banning computers North Korea-style, due to the de facto power wielded by hackers and the data scientists who make up the mass surveillance apparatus. And I'm not sure even that is possible, since the US and China are competing to arm Middle Eastern regimes with these sorts of capabilities. We need something else entirely, like augmented humans, that changes too quickly for the researchers and subverters to tighten the noose around. Hence why I keep insisting on people like Valentine being the ones to try: the AI policy community in DC is too slow (possibly even in deadlock) and needs assistance from the massive reserves of optimization power currently living in the Bay Area.
I can see this kind of funky-angle-finding mental process yielding valuable insights and results for AI policy. It will probably take multiple attempts, though. Your work inspired me to buy Downing's Calculus the Easy Way so I could learn real calculus instead of the standard shorthand. I cracked it open this morning hoping it would go differently from high school, and it sorta didn't, which was pretty discouraging, but I'm still going to try again on a different day when I'm in a different state of mind.
You might want to check your local community college. They likely offer calculus, at least up to Calculus 2, and maybe differential equations. Not only is a class with an instructor you can interact with useful, but they might also have some sort of math lab. I worked for 3-4 years as a math lab tutor while in college; I was one of several tutors whose whole job was to provide supplementary instruction. They may even allow non-students.
A good teacher or tutor will be able to try multiple ways of explaining a concept, tailored to your questions. It's also quite valuable to connect with peers at your level who are trying to make sense of the same new concepts you are.
I'm sure there are online communities too. Anyways, if that book isn't working for you, other books or other forms of learning might work better.
Oops, I forgot about base rates and the psychological effect of repeat exposure to base rate people on LW users.
It wasn't that I was having a hard time learning; it was that I wasn't having fun, because it felt too much like my experience being force-fed math in the K-12 education system. I'm bad at mental calculation but good at learning and applying, and the textbooks were always adequate but never fun.
The thing I mean by “superintelligence” is very different from a government. A government cannot design nanotechnology, and is made of humans which value human things.
The two examples everyone loves to use to demonstrate that massive top-down engineering projects can sometimes be a viable alternative to iterative design (the Manhattan Project and the Apollo Program) were both government-led initiatives, not the work of single very smart people alone in their garages. I think it's reasonable to conclude that governments have considerably more capacity to steer outcomes than individuals, and are the most powerful optimizers that exist at this time.
I think restricting the term "superintelligence" to "only that which can create functional self-replicators with nano-scale components" is misleading. Concretely, that definition of "superintelligence" says that natural selection is superintelligent, while the most capable groups of humans are nowhere close, even with computerized tooling.
Are you saying that a government-funded total effort could not design nanotechnology, or that because a present-day nanotech rush would be accomplished by a team of elite scientists and engineers with near-future AI tools, it's not "the government"? (The government being made of elderly leaders and mountains of people who process administrative procedures; were a nanotech rush to succeed, it would be accomplished by a "skunkworks"-style effort with nearly unlimited resources.)
I'm just kinda confused, because "the government" has not meaningfully tried in this domain. A true total effort would take place at large integrated sites; it would identify potential routes to the goal and fully fund them all in parallel for redundancy. You would see rush-built buildings and safety procedures, and most federal laws would be waived.
As I understand it, the NNI (National Nanotechnology Initiative) essentially gives separate university labs small grants to work on "nanotechnology", which includes a broad range of topics unrelated to the important one of a self-replicating molecular assembler.
Presumably a reasonable outside view would be that this effort will not develop such an assembler before 2100, if ever.
If it became known that a rival government had nearly finished a working assembler and was busy developing "kill dust" that could make any human on Earth drop dead on command, you would see such an effort.
I don't think of governments as being... among other things "unified" enough to be superintelligences.
Also, see "Things That Are Not Superintelligence" and "Maybe The Real Superintelligent AI Is Extremely Smart Computers".
So, while I do agree that considering the dynamics of governments is very relevant to the future of thinking systems...
whew! didn't need all those words. the author feels like they come from a particular kind of background with which I have some very strong agreements and some equally strong disagreements, and reading their writing is just a bit too much for me (I really don't think that encouraging people to think of themselves as lords - controlling commanders of others - is a move that produces a culture better than our own). so, here's a Kagi summary:
- The author argues that enumerated rights in constitutions inevitably lead to a narrowing of rights over time as rights not explicitly listed are not recognized.
- Early 20th century British novels imagined dystopian scenarios of a German occupation and saw even minor government overreach as tyrannical.
- Islam has remained remarkably consistent in interpretation over 1400 years due to its theological foundations, while common law traditions evolve.
- The attacks on Charlie Hebdo highlighted deep divisions between interpretations of appropriate responses to blasphemy in Islam.
- Many original intentions of the US Bill of Rights have eroded over time or been overridden by new amendments and case law interpretations.
- American gun culture has roots in efforts to arm freed blacks after the Civil War and grew from the Black Panthers' embrace of Second Amendment rights.
- Canadians embraced trucker convoy protests as asserting rights from their Charter of Rights and Freedoms in a way courts did not.
- Future constitutions could learn from features like Islam's resistance to change, pride in rights, and explicit enforcement through violence.
- A "Dark Bill of Rights" is proposed that is unchanging, cultivates pride in rights, and demands violent enforcement of rights against tyranny.
- Cultural forces like understanding of the Second Amendment could survive the dissolution of its legal basis in a future dystopian scenario.
so... there's an unchanging thing, parts of the system durably refer back to it, its enforcement is decentralized. seems pretty compatible with the open agency line of thinking.
Valentine, is there anything else in the article that you feel is worth compressing, or does this cover enough of it that you feel someone who didn't read the article would get enough from it?
I thought the linked article ("…Teach a Man to Revolt: Dreams of a Dark Bill of Rights" by Kulak) was a fun and clarifying read. It explained some things about both Islamic political dynamics and the American Constitution that I hadn't fully appreciated before.
It occurred to me that some folk here might like reading it too. The main reason is that governments & social movements are, to me, clear examples of current superintelligences. The attempts to align & constrain them are very related to questions of AI alignment, in my opinion; they're the closest we have to an empirical study of aligning superintelligences.
In that light, this article gives several examples of things that empirically work very well, and why; and also some ways that attempted constraints fail, and why. The author's proposed "Dark Bill of Rights" is a kind of spitballing attempt to recombine the best of American liberty norms with Islamic cultural endurance.
Even if Kulak turns out to be exactly spot-on, I haven't put much thought into how to translate this method into a literal AI. It might be interesting to discuss how one might do so.