I agree with the cat-and-mouse metaphor, and that we should assume an AI will be hyper-competent.
At the same time, it will be restricted to operating within the constraints of the systems it can influence. My main point, which I admit was poorly made, is that cross-site scripting attacks can be covered with a small investment, which eliminates clever JavaScript as a possible attack vector. I would place lower probability on this being the way an AI escapes.
I would place higher probability on an AI exploiting a memory-buffer-type error similar to the one you referenced. Furthermore, I would expect it to be in software the AI is running on top of and can easily experiment/iterate on (OS, container, whatever). Whereas browser interactions are limited in iteration by the number of times a user calls the service, one would expect the local software can be manipulated and experimented with constantly, constrained only by the CPU/IO resources available.
That is okay with me; what do you want to discuss?
Disagreements can be resolved!
I see your motivation for writing this up as fundamentally a good one. Ideally, every conversation would end in mutual understanding and closure, if not full agreement.
At the same time, people tend to resent attempts at control, particularly around speech. I think part of living in a free and open society is not trying too hard to control the way people interact.
I hypothesize the best we can do is try to emulate what we see as the ideal behavior and shrug it off when other people don't meet our standards. I try to spend my energy on being a better conversation partner (not to say I accomplish this) instead of trying to make other people better at conversation. If you do the same, and your theory of what people want from a conversation partner accurately models the world, you will have no shortage of people with whom to have engaging discussions and test your ideas. You will be granted the clarity and closure you seek.
By 'what people want' I don't mean being merely agreeable or flattering. I mean interacting with tact, brevity, respect, receptivity to feedback, attention, and other qualities people value. You need to appeal to the other person's interest. Some behaviors essential to discussion, like disagreeing, will make certain folks back off even if you do it in the kindest way possible, but I don't think that's something that can be changed by policy or any other external action. I think it's something they need to solve on their own.
Lots of low-cost ways to prevent this, perhaps already implemented (I don't use GPT-3, or I'd verify). Humans have been doing this for a while, so we have a lot of practice defending against it.
https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html
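For concreteness, here is a minimal sketch (in TypeScript) of the kind of contextual output encoding that cheat sheet recommends. The escapeHtml helper is hypothetical and just for illustration; a real application should rely on a vetted templating engine or sanitization library rather than hand-rolled escaping.

```typescript
// Minimal sketch of output encoding, the core XSS defense the OWASP cheat
// sheet describes. escapeHtml is a hypothetical helper for illustration;
// real apps should use a vetted templating engine or sanitization library.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// Untrusted input renders as inert text instead of executable markup.
const userInput = '<img src=x onerror="alert(1)">';
const safeHtml = `<p>${escapeHtml(userInput)}</p>`;
console.log(safeHtml); // <p>&lt;img src=x onerror=&quot;alert(1)&quot;&gt;</p>
```

The point is just that the defense is mechanical and cheap: encode untrusted data for the context it lands in, and the clever payload never executes.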
I enjoyed your post.
I am relatively new to Less Wrong, but I have also been influenced by Buddhism, and am glad to see it come up here.
The confusion you point at between faith and belief is appreciated; it was an important distinction I did not make for roughly the first 20 years of my life. The foundational axiom I use so as not to fall into the infinite skepticism you mention is the idea that it's okay to try to build, help, learn, and contribute even if you don't understand things completely. I also hold out hope for the universe and life to ultimately make sense, and I try to understand them, but I suspect it will all amount to an absurd Sisyphean act.
What is referred to as faith or trust in the post I refer to as open mindedness. I think it maps without issue to the same concept you are referring to, but I am open to distinctions being drawn.
One other thing I wanted to mention: if anyone found the distinction between belief and faith especially interesting and would like to understand how belief can be detrimental even within religious communities, I recommend the book The Religious Case Against Belief by James P. Carse. It explores this subject in depth and is quite enjoyable.
I think it's fair to say direct democracy would not eliminate lobbying power. And to your final point, I agree that reliable educational resources, or perhaps some other solution, would be needed to make sure whoever is doing the voting is as rational as they can be. It's not sufficient to only give everyone a vote.
Regarding your point about running ads, to make sure I am understanding: do you mean the number of people who actually read the bill will be sufficiently low that a viable strategy to get something passed would be to appeal to the non-reading voters and misinform them?
Thank you for the additional detail. I understand your point about conformity to rules, the way that increases predictability, and how that allows larger groups to coordinate effectively. I think I am getting hung up on the word trust, as I tend to think of it as taking for granted that someone has good intentions towards me and shares basic values (e.g., they can't think what's best for me is to kill me). I think I am pretty much on board with everything else in the article.
I wonder if another productive way to think about all this (continuing to riff on interfaces, and largely restating what you have already said) would be something like:

- When people form relationships, they understand how each other will behave, and relationships enable coordination.
- Humans can handle understanding and coordinating with up to Dunbar's number of people.
- To work around this limit above 150, we begin grouping people, essentially abstracting them back down to a single person (named, for example, 'Sales' or 'The IT Department').
- If that group of people follows rules/process, the group becomes understandable, and we can have a relationship and coordinate with it.
- If we all follow shared rules, everyone can understand and coordinate with everyone else without having to know them.

I think I am pretty much agreeing with your point that small groups can predict each other's behavior, and that this is key. Instead of saying one person trusts another person, I'd favor saying one person understands another person. I think this language is compatible with your examples of sarcasm, lies, and the prisoner's dilemma.
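To make the abstraction concrete, here is a toy TypeScript sketch of the idea (all names are mine, not from your post): a department that follows rules/process presents the same interface as a single person, so outsiders can coordinate with it without knowing its members.

```typescript
// Toy sketch: a group that follows rules/process exposes the same
// "interface" as a single person, so others can coordinate with it
// without knowing its members. All names here are hypothetical.
interface Agent {
  handle(request: string): string;
}

class Person implements Agent {
  constructor(private name: string) {}
  handle(request: string): string {
    return `${this.name} handles: ${request}`;
  }
}

// A department abstracts many people back down to one predictable Agent.
class Department implements Agent {
  constructor(private label: string, private members: Person[]) {}
  handle(request: string): string {
    // Internal routing is hidden; outsiders see only the department's rules.
    const worker = this.members[0]; // e.g., round-robin in a fuller sketch
    return `[${this.label}] ${worker.handle(request)}`;
  }
}
```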
Anyway, I'll leave it at that. Thank you for the discussion.
I enjoyed your post. Specifically, using programs as an analogy for society seems like something that could generate a few interesting ideas. I have actually done the same and will share one of my own thoughts in this space at the end.
To summarize some of your key points:
Regarding mental prediction of group behavior as the definition of trust: I am not sure about this one. What about when you reliably predict someone will lie?
Regarding the continuum of formality for social rules: I agree that formality is an important dimension, although I would suggest decoupling enforcement and understanding. Consider people who work at corporations or live under tyrannies: these environments have high enforcement and concentrations of power, but often an opaque ruleset. Karl Popper, in his book The Open Society and Its Enemies, spends a good amount of time discussing the institutionalization of norms into policies/laws etc., vs. rules which simply give people in a hierarchy discretionary power. You may enjoy it; see Chapter 17, Section VII. The overall point, though, is that for rules to be understandable in a meaningful sense (beyond "don't piss off the monarch") they can't delegate discretion to other people.
Creating interfaces that are consistent means the circumstances of individuals have to be abstracted away.
Is the idea behind this maybe something like: everybody in a democracy implements `get_vote(issue) -> true|false`?
Is this a problem?
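To make sure I am reading the interface idea the way you intend, a minimal TypeScript sketch (the names are mine):

```typescript
// Minimal sketch of the consistent-interface reading: every citizen exposes
// the same get_vote method, and the system depends only on that interface,
// abstracting individual circumstances away. Names are hypothetical.
interface Citizen {
  get_vote(issue: string): boolean;
}

function decide(citizens: Citizen[], issue: string): boolean {
  const yes = citizens.filter((c) => c.get_vote(issue)).length;
  return yes > citizens.length / 2; // simple majority
}
```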
Lastly, to share an idea that I am currently trying to research more extensively and that uses the software analogy:
What if someone founded a new political party whose candidates run on the platform that, if elected, they will send every bill voted on to their constituents via an app of sorts and will always vote the way the constituency says? They would essentially have no opinions of their own. I think of this political party as an adapter that turns a representative democracy into a direct (or liquid, or whatever you implement in the app) democracy.
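A rough TypeScript sketch of that adapter, with hypothetical names (Legislator, ConstituencyApp) standing in for the real institutions:

```typescript
// Sketch of the "adapter party": a representative implements the
// legislature's voting interface but forwards every decision to the
// constituency. All interface and class names here are hypothetical.
interface Legislator {
  vote(bill: string): boolean; // true = yea, false = nay
}

interface ConstituencyApp {
  pollConstituents(bill: string): boolean; // majority result from the app
}

// No opinions of its own: representative democracy in, direct democracy out.
class AdapterRepresentative implements Legislator {
  constructor(private app: ConstituencyApp) {}
  vote(bill: string): boolean {
    return this.app.pollConstituents(bill);
  }
}
```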
I think I am troubled by the same situation as you: how to organize a society that uses hierarchy less but still has law, order, and good coordination between people. To me, more direct forms of democracy are the next logical step. Doing the above would erode lobbying power and corruption. I am researching similar concepts for companies as well.
Hi jyby
I'd be interested in hearing more of your thoughts here. I think you formulated the question and alluded to your current leanings, but I'd like to hear more about what form of authoritarianism you think is required to prevent the collapse of biodiversity and climate change. Would you be willing to share more?