James Stephen Brown

Atheist, utilitarian*, artist, coder, documentarian and polymath (jokes... but I do believe the synergy of many fields can lead to novel insights).

I write about moral philosophy, artificial intelligence and game theory—in particular non-zero-sum games and their importance in solving the world's problems. Most of my writing originates on my personal website nonzerosum.games.

I have admitted I am wrong at least 10 times on the internet.


* I don't really class Utilitarianism as an ethical framework in competition with other ethical frameworks; I see it more as a calculus that most people, when it comes down to it, use to determine or assess the generalised virtues, principles and laws that they live by (well, at least I do).


Comments


Hi Seth,

I share your concern that AGI comes with the potential for a unilateral first-strike capability that, at present, no nuclear power has (an absence that is vital to the maintenance of MAD), though I think, in game-theoretic terms, this becomes more difficult the more self-interested (in survival) players there are. As in open-source software, there is a level of protection against malicious code because bad actors are outnumbered: even if they try to hide their code, there are many others who can find it. But I appreciate that hundreds of coders finding malicious code within a single repository is much easier than finding something hidden in the real world, and I have to admit I'm not even sure how robust the open-source model is (I only know how it works in theory). I'm pointing to the principle not as an excuse for complacency, but as a safety model on which to capitalise.

My point about the UN's law against aggression wasn't that it is a deterrent in and of itself, only that it provides a permission structure for any party to legitimately retaliate.

I also agree that RSI-capable AGI introduces a level of independence that we haven't seen before in a threat. And I do understand that inter-dependence is a key driver of cooperation. Another driver is confidence, and my hope is that the more intelligent a system gets, the more confident it is, and the better it is able to balance the autonomy of others with its own goals, meaning it is able to "confide" in others, in the same way that the strongest kid in class was very rarely the bully, because they had nothing to prove. Collateral damage is still damage, after all; a truly confident power doesn't need these sorts of inefficiencies. I stress this is a hope, not a cause for complacency. I recognise that, in the analogy, the strongest kid, the true class alpha, gets whatever they want with the willing complicity of the classroom. RSI-capable AGI might get what it wants coercively in a way that makes us happy with our own subjugation, which is still a species of dystopia.

But if you've got a super-intelligent inventor on your side and a few resources, you can be pretty sure you and some immediate loved ones can survive and live in material comfort, while rebuilding a new society according to your preferences.

This sort of illustrates the contradiction here: if you're pretty intelligent (intelligent enough to be designing a super-intelligent AGI), you're probably smart enough to know that the scenario outlined here has a near-100% chance of failure for you and your family. You've created something more intelligent than you that is willing to hide its intentions and destroy billions of people; it doesn't take much to realise that that intelligence isn't going to think twice about also destroying you.

Now, I realise this sounds a lot like the situation humanity is in as a whole... so I agree with you that...

multipolar human-controlled AGI scenario will necessitate ubiquitous surveillance.

I'm just suggesting that the other AGI teams do (or can, leveraging the right incentives) provide a significant contribution to this surveillance.

Sorry, you’re right, I did misread that. Do you mind if I edit my answer so it addresses your point properly? Happy to continue after the election.

Thanks for your comment. The post itself is meant to challenge the reader to question what is really bias, and what is actually an even-handed view that appears biased due to the shifted centre. But I certainly take your point: beginning in a clearly partisan manner, before putting it in context, might not have been the best approach.

I do think there are defences that can be made of the points you raise.

You took one of the tamest aspects of the radical left here

I agree I have taken a tame aspect of the radical left, and the claim you point to, that Kamala usurped power in an undemocratic and calculated way, is a much more serious accusation. The thing is, though, as serious as the claim is, it lasted about a week in the news cycle, largely because it's a spurious interpretation of events with no real evidence, and it wasn't shifting the political needle. Throughout that week (I was watching Fox's coverage, as I often do) Kamala was surging in the polls, and Fox moved back to its usual fear-mongering about immigrants and sane-washing of whatever abhorrent statement Trump had most recently made. The trans issue, however, is a perennial touchstone that has been used as the consistent example of radical-left wokeness for years, and throughout this campaign.

There is antisemitism amongst the pro-Palestine crowd

I don't doubt you are correct that anti-Israel sentiment can stray into anti-semitism. But the point is about motivations: one side is motivated by sympathy for a population in which tens of thousands of people have been killed over a year, and millions displaced and their homes destroyed; the other is motivated by white supremacy. Do you not see this as a false equivalence?

In the numerous conversations I've had and heard, whenever people refer to the radical left as a counterpoint to the radical right, 99% of the time the examples used are the trans issue or support for Palestine. I have literally never heard Kamala's ascent to candidacy used as an example of radical leftism; it's seen more as a risk-averse aspect of democratic orthodoxy and adherence to a boring status quo, and, as mentioned, it was a short-lived talking point.

In short, I think pronouns and Palestine were fair comparisons. But, again, I agree that I should have spent some time explaining the problem of both-sidesism and the shifted centre before acting in accordance with the principles with which the post concludes.

I don't know nearly enough about economics to know for sure.

I'm in the same position. It's at points like these that I defer to experts, which is what I have advised in the post.

Thanks again for your comment. I hope my reply hasn't been too argumentative; it's meant as an explanatory extension of the post.

I agree; it seems as though the incentives aren't aligned that way, so it ends up incumbent upon the audience to distill nuance out of binary messaging, and to recognise the value of those who do present unique perspectives.

This made me think about how this will come about: whether we will have multiple discrete systems for different functions (language, image recognition, physical balance, executive functions, etc.) working interdependently, communicating through compressed-bandwidth conduits, or whether at some point we can, or need to, literally chuck all the raw data from all the systems into one learning system and let that sort it out (likely creating its own virtual, semi-independent systems).

The nuclear MAD standoff with nonproliferation agreements is fairly similar to the scenario I've described. We've survived that so far, but with only nine participants to date.

I wonder if there's a clue in this. When you say "only" nine participants, it suggests that more would introduce more risk, but that's not what we've seen with MAD: the greater the number becomes, the bigger the deterrent gets. If, for a minute, we forgo alliances, there is a natural alliance of "everyone else" at play when it comes to an aggressor. Military aggression is, after all, illegal. So, the greater the number of players, the smaller the advantage any one aggressive player has against the natural coalition of all other peaceful players. If we take alliances into account, then this simply returns to a more binary question, and the number of players makes no difference.
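As a toy sketch of that scaling (my own illustrative model, assuming equally resourced players and, for the moment, no alliances), a lone aggressor's share of total capability falls as 1/N against the natural coalition of everyone else:

```python
# Toy model: a lone aggressor vs the natural coalition of all other players.
# Assumes equally resourced players and no pre-existing alliances;
# the numbers are illustrative, not empirical.

def aggressor_share(n_players: int) -> float:
    """Fraction of total capability held by a single aggressor."""
    return 1 / n_players

for n in (2, 5, 10, 50):
    print(f"{n:>2} players: aggressor holds {aggressor_share(n):.0%}, "
          f"natural coalition holds {1 - aggressor_share(n):.0%}")
```

At N = 2 it's a genuine standoff (50/50); at N = 50 the aggressor faces a coalition holding 98% of the capability.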

So, what happens if we apply this to an AGI scenario?

First, I want to admit I'm immediately skeptical when anyone mentions a non-iterated Prisoner's Dilemma playing out in the real world, because a Prisoner's Dilemma requires extremely confined parameters and ignores externalities that are present even in an actual prisoner's dilemma (between two actual prisoners). The world is a continuous game, and as such almost all games are iterated games.
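To make that concrete, here's a minimal sketch of the difference (the payoff values are illustrative, chosen only to satisfy the standard temptation > reward > punishment > sucker ordering): in a single round, defection wins outright, but across repeated rounds a reciprocating strategy like tit-for-tat comes out ahead.

```python
# One-shot vs iterated Prisoner's Dilemma with illustrative payoffs.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # exploited cooperator (sucker's payoff)
    ("D", "C"): 5,  # successful defection (temptation)
    ("D", "D"): 1,  # mutual defection (punishment)
}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds):
    seen_by_a, seen_by_b = [], []  # each side sees the other's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(always_defect, tit_for_tat, rounds=1))    # (5, 0): defection wins
print(play(always_defect, tit_for_tat, rounds=100))  # (104, 99)
print(play(tit_for_tat, tit_for_tat, rounds=100))    # (300, 300)
```

The defector takes the first round 5-0, but thereafter forfeits the steady gains of mutual cooperation; over 100 rounds, mutual tit-for-tat (300 each) far outscores exploitation (104).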

If we take the AGI situation, we have an increasing number of players (as you mention, "and N increasing"): different AGIs, different human teams, and mixtures of AGI and human teams, all of which want to survive, some of which may want to dominate or eliminate all other teams. There is a natural coalition of teams that want to survive and don't want to eliminate all other teams, and that coalition will always be larger and more distributed than the nefarious team that seeks to destroy them. We can observe such robustness in many distributed systems that seem, on the face of it, vulnerable. This dynamic makes it increasingly difficult for the nefarious team to hide their activities, while the others are able to capitalise on the benefits of cooperation.
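To put a rough number on the hiding problem (again a toy model; the per-team detection probability and the independence of detections are assumptions I've picked for illustration): if each of the other N - 1 teams independently has even a small chance of spotting the nefarious team's activities, the chance of staying hidden collapses as N grows.

```python
# Toy model of secrecy against a growing field of observers.
# p_detect, the chance any one other team spots the scheme, is an
# illustrative assumption, as is the independence of detections.

def chance_of_staying_hidden(n_teams: int, p_detect: float = 0.05) -> float:
    # Secrecy requires evading every one of the other n_teams - 1 teams.
    return (1 - p_detect) ** (n_teams - 1)

for n in (2, 5, 10, 50):
    print(f"N={n:>2}: chance of staying hidden {chance_of_staying_hidden(n):.1%}")
# N= 2: 95.0%   N= 5: 81.5%   N=10: 63.0%   N=50: 8.1%
```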

I think we discount the benefit of cooperation because it's so ubiquitous in our modern world. This ubiquity of cooperation is a product of a tendency in intelligent systems to evolve toward greater non-zero-sumness. While I share many reservations about AGI, when I remember this fact I am somewhat reassured: as our capability to destroy everything has grown, that capacity has been born out of our greater interconnectedness. It is our intelligence and rationality that allow us to harness the benefits of greater cooperation, so I don't see why greater rationality on the part of AGI should suddenly reverse this trend.

I don't want to suggest that this is a non-problem; rather, that an acknowledgement of these advantages might allow us to capitalise on them.


Good point. I guess all-sidesism would be more desirable; this would take the form of panels representing different experts, opinions or demographics. Some issues, like US politics, do end up necessarily polarised, though, given there are only two options, even if you begin with a panel. They did start with an anti-vax candidate too, in RFK Jr (with the Ds, and even the Rs, being arguably pro-vax), but political expediency resulted in his being subsumed into the binary.

Thanks Seth, yes, I think we're pretty aligned on this topic, which gives me some more confidence, given that you actually have relevant education and experience in this area.

I'm not sure it's fully satisfying. I'm afraid someone who's really bothered by determinism and their lack of "free will" wouldn't find this comforting at all

I absolutely agree, which is why I followed this section up with the caveat:

Now, I'll admit this is not very satisfying, in terms of understanding how our intuitions relate to physical reality

The reason for including this was that it can be an end-of-argument claim for hard determinists. I was meaning only to highlight that an intuition is being smuggled into an otherwise reductionist argument. I get that this will not be satisfying to believers in free will, as it's not a positive argument for free will, and is not intended to be. Reducing anything to its component parts can remove intuitive meaning from anything and everything, and if an argument can be used to undermine anything and everything, it is self-defeating and essentially meaningless.

I did a whole bunch of work on exactly how the brain does exactly that process of conscious predictions to choose outcomes

This looks fascinating. I should add that I'm aware that prediction is not only involved in internal processes but is also active while taking actions, where we project our expectations onto the world and our consciousness acts as a sort of error correction, or evaluation function. But for the purposes of not over-complicating the logic I was trying to clarify, I omitted this from the model.

So my response to people being bothered by being "just" the "continuation of deterministic forces via genetics and experience" is that those are condensed as beliefs, values, knowledge, and skills, and the effort with which those are applied is what determines outcomes and therefore your results and your future.

I agree. I had implicitly included beliefs, values, etc. in my 'model of self', and I also emphasise effort (or deliberation) as the key "variety of free will worth having". I'm not, in the slightest, concerned that my desires and intentions are not conscious decisions (I've never believed this was something I was in control of, and when people ask "but are you really in control?" at this level, I believe they are accidentally arguing against a straw man). That said, I think desires and intentions can be consciously reviewed, to check their consistency with other values, just like any other aspect of life, through the same internal iterative process.

Thanks again for your thought-provoking comment. I'm chuffed that you thought the post was worth engaging with.

This seems to discount the consequentialist value of short-term pleasure-seeking. Doing something because you enjoy the process has immediate positive consequences. Doing something because it is enriching for your life has both positive short-term and long-term consequences.

To discount short-term pleasures as hedonism (as some might) is to miss the point of consequentialism (well, of Utilitarianism at least), which is to increase well-being, whether short- or long-term. Well-being can only be measured, ultimately, in terms of pleasure and pain.

Though I agree that consequentialism is necessarily incomplete, as we don't have perfect predictive powers.
