I'm reminded of the old Star Trek episode with the superhumans who were found in cryosleep and then took over the Enterprise.
While I do agree that this could be one potential counter to AI (unless the relative speed of things overwhelms it), I also see a similar type of risk from the engineered humans. In that view, the program needs to be something that is widely implemented (which would also make it potentially an x-risk case itself) or we could easily find ourselves having created a ruler class that views ordinary humans as subhuman or not deserving of full rights. Not sure how that gets done though -- from a purely practical and politically viable standpoint.
I certainly think that if we're doing things piecemeal we would want somewhat smarter people before we have much longer-living people.
I'm a bit conflicted on the subject of the death penalty. I do agree with the view that some solution is needed for incorrigible cases where you just don't want that person out in general society. But I honestly don't know whether killing them or imprisoning them for life is the more humane option. In terms of steelmanning the case, I think one might explore this avenue: which is the crueler punishment?
But I would also say one needs to consider alternatives to either prison or death. Historically it was not unheard of to exile criminals to near-impossible-to-escape locations -- Australia possibly being the best example.
In some ways I think one can make that claim, but in an important way, to me, numbers don't really matter. In both cases you still see the role of government as an actor, doing things, rather than an institutional form that enables people to do things. I think the US Constitution is a good example of that type of thinking. It defines the powers the government is supposed to have, defining what actions it can and cannot take.
I'm wondering what scope might exist for removing government (and the bureaucracy that performs the work/actions) from our social and political worlds while still allowing public goods (using the term non-economically here) to be produced and enjoyed by those needing/wanting such outputs. Ideally that would be achieved without as much forced-carrying (the flip side of free-riding) from those who are uninterested, or not interested at the cost of producing them.
Markets seem to do a reasonable job of finding interior solutions that are not easily gamed or controlled by some agenda setter. Active government, I think, does that more poorly, and by design will have an agenda setter in control of any mediating and coordinating processes for dealing with the competing interests/wants/needs. These efforts then invariably become political and politicized -- and, as is being demonstrated widely in today's world, a source of a lot of internal (be it global, regional/associative or domestic) strife leading to conflict.
Did the Ask Question type post go away? I don't see it any more. So I will ask here since it certainly is not worthy of a post (I have no good input or thoughts or even approaches to make some sense of it). Somewhat prompting the question was the report today about MS just revealing its first quantum chip, and the recent news about Google's advancement in its quantum program (a month or two back).
Two branches of technology have been seen as game changers, or at least potential game changers: AI/AGI and quantum computing. The former is often a topic here and certainly worth calling "mainstream" technology at this point. Quantum computing has been percolating just under the surface for quite a while as well. There have been a couple of recent announcements related to quantum chips/computing suggesting progress is being made.
I'm wondering if anyone has thought about the intersection between these two areas of development. Is a blending of quantum computing and AI a really scary combination for those with relatively high p(doom) views of existing AGI trajectories? Does quantum computing, in the hands of humans, perhaps level the playing field for human vs. AGI? Does quantum computing offer any potential gains in alignment or corrigibility?
I realize that current-state quantum computing is not really in the game now, and I'm not sure if those working in that area have any overlap with those working in the AI fields. But from an outside perspective like mine, the two would seem to offer large complementarities -- for both good and bad I suppose, like most technologies.
Thanks. It was an interesting view. It certainly takes advantage of modern technologies and, taken at face value, seems to have produced some positive results. It has me thinking of making a visit just to talk with some of the people there and get some first-hand accounts and views on just how much that is changing the views and "experience" of government (meaning people's experience as they live under a government).
I particularly liked the idea of government kind of fading into the background and being generally invisible. I think in many ways people see markets that way pretty much too -- when running to the store to pick up some milk or a loaf of bread or whatnot, who really gives much thought to the whole supply chain aspect of whatever they were getting actually being there?
Does anyone here ever think to themselves, or out loud, "Here I am in the 21st Century. Sure, all the old scifi stories told me I'd have a shiny flying car but I'm really more interested in where my 21st Century government is?"
For me that is premised on the view that pretty much all existing governments are based on theory and structures that date back at least to the 18th Century in the West. The East might say they "modernized" a bit with the move from dynasties (China, Korea, Japan) to democratic forms, but when I look at the way those governments and polities actually work, they seem more like a wrapper around the prior dynastic structures.
But I also find it challenging to think about just what might be the differentiating change that would distinguish a "21st Century" government from existing ones. The best I've come up with is that I don't see it as some type of privatization or divestiture of existing government activities (even though I do think some should be divested) but more of a shift from government being the acting agent it is now toward something more like markets: mediating and coordinating individual and group actions via mechanisms other than voting for representation or direct voting on actions.
With regards to thinking about what comes next, you might find these two links, if you didn't already come across them, of some interest.
https://www.atlanticcouncil.org/content-series/atlantic-council-strategy-paper-series/three-worlds-in-2035/ hypothesizes 3 global futures for 2035.
https://www.atlanticcouncil.org/content-series/atlantic-council-strategy-paper-series/welcome-to-2035/ offers results from a survey about various outcomes of states that might obtain in (by???) 2035. I didn't find much surprising here, but some of the questions I had not given thought to before, so I hardly had a point of view on them.
I could be way off on this, but I cannot help but think the core here is less about complexity than it is about efficiency. The most efficient processes all appear to be a bit simpler than they probably are. It's a bit like watching a very talented craftsman working and thinking "That looks easy." Then when you try, you find out it was much more difficult and complicated than it appeared. The craftsman's efficiency in action (ability to handle/deal with the underlying complexity) masked the truth a bit.
I've had similar experiences where my intuition tells me to be cautious but I could not say why. When I've ignored those intuitions I've generally paid a price. So now I do give them consideration.
In such situations it is probably good to take some time to sit back and try to identify some of the things that triggered the response. We are very good at pattern matching but also really good at filtering. Could be that intuitions like "getting bad vibes" are all about the interaction of the two.
But that is a pretty difficult task; we're asking ourselves to go back and review all the details we ignored and filtered. Still, I suspect it is a very good thing to try doing.
There's a Korean expression that basically translates to "the look is right" or "the look fits," which seems in line with your comment. The same outfit, hat, shoes, glasses, jacket or even car on different people creates a different image in others' heads. There is a different message getting sent.
So if the overall point of the post is about the signaling, then I suspect it is very important to consider the device one chooses to send messages like this. In other words, yes, breaking some social/cultural standards to make certain points is fine, but thought needs to be put into just how appropriately your chosen device/method "fits" you, as that will probably have a fairly large impact on your success.
I suspect that holds just as well if you're looking at some type of "polarizing" action as a mechanism for breaking the ice and providing some filtering for making new acquaintances and future good friends.