Would I be incorrect in reading into the post that, at least viewed through the lens of geopolitical status and power, one might see the post-WWII international order, which promoted a lot of investment in developing countries' economies and health, as planting the seeds of the West's vision of the modern world order?
Or do you see the current state of things as occurring independently of relative population changes?
I mostly put this question through the same filter I do the question of Chinese vs. US hegemony/empire. China has a long history of empire and knows how to do it well. The political bureaucracy in China is well developed for preserving both itself and the empire (even through changes at the top and changes of dynasty). Culturally and socially, the population seems well acclimated to being ruled rather than seeing government as the servant of the people (which is not quite the same as saying they are resigned to abusive totalitarianism -- the empire has to provide something that amounts to public good and peace -- but they seem more tolerant of the means applied than Western cultures are).
The US, on the other hand, pretty much sucks at empire and lacks a well-functioning, well-developed bureaucracy for supporting one.
So I think perhaps one needs to ask what specific risk one is most concerned about. China probably produces an AI that serves the Party and the empire, and so is perhaps a bit more corrigible and won't decide to kill everyone. But if you're concerned about what those in control of an AGI/ASI might do with it, China might be a bigger risk than the US. With the US, you probably have greater risks from the AI wanting to kill everyone, or simply doing so without caring, but likely more controls on what the AI will allow its "owners" to do with it.
That is pretty much what I'm trying to accomplish, and I want to increase the rate at which I build my working vocabulary.
I do agree with both you and Vaughn. Reading (very hard for me now) should really help improve recall once I can read with a sufficient understanding of the statement and the larger text. Texting (which I have been able to do some of) is good for me in that it tends to keep the exchange short and the sentence structure simple, which means I typically have a reasonable grasp of the general meaning and can better infer what an unknown or unrecalled word likely means.
Literacy seems to make sense to me, but I might be missing something in the post. Writing is language, and language is communication, so there are at least two sides.
As more people learned to read, they also learned to write, and written communication increased. However, even with modest literacy one can read a long sentence -- or at least can when it is written by a good, skilled writer. But being able to read does not really lead to writing skill in most cases, I suspect.
As more people started communicating via writing (think of things like schools and the expansion of education), the skill level of the average writer likely declined. That probably led to training the next generation of writers to write with simpler, shorter sentence structure.
There's a Korean expression that basically means "the look is right" or "the look fits," which seems in line with your comment. The same outfit, hat, shoes, glasses, jacket, or even car creates a different image in others' heads depending on who is wearing it. A different message is getting sent.
So if the overall point of the post is about signaling, then I suspect it is very important to consider the device one chooses to send messages like this. In other words, yes, breaking some social/cultural standards to make certain points is fine, but thought needs to be put into how well your chosen device/method "fits" you, which will probably have a fairly large impact on your success.
I suspect that holds just as well if you're looking at some type of "polarizing" action as a mechanism for breaking the ice and providing some filtering for making new acquaintances and future good friends.
I'm reminded of the old Star Trek episode with the super humans that were found in cryosleep that then took over the Enterprise.
While I do agree that this could be one potential counter to AI (unless relative differences in speed overwhelm it), I also see a similar type of risk from the engineered humans themselves. In that view, the program needs to be widely implemented (which would also make it a potential x-risk case in itself), or we could easily find ourselves having created a ruler class that views ordinary humans as subhuman and not deserving of full rights. I'm not sure how that gets done, though -- from a purely practical and politically viable standpoint.
I certainly think if we're doing things piecemeal we would want somewhat smarter people before we have much longer living people.
I'm a bit conflicted on the subject of the death penalty. I do agree with the view that some solution is needed for incorrigible cases where you just don't want that person out in general society. But I honestly don't know whether killing them or imprisoning them for life is more humane. In terms of steelmanning the case, I think one might explore this avenue: which is the crueler punishment?
But I would also say one needs to consider alternatives to either prison or death. Historically it was not unheard of to exile criminals to near-impossible-to-escape locations -- Australia possibly being the best example.
In some ways I think one can make that claim, but in an important way, to me, the numbers don't really matter. In both cases you still see the role of government as an actor, doing things, rather than as an institutional form that enables people to do things. I think the US Constitution is a good example of the latter type of thinking: it defines the powers the government is supposed to have, limiting what actions it can and cannot take.
I'm wondering what scope might exist for removing government (and the bureaucracy that performs the work/actions) from our social and political worlds while still allowing public goods (in a non-economic sense of the term) to be produced and enjoyed by those needing or wanting such outputs. Ideally that would be achieved without as much forced carrying (the flip side of free riding) by those who are uninterested, or uninterested at the cost of producing them.
Markets seem to do a reasonable job of finding interior solutions that are not easily gamed or controlled by some agenda setter. Active government, I think, does that more poorly and by design will have an agenda setter in control of any mediating and coordinating processes for dealing with competing interests, wants, and needs. These efforts then invariably become political and politicized and -- as is being demonstrated widely in today's world -- a source of a lot of internal strife (be it global, regional/associative, or domestic) leading to conflict.
Did the Ask Question type of post go away? I don't see it any more. So I will ask here, since my question certainly is not worthy of a post (I have no good input or thoughts, or even approaches for making sense of it). Somewhat prompting the question was today's report of MS revealing its first quantum chip, and the recent news (a month or two back) about Google's advancement in its quantum program.
Two branches of technology have been seen as game changers, or at least potential game changers: AI/AGI and quantum computing. The former is often a topic here and is certainly worth calling "mainstream" technology at this point. Quantum computing has been percolating just under the surface for quite a while as well, and there have been a couple of recent announcements related to quantum chips/computing suggesting progress is being made.
I'm wondering if anyone has thought about the intersection between these two areas of development. Is a blending of quantum computing and AI a really scary combination for those with relatively high p(doom) views of existing AGI trajectories? Does quantum computing, in the hands of humans perhaps level the playing field for human v. AGI? Does quantum computing offer any potential gains in alignment or corrigibility?
I realize that current-state quantum computing is not really in the game now, and I'm not sure whether those working in that area have any overlap with those working in the AI fields. But from an outside perspective like mine, the two would seem to offer large complementarities -- for both good and bad, I suppose, like most technologies.
I always find the use of "X% of the vote" in US elections to make some general point about the overall acceptability or representativeness of the result problematic. I agree it's a true statement, but it leaves out the important aspect of voter turnout.
I have to wonder whether the USA in particular would be quite as divided if all the reporting provided the percentage of the eligible voting population rather than the percentage of votes cast. I think there is a big problem with simply ignoring the non-vote information that is present (or expecting anyone to look it up and make the adjustments on their own).
But I agree, I'm not too sure just where electoral systems fall into this question of AGI/ASI first emerging under either the USA or the CCP.