I think the OP's point is that while YOU might think NYC is a great place, not everybody does. One of the nice things about the current model is that you can move to NYC if you want to, but you don't have to. In the hypothetical All-AGI All Around The World future, you get moved there whether or not you like it. Some people will like it, but it's worth thinking about the people who won't and considering what you might do to make that future better for them as well.
Your black table of income levels and taxes paid has something wrong with it. I looked at the Tax Foundation link you provided, and it says something rather different from what you report.
Here is how I read their numbers compared to yours:
Top 5%: 23.3% rate vs. your 18.9%
Top 10%: 21.5% rate vs. your 14.3%
Top 25%: 18.4% rate vs. your 10.3%
Top 50%: 16.2% rate vs. your 7.2%
I also note that your row for School Teachers has the same bracket as truck drivers and police officers, but the rate for teachers is from the next bracket up.
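For anyone who wants to check my reading, here is the comparison as a quick back-of-envelope script. The Tax Foundation rates are as I transcribed them from their page, so treat this as a sketch of my reading rather than gospel:

```python
# Average income-tax rates: what I read on the Tax Foundation page
# vs. what the post's table reports, with the gap in percentage points.
tax_foundation = {"Top 5%": 23.3, "Top 10%": 21.5, "Top 25%": 18.4, "Top 50%": 16.2}
post_table = {"Top 5%": 18.9, "Top 10%": 14.3, "Top 25%": 10.3, "Top 50%": 7.2}

for group, tf_rate in tax_foundation.items():
    gap = tf_rate - post_table[group]
    print(f"{group}: {tf_rate:.1f}% vs. {post_table[group]:.1f}% (gap: {gap:.1f} points)")
```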
I hope you market it under the name Soylent.
Why do you think that the space colonists would be able to create a utopian society just because they are not on earth? You will still have all the same types of people up there as down here, and they will continue to exhibit the Seven Deadly Sins. They will just be in a much smaller and more fragile environment, most likely making the consequences of bad behavior worse than here on earth.
So, does this mean that you have descended past "We need to eliminate the suffering of fruit flies" and gone straight for "We need to eliminate the suffering of atomic nuclei that are forced to fuse together"? This seems like a wildly wrong view, and not because rectifying the problem is beyond our technological abilities. It seems like there is plenty of human suffering to attend to without having to invent new kinds of suffering based on atoms in the sun.
I saw that too, and I don’t think it’s a nitpick. All of that was raised in support of the idea that human limits are much greater than we think, so having a couple of examples that are off by a factor of two is not a small difference. In addition to the wild claims about a human with 350 kg of muscle mass, I know the world record for an unequipped deadlift is just shy of 1,100 pounds (500 kg). “Lifting a car” can’t mean picking it off the ground entirely no matter how small it is; my Miata weighs about 2,400 pounds, and other than something like a Lotus Elise, that’s about the lowest curb weight available. I’m willing to buy “picking up the back of a tiny car while leaving the front wheels on the ground,” but again, that’s not what you implied.
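To put rough numbers on the car claim (the unit conversion is standard; the Miata weight is my own car's, so treat it as approximate):

```python
# Rough arithmetic: how does the deadlift world record compare to
# picking an entire small car off the ground?
LB_PER_KG = 2.20462

deadlift_record_lb = 1100   # unequipped world record, just shy of this
miata_lb = 2400             # approximate curb weight of my Miata

print(f"Record deadlift: ~{deadlift_record_lb / LB_PER_KG:.0f} kg")
print(f"Miata curb weight: ~{miata_lb / LB_PER_KG:.0f} kg")
print(f"Car weight / record deadlift: {miata_lb / deadlift_record_lb:.1f}x")
```

That ratio is the factor of two I mean: lifting even one of the lightest cars sold would take more than double the all-time record.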
I have no idea whether you raised your IQ with your method, but the exaggeration of the facts I do know makes me suspicious.
I may have used too much shorthand here. I agree that flying cars are impractical for the reasons you suggest. I also agree that anybody who can justify it uses a helicopter, which is akin to a flying car.
According to Wikipedia, this is not a concept that first took off (hah!) in the 1970s - there have been working prototypes since at least the mid-1930s. The point of mentioning the idea is that it represents a cautionary tale about how hard it is to make predictions, especially about the future. When cars became widely used (certainly post-WWII), futurists started predicting what transportation tech would look like, and flying cars were one of the big topics. The fact that they're impractical didn't occur to many of the people making predictions.
I have a strong suspicion that there are flaws in current reasoning about the future, especially as it relates to the threat of AGI. Recall that there was a round of AI hype back in the 1980s that fizzled out when it became clear nothing much worked beyond toy systems. I think there are good reasons to believe we're in a very dangerous time, but I think there are also reasons to believe that we'll figure it out before we all kill ourselves. Frankly, I'm more concerned about global warming, as it requires absolutely no new technology or policy changes to kill us, or at least to put a real dent in global human happiness.
My point is simply that deciding that we're 95% likely to die in the next five years is probably wrong, and if you base your entire set of life choices on that prediction, you are going to be surprised when it turns out differently.
Also, I'm not strongly invested in convincing others of this fact, partly because I don't think I have any special lock on predicting the future. I'm just suggesting you look back farther than 1980 for examples of how people expected things to turn out vs. how they actually did and factor that into your calculations.
I gather from the recent census article that most of the readers of this site are significantly younger than I am, so I'll relay some first-hand experiences you probably didn't live through.
I was born in 1964. The Cuban Missile Crisis was only a few years in the past, and Kennedy had just been shot, possibly by Russians, or The Mob, or whomever. Continuing through at least the end of the Cold War in 1989, there was significant public opinion that we were all going to die in a nuclear holocaust (or Nuclear Winter), so really, what was the point in making long-term plans?
Spoiler: things worked out better than expected, although not without significant bumps along the way. Spending all your money on hookers and blow because you might as well enjoy yourself now would not have been a solid investment strategy.
Now, much like the AGI/ASI threat, the nuclear threat could have actually played out. There were other close calls where we (or they) thought the attack had already started (Vasily Arkhipov comes to mind), and of course, Death From AI could well happen. However, you should probably hedge your bets to a certain extent just in case you manage to live to retirement age. Remember, we still don't have flying cars.
I see a problem with this approach when the speaker does not know the answer to the question:
Under Abs-E, binary questions ("yes"-or-"no") are less obvious to answer. If your answer would ordinarily be "no", you must instead reply as if the question was open-ended. For example, your reply to "will you be here tomorrow?" may be "yes", or "I will be in the office tomorrow", or "I will stay home tomorrow". This forces you to speak with more information.
How do you respond when you don't know what you will be doing tomorrow? This could be a case where you haven't made up your mind yet (in which case "I will decide on that later" is a valid answer), or it could be because you genuinely don't have the information and have no way to find it. "What will the closing price of Apple be at the end of the year?" is difficult to answer as far as I can tell, especially if the person asking the question thinks you should know the answer. "You will have to wait for the end of the year before you can know that" doesn't convey the same information as "Short of stock manipulation, nobody can know the answer to that until the time comes." The first one could mean that I know the answer but choose not to tell you, while the second one conveys the more reasonable claim that the question cannot be answered in advance.
So, I think this is an interesting thought experiment, but I suspect that the amount of time spent mentally rewording everything before you speak will outweigh any positive value.
Probably even negatively correlated. If you think you're protected, you're going to engage in sex more often without real protection than you would if you knew you were just 15 minutes away from being a parent.