Rather than quickly developing large spacecraft that could rapidly intercept and guide asteroids for impact at a suitable location for mining (Canada?), would it be easier to instead alter the trajectory of the earth ("Calling all boffins! You were jealous of Oppenheimer's gang - now it's your turn!") to collide with various asteroids?
Maybe we explode all of the world's nuclear weapons in the atmosphere at one point, which might nudge the earth just enough to aim the asteroid at a globally chosen "catcher's mitt" impact zone (Canada?), surrounded by eager bulldozers ready for the pickings.
"If anyone really wants to be in a position of power, they probably shouldn't be."
I'm trying to work out what your point here is.
Since "intelligence" (thorny definition) could be said to be distributed in a normal distribution (if it's even possible to plot "intelligence" (let's say, "intellectual" or, dear me, "IQ")), and those people below the median would be less likely to be well educated (making assumptions here), then it would be expected that a significant proportion of every population will have people that won't answer these questions as well educated people with higher intelligence (to me these are obvious points, though maybe I'm missing something here).
I think a lot of people in high school (and otherwise) had little motivation to learn, so their knowledge may seem painfully inadequate compared to that of readers of forums such as this. But, then, if you ask these people what practical use there is in knowing genetics, or whether a star revolves around a planet or vice versa, their answer would probably not surprise you.
But, then, perhaps they could gut a fish in 30 seconds or fix the carburetor on your car as the zombies are closing in (faster than you or I would whilst we're frantically speeding up the YouTube video or sending the photo to ChatGPT), so pluses and minuses (I won't judge in which direction the pluses and minuses tally for any person at any particular time in any particular universe).
On the other hand: Idiocracy (2006)
I recommend to my patients to purchase (in the UK) Ferrous Fumarate (it has better bioavailability than ferrous sulphate); the more you take the better (up to 3 times a day, though you may have GI side effects), take it with 200mg+ of Vitamin C (or fresh orange juice), and don't have tea/coffee/dairy one hour either side of taking it.
(I'm a GP/Family Physician)
When people come to us (GPs/Family Physicians) with hair loss for example, the first thing a doctor would do would be to check their bloods, specifically looking for ferritin levels (and other things eg thyroid etc).
If the ferritin level is less than 60 then we would recommend increasing their iron intake.
Thinking about this logically, one could say that the usual lower threshold of normal ferritin (around 20; it differs with age/sex and lab) is too low if the chance of hair loss is increased at levels below 60, hence I recommend a goal of ferritin > 60.
I recommend that people purchase (in the UK) Ferrous Fumarate (it has better bioavailability than ferrous sulphate); the more you take the better (up to 3 times a day; you may have GI side effects - abdo discomfort, diarrhoea/constipation/black faeces), take it with 200mg+ of Vitamin C (or fresh orange juice), which triples the absorption of iron, and don't have tea/coffee/dairy one hour either side of taking it (which reduces absorption).
Seems Musk's actions are not "making it better".
Surprise, surprise: the opposite:
Good idea, and works in Medicine.
If you see a patient and say:
"You have X, this is what it is, take this medicine and this is what should happen but if this happens then do this",
They may not believe you ("But I just want antibiotics") and/or not follow the advice (+/- trust in you). (It's more complicated in real life, of course: we explore the patient's ideas/concerns/expectations and make sure these are also satisfied, which is usually more important than anything else, as well as making a plan together - fun to do in 10 min.)
If you describe your logical thinking to the patient and say:
"Because you have said X and Y, and on examination I have found Z, and with these findings these are the possible diagnosis, and these are the diagnoses that aren't likely, so take this..."
They are much more likely to believe you and do as you suggest, especially if there is a lot of uncertainty (as there often is):
If you're not sure what is happening, rather than lying (which the patient can probably tell), I find that explaining one's thinking and describing why there is uncertainty often leads to more confidence and trust:
"You have told me X, and you have Y symptoms, which is odd as they don't point to a particular condition. These symptoms may mean A, and those B. Although it is very unlikely that there is anything serious, I think we should do Q and P, and review a week later. If M or N happens, tell me sooner".
Interesting site!
Have you explored the Oxford English Dictionary - the bee's knees of dictionaries and sources of etymology?
Here are the OED entries for "patience" : https://i.imgur.com/S5tGj7i.png
https://www.oed.com/viewdictionaryentry/Entry/138816
A goldmine if you appreciate the origin of language.
What are you basing your optimism about Twitter's future under Musk on?
[AP: Report: Tweets with racial slurs soar since Musk takeover](https://apnews.com/article/elon-musk-technology-business-government-and-politics-2907d382db132cfd7446152b9309992c?)
[BBC: Scale of abuse of politicians on Twitter revealed](https://www.bbc.co.uk/news/uk-63330885)
[Reuters: Elon Musk's Twitter slow to act on misleading U.S. election content, experts say](https://www.reuters.com/technology/elon-musks-twitter-girds-surge-us-midterm-election-misinformation-2022-11-08/)
What Musk says: ["Mr Musk insisted that the platform's commitment to moderation remained 'absolutely unchanged'."](https://news.sky.com/story/elon-musk-defends-culling-twitter-staff-but-insists-commitment-to-moderation-remains-absolutely-unchanged-12738642)
What Musk does: ["Yesterday’s reduction in force affected approximately 15% of our Trust & Safety organization (as opposed to approximately 50% cuts company-wide"](https://twitter.com/yoyoel/status/1588657227035918337)
The market will decide what to do with Twitter, it seems, though these are early days.
His antics and hypocrisy [aren't a good sign](https://www.thelondoneconomic.com/news/elon-musk-becomes-the-butt-of-the-joke-after-he-welcomes-comedy-back-on-twitter-338363/).
In terms of your riposte "Your attempt to tie personality-based critiques (stem / white / male) isn't helpful.":
The following quotes are from the book: [The Psychology of Silicon Valley, Cook, K. (2020)](https://link.springer.com/chapter/10.1007/978-3-030-27364-4_2)
"Simon Baron-Cohen, a psychologist and researcher at the University of Cambridge, has researched the neurological characteristics endemic in certain fields, most notably in science, technology, engineering, and mathematics (STEM ) professions. Baron-Cohen has repeatedly found that those with autism or autistic traits are over-represented in these disciplines, particularly in engineering and mathematics,Footnote 36,Footnote 37,Footnote 38 a finding that has been corroborated by different research teams.Footnote "
There is much anecdotal evidence and growing research that points to a correlation between the type of work necessitated in tech and the analytical, highly intelligent, and cognitively-focused minds of “Aspies” who may be instinctively drawn to the engineering community.
In 2012, technology journalist Ryan Tate published an article in which he argued that this obsessiveness was in fact "a major asset in the field of computer programming, which rewards long hours spent immersed in a world of variables, data structures, nested loops and compiler errors." Tate contended that the number of engineers with Asperger's was increasing in the Bay Area, given the skillset many tech positions demanded.
Entrepreneur and venture capitalist Peter Thiel similarly described the prevalence of Asperger’s in Silicon Valley as “rampant.” Autism spokesperson Temple Grandin, a professor at Colorado State University who identifies as an Aspie, also echoes Tate, Thiel, and Baron-Cohen’s conclusion:
Is there a connection between Asperger's and IT? We wouldn't even have any computers if we didn't have Asperger's…. All these labels—'geek' and 'nerd' and 'mild Asperger's'—are all getting at the same thing…. The Asperger's brain is interested in things rather than people, and people who are interested in things have given us the computer you're working on right now.
**The most notable result is what many describe as a deficiency of emotional intelligence, particularly empathy, throughout the tech industry**
Alex Stamos, former Chief Security Officer at Facebook:
As an industry we have a real problem with empathy. And I don’t just mean empathy towards each other… but we have a real inability to put ourselves in the shoes of the people that we’re trying to protect…. We’ve got to put ourselves in the shoes of the people who are using our products.
"...the woman described a systemic belief, particularly amongst executives, which held that those in the industry were the smartest and best suited to solve the problems they were tasked with, and therefore couldn’t “really learn anything from anyone else.”" I asked what she believed informed this attitude, the woman replied the problem stemmed, in her experience, from a lack of awareness and emotional intelligence within Silicon Valley.
Berners-Lee argues that the root cause of this returns, again and again, to “companies that have been built to maximise profit more than to maximise social good.”
I would say that the personality type of executives has an impact on the final goals and methods of tech firms, hence the comment.
EDIT: One more quote from the book:
"studies have shown how power rewires our brains in a way that Dacher Keltner, a professor of psychology at University of California, Berkeley, explains is comparable to a traumatic brain injury. Research by Keltner and others have found evidence of an inverse relationship between elevated social power and the capacity for empathy and compassion.29,30 These studies suggest that the degree of power people experience changes how their brains respond to others, most notably in the regions of the brain associated with mirror neurons, which are highly correlated with empathy and compassion.31 Keltner explains that as our sense of power increases, activity in regions of the orbito-frontal lobe decreases, leading those in positions of power to “stop attending carefully to what other people think,”32 become “more impulsive, less risk-aware, and, crucially, less adept at seeing things from other people’s point of view.”33"
Recently, they made a deal with News Corp.
Then, last week, they gave a board seat to the former head of the NSA (I can understand why from a business perspective, though from a user perspective this is pretty damning, even assuming that most/all large AI companies are forced to cooperate with a/the government).
I'm afraid that on this trajectory, OpenAI will become a right-wing, firmly government-aligned tool of surveillance (though it probably is already).
If I put my tin-foil hat on: could OpenAI force its product to subtly alter responses for an ulterior purpose, e.g. make them subtly more right-wing, or anti-Chile (choose any country/person/etc.), or use them to infiltrate the language with "Newspeak", as in 1984?
Will we be looking back in 10 years' time from within a dystopia, thinking "remember how it started"?
If so, I hope I'll be living on a no-internet farm-commune.