No opinion on content, but wanted to say that I dislike the style where 90% of your article's content is someone else's article. If you provide a link to the original, I think it is okay to only quote the important parts.
The easiest rationality skill that you currently fail at is probably the most important one for you now.
So if you are someone like me, the basic rationality checklist will feel pretty condescending. Something like:
I want to ask: how have you coped with something like this, and how can one get through it?
If you believe in MWI, then your friend is still alive in another branch. This branch of you will never meet him again. But some other branch of you can.
In this branch, take care to interact with the people who are still alive here.
...but those are just excuses, and it would be preferable if all our friends lived happily in all branches.
Have the AI edit a condescending post so that you can read it without taking damage. Variations on this theme are also highly underutilized.
Fun exercise: have an AI read the entire frontpage of r/SneerClub and steelman the criticism: try to remove all toxicity, all value disagreements (e.g. if someone thinks that rationality or fanfic or polyamory or whatever are intrinsically stupid, ignore that part), etc.
Simply put, an AI being unemotional and impartial (but configurable) could be the perfect tool for steelmanning your opposition without enduring the emotional cost of reading texts that are toxic on purpose.
This is heinous behavior. Somehow it seems like this is legal? It should not be legal.
Yep. The obvious next iteration is something like: the ICE agents shoot you, and produce deepfake evidence of you having attacked them.
Should we make a "skill" file for the AI to play Pokemon?
Hmmm... on one hand, this feels like cheating, depending on how much detail we provide. In the extreme, we could give the AI an entire sequence of moves to execute in order to complete the game. That would definitely be cheating. The advice should be more generic. But how generic is generic enough? Is it okay to leave reminders such as "if there is a skill you need to overcome an obstacle, and if getting that skill requires you to do something, maybe prioritize doing that thing", or is that already too specific?
(Intuitively, perhaps a piece of advice is generic enough if it can be used to solve multiple different games? Unless it is a union of very specific advice for all the games in the test set, of course.)
On the other hand, the situation in deployment would be that we want the AI to solve the problem, and we do whatever is necessary to help it. I mean, if someone told you "make Claude solve Pokemon in 2 days or I will kill you" and didn't specify any conditions, you would cheat as hard as you could, like uploading complete walkthroughs etc. So perhaps solving a problem that we humans have already solved is not suitable for a realistic challenge.
one annoying thing about anti-psychiatry people
I find it annoying that they take "X is related to something" as proof that "X does not actually exist". I'll try to explain by a parody:
"Doctors sometimes tell you that you have a broken leg. But why is that a problem? Legs naturally come with different shapes and different conditions. It's just that capitalism requires you to work, and that sometimes involves walking to places, and a broken leg decreases your productivity. If we could for a moment abandon the mindset of capitalism and productivity, we could easily realize that there is simply no such thing as a broken leg."
But people would care about broken legs even without capitalism, because broken legs hurt, and because people who have broken legs often wish they could walk and run, even for reasons unrelated to productivity.
Basically, most of their arguments feel like this to me, except instead of a broken leg, insert autism or schizophrenia or Down syndrome or whatever. It is completely irrelevant what the condition does to the person and everyone around them. No no no, you are just brainwashed by capitalism to believe that <insert symptom> is a problem.
Why do you say we want PMs to "become more than an obscure game for nerds?"
Prediction markets need money as a fuel. The incentive for people to provide correct predictions is to gain money. More money in prediction markets means more people have an incentive to spend their time figuring out the correct answers. Or maybe the same people have an incentive to spend more of their time figuring out things, so they can answer more questions.
I am not really sure about the impact of extra dumb money. Perhaps the extra dumb people need to be told repeatedly that they are wrong? But preferably in a way that doesn't take their entire salaries away.
Why do you think avoiding insider trading is a disadvantage?
No, I didn't mean it that way. Actually, the other way round; insider trading is good -- well, as long as the insider has a "read-only" access to information.
It becomes different when the insider is incentivized to create chaos and then make money by being the most reliable predictor of their own chaos. Like, I wouldn't want a world where e.g. Trump suddenly nukes a random unimportant place as a way to make money: in the morning he creates a bet saying "what is the chance that Trump will nuke this irrelevant place out of the blue this evening?" and when people respond "very low", he says "lol, watch me, losers" and collects the jackpot. (But of course if people responded "very high, this seems like his usual pattern of insider trading", then he would make money by betting "no, he definitely won't" and not nuking the random place that day. So he makes money either way.)
But the thing I wanted to say in the previous comment was that it seems good to have a rule "noobs can't make very high bets", but that would turn against many cases of insider trading, if the insider happens to be a noob.
If comprehensible things become too large, in a way that cannot be factorized, they become incomprehensible. But at the boundary, increasing the complexity by +1 can mean that a more intelligent (and experienced) human could understand it, and a less intelligent one would not. So there is no exact line, it just requires more intellect the further you go.
Maybe an average nerd could visualize a 3x3 matrix multiplication, a specialized scientist could visualize 5x5 (I am just making up numbers here), and... a superintelligence could visualize 100x100 or maybe even 1000000x1000000.
And similarly, a stupid person could make a plan "first this, then this", a smart person could make a plan with a few alternatives "...if it rains, we will go to this café; and if it's closed, we will go to this gallery instead...", and a superintelligence could make a plan with a vast network of alternatives.
And yes, just like with biology, a human can understand one simple protein maybe (again, I am just guessing here, what I mean is "there is a level of complexity that a human understands"), and a superintelligence could similarly understand the entire organism.
In each case, there is no clear line between comprehensibility and incomprehensibility, it just becomes intractable when it is too large.
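To make the matrix example concrete: the algorithm itself stays tiny and comprehensible at every size; what explodes is the number of scalar operations you would have to hold in your head. A minimal sketch (the specific sizes, like those in the comment above, are just illustrative):

```python
def matmul(a, b):
    """Naive n x n matrix multiplication: three comprehensible nested loops."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The code never gets harder to read, but a human tracking it mentally
# must hold roughly n ** 3 scalar multiplications at once.
for n in [3, 5, 100, 1_000_000]:
    print(f"{n}x{n} matrix: {n ** 3} scalar multiplications")
```

The same few lines of code describe a 3x3 product (27 multiplications) and a 1000000x1000000 product (10^18 multiplications); the line between "comprehensible" and "incomprehensible" is drawn by the count, not by the algorithm.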
Yes, that's exactly what I meant. Are today's networks "comprehensible"? If you ask whether humans are able to understand matrix multiplication, yes, they are. But at the scale of today's networks, effectively they are not.
I am not saying that the plans of superhuman AIs will be like this, but they could have a similar quality. Millions of pieces, individually easy to understand, the entire system too complicated to reason about, somehow achieving the intended outcome.
I don't read newspapers, so I don't have much data. Perhaps I notice the bad things more, because I do not have the good things to balance it with? (Kinda like if neither you nor your friends have a dog, so the typical moment when you notice a dog is when some stranger's dog threatens you. So your model of a dog is that dogs attack strangers, and you miss all the nice moments when they play or relax, which is what their owners see.)
I was interviewed by a journalist twice in my life; both times the journalist wrote totally made-up things unrelated to what I said; and I suspect that the story was already written long before they talked to me, they just wanted a name to attach to their fictional character.
Once I participated in a small peaceful protest (imagine a group of fewer than ten people standing on a street with banners for 30 minutes, then going home), and a TV station commented on it while showing videos of looting (which had happened a few months before, on the opposite side of the country, in a situation related neither to our cause nor our organization). When we called them to complain, they just laughed at us, said that there were tiny letters saying the videos were "illustrations" so it was legally okay, and that if we had any complaints we should address them to their well-paid legal department. (We didn't do anything about it.)
A few years ago (I don't remember when exactly) there were "scientific" articles approximately every month about how the theory of relativity was experimentally debunked; people shared them on Hacker News and social networks. And a few weeks later there was always a blog post somewhere explaining how it was just a mistake in the calculation, because someone forgot to use the proper relativistic equation somewhere. Of course, these blog posts were not shared as much. -- Later, I guess, this topic went out of fashion. (Perhaps because the newspapers switched to stronger clickbait?)
My very first blog post was a response to a popular journalist, basically just a long list of factual mistakes he made in a popular article. (And I mean factual mistakes in a very literal sense, like how many countries were members of a specific organization, what year the organization started, etc. That is, not something that could be explained by different people having a different political opinion.)
Uhm, Gamergate. A situation where a bunch of nerds complain about the way journalists report on their hobby, and the journalists decide to go nuclear on them: closing ranks, posting absurd fabrications, refusing to even mention the other side's talking points, then doubling down repeatedly until the topic gets debated at the UN.
Which reminds me of how journalists treated James Damore. The "original memo" that practically all newspapers referred to was actually heavily redacted (all links to scientific papers removed). They even changed the font to random sizes to make it appear unhinged.
...all these things considered, why should I even read newspapers?