A wonderful vision of a world where you don't need a job because you can make money by full-time arguing with people online!
However, any objections to various karma systems (e.g. that you can get upvotes by posting clickbait) would apply here too, only more strongly, because now there would be a financial incentive.
I think Reddit tried something like that; you could award people "Reddit gold", though I'm not sure how it worked.
Prediction markets in forums, and systems that support them, naturally giving rise to (or simply being) refutation bounties.
You need to have a way to evaluate the outcome. For example, you couldn't use a prediction market to ask whether people have free will, or what the meaning of life is. Probably not even whether Trump won the 2020 election, unless you specify how exactly the answer will be determined -- because simply asking people won't work.
A subscription model with fees being distributed to artists depending on post-watch user evaluations, allowing outsized rewards for media whose value is initially hard for the consumer to appreciate, but turns out to be immense once they have fully understood it. (Media economics by default are terminally punishing to works like that.)
The details matter, because they determine how people will try to game this. I could imagine a system where you e.g. upvote the articles you liked, and then one year later it shows you the articles you liked, and you can now specify whether you still like them on reflection. And, uhm, maybe 10% of your subscription is distributed to the articles you liked immediately, and 90% to those you liked on reflection? -- I just made this up; I'm not sure what the weakness is, other than the authors having to wait a year until the rewards for meaningful content start coming.
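To make that concrete, here is a minimal Python sketch of how a single subscriber's fee could be split under that hypothetical 10%/90% rule. The function name, the split, and the example numbers are all illustrative assumptions, not a description of any existing system:

```python
# Minimal sketch of the hypothetical two-phase payout described above.
# The 10%/90% split and all names are illustrative assumptions.

from collections import Counter

IMMEDIATE_SHARE = 0.10   # share paid out to articles liked right away
REFLECTIVE_SHARE = 0.90  # share paid out a year later, to articles still endorsed on reflection

def distribute_subscription(fee, immediate_likes, reflective_likes):
    """Split one subscriber's fee among the articles they liked.

    immediate_likes  -- ids of articles the subscriber liked right away
    reflective_likes -- ids of articles they still endorsed a year later
    Returns a dict mapping article id -> payout from this subscriber.
    """
    payouts = Counter()
    if immediate_likes:
        share = fee * IMMEDIATE_SHARE / len(immediate_likes)
        for article in immediate_likes:
            payouts[article] += share
    if reflective_likes:
        share = fee * REFLECTIVE_SHARE / len(reflective_likes)
        for article in reflective_likes:
            payouts[article] += share
    return dict(payouts)

# Example: a $10 fee, three immediate likes, only "b" still endorsed a year later.
print(distribute_subscription(10.0, ["a", "b", "c"], ["b"]))
# -> {'a': 0.33..., 'b': 9.33..., 'c': 0.33...}
```

The obvious cost of weighting the reflective bucket so heavily is the one noted above: most of an author's income only arrives a year after publication.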
Take a notebook, and before reading lesswrong make notes of all your values and opinions, so that you can backtrack if necessary. :D
Coordination is hard. "Assigning Molochian elements a lower value" is a kind of coordination. Making rules, and punishing people when they break them, is another. Even if attack is stronger than defense, the punishment could be stronger yet (because it is a kind of attack). I agree that it is difficult; I'm not sure it is impossible.
I'd say "things that are good in moderation and harmful in excess... and most people (in our community) do them in excess".
Even better, we should have two different words for "doing it in moderation" and "doing it in excess", but that would predictably end up with people saying that they are doing the former while they are doing the latter, or insisting that both words actually mean the same thing, only you use the former for the people you like and the latter for the people you dislike.
I am not even sure whether "contrarianism" refers to the former or the latter (to a systematically independent honest thinker, or to an annoying edgy clickbait poser -- many people probably don't even have separate mental buckets for these).
It doesn't seem like knowing your enemy and knowing yourself should actually make you invincible in war. Besides, what if your enemy also knows themselves and knows you?
It makes more sense if you consider that another option is to avoid the war. So I would interpret it like this:
If you know that you are strong and that the enemy is weak, you will win the war. (And if you know otherwise, you will avoid the war -- by keeping peace, paying tribute, or surrendering.)
If you know that you are strong, but you don't know your enemy... sometimes you will win, sometimes you will be surprised to find that your enemy is strong, too.
If you have no idea, and just attack randomly... expect to get destroyed soon.
In this light, the next quote would be interpreted like: before you start the war, make sure to build a strong army, so that you don't have to improvise desperately after the war has started.
Thank you! I probably wouldn't read the book, but this description is fascinating.
Not sure if this might be helpful -- I asked an AI how to tell the difference between "smart, autistic, and ADHD" and "smart, autistic, but no ADHD", and it gave me the following:
There are similarities between the two, because both autism and ADHD involve some executive dysfunction; social avoidance/exhaustion looks similar to ADHD avoidance; autistic burnout looks similar to ADHD inattention; being tired from masking looks similar to ADHD lack of focus; and high intelligence can mask both through compensation.
The differences:
Suppose that you need to read a boring technical book to understand something that is very important to you. Could you read it? (Autism only: if it is perfectly clear why the book is important, and you have a lot of time, and a quiet room only for yourself: yes. ADHD: sorry, after 10 minutes you will drop the book and go research something else.)
Do you lose hours of time without noticing? (Autism only: only when engaged with something interesting. ADHD: yes, all the time.)
If you have a clear task, proper environment, and interest; can you start doing the task? (Autism only: usually yes. ADHD: probably no.)
Do you make major decisions on impulse -- such as buy something expensive, quit your job, start a new project, start driving too fast -- and then wonder "why did I do this"? (Autism only: no. ADHD: often.)
...I found this interesting, because I was operating under the assumption that I have both autism and ADHD, but now it seems more like autism only. (Then again, this is AI; they like to hallucinate.)
Conditional on you not making the claim (or before you make the claim) and generally not doing anything exceptional, all three probabilities seem small... I hesitate to put an exact number on them, but yeah, 1e-6 could be a reasonable value.
Comparing the three options relative to each other, I think there is no reason why anyone would want to distract lesswrong from something. Wanting to erode trust seems unlikely but possible. So the greatest probability of these three would go to painting yourself as a victim, because there are people like that out there.
If you made the claim, I would probably add a fourth hypothesis, which would be that you are someone else's second account; someone who had some kind of conflict with Thane in the past, and that this is some kind of revenge. And of course the fifth hypothesis that the accusation is true. And a sixth hypothesis that the accusation is an exaggeration of something that actually happened.
(The details would depend on the exact accusation and Thane's reaction. For example, if he confirmed having met you, that would remove the "someone else's sockpuppet account" option.)
If you made the accusation (without having had this conversation), I would probably put 40% probabilities on "it happened" and "exaggeration", and 20% on "playing victim", with the remaining options being relatively negligible, although more likely than if you hadn't made the claim. The exact numbers would probably depend on my current mood, and on the specific words used.
Like, the list of stuff you said are extremely specific things.
I assume those were not chosen randomly from a large set of possible motivations, but because those options were somehow salient for Thane. So I would guess the priors are higher than 1e-6.
For example, I have high priors on "wants to distract people from something" for politicians, because I have seen it executed successfully a few times. The amateur version is doing it after people notice some bad thing you did, to take attention away from the scandal; the pro version is doing it shortly before you do the bad thing, so that no one even notices it, and if someone does, no one will pay attention because the cool kids are debating something else.
What even is human self-determination?
And yet, religion remains legal, although to a large degree it consists of brainwashing people from childhood to be scared of disobeying religious authorities.
Should a human-self-determination-respecting AI be like: "I will let you follow your religion etc., but if you ask me whether god exists, I will truthfully say no, and I will give the same truthful answer to your children, if they ask"?
Should it allow or prevent killing heretics? What about heretics who have previously stated explicitly: "if I ever deviate from our religion, I want you to kill me publicly, and I want my current wish to override my future heretical wishes"? Would it make a difference if the future heretic, at the moment of making this request, is a scared child who believes that god will put him in hell to be tortured for eternity if he does not make this request to the AI?