In this approach, you concede a need to at least occasionally intervene in a particular kind of dispute, such as banning the white supremacists.
Another problem: what does one mean by "white supremacists"? The definition used by the people who most advocate banning them tends to include anyone who believes certain statements about differences between races that are almost certainly true, for example, that race correlates with IQ. This is especially a problem for a forum that wants to be "rational".
You seem to be conflating quantity and quality.
To mention the elephant in the living room, I wonder if the increasingly broken Wikipedia mod culture has something to do with this.
What about ostensibly apolitical posts that nonetheless use hot button issues as examples?
What about situations where a hot button issue comes up in the context of discussion?
Here is Vox Day explicitly arguing that if conservatives can be fired for expressing their opinions, so should NFL players be for disrespecting the flag.
We all know that the NFL wouldn't hesitate to act if players started throwing Nazi salutes; they already come down hard on the expression of any opinion that is negative about homosexuality.
The Rubicon has been decisively crossed, so it's time to start cracking down on "speech" Americans don't like.
Always play by the rules that are actually in place, not by the rules that you wish were in place.
This certainly seems rather accusatory, seeing as (as far as I know) Ozy doesn't actually support doxxing random social media users and is certainly not responsible for the actions of the entire SJ movement.
Except the OP is presented as an argument against the elements of SJ that would oppose it.
Ozy's claim here, as far as I can tell, is that, even if people on Our Side stopped doing bad things, that wouldn't automatically cause people on The Other Side to stop doing bad things. Do you actually think that Ozy is wrong about this, or do you
"intersectional" strikes me as an example of an intentionally confusing term; at least, I've never been able to figure out a meaning beyond "a word people throw into arguments to make it a norm violation to notice that said arguments make no sense".
I'm scared of leaving my house. This means that when I make social arrangements a lot of the time I won't end up actually going to them because I will be too scared of leaving my house. Whether I'm going to have a good mental health day or a bad mental health day is hard to predict even a week in advance, because it depends on short-term triggers like whether I've fought with a close friend, whether the assholes across the street have decided to set off fireworks, whether a person has said something unpleasant about me on the Internet,
I'm reminded of an incident in Richard Feynman's "What do you care what other people think?" involving his then-girlfriend, later wife, Arline and her illness. Her family chose to go with (1); both Feynman and her were rather annoyed when they found out. I don't remember the exact details right now and don't have the book in front of me.
I don't think that it depends on them.
Then why are you asserting them?
There's a bunch of politics involved, and additionally it's about the distinction between states for which I believe jimmy, to whom I replied, has mental models.
And why does this discussion of psychological states depend on you asserting false statements about contemporary politics?
Clearly Trump tells lies that lead to people believing simple factual falsehoods.
I don't think this is clear at all. At least the statements of his that people object to the loudest aren't lies.
In (1) the subject is the word "none". The word "us" is part of the prepositional phrase "of us".
Honestly, I'm not sure how much Scott Adams even believes what he says. I suspect part of it is that his target audience is people for whom "don't worry, Trump doesn't actually believe these things, he's just saying them to hypnotize the masses" is less threatening than "actually these things Trump says are true". If you want the latter, I recommend Steve Sailer.
I'm sorry, I got the name wrong. I meant to say John Oliver and got the last name wrong.
This doesn't exactly inspire me to trust your memory about other details of the story.
I was referencing information from one of his videos on Trump.
Specifically, he appears to have made a joke that could reasonably be interpreted as an invitation to Trump (specifically inviting an alias Trump once used), then said "that was only a joke" when Trump called him on it.
I think Last Week Tonight generally follows at least Karl Rove's 100% truth test.
I admitte...
This reads like the author has such a strong external locus of control that he can't even imagine how an actual internal locus works. The whole point of an internal locus of control for things you can actually control is to control them. For example, rather than the rationalization for inaction:
My house is a mess, it's my fault but I don't care.
the actual internal locus of control behavior is:
My house is a mess and I'm going to clean it up right now, and that means before replying to this comment.
There are many issues where Trump lies even though the truth would be simple to explain and to understand for average people. When Trump tells the public that Jon Stewart invited Trump multiple times when Jon Stewart did no such thing, it might be "emotionally true" in the sense that it's what people who watch Trump want to emotionally believe.
It's interesting that the best example you could come up with appears to be an obscure bit of trivia. I wasn't able to figure out the exact details by searching, but Jon Stewart certainly said many things...
Honestly, the problem with this approach is that it tends to degenerate to "when my side tells lies, they're still emotionally true; when the other side makes inconvenient statements that are true, I can dismiss them as emotionally false".
Agreed. Which is why the scientific approach is to think about how to refute the claim that the earth is flat using only information you personally gather, rather than to make snarky comments about the implausibility of the conspiracy.
Ok, now you're just (intentionally?) missing the point of the hypothetical.
Also, science can and has been (and certainly still is) wrong about a lot of stuff. (Nutrition being a recent less-controversial example.)
That what you describe as the "real point" amounts to an appeal to authority.
I'd have to say no here, but if you asked about plants observing light or even ice observing heat, I'd say "sure, why not". There are various differences between what ice does, what a Roomba does, and what I do; however, they are mostly quantitative, and using one word for them all should be fine.
What are you basing this distinction on? More importantly, how is whatever you're basing this distinction on relevant to grounding the concept of empirical reality?
Using Eliezer's formulation of "making beliefs pay rent in anticipated experiences"...
Science is based on the principle of nullius in verba (take no one's word for it). So your attitude is anti-scientific and likely to fall afoul of Goodhart's law.
Ok, so where does it store the administrator password to said server?
How do you know? Does a falling rock also observe the gravitational field?
I don't think this could work. Where would the virus keep its private key?
even for the improvement of the virus.
I don't think this would work. This requires some way for it to keep the human it has entrusted with editing its programming from modifying it to simply send him all the money it acquires.
Finally, being conscious doesn't mean anything at all. It has no relationship to reality.
What do you mean by "reality"? If you're an empiricist, as it looks like you are, you mean "that which influences our observations". Now what is an "observation"? Good luck answering that question without resorting to qualia.
A: "I would have an advantage in war so I demand a bigger share now"
B: "Prove it"
A: "Giving you the info would squander my advantage"
B: "Let's agree on a procedure to check the info, and I precommit to giving you a bigger share if the check succeeds"
A: "Cool"
Simply by telling B about the existence of an advantage A is giving B info that could weaken it. Also, what if the advantage is a way to partially cheat in precommitments?
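The "agree on a procedure to check the info" step could at minimum be built on a commitment scheme, so that A fixes its claim up front and cannot revise it after B precommits. A toy sketch, assuming SHA-256 commitments (real agents would need far stronger machinery, and this does nothing about the leakage problem above):

```python
import hashlib
import secrets

def commit(info: bytes) -> tuple[bytes, bytes]:
    """A commits to `info` now; publishes the digest, keeps the nonce."""
    nonce = secrets.token_bytes(32)  # random blinding so B can't guess `info`
    digest = hashlib.sha256(nonce + info).digest()
    return digest, nonce

def reveal_ok(digest: bytes, nonce: bytes, info: bytes) -> bool:
    """B checks that the later-revealed info matches the earlier commitment."""
    return hashlib.sha256(nonce + info).digest() == digest

# A commits before negotiation; reveals only after B has precommitted.
digest, nonce = commit(b"my secret military advantage")
assert reveal_ok(digest, nonce, b"my secret military advantage")
assert not reveal_ok(digest, nonce, b"a different claim")
```

The point of the scheme is only that A can't change its story retroactively; it doesn't, by itself, stop A from lying in the original commitment or leaking strategic information by committing at all.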
Even if A is FAI and B is a paperclipper, as long as both use correct decision theory, they will instantly merge into a new SI with a combined utility function.
What combined utility function? Utility functions are only defined up to positive affine transformations, so there is no canonical way to combine them.
Maybe you can't think of a way to set up such trade, because emails can be faked etc, but I believe that superintelligences will find a way to achieve their mutual interest.
They'll also find ways of faking whatever communication methods are being used.
Empirically, people who believe in the Christian hell don't behave dramatically better than people who don't.
Hasn't quite been my experience but, whatever.
The doctrine of hell whose (de)merits we're discussing doesn't actually say that people are only eligible for hell if they have never stopped believing in it.
Of course; otherwise it would be completely useless, as it would simply motivate people to stop believing in it.
The more people the threat is known to, the less likely that they all comply.
And someone who doesn't know about it is even less likely to comply. If you've already concluded that threatening to torture someone is worth it for the increased chance of getting compliance, then the exact same calculation applies to everyone else.
it seems to me that you want the threat known to a small number of people and to persuade them to work towards a highly specific goal that those people are particularly well-suited to achieving.
Not really. In fact one reason for universality is to discourage reactions like Eliezer's.
Because it seems incredibly unlikely to maximize utility,
Leaving aside for the moment the question of whether utilitarianism is the right approach to these kinds of problems, there is in fact a decision-theoretic argument in favor of this. Eliezer stumbled upon a version of it and didn't react well, specifically banning all detailed discussion of it from LW in an extremely ham-handed manner.
neither does it accord with what seems to me a general principle that punishment should be at most proportionate to the crime being punished.
Where does this principle co...
I am not much interested in turning this into a lengthy argument about whether the available evidence actually does or doesn't support Christianity.
I'm not necessarily either. I'm not even a Christian. That's what makes the number of laughably bad arguments people use to deconvert themselves so frustrating.
Punishing bad people may or may not be morally monstrous. Punishing finite badness with eternal torture is morally monstrous.
Why? I actually disagree with this point.
...The scientific doctrine of light and matter does not really say that light an
Christian doctrines as morally monstrous (hell)
Why is punishing bad people morally monstrous?
probably internally incoherent (Trinity, dual nature of Christ)
Do you also find the scientific doctrine of light and matter being both particle and wave internally incoherent?
For example you wrote:
I was a Christian for many years, but repeatedly found that the best arguments I heard against Christianity seemed stronger than either the best refutations of those arguments or the best arguments for Christianity, and was uncomfortably aware that I didn't have much in the way of actual evidence for the factual claims of the religion I followed.
Which arguments and which factual claims?
It's not stupid as it stands. It is, however, rather lacking in the specifics one would need to evaluate it.
I’m currently atheist; my deconversion was quite the unremarkable event. September 2015 (I discovered HPMOR in February and RAZ then or in March), I was doing research on logical fallacies to better argue my points for a manga forum, when I came across Rational Wiki; for several of the logical fallacies, they tended to use creationists as examples. One thing led to another (I was curious why Christianity was being so hated, and researched more on the site)
So you came to a pseudo-rationalist site (you will find the opinion of Rational Wiki around here ...
subsidized egg freezing and childcare
Fertility is inversely correlated with income; the problem isn't that people don't have enough money, it's that in some sense they don't want children. I think a better approach would be cultural changes that make it high-status to have lots of children.
I don't think that is a correct summary of the essay at all, which is really pointing to a problem with how we think about coordination.
True, his point is that Bayesians should be able to overcome these coordination problems by doing X, Y, and Z. Except neither he nor anyone else has shown any interest in actually making an effort to do X, Y, and Z.
Unfortunately, analogies with Greek city states are wasted on me, because I don't have enough knowledge about them to make deep connections. For example, how specifically did Athens solve the problem of refugees bringing their own culture, sometimes incompatible with the original values of Athens?
Citizenship, and hence the right to vote, was restricted to people both of whose parents were citizens.
Or whichever wikipedia admin is watching those pages won't permit criticism.
Surely the brain is important, but humans have existed on Earth for 200,000 years, and civilisation has existed for only 5,000. So something changed, not only in the brain.
And neural nets have existed for ~500 million years.
However, the direction of the likelihood ratio (P(B|A)/P(B|~A)), a.k.a. the quantity you actually care about when updating on new evidence, is symmetric: it exceeds 1 exactly when P(A|B)/P(A|~B) does.
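This symmetry can be checked numerically with a made-up joint distribution (the numbers here are purely illustrative): the two likelihood ratios differ, but they always land on the same side of 1.

```python
# Made-up joint distribution over A (rain) and B (wet streets).
joint = {
    (True, True): 0.30,   # rain, wet streets
    (True, False): 0.02,  # rain, dry streets
    (False, True): 0.08,  # no rain, wet streets (sprinklers, say)
    (False, False): 0.60, # no rain, dry streets
}

p_A = joint[(True, True)] + joint[(True, False)]  # P(A) = 0.32
p_B = joint[(True, True)] + joint[(False, True)]  # P(B) = 0.38

# Likelihood ratio of B given A vs. ~A, and of A given B vs. ~B.
lr_b = (joint[(True, True)] / p_A) / (joint[(False, True)] / (1 - p_A))
lr_a = (joint[(True, True)] / p_B) / (joint[(True, False)] / (1 - p_B))

print(lr_b, lr_a)                # the two ratios differ numerically...
assert (lr_b > 1) == (lr_a > 1)  # ...but point in the same direction
```

B being evidence for A and A being evidence for B stand or fall together, even though the strength of the evidence differs in each direction.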
A kind of causation. X implies Y.
You seem to be confusing causation with evidential implication. DON'T. Wet streets are evidence for rain, but wet streets do not cause rain.
(c) whether the information on the page is accurate.
Except not all topics and not all information are of equal interest to people.