Hmmmm. Depends how ingrained the memes are in the material. Oh, you'd certainly have awareness of the memes - but accepting them is a different story, and a certain skepticism in a student (or in a professor) can probably blunt that effect quite a bit.
Even if the memes are that thoroughly integrated, though, the only effect is to make the establishment of a parallel infrastructure that much more appropriate a solution.
I think missionaries are usually sent to particular places by organizations, and when one leaves another goes.
It's not going to be perfect. Sometimes there will be more missionaries than established places to send them, and new missions can be opened - but sometimes a missionary will, through mischance or malice, die before he's expected to do so and there will be no replacement ready to send.
I don't actually know about specific instances, but there should be enough data on what happens when a mission is abandoned to be able to tell how successful it c...
Huh.
Okay. In this particular real-life example, though, it is clear that the politicisation is in the infrastructure around the science, not in the science itself. That is to say, learning climate science is not memetically dangerous - it is simply difficult to get a paper published that does not agree with certain politics. And that is bad, but it is not the worst possibility - it means that someone merely studying climate science is safe in so doing.
So, in this particular case, the solution of studying climate science oneself, becoming an expert, and the...
But "amateurs should defer to experts", in reference to Christianity, doesn't mean "amateurs should accept the experts' word about Christianity," it means "amateurs should accept the claims presented by Christianity". There's nothing comparable for Shakespeare. In this sense, neither experts nor schools teach Shakespeare at all.
Um.
Going back to the comment that started this all - over here - shows that the quote originally comes from this page, which is an essay written from the atheist perspective on how to go about arguin...
If all experts are infected with meme plagues, and are able to prevent alternative views from being presented, then you have a problem. This implies that one of the following is true:
You could attempt to study the subject to expert level yourself, taking appropriate anti-meme-pla...
I don't think you need a careful effort to track their exact effectiveness. It would be fairly obvious in a couple of generations that peaceful missionaries would fall into one of two categories - either they have some success (as evidenced by some number of converts that they win over) or they have no success (as evidenced by every missionary outreach pretty much collapsing as soon as the missionary either leaves or dies).
A careful effort to track effectiveness could tell the difference between slight success and strong success, but I think that even with a...
A brief Google points me at this fellow. He was a medieval Franciscan missionary to China, and established what appears to have been a reasonably successful church there that stayed around for about forty years after his death (until the Ming Dynasty arose in 1368 and expelled them from the country).
No, it just has to get big enough that Christians have enough other Christians around that the social structure becomes self-sustaining. Social ostracism is used to get rid of spontaneously appearing non-Christian individuals, not large groups.
Fair enough. A neighbourhood or other small community can be self-sustaining, then.
But it still needs to be started.
So don't assume it's an exhaustive list.
As soon as I don't assume it's an exhaustive list, your point collapses. Yes, it does spread as a meme system; this is because it is a meme system.
Newtonia...
We know how religion spreads.
I'm not sure that you do.
From your previous post:
The predominant ways in which Christianity has spread are conversion by the sword, parent to child transmission, and social ostracism for people who refuse to believe it.
If this were true - and if it were an exhaustive list of the predominant ways - then I would expect to see the following:
Hmmm. Could work. Or perhaps the first thing he'd conclude is that you are infected by the meme plague, and the second thing he'd do is suspect that you are trying to infect him with the meme plague.
He could respond to this in two ways; either by ending the debate, in the hope of immunising himself; or by arguing against you, in the hopes of curing you.
...huh. Actually, thinking about this, a lot of bad debate habits (ignoring the other person's evidence, refusing to change your mind, etc.) actually make a lot of sense when seen as protective measures specifically to prevent infection by meme plagues.
Then I may have misunderstood the intention of the phrase.
As an observation about the limits of the maxim, I agree with it. And no, I'm not going to argue that a memetic plague never happens.
I am, however, going to argue that a memetic plague is hard to identify, making this observation very difficult to actually apply with any reliability. It's just too easy - if I see a bunch of experts in the subject all saying something that I disagree with - for me to think "they're infected by a memetic plague". It's so much more comforting to think that th...
Also, distinguish between "anyone can claim X" and "anyone can correctly claim X". Creationists could claim that evolution spreads the same way--but they'd be wrong.
Assume a climate change denier or a creationist who (a) makes such an argument and (b) firmly believes it to be correct. How would he be best convinced that he is, in fact, wrong?
I think this fails in the case where the experts are infected by a meme plague.
Isn't this a Fully General Counterargument, though? Climate change deniers can claim that climate experts are 'infected by a meme plague'. Creationists can claim anyone who accepts evolution is 'infected by a meme plague'. So on and so forth.
Hmmm. To mess around with equations a bit... what can we say about P(Bunyan | stories about Bunyan) and P(!Bunyan | stories about Bunyan), given P(stories about Bunyan | Bunyan) > P(stories about Bunyan | !Bunyan)?
Let's generalise it a bit (and reduce typing). What can we say about P(A|B) and P(!A|B) when P(B|A) > P(B|!A)?
Consider Bayes' Theorem: P(A|B) = [P(B|A)*P(A)]/P(B). Thus, P(B) = [P(B|A)*P(A)]/P(A|B)
Therefore, P(!A|B) = [P(B|!A)*P(!A)]/P(B)
Now, P(!A) = 1-P(A). So:
P(!A|B) = [P(B|!A)*{1-P(A)}]/P(B)
Solve for P(B):
P(B) = [P(B|!A)*{1-P(A)}]/P...
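Plugging in some made-up numbers shows where this lands. Here's a minimal sketch in Python, assuming an illustrative prior P(A) = 0.3 and likelihoods P(B|A) = 0.8 and P(B|!A) = 0.2; any choice with P(B|A) > P(B|!A) behaves the same way, with the posterior P(A|B) coming out above the prior P(A) and P(!A|B) coming out below P(!A):

```python
# Illustrative numbers only: any values with P(B|A) > P(B|!A) give the same direction of result.
p_a = 0.3              # prior P(A), e.g. "Bunyan existed"
p_b_given_a = 0.8      # P(stories about Bunyan | Bunyan)
p_b_given_not_a = 0.2  # P(stories about Bunyan | !Bunyan), assumed smaller

# Law of total probability: P(B) = P(B|A)*P(A) + P(B|!A)*P(!A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' Theorem for each hypothesis
p_a_given_b = p_b_given_a * p_a / p_b
p_not_a_given_b = p_b_given_not_a * (1 - p_a) / p_b

print(f"P(A|B)  = {p_a_given_b:.3f} (prior P(A) was {p_a})")
print(f"P(!A|B) = {p_not_a_given_b:.3f} (prior P(!A) was {1 - p_a:.1f})")
assert p_a_given_b > p_a                              # B shifts probability towards A...
assert abs(p_a_given_b + p_not_a_given_b - 1) < 1e-9  # ...but the posteriors still sum to 1.
```

So the stories do raise the probability of a real Bunyan, though by how much depends entirely on the prior and on how much more likely the stories are given a real Bunyan than given an invented one.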
In a pure-logic kind of way, finding B where B is exactly equivalent to A means nothing, yes. However, in a human-communication kind of way, it's often useful to stop and rephrase your argument in different words. (You'll recognise when this is helpful if your debate partner says something along the lines of "Wait, is that what you meant? I had it all wrong!")
This has nothing to do with formal logic; it's merely a means of reducing the probability that your axioms have been misunderstood (which is a distressingly common problem).
What I've yet to glean from your comments is how 'absolute truth' is any different than 'green sound'. They're both short phrases but neither seems to refer to anything.
It's kind of a side point, but there is actually such a thing as green noise (there are four different definitions...)
"Uniforms are good because they'll reduce bullying." (A because B, B --> A) "Uniforms are bad, because along with all their costs they fail to reduce bullying." (~A because ~B, ~B --> ~A)
A: "Uniforms are good"
B: "Uniforms reduce bullying"
B->A: "If uniforms reduce bullying, then uniforms are good."
~B->~A : "If uniforms do not reduce bullying, then uniforms are not good."
"A is equivalent to B": "The statement 'uniforms are good' is exactly as true as the statement 'unif...
Let me rephrase: does the double crux method contain any improvement that is not already covered by tabooing terms? Or simply saying "why do you think this is the case?"
In this particular argument, no. (In fact, if both participants are willing to examine their own chain of reasoning and consider that they might be wrong, then asking "why do you think this is the case?" sounds like a perfect first step in the double crux method to me)
In cases where the disagreement is due to (say) Bob making a mathematical error, tabooing terms is u...
That is true. In a disagreement where the root of the disagreement is applying different meanings to the word 'better', properly defining that term would identify the true disagreement straight away. The double crux method, by seeking equivalent statements for each position, brings that disagreement in terminology to light almost immediately (where a word-by-word process of definitions might well get mired down in the definition of 'steel' and whether or not it includes small amounts of chromium - which might be interesting and informative on its own, but ...
"Aluminium is better than steel!" cries Alice.
"Steel is better than aluminium!" counters Bob. Both of them continue to stubbornly hold these opinions, even in the face of vehement denials from the other.
It is not at once clear how to resolve this issue. However, both Alice and Bob have recently read the above article, and attempt to apply it to their disagreement.
"Aluminium is better than steel because aluminium does not rust," says Alice. "The statement 'aluminium does not rust, but steel does' is an equivalent argument ...
This set of strategies looks familiar. I've never called it double crux or anything like that, but I've used a similar line in internet arguments before.
Taking a statement that disagrees with me; assuming my opponent is sane and has reasons to insist that that statement is true; interrogating (politely) to try to find those reasons (and answering any similar interrogations if offered); trying to find common ground where possible, and working from there to the point of disagreement; eventually either coming to agreement or finding reasons why we do not agree that d...
First get straight: good literally objectively does mean desirable.
It does not.
Wiktionary states that it means "Acting in the interest of good; ethical." (There are a few other definitions, but I'm pretty sure this is the right one here). Looking through the definitions of 'ethical', I find "Morally approvable, when referring to an action that affects others; good. " 'Morally' is defined as "In keeping of requirements of morality.", and 'morality' is "Recognition of the distinction between good and evil or between righ...
Which individual? There might be some decision theory which promotes the interests of Joe Soap, against the interests of society, but there is no way I would call it morality.
Ah, I may have been unclear there.
To go into more detail, then; you appear to be suggesting that optimal morality can be approached as a society-wide optimisation problem; in the current situation, these moral strictures produce a more optimal society than those, and this optimisation problem can be solved with sufficient computational resources and information.
But now, let us consi...
So... what you're suggesting, in short, is that a sufficiently intelligent AI can work out the set of morals which are most optimal in a given human society. (There's the question of whether it would converge on the most optimal set of morals for the long-term benefit of the society as a whole, or the most optimal set of morals for the long-term benefit of the individual).
But let's say the AI works out an optimal set of morals for its current society. What's to stop the AI from metaphorically shrugging and ignoring those morals in order to rather build more paperclips? Especially given that it does not share those values.
Then again--there are Catholic moralists, including, I think, some Catholics I know personally, who firmly believe that (for example) stealing is wrong because stealing is wrong. Not for any other reason.
This sounds like deontological ethics. It's not by any means unique to Catholicism; it's just the general idea that being good involves following a (presumably carefully chosen) list of rules.
Not all Catholics are deontologists; not all deontologists are Catholic. And, I may be misreading here, but I think your worry is more about deontology than Cathol...
Atheists don't hold that religions are mostly wrong. They hold that religious believers depend on untestable hypotheses and shield their beliefs from criticisms instead of engaging them.
I have come across atheists who hold - sometimes quite loudly - that all religions are completely wrong.
I have no doubt that some think as you describe, but most certainly not all.
Hi, Trent!
I'd love to be one of the first people on Mars. Not sure how realistic that goal is or what steps I should even take to make it happen beyond saving $500,000 for a supposed SpaceX ticket and mastering a useful skill (coding!), but it's something to shoot for!
Have you heard of the Mars One project?
Hmmm. I had to go back and re-read the story.
...I notice that, while they were not ignorant that they were causing pain and emotional distress, they did honestly believe that they were doing the best thing and, indeed, even made a genuine attempt to persuade humanity, from first principles, that this was the right and good thing to do.
So they were doing, at all times, the action which they believed to be most moral, and were apparently willing to at least hear out contrary arguments. I still maintain, therefore, that their actions were immoral but they themselves were not; they made a genuine attempt to be moral to the best of their ability.
What they did was clearly wrong... but, at the same time, they did not know it, and that has relevance.
Consider; you are given a device with a single button. You push the button and a hamburger appears. This is repeatable; every time you push the button, a hamburger appears. To the best of your knowledge, this is the only effect of pushing the button. Pushing the button therefore does not make you an immoral person; pushing the button several times to produce enough hamburgers to feed the hungry would, in fact, be the action of a moral person.
The above par...
"Morals" and "goals" are very different things. I might make it a goal to (say) steal an apple from a shop; this would be an example of an immoral goal. Or I might make a goal to (say) give some money to charity; this would be a moral goal. Or I might make a goal to buy a book; this would (usually) be a goal with little if any moral weight one way or another.
Morality cannot be the same as terminal goals, because a terminal goal can also be immoral, and someone can pursue a terminal goal while knowing it's immoral.
AI morals are not a category error; if an AI deliberately kills someone, then that carries the same moral weight as if a person deliberately kills someone.
(Wow, this was from a while back)
I wasn't suggesting that the AI might try to calculate the reverse sequence of moves. I was suggesting that, if the cube-shuffling program is running on the same computer, then the AI might learn to cheat by, in effect, looking over the shoulder of the cube-shuffler and simply writing down all the moves in a list; then it can 'solve' the cube by simply running the list backwards.
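In code terms, the cheat is nothing more than bookkeeping. A minimal sketch (the move notation and the inversion rule here are illustrative stand-ins for whatever representation the actual cube-shuffling program uses):

```python
# Illustrative "cheat": record the scramble as it happens, then undo it by
# replaying the inverted moves in reverse order. No actual solving involved.

def invert(move):
    # "R" (quarter turn) and "R'" (its reverse) undo each other; double turns ("F2") undo themselves.
    if move.endswith("2"):
        return move
    return move[:-1] if move.endswith("'") else move + "'"

scramble = ["R", "U'", "F2", "L", "D'"]             # moves seen "over the shoulder"
solution = [invert(m) for m in reversed(scramble)]  # undo them back to front

print(solution)  # ['D', "L'", 'F2', 'U', "R'"]
```

Which is exactly why it matters whether the shuffler's moves are visible to the 'solver' at all.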
At 10am tomorrow, I can legitimately state that my confidence in the proposition "the cable guy will arrive after noon" is different from what it was today.
There are two cases to consider:
But none of this takes anythin...
That text is actually quite misleading. It never says that it's the snake that should be thought of as figurative; maybe it's the Tree or eating a certain fruit that is figurative.
True - any part of the described incident (more likely, all of it) could be figurative.
...The devil is a being of "pure spirit" and the Catholics believe that he was an angel that disobeyed God. Now, this fallen angel somehow tempts the first parents, who are in a garden (378). It could presumably only be done in one or two ways: Satan talks directly to Adam and Eve
...it's possible. There are many differences between our proposed worlds, and it really depends on what you mean by "more extreme". Volairina's world is "more extreme" in the sense that there are no rules, no patterns to take advantage of. My world is "more extreme" in that the rules actively punish rationality.
My world requires that elementary physics somehow takes account of intent, and then actively subverts it. This means that it reacts in some way to something as nebulous as intent. This implies some level of understandin...
we're shown no tempter, whether human or animal or evil spirit.
There's one in Matthew 4, verses 1 to 11, in which Jesus spends forty days in the desert, fasting, and then is visited (and tempted) by the Devil.
We now know that talking requires a big fancy brain, such as humans have and snakes conspicuously don't (and don't have room for), and the right sort of vocal apparatus, ditto.
How big and fancy a brain does a parrot have?
Elections aren't everything.
Yes, I know that I, personally, have had (and will have) absolutely zero effect on the American 2016 November elections. I am fully aware that I, personally, will have absolutely zero impact on Donald Trump's candidacy, and everything that goes into that. And I am perfectly fine with that, for a single, simple, and straightforward reason: I am not American; I live in a different country entirely. I have a (very tiny) impact on a completely different set of elections, dealing with a completely different set of politicians and pol...
Hmmm. I don't know enough history to be able to name specific situations, but what about the other way round - countries that learned Latin without being conquered? (Perhaps for ease of trading?)