I'm not sure where you're from, or what the composition of your social circle is, Lumifer—but I think you should find as many people as you can (or use whatever reasonable metric you have for determining a "normal person") and say: "Being stupid is a disease. The first step to destigmatizing this disease is to stop making fun of stupid people; I too am guilty of this," and then observe the reaction you get.
Personally, I'm baffled as to how you could think that this wouldn't engender a negative response from someone who's never been on LW before.
That being said, simply changing the theme from "anti-stupidity" to "pro-intelligence" would change the post dramatically.
Or a resistance and no definitive establishment
I don't think you're missing anything, no.
Ah, good points.
I did not really know what was meant by "collectivist ideologies" and assumed it to be something along the lines of "ideologies that necessitate a collection of people." Originally, I didn't see any significance in the 50% (to me, it just seemed like an off-the-cuff number), but you put it into some good context.
I concede and retract my original criticism
Does that really seem like a political post to you, though? It doesn't look like an attempt to discuss politics, types of politics, or who's right and who's wrong; there's no tribalism, nothing regarding contemporary politics, etc. It looks like a pure and simple statement of fact: humans have been coercing other humans into doing specific actions—oftentimes empowering themselves—for the whole of human history.
I don't think tukabel's post was very political outside of the statement "An AI doing this is effectively politics, and politics has existed for a long time." I don't think that's considered discussing politics.
I don't think this is a pertinent or useful suggestion. The point of the reply wasn't to discuss politics, and I think it's a red herring to dismiss it as if it were.
If I may expand on tukabel's response: What is the point of this post? It seems to be some sort of "new" analysis as to how AIs could potentially hack humans—but if we get past the "this is new and interesting" presentation, it doesn't seem to offer anything new, unusual, or even really discussion-worthy.
Why is "The AI convinces/tricks/forces the human to do a specif...
This is a bit tangential, and a bit ranty, maybe a bit out of line, but it might help [a bit]...
From one self-hater to another: I've always been negative. I've always disliked myself, my past decisions, the world around me, and the decisions made therein. Here's the kind of philosophy I've embraced over the past few years:
My pessimism motivates me something like the way nihilism motivates Nietzsche. It is the ultimate freedom. I'm not weighed down by this oppressive sense that I'm missing some great opportunity or taking an otherwise good life and shitting...
I've been around on LW for years, and I'd say it's tended more toward refining the art of pragmatism than the art of rationality (though there's a good bit of intersection there).
There's a lot of nuance to this situation that makes a black-and-white answer difficult, but let's start with the word arrogance. I think the term carries with it a connotation of too much pride; something like when one oversteps the limits of one's domain. For example, the professor saying "You are probably wrong about this" is an entirely different statement (in terms of arrogance) than the enthusiast saying "You are probably wrong about this," because this is a judgement that the professor is well qualified to make. While I can see a...
Huh. Actually, I enjoyed reading it.
They're equally likely, but, unless Alice chose 1649271 specifically, I'm not quite sure what that question is supposed to show me, or how it relates to what I mentioned above.
Maybe let me put it this way: We play a dice game; if I roll 3, I win some of your money. If you roll an even number, you win some of my money. Whenever I roll, I roll a 3, always. Do you keep playing (because my chances of rolling 3-3-3-3-3-3 are exactly the same as my chances of rolling 1-3-4-2-5-6, or any other specific 6-numbered sequence) or do you quit?
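To make the asymmetry concrete, here's a minimal sketch (the loaded-die probability is my own illustrative assumption): the specific run of 3s is no less probable than any other specific sequence under a fair die, but it favors the "something other than chance is going on" hypothesis enormously.

```python
# Illustrative only: compare how likely six consecutive 3s are under a fair die
# versus a hypothetical loaded die that rolls 3 ninety percent of the time.
p_fair_sequence = (1 / 6) ** 6   # same as ANY specific six-roll sequence
p_loaded_sequence = 0.9 ** 6     # assumed loaded die: P(rolling a 3) = 0.9

likelihood_ratio = p_loaded_sequence / p_fair_sequence
print(p_fair_sequence)    # ~2.1e-05
print(likelihood_ratio)   # ~2.5e+04 in favor of "the die is loaded"
```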
I agree with you that the probability of Alice's sequence is the same as that of any other specific sequence, but the reason Alice's correct prediction makes a difference between the two situations is that the probability of her randomly guessing correctly is so low—and may indicate something about Alice and her actions (that is, given a complete set of information regarding Alice, the probability of her correctly guessing the sequence of coin flips might be much higher).
Am I misunderstanding the point you're making w/ this example?
I do not think these events are equally improbable (thus, equally probable).
The specific sequence, M, is some sequence in the space of all possible sequences; "... achieves some specific sequence M" is like saying "there exists an M in the space of all sequences such that N = M." That will always be true—that is, one can always retroactively say "Alice's end-result is some specific sequence."
On the other hand, it's a totally different thing to say "Alice's end-result is some specific sequence which she herself picked out before flipping the coin."
But if I'm not mistaken, the original argument around Chesterton's fence is that somebody had gone through great efforts to put a fence somewhere, and presumably would not have wasted that time if it were useless anyway.
My response was to this statement—specifically, toward the assumption that, since someone has gone through great efforts to put a fence somewhere, it's ok to assume said fence isn't useless. I'm not seeing where my comment is inconsistent with what it's responding to (that is, I'm seeing "gone through great efforts" as synon...
Every time I read about Chesterton's fence, it seems like the implication is:
Because someone worked hard on something, or because a practice/custom took a long time to develop, it has a greater chance of being correct, useful, or beneficial [than a replacement proposed by someone who looks at it and says "This doesn't make sense"]
I think that's a terrible statement.
In my experience, that's not what usually happens.
Where are you getting "that's what usually happens"?
Mine was a little ill-thought-out comment.
I guess I don't think that the meaning of my question was hidden in any significant way. This leads me to interpret your response less as a genuine concern for specificity that led to constructive criticism, and more as "I don't like this subject—therefore I will express disagreement with something you did to indicate that." It feels to me as if you're avoiding the subject in favor of nitpicking.
I know you knew what the actual question was, because you pointed out its vagueness. You knew the question you answered [Literally: Do different races have...
This does not address my question. The implication is "... shouldn't it follow that different races could have different brains—such that these differences are generalizable according to race?"
I think this implication was obvious. For example, if someone were to ask "Do different races typically have different skin colors?" I don't think you would answer "Different people of the same race have different skin colors. No two skin colors are exactly the same. You have to make statements that are less vague."
Edit: If, in fact, that is the way you would answer, then I'm mistaken, but I don't think that's necessary.
I don't understand why this comment is met with such opposition. Calories measure the amount of energy a food contains. If you use more energy than you take in, then you have to lose weight [stored energy]. There's literally no other way it could work.
The statement can even be further simplified to:
All people who create a calorie deficit lose weight.
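As rough back-of-the-envelope arithmetic (the 3,500 kcal-per-pound figure and the 500 kcal/day deficit are only common approximations I'm assuming, not exact constants):

```python
# Back-of-the-envelope: how long a steady calorie deficit takes to burn off a pound,
# using the common (approximate) figure of ~3,500 kcal per pound of body fat.
daily_deficit_kcal = 500   # assumed deficit: burn 500 kcal/day more than you eat
kcal_per_pound = 3500      # rough rule of thumb, not an exact constant

days_per_pound = kcal_per_pound / daily_deficit_kcal
print(days_per_pound)      # 7.0 -> roughly one pound per week
```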
The fact that selection pressure for mental ability is everywhere present is an excellent point; thanks. As to why it's a troublesome subject, I always maintain "If there is a quantitative difference, I sure as hell hope we never find it."
I think that'd lead to some pretty unfortunate stuff.
I don't think this is a stupid question, but everyone else seems to—that is, the immediate reaction to it is usually "there's obviously no difference." I've struggled with this question a lot, and the commonly accepted answer just doesn't sit well with me.
If different races have different skin, muscle/bone structure, genetics, and maybe other things, shouldn't it follow that different races could have different brains, too?
I know this is taboo, and feel the following sort of disclaimer is obligatory: I'm not racist, nor do I think any difference ...
I do not care at all about watching other people play sports. Everyone thinks it's super boring.
... doesn't seem to make much sense to me. In what context would he not mean that?
It took me a minute or two to figure out what you were trying to say. For anyone else who didn't get it first-read, I believe Lumifer's saying something like:
"World War II was 60 years ago. On a 1,400 year timescale, that's not getting somewhere, that's just a random blip of time where no gigantic wars happened; those blips have happened before. What do you mean 'to get to where we are now'?"
Now, to answer that, I think he means "to get to a society where fear of being killed or kidnapped (then killed) isn't a normal part of every day life, and women can wear whatever they want."
I know this post is five years old, but can someone explain this to me? I understood that both questions could have an answer of no because one may want to minimize the monetary loss / maximize the monetary gain of the poorer family—therefore, the poorer family should get a higher reduction and a lower penalty. Am I misunderstanding something about the situation?
Ah! This puts everything into a sensible context—thank you.
I'd like to have a conversation on said fairness sometime; maybe I'll make a thread about it.
Sorry, I'm a bit confused. Not being fully versed in the terminology of utilitarians, I may be somewhat in the dark...
... but, is the point of this piece "Money should be the unit of caring" or "Money is the unit of caring"? I expected it to be the latter, but it reads to me like the former, with examples as to why it currently isn't. That is, if money were actually the unit of caring—if people thought of how much money they spend on something as synonymous with how much they care about something—then a lawyer would hire someone to work...
I... I don't actually understand why this comment got so many downvotes—and I'm 100% for cryonics. In fact, I agree with the above comment.
Is this a toxic case of downvoting?
I really like this idea, but I can't tell whether I failed the test, I passed the test, or the article-selection for this test was bad.
Wait, IlyaShipitser—I think you overestimate my knowledge of the field of statistics. From what it sounds like, there's an actual, quantitative difference between Bayesian and Frequentist methods. That is, in a given situation, the two will come to totally different results. Is this true?
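To make concrete the kind of quantitative difference I'm asking about, here's a minimal sketch; the coin-bias setup and the numbers are my own toy example, not anything from the class:

```python
# Toy example (my own assumption): estimate a coin's bias after 3 heads in 4 flips.
heads, flips = 3, 4
tails = flips - heads

# Frequentist point estimate: the maximum-likelihood estimate is the observed frequency.
mle = heads / flips                          # 0.75

# Bayesian estimate with a uniform Beta(1, 1) prior: the posterior is
# Beta(1 + heads, 1 + tails), whose mean is computed below.
posterior_mean = (1 + heads) / (2 + flips)   # ~0.67

# Different numbers for small samples; the two converge as the number of flips grows.
print(mle, posterior_mean)
```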
I should have made it more clear that I don't care about some abstract philosophical difference if said difference doesn't mean there are different results (because those differences usually come down to a nonsensical distinction [à la free will]). I was under the impressi...
Sorry, I didn't mean to imply that probabilities only apply to the future. Probabilities apply only to uncertainty.
That is, given the same set of data, there should be no difference between (a) event A having already happened, with you guessing whether or not it happened, and (b) event A not having happened yet, with you guessing whether or not it will happen.
When you say "apply a probability to something," I think:
..."If one were to have to make a decision based on whether or not event A will happen, how would one consider the available data in making
I'm having a hard time answering this question with "yes" or "no":
The event in question is "Alice rolling a particular number on a 6-sided die." Bob, not knowing what Alice rolled, can talk about the probabilities associated with rolling a fair die many times, and base whatever decision he has to make from this probability (assuming that she is, in fact, using a fair die). Depending on the assumed complexity of the system (does he know that this is a loaded die?), he could convolute a bunch of other probabilities together to i...
A quantitative thing that indicates how likely it is for an event to happen.
I still don't understand the apparently substantial difference between Frequentist and Bayesian reasoning. The subject was brought up again in a class I just attended—and I was still left with a distinct "... those... those aren't different things" feeling.
I am beginning to come to the conclusion that the whole "debate" is a case of Red vs. Blue nonsense. So far, whenever one tries to elaborate on a difference, it is done via some hypothetical anecdote, and said anecdote rarely amounts to anything outside of "Different people somet...
This ends up being somewhat circular then, doesn't it?
Olbers' paradox is only a paradox in an infinite, static universe. A finite, expanding universe explains the night sky very well. One can't use Olbers' paradox to discredit the idea of an expanding universe when Olbers' paradox depends on the universe being static.
Furthermore, upon re-reading MazeHatter's "The way I see it is..." comment, Theory B does not put us at some objective center of reality. An intuitive way to think about it is: Imagine "space" being the surface of a balloo...
Trial-and-error.
There are, of course, inconsistencies that I'm unaware of: These are known unknowns. The idea, though, is that when I'm presented with a situation, any such relevant inconsistencies come up and are eliminated (either by a change of the foundation or a change of the judgement).
That is, inconsistencies that exist but don't come up aren't relevant.
An example—extreme but illustrative: Say an element of this foundational set is "I want to 'treat everyone equally'". I interview a Blue man for a job and, upon reflecting, think very ne...
Haha, that's what I do.
If my cost is $14.32, I know $1.43 is 10%, and half of that is about $0.71, so the tip's $2.14 (though I tip 20%, which is even easier).
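The same shortcut in code form (just restating the mental math above; the mental estimate of $2.14 only differs because it rounds earlier):

```python
# The 10%-plus-half trick: shift the decimal point, then add half of that.
cost = 14.32
ten_percent = cost / 10                   # 1.432 -> "about $1.43"
tip_15 = ten_percent + ten_percent / 2    # ~2.15 (15%)
tip_20 = 2 * ten_percent                  # ~2.86, the "even easier" 20% version

print(round(tip_15, 2), round(tip_20, 2))
```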
Yes and no. It's a different experience—like taking a bath and going swimming.
Why is the Newcomb problem... such a problem? I've read analysis of it and everything, and still don't understand why someone would two-box. To me, it comes down to:
1) Thinking you could fool an omniscient super-being
2) Preserving some strictly numerical ideal of "rationality"
Time-inconsistency and all these other things seem totally irrelevant.
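To spell out why two-boxing looks to me like nothing more than betting you can fool the predictor, here's a minimal expected-value sketch; the payoffs are the standard ones, and the predictor accuracy is an assumption I'm plugging in:

```python
# Expected value of each choice, given an assumed predictor accuracy p.
# Standard payoffs: the opaque box holds $1,000,000 iff one-boxing was predicted;
# the transparent box always holds $1,000.
p = 0.99  # assumed probability that the predictor guesses your choice correctly

ev_one_box = p * 1_000_000 + (1 - p) * 0
ev_two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)

# With p = 0.99: 990000.0 vs ~11000.0; one-boxing wins for any p above ~0.5005.
print(ev_one_box, ev_two_box)
```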
I have a fundamental set of morals from which I build my views. They aren't explicit, but my moral decisions all form a consistent web. Sometimes one of these moral-elements must be altered because of some inconsistency it presents, and sometimes my judgement of a situation must be altered because it was inconsistent with the foundation. But ultimately, consistency is what I aim for. I know this is super vague, and for that I apologize.
So far, this has worked for me 100% of the time. There have been some sticky situations, but even those have been worked o...
In this country, we charge students tens of thousands of dollars for that diploma. In fact, at my "public" university, first-year students are required to:
Of course, these and all other services (except teaching and research) are privately owned.
Otherwise, everything's pretty much the same.
I check it once a day. My work e-mail, a few more times if I'm working (which involves constant correspondence w/ people).
We observe a finite amount of light from finite distances.
That's an empirical fact.
That is to say, the empirical and theoretical ranges of electromagnetic radiation are not in agreement.
Why does observing a finite amount of light from a finite distance contradict anything about the range of electromagnetic radiation?
(Also... has anyone read http://en.wikipedia.org/wiki/Redshift? It's... well... good.)
Hi. I apologize: this is a pretty long reply—but thanks very much for your comment. :) I really appreciate the opportunity to follow up like this on something I said a few years ago.
My thoughts on being poly haven't changed. I still think it's the most functional romantic outlook. Although, after re-reading my comment: "without introducing new problems in their place" is somewhat of a loaded statement. If someone has a difficult time being polyamorous, then it introduces a lot of problems. Not to dwell on this too much, but that part of the co...
This may be an unrelated question, but I've seen a lot of similar exercises here—is the general implication that:
1 person tortured for 1 year = 365 people tortured for 1 day = 8760 people tortured for 1 hour = 525600 people tortured for 1 minute?
... Oh.
Hm. In that case, I think I'm still missing something fundamental.
I mean, more so: Consider an FAI so advanced that it decides to reward all beings who did not contribute to creating Roko's Basilisk with eternal bliss, regardless of whether or not they knew of the Basilisk's potential existence.
Why is Roko's Basilisk any more or any less of a threat than the infinite other hypothetically possible scenarios that have infinite other (good and bad) outcomes? What's so special about this one in particular that makes it non-negligible? Or to make anyone concerned about it in the slightest? (That is the part I'm missing. =\ )
I don't understand why Roko's Basilisk is any different from Pascal's Wager. Similarly, I don't understand why its resolution is any different from that of the argument from inconsistent revelations.
Pascal's Wager: http://en.wikipedia.org/wiki/Pascal%27s_Wager
Argument: http://en.wikipedia.org/wiki/Argument_from_inconsistent_revelations#Mathematical_description
I would actually be surprised (really, really surprised) if many people here have not heard of these things before—so I am assuming that I'm totally missing something. Could someone fill me in?
(Edit: Instead...
To those who seem not to like the manner in which XiXiDu is apologizing: if someone who genuinely thinks the sky is falling apologizes to you while still wearing their metal hat, then that's the best you can possibly expect. To reject the apology until the hat is removed is...
I don't believe you, and I'm especially skeptical of IQ—and a lot of other fetishizations of overly confident attempts to exactly quantify hugely abstract and fluffy concepts like intelligence.