All of AmagicalFishy's Comments + Replies

Low IQ is not fixable by practice

I don't believe you, and I'm especially skeptical of IQ—along with a lot of other fetishized, overconfident attempts to exactly quantify hugely abstract and fluffy concepts like intelligence.

0Lumifer
You don't have to believe me: there is a LOT of literature on the subject. IQ research -- precisely because it's so controversial -- is one of the more robust parts of psychology. It does not suffer from a replication crisis and its basic conclusions have been re-confirmed over and over again.

I'm not sure where you're from, or what the composition of your social circle is, Lumifer—but I think you should find as many people as you can (or use whatever reasonable metric you have for determining a "normal person") and say: "Being stupid is a disease. The first step to destigmatizing this disease is to stop making fun of stupid people; I too am guilty of this," and then observe the reaction you get.

Personally, I'm baffled as to how you could think that this wouldn't engender a negative response from someone who's never been on LW before.

That being said, simply changing the theme from "anti-stupidity" to "pro-intelligence" would change the post dramatically.

0Lumifer
I expect most of my social circle to agree that stupidity is a pathological condition ("disease" is too much associated with infections and contagion for me), albeit a very widespread one. I don't know why you would want to destigmatize it, though -- incentives matter.

Or a resistance and no definitive establishment

I don't think you're missing anything, no.

Ah, good points.

I did not really know what was meant by "collectivist ideologies" and assumed it to be something along the lines of "ideologies that necessitate a collection of people." Originally, I didn't see any significance of the 50% (to me, it just seemed like an off-the-cuff number), but you put it into some good context.

I concede and retract my original criticism

Does that really seem like a political post to you, though? It doesn't look like an attempt to discuss politics: there's no tribalism, no arguing over types of politics or who's right and who's wrong, nothing regarding contemporary politics, etc. It looks like a pure and simple statement of fact: humans have been coercing other humans into doing specific actions—oftentimes empowering themselves—for the whole of human history.

I don't think tukabel's post was very political outside of the statement "An AI doing this is effectively politics, and politics has existed for a long time." I don't think that's considered discussing politics.

2gjm
Yup, it seems political, because tukabel made particular choices of what specific actions to highlight and what particular sorts of ideologies to suggest might be responsible. In the sort of scenario I would consider "worst case", a 50% tax to fund whatever the AI is doing wouldn't be anywhere on the list of things to worry about. Why mention "give it half their income"? It doesn't even make sense: "give it half of their income in exchange for totalitarian control" -- so humans give the AI half their income, and "in exchange" the AI gets totalitarian control over them? That's not an exchange, it's like saying "I'll pay you £10 in exchange for being obliged to polish your shoes for you". Outside of peculiar sexual fetishes, no one does that. So why talk about "half of their income"? Because tukabel wants to complain about how taxes are bad, that's why. Politics. Collectivist ideologies? Well, I guess. But it's not as if it's only "collectivist ideologies" that have persuaded people to hand over their money for dubious benefits. (To take a typical class of counterexamples from quite a long time ago, consider fraudulent businesses in the time of the South Sea Bubble.) Why focus on "collectivist ideologies"? Because tukabel wants to complain about how collectivism is bad and individualism is better, that's why. That's how it looks to me, anyway. Maybe I'm wrong; maybe I've been oversensitized by seeing any number of other people doing what I think tukabel is doing here, and now see Proselytizing Libertarians under every bed. That would still be an example of why it's worth going out of the way to avoid saying things that are liable to look like politics: because even if you aren't actually intending to slide political preaching into a discussion of something else, it's very easy for it to look as if you are. (This is one of the key points Eliezer made back when he wrote "Politics is the Mind-Killer". Look at the Nixon example he cites; it's no more inflammatory than

I don't think this is a pertinent or useful suggestion. The point of the reply wasn't to discuss politics, and I think it's a red herring to dismiss it as if it were.

If I may expand on tukabel's response: What is the point of this post? It seems to be some sort of "new" analysis as to how AIs could potentially hack humans—but if we get past the "this is new and interesting" presentation, it doesn't seem to give anything new, unusual, or even really discussion-worthy.

Why is "The AI convinces/tricks/forces the human to do a specif... (read more)

0Stuart_Armstrong
The point of this post is to be able to link to it from other, longer posts, so that I can, for instance, claim that using the humans as a truth channel http://lesswrong.com/r/discussion/lw/okd/humans_as_a_truth_channel/ is not vulnerable to the first two types of hacking (given other reasonable precautions), but is to the third.
0gjm
I don't think that's at all clear, and looking at tukabel's past comments certainly doesn't give me any confidence that it wasn't. I think there's certainly an argument of this sort to be made, I think it's an interesting argument, and I think (as your comment demonstrates) it can be made without getting needlessly political. But tukabel didn't really bother to make it, and certainly didn't bother to avoid making it in a needlessly political way.

This is a bit tangential, and a bit ranty, maybe a bit out of line, but it might help [a bit]...

From one self-hater to another: I've always been negative. I've always disliked myself, my past decisions, the world around me, and the decisions made therein. Here's the kind of philosophy I've embraced over the past few years:

My pessimism motivates me something like the way nihilism motivates Nietzsche. It is the ultimate freedom. I'm not weighed down by this oppressive sense that I'm missing some great opportunity or taking an otherwise good life and shitting... (read more)

2KristenBurke
This does help, thank you. I'd come to similar judgments and maybe couldn't sustain them long because I didn't know of anyone else with them. I think this also happens to help me ask my question better. What I'd also like to know: What are the intended trajectories of people on the front-lines? Is it merging with super AIs to remain on the front-lines, or is it "gaming" in lower intelligence reservations structured by yet more social hierarchies and popularity contests? Is this a false dichotomy? Neither is ultimately repugnant to me or anything. Nothing future pharmaceuticals couldn't probably fix. I just truly don't know what they think that they can expect. If I did, maybe I could have a better idea of what I can personally expect so that I don't unnecessarily choose some trajectory in exceeded vain. I guess, above, what I was trying to communicate—if there's something there at all to communicate—is a kind of appreciation for how not-fun it may be to have no choice but to be in a lower intelligence reservation, being someone with analogous first-hand experience. So if all of us ultimately have no choice in such a matter, what would be some things we might see in value journals living in a reservation? (Assuming the values wouldn't be prone to be fundamentally derived from any kind of idolatry.)

I've been around in LW for years, and I'd say it's tended more towards refining the art of pragmatism than rationality (though there's a good bit of intersection there).

There's a lot of nuance to this situation that makes a black-and-white answer difficult, but let's start with the word arrogance. I think the term carries with it a connotation of too much pride; something like when one oversteps the limits of one's domain. For example, the professor saying "You are probably wrong about this" is an entirely different statement (in terms of arrogance) than the enthusiast saying "You are probably wrong about this," because this is a judgement that the professor is well qualified to make. While I can see a... (read more)

0casebash
"This may also be somewhat pedantic, but in something like quantum physics, because of this gap in knowledge, it'd be very obvious who the professor was to an audience that doesn't know quantum physics, even if it wasn't made explicitely clear beforehand." - I met one guy who was pretty convincing about confabulating quantum physics to some people, even though it was obvious to me he was just stringing random words together. Not that I know even the basics of quantum physics. He could actually speak really fluently and confidently - just everything was a bunch of non-sequitors/new age mysticism. I can imagine a professor not very good at public speaking who would seem less convincing.

They're equally likely, but, unless Alice chose 1649271 specifically, I'm not quite sure what that question is supposed to show me, or how it relates to what I mentioned above.

Maybe let me put it this way: We play a dice game; if I roll 3, I win some of your money. If you roll an even number, you win some of my money. Whenever I roll, I roll a 3, always. Do you keep playing (because my chances of rolling 3-3-3-3-3-3 are exactly the same as my chances of rolling 1-3-4-2-5-6, or any other specific 6-numbered sequence) or do you quit?

I agree with you that the probability of Alice's flips producing any particular sequence will always be the same, but the reason Alice's correct prediction distinguishes the two situations is that the probability of her randomly guessing correctly is so low—and may indicate something about Alice and her actions (that is, given a complete set of information regarding Alice, the probability of her correctly guessing the sequence of coin flips might be much higher).

Am I misunderstanding the point you're making w/ this example?
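To make the intuition in the dice-game example above concrete, here is a minimal sketch of the underlying Bayesian update. The prior (1% of dice are loaded) and the loaded die's chance of rolling 3 (90%) are assumed numbers for illustration, not anything from the thread.

```python
# A minimal sketch of the update behind the dice-game example above.
# The prior (1% of dice are loaded) and P(3 | loaded) = 0.9 are assumed numbers.
def posterior_loaded(num_threes, prior=0.01, p3_loaded=0.9, p3_fair=1/6):
    """P(die is loaded | num_threes consecutive 3s were observed), via Bayes' rule."""
    like_loaded = p3_loaded ** num_threes
    like_fair = p3_fair ** num_threes
    return like_loaded * prior / (like_loaded * prior + like_fair * (1 - prior))

for n in (1, 3, 6, 10):
    print(n, round(posterior_loaded(n), 4))
# Any particular fair sequence is equally improbable, but a run of 3s is far more
# probable under "loaded" than under "fair", so the posterior climbs toward 1.
```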

1OrphanWilde
Which seems more unlikely: The sequences exactly matching, or the envelope sequence, converted to a number, being exactly 1649271 plus the flipped sequence converted to a number?

I do not think these events are equally improbable (thus, equally probable).

The specific sequence, M, is some sequence in the space of all possible sequences; "... achieves some specific sequence M" is like saying "there exists an M in the space of all sequences such that N = M." That will always be true—that is, one can always retroactively say "Alice's end-result is some specific sequence."

On the other hand, it's a totally different thing to say "Alice's end-result is some specific sequence which she herself picked out before flipping the coin."

1OrphanWilde
All sequences, both written and flipped, are equally improbable. The difference is in treating the cases where the two sequences are identical as logically distinct from all other possible combinations of sequences. They're not nearly as distinct as you might think; imagine if she's off by one. Still pretty improbable, just not -as- improbable. Off by three, a little less probable still. Equivalent using a Caesar cipher using blocks of 8? Equivalent using a hashing algorithm? Equivalent using a different hashing algorithm? Which is to say: There is always going to be a relationship that can be found between the predicted sequence and the flipped sequence. Two points make a line, after all.

But if I'm not mistaken the original argument around Chesterton's fence is that somebody had gone through great efforts to put a fence somewhere, and presumably would not have wasted that time if it would be useless anyway.

My response was to this statement—specifically, toward the assumption that, since someone has gone through great efforts to put a fence somewhere, it's ok to assume said fence isn't useless. I'm not seeing where my comment is inconsistent with what it's responding to (that is, I'm seeing "gone through great efforts" as synon... (read more)

Every time I read about Chesterton's fence, it seems like the implication is:

Because someone worked hard on something, or because a practice/custom took a long time to develop, it has a greater chance of being correct, useful, or beneficial [than someone's replacement who looks and says "This doesn't make sense"]

I think that's a terrible statement.

4Lumifer
That's not Chesterton's fence at all. In plain words, Chesterton's fence says that if you want to remove something because you don't understand why it's there, you should first find out why it is there. That, as you notice, has nothing to do with "worked hard" or "took a long time".

In my experience, that's not what usually happens.

Where are you getting "that's what usually happens"?

-2entirelyuseless
Technological changes can provide good examples. Many people keep saying things like "Five more years and printed books will be obsolete," because they don't see any advantages of printed books over e-books. But it doesn't happen because there are a good number of advantages to the printed books, which remain even when people do not explicitly notice them. On the other hand, given a long enough time, the transition people expect will in fact happen, because alternative solutions to the issues will ultimately be found. I could mention a number of advantages, but just one for illustration: when you read a printed book, the fact that you are physically aware of where you are in the book, e.g. two thirds of the way through, helps you remember the book.

Mine was a little ill-thought out comment.

I guess I don't think that the meaning of my question was hidden in any significant way. This leads me to interpret your response less as a genuine concern for specificity that led to constructive criticism, and more as "I don't like this subject—therefore I will express disagreement with something you did to indicate that." It feels to me as if you're avoiding the subject in favor of nitpicking.

I know you knew what the actual question was, because you pointed out its vagueness. You knew the question you answered [Literally: Do different races have... (read more)

3ChristianKl
The subject of LW is refining the art of human rationality. Telling people to be more precise when discussing political issues is on that subject. This isn't reddit, and I wouldn't like LW to become like reddit. To prevent that, it's important to defend a certain level of posting quality and to speak up when it's violated. We've had recent discussions about whether to ban political posts. I'm not in favor of banning, but I am in favor of speaking up so that those discussions happen at a higher quality level. If you asked the same question on http://skeptics.stackexchange.com it would be closed as too vague, and I don't want questions like this to stand on LW without being criticized.

This does not address my question. The implication is "... shouldn't it follow that different races could have different brains—such that these differences are generalizable according to race?"

I think this implication was obvious. For example, if someone were to ask "Do different races typically have different skin colors?" I don't think you would answer "Different people of the same race have different skin colors. No two skin colors are exactly the same. You have to make statements that are less vague."

Edit: If, in fact, that is the way you would answer, then I'm mistaken, but I don't think that's necessary.

1polymathwannabe
Judging only by skin color, most Korean hands would be indistinguishable from most Caucasian hands, and most Arabic hands would be indistinguishable from most Latino hands. Likewise, judging only by brain function, no EEG-visible or MRI-visible differences appear between ethnic groups.
2ChristianKl
In highly politically charged subjects it's very important to be explicit about your questions and not hide your meaning in the implications of your statements. But apart from that, it's not clear what the notion of generalizable differences that are not significant is supposed to mean. The standard way you would declare that a difference is generalizable is by showing a statistically significant effect. It's part of scientific reasoning to make claims that are in principle falsifiable. To do that you actually need to be precise about what you mean. There are contexts where it's okay not to practice high standards, but if you want to discuss a topic like race differences that's politically charged, I think you have to practice high standards.

I don't understand why this comment is met with such opposition. Calories are the amount of energy a food contains. If you use more energy than you take in, then you have to lose weight [stored energy]. There's literally no other way it could work.

The statement can even be further simplified to:

All people who create a calorie deficit lose weight.
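As a back-of-the-envelope illustration of the energy-balance arithmetic, here is a rough sketch using rule-of-thumb numbers (the roughly 3,500 kcal per pound of body fat and the 500 kcal/day deficit are assumed values, not anything from the thread).

```python
# A rough sketch of the calories-in / calories-out arithmetic.
# The 3,500 kcal/lb figure and the 500 kcal/day deficit are assumed rule-of-thumb numbers.
deficit_per_day_kcal = 500
kcal_per_lb_fat = 3_500

lbs_lost_per_week = deficit_per_day_kcal * 7 / kcal_per_lb_fat
print(lbs_lost_per_week)  # ~1.0 lb/week, if the deficit is actually sustained
```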

-1ChristianKl
Then try rereading the discussion till you have an insight into why people disagree. I don't think you are too stupid to understand it if you make an effort to try to understand.

The fact that selection pressure for mental ability is everywhere present is an excellent point; thanks. As to why it's a troublesome subject, I always maintain "If there is a quantitative difference, I sure as hell hope we never find it."

I think that'd lead to some pretty unfortunate stuff.

-2The_Lion
You may want to practice reciting the litany of Gendlin. So have false beliefs about equality.
3Vaniver
Even though intelligence helps everywhere,* both the benefit and cost from increased intelligence can vary. For example, brains consume quite a bit of calories--and turn them into heat. Everyone is going to have to pay the caloric cost of powering the brain, but the cooling cost of keeping the brain at a healthy temperature is going to vary with climate. Foresight is going to be more useful the more variable local food availability is. * Well, actually, this should be poked at. The relationship between intelligence and reproductive success could easily be nonlinear, even among early hunter-gatherers and farmers. It's not genetically favored to be smart enough to outwit one's genes! (The effects of widespread female education and careers are too recent to be relevant for this conversation.) ? We can already measure intelligence, and have good estimates of heritability from cross-generational intelligence testing. We've found the quantitative difference. All that's left to find out is how it works under the hood, which is knowledge we could use to re-engineer things to make them better. Why stop at discovering that piece?

I don't think this is a stupid question, but everyone else seems to—that is, the immediate reaction to it is usually "there's obviously no difference." I've struggled with this question a lot, and the commonly accepted answer just doesn't sit well with me.

If different races have different skin, muscle/bone structure, genetics, and maybe other things, shouldn't it follow that different races could have different brains, too?

I know this is taboo, and feel the following sort of disclaimer is obligatory: I'm not racist, nor do I think any difference ... (read more)

3Usul
When the relevant experts, Anthropologists, say that the concept of race is a social construct with no basis in biological fact they aren't just bowing to some ivory tower overlord of political correctness. We would do well to consider their expertise as a starting point in any such inquiry. Start anywhere on a map of the Eastern Hemisphere and trace what the people look like in any geographic area relative to the regions beside them and then consider why the term "race" has any meaning. sami, swede, finn, rus, tatar, khazak, turk, kurd, arab, berber, ethiopian, tutu. Or Han, mongol, uiger, kyrgir, uzbek, khazak, pashtun, persian, punjabi, hindi, bangali, burmese, thai, javanese, dayak. Where exactly do you parse the line of Caucasian, Negroid, Mongoloid? And why? Historically, in the cultures from which our culture was derived, skin color, and later eyelid morphology, has been used to define three races (conveniently ignoring the pacific ocean and western hemisphere), for no reason other than the biases of the people in those cultures. If you actually look at facial structure (and why not, no less arbitrary) you'll find the people of the horn of africa have more in common with central european populations in terms of nose and lip shape than they do with more inland African populations. It is our bias to see skin color as more relevant than nose morphology that causes us to group Ethiopians with Hottentots and Biafrans as a single race. We could just as easily group them with Arabs, Berbers, and Kurds. An albino from the Indian subcontinent could claim without fear of contradiction to be an albino of just about any heritage in south asia or europe. Burmese and Japanese have vastly different average skin color but we arbitrarily group them together because of eyelid morphology. So your question becomes "If different people..." to which the answer is: Of course. The question you think you are asking, I think, is best rendered "Are those morphological features our
0MrMind
No, not really. That doesn't mean that they don't, anyway, it's just that it doesn't follow from the premise. We do not know much about individual variability of the genome, and as such we do not know much about what parts of the DNA are affected by individual (a posteriori, ethnic) differences. A recent experiment, for example, showed that there is more DNA variability within a single ethnic group (subsaharians, probably the most ancient alive today) than within different other ethnic groups.
3Lumifer
They do. Even if you don't want to go into IQ measurements, different races have different brain volume just for starters. See e.g. Cochrane:
0ChristianKl
If you look at any two people they have different brains. Even if you look at the same person at different ages they have different brains. If you care about the issue you have to make statements that are less vague.
9fubarobfusco
Given that various mental disorders are heritable, it's not clearly impossible for psychological properties to be selected for. However, unlike dark or light skin (which matters for dealing with sunlight or the lack of it), mental ability is generally useful for survival and success in all climates and regions of the world. Every physical and social setting has problems to figure out; friendships and relationships to negotiate; language to acquire; mates to charm; rivals to overcome or pacify; resources that can be acquired through negotiation, deception, or wit; and so on. This means that all human populations will be subject to some selection pressure for mental ability; whereas with skin color there are pressures in opposite directions in different climates. So why is this such a troublesome subject? The problem with the subject is that there's an ugly history behind it — of people trying to explain away historical conditions (like "who conquered whom" or "who is richer than whom") in terms of psychological variation. And this, in turn, has been used as a way of justifying treating people badly ... historically, sometimes very badly indeed. Classifications don't exist for themselves; they exist in order for people to do things with them. People don't go around classifying things (or people) and then not doing anything with the classification. But sometimes people make particular classifications in order to do horrible things, or to convince other people to do horrible things. "Earthmen are not proud of their ancestors, and never invite them round to dinner." —Douglas Adams

I do not care at all about watching other people play sports. Everyone thinks it's super boring.

... doesn't seem to make much sense to me. In what context would he not mean that?

It took me a minute or two to figure out what you were trying to say. For anyone else who didn't get it on the first read, I believe Lumifer's saying something like:

"World War II was 60 years ago. On a 1,400 year timescale, that's not getting somewhere, that's just a random blip of time where no gigantic wars happened; those blips have happened before. What do you mean 'to get to where we are now'?"

Now, to answer that, I think he means "to get to a society where fear of being killed or kidnapped (then killed) isn't a normal part of every day life, and women can wear whatever they want."

2Lumifer
More specifically, if you are operating on the time scale of a millennium and a half and setting up the contemporary Western society as the one to emulate, that contemporary Western society includes, say, the entire XX century. So you're going to emulate attempts at genocide, concentration camps, massive slaughter of civilians through nukes and firebombings, etc.? That's your normal hunter-gatherer tribe, Pharaonic Egypt, Ancient Rome -- pretty much any successful society. Of course if you're treating "women can wear whatever they want" literally, it's not true for the contemporary West as well. See the public obscenity laws.

I know this post is five years old, but can someone explain this to me? I understood that both questions could have an answer of no because one may want to minimize the monetary loss / maximize the monetary gain of the poorer family—therefore, the poorer family should get a higher reduction and a lower penalty. Am I misunderstanding something about the situation?

Ah! This puts everything into a sensible context—thank you.

I'd like to have a conversation on said fairness sometime; maybe I'll make a thread about it.

Sorry, I'm a bit confused. Not being fully versed in the terminology of utilitarians, I may be somewhat in the dark...

... but, is the point of this piece "Money should be the unit of caring" or "Money is the unit of caring"? I expected it to be the latter, but it reads to me like the former, with examples as to why it currently isn't. That is, if money were actually the unit of caring—if people thought of how much money they spend on something as synonymous with how much they care about something—then a lawyer would hire someone to work... (read more)

1Raemon
I think his point was a fairly critical "money is the unit of actually caring". Donating your clothes or some soup kitchen time is the thing you do if you want to feel good about yourself. But if you actually care about getting shit done, money is the unit of how much of that you did. This may or may not be fair, and may or may not be a useful framing to consider whether it's fair or not.

I... I don't actually understand why this comment got so many downvotes—and I'm 100% for cryonics. In fact, I agree with the above comment.

Is this a toxic case of downvoting?

I really like this idea, but I can't tell whether I failed the test, I passed the test, or the article-selection for this test was bad.

  • I very much felt the "condemnation of the hated telecoms" (and a bit of victory-hope). I think this means I've failed the test.
  • It took no time to realize that I was reading a debate over a definition and its purpose. I think this means I've passed the test.
  • I feel like the above realization was trivial. I didn't consciously think "I am reading a debate of definition." In the same way that, when I'm
... (read more)

Wait, IlyaShpitser—I think you overestimate my knowledge of the field of statistics. From what it sounds like, there's an actual, quantitative difference between Bayesian and Frequentist methods. That is, in a given situation, the two will come to totally different results. Is this true?

I should have made it more clear that I don't care about some abstract philosophical difference if said difference doesn't mean there are different results (because those differences usually come down to a nonsensical distinction [à la free will]). I was under the impressi... (read more)
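One toy example (my own, not from the thread) of a case where the two frameworks produce different numbers from the same data: estimating a coin's bias after seeing three heads in three flips. The uniform Beta(1, 1) prior below is an assumed choice for illustration.

```python
# A minimal sketch: frequentist vs. Bayesian point estimates of a coin's P(heads).
heads, flips = 3, 3

mle = heads / flips                         # frequentist maximum-likelihood estimate: 1.0
posterior_mean = (1 + heads) / (2 + flips)  # Bayesian posterior mean (Laplace's rule): 0.8

print(mle, posterior_mean)
# Same data, different estimates; the difference comes entirely from the prior,
# which the frequentist framework declines to assign.
```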

0DanielLC
I think it's more that there are times when frequentists claim there isn't an answer. It's very common for statistical tests to talk about likelihood. The likelihood of a hypothesis given an experimental result is defined as the probability of the result given the hypothesis. If you want to know the probability of the hypothesis, you take the likelihood and multiply it by the prior probability. Frequentists deny that there always is a prior probability. As a result, they tend to just use the base rate as if it were a probability. Conflating the two is equivalent to the base rate fallacy.
0polymathwannabe
EY believes so.

Sorry, I didn't mean to imply that probabilities only apply to the future. Probabilities apply only to uncertainty.

That is, given the same set of data, there should be no difference between event A having already happened and you having to guess whether or not it happened, and event A not having happened yet and you having to guess whether or not it will happen.

When you say "apply a probability to something," I think:

"If one were to have to make a decision based on whether or not event A will happen, how would one consider the available data in making

... (read more)
3Lumifer
So, you are interpreting probabilities as subjective beliefs, then? That is a Bayesian, but not the frequentist approach. Having said that, it's useful to realize that the concept of probability has many different... aspects and in some situations it's better to concentrate on some particular aspects. For example if you're dealing with quality control and acceptable tolerances in an industrial mass production environment, I would guess that the frequentist aspect would be much more convenient to you than a Bayesian one :-) You may want to reformulate this, as otherwise there's lack of clarity with respect to the uncertainty about the event vs. the uncertainty about your probability for the event. But otherwise you're still saying that probabilities are subjective beliefs, right?

I'm having a hard time answering this question with "yes" or "no":

The event in question is "Alice rolling a particular number on a 6-sided die." Bob, not knowing what Alice rolled, can talk about the probabilities associated with rolling a fair die many times, and base whatever decision he has to make from this probability (assuming that she is, in fact, using a fair die). Depending on the assumed complexity of the system (does he know that this is a loaded die?), he could convolute a bunch of other probabilities together to i... (read more)

8Lumifer
Well, the key point here is whether the word "probability" can be applied to things which already happened but you don't know what exactly happened. You said which implies that probabilities apply only to the future. The question is whether you can speak of probabilities as lack of knowledge about something which is already "fixed". Another issue is that in your definition you just shifted the burden of work to the word "likely". What does it mean that an event is "likely" or "not likely" to happen?

A quantitative thing that indicates how likely it is for an event to happen.

5Lumifer
Let's say Alice and Bob are in two different rooms and can't see each other. Alice rolls a 6-sided die and looks at the outcome. Bob doesn't know the outcome, but knows that the die has been rolled. In your interpretation of the word "probability", can Bob talk about the probabilities of the different roll outcomes after Alice rolled?

I still don't understand the apparently substantial difference between Frequentist and Bayesian reasoning. The subject was brought up again in a class I just attended—and I was still left with a distinct "... those... those aren't different things" feeling.

I am beginning to come to the conclusion that the whole "debate" is a case of Red vs. Blue nonsense. So far, whenever one tries to elaborate on a difference, it is done via some hypothetical anecdote, and said anecdote rarely amounts to anything outside of "Different people somet... (read more)

8IlyaShpitser
This debate is boring and old, people getting work done in ML/stats have long ago moved past it. My suggestion is to find something better to talk about: it's mostly wankery if people other than ML/stats people are talking.
1[anonymous]
My best try: Frequentist statistics are built upon deductive logic; essentially a single hypothesis. They can be used for inductive logic (multiple hypotheses), but only at the more advanced levels which most people never learn. With Bayesian reasoning inductive logic is incorporated into the framework from the very beginning. This makes it harder to learn at first, but introduces fewer complications later on. Now math majors feel free to rip this explanation to shreds.
-1[anonymous]
They are the same thing. Gertrude Stein had it right: probability is probability is probability. It doesn't matter whether your interpretation is Bayesian or frequentist. The distinction between the two is simply how one chooses to apply probability: as a property of the world (frequentist) or as a description of our mental world-models (Bayesian). In either case the rules of probability are the same.
9Kindly
The whole thing is made more complicated by the debate between frequentist and Bayesian methods in statistics. (It obviously matters which you use even if you don't care what to believe about "what probability is", or don't see a difference.)
2polymathwannabe
What "fundamental definition of probability" are you using?

This ends up being somewhat circular then, doesn't it?

Olbers' paradox is only a paradox in an infinite, static universe. A finite, expanding universe explains the night sky very well. One can't use Olbers' paradox to discredit the idea of an expanding universe when Olbers' paradox depends on the universe being static.

Furthermore, upon re-reading MazeHatter's "The way I see it is..." comment, Theory B does not put us at some objective center of reality. An intuitive way to think about it is: Imagine "space" being the surface of a balloo... (read more)
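For reference, here is a compressed version of the standard shell argument behind Olbers' paradox (my own summary of the textbook derivation, not anything stated in the thread), showing why an infinite, static, eternal universe would have an infinitely bright night sky:

```latex
% Number of stars in a thin shell at radius r (uniform star density n):
dN = 4\pi r^{2}\, n \, dr
% Flux received from each star at distance r (per-star luminosity L):
f = \frac{L}{4\pi r^{2}}
% Each shell therefore contributes the same flux, independent of r:
dF = f \, dN = n L \, dr
% Summing over infinitely many shells in an infinite static universe:
F = \int_{0}^{\infty} n L \, dr \;\to\; \infty
```

A finite age for the universe, or expansion redshifting the most distant light away, cuts the integral off, which is why a finite expanding universe has no paradox to resolve.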

Trial-and-error.

There are, of course, inconsistencies that I'm unaware of: These are known unknowns. The idea, though, is that when I'm presented with a situation, any such relevant inconsistencies come up and are eliminated (either by a change of the foundation or a change of the judgement).

That is, inconsistencies that exist but don't come up aren't relevant.

An example—extreme but illustrative: Say an element of this foundational set is "I want to 'treat everyone equally'". I interview a Blue man for a job and, upon reflecting, think very ne... (read more)

Haha, that's what I do.

If my cost is $14.32, I know $1.43 is 10%, and half of that is about $0.71, so the tip's $2.14 (though I tip 20%, which is even easier).

0gjm
Right. In the UK, we have a sales tax called VAT (for "value-added tax"). For a while its rate was 17.5%. The way you work that out is: shift the decimal point (10%), halve (5%), halve again (2.5%), and add up :-). (Tips in the UK are usually about 10%, so that's a bit easier. And now our VAT rate is 20%.)
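A small sketch of the same shift-and-halve trick in code (the function name is just for illustration):

```python
# Mental-math tip trick from the comments above: 15% = 10% + half of 10%.
def quick_fifteen_percent(amount):
    ten_percent = amount / 10          # shift the decimal point
    five_percent = ten_percent / 2     # halve that
    return ten_percent + five_percent  # 17.5% VAT would add one more halving (2.5%)

print(round(quick_fifteen_percent(14.32), 2))  # 2.15 (the $2.14 above rounds the 71.6c down)
```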

Yes and no. It's a different experience—like taking a bath and going swimming.

Why is the Newcomb problem... such a problem? I've read analysis of it and everything, and still don't understand why someone would two-box. To me, it comes down to:

1) Thinking you could fool an omniscient super-being
2) Preserving some strictly numerical ideal of "rationality"

Time-inconsistency and all these other things seem totally irrelevant.

0ike
Well, there are people who would say the same thing but in reverse. There is a rationale behind that, even if I think it's wrong. I don't think two-boxers think they can fool an omniscient super-being. They do think that whatever is in the box cannot be changed now, so it's foolish to give away $1,000. Would you one-box even with transparent boxes? If not, then you understand this logic. There's a reasonable argument there, especially as in the original paradox Omega is not perfect, so there's a chance that you'll get nothing while passing up $1,000.

I have a fundamental set of morals from which I build my views. They aren't explicit, but my moral decisions all form a consistent web. Sometimes one of these moral-elements must be altered because of some inconsistency it presents, and sometimes my judgement of a situation must be altered because it was inconsistent with the foundation. But ultimately, consistency is what I aim for. I know this is super vague, and for that I apologize.

So far, this has worked for me 100% of the time. There have been some sticky situations, but even those have been worked o... (read more)

1gjm
How do you know? (Or, if the answer is "I can just tell" or something: How do you know that your consistency is any better than anyone else's?)

In this country, we charge students tens of thousands of dollars for that diploma. In fact, at my "public" university, first year students are required to:

  1. Live on campus (this comes out to about $700 per person, per month, for a very tiny room you share with other people)
  2. Purchase a meal plan ($1,000 - $2,500 a semester)

Of course, these and all other services (except teaching and research) are privately owned.

Otherwise, everything's pretty much the same.

I check it once a day. My work e-mail a few more times if I'm working (which involves a constant correspondence w/ people).

We observe a finite amount of light from a finite distances.

That's an empirical fact.

That is to say, the empirical and theoretical range of electromagnetic radiation are not in agreement.

Why does observing a finite amount of light from a finite distance contradict anything about the range of electromagnetic radiation?

(Also... has anyone read http://en.wikipedia.org/wiki/Redshift? It's... well... good.)

-3[anonymous]
Because the range of electromagnetic radiation is infinite. (And light is electromagnetic radiation, FYI.) So that's what we expected to see. Infinite light. But that's not what we saw. Light does not come from 1 trillion light years away. It does not come from 20 billion light years away. It makes it to Hubble's Limit, c/H. This wasn't expected. To explain its redshifting into nothing, one answer is that space is expanding, and if space is expanding uniformly (which we now know isn't true by a long shot), then it would have begun expanding 13.8 billion years ago. Therefore, in theory, only 13.8 billion years existed for light to travel. And that's why you don't seem to think there's a problem. Because you can solve it with some new logic. Here's the recap: in theory, light travels to infinity; in observation, light comes from finite distances; so in theory space must expand (v_galaxy = HD); so in theory only a finite amount of time exists in physics; so in theory there's no problem, we see finite light because of finite time. Of course, the evidence against the 13.8 billion number is so overwhelming that they invented an inflation period to magically fast-forward through a trillion or more years of it. Even then, all the examples in my OP describe how the theory still doesn't work. If the sun goes around the Milky Way once every 225 million years, then our galaxy formed in less than 60 spins. Starting to wonder why cosmologists have no legitimate theory of galaxy formation? Now consider trying to explain galaxies that look like ours that formed in 20 spins. That's what the new observations ask of us. Completely out of the question. Except, now we have dark matter, which can basically do anything arbitrarily, just like dark energy. Here's the alternative: Observation 1, light doesn't travel to infinity; New Theory A, light doesn't travel to infinity (v_photon = c - HD). Crazy, I know. Some people say "hey, that challenges relativity!", well, it challenges the a
0Richard_Kennaway
I guess this is a reference to Olbers' paradox. If every ray projected from a given point must eventually hit the surface of a star, then the night sky should look uniformly as bright as the Sun.

Hi. I apologize: this is a pretty long reply—but thanks very much for your comment. :) I really appreciate the opportunity to follow up like this on something I said a few years ago.

My thoughts on being poly haven't changed. I still think it's the most functional romantic outlook. Although, after re-reading my comment: "without introducing new problems in their place" is somewhat of a loaded statement. If someone has a difficult time being polyamorous, then it introduces a lot of problems. Not to dwell on this too much, but that part of the co... (read more)

0SeekingEternity
Thanks for the reply, and no apology needed; I write long comments myself! The "without introducing new problems" part is actually kind of funny in this time context, since I had just spent a while on the Negative Polyamory Outcomes? post, and poly definitely does sometimes cause problems. If nothing else, I think it introduces new ways to screw up, in terms of both emotional and physical health... but it can also be pretty beneficial when everybody involved can handle the emotions involved. The cost of going against a societal norm will be very context-dependent, I suspect; a bunch of the LW crowd apparently live in the SF Bay Area, where it's fairly common and has minimal societal costs, but some of us are in less-progressive areas. Congrats on the happy relationship (against earlier expectations). I wonder how many people would be able to just slide into a consensually non-monogamous (dammit, we need better words; even that phrase isn't accurate) relationship when they hadn't previously thought of themselves as poly. It came easy to me - I'm actually kind of upset by jealousy in general, and once I found other people who were OK with this relationship style it just fell into place - but a lot of people do seem to have hang-ups with the idea.

This may be an unrelated question, but I've seen a lot of similar exercises here—is the general implication that:

1 Person tortured for 1 year = 365 people tortured for 1 day = 8760 people tortured for 1 hour = 525600 people tortured for 1 minute?

2TheOtherDave
Agreeing with shminux above, elaborating a little... there's a general agreement that marginal utility changes aren't linear with changes in the thing being measured. How much I value a hundred dollars depends on how much money I have; how much I antivalue a minute of torture depends on how long I've already been tortured. So I expect that very few people here will claim that 1 person getting a million dollars has the same aggregate utility as a million people getting a dollar each, or that 1 person tortured for a year has the same aggregate antiutility as half a million people tortured for a minute. One reason the Torture vs Dust Specks story uses such huge numbers is to avoid having to worry about that.
2Shmi
It's not the exact numbers that matter, it's the (transitivity) assumption that they exist. Whether 1 person tortured for 1 year = 525600 people tortured for 1 minute or 10000000000 people tortured for 1 minute is immaterial to the conclusion.
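A toy sketch (my own numbers, not anyone's position in the thread) of the nonlinearity point TheOtherDave raises above: with a linear per-person disutility the two scenarios come out equal, while with a concave one (square root of minutes tortured, an arbitrary choice) they do not.

```python
from math import sqrt

MINUTES_PER_YEAR = 525_600

# Linear per-person disutility: the total is just person-minutes, so the scenarios tie.
linear_one_for_a_year = 1 * MINUTES_PER_YEAR
linear_many_for_a_minute = MINUTES_PER_YEAR * 1
print(linear_one_for_a_year == linear_many_for_a_minute)   # True

# Concave per-person disutility (sqrt of minutes, an assumed shape): they diverge.
concave_one_for_a_year = 1 * sqrt(MINUTES_PER_YEAR)        # ~725
concave_many_for_a_minute = MINUTES_PER_YEAR * sqrt(1)     # 525,600
print(concave_one_for_a_year, concave_many_for_a_minute)
```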

... Oh.

Hm. In that case, I think I'm still missing something fundamental.

2Decius
I care about self-consistency because an inconsistent self is very strong evidence that I'm doing something wrong. It's not very likely that if I take the minimum steps to make the evidence of the error go away, I will make the error go away. The general case of "find a self-inconsistency, make the minimum change to remove it" is not error-correcting.
0lalaithion
I actually think that your internal dialogue was a pretty accurate representation of what I was failing to say. And as for self consistency having to be natural, I agree, but if you're aware that you're being inconsistent, you can still alter your actions to try and correct for that fact.

I mean moreso: Consider a FAI so advanced that it decides to reward all beings who did not contribute to creating Roko's Basilisk with eternal bliss, regardless of whether or not they knew of the potential existence of Roko's Basilisk.

Why is Roko's Basilisk any more or any less of a threat than the infinite other hypothetically possible scenarios that have infinite other (good and bad) outcomes? What's so special about this one in particular that makes it non-negligible? Or to make anyone concerned about it in the slightest? (That is the part I'm missing. =\ )

0ChristianKl
The idea is that an FAI built on timeless decision theory might automatically behave that way. There's also Eliezer's conjecture that any working FAI has to be built on timeless decision theory.
2RowanE
Well, in the original formulation, Roko's Basilisk is an FAI that decided the good from bringing an FAI into the world a few days earlier (saving ~150,000 lives per day earlier it gets here) outweighs the bad from making the threats. So there's no reason it shouldn't want you to aid FAI projects that promise not to make a Basilisk, just as long as you do something instead of sitting around; so there's no inconsistency, and now there's more than one being trying to acausally motivate you into working yourself to the bone for something that most people think is crazy. More generally, we have more than zero information about future AI, because they will be built by humans if they are built at all. Additionally, we know even more if we rule out certain categories, such as the archetypal "paperclip maximiser". There's room for a lot of speculation and uncertainty, but far from enough room to assume complete agnosticism and that for every AI that wants one thing from us there's an equal and opposite AI that wants the opposite.

I don't understand why Roko's Basilisk is any different from Pascal's Wager. Similarly, I don't understand why its resolution is any different than the argument from inconsistent revelations.

Pascal's Wager: http://en.wikipedia.org/wiki/Pascal%27s_Wager

Argument: http://en.wikipedia.org/wiki/Argument_from_inconsistent_revelations#Mathematical_description

I would actually be surprised (really, really surprised) if many people here have not heard of these things before—so I am assuming that I'm totally missing something. Could someone fill me in?

(Edit: Instead... (read more)

3RowanE
I'm not sure I understand timeless decision theory well enough to give the "proper" explanation for how it's supposed to work. You can see one-boxing on Newcomb's problem as making a deal with Omega - you promise to one-box, Omega promises to put $1,000,000 in the box. But neither of you ever actually talked to each other, you just imagined each other and made decisions on whether to cooperate or not, based on your prediction that Omega is as described in the problem, and Omega's prediction of your actions which may as well be a perfect simulation of you for how accurate they are. The Basilisk is trying to make a similar kind of deal, except it wants more out of you and is using the stick instead of the carrot. Which makes the deal harder to arrange - the real solution is just to refuse to negotiate such deals/not fall for blackmail. Which is true more generally in game theory, but "we do not negotiate with terrorists" much easier to pull off with threats that are literally only imaginary. Although, the above said, we don't really talk about the Basilisk here in capacities beyond the lingering debate over whether it should have been censored and "oh look, another site's making LessWrong sound like a Basilisk-worshipping death cult".

To those who seem to not like the manner in which XiXiDu is apologizing: If someone who genuinely thinks the sky is falling apologizes to you while still wearing their metal hat—then that's the best you can possibly expect. To reject the apology until the hat is removed is...
