What are you all most interested in?
Your solution to the "Four People Who Do Everything" organization problem. This will be immediately relevant to my responsibilities within the next couple months.
I'm actually not making an accusation of overconfidence; just pointing out that using qualified language doesn't protect against it. I would prefer language that gives (or at least suggests) probability estimates or degrees of confidence, rather than phrases like "looks like" or "many suggest".
ID theorists are more likely than evolutionary biologists to use phrases like "looks like" or "many suggest" to defend their ideas, because those phrases hide the actual likelihood of ID. When I find myself thinking, "it...
An exercise in parody:
The bacterial flagellum looks like a good candidate for an intelligently designed structure.
Many [non-biologist] researchers think Intelligent Design has explanatory value.
Many [non-biologist] researchers suggest Intelligent Design is scientifically useful.
Our brains may have been intelligently designed to...
but we may not have been designed to...
Evolutionary psychology isn't as catastrophically implausible as ID; hence the bit about parody. The point is that merely using qualified language is no guarantee against overconfidence.
I'm not convinced that "offense" is a variety of "pain" in the first place. They feel to me like two different things.
When I imagine a scenario that hurts me without offending me (e.g. accidentally touching a hot stovetop), I anticipate feelings like pain response and distraction in the short term, fear in the medium term, and aversion in the long term.
When I imagine a scenario that offends me without hurting me (e.g. overhearing a slur against a group of which I'm not a member) I anticipate feelings like anger and urge-to-punish in th...
I'm not convinced that "offense" is a variety of "pain" in the first place. They feel to me like two different things.
Extremely important point. And the "offense" variety of feeling is the dangerous one - the one we shouldn't accede to.
(A side note: one of the most insidious forms of procrastination is taking offense at a problem, rather than actually solving it. Offense motivates punish-and-protest behavior, rather than problem-solving behavior.)
They're a physical effect caused by the operation of a brain
You haven't excluded a computational explanation of qualia by saying this. You haven't even argued against it! Computations are physical phenomena that have meaningful consequences.
"Mental phenomena are a physical effect caused by the operation of a brain."
"The image on my computer monitor is a physical effect caused by the operation of the computer."
I'm starting to think you're confused as a result of using language in a way that allows you to claim computations "do...
I didn't intend to start a reductionist "race to the bottom," only to point out that minds and computations clearly do exist. "Reducible" and "non-existent" aren't synonyms!
Since you prefer the question in your edit, I'll answer it directly:
...if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced y
If computation doesn't exist because it's "a linguistic abstraction of things that exist within physics", then CPUs, apples, oranges, qualia, "physical media" and people don't exist; all of those things are also linguistic abstractions of things that exist within physics. Physics is made of things like quarks and leptons, not apples and qualia. I don't think this definition of existence is particularly useful in context.
As to your fruit analogy: two apples do in fact produce the same qualia as two oranges, with respect to number! Ob...
"Computation exists within physics" is not equivalent to " "2" exists within physics."
If computation doesn't exist within physics, then we're communicating supernaturally.
If qualia aren't computations embodied in the physical substrate of a mind, then I don't know what they are.
I'm asserting that qualia, reasoning, and other relevant phenomena that a brain produces are computational, and that by computing them, a Turing machine can reproduce them with perfect accuracy. I apologize if this was not clear.
Adding two and two is a computation. An abacus is one substrate on which addition can be performed; a computer is another.
I know what it means to compute "2+2" on an abacus. I know what it means to compute "2+2" on a computer. I know what it means to simulate "2+2 on an abacus" on a computer. I ev...
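The substrate-independence point can be made concrete with a toy sketch of my own (not anything from the thread): a crude software "abacus" that adds by pushing beads onto a rod, next to the computer's native addition. Both embody the same computation on different substrates.

```python
def abacus_add(a, b):
    """Add two small numbers by moving beads onto a one-rod abacus.
    The rod is just a list of beads; pushing beads IS the computation."""
    rod = []
    for _ in range(a):
        rod.append("bead")  # slide a beads across
    for _ in range(b):
        rod.append("bead")  # slide b more beads across
    return len(rod)         # read off the total

print(abacus_add(2, 2))  # 4
print(2 + 2)             # 4 -- same computation, different substrate
```

The two lines at the bottom print the same answer, which is the whole point: "2+2" is not tied to beads or transistors.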
the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator.... [to simulate humans] the simulator must physically incorporate a human brain.
It seems like the definition of "physical" used in this article is "existing within physics" (a perfectly reasonable definition). By this definition, phenomena such as qualia, reasoning, and computation are all "physical" and are referred to as such in the article itself.
Brains are physical, and local physics seems Tu...
http://en.wikipedia.org/wiki/Intentional_base_on_balls
Baseball pitchers have the option to 'walk' a batter, giving the other team a slight advantage but denying them the chance to gain a large advantage. Barry Bonds, a batter who holds the Major League Baseball record for home runs (a home run is a coup for the batter's team), also holds the record for intentional walks. By walking Barry Bonds, the pitcher denies him a shot at a home run. In other words, Paige is advising other pitchers to walk a batter when it minimizes expected risk to do so.
Since thi...
Other concepts that happen to also be termed "values", such as your ancestors' values, don't say anything more about comparative goodness of the future-configurations, and if they do, then that is also part of your values.
I'm having difficulty understanding the relevance of this sentence. It sounds like you think I'm treating "my ancestors' values" as a term in my own set of values, instead of a separate set of values that overlaps with mine in some respects.
My ancestors tried to steer their future away from economic systems that in...
The problem with this logic is that my values are better than those of my ancestors. Of course I would say that, but it's not just a matter of subjective judgment; I have better information on which to base my values. For example, my ancestors disapproved of lending money at interest, but if they could see how well loans work in the modern economy, I believe they'd change their minds.
It's easy to see how concepts like MWI or cognitive computationalism affect one's values when accepted. It's likely, bordering on certain, that transhumans will have more ins...
Reading LessWrong is primarily a willpower restorer for me. I use the "hit" of insight I get from reading a high quality post or comment to motivate me to start Working (and it's much easier to continue Working than to start). I save posts that I expect to be high quality (like Yvain's latest) for just before I'm about to start Working. Occasionally the insight itself is useful, of course.
Commenting on LessWrong has raised my standards of quality for my own ideas, understanding them clearly, and expressing them concisely.
I don't know if either of those are Work, but they're both definitely Win.
New ideas are held to much higher standard than old ones... Behaviorists, Freudians, and Social Psychologists all had created their own theories of "ultimate causation" for human behavior. None of those theories would have stood up to the strenuous demands for experimental validation that Ev. psych endured.
I'm not sure what you mean. Are you saying that standards of evidence for new ideas are higher now than they have been in the past, or that people are generally biased in favor of older ideas over newer ones? Either claim interests me and ...
I agree (see, e.g., The Second Law of Thermodynamics, and Engines of Cognition for why this is the case). Unfortunately, I see this as a key inferential gap between people who are and aren't trained in rationality.
The problem is that many people-- dare I say most-- feel no obligation to gather evidence for their intuitive feelings, or to let empirical evidence inform their feelings. They don't think of intuitive feelings as predictions to be updated by Bayesian evidence; they treat their intuitive feelings as evidence.
It's a common affair (at least in th...
'Instinct,' 'intuition,' 'gut feeling,' etc. are all close synonyms for 'best guess.' That's why they tend to be the weakest links in an argument-- they're just guesses, and guesses are often wrong. Guessing is useful for brainstorming, but if you really believe something, you should have more concrete evidence than a guess. And the more you base a belief on guesses, the more likely that belief is to be wrong.
Substantiate your guesses with empirical evidence. Start with a guess, but end with a test.
Sure, but then the question becomes whether the other programmer got the program right...
My point is that if you don't understand a situation, you can't reliably write a good computer simulation of it. So if logical believes that (to use your first link) James Tauber is wrong about the Monty Hall problem, he has no reason to believe Tauber can program a good simulation of it. And even if he can read Python code, and has no problem with Tauber's implementation, logical might well conclude that there was just some glitch in the code that he didn't notice...
If--and I do mean if, I wouldn't want to spoil the empirical test--logical doesn't understand the situation well enough to predict the correct outcome, there's a good chance he won't be able to program it into a computer correctly regardless of his programming skill. He'll program the computer to perform his misinterpretation of the problem, and it will return the result he expects.
On the other hand, if he's right about the Monty Hall problem and he programs it correctly... it will still return the result he expects.
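For concreteness, here is a minimal sketch of what such a simulation might look like (this is my own toy version in Python, not Tauber's code; the function name and trial count are mine):

```python
import random

def play_monty_hall(switch, trials=100_000):
    """Estimate the win rate of the stick-vs-switch strategies
    by playing many independent games."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        # Monty opens a door that is neither the player's pick nor the prize.
        opened = random.choice([d for d in doors if d != pick and d != prize])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print(play_monty_hall(switch=True))   # approaches 2/3
print(play_monty_hall(switch=False))  # approaches 1/3
```

Note that the crucial modeling decision, and the one a confused programmer would get wrong, is the line where Monty's choice is constrained to avoid both the pick and the prize. Drop that constraint and the simulation faithfully returns the wrong answer its author expected.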
I use entities outside human experience in thought experiments for the sake of preventing Clever Humans from trying to game the analogy with their inferences.
"If Monty 'replaced' a grain of sand with a diamond then the diamond might be near the top, so I choose the first bucket."
"Monty wants to keep the diamond for himself, so if he's offering to trade with me, he probably thinks I have it and wants to get it back."
It might seem paradoxical, but using 'transmute at random' instead of 'replace', or 'Omega' instead of 'Monty Hall', act...
Your analogy doesn't hold, because each spin of the roulette wheel is a separate trial, while choosing a door and then having the option to choose another are causally linked.
If you've really thought about XiXiDu's analogies and they haven't helped, here's another; this is the one that made it obvious to me.
Omega transmutes a single grain of sand in a sandbag into a diamond, then pours the sand equally into three buckets. You choose one bucket for yourself. Omega then pours the sand from one of his two buckets into the other one, throws away the empty bu...
As a tentative rephrasing, something that's "emotionally implausible" is something that "I would never do" or that "could never happen to me." Like you, I can visualize myself falling with a high degree of accuracy; but I can't imagine throwing myself off the bridge in the first place. Suicide? I would never do that.
It occurs to me that "can't imagine" implies a binary division when ability to imagine is more of a continuum: the quality of imagination drops steadily between trying to imagine brushing my teeth (ev...
If you've exercised before, you can probably remember the feeling in your body when you're finished--the 'afterglow' of muscle fatigue, endorphins, and heightened metabolism--and you can visualize that. If you haven't, or can't remember, you can imagine feelings in your mind like confidence and self-satisfaction that you'll have at the end of the exercise.
As for studying, the goal isn't to study, per se; it's to do well on the test. Visualizing the emotional rewards of success on the test itself can motivate you to study, as well as get enough sle...
The human experience of colour is not really about recognizing a specific wavelength of light.
True, but irrelevant to the subject at hand.
the qualia of colour are associated more with the invariant surface properties of objects than they are with invariant wavelengths of light.
No, the qualia of color have nothing to do with the observed object. This is the pons asinorum of qualia. The experience of color is a product of the invariant surface properties of objects; the qualia of color are a product of the relationship between that experience and oth...
Your eyes do detect the frequency of light, your nose does detect the chemical composition of smells, and your tongue does detect the chemical composition of food. That's exactly what the senses of sight, smell, and taste do.
Our brains then interpret the data from our eyes, noses, and tongues as color, scent, and flavor. It's possible to 'decode', e.g., color into a number (the frequency of light), and vice versa; you can find charts on the internet that match frequency/wavelength numbers to color. Decoding taste and scent data into the molecules that p...
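The color-to-number direction can be sketched as a lookup table. This is a toy illustration of my own; the band boundaries are approximate round numbers of the sort those internet charts use, not authoritative values, and real perception is far messier.

```python
def wavelength_to_color(nm):
    """Map a wavelength in nanometers to a rough color name.
    Band edges are approximate; human perception is far messier."""
    # (lower edge in nm, color name); the last entry closes the visible range.
    bands = [
        (380, "violet"), (450, "blue"), (495, "green"),
        (570, "yellow"), (590, "orange"), (620, "red"), (750, None),
    ]
    if nm < 380 or nm >= 750:
        return "invisible"
    for (lo, name), (hi, _) in zip(bands, bands[1:]):
        if lo <= nm < hi:
            return name

print(wavelength_to_color(650))  # red
print(wavelength_to_color(532))  # green
print(wavelength_to_color(300))  # invisible (ultraviolet)
```

The reverse mapping (color name to wavelength range) is just as mechanical, which is the sense in which the data can be "decoded" in either direction.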
When a human brain makes a decision, certain computations take place within it and produce the result. Those computations can be perfectly simulated by a sufficiently-more-powerful brain, e.g. Omega. Once Omega has perfectly simulated you for the relevant time, he can make perfect predictions concerning you.
Perfectly simulating any computation requires at least as many resources as the computation itself (1), so AFAICT it's impossible for anything, even Omega, to simulate itself perfectly. So a general "perfect predictor" may be impossible. B...
The more I think about this, the more I suspect that the problem lies in the distinction between quantum and logical coin-flips.
Suppose this experiment is carried out with a quantum coin-flip. Then, under many-worlds, both outcomes are realized in different branches. There are 40 future selves--2 red and 18 green in one world, 18 red and 2 green in the other world--and your duty is clear:
(50% ((18 +$1) + (2 -$3))) + (50% ((18 -$3) + (2 +$1))) = -$20.
Don't take the bet.
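The arithmetic above can be checked mechanically. A quick sketch (the +$1/-$3 payoffs and the 18/2 split are taken from the setup above; the variable names are mine):

```python
# Two equally likely branches, each containing 20 future selves.
# Branch A: 18 selves receive +$1, 2 selves receive -$3.
# Branch B: 18 selves receive -$3, 2 selves receive +$1.
branch_a = 18 * 1 + 2 * (-3)     # +12
branch_b = 18 * (-3) + 2 * 1     # -52
expected_total = 0.5 * branch_a + 0.5 * branch_b
print(expected_total)  # -20.0
```

A negative expected total across all forty selves, so declining the bet follows directly.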
So why Eliezer's insistence on using a logical coin-flip? Because, I suspect,...
ISTM the problem of Boltzmann brains is irrelevant to the 50%-ers. Presumably, the 50%-ers are rational--e.g., willing to update on statistical studies significant at p=0.05. So they don't object to the statistics of the situation; they're objecting to the concept of "creating a billion of you", such that you don't know which one you are. If you had offered to roll a billion-sided die to determine their fate (check your local tabletop-gaming store), there would be no disagreement.
Of course, this problem of identity and continuity has been hash...
[Rosencrantz has been flipping coins, and all of them are coming down heads]
Guildenstern: Consider: One, probability is a factor which operates within natural forces. Two, probability is not operating as a factor. Three, we are now held within un-, sub- or super-natural forces. Discuss.
Rosencrantz: What?
Rosencrantz & Guildenstern Are Dead, Tom Stoppard
Newcomb's problem is applicable to the general class of game-type problems where the other players try to guess your actions. As far as I can tell, the only reason to introduce Omega is to avoid having to deal with messy, complicated probability estimates from the other players.
Unfortunately, in a forum where the idea that Omega could actually exist is widely accepted, people get caught up in trying to predict Omega's actions instead of focusing on the problem of decision-making under prediction.
IAWY and this also applies to hypotheticals testing non-mathematical models. For instance, there isn't much isomorphism between Newcomblike problems involving perfectly honest game players who can predict your every move, and any gamelike interaction you're ever likely to have.
Thanks for the heads-up. Fixed.
I may be in the minority in this respect, but I like it when Less Wrong is in crisis. The LW community is sophisticated enough to (mostly) avoid affective spirals, which means it produces more and better thought in response to a crisis. I believe that, e.g., the practice of going to the profile of a user you don't like and downvoting every comment, regardless of content, undermines Less Wrong more than any crisis has or will.
Furthermore, I think the crisis paradigm is what a community of developing rationalists ought to look like. The conceit of student...
Inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible.
Just because what you believe happens to be true, doesn't mean you're right to believe it. If I walk up to a roulette wheel, certain that the ball will land on black, and it does--then I still wasn't right to believe it would.
Hypothetical Hume-worlders, like us, do not have the luxury of access to reality's "source code": they have not been informed that they exist in a hypothetical Hum...
I agree. My comment was meant as a clarification, not a correction, because the paragraph I quoted and the subsequent one could be misinterpreted to suggest that humans and animals use entirely different methods of cognition--"execut[ing] certain adaptions without really understanding how or why they worked" versus an "explicit goal-driven propositional system with a dumb pattern recognition algorithm." I expect we both agree that human cognition is a subsequent modification of animal cognition rather than a different system evolved...
All animals except for humans had no explicit notion of maximizing the number of children they had, or looking after their own long-term health. In humans, it seems evolution got close to building a consequentialist agent...
Clarification: evolution did not build human brains from scratch. Humans, like all known life on earth, are adaptation executers. The key difference is that thanks to highly developed frontal lobes, humans can predict the future more powerfully than other animals. Those predictions are handled by adaptation-executing parts of the...
I think the point of the quote is not that young folks are more able to unlearn falsehoods; it's that they haven't learned as many falsehoods as old people, just by virtue of not having been around as long. If you can unlearn falsehoods, you can keep a "young" (falsehood-free) mind.
You wrote:
My belief in science (trustworthy observation, logic, epistemology, etc.) is equivalent with my belief in God, which is why I find belief in God to be necessary.
Suppose, indeed, I were a rationalist of an Untheist society... Would it be very long before I asked if there was some kind of meta-organization?
The meta-organization is a property of the natural world.
It sounds like you're saying that your "God" is not supernatural. This isn't just a problem of proper usage. A theist who believes in a deity (which, given proper usage, is ...
Is there anything supernatural about meta-organization?
Take your hypothetical a step further: suppose that not only were you born into an Untheist society, but also a universe where physical reality, evolution, and mathematics did not "work." In universe-prime, the laws of physics do not permit stars to form, yet the Earth orbits the Sun; evolution cannot produce life, but humans exist; physicists and mathematicians prove that math can't describe reality, yet people know where the outfielder should stand to catch the fly ball.
byrnema-prime woul...
How about both?
If I understand your terms correctly, it may be possible for realities that are not base-level to be optimization-like without being physics-like, e.g. the reality generated by playing a game of Nomic, a game in which players change the rules of the game. But this is only possible because of interference by optimization processes from a lower-level reality, whose goals ("win", "have fun") refer to states of physics-like processes. I suspect that base-level reality must be physics-like. To paraphrase John Donne, no optimizat...
If you could show hunter-gatherers a raindance that called on a different spirit and worked with perfect reliability, or, equivalently, a desalination plant, they'd probably chuck the old spirit right out the window.
There's no need to speculate--this has actually happened. From what I know of the current state of Native American culture (which is admittedly limited), modern science is fully accepted for practical purposes, and traditional beliefs guide when to party, how to mourn, how to celebrate rites of passage, etc.
The only people who seem to think...
'Correctness' in theories is a scalar rather than a binary quality. Phlogiston theory is less correct (and less useful) than chemistry, but it's more correct--and more useful!--than the theory of elements. The fact that the modern scientific theories you list are better than their precursors, does not mean their precursors were useless.
You have a false dichotomy going here. If you know of someone who "knows how human cognition works on all scales", or even just a theory of cognition as powerful as Newton's theory of mechanics is in its domain,...
Likewise, every other actual practice that you think would be a good thing for you to do. If you think that, and you are not doing it, why?
If you want to understand akrasia, I encourage you to take your own advice. Take a moment and write down two or three things that would have a major positive impact in your life, that you're not doing.
Now ask yourself: why am I not doing these things? Don't settle for excuses or elaborate System Two explanations why you don't really need to do them after all. You've already stipulated that they would have a major...
Truth-telling is necessary but not sufficient for honesty. Something more is required: an admission of epistemic weakness. You needn't always make the admission openly to your audience (social conventions apply), but the possibility that you might be wrong should not leave your thoughts. A genuinely honest person should not only listen to objections to his or her favorite assumptions and theories, but should actively seek to discover such objections.
What's more, people tend to forget that their long-held assumptions are assumptions and treat them as facts. Forgotten assumptions are a major impediment to rationality--hence the importance of overcoming bias (the action, not the blog) to a rationalist.
Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader's while.
The mere fact that you are human makes it much more probable than not that you are more skilled at self-deception and deception than at perceiving correctly the intrapersonal and interpersonal truths necessary to produce lasting change in another human being.
Probably true. But if you use those statistical facts about most people as an excuse to never listen to anyone, or even to one s...
Vladimir, the problem has nothing to do with strength--some of these students did very well in other classes. Nor is it about effort--some students had already given up and weren't bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn't solve the problem.
The problem was simply that they believed "math" was impossible for them. The best way to get rid of that belief--maybe the only effective way--was to give them the experien...
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they're taught a second technique that builds on the previous. So there are two skills required:
1. The discipline to study and practice a technique until you understand it and can apply it easily.
2. The ability to close the inferential gap between one technique and the next.
The second is the source of trouble. I can sit in on (and have sat in on) a single day's instruction of a language class and learn something about that language. But if a stu...
For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:
"I'm terrible at math."
"I hate math class."
"I'm just dumb."
That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments--very small inferential gaps, no "trick questions".
Now, the "I'm terrible at math" attitude...
Agreed--most of the arguments in good faith that I've seen or participated in were caused by misunderstandings or confusion over definitions.
I would add that once you know the jargon that describes something precisely, it's difficult to go back to using less precise but more understandable language. This is why scientists who can communicate their ideas in non-technical terms are so rare and valuable.
This is why you don't eat silica gel.
I'm always mildly bemused by the use of quotation marks on these packets. I've always seen:

SILICA GEL -- "DO NOT EAT"

Why would the quotation actually be printed on the package? Who are they quoting?
In case you don't know about this site: The "Blog" of "Unnecessary" Quotation Marks.
Edit: When I worked fast food I had a store manager who used (parentheses) and [various kinds of {brackets and braces}] to ({[emphasize]}) things.