All of Sideways's Comments + Replies

Sideways100

This is why you don't eat silica gel.

I'm always mildly bemused by the use of quotation marks on these packets. I've always seen:

SILICA GEL

" DO NOT EAT "

Why would the quotation actually be printed on the package? Who are they quoting?

4thomblake
Seriously folks, click the link in parent.
arundelo100

In case you don't know about this site: The "Blog" of "Unnecessary" Quotation Marks.

Edit: When I worked fast food I had a store manager who used (parentheses) and [various kinds of {brackets and braces}] to ({[emphasize]}) things.

What are you all most interested in?

Your solution to the "Four People Who Do Everything" organization problem. This will be immediately relevant to my responsibilities within the next couple months.

I'm actually not making an accusation of overconfidence; just pointing out that using qualified language doesn't protect against it. I would prefer language that gives (or at least suggests) probability estimates or degrees of confidence, rather than phrases like "looks like" or "many suggest".

ID theorists are more likely than evolutionary biologists to use phrases like "looks like" or "many suggest" to defend their ideas, because those phrases hide the actual likelihood of ID. When I find myself thinking, "it... (read more)

1thomblake
Sorry, "you're" above refers to its great-grandparent. Will edit.
Sideways100

An exercise in parody:

  • The bacterial flagellum looks like a good candidate for an intelligently designed structure.

  • Many [non-biologist] researchers think Intelligent Design has explanatory value.

  • Many [non-biologist] researchers suggest Intelligent Design is scientifically useful.

  • Our brains may have been intelligently designed to...

  • but we may not have been designed to...

Evolutionary psychology isn't as catastrophically implausible as ID; hence the bit about parody. The point is that merely using qualified language is no guarantee against overconfidence.

5thomblake
No, but qualified language by itself is no basis for an accusation of overconfidence, if it is not accompanied by overconfident probabilities. The 'qualified language' is the only indication I see in the text of degree of confidence, and it indicates a general lack of confidence, and so I don't see on what basis [EDIT:] neq1 is [/EDIT] making the accusation.
Sideways460

I'm not convinced that "offense" is a variety of "pain" in the first place. They feel to me like two different things.

When I imagine a scenario that hurts me without offending me (e.g. accidentally touching a hot stovetop), I anticipate feelings like pain response and distraction in the short term, fear in the medium term, and aversion in the long term.

When I imagine a scenario that offends me without hurting me (e.g. overhearing a slur against a group of which I'm not a member) I anticipate feelings like anger and urge-to-punish in th... (read more)

pjeby260

I'm not convinced that "offense" is a variety of "pain" in the first place. They feel to me like two different things.

Extremely important point. And the "offense" variety of feeling is the dangerous one - the one we shouldn't accede to.

(A side note: one of the most insidious forms of procrastination is taking offense at a problem, rather than actually solving it. Offense motivates punish-and-protest behavior, rather than problem-solving behavior.)

They're a physical effect caused by the operation of a brain

You haven't excluded a computational explanation of qualia by saying this. You haven't even argued against it! Computations are physical phenomena that have meaningful consequences.

"Mental phenomena are a physical effect caused by the operation of a brain."

"The image on my computer monitor is a physical effect caused by the operation of the computer."

I'm starting to think you're confused as a result of using language in a way that allows you to claim computations "do... (read more)

I didn't intend to start a reductionist "race to the bottom," only to point out that minds and computations clearly do exist. "Reducible" and "non-existent" aren't synonyms!

Since you prefer the question in your edit, I'll answer it directly:

if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced y

... (read more)
-1dfranke
I do think that qualia are reified in the brain. I do not think that a surgeon could go in with tongs and remove them any more than he could go in with tongs and remove your recognition of your grandmother. They're a physical effect caused by the operation of a brain, just as gravity is a physical effect of mass and temperature is a physical effect of Brownian motion. See here and here for one reason why I think the computational view falls somewhere in between problematic and not-even-wrong, inclusive. ETA: The "grandmother cell" might have been a poorly chosen counterexample, since apparently there's some research that sort of actually supports that notion with respect to face recognition. I learned the phrase as identifying a fallacy. Feel free to mentally substitute some other complex idea that is clearly not embodied in any discrete piece of the brain.

If computation doesn't exist because it's "a linguistic abstraction of things that exist within physics", then CPUs, apples, oranges, qualia, "physical media" and people don't exist; all of those things are also linguistic abstractions of things that exist within physics. Physics is made of things like quarks and leptons, not apples and qualia. I don't think this definition of existence is particularly useful in context.

As to your fruit analogy: two apples do in fact produce the same qualia as two oranges, with respect to number! Ob... (read more)

0dfranke
Not quite reductionist enough, actually: physics is made of the relationship rules between configurations of spacetime which exist independently of any formal model of them that give us concepts like "quark" and "lepton". But digging deeper into this linguistic rathole won't clarify my point any further, so I'll drop this line of argument. If you started perceiving two apples identically to the way you perceive two oranges, without noticing their difference in weight, smell, etc., then you or at least others around you would conclude that you were quite ill. What is your justification for believing that being unable to distinguish between things that are "computationally identical" would leave you any healthier?

"Computation exists within physics" is not equivalent to " "2" exists within physics."

If computation doesn't exist within physics, then we're communicating supernaturally.

If qualia aren't computations embodied in the physical substrate of a mind, then I don't know what they are.

0dfranke
Computation does not exist within physics, it's a linguistic abstraction of things that exist within physics, such as the behavior of a CPU. Similarly, "2" is an abstraction of a pair of apples, a pair of oranges, etc. To say that the actions of one physical medium necessarily have a similar physical effect (the production of qualia) as the actions of another physical medium, just because they abstractly embody the same computation, is analogous to saying that two apples produce the same qualia as two oranges, because they're both "2". This is my last reply for tonight. I'll return in the morning.

I'm asserting that qualia, reasoning, and other relevant phenomena that a brain produces are computational, and that by computing them, a Turing machine can reproduce them with perfect accuracy. I apologize if this was not clear.

Adding two and two is a computation. An abacus is one substrate on which addition can be performed; a computer is another.

I know what it means to compute "2+2" on an abacus. I know what it means to compute "2+2" on a computer. I know what it means to simulate "2+2 on an abacus" on a computer. I ev... (read more)

-3dfranke
You simulate physical phenomena -- things that actually exist. You compute combinations of formal symbols, which are abstract ideas. 2 and 4 are abstract; they don't exist. To claim that qualia are purely computational is to claim that they don't exist.

the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator.... [to simulate humans] the simulator must physically incorporate a human brain.

It seems like the definition of "physical" used in this article is "existing within physics" (a perfectly reasonable definition). By this definition, phenomena such as qualia, reasoning, and computation are all "physical" and are referred to as such in the article itself.

Brains are physical, and local physics seems Tu... (read more)

-1dfranke
You're continuing to confuse reasoning about a physical phenomenon with causing a physical phenomenon. By the Church-Turing thesis, which I am in full agreement with, a Turing machine can reason about any physical phenomenon. That does not mean a Turing machine can cause any physical phenomenon. A PC running a program which reasons about Jupiter's gravity cannot cause Jupiter's gravity.
Sideways130

http://en.wikipedia.org/wiki/Intentional_base_on_balls

Baseball pitchers have the option to 'walk' a batter, giving the other team a slight advantage but denying them the chance to gain a large advantage. Barry Bonds, a batter who holds the Major League Baseball record for home runs (a home run is a coup for the batter's team), also holds the record for intentional walks. By walking Barry Bonds, the pitcher denies him a shot at a home run. In other words, Paige is advising other pitchers to walk a batter when it minimizes expected risk to do so.

Since thi... (read more)

2wedrifid
... to some. There are others who enjoy watching games being played strategically. I don't, for example, take basketball seriously unless the teams are using a full court press. What do you do, for example, if all the bases are loaded and the good hitter comes in? Do you give away the run? It may depend on the score and it would involve some complex mathematical reasoning. That single decision would be more memorable to me than the rest of the entire game of baseball! The latter wouldn't be a reasonable claim to make, even taking your premises regarding what sportsmanship is and what is good for the game for granted. For Paige to be claimed to be advising defection in the Prisoner's Dilemma Paige would have to be asserting or at least believe that the payoffs are PDlike. Since Paige doesn't give this indication he instead seems to be advocating thinking strategically instead of following your pride. Curiously, assuming another set of credible beliefs Paige could consider walking the batter to be the cooperation move in the game theoretic situation. Specifically, when there is another pitcher known to walk who cannot be directly influenced. If all the other pitchers publicly declare that the game's rules should be changed in such a way that free walking is less desirable and then free walk hitters whenever it is strategic to do so they may force the rule-makers' hands. If just one pitcher tried this strategy of influence then he would lose utility, sacrificing his 'good guy' image without even getting all the benefits that the original free-walker got for being the 'lone bad boy strategic prick pitcher'. If all the pitchers except one cooperate then the one pitcher who lets himself be hit out of the park cleans up on the approval-by-simplistic-folks stakes by being the 'boy scout only true sportsman' guy while everyone else does the hard work of looking bad in order to improve the rules, the game in the long term and the ability of pitchers not to be competitiv
2benelliott
I'm sorry but I'm not very familiar with baseball. Does walking a batter mean something like intentionally throwing the ball to third or fourth base so he doesn't get caught out but can't do a home run? If this is the case then it seems like the advice is more about knowing when to lose.

Other concepts that happen to also be termed "values", such as your ancestors' values, don't say anything more about comparative goodness of the future-configurations, and if they do, then that is also part of your values.

I'm having difficulty understanding the relevance of this sentence. It sounds like you think I'm treating "my ancestors' values" as a term in my own set of values, instead of a separate set of values that overlaps with mine in some respects.

My ancestors tried to steer their future away from economic systems that in... (read more)

5Vladimir_Nesov
That's closer to the sense I wanted to convey with this word. Distinction is between a formal criterion of preference and computationally feasible algorithms for estimation of preference between specific plans. The concept relevant for this discussion is the former one.
Sideways-20

The problem with this logic is that my values are better than those of my ancestors. Of course I would say that, but it's not just a matter of subjective judgment; I have better information on which to base my values. For example, my ancestors disapproved of lending money at interest, but if they could see how well loans work in the modern economy, I believe they'd change their minds.

It's easy to see how concepts like MWI or cognitive computationalism affect one's values when accepted. It's likely bordering on certain that transhumans will have more ins... (read more)

-2[anonymous]
I haven't yet been convinced that my values are any better than the values of my ancestors by this argument. Yes if I look at history people generally tend to move towards my own current values (with periods of detours). But this would be true if I looked at my travelled path after doing a random walk. Sure, there are cases of knowledge changing proxy values (I would, like my ancestors, favour punishing witches if it turned out that they factually do use demonically gifted powers to hurt others), but there has also been just plain old value drift. There are plenty of things our ancestors would never approve of even if they had all the knowledge we had.
4Vladimir_Nesov
Your values are what they are. They talk about how good certain possible future-configurations are, compared to other possible future-configurations. Other concepts that happen to also be termed "values", such as your ancestors' values, don't say anything more about comparative goodness of the future-configurations, and if they do, then that is also part of your values. If you'd like for future people to be different in given respects from how people exist now, that is also a value judgment. For future people to feel different about their condition than you feel about their condition would make them disagree with your values (and dually).
Sideways170

Reading LessWrong is primarily a willpower restorer for me. I use the "hit" of insight I get from reading a high quality post or comment to motivate me to start Working (and it's much easier to continue Working than to start). I save posts that I expect to be high quality (like Yvain's latest) for just before I'm about to start Working. Occasionally the insight itself is useful, of course.

Commenting on LessWrong has raised my standards of quality for my own ideas, understanding them clearly, and expressing them concisely.

I don't know if either of those are Work, but they're both definitely Win.

1patrissimo
Fascinating! This is very different from my own experience. I believe in the "manage your energy" / Pomodoro techniques of regular breaks, so I might work for 25-50 minutes with consciously directed attention and then go read a blog to relax before working again. I am no disbeliever of conscious relaxation and breaks. I am a disbeliever in unconscious slipping of attention. If I let my attention slip to blogs or Reddit or comments during my chunks of work time, it tends to feed on itself and happen again and again and decrease my productivity, not restore my willpower. YMMV. If your attention slips are self-correcting, then congratulations! Your mind has a feature that I envy.
Sideways140

New ideas are held to a much higher standard than old ones... Behaviorists, Freudians, and Social Psychologists all had created their own theories of "ultimate causation" for human behavior. None of those theories would have stood up to the strenuous demands for experimental validation that Ev. psych endured.

I'm not sure what you mean. Are you saying that standards of evidence for new ideas are higher now than they have been in the past, or that people are generally biased in favor of older ideas over newer ones? Either claim interests me and ... (read more)

I agree (see, e.g., The Second Law of Thermodynamics, and Engines of Cognition for why this is the case). Unfortunately, I see this as a key inferential gap between people who are and aren't trained in rationality.

The problem is that many people-- dare I say most-- feel no obligation to gather evidence for their intuitive feelings, or to let empirical evidence inform their feelings. They don't think of intuitive feelings as predictions to be updated by Bayesian evidence; they treat their intuitive feelings as evidence.

It's a common affair (at least in th... (read more)

7MichaelVassar
Intuitive feelings are evidence AND predictions. Sadly, most people simply think of them as facts.

'Instinct,' 'intuition,' 'gut feeling,' etc. are all close synonyms for 'best guess.' That's why they tend to be the weakest links in an argument-- they're just guesses, and guesses are often wrong. Guessing is useful for brainstorming, but if you really believe something, you should have more concrete evidence than a guess. And the more you base a belief on guesses, the more likely that belief is to be wrong.

Substantiate your guesses with empirical evidence. Start with a guess, but end with a test.

2thomblake
I disagree with this one. If it's really your best guess, it should be the result of all of the information you have to muster. And so either each of "instinct", "intuition", "gut feeling", etc. are your best chance of being right, or they're not close synonyms for "best guess".

Sure, but then the question becomes whether the other programmer got the program right...

My point is that if you don't understand a situation, you can't reliably write a good computer simulation of it. So if logical believes that (to use your first link) James Tauber is wrong about the Monty Hall problem, he has no reason to believe Tauber can program a good simulation of it. And even if he can read Python code, and has no problem with Tauber's implementation, logical might well conclude that there was just some glitch in the code that he didn't notice... (read more)

If--and I do mean if, I wouldn't want to spoil the empirical test--logical doesn't understand the situation well enough to predict the correct outcome, there's a good chance he won't be able to program it into a computer correctly regardless of his programming skill. He'll program the computer to perform his misinterpretation of the problem, and it will return the result he expects.

On the other hand, if he's right about the Monty Hall problem and he programs it correctly... it will still return the result he expects.
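That empirical test is easy to run. Here is a minimal sketch of the kind of simulation the thread is discussing (assuming the standard Monty Hall rules: the host always opens a non-prize door the player didn't pick, then offers the switch); the function names are mine, not from any program linked in the thread:

```python
import random

def monty_hall_trial(switch, n_doors=3):
    """One round: prize hidden at random; player picks a door; host opens
    a door that is neither the prize nor the player's pick; player
    optionally switches to the remaining unopened door."""
    prize = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    # The host's choice is constrained: never the prize, never the pick.
    opened = random.choice(
        [d for d in range(n_doors) if d != prize and d != choice])
    if switch:
        # Only one unopened, unchosen door remains.
        choice = next(
            d for d in range(n_doors) if d != choice and d != opened)
    return choice == prize

def win_rate(switch, trials=100_000):
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

# Switching wins about 2/3 of the time; staying wins about 1/3.
```

If logical's misinterpretation were coded instead (say, a host who opens a random door, prize or not), the simulation would dutifully report 50/50, which is exactly the failure mode described above.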

5khafra
He could try one of many already-written programs if he lacks the skill to write one.

I use entities outside human experience in thought experiments for the sake of preventing Clever Humans from trying to game the analogy with their inferences.

"If Monty 'replaced' a grain of sand with a diamond then the diamond might be near the top, so I choose the first bucket."

"Monty wants to keep the diamond for himself, so if he's offering to trade with me, he probably thinks I have it and wants to get it back."

It might seem paradoxical, but using 'transmute at random' instead of 'replace', or 'Omega' instead of 'Monty Hall', act... (read more)

1Blueberry
I really like this technique.

Your analogy doesn't hold, because each spin of the roulette wheel is a separate trial, while choosing a door and then having the option to choose another are causally linked.

If you've really thought about XiXiDu's analogies and they haven't helped, here's another; this is the one that made it obvious to me.

Omega transmutes a single grain of sand in a sandbag into a diamond, then pours the sand equally into three buckets. You choose one bucket for yourself. Omega then pours the sand from one of his two buckets into the other one, throws away the empty bu... (read more)

0AlephNeil
I'm not keen on this analogy because you're comparing the effect of the new information to an agent freely choosing to pour sand in a particular way. A confused person won't understand why Omega couldn't decide to distribute sand some other way - e.g. equally between the two remaining buckets. Anyway, I think JoshuaZ's explanation is the clearest I've ever seen.
-5logical
2JoshuaZ
That works better for you? That's deeply surprising. Using entities like Omega and transmutation seems to make things more abstract and much harder to understand what the heck is going on. I must need to massively update my notions about what sort of descriptors can make things clear to people.

As a tentative rephrasing, something that's "emotionally implausible" is something that "I would never do" or that "could never happen to me." Like you, I can visualize myself falling with a high degree of accuracy; but I can't imagine throwing myself off the bridge in the first place. Suicide? I would never do that.

It occurs to me that "can't imagine" implies a binary division when ability to imagine is more of a continuum: the quality of imagination drops steadily between trying to imagine brushing my teeth (ev... (read more)

3pjeby
Allow me to rephrase more precisely for you. It's not plausibility that's at issue, it's whether you have a thought that causes you to stop visualizing. If, as you mentioned in your previous comment, you imagine slapping your mother and "fail utterly", it's not because you can't imagine it, it's because your (early) evaluation of what you imagine causes you to stop before you can really put yourself in the situation. Knowing that, you can ignore the reaction that tells you it's bad, and proceed. IOW, it's not that you can't imagine slapping your mother, it's that you prefer to stop before you actually experience what it would be like. In other words, it's not "can't", it's won't.
3Morendil
I really do mean I imagine committing suicide. It really does feel to me as if it's not outlandish that I might just, as it were, blow a fuse and jump off the bridge on an impulse. I can project how I'd feel the instant after the "decision" - scared out of my mind, gut-wrenchingly regretful, but also inappropriately exhilarated. Conversely, I'm not sure I can imagine brushing my teeth in great detail - it's too boring. But I do occasionally imagine things that I would describe as emotionally implausible with some degree of precision. It's possible that I'm just weird, but anyway I mean my observations as cautions against generalizing from a sample of one.

If you've exercised before, you can probably remember the feeling in your body when you're finished--the 'afterglow' of muscle fatigue, endorphins, and heightened metabolism--and you can visualize that. If you haven't, or can't remember, you can imagine feelings in your mind like confidence and self-satisfaction that you'll have at the end of the exercise.

As for studying, the goal isn't to study, per se; it's to do well on the test. Visualizing the emotional rewards of success on the test itself can motivate you to study, as well as get enough sle... (read more)

1Morendil
How do you mean that? I often find myself imagining things that are totally implausible emotionally, but quite possible physically, for instance, once in a while I imagine throwing myself off a bridge that I'm crossing, and I can feel my guts churning. (When I say "imagine" here, I mean I actually visualize myself falling, it's a stronger thing to me than idly considering the notion of falling.)
2pjeby
Beyond that, it's the improvements or fixes to Status, Affiliation, Safety, or Stimulation that you expect to get as a result of whatever outcome you expect doing well on the test to produce. So, the "mmm" test is a way of verifying that you actually engaged the anticipation of one of those things.
6[anonymous]
I couldn't help but laugh at this.

The human experience of colour is not really about recognizing a specific wavelength of light.

True, but irrelevant to the subject at hand.

the qualia of colour are associated more with the invariant surface properties of objects than they are with invariant wavelengths of light.

No, the qualia of color have nothing to do with the observed object. This is the pons asinorum of qualia. The experience of color is a product of the invariant surface properties of objects; the qualia of color are a product of the relationship between that experience and oth... (read more)

Your eyes do detect the frequency of light, your nose does detect the chemical composition of smells, and your tongue does detect the chemical composition of food. That's exactly what the senses of sight, smell, and taste do.

Our brains then interpret the data from our eyes, noses, and tongues as color, scent, and flavor. It's possible to 'decode', e.g., color into a number (the frequency of light), and vice versa; you can find charts on the internet that match frequency/wavelength numbers to color. Decoding taste and scent data into the molecules that p... (read more)

4mattnewport
The human experience of colour is not really about recognizing a specific wavelength of light. We've discussed this before here. Our rods and cones are sensitive to the wavelength of light but the qualia of colour are associated more with the invariant surface properties of objects than they are with invariant wavelengths of light.

When a human brain makes a decision, certain computations take place within it and produce the result. Those computations can be perfectly simulated by a sufficiently-more-powerful brain, e.g. Omega. Once Omega has perfectly simulated you for the relevant time, he can make perfect predictions concerning you.

Perfectly simulating any computation requires at least as many resources as the computation itself (1), so AFAICT it's impossible for anything, even Omega, to simulate itself perfectly. So a general "perfect predictor" may be impossible. B... (read more)

Sideways-30

The more I think about this, the more I suspect that the problem lies in the distinction between quantum and logical coin-flips.

Suppose this experiment is carried out with a quantum coin-flip. Then, under many-worlds, both outcomes are realized in different branches. There are 40 future selves--2 red and 18 green in one world, 18 red and 2 green in the other world--and your duty is clear:

50% × (18 × (+$1) + 2 × (-$3)) + 50% × (18 × (-$3) + 2 × (+$1)) = -$20.

Don't take the bet.
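The arithmetic behind that expectation, spelled out (a sketch of the calculation above, using the payoffs as stated: winners gain $1, losers lose $3):

```python
# World 1: 18 selves win $1 each, 2 selves lose $3 each.
world_1 = 18 * (+1) + 2 * (-3)   # net dollars across the 20 selves
# World 2: the colors are reversed, so the payoffs are too.
world_2 = 18 * (-3) + 2 * (+1)

# Each world occurs with 50% probability under the quantum coin-flip.
expected = 0.5 * world_1 + 0.5 * world_2   # -20.0: don't take the bet
```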

So why Eliezer's insistence on using a logical coin-flip? Because, I suspect,... (read more)

ISTM the problem of Boltzmann brains is irrelevant to the 50%-ers. Presumably, the 50%-ers are rational--e.g., willing to update on statistical studies significant at p=0.05. So they don't object to the statistics of the situation; they're objecting to the concept of "creating a billion of you", such that you don't know which one you are. If you had offered to roll a billion-sided die to determine their fate (check your local tabletop-gaming store), there would be no disagreement.

Of course, this problem of identity and continuity has been hash... (read more)

[Rosencrantz has been flipping coins, and all of them are coming down heads]

Guildenstern: Consider: One, probability is a factor which operates within natural forces. Two, probability is not operating as a factor. Three, we are now held within un-, sub- or super-natural forces. Discuss.

Rosencrantz: What?

Rosencrantz & Guildenstern Are Dead, Tom Stoppard

Newcomb's problem is applicable to the general class of game-type problems where the other players try to guess your actions. As far as I can tell, the only reason to introduce Omega is to avoid having to deal with messy, complicated probability estimates from the other players.

Unfortunately, in a forum where the idea that Omega could actually exist is widely accepted, people get caught up in trying to predict Omega's actions instead of focusing on the problem of decision-making under prediction.

IAWY and this also applies to hypotheticals testing non-mathematical models. For instance, there isn't much isomorphism between Newcomblike problems involving perfectly honest game players who can predict your every move, and any gamelike interaction you're ever likely to have.

Thanks for the heads-up. Fixed.

Sideways140

I may be in the minority in this respect, but I like it when Less Wrong is in crisis. The LW community is sophisticated enough to (mostly) avoid affective spirals, which means it produces more and better thought in response to a crisis. I believe that, e.g., the practice of going to the profile of a user you don't like and downvoting every comment, regardless of content, undermines Less Wrong more than any crisis has or will.

Furthermore, I think the crisis paradigm is what a community of developing rationalists ought to look like. The conceit of student... (read more)

1thomblake
you need to do some formatting on that link. looks like your (] got switched around.

Inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible.

Just because what you believe happens to be true, doesn't mean you're right to believe it. If I walk up to a roulette wheel, certain that the ball will land on black, and it does--then I still wasn't right to believe it would.

Hypothetical Hume-worlders, like us, do not have the luxury of access to reality's "source code": they have not been informed that they exist in a hypothetical Hum... (read more)

I agree. My comment was meant as a clarification, not a correction, because the paragraph I quoted and the subsequent one could be misinterpreted to suggest that humans and animals use entirely different methods of cognition--"execut[ing] certain adaptions without really understanding how or why they worked" versus an "explicit goal-driven propositional system with a dumb pattern recognition algorithm." I expect we both agree that human cognition is a subsequent modification of animal cognition rather than a different system evolved... (read more)

1Roko
agreed

All animals except for humans had no explicit notion of maximizing the number of children they had, or looking after their own long-term health. In humans, it seems evolution got close to building a consequentialist agent...

Clarification: evolution did not build human brains from scratch. Humans, like all known life on earth, are adaptation executers. The key difference is that thanks to highly developed frontal lobes, humans can predict the future more powerfully than other animals. Those predictions are handled by adaptation-executing parts of the... (read more)

-1thomblake
Dogs don't know it's not bacon.
2Roko
well, being a consequentialist is a particular adaptation you can execute. "Consequentialist" is a subset of "Adaptation Executer". Humans certainly come much closer to pure consequentialism - of explicitly representing a goal and calculating optimal actions based upon the environment you observe to achieve that goal - than any other creature does.

I think the point of the quote is not that young folks are more able to unlearn falsehoods; it's that they haven't learned as many falsehoods as old people, just by virtue of not having been around as long. If you can unlearn falsehoods, you can keep a "young" (falsehood-free) mind.

You wrote:

My belief in science (trustworthy observation, logic, epistemology, etc.) is equivalent with my belief in God, which is why I find belief in God to be necessary.

Suppose, indeed, I were a rationalist of an Untheist society... Would it be very long before I asked if there was some kind of meta-organization?

The meta-organization is a property of the natural world.

It sounds like you're saying that your "God" is not supernatural. This isn't just a problem of proper usage. A theist who believes in a deity (which, given proper usage, is ... (read more)

-3byrnema
Yes, I believe that God is natural -- not supernatural. I think what you're saying is that if I claim that the meta-pattern is natural, then it's part of the physical world – thus inside science, and thus not anything we mean by God. But what I've been saying all along is that there are some things – patterns/meanings/interpretations – that are not within science but that are within the natural world.

Theists believe (I think most fundamentally it is just this that they believe) that meanings and patterns exist in some real, meaningful way. Religions consist of describing these patterns in great detail, and they have all kinds of disagreements about what the patterns are and what parts are most relevant. And there's a disagreement about whether the pattern is natural (and, usually, impersonal) or supernatural (usually, then, also personal and interactive). Thus, there are theists that are ideologically scientists (e.g., Einstein) and those that are non-scientists (e.g., Creationists). What they have in common is the belief that the universe is organized (meaningful).

There are rationalists that believe the universe is random (a chilling and impersonal place) and those that believe there is meaning. What rationalists have in common is the scientific ideology. IMO, rationalists that believe in meaning but call themselves atheists are a group of people who think it is more important to distinguish themselves from non-scientist theists than from nihilist rationalists. If it isn't clear, my long-term goal would be to see this group pulled from anti-atheism. (But untheism, a matter of definition, is fine.)

I think non-scientific theists need guidance to more greatly value science, not cultural annihilation of "theism" because it is so immutably antithetical to science. Theism is antithetical only to nihilism. Explaining that a scientific ideology doesn't eradicate meaning is the first step to guiding theists, but this isn't done very well. And finally my argument that if yo

Is there anything supernatural about meta-organization?

Take your hypothetical a step further: suppose that not only were you born into an Untheist society, but also a universe where physical reality, evolution, and mathematics did not "work." In universe-prime, the laws of physics do not permit stars to form, yet the Earth orbits the Sun; evolution cannot produce life, but humans exist; physicists and mathematicians prove that math can't describe reality, yet people know where the outfielder should stand to catch the fly ball.

byrnema-prime woul... (read more)

0byrnema
No. The meta-organization is a property of the natural world. The God you are talking about in ~A -- the one causing the miraculous violations -- sounds like some kind of creature. It would be a subset of a larger universe U that includes ~A and includes the creature. Does this universe U have any rules? Or suppose you really insist that the creature is God. This creature is not imposing logic, so logic is not one of the rules of ~A. Perhaps it doesn't impose any consistent rules. Then it is not endowing ~A with any consistent value or meaning. So you would have a situation where the humans in ~A have evidence of God, but the notion of God provides nothing.

How about both?

If I understand your terms correctly, it may be possible for realities that are not base-level to be optimization-like without being physics-like, e.g. the reality generated by playing a game of Nomic, a game in which players change the rules of the game. But this is only possible because of interference by optimization processes from a lower-level reality, whose goals ("win", "have fun") refer to states of physics-like processes. I suspect that base-level reality must be physics-like. To paraphrase John Donne, no optimizat... (read more)

Sideways100

If you could show hunter-gatherers a raindance that called on a different spirit and worked with perfect reliability, or, equivalently, a desalination plant, they'd probably chuck the old spirit right out the window.

There's no need to speculate--this has actually happened. From what I know of the current state of Native American culture (which is admittedly limited), modern science is fully accepted for practical purposes, and traditional beliefs guide when to party, how to mourn, how to celebrate rites of passage, etc.

The only people who seem to think... (read more)

'Correctness' in theories is a scalar rather than a binary quality. Phlogiston theory is less correct (and less useful) than chemistry, but it's more correct--and more useful!--than the theory of elements. The fact that the modern scientific theories you list are better than their precursors does not mean their precursors were useless.

You have a false dichotomy going here. If you know of someone who "knows how human cognition works on all scales", or even just a theory of cognition as powerful as Newton's theory of mechanics is in its domain,... (read more)

3Cyan
You've misunderstood my emphasis. I'm an engineer -- I don't insist on correctness. In each case I've picked above, the emphasis is on a deeper understanding (a continuous quantity, not a binary variable), not on truth per se. (I mention correctness in the Coriolis example, but even there I have Newtonian mechanics in mind, so that usage was not particularly accurate.) My key perspective can be found in the third paragraph of this comment. I'm all for control theory as a basis for forming hypotheses and for Seth Roberts-style self-experimentation.

Likewise, every other actual practice that you think would be a good thing for you to do. If you think that, and you are not doing it, why?

If you want to understand akrasia, I encourage you to take your own advice. Take a moment and write down two or three things that would have a major positive impact in your life, that you're not doing.

Now ask yourself: why am I not doing these things? Don't settle for excuses or elaborate System Two explanations why you don't really need to do them after all. You've already stipulated that they would have a major... (read more)

6Richard_Kennaway
I can't come up with anything where I don't know the reasons why I am not doing the things I have reasons to do. Now, resolving such conflicts, that is another matter. There are techniques, but I'm not cut out to play the personal development guru, and I don't want to tout any, since what is wanted here is

ETA: I'll amplify that a little, as I believe the following is a deep generalisation that does hold everywhere, and there are some references to cite. Any time you are "somehow" not doing what you want to do, it is because you also want to not do it, or want to do something that conflicts with it. The mysterious feeling of somehowness arises because you are unaware of the conflicting motives. But they are always there, and there are ways of uncovering them, and then resolving the conflicts. For the theory behind this, see perceptual control theory (of which I have written here before). For the psychotherapeutic practice developed from that, see the Method Of Levels.

Truth-telling is necessary but not sufficient for honesty. Something more is required: an admission of epistemic weakness. You needn't always make the admission openly to your audience (social conventions apply), but the possibility that you might be wrong should not leave your thoughts. A genuinely honest person should not only listen to objections to his or her favorite assumptions and theories, but should actively seek to discover such objections.

What's more, people tend to forget that their long-held assumptions are assumptions and treat them as facts. Forgotten assumptions are a major impediment to rationality--hence the importance of overcoming bias (the action, not the blog) to a rationalist.

Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader's while.

The mere fact that you are human makes it much more probable than not that you are more skilled at self-deception and deception than at perceiving correctly the intrapersonal and interpersonal truths necessary to produce lasting change in another human being.

Probably true. But if you use those statistical facts about most people as an excuse to never listen to anyone, or even to one s... (read more)

2Vladimir_Nesov
A reasonable standard of evidence is established by what it takes to change your mind (ideally you'd work from an elicited prior, which allows you to check how reasonable your requirements are). If it's a double-blind trial that's required to change your mind, too bad--it's unavailable.

Vladimir, the problem has nothing to do with strength--some of these students did very well in other classes. Nor is it about effort--some students had already given up and weren't bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn't solve the problem.

The problem was simply that they believed "math" was impossible for them. The best way to get rid of that belief--maybe the only effective way--was to give them the experien... (read more)

Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they're taught a second technique that builds on the previous. So there are two skills required:

1. The discipline to study and practice a technique until you understand it and can apply it easily.
2. The ability to close the inferential gap between one technique and the next.

The second is the source of trouble. I can sit in (and have sat in) on a single day's instruction in a language class and learn something about that language. But if a stu... (read more)

0[anonymous]
How is that unlike other subjects? Seems pretty universal.
Sideways120

For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:

"I'm terrible at math."

"I hate math class."

"I'm just dumb."

That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments--very small inferential gaps, no "trick questions".

Now, the "I'm terrible at math" attitude... (read more)

7Daniel_Burfoot
I think this phenomenon illustrates a very widespread misunderstanding of what math is and how one becomes good at it. Consider the following two anecdotes:

1) Sammy walks into advanced Greek class on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes he has no idea what the teacher is talking about. Despairing, he concludes that he is "terrible at Greek" and "just dumb".

2) Sammy walks into advanced algebra on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes that he has no idea what the teacher is talking about. Despairing, he concludes that he is "terrible at math" and "just dumb".

Anecdote 1) just seems ridiculous. Of course if you walk into a language class that's beyond your level, you're going to be lost; everyone knows that. Every normal person can learn every natural language; there's no such thing as someone who's intrinsically "terrible at Greek". The solution is just to swallow your pride and go back to an earlier class.

But it seems like anecdote 2) is not only plausible but probably happens rather often. There is some irrational belief that skill at mathematics is some kind of unrefinable Gift: some people can do it and others just can't. This idea seems absurd to me: there is no "math gene", and there are no other examples of skills that some people can acquire and others simply can't.
3Vladimir_Nesov
An example of dark arts used for a good cause. The problem is that the children weren't strong enough to understand the concept of being potentially better at math, of it being true that enthusiasm will improve their results. They can't feel the truth of the complicated fact of [improving in the future if they work towards it], and so you deceive them into thinking that they are [good already], a simpler alternative.

Agreed--most of the arguments in good faith that I've seen or participated in were caused by misunderstandings or confusion over definitions.

I would add that once you know the jargon that describes something precisely, it's difficult to go back to using less precise but more understandable language. This is why scientists who can communicate their ideas in non-technical terms are so rare and valuable.
