Thanks but no thanks. I do know this really basic stuff - I just don't agree. Instead of just postulating that all explanations have to be tied to prediction, why don't you try to rebut the argument? Again: Inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible. So explanation should be conceived independently of prediction. Not every explanation needs to be tied to prediction.
Inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible.
Just because what you believe happens to be true, doesn't mean you're right to believe it. If I walk up to a roulette wheel, certain that the ball will land on black, and it does--then I still wasn't right to believe it would.
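The roulette point can be made quantitative. Assuming a standard European wheel (18 black pockets out of 37, even-money payout - illustrative numbers, not from the thread), a bet on black has negative expected value, which is why confidence that the ball will land on black is unjustified regardless of the outcome:

```python
# Expected profit of a 1-unit even-money bet on black,
# on a European roulette wheel (18 black pockets out of 37).
p_black = 18 / 37
ev = p_black * 1 + (1 - p_black) * (-1)  # win 1 with p_black, lose 1 otherwise
print(round(ev, 4))  # slightly negative: -1/37, so the bet loses in expectation
```

The belief "the ball will land on black" can turn out true, but the numbers show it was never the belief the evidence supported.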
Hypothetical Hume-worlders, like us, do not have the luxury of access to reality's "source code": they have not been informed that they exist in a hypothetical Hume-world, any more than we can know the "true nature" of our world. Their Hume-world theory, like yours, cannot be based on reading reality's source code; the only way to justify Hume-world theory is by demonstrating that it makes accurate predictions.
Arguably, it does make at least one prediction: that any causal model of reality will eventually break down. This prediction, to put it mildly, does not hold up well to our investigation of our universe.
Alternatively, you could assert that if all possibilities are randomly realized, we might (with infinitesimal probability) be living in a world that just happened to exactly resemble a causal world. But without evidence to support such a belief, you would not be right to believe it, even if it turns out to be true. Not to mention that, as others have mentioned in this thread, unfalsifiable theories are a waste of valuable mental real estate.
Humans, like all known life on earth, are adaptation executers.
Well, being a consequentialist is a particular adaptation you can execute. "Consequentialist" is a subset of "Adaptation Executer".
Humans certainly come much closer to pure consequentialism - of explicitly representing a goal and calculating optimal actions based upon the environment you observe to achieve that goal - than any other creature does.
I agree. My comment was meant as a clarification, not a correction, because the paragraph I quoted and the subsequent one could be misinterpreted to suggest that humans and animals use entirely different methods of cognition--"excecut[ing] certain adaptions without really understanding how or why they worked" versus an "explicit goal-driven propositional system with a dumb pattern recognition algorithm." I expect we both agree that human cognition is a subsequent modification of animal cognition rather than a different system evolved in parallel.
I'm not sure I agree that humans are closer to pure consequentialism than animals; if anything, the imperfect match between prediction and decision faculties makes us less consequentialist. Eating or not eating one strip of bacon won't have an appreciable impact on your social status! Rather, I would say that future-prediction allows us to have more complicated and (to us) interesting goals, and to form more complicated action paths.
All animals except for humans had no explicit notion of maximizing the number of children they had, or looking after their own long-term health. In humans, it seems evolution got close to building a consequentialist agent...
Clarification: evolution did not build human brains from scratch. Humans, like all known life on earth, are adaptation executers. The key difference is that thanks to highly developed frontal lobes, humans can predict the future more powerfully than other animals. Those predictions are handled by adaptation-executing parts of the brain in the same way as immediate sense input.
For example, consider the act of eating bacon. A human can extrapolate from the bacon to a pattern of bacon-eating to a future of obesity, health risks, and reduced social status (including greater difficulty finding a mate). This explains why humans can dither over whether to eat bacon, while a dog just scarfs it down--dogs can't predict the future that way. (The frontal lobes also distinguish between bad/good/better/best actions--hence the vegetarian's decision to abstain from bacon on moral grounds.)
Eliezer's body of writing on evolutionary psychology and P.J. Eby's writing on PCT and personal effectiveness seem to be regarded as incompatible by some commenters here (and I don't want to hijack this thread into yet another PCT debate), but they both support the proposition that akrasia and other "sub-optimal" mental states result from a brain processing future-predictions with systems that evolved to handle data from proximate environmental inputs and memory.
But is it true? Do young folks have more of an ability to unlearn falsehoods than old folks?
I think the point of the quote is not that young folks are more able to unlearn falsehoods; it's that they haven't learned as many falsehoods as old people, just by virtue of not having been around as long. If you can unlearn falsehoods, you can keep a "young" (falsehood-free) mind.
So the universe was created by an intelligent agent. Well, that's the standard Simulation Hypothesis [...]
I've been thinking about a slightly different question: is base-level reality physics-like, or optimization-like, and if it's optimization-like, did it start out that way?
Here's an example that illustrates what my terms mean. Suppose we are living in base-level reality which started with the Big Bang and evolution, and we eventually develop an AI that takes over the entire universe. Then I would say that base-level reality started off physics-like, then becomes optimization-like.
But it's surely conceivable that a universe could start off being optimization-like, and this hypothesis doesn't seem to violate Occam's Razor in any obvious way. Consider this related question: what is the shortest program that outputs a human mind? Is it an optimization program, or a physics simulation?
An optimization procedure can be very simple, if computing time isn't an issue, but we don't know whether there is a concisely describable objective function that we are the optimum of. On the other hand, the mathematical laws of physics are also simple, but we don't know how rare intelligent life is, so we don't know how many bits of coordinates are needed to locate a human brain in the universe.
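The claim that an optimization procedure can be very simple when computing time isn't an issue is easy to make concrete. A sketch, with a toy objective (count of 1-bits) standing in for the concisely-describable objective function the question asks about - the open issue is whether any such concise objective has a human mind as its optimum:

```python
# Exhaustive-search optimizer: a few lines suffice when compute is free.
# The objective here (number of 1-bits) is a deliberately trivial stand-in.
from itertools import product

def argmax_bitstring(score, n):
    """Return the n-bit string maximizing the given objective function."""
    return max(product([0, 1], repeat=n), key=score)

best = argmax_bitstring(sum, n=4)
print(best)  # (1, 1, 1, 1) -- the all-ones string maximizes the bit count
```

The program length is dominated by the description of the objective function, which is exactly why the question reduces to whether a short objective exists, versus how many bits of coordinates a physics simulation needs to locate a brain.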
Does anyone have an argument that settles these questions, in either direction?
How about both?
If I understand your terms correctly, it may be possible for realities that are not base-level to be optimization-like without being physics-like, e.g. the reality generated by playing a game of Nomic, a game in which players change the rules of the game. But this is only possible because of interference by optimization processes from a lower-level reality, whose goals ("win", "have fun") refer to states of physics-like processes. I suspect that base-level reality must be physics-like. To paraphrase John Donne, no optimization process is an island--otherwise how could one tell the difference between an optimization process and purely random modification?
On the other hand, the "evolution" optimization process arose in our universe without a requirement for lower-level interference. Not that I assume our universe is base-level reality, but it seems like evolution or analogous optimizations could arise at any level. So perhaps physics-like realities are also intrinsically optimization-like.
If you could show hunter-gatherers a raindance that called on a different spirit and worked with perfect reliability, or, equivalently, a desalination plant, they'd probably chuck the old spirit right out the window.
There's no need to speculate--this has actually happened. From what I know of the current state of Native American culture (which is admittedly limited), modern science is fully accepted for practical purposes, and traditional beliefs guide when to party, how to mourn, how to celebrate rites of passage, etc.
The only people who seem to think science conflicts with Native American belief systems are New Age converts coming from a Western religious background. From the linked article:
A Minnesota couple who refused chemotherapy for their 13-year-old son was ordered Friday to have the boy re-evaluated... Brown County District Judge John Rodenberg found Daniel Hauser has been "medically neglected" by his parents, Colleen and Anthony Hauser, who belong to a religious group that believes in using only natural healing methods practiced by some American Indians.
As I remarked in another comment, exercise has a documented effect. It is rational to do not just for health but for cognition (so why don't I exercise?)
Well, why don't you? And everyone else who complains about their "somehow" not exercising. It's a common complaint, even here on LW, where one might expect people to have already risen above such elementary failures of rationality.
This is not a rhetorical question. I speak as someone who does exercise, as a matter of course, every day, and have done for my entire adult life. (Before then, I wasn't averse to exercise, I just didn't give it much attention.) So I do not know what it is like, to not be this person.
So, what is it like, to be someone who thinks they should be doing that, but doesn't? What is going on when you see in front of you the choice to bike to work, to do 20 press-ups right now, to get a set of dumbbells and use them every day, or whatever -- and then not even click the "No" button on the dialog floating in the air in front of you, but just turn away from the choice?
Likewise, every other actual practice that you think would be a good thing for you to do. If you think that, and you are not doing it, why?
Calling it akrasia looks like a way of getting to not fix it.
Likewise, every other actual practice that you think would be a good thing for you to do. If you think that, and you are not doing it, why?
If you want to understand akrasia, I encourage you to take your own advice. Take a moment and write down two or three things that would have a major positive impact in your life, that you're not doing.
Now ask yourself: why am I not doing these things? Don't settle for excuses or elaborate System Two explanations why you don't really need to do them after all. You've already stipulated that they would have a major positive impact on your life! You're not looking for a list of all possible reasons; you're looking for the particular reason that you don't do those things.
If you've chosen the right sort of inactions to reflect on, you'll realize that you don't know why you don't do them. It's not just that you want to do these things, but don't; it's that you don't know why you don't. There is a reason for your inaction, but you aren't consciously aware of what it is. Congratulations: you've discovered akrasia.
Truth-telling is necessary but not sufficient for honesty. Something more is required: an admission of epistemic weakness. You needn't always make the admission openly to your audience (social conventions apply), but the possibility that you might be wrong should not leave your thoughts. A genuinely honest person should not only listen to objections to his or her favorite assumptions and theories, but should actively seek to discover such objections.
What's more, people tend to forget that their long-held assumptions are assumptions and treat them as facts. Forgotten assumptions are a major impediment to rationality--hence the importance of overcoming bias (the action, not the blog) to a rationalist.
Previously in this thread I opined as follows on the state of the art in self help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the hard long work of collecting evidence that would sway a non-gullible client.
PJ Eby took exception as follows:
you ignored the part where I just gave somebody a pointer to somebody else's work that they could download for free
Lots of people offer pointers to somebody else's writings. Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader's while. IMHO almost all the writings on the net about producing lasting useful psychological change are not worth the reader's while.
In the future, I will write "lasting change" when I mean "lasting useful psychological change".
you indirectly accused me of being more interested in financial incentives than results
The mere fact that you are human makes it much more probable than not that you are more skilled at self-deception and deception than at perceiving correctly the intrapersonal and interpersonal truths necessary to produce lasting change in another human being. Let us call the probability I just referred to "probability D". (The D stands for deception.)
You have written (in a response to Eliezer) that you usually charge clients a couple of hundred dollars an hour.
The financial success of your self-help practice is not significant evidence that you can produce lasting change in clients because again there is a plentiful supply of gullible self-help clients with money.
The fact that you use hypnotic techniques on clients and write a lot about hypnosis raises probability D significantly, because hypnotic techniques rely on the natural human machinery for negotiating who is dominant and who is submissive, or for deciding who will be the leader of the hunting party. Putting the client into a submissive or compliant state of mind probably helps a practitioner quite a bit in persuading the client to believe falsely that lasting change has been produced. You have presented no evidence or argument -- nor am I aware of any -- that putting the client into a submissive or compliant state helps a practitioner produce lasting change. Consequently, your reliance on and interest in hypnotic techniques significantly raises probability D.
Parenthetically, I do not claim that I know for sure that you are producing false beliefs rather than producing lasting change. It is just that you have not raised the probability I assign to your being able to produce lasting change high enough to justify my choosing to chase a pointer you gave into the literature or high enough for me to stop wishing that you would stop writing about how to produce lasting change in another human being on this site.
Parenthetically, I do not claim that your deception, if indeed that is what it is, is conscious or intentional. Most self-help and mental-health practitioners deceive because they are self-deceived on the same point.
You believe and are fond of repeating that a major reason for the failure of some of the techniques you use is a refusal by the client to believe that the technique can work. Exhorting the client to refrain from scepticism or pessimism is like hypnosis in that it strongly tends to put the client in a submissive or compliant state of mind, which again significantly raises probability D.
To the best of my knowledge (maybe you can correct me here) you have never described on this site an instance where you used a reliable means to verify that you had produced a lasting change. When you believe for example that you have produced a lasting improvement in a male client's ability to pick up women in bars, have you ever actually accompanied the client to a bar and observed how long it takes the client to achieve some objectively-valid sign of success (such as getting the woman's phone number or getting the woman to follow the client out to his car)?
In your extensive writings on this site, I can recall no instance where you describe your verifying your impression that you have created a lasting change in a client using reliable means. Rather, you have described only unreliable means, namely, your perceptions of the mental and the social environment and reports from clients about their perceptions of the mental and the social environment. That drastically raises probability D. Of course, you can bring probability D right back down again, and more, by describing instances where you have used reliable means to verify your impression that you have created a lasting change.
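The "bring probability D back down" step is just Bayes' rule. A toy sketch with entirely made-up numbers (nothing here comes from the thread; the values only illustrate the structure of the update) shows how one reliably verified outcome would shift the estimate:

```python
# Hypothetical Bayesian update on "probability D" (practitioner is deceived/deceiving)
# after a client outcome passes independent verification. All numbers invented.
prior_d = 0.9                # assumed prior that the practitioner is deceived
p_verify_given_d = 0.1       # deceived practitioners rarely pass independent checks
p_verify_given_not_d = 0.8   # competent practitioners usually do

evidence = prior_d * p_verify_given_d + (1 - prior_d) * p_verify_given_not_d
posterior_d = prior_d * p_verify_given_d / evidence
print(round(posterior_d, 3))  # 0.529 -- one verified outcome cuts D from 0.9 to ~0.53
```

The exact numbers don't matter; the point is that independently verified outcomes are precisely the kind of evidence that moves probability D, while client self-reports under a compliant state of mind are not.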
For readers who want to read more, here are two of Eliezer's sceptical responses to PJ Eby: [001], [002]
If it makes you feel any better, I am not seeing you any more harshly than I see any other self-help, life-coach or mental-health practitioner, including those with PhDs in psychology and MDs in psychiatry and those with prestigious academic appointments. In my book, until I see very strong evidence to the contrary, every mental-health practitioner and self-help practitioner is with high probability deluded except those that constantly remind themselves of how little they know.
Actually there is one way in which I resent you more than I resent other self-help, life-coach or mental-health practitioners: the other ones do not bring their false beliefs or rather their most-probably-false not-sufficiently-verified beliefs to my favorite place to read about the mental environment and the social environment. I worry that your copious writings on this site will discourage contributions from those who have constructed their causal model of mental and social reality more carefully.
Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader's while.
The mere fact that you are human makes it much more probable than not that you are more skilled at self-deception and deception than at perceiving correctly the intrapersonal and interpersonal truths necessary to produce lasting change in another human being.
Probably true. But if you use those statistical facts about most people as an excuse to never listen to anyone, or even to one specific person, you're setting yourself up for failure. How will you ever revise your probability estimate of one person's knowledge or the general state of knowledge in a field, if you never allow yourself to encounter any evidence?
The financial success of your self-help practice is not significant evidence that you can produce lasting change in clients because again there is a plentiful supply of gullible self-help clients with money.
have you ever actually accompanied the client to a bar and observed how long it takes the client to achieve some objectively-valid sign of success (such as getting the woman's phone number or getting the woman to follow the client out to his car)?
Is that your true rejection? If P.J. Eby said "why, yes I have," would you change your views based on one anecdote? Since a randomized, double-blind trial is impossible (or at least financially impractical and incompatible with the self-help coach's business model), what do you consider a reasonable standard of evidence?
I worry that your copious writings on this site will discourage contributions from those who have constructed their causal model of mental and social reality more carefully.
In my book, until I see very strong evidence to the contrary, every mental-health practitioner and self-help practitioner is with high probability deluded except those that constantly remind themselves of how little they know.
Given the vigorous dissent from you and others, I don't think "discouraging contributions" is a likely problem! However, I personally would like to see discussion of specific claims of fact and (as much as possible) empirical evidence. A simple assertion of a probability estimate doesn't help me understand your points of disagreement.
I may be in the minority in this respect, but I like it when Less Wrong is in crisis. The LW community is sophisticated enough to (mostly) avoid affective spirals, which means it produces more and better thought in response to a crisis. I believe that, e.g., the practice of going to the profile of a user you don't like and downvoting every comment, regardless of content, undermines Less Wrong more than any crisis has or will.
Furthermore, I think the crisis paradigm is what a community of developing rationalists ought to look like. The conceit of students passively absorbing wisdom at the feet of an enlightened teacher is far from the mark. How many people can you think of, who mastered any subject by learning in this way?
That said... both "sides" of the gender crisis are repeating themselves, which strongly suggests they have nothing new to say. So I say Eliezer is right. If you can't understand the other side's perspective by now--if you still have no basis for agreement after all this discussion--you need to acknowledge that you have a blind spot here and either re-read with the intent to understand rather than refute, or just avoid talking about it.