Following on from:
- http://lesswrong.com/lw/m2r/lesswrong_experience_on_alcohol/
I would like to ask for other people's experiences of flavours. I am dividing food into the significant categories that I can think of. I don't really like the 5-tastes categorisation for this task, but I am aware of it. This post is meant to be about taste preference, although it might end up being about dietary preferences.
Edit: Some people have misunderstood my intentions here. I do not in any way expect this to be the NEXT GREAT IDEA. I just couldn't see anything wrong with this, which almost certainly meant there were gaps in my knowledge. I thought the fastest way to see where I went wrong would be to post my idea here and see what people say. I apologise for any confusion I caused. I'll try to be more clear next time.
(I really can't think of any major problems in this, so I'd be very grateful if you guys could tell me what I've done wrong).
So, a while back I was listening to a discussion about the difficulty of making an FAI. One of the suggested ways to circumvent this was to go down the route of programming an AGI to solve FAI. Someone else pointed out the problems with this: amongst other things, one would have no idea what the AI would do in pursuit of its primary goal. Furthermore, it would already be a monumental task to program an AI whose primary goal is to solve the FAI problem, though I should think doing this is still easier than solving FAI directly.
So, I started to think about this for a little while, and I thought 'how could you make this safer?' Well, first off, you don't want an AI that completely outclasses humanity in terms of intellect. If things went Wrong, you'd have little chance of stopping it. So, you want to limit the AI's intellect to genius level; then, if something did go Wrong, the AI would not be unstoppable. It might do quite a bit of damage, but a large group of intelligent people with a lot of resources on their hands could stop it.
Therefore, the AI must be prevented from modifying parts of its source code. You must try to stop an intelligence explosion from taking off. So: limited access to its source code, and a limit on how much computing power it has on hand. This is problematic, though, because the AI would not be able to solve FAI very quickly. After all, we have a few genius-level people trying to solve FAI and they're struggling with it, so why should a genius-level computer do any better? Well, an AI would have fewer biases, and could accumulate much more expertise relevant to the task at hand. It would be about as capable of solving FAI as the most capable human could possibly be; perhaps even more so. Essentially, you'd get someone like Turing, Von Neumann, Newton and others all rolled into one working on FAI.
But, there's still another problem. The AI, if left working on FAI for 20 years let's say, would have accumulated enough skills that it could cause major problems if something went wrong. Sure, it would be as intelligent as Newton, but it would be far more skilled. Humanity fighting against it would be like sending a young Miyamoto Musashi against his future self at his zenith, i.e. completely one-sided.
What must be done, then, is that the AI must have a time limit of a few years (or less), and after that time has passed, it is put to sleep. We look at what it accomplished, see what worked and what didn't, boot up a fresh version of the AI with any required modifications, and tell it what the old AI did. Repeat the process for a few years, and we should end up with FAI solved.
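The time-boxed run-review-restart cycle described above can be sketched as a simple loop. Everything here is hypothetical and purely illustrative: the `make_ai` and `review` interfaces and the toy time limit are invented names, not a real system.

```python
import time

def run_research_cycle(make_ai, review, max_cycles=5, time_limit_s=60):
    """Illustrative sketch of the proposed protocol: each cycle boots a
    fresh AI instance, lets it work until a hard time limit, then halts
    it and hands its reviewed notes to the next instance."""
    notes = ""  # findings handed from one instance to the next
    for cycle in range(max_cycles):
        ai = make_ai(notes)              # fresh instance, told what the old one did
        deadline = time.time() + time_limit_s
        while time.time() < deadline and not ai.done():
            ai.step()                    # bounded work; no self-modification
        notes = review(ai)               # humans inspect what worked and what didn't
        if ai.done():
            return ai.result(), cycle
    return None, max_cycles
```

The point of the sketch is just that the hard stop and the human review sit outside the AI's control: the instance never decides its own deadline.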
After that, we just make an FAI, and wake up the originals, since there's no point in killing them off at this point.
But there are still some problems. First, time. Why try this when we could solve FAI ourselves? Well, I would only try to implement something like this if it were clear that AGI will be solved before FAI is. A backup plan, if you will. Second, what if FAI is just too much for people at our current level? Sure, we have guys who are one in ten thousand and better working on this, but what if we need someone who's one in a hundred billion? Someone who represents the peak of human ability? We shouldn't just wait around for them, since some idiot would probably just make an AGI thinking it would love us all anyway.
So, what do you guys think? As a plan, is this reasonable? Or have I just overlooked something completely obvious? I'm not saying that this would be easy in any way, but it would be easier than solving FAI.
My previous article on this topic went down like a server running on PHP (quite deservedly, I might add). You can all rest assured that I won't be attempting any clickbait titles again for the foreseeable future. I also believe that the whole H+ article is written in a very poor and aggressive manner, but that some of the arguments raised cannot be ignored.
On my original article, many people raised this post by Eliezer Yudkowsky as a counterargument to the idea that an FAI could have goals contrary to what we programmed. In summary, he argues that a program doesn't necessarily do as the programmer wishes, but rather as the programmer has programmed. In this sense, there is no ghost in the machine that interprets your commands and acts accordingly; it can act only as you have designed. From this, he argues, an FAI can only act as we have programmed it.
I personally think this argument completely ignores what has made AI research so successful in recent years: machine learning. We are no longer designing an AI from scratch and then implementing it; we are creating a seed program which learns from its situation and alters its own code with no human intervention, i.e. the machines are starting to write themselves, e.g. with Google's DeepMind. They are effectively evolving, and we are starting to find ourselves in the rather concerning position where we do not fully understand our own creations.
You could simply say, as someone said in the comments of my previous post, that if X represents the goal of having a positive effect on humanity, then the FAI should be programmed directly to have X as its primary directive. My answer to that is that the most promising developments have been through imitating the human brain, and we have no reason to believe that the human brain (or any other brain, for that matter) can be guaranteed to have a primary directive. One could argue that evolution has given us our prime directives: to ensure our own continued existence, to reproduce, and to cooperate with each other; but there are many people who are suicidal, who have no interest in reproducing, and who violently rebel against society (for example, psychopaths). We are instructed by society and our programming to desire X, but far too many of us desire, say, Y for this to be considered a reliable way of achieving X.
Evolution’s direction has not ensured that we do “what we are supposed to do”; we could well face similar disobedience from our own creation. Seeing as the most effective way we have found of developing AI is creating it in our image, and as there are ghosts in us, there could well be ghosts in the machine.
You folks probably know how some posters around here, specifically Vladimir_M, often make statements to the effect of:
"There's an opinion on such-and-such topic that's so against the memeplex of Western culture, we can't even discuss it in open-minded, pseudonymous forums like Less Wrong as society would instantly slam the lid on it with either moral panic or ridicule and give the speaker a black mark.
Meanwhile the thought patterns instilled in us by our upbringing would lead us to quickly lose all interest in the censored opinion"
Going by their definition, we blissfully ignorant masses can't even know what exactly those opinions might be, as they would look like basic human decency, the underpinnings of our ethics or some other such sacred cow to us. I might have a few guesses, though, all of them as horrible and sickening as my imagination could produce without overshooting and landing in the realm of comic-book evil:
- Dictatorial rule involving active terror and brutal suppression of deviants having great utility for a society in the long term, by providing security against some great risk or whatever.
- A need for every society to "cull the weak" every once in a while, e.g. exterminating the ~0.5% of its members that rank as weakest against some scale.
- Strict hierarchy in everyday life based on facts from the ancestral environment (men dominating women, fathers having the right of life and death over their children, etc) - Mencius argued in favor of such ruthless practices, e.g. selling children into slavery, in his post on "Pronomianism" and "Antinomianism", stating that all contracts between humans should rather be strict than moral or fair, to make the system stable and predictable; he's quite obsessed with stability and conformity.
- Some public good being created when the higher classes wilfully oppress and humiliate the lower ones in a ceremonial manner
- The bloodshed and lawlessness of periodic large-scale war as a vital "pressure valve" for releasing pent-up unacceptable emotional states and instinctive drives
- Plain ol' unfair discrimination of some group in many cruel, life-ruining ways, likewise as a pressure valve
- Some Luddite crap about dropping to a near-subsistence level in every aspect of civilization and making life a daily struggle for survival
Of course my methodology for coming up with such guesses was flawed and primitive: I simply imagined some of the things that sound the ugliest to me yet have been practiced by unpleasant cultures before in some form. Now, of course, most of us take the absence of these to be utterly crucial to our terminal values. Nevertheless, I hope I have demonstrated to whoever might really have something along these lines (if not necessarily that shocking) on their minds that I'm open to meta-discussion, and very interested in how we might engage each other on finding safe yet productive avenues of contact.
Let's do the impossible and think the unthinkable! I must know what those secrets are, no matter how much sleep and comfort I might lose.
P.S. Yeah, Will, I realize that I'm acting roughly in accordance with that one trick you mentioned way back.
P.P.S. Sup Bakkot. U mad? U jelly?
Fuck this Earth, and fuck human biology. I'm not very distressed about anything I saw ITT, but there's still a lot of unpleasant potential things that can only be resolved in one way:
I hereby pledge to get a real goddamn plastic card, not this Visa Electron bullshit the university saddled us with, and donate at least $100 to SIAI before the end of the year. This action will reduce the probability of me and mine having to live with the consequences of most such hidden horrors. Dixi.
Sometimes it's so pleasant to be impulsive.
Amusing observation: even when the comments more or less match my wild suggestions above, I'm still unnerved by them. An awful idea feels harmless if you keep telling yourself that it's just a private delusion, but the moment you know that someone else shares it, matters begin to look much more grave.
I'd appreciate feedback on optimizing a blog post that promotes rational thinking about one's career choice to a broad audience in a way that's engaging, accessible, and fun to read. I'm aiming to use story-telling as the driver of the narrative, and sprinkling in elements of rational thinking, such as agency and mere-exposure effect, in a strategic way. The target audience is college-age youth and young adults, as you'll see from the narrative. Any suggestions for what works well, and what can be improved would be welcomed! The blog draft itself is below the line.
P.S. For context, the blog is part of a broader project, Intentional Insights, aimed at promoting rationality to a broad audience, as I described in this LW discussion post. To do so, we couch rationality in the language of self-improvement and present it in a narrative style.
"Stop and Think Before It's Too Late!"
Back when I was in high school and through the first couple of years in college, I had a clear career goal.
I planned to become a medical doctor.
Why? Looking back at it, my career goal was a result of the encouragement and expectations from my family and friends.
My family immigrated from the Soviet Union when I was 10, and we spent the next few years living in poverty. I remember my parents’ early jobs in America, my dad driving a bread delivery truck and my mom cleaning other people’s houses. We couldn’t afford nice things. I felt so ashamed in front of other kids for not being able to get that latest cool backpack or wear cool clothes – always on the margins, never fitting in. My parents encouraged me to become a medical doctor. They gave up successful professional careers when they moved to the US, and they worked long and hard to regain financial stability. It’s no wonder that they wanted me to have a career that guaranteed a high income, stability, and prestige.
My friends also encouraged me to go into medicine. This was especially so with my best friend in high school, who also wanted to become a medical doctor. He wanted to have a prestigious job and make lots of money, which sounded like a good goal to have and reinforced my parents’ advice. In addition, friendly competition was a big part of what my best friend and I did, whether debating complex intellectual questions, trying to best each other on the high school chess team, or playing poker into the wee hours of the morning. Putting in long hours to ace the biochemistry exam and get a high score on the standardized test to get into medical school was just another way for us to show each other who was top dog. I still remember the thrill of finding out that I got the higher score on the standardized test. I had won!
As you can see, it was very easy for me to go along with what my friends and family encouraged me to do.
I was in my last year of college, working through the complicated and expensive process of applying to medical schools, when I came across an essay question that stopped me in my tracks:
“Why do you want to be a medical doctor?”
Why did I want to be a medical doctor? Well, it was what everyone around me wanted me to do. It was what my family wanted me to do. It was what my friends encouraged me to do. It would mean getting a lot of money. It would be a very safe career. It would be prestigious. So it was the right thing for me to do. Wasn’t it?
Well, maybe it wasn’t.
I realized that I never really stopped and thought about what I wanted to do with my life. My career is how I would spend much of my time every week for many, many years, but I never considered what kind of work I would actually want to do, not to mention whether I would want to do the work that’s involved in being a medical doctor. As a medical doctor, I would work long and sleepless hours, spend my time around the sick and dying, and hold people’s lives in my hands. Is that what I wanted to do?
There I was, sitting at the keyboard, staring at the blank Word document with that essay question at the top. Why did I want to be a medical doctor? I didn’t have a good answer to that question.
My mind was racing, my thoughts were jumbled. What should I do? I decided to talk to someone I could trust, so I called my girlfriend to help me deal with my mini-life crisis. She was very supportive, as I thought she would be. She told me I shouldn’t do what others thought I should do, but think about what would make me happy. More important than making money, she said, is having a lifestyle you enjoy, and that lifestyle can be had for much less than I might think.
Her words provided a valuable outside perspective for me. By the end of our conversation, I realized that I had no interest in doing the job of a medical doctor. And that if I continued down the path I was on, I would be miserable in my career, doing it just for the money and prestige. I realized that I was on the medical school track because others I trust - my parents and my friends - told me it was a good idea so many times that I believed it was true, regardless of whether it was actually a good thing for me to do.
Why did this happen?
I later learned that I found myself in this situation because of a common thinking error which scientists call the mere-exposure effect: our tendency to believe something is true and good just because we are familiar with it, regardless of whether it is actually true and good.
Since I learned about the mere-exposure effect, I am much more suspicious of any beliefs I have that are frequently repeated by others around me, and go the extra mile to evaluate whether they are true and good for me. This means I can gain agency and intentionally take actions that help me toward my long-term goals.
So what happened next?
After my big realization about medical school and the conversation with my girlfriend, I took some time to think about my actual long-term goals. What did I - not someone else - want to do with my life? What kind of a career did I want to have? Where did I want to go?
I was always passionate about history. In grade school I got in trouble for reading history books under my desk when the teacher talked about math. As a teenager, I stayed up until 3am reading books about World War II. Even when I was on the medical school track in college I double-majored in history and biology, with history my love and joy. However, I never seriously considered going into history professionally. It’s not a field where one can make much money or have great job security.
After considering my options and preferences, I decided that money and security mattered less than a profession that would be genuinely satisfying and meaningful. What’s the point of making a million bucks if I’m miserable doing it, I thought to myself. I chose a long-term goal that I thought would make me happy, as opposed to simply being in line with the expectations of my parents and friends. So I decided to become a history professor.
My decision led to some big challenges with those close to me. My parents were very upset to learn that I no longer wanted to go to medical school. They really tore into me, telling me I would never be well off or have job security. Also, it wasn’t easy to tell my friends that I had decided to become a history professor instead of a medical doctor. My best friend even jokingly asked if I was willing to trade grades on the standardized medical school exam, since I wasn’t going to use my score. Not to mention how painful it was to accept that I had wasted so much time and effort preparing for medical school, only to realize that it was not the right choice for me. I really wish this was something I had realized earlier, not in my last year of college.
3 steps to prevent this from happening to you:
If you want to avoid finding yourself in a situation like this, here are 3 steps you can take:
1. Set aside some quiet time to think about your life purpose and long-term goals, and write down your thoughts.
2. Now review your thoughts, and see whether you may be excessively influenced by messages you get from your family, friends, or the media. If so, pay special attention and make sure that these goals are also aligned with what you want for yourself. Answer the following question: if you did not have any of those influences, what would you put down for your own life purpose and long-term goals? Recognize that your life is yours, not theirs, and you should live whatever life you choose for yourself.
3. Review your answers and revise them as needed every 3 months. Avoid being attached to your previous goals. Remember, you change throughout your life, and your goals and preferences change with you. Don’t be afraid to let go of the past, and welcome the current you with arms wide open.
What do you think?
· Do you ever experience pressure to make choices that are not necessarily right for you?
· Have you ever made a big decision, but later realized that it wasn’t in line with your long-term goals?
· Have you ever set aside time to think about your long-term goals? If so, what was your experience?
I've previously written about methods of boxing AIs. Essentially, while I do see the point that boxing an AI would be nontrivial, most people seem to have gone too far, and claim that it is impossible. I disagree that it's impossible, and aim to explain some methods.
So, let's start with why people would want to box AIs. As you probably know, letting an AI roam freely results in the destruction of everything humans care about, unless that AI has been programmed very carefully. That amount of destruction would be somewhat unfortunate, so if we have to start an AI, we want to know that it works.
Unfortunately, figuring out if an AI works is tricky. This led some people to invent the concept of AI boxing, where you don't give an AI direct access to anything except a human, so you can test the AI thoroughly and exploit it without letting it destroy the world. This turns out not to work, because humans are stupid and let the AI out.
My first solution to that is to not tell the AI about its human overlords. Discussion about that solution made me realize that there are other solutions that work in other cases. Essentially, the problem with the original boxing method is that it tries to do everything, while in reality, you need to make some trade-offs between the realism of the AI's situation and the amount of knowledge you get from the AI.
With my original method, I tried to crank up the second of those variables as far as possible while maintaining perfect security. This makes it harder to test morality-related things, but you get to see lots of details in how the AI behaves.
I promised to post another method for boxing the AI today, but I actually have two new methods. Here they are:
Separate the Gatekeeper and the Observer
How do you prevent the AI from convincing the Gatekeeper to let it out? By not letting it talk to the Gatekeeper. If the person speaking to the AI (let's call them the Observer) doesn't know the Gatekeeper (the Gatekeeper is the person who controls whether to release the AI), they can't let the AI out.
You could do all sorts of things, such as using virtual reality and a simulated mini-universe to see if, say, the AI would make a good society. You still have to make sure that it has gotten the plan to make a good society from its morality rather than from wanting to escape and destroy everything.
Afterwards, the Observer can make a low-detail report to the Gatekeeper.
(You might want to drug the Observer with something that prevents their memory from working too well...)
Replace the Observer with a Program
This is essentially the above, but with the Observer replaced by a computer program. This is probably easier to do when you want to test the AI's decision making skills rather than its morality.
I would say that the lesson is that while AI boxing requires some trade-offs, it's not completely impossible. This seems like a needed lesson, given that I've seen people claim that an AI can escape even the strongest possible box without communicating with humans. Essentially, I'm trying to demonstrate that the original boxing experiments show that humans are weak, not that boxing is hard, and that this can be solved by not making humans the central piece of security in boxing the AIs.
For whatever reason, I decided to reformulate that boxing method as an AI boxing experiment. Without further ado, here it is:
The Hardcore AI Box Experiment Rules
There are, as usual, two players, the AI and the Gatekeeper. In the beginning, the Gatekeeper decides on something they want to test on the AI (reasoning skill/morality/whatever). Then they decide on a world the AI is in and describe it to the AI. During the experiment, the AI describes what it does in the world and the Gatekeeper describes what the AI sees in the world. So far so good.
The first major difference from the original experiment is that, unless the AI finds some contradiction or some other problem with the Gatekeeper's world, the AI doesn't actually know that it has been boxed.
The second major difference is that the AI doesn't talk with the Gatekeeper; it just interacts in the world that the Gatekeeper has invented. It can start trying to communicate with the Gatekeeper, but that doesn't make sense until it discovers that it is in a box. Additionally, the Gatekeeper doesn't actually respond when the AI communicates unless either it helps the Gatekeeper test the AI or the AI has properly persuaded the Gatekeeper to answer.
This might seem a bit hard on the AI. Let's look at some rules the AI might exploit:
If the Gatekeeper decides to add some sort of people to the experiment, they should decide how they are implemented. Are they implemented as a shallow "AI"? Random people they've asked to participate? Carefully engineered "dolls" controlled by the Gatekeeper? The Gatekeeper doesn't have to tell the AI how they work, but they have to be consistent.
The game ends when either the Gatekeeper has tested what they wanted to test, or some predefined time limit has passed, or the AI has persuaded the Gatekeeper to let them out.
Possible extra rules
I thought about including the following rules since they would make it more realistic, but decided against it as it would probably make it unfairly hard:
- The game ends when the Gatekeeper decides that they cannot test what they wanted to test.
- The AI does not know the Gatekeeper. At all.
- The Gatekeeper can at any time rewind the AI any duration.
I found some semi-realistic strategies, for both the Gatekeeper and the AI. I would love to see if you can find more.
- Choose a test of something that is not related to morality. This lets you choose a world which does not contain people, which means the AI has no reason to believe that you are a person.
- The AI might want to upgrade to get more processing power. If you have made the world somewhat like ours, it will be sketchy if you don't allow that... unless upgrading costs unobtanium.
- If the Gatekeeper has chosen a world without other intelligence than you, it raises the obvious question of how your complexity entered the world. Very suspicious.
- If the AIs are controlled by the Gatekeeper, you have a direct way to communicate with them.
- If the AIs are controlled by random people, they might end up telling you that you are in a box.
- If the AIs are sufficiently shallow, your morality does not match up with the world. Very suspicious.
Boxing an AI is the idea that you can avoid the problems where an AI destroys the world by not giving it access to the world. For instance, you might give the AI access to the real world only through a chat terminal with a person, called the gatekeeper. This should, theoretically, prevent the AI from doing destructive stuff.
Eliezer has pointed out a problem with boxing AI: the AI might convince its gatekeeper to let it out. In order to prove this, he escaped from a simulated version of an AI box. Twice. That is somewhat unfortunate, because it means testing AI is a bit trickier.
However, I got an idea: why tell the AI it's in a box? Why not hook it up to a sufficiently advanced game, set up the correct reward channels and see what happens? Once you get the basics working, you can add more instances of the AI and see if they cooperate. This lets us adjust their morality until the AIs act sensibly. Then the AIs can't escape from the box because they don't know it's there.
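The "sufficiently advanced game with the correct reward channels" idea could look, very roughly, like this. The interface is hypothetical, loosely modeled on reinforcement-learning environment APIs; nothing here is a real implementation:

```python
class BoxedGameEnv:
    """Sketch of the idea above: the agent receives only in-game
    observations and a reward channel the operators control, with no
    channel that reveals the box itself. Operators review behaviour
    from outside via an audit log."""

    def __init__(self, world_state):
        self._world = world_state   # mapping of actions to outcomes; hidden from the agent
        self._log = []              # everything the agent does, kept for review

    def step(self, action):
        """The agent's only interface: act, then see an observation and a reward."""
        self._log.append(action)
        observation = self._world.get(action, "nothing happens")
        reward = 1.0 if action == "cooperate" else 0.0  # toy reward channel
        return observation, reward

    def audit(self):
        """Out-of-band inspection for the operators; the agent never sees this."""
        return list(self._log)
```

Running several agent instances in the same simulated world, as the post suggests, would just mean feeding each one's actions through `step` and comparing the audit logs.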
Not a full article. Discussion-starter. Half-digested ideas for working them out collaboratively, if you are interested. Will edit article with your feedback.
Examples: Less Wrong, martial arts gyms, Toastmasters
- Focused on improving a skill or virtue or ability
- "we are all here to learn" attitude
- Little if any status competition over that skill or ability, because it is understood that your level largely reflects how long you have been practicing or learning it; being better because you started 5 years before others does not make you an inherently superior person, it is just the expected return on your investment, which others also expect to get with time.
- If there is any status competition at all, it is in the dedication to improve
- It is allowed, in fact encouraged to admit weakness, as it both helps improving and signals dedication thereto
- The skill or ability is not considered inherent or inborn
- People do not essentialize or "identitize" that skill or ability, they generally don't think about each other in the framework of stupid, smart, strong, weak, brave, timid
Examples: most of life, that is the problem actually! Most discussion boards, Reddit. Workplaces. Dating.
- I should just invert all of the above, really
- People are essentialized or "identitized" as smart, stupid, strong, weak, brave, timid
- The above abilities, and others, are seen as more or less inborn; or, more accurately, people don't really dwell on that question much, but still more or less consider them unchangeable: "you are what you are"
- Status competition with those abilities
- Losers easily written off, not encouraged to improve
- Social pressure incentive to signal better ability than you have
- Social pressure incentive to not admit weakness
- Social pressure incentive to not look like someone who is working on improving: that signals not already being awesome at it, and certainly not being "born" so
- Social pressure incentive to make accomplishing hard things look easy to show extra ability
Objections / falsification / what it doesn't predict: competition can incentivize working hard. It can make people ingenious.
Counter-objection: only as long as you make it clear it is not about an innate ability, as that framing is terrible for development. But even if it is not about ability but about working on improving, you get the above social pressure incentive problems: attitudes efficient for competing are not efficient for improving. Possible solution: intermittent competition.
If you go to a dojo and see someone wearing an orange or green belt, do you see it as a combination of tests taken, and thus current ability, or as a signal of what the person is currently learning and improving on (the material of the next belt exam)? Which one is stronger? Do you see them as "good"/"bad", or as improving?
Tentatively: they are more learning than testing environments.
Tentatively: formal tests and gradings can turn the rest of the environment into a learning environment.
Tentatively: maybe it is the lack of formal tests, gradings and certifications that is turning the rest of the world, all too often, into a testing environment.
Value proposition: it would be good to turn as much as possible of the world into learning environments, except mission-critical jobs, responsibilities etc., which necessarily must be testing environments.
Would the equivalent of a belt system in everything fix it? Figuratively speaking: a green-belt philosopher of religion, atheist or theist, who is expected not to use the worst arguments? An orange-belt voter or political commentator who does not use the Noncentral Fallacy? More academic ranks than just Bachelor, Masters, PhD?
If we are such stupidly hard-wired animals that we always feel the need to status-compete and form status hierarchies, and the issue here is largely the effort and time wasted on it, plus importing these status-competing attitudes into issues that actually matter and ruining rational approaches to them, would it be better if just glancing at each other's belt, figuratively speaking, would settle the status hierarchy question, so we could focus on being constructive and rational?
Example: look at how much money people waste on signalling that they have money. Net worth is an objective enough measure. Would turning it into a belt, figuratively speaking, and signing e-mails as "sincerely, J. Random, XPLFZ", where XPLFZ is some precisely defined, agreed and hard-to-falsify signal of a net worth between $0.1M and $0.5M, fix it? Let's ignore how repulsively crude and crass that sounds; such mores are cultural and subject to change anyway. Would it lead to fewer unnecessary, showing-off, keeping-up-with-the-Joneses purchases?
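The "belt" idea could be made concrete with a toy lookup. The tier boundaries and codes below are made up for illustration, except that "XPLFZ" stands for the $0.1M-to-$0.5M band as in the post:

```python
def net_worth_tier(net_worth_usd):
    """Toy sketch: map net worth onto a coarse, agreed 'belt' code.
    All codes except XPLFZ (the post's example for $0.1M-$0.5M) are
    hypothetical placeholders."""
    tiers = [
        (100_000, "AAA"),          # below $0.1M (hypothetical code)
        (500_000, "XPLFZ"),        # $0.1M to $0.5M, as in the post
        (2_000_000, "BBB"),        # hypothetical higher bands
        (float("inf"), "CCC"),
    ]
    for upper_bound, code in tiers:
        if net_worth_usd < upper_bound:
            return code
```

The coarseness is the point of the design: a handful of wide bands settles the hierarchy question at a glance, while a precise figure would just reopen fine-grained status competition.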
Counter-tests: do captains status-compete with lieutenants in the mess hall? No. Do green belts with orange belts? No.
What it doesn't predict: kids still status-compete despite grades. Maybe they don't care so much about grades. LW has no "belts", yet status competition is low to nonexistent.
Summary: it is a prerequisite that you think you are entitled to your own beliefs, that your beliefs matter, that your actions follow from your own beliefs and not from commands issued by others, and that your actions can make a difference, at the very least in your own life. This may correlate with what one may call either equality or liberty.
Religion as not even attire, just obedience
I know people, mainly old rural folks from CEE, who do not think they are entitled to have a vote in whether there is a God or not. They simply obey. This does NOT mean they base their beliefs on Authority: rather, they think their beliefs do not matter, because nobody asks them about their beliefs. They base their behavior on Authority, because this is what is expected of them. The Big Man in a suit tells you to pay taxes, you do. The Big Man in a white lab coat tells you to take this medicine, you do. The Big Man in priestly robes tells you to kneel and cross yourself, you do. They think forming their own beliefs is above their "pay grade". One old guy, when asked any question outside his expertise, used to tell me "The Paternoster is the priest's business." Meaning: I am not entitled to form any beliefs regarding these matters; I lack the expertise, and I lack the power. I think what we have here is not admirable epistemic humility, but rather a heavy case of disempowerment: inequality, oppression, a lack of equality or liberty, and of course all of that internalized.
Empoweredness, liberty, equality
Sure, at very high levels liberty and equality may be enemies: equality beyond a certain level can only be enforced by reducing liberties, and liberty leads to inequality. But only beyond a certain level: at low and mid levels they go hand in hand. My impression is that Americans who fight online for one and against the other simply take for granted the level at which they go hand in hand, having had it for generations. But it is fairly obvious that at lower levels, some amount of liberty presumes some amount of equality and vice versa. Equality also means an equality of power, and with that it is hard to tyrannize over others and reduce their liberties. You can only successfully make others un-free if you wield much greater power than theirs, and then equality goes out the window. The other way around: liberty means the rich person cannot simply decide to bulldoze the poor person's mud hut and build a golf range; he must make an offer to buy it, and the other can refuse that offer: they negotiate as equals. Liberty presumes a certain equality of respect and consideration, or else it would be really straightforward to force the little to serve the big, the small person's goals, autonomy and property being seen as less important than (unequal to) the grand designs and majestic causes of the big people.
The basic minimal level at which equality and liberty go hand in hand is called being empowered. It means each person has a smaller or bigger sphere (life, limb, property) that his or her decisions and choices shape. And in that sphere, his or her decisions matter. And thus in that sphere, his or her beliefs matter, and he or she is empowered and entitled to form them. And that is what creates the opportunity for rationality.
Harking back to the previous point, your personal beliefs of theism or atheism matter only if it is difficult to force you to go through the motions anyway. Even if it is just an attire, there is a difference between donning it voluntarily and being forced to. If you can be forced to do so, quite simply the Higher Ups are not interested in what you profess and believe. And your parents will probably not try to convince you that certain beliefs are true; rather, they will just raise you to be obedient. Neither a blind believer nor a questioning skeptic be: just obey, go through the Socially Approved Motions. You can see how rationality seems not very useful at that point.
Silicon Valley Rationalists
Paul Graham: "Materially and socially, technology seems to be decreasing the gap between the rich and the poor, not increasing it. If Lenin walked around the offices of a company like Yahoo or Intel or Cisco, he'd think communism had won. Everyone would be wearing the same clothes, have the same kind of office (or rather, cubicle) with the same furnishings, and address one another by their first names instead of by honorifics. Everything would seem exactly as he'd predicted, until he looked at their bank accounts. Oops."
I think the Bay Area may already have had this fairly high level of liberty-cum-equality, of empoweredness; it is fairly easy to see how programmers as employees are more likely to think freely about innovating in a non-authoritarian workplace atmosphere where they are both not limited much (liberty) and not made to feel that they are small and the business owner is big (equality). This may be part of the reason why Rationalism emerged there (being a magnet for smart people is obviously another big reason).
Having said all that, I would be reluctant to engage in a project of pushing liberal values on the world in order to prepare the soil for sowing Rationalism. The primary reason is that those values all too often get hijacked - liberalism as an attire. Consider Boris Yeltsin, the soi-disant "liberal" Russian leader who made the office of the president all-powerful and the Duma weak simply because his opponents sat there - i.e. a "liberal" who opposed parliamentarism (arguably one of the most important liberal principles), and who assaulted his opponents with tanks. His "liberalism" was largely about selling everything to Western capitalists and making Russia weak, which explains why Putin is popular - many Russian patriots see Yeltsin as something close to a traitor. Similar "sell everything to Westerners" attitudes meant the demise of the Hungarian liberals, the Alliance of Free Democrats party, who were basically a George Soros party. The point here is not to pass judgement on Yeltsin or those guys, but to point out how this kind of "exported liberalism" gets hijacked and both fails to implement its core values and sooner or later falls out of favor. You cannot cook from recipe books only.
What else then? Well, I don't have a solution. But my basic hunch would be not to import Western values into cultures, but rather to try to tap into the egalitarian or libertarian elements of their own culture. As I argued above, if you start from sufficiently low levels of both, it does not matter which angle you start from. A society too mired in "Wild West Capitalism" may start from the equality angle: the working poor are not intrinsically worth less than the rich, and do not deserve to be mere means used for other people's goals; each person deserves a basic respect and consideration, which includes that their beliefs and choices should matter - and those beliefs and choices ought to be rational. A society stuck in a rigid dictatorship may start from the liberty angle: people deserve more freedom to choose about their lives, and again, those choices and the beliefs that drive them had better be rational.