Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Friendly-HI 29 March 2015 03:52:03PM -1 points [-]

Could very well be true. But it leaves open the curious question of what on earth I would be looking for in the ex-Eastern Bloc ;)

In response to comment by Doug_S. on Original Seeing
Comment author: MarsColony_in10years 29 March 2015 01:52:00PM 0 points [-]

It looks like a lot of people are of a similar mind. Judging by the comments, most people seem to be taking this merely as a way around writer's block, or a praise of depth first analysis as a way to narrow down to "a topic about which others haven't already said everything". The most insightful comment (at least to my sense of quality) proclaims this:

If Zen and the Art of Motorcycle Maintenance were a math textbook, the rule would be clear: "if you examine something, you will have something to say about it."

There is of course The Virtue of Narrowness, but what I think Phaedrus is getting at is that people in general, not just in their writing, tend not to put much effort into thinking new thoughts and thinking for themselves. One tool he has apparently employed successfully on his students is to have them narrow the scope of their essays, forcing them to think for themselves rather than echo back what other people have already said. But reading just this segment of the story out of context might be a little like reading one of Yudkowsky's later sequences without reading the earlier ones. Allow me to supply some of that context.

The book is about Phaedrus's ongoing obsession with finding his own specific version of the nebulous "ultimate good" or "objective morality" that so many philosophers have sought after. He calls his form "quality", which is a mixture of the mechanical/analytic structure of science/rationality with the organic/emotional creativity of art/spirituality. The character is unique in the world with this particular brand of philosophy, and so does a lot of original thinking, placing little value on traditional Aristotelian thought. There are 2 types of people in the world: Aristotelians and Platonists, and he is neither.

Given this, I would suggest that Phaedrus is trying hard to think new thoughts himself, and places little value on small adaptations of existing philosophy. The character would suggest that humanity made a wrong turn in Plato's time, with the divide between passion and logic. Fixing this requires an extraordinary amount of out-of-the-box thinking. Science needs to take seriously the quest to learn where hypotheses come from, and how best to nurture passion, creativity, insight, etc., and make them a real part of the scientific process. On the other hand, our culture needs to learn to appreciate beautiful engineering alongside beautiful art, and to find Joy in the Merely Real instead of mystery. These efforts call for new paradigms, new ideas, new modes of thought, and an entire upheaval of societal norms, not unlike during the Enlightenment and the Scientific Revolution.

The single concept he sees as uniting those two worlds is "Quality". Quality implies both sound engineering and elegant, desirable form. It is at once beautiful and useful. It can't be defined, because to define it you would have to define every whim of an entire human mind. Even so, we all know intuitively what quality is, because we can all agree that one essay is well written or poorly written, even if we squabble about the precise letter grade it deserves. Quality isn't just what people like. The word "just" has no place in that sentence. Quality IS what people like; everything that we can appreciate, for its design, its elegance, its beauty, its ingenuity... everything.

If any of this piques your interest, I recommend reading the book itself. What I've done is rather like trying to summarize all of The Sequences in one small post. But the point is, we are not talking about a technique to get over writer's block; the author and Yudkowsky are definitely hinting at insights into the human mind. Our minds are predominantly echo chambers of everything we learn from others, but we must try to add an original thought to the mix every now and then, if we want to improve this world we live in.

Comment author: TheAncientGeek 29 March 2015 09:42:37AM 1 point [-]

There's still subjective uncertainty in 4D crystals.

Comment author: D_Malik 29 March 2015 05:01:00AM 1 point [-]

My objection: This isn't an answer, it's a refusal to answer. Indexical uncertainty really can exist, e.g. you can give an upload perfect amnesia, so refusing to answer isn't something you're allowed to do.

Comment author: RPMcMurphy 29 March 2015 04:25:55AM -2 points [-]

Yeah, don't write anything that challenges a conclusion of Saint Eliezer's. That's a way to get to the truth. ...idiot.

A few examples of politics constituting not just an existential risk, but the most common severe risk faced by humanity. It's also an existential risk in any age with "leading force" (nuclear, biological, strong nanotechnology) weapons.

Much like most bars have signs that say "No Religion or Politics" this idiotic "parable" is approximately as intelligent as biblical parables that also serve to "shut down" discourse. You primates aren't exactly intelligent enough to function without continual discourse checking your excesses, and moderating your insipid tendencies to silence that which you disagree with.

Comment author: RPMcMurphy 29 March 2015 04:11:08AM *  1 point [-]

...However, that would almost certainly rub the LessWrong crowd the wrong way. If only they could have focused on discovering the truth through the use of logic. Then, they could have attempted to get everyone else to agree with that iron-clad logic.

The conflict has not vanished. Society is still divided along Blue and Green lines, and there is a "Blue" and a "Green" position on almost every contemporary issue of political or cultural importance. The Blues advocate taxes on individual incomes, the Greens advocate taxes on merchant sales; the Blues advocate stricter marriage laws, while the Greens wish to make it easier to obtain divorces; the Blues take their support from the heart of city areas, while the more distant farmers and watersellers tend to be Green; the Blues believe that the Earth is a huge spherical rock at the center of the universe, the Greens that it is a huge flat rock circling some other object called a Sun. Not every Blue or every Green citizen takes the "Blue" or "Green" position on every issue, but it would be rare to find a city merchant who believed the sky was blue, and yet advocated an individual tax and freer marriage laws.

OK, so my "reduction to absurdity" might be falling apart now, so I'll just make a few points about the above comments.

1) Lysander Spooner (an early atheist libertarian consequentialist who nonetheless defended deontological natural rights because they produced optimal results) tricked the general public in the North into favoring the value "the abolition of slavery" above "consistent loyalty to the Constitution" by falsely claiming that they were "one and the same." He knew this was false because he later wrote that the Constitution had "no authority." He did this because Northerners liked the outcome the Constitution had given them and hence, were loyal to it. He saw that William Lloyd Garrison's logical claims against the constitution as a "slavery-defending" document might be true, but that by pointing this out the problem of slavery was made totally intractable.

2) This implied that Spooner also knew that most of the electorate then (as it remains today) was irrational and unphilosophical. But what do I mean when I say irrational and unphilosophical? I mean: That the neocortices of humans naturally form linear prediction hierarchies that are specific and detailed at the "low level," and broadly-applicable and general at the "higher levels". At the highest level of a hierarchical worldview is a concern with systems that are based on emergent order, sometimes exponential, and consist of networks (both voluntary markets and coercive political) comprised of thousands to millions of human minds. This is also sometimes called the "philosophical" level of a rationally-prioritized hierarchy, because this level is concerned with philosophical questions about social organization.

3) Most people are incompetent at the philosophical level, because it's not necessary for them to do the things they're absolutely required to do, based on iterated feedback and correction. This philosophical hierarchical level is not as concerned with how to make personal decisions (how thoroughly to wipe your ass, how early you have to leave the house to make it to work on time, whether you should use Tufte Lyx or Powerpoint to design the graphs for your company report, what time to pick your kids up from school, how to pleasure your sex partner so they don't leave you for a better option, etc.) as it is with finding answers to really important "life-or-death" questions (i.e., Should I sign up for Alcor? Should I vote for this charismatic chap named Hitler or show up to his party's neighborhood watch meetings? What will happen if the FDA retains control over "drug approval"? Do I need someone's permission to acquire the medicine I need to live past 90 years of age? How will the system use feedback and correction if it is not allowed to test new drugs at market and computation speeds?).

The "blues" and "greens" are actually trying to find the answers to philosophical questions(domain), they just aren't any good at it (strategic incompetence). But are LessWrongians any better? Not when they're not trying. If you don't want to discuss policy, then you're actually not making much use of the LessWrong forum. Those competent to pursue a goal don't need the forum: they can post with permission of the site hosts. (All policies that matter are "political," at some level of the hierarchy. You find this out when you begin pursuing life-extension, and then encounter government roadblocks to you saving your own life. Of course, only a small number of very-well-informed people make this discovery, because very few people are as competent as Stephen Badylak or Ray Kurzweil.) And, of course, you're also excluding from participation all of the comparatively stupid "nodes" or "pattern recognizers" that allow for emergent social order, and market discovery and incentivization. So, once again, the people qualified to solve philosophical problems aren't thinking about them.

This seems to me to be a terrible outcome. This comment can't help but be the highest praise for (most of) the people at LessWrong while at the same time the highest criticism of (some of) their political decisions. (We know our political decisions by the results they yield.) By essentially subtracting themselves from the democratic debate, they make the same mistake I've seen replicated thousands of times among most other libertarians and thinkers. Those most inimical to the ideas of freedom act as cheerful, optimistic, happy network nodes, pushing with all their spare energy in the direction of totalitarianism. Those who are in favor of an open, liberal democracy resign themselves to the sorry state of affairs with detachment, cynicism, and political relinquishment (a very similar phenomenon to Bill Joy's "technological relinquishment").

And when strong AGI is finally created, it will have a strange "choice" to make: 1) Corrigible: Perpetuate the totalitarian "peace," and ally itself with the totalitarians, possibly as an enforcer. 2) Incorrigible: Be hostile to the vast majority of corrupted humans, favoring the few liberators / libertarians / "rebels." 3) Incorrigible: Be hostile to the totalitarians and, on a case-by-case basis, favor the rebuilding of civilization from its current remnants. In that case, in order to be friendly to humans, it must understand what social organization they best thrive under. ...And we can't tell it, because most of "us" don't know.

Comment author: RPMcMurphy 29 March 2015 03:57:27AM *  0 points [-]

Nice commentary. It reminds me of "The Machine Stops" by E. M. Forster. Both Forster's story and this parable are very interesting as analogies to our own society. Of course, analogies, sequences, and parables sometimes break down because they lose connection to material reality (ungrounded abstraction). Additionally, the way individual humans see patterns in reality varies quite a bit from individual to individual. (And, I dare say, there are more anti-green-discussion and anti-blue-comments on this and other fora as a result of biological determination, rather than any inherent merit or feature of their anti-debate political positions.) Being "above discussion" seems to me to be "above thought," even if that thought is rightfully noted as typically being "of poor quality" due to the majority of humanity's incapacity for philosophy. All goals of a suitably intelligent mind are "political," because the individual mind that is highly intelligent rapidly conquers its own domain and achieves its personal goals. At that point, such a dominant mind becomes a "statesman" and concerns itself with its surrounding environment, and its impact on others. This isn't "required," but it is natural, and nature tends to win.

Look at "politics" now, it's still "might makes right." The DEA, ATF, and other alphabet-soup agencies simply don't follow the common law. (The common law requires a "corpus delicti," due process, etc.)

It's "natural" for one reason: there's no reason not to build gardens instead of battlefields, and battlefields are the default position of low and venal sociopathic intelligences. Which does a powerful and benevolent mind build? Gardens with useful plants, animals, bacteria, and fungi (including Cannabis indica, sativa, and ruderalis; Erythroxylum coca; Papaver somniferum; Psilocybe cubensis, mexicana, cyanescens; millions of kinds of locality-tailored bacteria; etc.) Many of the most useful plants in a human-centric garden are "prohibited." Not only that, the health information relevant to the plants and bacteria that are not prohibited, is prohibited. A bio-centric view of medicine is not allowed, because thugs in the FDA say it's not. If there was such a thing as individual rights, or an educated citizenry, this couldn't last for an instant.

What I need, apparently, is the bacteria that makes Vitamin K2. Research has shown that this will prevent my body from lining my arteries with calcium, and that it will instead cause my body to allocate that calcium to my bones, where it can be better used for purposes I intend. Similarly, the excess K2 (manufactured as bioavailable menaquinone-4, -7, and -11, or MK-4, MK-7, and MK-11) will bind to Vitamin D3, both making it bio-available and washing the excess of it from my system, rather than allowing it to concentrate in my kidneys and liver. This is not advertised anywhere with health claims, even though my K2 levels, arterial plaque, and other relevant measurements can be taken by a qualified technician and adjusted for. This doesn't require a doctor, as it's a simple technique that any lab assistant can be trained to perform. However, it's too complex to cater to my level of knowledge; which then makes the specialists who can navigate the FDA's web of snares too expensive for me (and millions of others) to afford. The FDA and "regulation" have priced the legally-unsophisticated (or highly cost-conscious) producers out of the market, using a negative economic incentive or "disincentive." This doesn't take the form of a business being closed; it takes the form of a business never opened, an innovator biding his time in "stealth mode" or simply failing to offer the best known product.

I think the "take home" is this: We should all be very wary about getting involved in political debates. After all, they typically settle nothing except to make people mad. For example, in the 1850s, there were "greens" and "blues" in the USA, and the Southern blues wanted to perpetuate an institution of theirs, and the Northern greens opposed it. The blues had seemingly won, because they had altered the system by which the rules were enforced in a very sneaky manner: before trials, they had implemented the practice of "voir dire." This term had an apocryphal Latin root, sounded French, and nobody knew what it meant, or had any basis for knowing how legitimate it was. (This was a marked change since the hotly-contested beginning of the nation, in which a great many people had read the history of the conflict, and could tell you, very specifically, why "voir dire" was a very bad idea.) Additionally, "the blues" had shifted the meaning of the legal code by making scores of new laws, most of which had, before the introduction of "voir dire" been unenforceable.

But now their institution was favored by the true (underlying) law of nature, the force of arms.

Then, some trouble-making "greens" began "talking about politics" (where previously existed only the pristine silence of well-organized oppression). They referred to a lot of ancient history, and even some intellectually-dishonest-but-highly-effective strategic arguments, that made the network sympathize more with them. This resulted in a lot of people becoming "mind-killed." They got so "mind-killed" that they took to the streets whenever the sacred blues' institution was being enforced, crowding the courthouses. Of course, one may say they like the outcome of such "mind-killed" "manipulations." One may even say that, when the nature of the mind-killing is benevolent, then a benevolent result occurs as a consequence. Of course, the blue institution I'm referring to here is slavery, which required the greens and the un-affiliated individualists to "affiliate together."

Comment author: PhilGoetz 29 March 2015 02:50:22AM *  0 points [-]

The Amish are unique in their living styles in largely self-sustaining communities. They grow their own food.

The Amish vary greatly from one place to another. Here in Mercer County, they don't grow much of their own food, and when they do, they can it. They do make their own milk, but they like fast food and packaged food. Storing ingredients without refrigeration, cooking fancy meals on a wood stove, and cleaning up afterward with no hot running water isn't so simple.

Comment author: ike 29 March 2015 02:48:31AM -1 points [-]

This argument also relies on a ridiculous definition of rational.

Whilst rational economic actors do attempt to maximise their profit, the argument ignores that this takes place in the context of varying time windows. In effect it argues that it’s “rational” to take a tiny increase in profit today even if that destroys your business and all the potential long term profits you could obtain tomorrow and the day after. This definition is absurd and no actual business works that way.

Mike Hearn, Replace by Fee, a Counter Argument

Comment author: MarsColony_in10years 29 March 2015 02:46:39AM 0 points [-]

I'm not sure I'd interpret the results quite like that. "We believe everything we're told" seems like a bit of an exaggeration. I don't have a deep-seated, strong belief that 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 ≈ 2,250. That's just a quick guess, based on the information currently floating around in my skull. If you asked me for another guess tomorrow, I might give a radically different answer.
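(As an aside, the anchored guess above really is far off the true product. A quick sketch, just to make the gap concrete:)

```python
import math

# The quick anchored guess from the paragraph above, versus the true product.
guess = 2250
actual = math.prod(range(1, 9))  # 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = 8!

print(actual)                 # 40320
print(round(actual / guess))  # the guess undershoots by a factor of about 18
```

(The classic anchoring studies found this same pattern: people shown the descending product guess higher than people shown the ascending one, and both groups undershoot the true value badly.)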

It seems like we just encounter a lot of information over the years, and it all gets tossed into the giant box that is our skull. Then something comes up (something we see or hear, a word, an idea we have... anything) and our brain quickly rummages through the box for related concepts. It's not a comprehensive search by any means; it's just a quick search that is heavily biased toward concepts at the top of the box (those added or used most recently). This is generally a useful bias, since it's likely to turn up relevant information quickly.

If some of the concepts that come up during the search have a [FALSE] tag attached, we'll ignore them, or maybe even treat them as counter-evidence to whatever we're evaluating. The problem is that sometimes we're only half-listening when we encounter certain information, and never attach a [FALSE] tag. Or maybe the [FALSE] tag wasn't attached well enough to stick. For example: "I remember two of my geeky friends arguing about whether glass was a slow-flowing liquid or a true solid, but I forget who wound up being correct when they finally googled it."

But there are all sorts of other things attached to each bit of knowledge that's floating around in our brain, besides just a simplistic [FALSE] tag. We can remember where we heard it (college class, hearsay, scifi book, newspaper, peer-reviewed publication, etc.) and maybe even how we felt about it at the time (Were we surprised to learn it? Still skeptical afterward?). Ideally, we'll remember a lot of supporting evidence and ideas, and a few attempts to prove the notion false and how the tests failed.

The things we think of as our core beliefs tend not to be made up of only random hearsay. They tend to be based on ideas we are pretty sure about. They may have accumulated a bunch of weak supporting evidence over the years, due to confirmation bias. Even weaker beliefs (like those based on some source we read once and were pretty sure was reputable) require a basic amount of evidence.

Perhaps my argument is only about the meaning of the word "belief". After all, it seems arbitrary to declare some standard for our guesses at which point we are willing to call one a belief instead of a best guess. But in practice, that seems to be exactly what we do. I try to set my bar fairly high, and reserve judgement on a situation until I'm reasonably confident, but other people seem willing to form opinions on very little evidence, at the risk of turning out to be wrong. And that's fine, so long as our opinions are still evidence-based. It doesn't matter if the threshold is p>.99 or p>.95 or even p>.75, so long as we can agree on p and base our decisions on it.

But concentrating on errors, fallacies, heuristics, and biases that affect mainly our guesses seems like it would have limited value. Perhaps they are a way of catching errors early, before they propagate into deeply held beliefs. Or perhaps they would be useful for avoiding continuously adding small bits of support to our deeper beliefs (a form of confirmation bias). It would be extremely interesting to do a longitudinal case study, and track the development of a bad idea, from formation to conclusion. Say, from the journal of someone who came to believe in conspiracy theories or something similar. I wonder to what degree our natural human biases influence the long-term development of our opinions.

Comment author: TheOtherDave 29 March 2015 12:25:49AM 1 point [-]

Let's try it the other way: what are some examples of cases where you find yourself using definitions in what you think, but are not sure, is a meaningful context?

Comment author: elriel 28 March 2015 11:04:49PM 0 points [-]

Whether the fictional evidence actually misses that factor or not, I can't say. However, the fact that the text mentioned that the extermination was for biological homo sapiens leads me to think that those artificial kids weren't supposed to be just substitutes for emotional purposes but could actually act as full members of your family. That is, you wouldn't consider them pets or slaves, but family.

In response to comment by soreff on Circular Altruism
Comment author: Good_Burning_Plastic 28 March 2015 08:33:53PM 4 points [-]

But you also slightly improve your physical fitness which might reduce the probability of an accident further down the line by more than 1/10^10^100.

Comment author: Okeymaker 28 March 2015 07:09:00PM 1 point [-]

On the one hand, I understand many reasons for words. What words are for. You notice something, someone names it for you, you look up the word, you connect it to a definition, and voila, you have gained knowledge. Because you know at least some of its properties and perhaps also common purposes, from the definition.

On the other hand, I understand why arguing over definitions obviously is pointless in the above-mentioned example and the examples from How an Algorithm Feels From Inside.

Here is my problem. I have never bothered arguing definitions for the sake of it. I use them in a meaningful context. Or so I THINK. Could someone give me some more everyday examples of where this may not be the case? If I can connect this to something more concrete (something I can relate to), I might be able to really understand the issue.

In response to comment by AndyC on Circular Altruism
Comment author: altleft 28 March 2015 04:53:18PM 0 points [-]

It makes some sense in terms of total happiness, since 10 billion happy people would give a higher total happiness than 5 billion happy people.

In response to comment by Jiro on Circular Altruism
Comment author: soreff 28 March 2015 04:45:57PM *  2 points [-]

Whenever one bends down to pick up a dropped penny, one has more than a 1/Googolplex chance of a slip-and-fall accident which would leave one suffering for 50 years.

Comment author: Jiro 28 March 2015 04:01:18PM 0 points [-]

I'm following posts up the chain to 2012 but I think I'll comment on this anyway:

I don't think it's correct to say "its content was crafted to remain within the bounds of the laws, but was judged to be illegal on the grounds of intent and indirect implication". The law doesn't say "using X phrase is illegal, and if you don't use X phrase, you're in the clear". It actually uses the word "intent" to describe what is illegal; intent and implication is inherently a part of the law, and someone who is convicted based on intent and implication isn't convicted despite the law, he's convicted according to the law.

Furthermore, even if it didn't say "intent", lots of laws are interpreted according to intent and implication. If a bank robbery law says you can't threaten to kill someone to get money, and the bank robber points a loaded gun at the bank teller, and the teller gives him money without him having to say a word, he doesn't get to say "the law requires a threat and I didn't say anything, so there's no threat"--he made an implicit threat.

Comment author: Benito 28 March 2015 01:46:47PM 0 points [-]

No, but the point is that what appear to be good examples are often used when they are inappropriate, so you should be wary of agreeing more with someone's position just because they say their case is analogous.

Comment author: Jiro 28 March 2015 08:37:59AM *  1 point [-]

If you do not understand how an intelligent, well-meaning person can have a position, and it's a position that lots of people actually hold, then you do not understand the position yet.

Unless "I think the intelligent, well-meaning person is making an error due to cognitive bias, ignorance, or being lied to" counts as understanding them, I do not understand how an intelligent, well-meaning, person can believe in

-- homeopathy

-- The US political version of intelligent design

-- 9/11 conspiracy theories

These are positions that lots of people actually hold. Do I fail to understand these positions?

Comment author: Jiro 28 March 2015 08:32:27AM *  2 points [-]

A question for comparison: would you rather have a 1/Googolplex chance of being tortured for 50 years, or lose 1 cent?

Whenever I drive, I have a greater than 1/googolplex chance of getting into an accident which would leave me suffering for 50 years, and I still drive. I'm not sure how to measure the benefit I get from driving, but there are at least some cases where it's pretty small, even if it's not exactly a cent.
