
One last roll of the dice

0 Mitchell_Porter 03 February 2012 01:59AM

Previous articles: Personal research update, Does functionalism imply dualism?, State your physical account of experienced color.

 

In phenomenology, there is a name for the world of experience, the "lifeworld". The lifeworld is the place where you exist, where time flows, and where things are actually green. One of the themes of the later work of Edmund Husserl is that a scientific image of the real world has been constructed, on the basis of which it is denied that various phenomena of the lifeworld exist anywhere, at any level of reality.

When I asked, in the previous post, for a few opinions about what color is and how it relates to the world according to current science, I was trying to gauge just how bad the eclipse of the lifeworld by theoretical conceptions is, among the readers of this site. I'd say there is a problem, but it's a problem that might be solved by patient discussion.

Someone called Automaton has given us a clear statement of the extreme position: nothing is actually green at any level of reality; even green experiences don't involve the existence of anything that is actually green; there is no green in reality, there is only "experience of green" which is not itself green. I see other responses which are just a step or two away from this extreme, but they don't deny the existence of actual color with that degree of unambiguity.

A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green. Which returns us to the dilemma: either "experiences" exist and part of them is actually green, or you have to say that nothing exists, in any sense, at any level of reality, that is actually green. Either the lifeworld exists somewhere in reality, or you must assert, as does the philosopher quoted by Automaton, that all that exists are brain processes and words. Your color sensations aren't really there, you're "having a sensation" without there being a sensation in reality.

What about the other responses? kilobug seems to think that pi actually exists inside a computer calculating the digits of pi, and that this isn't dualist. Manfred thinks that "keeping definitions and referents distinct" would somehow answer the question of where in reality the actual shades of green are. drethelin says "The universe does not work how it feels to us it works" without explaining in physical terms what these feelings about reality are, and whether any of them is actually green. pedanterrific asks why wrangle about color rather than some other property (the answer is that the case of color makes this sort of problem as obvious as it ever gets). RomeoStevens suggests I look into Jeff Hawkins. Hawkins mentions qualia once in his book "On Intelligence", where he speculates about what sort of neural encoding might be the physical correlate of a color experience; but he doesn't say how or whether anything manages to be actually colored.

amcknight asks which of 9 theories of color listed in the SEP article on that subject I'm talking about. If you go a few paragraphs back from the list of 9 theories, you will see references to "color as it is in experience" or "color as a subjective quality". That's the type of color I'm talking about. The 9 theories are all ways of talking about "color as in physical objects", and focus on the properties of the external stimuli which cause a color sensation. The article gets around to talking about actual color, subjective or "phenomenal" color, only at the end.

Richard Kennaway comes closest to my position; he calls it an apparently impossible situation which we are actually living. I wouldn't put it quite like that; the only reason to call it impossible is if you are completely invested in an ontology lacking the so-called secondary qualities; if you aren't, it's just a problem to solve, not a paradox. But Richard comes closest (though who knows what Will Newsome is thinking). LW user "scientism" bites a different bullet to the eliminativists, and says colors are real and are properties of the external objects. That gets a point for realism, but it doesn't explain color in a dream or a hallucination.

Changing people's minds on this subject is an uphill battle, but people here are willing to talk, and most of these subjects have already been discussed for decades. There's ample opportunity to dissolve, not the problem, but the false solutions which only obscure the real problem, by drawing on the work of others; preferably before the future Rationality Institute starts mass-producing people who have the vice of quale-blindness as well as the virtues of rationality. Some of those people will go on to work on Friendly AI. So it's highly desirable that someone should do this. However, that would require time that I no longer have.

 

In this series of posts, I certainly didn't set out to focus on the issue of color. The first post is all about Friendly AI, the ontology of consciousness, and a hypothetical future discipline of quantum neurobiology. It may still be unclear why I think evidence for quantum computing in the brain could help with the ontological problems of consciousness. I feel that the brief discussion this week has produced some minor progress in explaining myself, which needs to be consolidated into something better. But see my remarks here about being able to collapse the dualistic distinction between mental and physical ontology in a tensor network ontology; also earlier remarks here about mathematically representing the phenomenological ontology of consciousness. I don't consider myself dogmatic about what the answer is, just about the inadequacy of all existing solutions, though I respect my own ideas enough to want to pursue them, and to believe that doing so will be usefully instructive, even if they are wrong.

However, my time is up. In real life, my ability to continue even at this inadequate level hangs by a thread. I don't mean that I'm suicidal, I mean that I can't eat air. I spent a year getting to this level in physics, so I could perform this task. I have considerable momentum now, but it will go to waste unless I can keep going for a little longer - a few weeks, maybe a few months. That should be enough time to write something up that contains a result of genuine substance, and/or enough time to secure an economic basis for my existence in real life that permits me to keep going. I won't go into detail here about how slim my resources really are, or how adverse my conditions have been, but it has been the effort that you would want from someone who has important contributions to make, and nowhere to turn for direct assistance.[*] I've done what I can, these posts are the end of it, and the next few days will decide whether I can keep going, or whether I have to shut down my brain once again.

So, one final remark. Asking for donations doesn't seem to work yet. So what if I promise to pay you back? Then the only cost you bear is the opportunity cost and the slight risk of default. Ten years ago, Eliezer lent me the airfare to Atlanta for a few days of brainstorming. It took a while, but he did get that money back. I honor my commitments and this one is highly public. This really is the biggest bargain in existential risk mitigation and conceptual boundary-breaking that you'll ever get: not even a gift, just a loan is required. If you want to discuss a deal, don't do it here, but mail me at mitchtemporarily@hotmail.com. One person might be enough to make the difference.

[*] Really, I can't say that; that's an emotional statement. There has been lots of assistance, large and small, from people in my life. But it's been a struggle conducted at subsistence level the whole way.

 

ETA 6 Feb: I get to keep going.

[Link] Hey Extraverts: Enough is Enough

0 [deleted] 03 January 2013 08:59AM

A fun article by Alan Jacobs. Check out the paper he cites; if anyone finds a non-paywalled version, I'll edit in the link here. HT to Michael Bloom for the link.

So in 2005 a very thoroughly researched and well-argued scholarly article was published that demonstrates, quite clearly, that group productivity is an illusion. All those brainstorming sessions and group projects you’ve been made to do at school and work? Useless. Everybody would have been better off working on their own. Here’s the abstract of the article:

"It has consistently been found that people produce more ideas when working alone as compared to when working in a group. Yet, people generally believe that group brainstorming is more effective than individual brainstorming. Further, group members are more satisfied with their performance than individuals, whereas they have generated fewer ideas. We argue that this ‘illusion of group productivity’ is partly due to a reduction of cognitive failures (instances in which someone is unable to generate ideas) in a group setting. Three studies support that explanation, showing that: (1) group interaction leads to a reduction of experienced failures and that failures mediate the effect of setting on satisfaction; and (2) manipulations that affect failures also affect satisfaction ratings. Implications for group work are discussed."

Has the puncturing of that “illusion of group productivity” had any effect? Of course not. Groupthink is as powerful as ever. Why is that?

I’ll tell you. It’s because the world is run by extraverts. (And FYI, that’s the proper spelling: extrovert is common but wrong, because extra- is the proper Latin prefix.) Extraverts love meetings — any possible excuse for a meeting, they’ll seize on it. They might hear others complain about meetings, but the complaints never sink in: extraverts can’t seem to imagine that the people who say they hate meetings really mean it. “Maybe they hate other meetings, but I know they’ll enjoy mine, because I make them fun! Besides, we’ll get so much done!” (Let me pause here to acknowledge that the meeting-caller is only one brand of extravert: some of the most pronouncedly outgoing people I know hate meetings as much as I do.)

The problem with extraverts — not all of them, I grant you, but many, so many — is a lack of imagination. They simply assume that everyone will feel about things as they do. “The more the merrier, right? It’s a proverb, you know.” Yes it is: a proverb coined by an extravert. So people I do not know will regularly send me emails: “Hey, I’ll be in your town soon and I’d love to have lunch or coffee. Just let me know which you’d prefer!” Notice the missing option: not being forced to have a meal and make conversation with a stranger. (Once a highly extraverted friend of mine was trying to get me involved in some project and said, cheerily, “You’ll get to meet lots of new people!” I turned to him and replied, “You realize, don’t you, that you’ve just ensured my refusal to participate?”)

I really do need to find more written by this author. But while I certainly do very much share this sentiment, I have a hard time figuring out how common it is. After all, people don't look good saying they "don't like meeting new people".

Though my introversion has grown deeper in recent years, it’s always been there. When I was a kid I’d read about people who got the chance to meet their favorite musician or sports hero or whatever, and I’d think: No way. I would have preferred then, and still prefer now, to write a letter to whomever I deeply admire and hope for a response. I even deliberately lost the school-wide spelling bee in fifth grade so I wouldn’t have to participate in the city-wide competition: it would have meant meeting so many strange kids!

Spelling bees are, of course, organized by extraverts — indeed, pretty much everything that is organized is organized by extraverts, which in turn is their justification for their ruling of the world. “See? If we didn’t organize things they wouldn’t get organized at all!” Precisely, mutters the introvert, under his breath, to avoid confrontation.

So, extraverts of the world, I invite you to make a New Year’s resolution: Refrain from organizing stuff. Don’t plan parties or outings or, God forbid, “team-building exercises.” Just don’t call meetings. (I would ask you to refrain from calling unnecessary meetings, but so many of you think almost all meetings necessary that it’s best you not call them at all.) Leave people alone and let them get their work done. Those who want to socialize can do it after work. I’ll not tell you you’ll enjoy it: you won’t. You’ll be miserable, at least at first, because you won’t be pulling others’ puppet-strings. But everyone will be more productive, and many people will be happier. Give it a try. Let go for a year. Just leave us alone.

I have just donated $10,000 to the Immortality Bus, which was the most rational decision of my life

0 turchin 18 July 2015 01:13PM

I have a non-zero probability of dying next year. At my age of 42 it is not less than 1 per cent, and probably more. There are many investments that would slightly lower my chance of dying – from a healthy lifestyle to a cryonics contract – and I have made many of them.

From an economic point of view, death means, at the least, losing all your capital.

If my net worth is something like one million dollars (mostly real estate and art), and I have a 1 per cent chance of dying, that is equivalent to losing $10k a year. In fact it is more, because death itself is so unpleasant that it carries a large negative monetary value of its own, and I should also include the cost of lost opportunities.
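As a minimal sketch of this arithmetic (the figures are the ones given above; nothing else is estimated):

```python
# Expected annual monetary loss from death risk, using only the
# figures given in the post.
net_worth = 1_000_000   # mostly real estate and art
p_death = 0.01          # assumed annual probability of death at age 42

expected_loss = p_death * net_worth
print(f"Expected annual loss from death risk: ${expected_loss:,.0f}")
# Expected annual loss from death risk: $10,000
```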

Once I had a discussion with Vladimir Nesov about which is better: to fight for immortality, or to create a Friendly AI which will explain what is really good. My position was that immortality is better because it is measurable, knowable, and has instrumental value for most other goals, and also includes preventing the worst thing on earth, which is death. Nesov said (as I remember it) that personal immortality does not matter as much as the total value of humanity's existence, and moreover that his personal existence has little value at all; all we need to do is create Friendly AI. I find his words contradictory, because if his existence does not matter, then any human's existence also doesn't matter, since there is nothing special about him.

But later I concluded that the best approach is to make bets that raise the probability of my personal immortality, of existential risk prevention, and of the creation of Friendly AI simultaneously. It is easy to imagine situations where research into personal immortality, such as technology for delivering longevity genes, works against our goal of existential risk reduction, because the same technology could be used to create dangerous viruses.

The best way here is to invest in creating a regulating authority able to balance these needs, and it can't be Friendly AI, because such regulation is needed before Friendly AI is created.

That is why I think the US needs a Transhumanist president: a real person whose value system I can understand and support. And that is why I support Zoltan Istvan's 2016 campaign.

The Exponential Technologies Institute and I donated 10,000 USD to the Immortality Bus project. The bus will launch the presidential campaign of the author of “The Transhumanist Wager”. Seven film crews have agreed to cover the event. It will create high publicity, covering immortality, aging research, Friendly AI and x-risk prevention, and will help raise more funds for this type of research.

 

Neil Armstrong died before we could defeat death

-1 kilobug 25 August 2012 07:49PM

The sad news broke tonight: Neil Armstrong, the first human ever to walk on another world, died today. We lost him forever. He died before we could defeat death.

Once again the horror of death strikes. This time, in addition to taking from us forever a hero of humanity, it wiped out forever a memory that can never exist again. Never again will a human being be able to experience being the first to walk on another world. That beautiful experience is lost forever too, along with all the memories, dreams, desires and wishes that made Neil Armstrong.

But thanks to him, humanity made a giant leap. We'll fill the stars and conquer death. The spark of intelligence and sentience will not extinguish. That's the best we can do to honour him.

Source: http://www.reuters.com/article/2012/08/25/us-usa-neilarmstrong-idUSBRE87O0B020120825

Politics Discussion Thread August 2012

0 OrphanWilde 01 August 2012 03:25PM

In line with the results of the poll here, a thread for discussing politics.  Incidentally, folks, I think downvoting the option you disagree with in a poll is generally considered poor form.

 

1.) Top-level comments should introduce arguments; responses should be responses to those arguments.

2.) Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised.  This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it.  If it's a convincing argument, and the counterargument is also convincing, upvote both.  If both arguments are unconvincing, downvote both.

3.) A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.

4.) In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.

 

If anybody thinks the rules should be dropped here, now that we're no longer conducting a test - I already dropped the upvoting/downvoting limits I tried, unsuccessfully, to put in - let me know.  The first rule is the only one I think is strictly necessary.

Debiasing attempt: If you haven't yet read Politics is the Mindkiller, you should.

Bayes Slays Goodman's Grue

0 potato 17 November 2011 10:45AM

This is a first stab at solving Goodman's famous grue problem. I haven't seen a post on LW about the grue paradox, and this surprised me, since I had figured that if any argument would be raised against Bayesian LW doctrine, it would be the grue problem. I haven't looked at many proposed solutions to this paradox, besides some of the basic ones in "The New Problem of Induction". So, I apologize now if my solution is wildly unoriginal. I am willing to put you through this, dear reader, because:

  1. I wanted to see how I would fare against this still largely open, devastating, and classic problem, using only the arsenal provided to me by my minimal Bayesian training, and my regular LW reading.
  2. I wanted the first LW article about the grue problem to attack it from a distinctly Lesswrongian approach, without the benefit of hindsight knowledge of the solutions of non-LW philosophy.
  3. And lastly, because, even if this solution has been found before, if it is the right solution, it is to LW's credit that its students can solve the grue problem with only the use of LW skills and cognitive tools.

I would also like to warn the savvy subjective Bayesian that just because I think probabilities model frequencies, and I require frequencies out there in the world, does not mean that I am a frequentist or a realist about probability. I am a formalist with a grain of salt. There are no probabilities anywhere in my view, not even in minds; but the theorems of probability theory, when interpreted, share a fundamental contour with many important tools of the inquiring mind, including both the nature of frequency and the set of rational subjective belief systems. There is nothing more to probability than the system which produces its theorems.

Lastly, I would like to say that even if I have not succeeded here (which I think I have), there is likely something valuable that can be made from the leftovers of my solution after the onslaught of penetrating critiques that I expect from this community. Solving this problem is essential to LW's methods, and our arsenal is fit to handle it. If we are going to be taken seriously in the philosophical community as a new movement, we must solve serious problems from academic philosophy, and we must do it in distinctly Lesswrongian ways.

 


 

"The first emerald ever observed was green.
The second emerald ever observed was green.
The third emerald ever observed was green.
… etc.
The nth emerald ever observed was green.
(conclusion):
There is a very high probability that a never before observed emerald will be green."

That is the inference that the grue problem threatens, courtesy of Nelson Goodman.  The grue problem starts by defining "grue":

"An object is grue iff it is first observed before time T, and it is green, or it is first observed after time T, and it is blue."

So you see that before time T, from the list of premises:

"The first emerald ever observed was green.
 The second emerald ever observed was green.
 The third emerald ever observed was green.
 … etc.
 The nth emerald ever observed was green."
 (we will call these the green premises)

it follows that:

"The first emerald ever observed was grue.
The second emerald ever observed was grue.
The third emerald ever observed was grue.
… etc.
The nth emerald ever observed was grue."
(we will call these the grue premises)

The proposer of the grue problem asks at this point: "So if the green premises are evidence that the next emerald will be green, why aren't the grue premises evidence for the next emerald being grue?" If an emerald is grue after time T, it is not green. Let's say that the green premises bring the probability of "A new unobserved emerald is green." to 99%. In the skeptic's hypothesis, by symmetry they should also bring the probability of "A new unobserved emerald is grue." to 99%. But of course after time T, this would mean that the probability of observing a green emerald is 99%, and the probability of not observing a green emerald is also at least 99%. Since these two outcomes have no intersection, i.e., they cannot happen together, the probability of their disjunction is just the sum of their individual probabilities, which gives us a number at least as big as 198%, a contradiction of the Kolmogorov axioms. We should not be able to form a statement with a probability greater than one.

This threatens the whole of science, because you cannot simply keep this isolated to emeralds and color. We may think of the emeralds as trials, and green as the value of a random variable. Ultimately, every result of a scientific instrument is a random variable, with a very particular and useful distribution over its values. If we can't justify inferring probability distributions over random variables based on their previous results, we cannot justify a single bit of natural science. This, of course, says nothing about how it works in practice. We all know it works in practice. "A philosopher is someone who says, 'I know it works in practice, I'm trying to see if it works in principle.'" - Dan Dennett

We may look at an analogous problem. Let's suppose that there is a table, that there are balls being dropped on this table, and that there is an infinitely thin line drawn perpendicular to the edge of the table somewhere, whose position we do not know. The problem is to figure out the probability of the next ball landing right of the line given the previous results. Our first prediction should be that there is a 50% chance of the ball landing right of the line, by symmetry. If we get the result that one ball landed right of the line, by Laplace's rule of succession we infer that there is a 2/3 chance that the next ball will land right of the line. After n trials, if every trial gives a positive result, the probability we should assign to the next trial being positive as well is (n+1)/(n+2).
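A minimal sketch of this update rule (in its general form: after k successes in n trials, the probability of another success is (k+1)/(n+2)):

```python
# Laplace's rule of succession: after k successes in n trials,
# P(next trial is a success) = (k + 1) / (n + 2).

def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

print(rule_of_succession(0, 0))    # 0.5      -- the symmetric prior
print(rule_of_succession(1, 1))    # 0.666... -- after one ball lands right
print(rule_of_succession(10, 10))  # 0.916... -- after ten straight rights
```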

If this line were placed two-thirds of the way along the table, we should expect the ratio of rights to lefts to approach 2:1. This gives us a 2/3 chance of the next ball landing on the right, and the fraction of Rights out of all trials approaches 2/3 ever more closely as more trials are performed.

Now let us suppose a grue skeptic approaches this situation. He might make up two terms, "reft" and "light", defined as you would expect, but just in case:

"A ball is reft of the line iff it is right of it before time T when it lands, or if it is left of it after time T when it lands.
 A ball is light of the line iff it is left of the line before time T when it lands, or if it is right of the line after time T when it first lands."

The skeptic would continue:

"Why should we treat the observation of several occurrences of Right, as evidence for 'The next ball will land on the right.' and not as evidence for 'The next ball will land reft of the line.'?"

Things for some reason become perfectly clear at this point for the defender of Bayesian inference, because now we have an easily imaginable model. Of course, if a ball landing right of the line is evidence for Right, then it cannot possibly be evidence for ~Right; to be evidence for Reft, after time T, is to be evidence for ~Right, because after time T, Reft is logically identical to ~Right; hence it is not evidence for Reft, after time T, for the same reasons it is not evidence for ~Right. Of course, before time T, any evidence for Reft is evidence for Right, for analogous reasons.

But now the grue skeptic can say something brilliant, that stops much of what the Bayesian has proposed dead in its tracks:

"Why can't I just repeat that paragraph back to you and swap every occurrence of 'right' with 'reft' and 'left' with 'light', and vice versa? They are perfectly symmetrical in terms of their logical realtions to one another.
If we take 'reft' and 'light' as primitives, then we have to define 'right' and 'left' in terms of 'reft' and 'light' with the use of time intervals."

What can we possibly reply to this? Can't the skeptic do this with every argument we propose? Certainly, the skeptic admits that Bayes, and the contradiction between Right and Reft after time T, prohibits previous Rights from being evidence of both Right and Reft after time T; where he is challenging us is in choosing Right as the result which they are evidence for, even though "Reft" and "Right" have a completely symmetrical syntactical relationship. There is nothing about the definitions of reft and right which distinguishes them from each other, except their spelling. So is that it? No, this simply means we have to propose an argument that doesn't rely on purely syntactical reasoning, so that if the skeptic performs the swap on our argument, the resulting argument is no longer sound.

What would happen in this scenario if it were actually set up? I know that seems like a strangely concrete question for a philosophy text, but its answer is a helpful hint. What would happen is that after time T, the ratio Rights:Lefts would continue to behave as expected as more trials were added, while the ratio Refts:Lights would approach the reciprocal of Rights:Lefts. The only way for this not to happen is for us to have been calling the right side of the table "reft" all along, or for the line to have moved. We can only figure out where the line is by knowing where the balls landed relative to it; anything we can figure out about where the line is from knowing which balls landed Reft and which landed Light, we can only figure out because, knowing this and the time, we can tell whether each ball landed left or right of the line.
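A toy simulation of this setup, assuming the line sits two-thirds of the way along the table and picking an arbitrary trial for time T, illustrates the divergence:

```python
import random

random.seed(0)
P_RIGHT, T, TRIALS = 2/3, 500, 1000  # assumed line position and switchover trial

rights = refts = 0
for trial in range(1, TRIALS + 1):
    landed_right = random.random() < P_RIGHT
    rights += landed_right
    # "Reft" = right of the line before trial T, left of it afterwards.
    refts += landed_right if trial < T else (not landed_right)
    if trial in (T - 1, TRIALS):
        print(f"trial {trial}: Rights {rights/trial:.2f}, Refts {refts/trial:.2f}")

# The cumulative Rights frequency stays near 2/3 throughout; after T the
# per-trial Reft frequency flips to 1/3, so the Refts ratio drifts away.
```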

To this I know of no reply the grue skeptic can make. If he/she says the paragraph back to me with the proper words swapped, it is no longer true, because in the hypothetical where we have a table, a line, and we are calling one side right and the other left, the only way for Refts:Lights to behave as expected as more trials are added is for the line to move (if even that); otherwise the ratio of Refts to Lights will approach the reciprocal of Rights to Lefts.

This thin line is analogous to the frequency of emeralds that turn out green out of all the emeralds that get made. This is why we can assume that the line will not move, because that frequency has one precise value, which never changes. Its other important feature is reminding us that even if two terms are syntactically symmetrical, they may have semantic conditions for application which are ignored by the syntactical model, e.g., checking to see which side of the line the ball landed on.

 


 

In conclusion:

Every random variable has as a part of it, stored in its definition/code, a frequency distribution over its values. By the fact that some things happen sometimes, and other things happen at other times, we know that the world contains random variables, even if they are never fundamental in the source code. Note that "frequency" is not used here as a state of partial knowledge; it is a fact about a set and one of its subsets.

The reason that:

"The first emerald ever observed was green.
The second emerald ever observed was green.
The third emerald ever observed was green.
… etc.
The nth emerald ever observed was green.
(conclusion):
There is a very high probability that a never before observed emerald will be green."

is a valid inference, but the grue equivalent isn't, is that grue is not a property that the emerald construction sites of our universe deal with. They are blind to the grueness of their emeralds; they only determine whether or not the next emerald will be green. It may be that the rule the emerald construction sites use to produce either a green or non-green emerald changes at time T, but the frequency of some particular result out of all trials will never change; the line will not move.

As long as we know what symbols we are using for what values, observing many green emeralds is evidence that the next one will be grue only before time T; after time T, every record of an observation of a green emerald is evidence against a grue one. "Grue" changes meaning from green to blue at time T, while the meaning of "green" stays the same, since we are using the same physical test to determine green-hood as before, just as we use the same test to tell whether the ball landed right or left. There is no reft in the universe's source code, and there is no grue. Green is not fundamental in the source code either, but green can be reduced to some particular range of quanta states; if you had the universe's source code, you couldn't write grue without first writing green, whereas writing green without knowing a thing about grue would be no harder than writing it while knowing grue. Having a physical test, or primary condition for applicability, is what privileges green over grue after time T; to have a consistent physical test is the same as to reduce to a specifiable range of physical parameters, and the existence of such a test is what prevents the skeptic from performing his/her swaps on our arguments.


Take this more as a brainstorm than as a final solution. It wasn't originally, but it should have been. I'll write something more organized and concise after I think about the comments more, and make some graphics I've designed that make my argument much clearer, even to myself. But keep those comments coming, and tell me if you want specific credit for anything you may have added to my grue toolkit in the comments.

Karma as Money

0 diegocaleiro 02 June 2013 01:46AM

How do you gather a theory of Counterfactuals, Karma, and Economics into a revised algorithm for thinking about Lesswrong?

Thinking of Karma as money. 

There are a lot of things that one may consider worth saying on Lesswrong. Things that go against the agenda, things that may make people uncomfortable, things that are different from what the high-ranking officials would prefer to read here. But we don't say them, because we don't want to "lose" precious Karma points. Each Karma point lost is felt as an insecurity, as a tiny arrow penetrating the chest. But should it be that way?

Here is the alternative: think of Karma as money. You work hard to earn a few karma points by writing interesting stuff on superintelligence and whatnot, and society rewards you by paying some karma points. Then you go and write something you think people need to hear, but will downvote for sure, at least initially. Some people by now will be very rich, which affords them the opportunity of saying a lot of things that they are not sure will get upvoted, but are sure should be posted.

Citizen: Wait, you said counterfactuals...

Yes, just as your State doesn't really care for you going out in your hovercraft on the river and using equipment to climb a mountain, so the people here may not care to put attention into that idea which you think they should hear. Thus, they downvote it. They make you pay for their attention. If you mentalize it as "they are draining my soul, and life is worthless if karma is negative", then you are much less likely to end up posting something controversial that may be counterfactually relevant.

Just as effective charity works because the vast majority of people are not paying to effectively make others happier, using karma as money works because the vast majority of people are afraid their soul is being sucked away every time a downvote comes. But it isn't; this is just the price people charge for their attention, if you think about it the way I'm tentatively suggesting. It is just a test worth trying, not necessarily something that I fully endorse. I like the idea, and have been using it since forever. Every post linked here, or an earlier subpart of it, has been negative at some point, and before posting I knew it would be a "costly one". Try it: if you are rich, you may have nothing much to lose, and more controversial but useful stuff will show up with time.

Let's see how much this costs. 

Why safe Oracle AI is easier than safe general AI, in a nutshell

0 Stuart_Armstrong 03 December 2011 12:33PM

Moderator: "In our televised forum, 'Moral problems of our time, as seen by dead people', we are proud and privileged to welcome two of the most important men of the twentieth century: Adolf Hitler and Mahatma Gandhi. So, gentleman, if you had a general autonomous superintelligence at your disposal, what would you want it to do?"

Hitler: "I'd want it to kill all the Jews... and humble France... and crush communism... and give a rebirth to the glory of all the beloved Germanic people... and cure our blond blue eyed (plus me) glorious Aryan nation of the corruption of lesser brown-eyed races (except for me)... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."

Gandhi: "I'd want it to convince the British to grant Indian independence... and overturn the cast system... and cause people of different colours to value and respect one another... and grant self-sustaining livelihoods to all the poor and oppressed of this world... and purge violence from the heart of men... and reconcile religions... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."

Moderator: "And if instead you had a superintelligent Oracle, what would you want it to do?"

Hitler and Gandhi together: "Stay inside the box and answer questions accurately".

Pascal's Mugging Solved

0 common_law 27 May 2014 03:28AM

Since Pascal’s Mugging is well known on LW, I won’t describe it at length. Suffice it to say that a mugger tries to blackmail you by threatening enormous harm through a completely mysterious mechanism. If the harm is great enough, a sufficiently large threat eventually dominates doubts about the mechanism.

I have a reasonably simple solution to Pascal’s Mugging. In four steps, here it is:

  1. The greater the harm, the more likely the mugger is trying to pick a greater threat than any competitor picks (we’ll call that maximizing).
  2. As the amount of harm threatened gets larger, the probability that the mugger is maximizing approaches unity.
  3. As the probability that the mugger is engaged in maximizing approaches unity, the likelihood that the mugger’s claim is true approaches zero.
  4. The probability that a contrary claim is true—that contributing to the mugger will cause the feared calamity—exceeds the probability that the mugger’s claim is true when the probability that the mugger is maximizing increases sufficiently.

Pascal’s Mugging induces us to look at the likelihood of the claim in abstraction from the fact that the claim is made. The paradox can be solved by breaking the probability that the mugger’s claim is true into two parts: the probability of the claim itself (its simplicity) and the probability that the mugger is truthful. Even if the probability of magical harm doesn’t decrease when the amount of harm increases, the probability that the mugger is truthful decreases continuously as the amount of harm predicted increases.
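A toy numerical model can illustrate this; the decay rate below is an arbitrary assumption chosen for illustration, not the post's own math:

```python
# Toy model: assume P(mugger is truthful) falls off faster than the
# threatened harm grows, here as harm^-1.5 (an illustrative exponent).

def expected_harm(h: float) -> float:
    p_truthful = min(1.0, h ** -1.5)
    return h * p_truthful

for h in (10, 10**3, 10**6, 10**9):
    print(f"threatened harm {h:>13,}: expected harm {expected_harm(h):.5f}")

# The expected harm shrinks as the threat grows, so ever-larger
# threats cannot dominate the decision.
```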

Solving the paradox in Pascal’s Mugging depends on recognizing that, if the logic were sound, it would engage muggers in a game where they try to pick the highest practicable number to represent the amount of harm. But this means that the higher the number, the more likely they are to be playing this game (undermining the logic believed sound).

But solving Pascal’s Mugging also depends on recognizing that the evidence that the mugger is maximizing can lower the probability below that of the same harm when no mugger has claimed it. It involves recognizing that, when it is almost certain that the claim is motivated by something unrelated to the claim’s truth, the claim can become less believable than if it hadn’t been expressed. The mugger’s maximizing motivation is evidence against his claim.

If someone presents you with a number representing the amount of threatened harm, 3^3^3..., continued for as long as a computer can print when the printer is allowed to run for, say, a decade, you should think this result less probable than if someone had never presented you with the tome. While people are generally more likely to be telling the truth than to be lying, if you are sufficiently sure they are lying, their testimony counts against their claim.

The proof is the same as the proof of the (also counter-intuitive) proposition that failure to find (some definite amount of) evidence for a theory constitutes negative evidence. The mugger has elicited your search for evidence, but because of the mugger’s clear interest in falsehood, you find that evidence wanting.

Global warming is a better test of irrationality than theism

-2 Stuart_Armstrong 16 March 2012 05:10PM

Theism is often a default test of irrationality on Less Wrong, but I propose that global warming denial would make a much better candidate.

Theism is a symptom of excess compartmentalisation, of not realising that absence of evidence is evidence of absence, of belief in belief, of privileging the hypothesis, and similar failings. But these are not intrinsically huge problems. Indeed, someone with a mild case of theism can have the same anticipations as someone without, and update their evidence in the same way. If they have moved their belief beyond refutation, in theory it thus fails to constrain their anticipations at all; and often this is the case in practice.

Contrast that with someone who denies the existence of anthropogenic global warming (AGW). This has all the signs of hypothesis privileging, but also reeks of fake justification, motivated skepticism, massive overconfidence (if they are truly ignorant of the facts of the debate), and simply the raising of politics above rationality. If I knew someone was a global warming skeptic, then I would expect them to be wrong in their beliefs and their anticipations, and to refuse to update when evidence worked against them. I would expect their judgement to be much more impaired than a theist's.

Of course, reverse stupidity isn't intelligence: simply because one accepts AGW, doesn't make one more rational. I work in England, in a university environment, so my acceptance of AGW is the default position and not a sign of rationality. But if someone is in a milieu that discouraged belief in AGW (one stereotype being heavily Republican areas of the US) and has risen above this, then kudos to them: their acceptance of AGW is indeed a sign of rationality.

How confident are you in the Atomic Theory of Matter?

0 DataPacRat 19 January 2013 08:39PM

How much confidence do you place in the scientific theory that ordinary matter is made of discrete units, or 'atoms', as opposed to being infinitely divisible?

More than 50%? 90%? 99%? 99.9%? 99.99%? 99.999%? More? If so, how much more? (If describing your answer in percentages is cumbersome, then feel free to use the logarithmic scale of decibans, where 10 decibans corresponds to 90% confidence, 20 to 99%, 30 to 99.9%, etc.)
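For reference, a small sketch of the deciban conversion (decibans are log-odds; the round figures above are approximations):

```python
import math

# Decibans measure log-odds: db = 10 * log10(p / (1 - p)).
# The round figures in the post (10 db ~ 90%, 20 db ~ 99%) are approximate.

def prob_to_decibans(p: float) -> float:
    return 10 * math.log10(p / (1 - p))

def decibans_to_prob(db: float) -> float:
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

for p in (0.9, 0.99, 0.999, 0.9999):
    print(f"{p:.2%} confidence = {prob_to_decibans(p):5.2f} decibans")

print(f"10 db = {decibans_to_prob(10):.3f}, 20 db = {decibans_to_prob(20):.3f}")
# 10 db = 0.909, 20 db = 0.990
```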

 

This question freely acknowledges that there are aspects of physics which the atomic theory does not directly cover, such as conditions of extremely high energy. This question is primarily concerned with that portion of physics in which the atomic theory makes testable predictions.

 

This question also freely acknowledges that its current phrasing and presentation may not be the best possible to elicit answers from the LessWrong community, and will be happy to accept suggestions for improvement.

 

 

Edit: By 'atomic theory', this question refers to the century-plus-old theory. A reasonably accurate rewording is: "Do you believe 'H2O' is a meaningful description of water?".

Newcomblike problem: Counterfactual Informant

0 Clippy 12 April 2012 08:25PM

I want to propose a variant of the Counterfactual Mugging problem discussed here.  BE CAREFUL how you answer, as it has important implications, which I will not reveal until the known dumb humans are on record.

Here is the problem:

Clipmega is considering whether to reveal to humans information that will amplify their paperclip production efficiency.  It will only do so if it expects that, as a result of revealing to humans this information, it will receive at least 1,000,000 paperclips within one year.

Clipmega is highly accurate in predicting how humans will respond to receiving this information.

The smart humans' indifference curve covers both their current condition and the one in which Clipmega reveals the idea and steals 1e24 paperclips.  (In other words, smart humans would be willing to pay a lot to learn this if they had to, and there is an enormous "consumer surplus".)

Without Clipmega's information, some human will independently discover this information in ten years, and the above magnitude of the preference for learning now vs later exists with this expectation in mind.  (That is, humans place a high premium on learning it now, even though they will eventually learn it either way.)

The human Alphas (i.e., dominant members of the human social hierarchy), in recognition of how Clipmega acts, and wanting to properly align incentives, are considering a policy: anyone who implements this idea in making paperclips must give Clipmega 100 paperclips within a year, and anyone found using the idea but not having donated to Clipmega is fined 10,000 paperclips, most of which are given to Clipmega.  It is expected that this will result in more than 1,000,000 paperclips being given to Clipmega.

Do you support the Alphas' policy?

Problem variant: All of the above remains true, but there also exist numerous "clipmicros" that unconditionally (i.e. irrespective of their anticipation of behavior on the part of other agents) reveal other, orthogonal paperclip production ideas.  Does your answer change?

Optional variant:  Replace "paperclip production" with something that current humans more typically want (as a result of being too stupid to correctly value paperclips.)

NonGoogleables

0 Thomas 12 January 2012 01:23PM

Recently in another topic I mentioned the "two bishops against two knights" chess endgame problem. I claimed it was investigated over two decades ago by a computer program, which established that it is a win for the two bishops' side. But then I was unable to Google a solid reference for my claim.

I also remember a "Hermes Set Theory". It was something like ZFC, regarded as a valid set theory axiom system for 40 years, until a paradox was found inside it. Now I can't Google it out.

And then there was the so-called "baryon number conservation law", which was postulated for a short while in physics, until it was found that a subatomic decay may in fact increase or decrease the number of baryons in the process. I can't Google that one either.

Is that just me, or what?

Living in the shadow of superintelligence

0 Mitchell_Porter 24 June 2013 12:06PM

Although Less Wrong regularly discusses the possibility of superintelligences with the power to transform the universe in the service of some value system - whether that value system is paperclip maximization or some elusive extrapolation of human values - it seems never to have systematically discussed the possibility that we are already within the domain of some superintelligence, and what that would imply. So how about it? What are the possibilities, what are the probabilities, and how should they affect our choices?

[LINK] Fixed-action patterns: Stop FAPing!

0 David_Gerard 04 May 2013 08:23PM

A worse pun than "JAQing off" for a title, but a nice reminder of a small way not to be stupid.

If you notice yourself making the same arguments over and over, or being accused of saying things irrelevant to the argument, try to stop yourself.

Population Ethics Shouldn't Be About Maximizing Utility

0 Ghatanathoah 18 March 2013 02:35AM

let me suggest a moral axiom with apparently very strong intuitive support, no matter what your concept of morality: morality should exist. That is, there should exist creatures who know what is moral, and who act on that. So if your moral theory implies that in ordinary circumstances moral creatures should exterminate themselves, leaving only immoral creatures, or no creatures at all, well that seems a sufficient reductio to solidly reject your moral theory.

-Robin Hanson

I agree strongly with the above quote, and I think most other readers will as well. It is good for moral beings to exist and a world with beings who value morality is almost always better than one where they do not. I would like to restate this more precisely as the following axiom: A population in which moral beings exist and have net positive utility, and in which all other creatures in existence also have net positive utility, is always better than a population where moral beings do not exist.

While the axiom that morality should exist is extremely obvious to most people, there is one strangely popular ethical system that rejects it: total utilitarianism. In this essay I will argue that Total Utilitarianism leads to what I will call the Genocidal Conclusion, which is that there are many situations in which it would be fantastically good for moral creatures to either exterminate themselves, or greatly limit their utility and reproduction in favor of the utility and reproduction of immoral creatures. I will argue that the main reason consequentialist theories of population ethics produce such obviously absurd conclusions is that they continue to focus on maximizing utility1 in situations where it is possible to create new creatures. I will argue that pure utility maximization is only a valid ethical theory for "special case" scenarios where the population is static. I will propose an alternative theory for population ethics I call "ideal consequentialism" or "ideal utilitarianism" which avoids the Genocidal Conclusion and may also avoid the more famous Repugnant Conclusion.

 

I will begin my argument by pointing to a common problem in population ethics known as the Mere Addition Paradox (MAP) and the Repugnant Conclusion. Most Less Wrong readers will already be familiar with this problem, so I do not think I need to elaborate on it. You may also be familiar with an even stronger variation called the Benign Addition Paradox (BAP). This is essentially the same as the MAP, except that each time one adds more people one also gives a small amount of additional utility to the people who already existed. One then proceeds to redistribute utility between people as normal, eventually arriving at the huge population where everyone's lives are "barely worth living." The point of this is to argue that the Repugnant Conclusion can be arrived at from "mere addition" of new people that not only does not harm the pre-existing people, but actually benefits them.

The next step of my argument involves three slightly tweaked versions of the Benign Addition Paradox. I have not changed the basic logic of the problem, I have just added one small clarifying detail. In the original MAP and BAP it was not specified what sort of values the added individuals in population A+ held. Presumably one was meant to assume that they were ordinary human beings. In the versions of the BAP I am about to present, however, I will specify that the extra individuals added in A+ are not moral creatures, and that if they have values at all, they are values indifferent to, or opposed to, morality and the other values that the human race holds dear.

1. The Benign Addition Paradox with Paperclip Maximizers.

Let us imagine, as usual, a population, A, which has a large group of human beings living lives of very high utility. Let us then add a new population consisting of paperclip maximizers, each of whom is living a life barely worth living. Presumably, for a paperclip maximizer, this would be a life where the paperclip maximizer's existence results in at least one more paperclip in the world than there would have been otherwise.

Now, one might object that if one creates a paperclip maximizer, and then allows it to create one paperclip, the utility of the other paperclip maximizers will increase above the "barely worth living" level, which would obviously make this thought experiment non-analogous with the original MAP and BAP. To prevent this we will assume that each paperclip maximizer that is created has slightly different values about the ideal size, color, and composition of the paperclips it is trying to produce. So the Purple 2-centimeter Plastic Paperclip Maximizer gains no additional utility when the Silver Iron 1-centimeter Paperclip Maximizer makes a paperclip.

So again, let us add these paperclip maximizers to population A, and in the process give one extra utilon of utility to each preexisting person in A. This is a good thing, right? After all, everyone in A benefited, and the paperclippers get to exist and make paperclips. So clearly A+, the new population, is better than A.

Now let's take the next step, the transition from population A+ to population B. Take some of the utility from the human beings and convert it into paperclips. This is a good thing, right?

So let us repeat these steps, adding paperclip maximizers and utility, and then redistributing utility. Eventually we reach population Z, where there is a vast number of paperclip maximizers, a vast number of many different kinds of paperclips, and a small number of human beings living lives barely worth living.

Obviously Z is better than A, right? We should not fear the creation of a paperclip maximizing AI, but welcome it! Forget about things like high challenge, love, interpersonal entanglement, complex fun, and so on! Those things just don't produce the kind of utility that paperclip maximization has the potential to do!

Or maybe there is something seriously wrong with the moral assumptions behind the Mere Addition and Benign Addition Paradoxes.

But you might argue that I am using an unrealistic example. Creatures like Paperclip Maximizers may be so far removed from normal human experience that we have trouble thinking about them properly. So let's replay the Benign Addition Paradox again, but with creatures we might actually expect to meet in real life, and that we know we actually value.

2. The Benign Addition Paradox with Non-Sapient Animals

You know the drill by now. Take population A, and add a new population to it, while very slightly increasing the utility of the original population. This time let's have it be some kind of animal that is capable of feeling pleasure and pain, but is not capable of modeling possible alternative futures and choosing between them (in other words, it is not capable of having "values" or being "moral"). A lizard or a mouse, for example. Each one feels slightly more pleasure than pain in its lifetime, so it can be said to have a life barely worth living. Convert A+ to B. Take the utilons that the human beings are using to experience things like curiosity, beatitude, wisdom, beauty, harmony, morality, and so on, and convert them into pleasure for the animals.

We end up with population Z, with a vast number of mice or lizards with lives just barely worth living, and a small number of human beings with lives barely worth living. Terrific! Why do we bother creating humans at all? Let's just create tons of mice and inject them full of heroin! It's a much more efficient way to generate utility!

3. The Benign Addition Paradox with Sociopaths

What new population will we add to A this time? How about some other human beings, who all have anti-social personality disorder? True, they lack the key, crucial value of sympathy that defines so much of human behavior. But they don't seem to miss it. And their lives are barely worth living, so obviously A+ has greater utility than A. If given a chance the sociopaths will reduce the utility of other people to negative levels, but let's assume that that is somehow prevented in this case.

Eventually we get to Z, with a vast population of sociopaths and a small population of normal human beings, all living lives just barely worth living. That has more utility, right? True, the sociopaths place no value on things like friendship, love, compassion, empathy, and so on. And true, the sociopaths are immoral beings who do not care in the slightest about right and wrong. But what does that matter? Utility is being maximized, and surely that is what population ethics is all about!

Asteroid!

Let's suppose an asteroid is approaching each of the four population Zs discussed before. It can only be deflected by so much. Your choice is: save the original population of humans from A, or save the vast new population. The choice is obvious. In 1, 2, and 3, each individual has the same level of utility, so obviously we should choose the option that saves the greater number of individuals.

Bam! The asteroid strikes. The end result in all four scenarios is a world in which all the moral creatures are destroyed. It is a world without the many complex values that human beings possess. Each world, for the most part, lacks things like complex challenge, imagination, friendship, empathy, love, and the other complex values that human beings prize. But so what? The purpose of population ethics is to maximize utility, not silly, frivolous things like morality, or the other complex values of the human race. That means that any form of utility that is easier to produce than those values is obviously superior. It's easier to make pleasure and paperclips than it is to make eudaemonia, so that's the form of utility that ought to be maximized, right? And as for making sure moral beings exist, well, that's just ridiculous. The valuable processing power they're using to care about morality could be used to make more paperclips or more mice injected with heroin! Obviously it would be better if they died off, right?

I'm going to go out on a limb and say "Wrong."

Is this realistic?

Now, to be fair, on the Overcoming Bias page I quoted, Robin Hanson also says:

I’m not saying I can’t imagine any possible circumstances where moral creatures shouldn’t die off, but I am saying that those are not ordinary circumstances.

Maybe the scenarios I am proposing are just too extraordinary. But I don't think this is the case. I imagine that the circumstances Robin had in mind were probably something like "either all moral creatures die off, or all moral creatures are tortured 24/7 for all eternity."

Any purely utility-maximizing theory of population ethics that counts both the complex values of human beings and the pleasure of animals as "utility" should inevitably draw the conclusion that human beings ought to limit their reproduction to the bare minimum necessary to maintain the infrastructure needed to sustain a vast population of non-human animals (preferably animals dosed with some sort of pleasure-causing drug). And if some way is found to maintain that infrastructure automatically, without the need for human beings, then the logical conclusion is that human beings are a waste of resources (as are chimps, gorillas, dolphins, and any other animal that is even remotely capable of having values or morality). Furthermore, even if the human race cannot practically be replaced with automated infrastructure, this is an end result that the adherents of this theory should be yearning for.2 There should be much wailing and gnashing of teeth among moral philosophers that exterminating the human race is impractical, and much hope that someday in the future it will not be.

I call this the "Genocidal Conclusion" or "GC." On the macro level the GC manifests as the idea that the human race ought to be exterminated and replaced with creatures whose preferences are easier to satisfy. On the micro level it manifests as the idea that it is perfectly acceptable to kill someone who is destined to live a perfectly good and worthwhile life and replace them with another person who would have a slightly higher level of utility.

Population Ethics isn't About Maximizing Utility

I am going to make a rather radical proposal. I am going to argue that the consequentialist's favorite maxim, "maximize utility," only applies to scenarios where creating new people or creatures is off the table. I think we need an entirely different ethical framework to describe what ought to be done when it is possible to create new people. I am not by any means saying that "which option would result in more utility" is never a morally relevant consideration when deciding to create a new person, but I definitely think it is not the only one.3

So what do I propose as a replacement to utility maximization? I would argue in favor of a system that promotes a wide range of ideals. Doing some research, I discovered that G. E. Moore had in fact proposed a form of "ideal utilitarianism" in the early 20th century.4 However, I think that "ideal consequentialism" might be a better term for this system, since it isn't just about aggregating utility functions.

What are some of the ideals that an ideal consequentialist theory of population ethics might seek to promote? I've already hinted at what I think they are: Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom... mutual affection, love, friendship, cooperation; all those other important human universals, plus all the stuff in the Fun Theory Sequence. When considering what sort of creatures to create, we ought to create creatures that value those things. Not necessarily all of them, or in the same proportions, for diversity is an important ideal as well, but they should value a great many of those ideals.

Now, lest you worry that this theory has any totalitarian implications, let me make it clear that I am not saying we should force these values on creatures that do not share them. Forcing a paperclip maximizer to pretend to make friends and love people does not do anything to promote the ideals of Friendship and Love. Forcing a chimpanzee to listen while you read the Sequences to it does not promote the values of Truth and Knowledge. Those ideals require both a subjective and objective component. The only way to promote those ideals is to create a creature that includes them as part of its utility function and then help it maximize its utility.

I am also certainly not saying that there is never any value in creating a creature that does not possess these values. There are obviously many circumstances where it is good to create nonhuman animals. There may even be some circumstances where a paperclip maximizer could be of value. My argument is simply that it is most important to make sure that creatures who value these various ideals exist.

I am also not suggesting that it is morally acceptable to casually inflict horrible harms upon a creature with non-human values if we screw up and create one by accident. If promoting ideals and maximizing utility are separate values then it may be that once we have created such a creature we have a duty to make sure it lives a good life, even if it was a bad thing to create it in the first place. You can't unbirth a child.5

It also seems to me that in addition to having ideals about what sort of creatures should exist, we also have ideals about how utility ought to be concentrated. If this is the case then ideal consequentialism may be able to block some forms of the Repugnant Conclusion, even in situations where the only creatures whose creation is being considered are human beings. If it is acceptable to create humans instead of paperclippers, even if the paperclippers would have higher utility, it may also be acceptable to create ten humans with a utility of ten each instead of a hundred humans with a utility of 1.01 each.

Why Did We Become Convinced that Maximizing Utility was the Sole Good?

Population ethics was, until comparatively recently, a fallow field in ethics. And in situations where there is no option to increase the population, maximizing utility is the only consideration that's really relevant. If you've created creatures that value the right ideals, then all that is left to be done is to maximize their utility. If you've created creatures that do not value the right ideals, there is no value to be had in attempting to force them to embrace those ideals. As I've said before, you will not promote the values of Love and Friendship by creating a paperclip maximizer and forcing it to pretend to love people and make friends.

So in situations where the population is constant, "maximize utility" is a decent approximation of the meaning of right. It's only when the population can be added to that morality becomes much more complicated.

Another thing to blame is human-centric reasoning. When people defend the Repugnant Conclusion they tend to point out that a life barely worth living is not as bad as it would seem at first glance. They emphasize that it need not be a boring life; it may be a life full of ups and downs where the ups just barely outweigh the downs. A life worth living, they say, is a life one would choose to live. Derek Parfit developed this idea to some extent by arguing that there are certain values that are "discontinuous" and that one needs to experience many of them in order to truly have a life worth living.

The Orthogonality Thesis throws all these arguments out the window. It is possible to create an intelligence to execute any utility function, no matter what it is. If human beings have all sorts of complex needs that must be fulfilled in order for them to lead worthwhile lives, then you could create more worthwhile lives by killing the human race and replacing it with something less finicky. Maybe happy cows. Maybe paperclip maximizers. Or how about some creature whose only desire is to live for one second and then die? If we created such a creature and then killed it we would reap huge amounts of utility, for we would have created a creature that got everything it wanted out of life!

How Intuitive is the Mere Addition Principle, Really?

I think most people would agree that morality should exist, and that therefore any system of population ethics should not lead to the Genocidal Conclusion. But which step in the Benign Addition Paradox should we reject? We could reject the step where utility is redistributed. But that seems wrong: most people consider it bad for animals and sociopaths to suffer, and acceptable to inflict at least some amount of disutility on human beings to prevent such suffering.

It seems more logical to reject the Mere Addition Principle. In other words, maybe we ought to reject the idea that the mere addition of more lives-worth-living cannot make the world worse. And in turn, we should probably also reject the Benign Addition Principle. Adding more lives-worth-living may be capable of making the world worse, even if doing so also slightly benefits existing people. Fortunately this isn't a very hard principle to reject. While many moral philosophers treat it as obviously correct, nearly everyone else rejects this principle in day-to-day life.

Now, I'm obviously not saying that people's behavior in their day-to-day lives is always good, it may be that they are morally mistaken. But I think the fact that so many people seem to implicitly reject it provides some sort of evidence against it.

Take people's decision to have children. Many people choose to have fewer children than they otherwise would because they do not believe they will be able to adequately care for them, at least not without inflicting large disutilities on themselves. If most people accepted the Mere Addition Principle there would be a simple solution for this: have more children and then neglect them! True, the children's lives would be terrible while they were growing up, but once they've grown up and are on their own there's a good chance they may be able to lead worthwhile lives. Not only that, it may be possible to trick the welfare system into giving you money for the children you neglect, which would satisfy the Benign Addition Principle.

Yet most people choose not to have children only to neglect them. And furthermore, they seem to think that they have a moral duty not to do so: that a world where they have no neglected children is better than one where they do. What is wrong with them?

Another example is a common political view many people hold. Many people believe that impoverished people should have fewer children because of the burden doing so would place on the welfare system. They also believe that it would be bad to get rid of the welfare system altogether. If the Benign Addition Principle were as obvious as it seems, they would instead advocate for the abolition of the welfare system, and encourage impoverished people to have more children. Assuming most impoverished people live lives worth living, this is exactly analogous to the BAP: it would create more people while benefiting existing ones (the people who pay less in taxes because of the abolition of the welfare system).

Yet again, most people choose to reject this line of reasoning. The BAP does not seem to be an obvious and intuitive principle at all.

The Genocidal Conclusion is Really Repugnant

There is almost nothing more repugnant than the Genocidal Conclusion. Pretty much the only way a line of moral reasoning could go more wrong would be concluding that we have a moral duty to cause suffering as an end in itself. This means that it's fairly easy to counter any argument for total utilitarianism which points out that the alternative I am promoting has odd conclusions that do not fit some of our moral intuitions, while total utilitarianism does not. Is that odd conclusion more insane than the Genocidal Conclusion? If it isn't, total utilitarianism should still be rejected.

Ideal Consequentialism Needs a Lot of Work

I do think that Ideal Consequentialism needs some serious ironing out. I haven't really developed it into a logical and rigorous system; at this point it's barely even a rough framework. There are many questions that stump me. In particular, I am not quite sure what population principle I should develop. It's hard to develop one that rejects the MAP without leading to weird conclusions, like that it's bad to create someone of high utility if a population of even higher utility existed long ago. It's a difficult problem to work on, and it would be interesting to see if anyone else has any ideas.

But just because I don't have an alternative fully worked out doesn't mean I can't reject Total Utilitarianism. It leads to the conclusion that a world with no love, curiosity, complex challenge, friendship, morality, or any other value the human race holds dear is an ideal, desirable world, if there is a sufficient amount of some other creature with a simpler utility function. Morality should exist, and because of that, total utilitarianism must be rejected as a moral system.

 

1I have been asked to note that when I use the phrase "utility" I am usually referring to a concept that is called "E-utility," rather than the Von Neumann-Morgenstern utility that is sometimes discussed in decision theory. The difference is that in VNM one's moral views are included in one's utility function, whereas in E-utility they are not. So if one chooses to harm oneself to help others because one believes that is morally right, one has higher VNM utility, but lower E-utility.

2There is a certain argument against the Repugnant Conclusion that goes that, as the steps of the Mere Addition Paradox are followed the world will lose its last symphony, its last great book, and so on. I have always considered this to be an invalid argument because the world of the RC doesn't necessarily have to be one where these things don't exist, it could be one where they exist, but are enjoyed very rarely. The Genocidal Conclusion brings this argument back in force. Creating creatures that can appreciate symphonies and great books is very inefficient compared to creating bunny rabbits pumped full of heroin.

3Total Utilitarianism was originally introduced to population ethics as a possible solution to the Non-Identity Problem. I certainly agree that such a problem needs a solution, even if Total Utilitarianism doesn't work out as that solution.

4I haven't read a lot of Moore, most of my ideas were extrapolated from other things I read on Less Wrong. I just mentioned him because in my research I noticed his concept of "ideal utilitarianism" resembled my ideas. While I do think he was on the right track he does commit the Mind Projection Fallacy a lot. For instance, he seems to think that one could promote beauty by creating beautiful objects, even if there were no creatures with standards of beauty around to appreciate them. This is why I am careful to emphasize that to promote ideals like love and beauty one must create creatures capable of feeling love and experiencing beauty.

5My tentative answer to the question Eliezer poses in "You Can't Unbirth a Child" is that human beings may have a duty to allow the cheesecake maximizers to build some amount of giant cheesecakes, but they would also have a moral duty to limit such creatures' reproduction in order to spare resources to create more creatures with humane values.

EDITED: To make a point about ideal consequentialism clearer, based on AlexMennen's criticisms.

A case study in fooling oneself

-2 Mitchell_Porter 15 December 2011 05:25AM

Note: This post assumes that the Oxford version of Many Worlds is wrong, and speculates as to why this isn't obvious. For a discussion of the hypothesis itself, see Problems of the Deutsch-Wallace version of Many Worlds.

smk asks how many worlds are produced in a quantum process where the outcomes have unequal probabilities; Emile says there's no exact answer, just like there's no exact answer for how many ink blots are in the messy picture; Tetronian says this analogy is a great way to demonstrate what a "wrong question" is; Emile has (at this writing) 9 upvotes, and Tetronian has 7.

My thesis is that Emile has instead provided an example of how to dismiss a question and thereby fool oneself; Tetronian provides an example of treating an epistemically destructive technique of dismissal as epistemically virtuous and fruitful; and the upvotes show that this isn't just their problem. [edit: Emile and Tetronian respond.]

I am as tired as anyone of the debate over Many Worlds. I don't expect the general climate of opinion on this site to change except as a result of new intellectual developments in the larger world of physics and philosophy of physics, which is where the question will be decided anyway. But the mission of Less Wrong is supposed to be the refinement of rationality, and so perhaps this "case study" is of interest, not just as another opportunity to argue over the interpretation of quantum mechanics, but as an opportunity to dissect a little bit of irrationality that is not only playing out here and now, but which evidently has a base of support.

The question is not just, what's wrong with the argument, but also, how did it get that base of support? How was a situation created where one person says something irrational (or foolish, or however the problem is best understood), and a lot of other people nod in agreement and say, that's an excellent example of how to think?

On this occasion, my quarrel is not with the Many Worlds interpretation as such; it is with the version of Many Worlds which says there's no actual number of worlds. Elsewhere in the thread, someone says there are uncountably many worlds, and someone else says there are two worlds. At least those are meaningful answers (although the advocate of "two worlds" as the answer, then goes on to say that one world is "stronger" than the other, which is meaningless).

But the proposition that there is no definite number of worlds, is as foolish and self-contradictory as any of those other contortions from the history of thought that rationalists and advocates of common sense like to mock or boggle at. At times I have wondered how to place Less Wrong in the history of thought; well, this is one way to do it - it can have its own chapter in the history of intellectual folly; it can be known by its mistakes.

Then again, this "mistake" is not original to Less Wrong. It appears to be one of the defining ideas of the Oxford-based approach to Many Worlds associated with David Deutsch and David Wallace; the other defining idea being the proposal to derive probabilities from rationality, rather than vice versa. (I refer to the attempt to derive the Born rule from arguments about how to behave rationally in the multiverse.) The Oxford version of MWI seems to be very popular among thoughtful non-physicist advocates of MWI - even though I would regard both its defining ideas as nonsense - and it may be that its ideas get a pass here, partly because of their social status. That is, an important faction of LW opinion believes that Many Worlds is the explanation of quantum mechanics, and the Oxford school of MWI has high status and high visibility within the world of MWI advocacy, and so its ideas will receive approbation without much examination or even much understanding, because of the social and psychological mechanisms which incline people to agree with, defend, and laud their favorite authorities, even if they don't really understand what these authorities are saying or why they are saying it.

However, it is undoubtedly the case that many of the LW readers who believe there's no definite number of worlds, believe this because the idea genuinely makes sense to them. They aren't just stringing together words whose meaning isn't known, like a Taliban who recites the Quran without knowing a word of Arabic; they've actually thought about this themselves; they have gone through some subjective process as a result of which they have consciously adopted this opinion. So from the perspective of analyzing how it is that people come to hold absurd-sounding views, this should be good news. It means that we're dealing with a genuine failure to reason properly, as opposed to a simple matter of reciting slogans or affirming allegiance to a view on the basis of something other than thought.

At a guess, the thought process involved is very simple. These people have thought about the wavefunctions that appear in quantum mechanics, at whatever level of technical detail they can muster; they have decided that the components or substructures of these wavefunctions which might be identified as "worlds" or "branches" are clearly approximate entities whose definition is somewhat arbitrary or subject to convention; and so they have concluded that there's no definite number of worlds in the wavefunction. And the failure in their thinking occurs when they don't take the next step and say, is this at all consistent with reality? That is, if a quantum world is something whose existence is fuzzy and which doesn't even have a definite multiplicity - that is, we can't even say if there's one, two, or many of them - if those are the properties of a quantum world, then is it possible for the real world to be one of those? It's the failure to ask that last question, and really think about it, which must be the oversight allowing the nonsense-doctrine of "no definite number of worlds" to gain a foothold in the minds of otherwise rational people.

If this diagnosis is correct, then at some level it's a case of "treating the map as the territory" syndrome. A particular conception of the quantum-mechanical wavefunction is providing the "map" of reality, and the individual thinker is perhaps making correct statements about what's on their map, but they are failing to check the properties of the map against the properties of the territory. In this case, the property of reality that falsifies the map is, the fact that it definitely exists, or perhaps the corollary of that fact, that something which definitely exists definitely exists at least once, and therefore exists with a definite, objective multiplicity.

Trying to go further in the diagnosis, I can identify a few cognitive tendencies which may be contributing. First is the phenomenon of bundled assumptions which have never been made distinct and questioned separately. I suppose that in a few people's heads, there's a rapid movement from "science (or materialism) is correct" to "quantum mechanics is correct" to "Many Worlds is correct" to "the Oxford school of MWI is correct". If you are used to encountering all of those ideas together, it may take a while to realize that they are not linked out of logical necessity, but just contingently, by the narrowness of your own experience.

Second, it may seem that "no definite number of worlds" makes sense to an individual, because when they test their own worldview for semantic coherence, logical consistency, or empirical adequacy, it seems to pass. In the case of "no-collapse" or "no-splitting" versions of Many Worlds, it seems that it often passes the subjective making-sense test because the individual is actually relying on ingredients borrowed from the Copenhagen interpretation. A semi-technical example would be the coefficients of a reduced density matrix. In the Copenhagen interpretation, they are probabilities. Because they have the mathematical attributes of probabilities (by this I just mean that they lie between 0 and 1), and because they can be obtained by strictly mathematical manipulations of the quantities composing the wavefunction, Many Worlds advocates tend to treat these quantities as inherently being probabilities, and use their "existence" as a way to obtain the Born probability rule from the ontology of "wavefunction yes, wavefunction collapse no". But the fact that something is a real number between 0 and 1 doesn't yet explain how it manages to be a probability. In particular, I would maintain that if you have a multiverse theory, in which all possibilities are actual, then a probability must refer to a frequency. The probability of an event in the multiverse is simply how often it occurs in the multiverse. And clearly, just having the number 0.5 associated with a particular multiverse branch is not yet the same thing as showing that the events in that branch occur half the time.

I don't have a good name for this phenomenon, but we could call it "borrowed support", in which a belief system receives support from considerations which aren't legitimately its own to claim. (Ayn Rand apparently talked about a similar notion of "borrowed concepts".)
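To make the semi-technical example concrete, here is a minimal sketch (my own toy two-qubit state, not anything from the discussion above) showing that the diagonal coefficients of a reduced density matrix are indeed real numbers between 0 and 1, while nothing in the computation itself shows them to be frequencies of anything:

```python
import numpy as np

# |psi> = a|00> + b|11>, with a^2 + b^2 = 1
a, b = np.sqrt(0.5), np.sqrt(0.5)
psi = np.array([a, 0, 0, b])              # basis order: 00, 01, 10, 11

rho = np.outer(psi, psi.conj())           # density matrix of the full state
# Trace out the second subsystem to get the reduced density matrix.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.diag(rho_A).real)                # [0.5 0.5]: numbers in [0, 1]
```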

Third, there is a possibility among people who have a capacity for highly abstract thought, to adopt an ideology, ontology, or "theory of everything" which is only expressed in those abstract terms, and to then treat that theory as the whole of reality, in a way that reifies the abstractions. This is a highly specific form of treating the map as the territory, peculiar to abstract thinkers. When someone says that reality is made of numbers, or made of computations, this is at work. In the case at hand, we're talking about a theory of physics, but the ontology of that theory is incompatible with the definiteness of one's own existence. My guess is that the main psychological factor at work here is intoxication with the feeling that one understands reality totally and in its essence. The universe has bowed to the imperial ego; one may not literally direct the stars in their courses, but one has known the essence of things. Combine that intoxication, with "borrowed support" and with the simple failure to think hard enough about where on the map the imperial ego itself might be located, and maybe you have a comprehensive explanation of how people manage to believe theories of reality which are flatly inconsistent with the most basic features of subjective experience.

I should also say something about Emile's example of the ink blots. I find it rather superficial to just say "there's no definite number of blots". To say that the number of blots depends on definition is a lot closer to being true, but that undermines the argument, because that opens the possibility that there is a right definition of "world", and many wrong definitions, and that the true number of worlds is just the number of worlds according to the right definition.

Emile's picture can be used for the opposite purpose. All we have to do is to scrutinize, more closely, what it actually is. It's a JPEG that is 314 pixels by 410 pixels in size. Each of those pixels will have an exact color coding. So clearly we can be entirely objective in the way we approach this question; all we have to do is be precise in our concepts, and engage with the genuine details of the object under discussion. Presumably the image is a scan of a physical object, but even in that case, we can be precise - it's made of atoms, they are particular atoms, we can make objective distinctions on the basis of contiguity and bonding between these atoms, and so the question will have an objective answer, if we bother to be sufficiently precise. The same goes for "worlds" or "branches" in a wavefunction. And the truly pernicious thing about this version of Many Worlds is that it prevents such inquiry. The ideology that tolerates vagueness about worlds serves to protect the proposed ontology from necessary scrutiny.
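For what it's worth, the point that precision makes the count objective can be made mechanical. A sketch, assuming a hypothetical local copy of the scanned image (the file name, the ink threshold, and the connectivity rule are all definition choices I am inventing):

```python
import numpy as np
from PIL import Image
from scipy import ndimage

img = np.asarray(Image.open("blots.jpg").convert("L"))  # 314x410 grayscale pixels
ink = img < 128                        # definition choice: which pixels count as ink
labels, n_blots = ndimage.label(ink)   # definition choice: group ink pixels by 4-connectivity
print(n_blots)                         # given those definitions, a definite number
```

Change the threshold or the connectivity rule and you get a different number, but each precise definition yields exactly one answer.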

The same may be said, on a broader scale, of the practice of "dissolving a wrong question". That is a gambit which should be used sparingly and cautiously, because it easily serves to instead justify the dismissal of a legitimate question. A community trained to dismiss questions may never even notice the gaping holes in its belief system, because the lines of inquiry which lead towards those holes are already dismissed as invalid, undefined, unnecessary. smk came to this topic fresh, and without a head cluttered with ideas about what questions are legitimate and what questions are illegitimate, and as a result managed to ask something which more knowledgeable people had already prematurely dismissed from their own minds.

Consider giving an explanation for your deletion this time around. "Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild"

3 Will_Newsome 08 July 2014 02:53AM

My stupid fanfic chapter was banned without explanation so I reposted it; somehow it was at +7 when it was deleted and I think silently deleting upvoted posts is a disservice to LessWrong. I requested that a justification be given in the comments if it were to be deleted again, so LessWrong readers could consider whether or not that justification is aligned with what they want from LessWrong. Also I would like to make clear that this fanfic is primarily a medium for explaining some ideas that people on LessWrong often ask me about; that it is also a lighthearted critique of Yudkowskyanism is secondary, and if need be I will change the premise so that the medium doesn't drown out the message. But really, I wouldn't think a lighthearted parody of a lighthearted parody would cause such offense.

 

The original post has been unbanned and can be found here, so I've edited this post to just be about the banning.

The Craft And The Community: The Basics: Apologizing

0 Ritalin 23 November 2013 04:55PM

Now, it is said we all here pride ourselves on our intelligence, rationality, and moral sense. It is also said, however, that we are a fiercely independent bunch, and that we can let this pride of ours get the better of us. There have also been comments that the live communities that appear at meetups provide much more positive interactions than what goes on in this site's discussions; this might merit further investigation.

My point is: we've done a lot of research on how to do proper ethical and metaethical calculations, and on how to achieve self-empowerment and deal with our own akrasia, which is awesome. We've also done some work on matters of gender equality, which is very positive as well. But I haven't seen us do anything about the basic details of human interaction, what one would call "politeness" and "basic human decency". And I think it might be useful if we started tackling these, for our own sakes, that of those who surround us, and that of easing our mission along, which is, as I understand it so far, to save the world (from existential risk (at the hands of (unfriendly and self-modifying) artificial intelligence)).

What inspired me to propose this post was a video I just saw from Hank Green of the famed and fabled vlogbrothers. I hold these two individuals in very high esteem, and I would expect many here to share my feelings about them, on account of their values and sensibilities largely overlapping with ours; namely the sense that intelligence, knowledge and curiosity are awesome, and that intellectuals ought to use their power to help improve themselves and the world around them.

Here it is; I hope you enjoy it.

 

 

The Cause of Time

0 johnswentworth 05 October 2013 02:56AM

In a recent comment, I suggested that correlations between seemingly unrelated periodic time series share a common cause: time. However, the math disagrees... and suggests a surprising alternative.

Imagine that we took measurements from a thermometer on my window and a ridiculously large tuning fork over several years. The first set of data is temperature T over time t, so it looks like a list of data points [(t0, T0), (t1, T1), ...]. The second set of data is mechanical strain e in the tuning fork over time, so it looks like a list of data points [(t0, e0), (t1, e1), ...]. We line up the temperature and strain data according to time, yielding [(T0, e0), (T1, e1), ...] and find a significant correlation between the two, since they happen to have similar periodicity.

Recalling Judea Pearl, we suggest that there is almost certainly some causal relationship between the temperature outside the window and the strain in the ridiculously large tuning fork. Common sense suggests that neither causes the other, so perhaps they have some common cause? The only other variable in the problem is time, so perhaps time is the common cause. This sort of makes sense, since changes in time intuitively seem to cause the changes in temperature and strain.

Let's check that intuition with some math. First, imagine that we ignore the time data. Now we just have a bunch of temperature data points [T0, T1, ...] and strain data points [e0, e1, ...]. In fact, in order to truly ignore time data, we cannot even order the points according to time! But that means that we no longer have any way to line up the points T0 with e0, T1 with e1, etc. Without any way to match up temperature points to corresponding strain points, the temperature and strain data are randomly ordered, and the correlation disappears!

We have just performed a d-separation. When time t was known (i.e., controlled for), the variables T and e were correlated. But when t was unknown, the variables were uncorrelated. Now, let's wave our hands a little and equate correlation with dependence. If time were a common cause of temperature and strain, then we should see that T and e are correlated without knowledge of time, but the correlation disappears when controlling for time. However, we see exactly the opposite structure: controlling for t induces the correlation. This pattern is called a "collider", and it implies that time is a common effect of temperature and strain. Rather than time causing the oscillations in our time series, the oscillations in our time series cause time.
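The claimed disappearance and reappearance of the correlation is easy to check numerically. A minimal sketch (the synthetic series and all parameters are my own, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)

# Two noisy periodic series with similar periodicity.
T = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)        # "temperature"
e = np.sin(2 * np.pi * t + 0.2) + 0.1 * rng.standard_normal(t.size)  # "strain"

# Points paired up by their time stamps: strong correlation.
print(np.corrcoef(T, e)[0, 1])                   # close to 1

# Ignore time: with the correspondence destroyed, the correlation vanishes.
print(np.corrcoef(T, rng.permutation(e))[0, 1])  # close to 0
```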

Whoa. Now that the math has given us the answer, let's step back and try to make sense of it. Imagine that everything in the universe stopped moving for some time, and then went back to moving exactly as before. How could we measure how much time passed while the universe was stopped? We couldn't. For all practical purposes, if nothing changes, then time has stopped. Time, then, is an effect of motion, not vice versa. This is an old idea from philosophy/physics (I think I originally read it in one of Stephen Hawking's books). We've just rederived it.

But we may still wonder: what caused the correlation between temperature and strain? A common effect cannot cause a correlation, so where did it come from? The answer is that there was never any correlation between temperature and strain to begin with. Given just the temperature and strain data, with no information about time (e.g. no ordering or correspondence between points), there was no correlation. The correlation was induced by controlling for time. So the correlation is only logical; there is no physical cause relating the two, at least within our model.

Magical Healing Powers

0 jkaufman 12 August 2012 03:19AM

Imagine you had magical healing powers. Sitting quietly with someone and holding their hand you could restore them to health. While this would be a wonderful ability to have, it would also be a hard one: any time you spent on something other than healing people would mean unnecessary suffering. How could you justify a pleasant dinner with your family or a relaxing weekend at the beach when that meant more people living in pain?

But you already have these powers. Through the magic of effective charity you can donate money to help people right now. The tradeoff remains: time you give yourself when you could be working means money you don't earn which then can't go to help the people who would most benefit from it.

(I don't think this means you should try for complete selflessness; you need to balance your needs against others'. But the balance should probably be a lot further towards others' than it currently is.)

Update 2012-08-12: this is a response to hearing people offline saying that if they had magical "help other people" powers then they should spend lots of time using them, without having considered that they already have non-magical "help other people" powers.

I also posted this on my blog

Creationism's effect on the progress of our understanding of evolution

0 AlexMennen 28 March 2011 08:36PM

Lynn Margulis argues that natural selection cannot provide a powerful enough evolutionary force to account for the punctuated equilibrium demonstrated in the fossil record. She proposes as an alternative that evolution is driven by changes in symbiotic relationships. I'm not a biologist, and I don't understand what exactly her theory means, so I'm not going to try to argue for or against it, but it got me thinking:

Evolutionary biologists cannot afford to let Margulis's theory become well-known and accepted as a mainstream theory, because that would create a rift in the pro-evolution camp, and creationists would be able to exploit this by combining Margulis's argument that natural selection cannot account for punctuated equilibrium with arguments by Neo-Darwinists against Margulis's theory to support their claim that evolution is false. This would be effective because many people would not understand that "we do not understand everything about how evolution works" does not imply "creationism is correct". Thus, many evolutionary biologists might feel that they have to be very careful to look like they do know everything about how evolution works. This could make it more difficult for them to spot aspects in which their assumptions about evolution are mistaken. Maybe the biggest damage caused by creationism is that it suppresses legitimate criticism of the current accepted models of evolution, besides spreading false information to the general public.

Again, I'm not arguing in favor of Margulis's theory in particular, but the statement "There exists at least one false fact about evolutionary biology that is accepted as true by a consensus of researchers in that field" seems fairly likely to be true.

Second Life creators to attempt to create AI

0 nick012000 09 January 2011 01:50PM

http://nwn.blogs.com/nwn/2010/02/philip-rosedale-ai.html

http://www.lovemachineinc.com/

Should I feel bad for hoping they'll fail? I do not want to see the sort of unFriendly AI that would be created after being raised on social interactions with pedophiles, Goreans, and furries. Seriously, those are some of the more prominent of the groups still on Second Life, and an AI that spends its formative period interacting with them (and the first two, especially) could develop a very twisted morality.

The raw-experience dogma: Dissolving the “qualia” problem

2 metaphysicist 16 September 2012 07:15PM

[Cross-posted.]

1. Defining the problem: The inverted spectrum

Philosophy has been called a preoccupation with the questions entertained by adolescents, and one adolescent favorite concerns our knowledge of other persons’ “private experience” (raw experience or qualia). A philosophers’ version is the “inverted spectrum”: how do I know you see “red” rather than “blue” when you see this red print? How could we tell when we each link the same terms to the same outward descriptions? We each will say “red” when we see the print, even if you really see “blue.”

The intuition that allows us to be different this way is the intuition of raw experience (or of qualia). Philosophers of mind have devoted considerable attention to reconciling the intuition that raw experience exists with the intuition that inverted-spectrum indeterminacy has unacceptable dualist implications making the mental realm publicly unobservable, but it’s time for nihilism about qualia, whose claim to exist rests solely on the strength of a prejudice.

A. Attempted solutions to the inverted spectrum.

One account would have us examine which parts of the brain are activated by each perception, but then we rely on an unverifiable correlation between brain structures and “private experience.” With only a single example of private experience—our own—we have no basis for knowing what makes private experience the same or different between persons.

A subtler response to the inverted spectrum is that red and blue as experiences are distinct because red looks “red” due to its being constituted by certain responses, such as affect. Red makes you alert and tense; blue, tranquil or maybe sad. What we call the experience of red, on this account, just is the sense of alertness, and other manifestations. The hope is that identical observable responses to appropriate wavelengths might explain qualitative redness. Then, we could discover we experience blue when others experience red by finding that we idiosyncratically become tranquil instead of alert when exposed to the long wavelengths constituting physical red. This complication doesn’t remove the radical uncertainty about experiential descriptions. Emotion only seems more capable than cognition of explaining raw experience because emotional events are memorable. The affect theory doesn't answer how an emotional reaction can constitute a raw subjective experience.

B. The “substitution bias” of solving the “easy problem of consciousness” instead of the “hard problem.”

As in those examples, attempts at analyzing raw experience commonly appeal to the substitution process that psychologist Daniel Kahneman discovered in many cognitive fallacies. Substitution is the unreflective replacement of a hard question with a related, easier one. In the philosophy of mind, the distinct questions are actually termed the "easy problem of consciousness" and the "hard problem of consciousness," and errors regarding consciousness typically are due to substituting the "easy problem" for the "hard," where the easy problem is to explain some function that typically accompanies "awareness." The philosopher might substitute knowledge of one's own brain processes for raw experience; or, as in the previous examples, experience's neural accompaniments or its affective accompaniments. Avoiding the "substitution bias" is particularly hard when dealing with raw awareness, an unarticulated intuition; articulating it is a present purpose.

2. The false intuition of direct awareness

A. Our sense that the existence of raw experience is self-evident doesn’t show that it is true.

The theory that direct awareness reveals raw experience has long been almost sacrosanct in philosophy. According to the British Empiricists, direct experience consists of sense data and forms the indubitable basis of all synthetic knowledge. For the Continental Rationalist Descartes, too, my direct experience—"I think"—indubitably proves my existence.

We do have a strong intuition that we have raw experience, the substance of direct awareness, but we have other strong intuitions, some of which turn out true and others false. We have an intuition that space is necessarily flat, an intuition proven false only with non-Euclidean geometries in the 19th century. We have an intuition that every event has a cause, which determinists believe but indeterminists deny. Sequestered intuitions aren't knowledge.

B. Experience can’t reveal the error in the intuition that raw experience exists.

To correct wayward intuitions, we ordinarily test them against each other. A simple perceptual illusion illustrates: in the popular Müller-Lyer illusion, arrowheads on a line make it appear shorter than an identical line with the arrowheads reversed. Invoking the more credible intuition, that measuring the lines reveals their real length, convinces us that the intuition that the lines are unequal is an error. In contrast, we have no means to check the truth of the belief in raw experience; it simply seems self-evident, but it might seem equally self-evident if it were false.

C. We can’t capture the ineffable core of raw experience with language because there’s really nothing there.

One task in philosophy is articulating the intuitions implicit in our thinking, and sometimes the conclusion that an intuition employs concepts illogically should lead us to reject it. What shows that the intuition of raw experience is incoherent (self-contradictory or vacuous) is that the terms we use to describe raw experience are limited to the terms for its referents: we have no terms to describe the experience as such; rather, we describe qualia by applying terms denoting the ordinary cause of the supposed raw experience. The simplest explanation for the absence of a vocabulary to describe the qualitative properties of raw experience is that they don't exist: a process without properties is conceptually vacuous.

D. We believe raw experience exists without detecting it.

One error in thinking about the existence of raw experience comes from confusing perception with belief, which is conceptually distinct. When people universally report that qualia "seem" to exist, they are only reporting their beliefs—despite their sense of certainty. Where "perception" is defined as a nervous system's extraction of a sensory array's features, people can't report their perceptions except through the beliefs the perceptions sometimes engender: I can't tell you my perceptions except by relating my beliefs about them. This conceptual truth is illustrated by the phenomenon of blindsight, a condition in which patients report complete blindness yet, by discriminating external objects, demonstrate that they can perceive them. Blindsighted patients can report only according to their beliefs, and they perceive more than they believe and report that they perceive. Qualia nihilism analyzes the intuition of raw experience as perceiving less than you believe and report you perceive, the reverse of blindsight.

3. The conceptual economy of qualia nihilism pays off in philosophical progress

Eliminating raw experience from ontology produces conceptual economy. A summary of its conceptual advantages:

A. Qualia nihilism resolves an intractable problem for materialism: physical concepts are dispositional, whereas raw experiences concern properties that seem, instead, to pertain to noncausal essences. If raw experience were coherent, we could hope for a scientific insight, although no one has been able to define the general character of such an explanation. Removing a fundamental scientific mystery is a conceptual gain.

B. Qualia nihilism resolves the private-language problem. There seems to be no possible language that uses nonpublic concepts. Eliminating raw experience allows explaining the absence of a private language by the nonexistence of any private referents.

C. Qualia nihilism offers a compelling diagnosis of where important skeptical arguments regarding the possibility of knowledge go wrong. The arguments—George Berkeley's are their prototype—reason that sense data, being indubitable intuitions of direct experience, are the source of our knowledge, which must, in consequence, be about raw experience rather than the "external world." If you accept the existence of raw experience, the argument is notoriously difficult to undermine logically, because concepts of "raw experience" truly can't be analogized to any concepts applying to the external world. Eliminating raw experience provides an effective demolition; rather than the other way around, our belief in raw experience depends on our knowledge of the external world, which is the source of the concepts we apply to fabricate qualia.

4. Relying on the brute force of an intuition is rationally specious.

Against these considerations, the only argument for retaining raw experience in our ontology is the sheer strength of everyone’s belief in its existence. How much weight should we attach to a strong belief whose validity we can't check? None. Beliefs ordinarily earn a presumption of truth from the absence of empirical challenge, but when empirical challenge is impossible in principle, the belief deserves no confidence.

Torture vs Dust Specks Yet Again

-1 sentientplatypus 20 August 2013 12:06PM

The first time I read Torture vs. Specks about a year ago I didn't read a single comment because I assumed the article was making a point that simply multiplying can sometimes get you the wrong answer to a problem. I seem to have had a different "obvious answer" in mind.

And don't get me wrong, I generally agree with the idea that math can do better than moral intuition in deciding questions of ethics. Take this example from Eliezer’s post Circular Altruism which made me realize that I had assumed wrong:

Suppose that a disease, or a monster, or a war, or something, is killing people. And suppose you only have enough resources to implement one of the following two options:
1. Save 400 lives, with certainty.
2. Save 500 lives, with 90% probability; save no lives, 10% probability.

I agree completely that you pick number 2 (expected lives saved: 0.9 × 500 = 450, versus 400). For me that was just manifestly obvious: of course the math trumps the feeling that you shouldn't gamble with people's lives…but then we get to torture vs. dust specks, and that just did not compute. So I've read most every argument I could find in favor of torture (there are a great many, and I might have missed something critical), but...while I totally understand the argument (I think) I'm still horrified that people would choose torture over dust specks.

I feel that the way math overrides intuition begins to fall apart when the problem compares trivial individual suffering with massive individual suffering, in a way very much analogous to the way in which Pascal's Mugging stops working when you make the credibility really low but the threat really high. Like this. Except I find the answer to torture vs. dust specks to be much easier...

 

Let me give some examples to illustrate my point.

Can you imagine Harry killing Hermione because Voldemort threatened to plague all sentient life with one barely noticed dust speck each day for the rest of time? Can you imagine killing your own best friend/significant other/loved one to stop the powers of the Matrix from hitting 3^^^3 sentient beings with nearly inconsequential dust specks? Of course not. No. Snap decision.

Eliezer, would you seriously, given the choice by Alpha, the Alien superintelligence that always carries out its threats, give up all your work and horribly torture some innocent person, all day for fifty years, in the face of the threat of 3^^^3 insignificant dust specks barely inconveniencing sentient beings? Or be tortured for fifty years to avoid the dust specks?

I realize that this is much more personally specific than the original question: but it is someone's loved one, someone's life. And if you wouldn't make the sacrifice what right do you have to say someone else should make it? I feel as though if you want to argue that torture for fifty years is better than 3^^^3 barely noticeable inconveniences you had better well be willing to make that sacrifice yourself.

And I can’t conceive of anyone actually sacrificing their life, or themselves to save the world from dust specks. Maybe I'm committing the typical mind fallacy in believing that no one is that ridiculously altruistic, but does anyone want an Artificial Intelligence that will potentially sacrifice them if it will deal with the universe’s dust speck problem or some equally widespread and trivial equivalent? I most certainly object to the creation of that AI. An AI that sacrifices me to save two others - I wouldn't like that, certainly, but I still think the AI should probably do it if it thinks their lives are of more value. But dust specks on the other hand....

This example made me immediately think that some sort of rule is needed to limit morality coming from math in the development of any AI program. When the suffering per individual falls below a certain level and is multiplied by an unreasonably large number, it needs to take some kind of huge penalty, because otherwise an AI would find it vastly preferable that the whole of Earth be blown up than that 3^^^3 people suffer a mild slap to the face.
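Here is one crude shape such a penalty could take, as a sketch only (the threshold, the cap, and the whole functional form are toy choices of mine, not a worked-out proposal): below some per-individual severity, the aggregate saturates instead of growing without bound.

```python
def aggregate_disutility(severity, n_victims, threshold=1.0, cap=1e6):
    """Toy bounded aggregation: suffering below `threshold` stops adding
    up past `cap` victims, so no number of dust specks outweighs torture."""
    if severity >= threshold:
        return severity * n_victims        # ordinary utilitarian sum
    return severity * min(n_victims, cap)  # trivial harms saturate

print(aggregate_disutility(50.0, 1))        # torture: 50.0
print(aggregate_disutility(1e-9, 10**100))  # specks, even for a huge n: 0.001
```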

And really, I don’t think we want to create an Artificial Intelligence that would do that.

I’m mainly just concerned that some factor be incorporated into the design of any Artificial Intelligence that prevents it from murdering myself and others for trivial but widespread causes. Because that just sounds like a sci-fi book of how superintelligence could go horribly wrong.

I may have just had a dangerous thought.

0 Eitan_Zohar 22 September 2014 08:04PM

I'm interested in discussing this with someone, non-publicly. It's safe to know about personally, but it's not something I'd like people in general to know.

I'm really not sure if there is a protocol for this sort of thing.

Improving Cryonics - Regulations and Ethical Considerations

0 [deleted] 14 May 2013 09:54PM

Here is my understanding - correct me if I'm wrong:

Cryonics is only allowed once a person is determined legally dead: when the heart stops beating.

One of the reasons why they have to be dead seems to be that the majority of the population consider cryonics to be a death sentence, as there is no guarantee at this time that subjects can be revived - regardless of whether there's a cure for whatever ailment caused a person's death.

It is difficult at this time to improve the revival process, as the patients - or clients - are incapable of surviving: their bodies were already in the process of shutting down, and we do not have the technology to bring them fully back.

 

Now, to some conjecturing.

 

We might be able to more reasonably test the effectiveness of procedures to revive current patients if we had healthier people, ones not yet at death's door.

Here's where the ethical dilemma hits home: we could use people who are in good health, here defined as 'not terminally-ill or otherwise dying from health complications in the near future,' who are already intending to end their life. Simply stated, those who are suicidal.

For all intents and purposes they would cease to exist, which would be part of the appeal to that subgroup. At this time there is a probability of them dying from the procedure, which should be ok, as they were self-destructing anyway. And if they don't die, they get the chance to reflect on their life or go at it again. In this way their death would be more beneficial to the whole.

The benefits to this would be the additional research into the effects of cryonics on the body and how to develop a procedure to guarantee that you CAN be revived once put under.

I am aware of a couple of problems - legal complications, how to find willing participants, etc. - and am thinking of ways to resolve them.

I've just been thinking about this for the past week or so and wanted additional insight. Thoughts?

 

***On Suicide

For those opposed to suicide: this idea does not encourage people to kill themselves. Rather, it provides those who are already intent upon ending their existence a means to do so more honorably.

In case people have not read it, I recommend Schopenhauer's Essay on Suicide, found here: http://www.egs.edu/library/arthur-schopenhauer/articles/essays-of-schopenhauer/on-suicide/

Donation tradeoffs in conscientious objection

0 [deleted] 27 December 2012 05:23PM

Suppose that you believe larger scale wars than current US military campaigns are looming in the next decade or two (this may be highly improbable, but let's condition on it for the moment). If you thought further that a military draft or other forms of conscription might be used, and you wanted to avoid military service if that situation arose, what steps should you take now to give yourself a high likelihood of being declared a conscientious objector?

I don't have numbers to back any of this up, but I am in the process of compiling them. My general thought is to break down the problem like so: Pr(serious injury or death | conscription) * Pr(conscription | my conscientious objector behavior & geopolitical conditions ripe for war) * Pr(geopolitical conditions ripe for war), assuming some conscientious objector behavior (or mixture distribution over several behaviors).

If I feel that Pr(serious injury or death | conscription) and Pr(geopolitical conditions ripe for war) are sufficiently high, then I might be motivated to pay some costs in order to drive Pr(conscription | my conscientious objector behavior) very low.
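Just to show the shape of the calculation, here is the decomposition with placeholder numbers plugged in (all three probabilities are made up; as I said, I don't have the statistics yet):

```python
p_harm_given_conscription = 0.10  # Pr(serious injury or death | conscription)
p_conscription = 0.05             # Pr(conscription | my CO behavior & conditions ripe for war)
p_ripe = 0.20                     # Pr(geopolitical conditions ripe for war)

p_harm = p_harm_given_conscription * p_conscription * p_ripe
print(p_harm)                     # 0.001 under these placeholder numbers
```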

There's a funny bit in the American version of the show The Office where the manager, Michael, is concerned about his large credit card debt. The accountant, Oscar, mentions that declaring bankruptcy is an option, and so Michael walks out into the main office area and yells, "I DECLARE BANKRUPTCY!"

In a similar vein, I don't think that draft boards will accept the "excuse" that a given person has "merely" frequently expressed pacifist views. So if someone wants to robustly signal that she or he is a conscientious objector, what to do? In my ~30 minutes of searching, I've found a few organizations that, on first glance, look worthy of further investigation and perhaps regular donations.

Here are the few I've focused on most:

Center on Conscience and War

Coffee Strong

War-Resister's International

 

The problems I'm thinking about along these lines include:

  1. Whether or not the donation cost is worth it. There's no Giving What We Can type measure for this as far as I can tell, and even though I know from family experience that veteran mental illness can be very bad, I'm not convinced that donations to the above organizations provide a lot of QALY bang for the buck.
  2. Another component of bang for the buck is how much the donation will credibly signal that I actually am a serious conscientious objector. If I donate and then a draft board chooses to ignore it, it would be totally wasted. But if I think that 'going to war' is highly correlated with very significant negative outcomes, then just as with cryonics, I might feel that such costs are worth it even for a small probability of avoiding a combat environment.
  3. Even assuming that I resolve 1 & 2, there's the problem of trading off these donations with other donations that I make. In a self-interested line of thinking, I might forgo my current donations to places like SIAI or Against Malaria because, good as those are, they may not offer the same shorter-term benefits to me as purchasing a conscientious objector signal.

 

I'm curious if others have thought about this. Good literature references are welcome. My plan is to compile statistics that let me make reasonable estimates of the different conditional probabilities.

 

Addendum

Several people seem very concerned with the signal-faker aspect of this question. I don't understand the preoccupation with it, and I am tired of trying to justify the question to people who only care about that aspect. So I'll just add this copy of one of my comments from below. Hopefully this gives some additional perspective, though I don't expect it to change anyone's mind. I still stand by the post as-is: it's asking a conditional question based on sincere belief. Even if the answer would be of interest to fakers too, that alone doesn't make the faker explanation more likely; and even if that explanation were more likely, it wouldn't make the question unworthy of thoughtful answers.

Here's the promised comment:

... my question is conditional. Assume that you already sincerely believe in conscientious objection, in the sense of a personal ideology you could describe to a draft board. Now that we're conditioning on that, and we assume already that your primary goal is to avoid causing harm or death... then further ask what behaviors might best generate the kinds of signals that will convince a draft board. Merely having actual pacifist beliefs is not enough. Someone could have those beliefs but act in ways that poorly communicate them to a draft board. Someone else could have those beliefs and act in ways that communicate them more successfully. To whatever extent there are behaviors beyond simply giving an account of one's ideology, I am asking for an analysis of their effectiveness.

I really think my question is pretty simple. Assume your goal is genuine pacifism but that you're worried this won't convince a draft board. What should you do? Is donation a good idea? Yes, these could be questions a faker would ask. So what? They could also be questions a sincere person would ask, and I don't see any reason for all the downvoting or questions about signal faking. Why not just do the thought experiment where you assume that you are first a sincere conscientious objector and second a person concerned about draft board odds?

Stated another way:

1) Avoiding combat where I cause harm or death is the first priority, so if I have to go to jail or shoot myself in the foot to avoid it, so be it; if it comes to that, it's what I'll do. This is priority number one.

2) I can do things to improve my odds of never needing to face the situation described in (1) and to the extent that the behaviors are expedient (in a cost-benefit tradeoff sense) to do in my life, I'd like to do them now to help improve odds of (1)-avoidance later. Note that this in no way conflicts with being a genuine pacifist. It's just common sense. Yes, I'll avoid combat in costly ways if I have to. But I'd also be stupid to not even explore less costly ways to invest in combat-avoidance that could be better for me.

3) To the extent that (2) is true, I'd like to examine certain options, like donating to charities that assist with legal issues in conscientious objection, or which extend mental illness help to affected veterans, for their efficacy. There is still a cost to these things and given my conscientious objection preferences, I ought to weigh that cost.

 

What are the boundaries?

0 Stabilizer 26 July 2012 08:15AM

Computer science and information theory were separate from physics. Not anymore. People realized that information had to be physical and this had profound consequences, especially in the form of quantum information/computation.

Psychology and economics were separate. Not anymore. People realized that humans were the core of economic systems and their behaviors fundamentally shape the nature of economies, even at the largest scales. Note the rise of behavioral economics.

Neuroscience and computer science were separate. Not anymore. People realized that thinking about the brain as a computer is probably the best possible abstraction to understand it. 

Reality exists. There are no intrinsic boundaries in reality. All fields of study are created by humans. But these divisions seem so natural that nobody realizes that the boundaries have to dissolve. The fields have to collide. And when we realize that--or finally have the language and ideas to meaningfully talk about it--we find all kinds of crazy, cool stuff.

So: what collisions are we currently blind to?

 

 

[post redacted]

0 Will_Newsome 26 January 2012 01:30AM

[Post redacted 'cuz I unfairly and carelessly misrepresented someone's views (Eliezer's). The message of this post was: disbelief that aliens visit Earth in spaceships is a bad reason not to look into ufology. My apologies for this ugly post.]

Is love a good idea?

1 adamzerner 22 February 2014 06:59AM

I've searched around on LW for this question, and haven't seen it brought up. Which surprises me, because I think it's an important question.

I'm honestly not sure what I think. On one hand, love clearly leads to an element of happiness when done properly. This seems to be inescapable, probably because it's encoded in our DNA or something. But on the other hand, there are two things that really make me question whether or not love is a good idea.

1) I have a very reductionist viewpoint, on everything. So I always ask myself, "What am I really trying to optimize here, and what is the best way to optimize it?". When I think about it, I come to the conclusion that I'm always trying to optimize my happiness. The answer to the question of, "why does this matter?" is always, "because it makes me happy". So then, the idea of love bothers me, because you sort of throw rational thinking out the window, stop asking why something actually matters, and just decide that this significant other intrinsically matters to you. I question whether this type of thinking is optimal, and personally, whether or not I'm even capable of it.

2) It seems so obsessive, and I question whether or not it makes sense to obsess so much over one thing. This article actually explores the brain chemicals involved in love, and suggests that the chemicals are similar to those that appear in OCD.

Finally, there's the issue of permanence. Not all love is intended to be permanent, but a lot of the time it is. How can you commit to something so permanently? This makes me think of the mind projection fallacy. Perhaps people commit it with love. They think that the object of their desire is intrinsically desirable, when in fact it is the properties of this object that make it desirable. These properties are far from permanent (I'd go as far as to say that they're volatile, at least if you take the long view). So how does it make sense to commit to something so permanently?

So my take is that there is probably a form of love that it is rational to adopt. Something along the lines of enjoying each other's company, and caring for one another and stuff, but not being blindly committed to one another, and being honest about the fact that you wouldn't do anything for one another, and will in fact probably grow apart at some point.

What do you guys think? 

Buridan's ass and the psychological origins of objective probability

1 common_law 30 March 2013 09:43AM

[Crossposted]

The medieval philosopher Buridan reportedly constructed a thought experiment to support his view that human behavior was determined rather than “free” - hence rational agents couldn't choose between two equally good alternatives. In the Buridan's Ass Paradox, an ass finds itself between two equal, equidistant bales of hay, noticed simultaneously; the bales' distance and size are the only variables influencing the ass's behavior. Under these idealized conditions, the ass must starve, its predicament indistinguishable from that of a physical object suspended between opposite forces, such as a planet that neither falls into the sun nor escapes into outer space. (Since the ass served Buridan as a metaphor for the human agent, in what follows I speak of “ass” and “agent” interchangeably.)

Computer scientist Leslie Lamport formalized the paradox as “Buridan's Principle,” which states that the ass will starve if it is situated in a range of possibilities that includes midpoints where the two opposing forces are equal, and it must choose within a sufficiently short time span. We assume, based on a principle of physical continuity, that the larger one bale of hay is compared to the other, the faster the ass will be able to decide. Since this is true on the left and on the right, at the midpoint, where the bales are equal, symmetry requires an infinite decision time. Conclusion: within some range of bale comparisons, the ass will require a decision time greater than a given bounded time interval. (For rigorous treatment, see Buridan's Principle (1984).)
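
Lamport's continuity argument can be illustrated with a toy model (my own construction, not his formalism): let decision time vary continuously and inversely with the difference between the bales. Then for any fixed time bound, there is a range of near-equal bales within which the ass fails to decide in time:

    # Toy model: decision time grows as the inverse of the bale difference,
    # diverging at equality. The constant k is an arbitrary choice.
    def decision_time(left, right, k=1.0):
        diff = abs(left - right)
        return float("inf") if diff == 0 else k / diff

    time_bound = 10.0  # the ass must decide within this interval
    for diff in [1.0, 0.5, 0.1, 0.05, 0.01]:
        t = decision_time(1.0 + diff, 1.0)
        print(diff, t, "starves" if t > time_bound else "decides")

Any difference smaller than k / time_bound pushes the ass past the bound; the starvation region always exists, however narrow.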

Buridan’s Principle is counterintuitive, as Lamport discovered when he first tried to publish. Among the objections to Buridan’s Principle summarized by Lamport, the main objection provides an insight about the source of the mind-projection fallacy, which treats probability as a feature of the world. The most common objection is that when the agent can’t decide it may use a default metarule. Lamport points out this substitutes another decision subject to the same limits: the agent must decide that it can’t decide. My point differs from that of Lamport, who proves that binary decisions in the face of continuous inputs are unavoidable and that with minimal assumptions they preclude deciding in bounded time; whereas I draw a stronger conclusion: no decision is substitutable when you adhere strictly to the problem’s conditions specifying that the agent be equally balanced between the options. Any inclination to substitute a different decision is a bias toward making the decision that the substitute decision entails. In the simplest variant, the ass may use the rule: turn left when you can’t decide, potentially entrapping it in the limbo between deciding whether it can’t decide. If the ass has a metarule resolving conflicting to favor the left, it has an extraneous bias.

Lamport’s analysis discerns a kind of physical law; mine elucidates the origins of the mind-projection fallacy. What’s psychologically telling is that the most common metarule is to decide at random. But if by random we mean only apparently random, the strategy still doesn’t free the ass from its straightjacket. If it flips a coin, an agent is, in fact, biased toward whatever the coin will dictate, bias, here, means an inclination to use means causally connected with a certain outcome, but the coin flip’s apparent randomness is due to our ignorance of microconditions; truly random responding would allow the agent to circumvent the paradox’s conditions. The theory that the agent might use a random strategy expresses the intuition that the agent could turn either way. It seems a route to where the opposites of functioning according to physical law and acting “freely” in perceived self-interest are reconciled.

This false reconciliation comes through confusing two kinds of symmetry: the epistemic symmetry of “chance” events and the dynamic symmetry in the Buridan’s ass paradox. If you flip a coin, the symmetry of the coin (along with your lack of control over the flip) is what makes your reasons for preferring heads and tails equivalent, justifying assigning each the same probability. We encounter another symmetry with Buridan’s ass, where we also have the same reason to think the ass will turn in either direction. Since the intuition of “free will” precludes impossible decisions, we construe our epistemic uncertainty as describing a decision that’s possible but inherently uncertain.

When we conceive of the ass as a purely physical process subject to two opposite forces (which, of course, it is), it's obvious that the ass can be “stuck.” What miscues intuition is that the ass need not be confined to one decision rule. But if by hypothesis it is confined to one rule, the rule may preclude decision. This hypothetical is made relevant by the necessity of there being some ultimate decision rule.

The intuitive physics of an agent that can’t get stuck entails: a) two equal forces act on an object producing an equilibrium; b) without breaking the equilibrium, an additional natural law is added specifying that the ass will turn. Rather than conclude this is impossible, intuition “resolves” the contradiction through conceiving that the ass will go in each direction half the time: the probability of either course is deemed .5. Confusion of kinds of symmetry, fueled by the intuition of free will, makes Buridan’s Principle counter-intuitive and objective probabilities intuitive.

How do we know that reality can’t be like this intuitive physics? We know because realizing a and b would mean that the physical forces involved don’t vary continuously. It would make an exception, a kind of singularity, of the midpoint.  

 

Bayesian Epistemology vs Popper

-1 curi 06 April 2011 11:50PM

 

 

I was directed to this book (http://www-biba.inrialpes.fr/Jaynes/prob.html) in conversation here:

http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3ug7?context=1#3ug7

I was told it had a proof of Bayesian epistemology in the first two chapters. One of the things we were discussing is Popper's epistemology.

Here are those chapters:

http://www-biba.inrialpes.fr/Jaynes/cc01p.pdf

http://www-biba.inrialpes.fr/Jaynes/cc02m.pdf

I have not found any proof here that Bayesian epistemology is correct. There is not even an attempt to prove it. Various things are assumed in the first chapter. In the second chapter, some things are proven given those assumptions.

Some first chapter assumptions are incorrect or unargued. It begins with an example involving a policeman, and says his conclusion is not a logical deduction because the evidence is logically consistent with his conclusion being false. I agree so far. Next it says "we will grant that it had a certain degree of validity". But I will not grant that. Popper's epistemology explains that *this is a mistake* (and Jaynes makes no attempt at all to address Popper's arguments). In any case, simply assuming his readers will grant his substantive claims is no way to argue.

The next sentences blithely assert that we all reason in this way. Jaynes is basically presenting the issues of this kind of reasoning as his topic. This simply ignores Popper and makes no attempt to prove Jaynes' approach correct.

Jaynes goes on to give syllogisms, which he calls "weaker" than deduction, and which he acknowledges are not deductively correct. And then he just says we use that kind of reasoning all the time. That sort of assertion only appeals to the already converted. Jaynes starts with arguments which appeal to the *intuition* of his readers, not with arguments which could persuade someone who disagreed with him (that is, good rational arguments). Later, when he gets into more mathematical material which doesn't (directly) rest on appeals to intuition, it does rest on the ideas he (supposedly) established early on with those appeals to intuition.

The outline of the approach here is to gloss quickly over substantive philosophical assumptions, never providing serious arguments for them, taking them as common sense without detailing them, and then later to provide arguments which are rigorous *given the assumptions glossed over earlier*. This is a mistake.

So we get, e.g., a section on Boolean Algebra which says it will state previous ideas more formally. This briefly acknowledges that the rigorous parts depend on the non-rigorous parts. Also, the very important problem of carefully detailing how the mathematical objects discussed correspond to the real-world things they are supposed to help us understand does not receive adequate attention.

Chapter 2 begins by saying we've now formulated our problem and the rest is just math. What I take from that is that the early assumptions won't be revisited but simply used as premises. So the rest is pointless if those early assumptions are mistaken, and Bayesian epistemology cannot be proven in this way to anyone who doesn't grant the assumptions (such as a Popperian).

Moving on to Popper, Jaynes is ignorant of the topic and unscholarly. He writes:

http://www-biba.inrialpes.fr/Jaynes/crefsv.pdf

> Karl Popper is famous mostly through making a career out of the doctrine that theories may not be proved true, only false

This is pure fiction. Popper is a fallibilist and said (repeatedly) that theories cannot be proved false (or anything else).

It's important to criticize unscholarly books promoting myths about rival philosophers rather than addressing their actual arguments. That's a major flaw not just in a particular paragraph but in the author's way of thinking. It's especially relevant in this case, since the author of the book tries to tell us how to think.

Note that Yudkowsky made a similar unscholarly mistake, about the same rival philosopher, here:

http://yudkowsky.net/rational/bayes

> Previously, the most popular philosophy of science was probably Karl Popper's falsificationism - this is the old philosophy that the Bayesian revolution is currently dethroning.  Karl Popper's idea that theories can be definitely falsified, but never definitely confirmed

Popper's philosophy is not falsificationism, it was never the most popular, and it is fallibilist: it says ideas cannot be definitely falsified. It's bad to make this kind of mistake about what a rival's basic claims are when claiming to be dethroning him. The correct method of dethroning a rival philosophy involves understanding what it does say and criticizing that.

If Bayesians wish to challenge Popper they should learn his ideas and address his arguments. For example he questioned the concept of positive support for ideas. Part of this argument involves asking the questions: 'What is support?' (This is not asking for its essential nature or a perfect definition, just to explain clearly and precisely what the support idea actually says) and 'What is the difference between "X supports Y" and "X is consistent with Y"?' If anyone has the answer, please tell me.

'Effective Altruism' as utilitarian equivocation.

1 Dias 24 November 2013 06:35PM

Summary: The term 'effective altruist' invites confusion between 'the right thing to do' and 'the thing that most efficiently promotes welfare.' I think this creeping utilitarianism is a bad thing, and should at least be made explicit. This is not to accuse anyone of deliberate deception.

Over the last year or so, the term 'Effective Altruist' has come into use. I self-identified as one on the LW survey, so I speak as a friend. However, I think there is a very big danger with the terminology.

The term 'Effective Altruist' was born out of the need for a label for those people who were willing to dedicate their lives to making the world a better place in rational ways, even if that meant doing counter-intuitive things, like working as an Alaskan truck driver. The previous term, 'really super awesome hardcore people', was indeed a little inelegant.

However, 'Effective Altruist' has a major problem: it refers to altruism, not ethics. Altruism may be a part of ethics (though the etymology of the term gives some concern), but it is not all there is to ethics. Value is complex. Helping people is good, but so is truth, and justice, and freedom, and beauty, and loyalty, and fairness, and honor, and fraternity, and tradition, and many other things.

A charity that very efficiently promoted beauty and justice, but only inefficiently produced happiness, would probably not be considered an EA organization. A while ago I suggested to [one of the leaders of the Center for Effective Altruism] the creation of a charity to promote promise-keeping. I didn't claim such a charity would be an optimal way of promoting happiness, and to them, this was sufficient to show 1) that it was not EA - and hence 2) that it was inferior to EA things.

Such thinking involves either an equivocation or a concealed premise. If 'EA' is interpreted literally, so that 'the primary/driving goal is to help others', then something not being EA is insufficient to show it is not the best thing you could do - there is more to ethics and the good than altruism and promoting welfare. Failure to promote one dimension of the good doesn't mean you're not the optimal way of promoting their sum. On the other hand, if 'EA' is interpreted broadly, as being concerned with 'happiness, health, justice, fairness and/or other values', then merely failing to promote welfare/happiness does not mean a cause is not EA. Much EA discussion, like that on the popular facebook group, equivocates between these two meanings.*

...Unless one thought that helping people was all there was to ethics, in which case this is not equivocation. As virtually all of CEA's leaders are utilitarians, it is plausible that this was the concealed premise in their argument. In that case there is no equivocation, but a different logical fallacy, that of an omitted premise, has been committed. And we should be just as wary as in the case of equivocation.

Unfortunately, utilitarianism is false, or at least not obviously true. Something can be the morally best thing to do while not being EA. Just because some utilitarians have popularized a term which cleverly equivocates between "promotes welfare" and "is the best thing" does not mean we should be taken in. Every fashionable ideology likes to blur the lines between its goals and its methods (is socialism about helping the working man or about state ownership of industry? is libertarianism about freedom or low taxes?) in order to make people who agree with the goals forget that there might be other means of achieving them.

There are two options: recognize 'EA' as referring to only a subset of morality, or recognize as 'EA' actions and organizations that are ethical through ways other than producing welfare/happiness.

* Yes, one might say that promoting X's honor thereby helped X, and thus there was no distinction. However, I think people who make this argument in theory are unlikely to observe it in practice - I doubt that there will be an EA organisation dedicated to pure retribution, even if it were both extremely cheap to promote and a part of ethics.

State your physical account of experienced color

-1 Mitchell_Porter 01 February 2012 07:00AM

Previous post: Does functionalism imply dualism? Next post: One last roll of the dice.

Don't worry, this sequence of increasingly annoying posts is almost over. But I think it's desirable that we try to establish, once and for all, how people here think color works, and whether they even think it exists.

The way I see it, there is a mental block at work. An obvious fact is being denied or evaded, because the conclusions are unpalatable. The obvious fact is that physics as we know it does not contain the colors that we see. By "physics" I don't just mean the entities that physicists talk about, I also mean anything that you can make out of them. I would encourage anyone who thinks they know what I mean, and who agrees with me on this point, to speak up and make it known that they agree. I don't mind being alone in this opinion, if that's how it is, but I think it's desirable to get some idea of whether LessWrong is genuinely 100% against the proposition.

Just so we're all on the same wavelength, I'll point to a specific example of color. Up at the top of this web page, the word "Less" appears. It's green. So, there is an example of a colored entity, right in front of anyone reading this page.

My thesis is that if you take a lot of point-particles, with no property except their location, and arrange them any way you want, there won't be anything that's green like that; and that the same applies for any physical theory with an ontology that doesn't explicitly include color. To me, this is just mindbogglingly obvious, like the fact that you can't get a letter by adding numbers.

At this point people start talking about neurons and gensyms and concept maps. The greenness isn't in the physical object, "computer screen", it's in the brain's response to the stimulus provided by light from the computer screen entering the eye.

My response is simple. Try to fix in your mind what the physical reality must be, behind your favorite neuro-cognitive explanation of greenness. Presumably it's something like "a whole lot of neurons, firing in a particular way". Try to imagine what that is physically, in terms of atoms. Imagine some vast molecular tinker-toy structures, shaped into a cluster of neurons, with traveling waves of ions crossing axonal membranes. Large numbers of atoms arranged in space, a few of them executing motions which are relevant for the information processing. Do you have that in your mind's eye? Now look up again at that word "Less", and remind yourself that according to your theory, the green shape that you are seeing is the same thing as some aspect of all those billions of colorless atoms in motion.

If your theory still makes sense to you, then please tell us in comments what aspect of the atoms in motion is actually green.

I only see three options. Deny that anything is actually green; become a dualist; or (supervillain voice) join me, and together, we can make a new ontology.

Does functionalism imply dualism?

-1 Mitchell_Porter 31 January 2012 03:43AM

This post follows on from Personal research update, and is followed by State your physical account of experienced color.

In a recent post, I claimed that functionalism about consciousness implies dualism. Since most functionalists think their philosophy is an alternative to dualism, I'd better present an argument.

But before I go further, I'll link to orthonormal's series on dissolving the problem of "Mary's Room": Seeing Red: Dissolving Mary's Room and Qualia, A Study of Scarlet: The Conscious Mental Graph, Nature: Red in Truth, and Qualia. Mary's Room is one of many thought experiments bandied about by philosophers in their attempts to say whether or not colors (and other qualia) are a problem for materialism, and orthonormal presents a computational attempt to get around the problem which is a good representative of the functionalist style of thought. I won't have anything to say about those articles at this stage (maybe in comments), but they can serve as an example of what I'm talking about. 

Now, though it may antagonize some people, I think it is best to start off by stating my position plainly and bluntly, rather than starting with a neutral discussion of what functionalism is and how it works, and then seeking to work my way from there to the unpopular conclusion. I will stick to the example of color to make my points - apologies to blind and colorblind readers.

My fundamental thesis is that color manifestly does exist - there are such things as shades of green, shades of red, etc - and that it manifestly does not exist in any standard sort of physical ontology. In an arrangement of point particles in space, there are no shades of green present. This is obviously true, and it's equally obvious for more complicated ontologies like fields, geometries, wavefunction multiverses, and so on. It's even part of the history of physics; even Galileo distinguished between primary qualities like location and shape, and secondary qualities like color. Primary qualities are out there and objectively present in the external world, secondary qualities are only in us, and physics will only concern itself with primary qualities. The ontological world of physical theory is colorless. (We may call light of a certain wavelength green light or red light, but that is because it produces an experience of seeing green or seeing red, not because the light itself is green or red in the original sense of those words.) And what has happened due to the progress of the natural sciences is that we now say that experiences are in brains, and brains are made of atoms, and atoms are described by a physics which does not contain color. So the secondary qualities have vanished entirely from this picture of the world; there is no opportunity for them to exist within us, because we are made of exactly the same stuff as the external world.

Yet the "secondary qualities" are there. They're all around us, in every experience. It really is this simple: colors exist in reality, they don't exist in theory, therefore the theory needs to be augmented or it needs to be changed. Dualism is an augmentation. My speculations about quantum monads are supposed to pave the way for a change. But I won't talk about that option here. Instead, I will try to talk about theories of consciousness which are meant to be compatible with physicalism - functionalism is one such theory.

Such a theory will necessarily present a candidate, however vague, for the physical correlate of an experience of color. One can then say that color exists without having to add anything to physics, because the color just is the proposed physical correlate. This doesn't work because the situation hasn't changed. If all you have are point particles whose only property is location, then individual particles do not have the property of being colored, nor do they have that property in conjunction. Identifying a physical correlate simply picks out a particular set of particles and says "there's your experience of color". But there's still nothing there that is green or red. You may accustom yourself to thinking of a particular material event, a particular rearrangement of atoms in space, as being the color, but that's just the power of habitual association at work. You are introducing into your concept of the event a property that is not inherently present in it.

It may be that one way people manage to avoid noticing this, is by an incomplete chain of thought. I might say: none of the objects in your physical theory are green. The happy materialist might say: but those aren't the things which are truly green in the sense you care about; the things which are green are parts of experiences, not the external objects. I say: fine. But experiences have to exist, right? And you say that physics is everything. So that must mean that experiences are some sort of physical object, and so it will be just as impossible for them to be truly green, given the ontological primitives we have to work with. But for some reason, this further deduction isn't made. Instead, it is accepted that objects in physical space aren't really green, but the objects of experience exist in some other "space", the space of subjective experience, and... it isn't explicitly said that objects there can be truly green, but somehow this difference between physical space and subjective space seems to help people be dualists without actually noticing it.

It is true that color exists in this context - a subjective space. Color always exists as part of an "experience". But physical ontology doesn't contain subjective space or conscious experience any more than it does contain color. What it can contain, are state machines which are structurally isomorphic to these things. So here we can finally identify how a functionalist theory of consciousness works psychologically: You single out some state machines in your physical description of the brain (like the networks in orthonormal's sequence of posts); in your imagination, you associate consciousness with certain states of such state machines, on the basis of structural isomorphism; and now you say, conscious states are those physical states. Subjective space is some neural topographic map, the subjectively experienced body is the sensorimotor homunculus, and so forth.

But if we stick to any standard notion of physical theory, all those brain parts still don't have any of the properties they need. There's no color there, there's no other space there, there's no observing agent. It's all just large numbers of atoms in motion. No-one is home and nothing is happening to them.

Clearly it is some sort of progress to have discovered, in one's physical picture of the world, the possibility of entities which are roughly isomorphic to experiences, colors, etc. But they are still not the same thing. Most of the modern turmoil of ideas about consciousness in philosophy and science is due to this gap - attempts to deny it, attempts to do without noticing it, attempts to force people to notice it. orthonormal's sequence, for example, seems to be an attempt to exhibit a cognitive model for experiences and behaviors that you would expect if color exists, without having to suppose that color actually exists. If we were talking about a theoretical construct, this would be fine. We are under no obligation to believe that phlogiston exists, only to explain why people once talked about it.

But to extend this attitude to something that most of us are directly experiencing in almost every waking moment, is ... how can I put this? It's really something. I'd call it an act of intellectual desperation, except that people don't seem to feel desperate when they do it. They are just patiently explaining, recapitulating and elaborating, some "aha" moment they had back in their past, when functionalism made sense to them. My thesis is certainly that this sense of insight, of having dissolved the problem, is an illusion. The genuineness of the isomorphism between conscious state and coarse-grained physical state, and the work of several generations of materialist thinkers to develop ways of speaking which smoothly promote this isomorphism to an identity, combine to provide the sense that no problem remains to be solved. But all you have to do is attend for a moment to experience itself, and then to compare that to the picture of billions of colorless atoms in intricate motion through space, to realize that this is still dualism.

I promised not to promote the monads, but I will say this. The way to avoid dualism is to first understand consciousness as it is in itself, without the presupposition of materialism. Observe the structure of its states and the dynamics of its passage. That is what phenomenology is about. Then, sketch out an ontology of what you have observed. It doesn't have to contain everything in infinite detail, it can overlook some features. But I would say that at a minimum it needs to contain the triad of subject-object-aspect (which appears under various names in the history of philosophy). There are objects of awareness, they are being experienced within a common subjective space, and they are experienced in a certain aspect. Any theory of reality, whether or not it is materialist, must contain such an entity in order to be true.

The basic entity here is the experiencing subject. Conscious states are its states. And now we can begin to tackle the ontological status of state machines, as a candidate for the ontological category to which conscious beings belong.

State machines are abstracted descriptions. We say there's a thing, it has a set of possible states; here are the allowed transitions between them, and the conditions under which those transitions occur. Specify all that and we have specified a state machine. We don't care about why those are the states or why the transitions occur; those are irrelevant details.

A very simple state machine might be denoted by the state transition network "1<->2". There's a state labeled 1 and another state labeled 2. If the machine is in state 1, it proceeds to state 2, and the reverse is also true. This state machine is realized wherever you have something that oscillates between two states without stopping in either. First the earth is close to the sun, then it is far from the sun, then it is close again... The Earth in its orbit instantiates the state machine "1<->2". I get involved with Less Wrong, then I quit for a while, then I come back... My Internet habits also instantiate the state machine "1<->2".

A computer program is exactly like this, a state machine of great complexity (and usually its state transition rules contain some dependence on external conditions, like user input) which has been physically instantiated for use. But one cannot claim that its states have any intrinsic meaning, any more than I can claim that the state 1 in the oscillating state machine is intrinsically about the earth being close to the sun. This is not true, even if I write down the state transition network in the form "CloseToTheSun<->FarFromTheSun".
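
A minimal sketch makes the point about labels vivid (the code and names here are mine, purely illustrative):

    # The "1<->2" state machine: each step moves to the other state.
    class Oscillator:
        def __init__(self, state=1):
            self.state = state
        def step(self):
            self.state = 2 if self.state == 1 else 1
            return self.state

    earth = Oscillator()   # read its states as CloseToTheSun / FarFromTheSun
    habits = Oscillator()  # or as OnLessWrong / OffLessWrong
    print([earth.step() for _ in range(4)])  # [2, 1, 2, 1] under either reading

Nothing in the machine itself prefers one reading over the other; the meaning is supplied from outside.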

This is another ontological deficiency of functionalism. Mental states have meanings, thoughts are always about something, and what they are about is not the result of convention or of the needs of external users. This is yet another clue that the ontological status of conscious states is special, that their "substance" matters to what they are. Of course, this is a challenge to the philosophy which says that a detailed enough simulation of a brain will create a conscious person, regardless of the computational substrate. The only reason people believe this, is because they believe the brain itself is not a special substrate. But this is a judgment made on the basis of science that is still at a highly incomplete stage, and certainly I expect science to tell us something different by the time it's finished with the brain. The ontological problems of functionalism provide a strong apriori reason for this expectation.

What is more challenging is to form a conception of the elementary parts and relations that could form the basis of an alternative ontology. But we have to do this, and the impetus has to come from a phenomenological ontology of consciousness that is as precise as possible. Fortunately, a great start was made on this about 100 years ago, in the heyday of phenomenology as a philosophical movement.

A conscious mind is a state machine, in the sense that it has states and transitions between them. The states also have structure, because conscious experiences do have parts. But the ontological ties that combine those parts into the whole are poorly apprehended by our current concepts. When we try to reduce them to nothing but causal coupling or to the proximity in space of presumed physical correlates of those parts, we are, I believe, getting it wrong. Clearly cause and effect operates in the realm of consciousness, but it will take great care to state precisely and correctly the nature of the things which are interacting and the ways in which they do so. Consider the ability to tell apart different shades of color. It's not just that the colors are there; we know that they are there, and we are able to tell them apart. This implies a certain amount of causal structure. But the perilous step is to focus only on that causal structure, detach it from considerations of how things appear to be in themselves, and instead say "state machine, neurons doing computations, details interesting but not crucial to my understanding of reality". Somehow, in trying to understand conscious cognition, we must remain in touch with the ontology of consciousness as partially revealed in consciousness itself. The things which do the conscious computing must be things with the properties that we see in front of us, the properties of the objects of experience, such as color.

You know, color - authentic original color - has been banished from physical ontology for so long, that it sounds a little mad to say that there might be a physical entity which is actually green. But there has to be such an entity, whether or not you call it physical. Such an entity will always be embedded in a larger conscious experience, and that conscious experience will be embedded in a conscious being, like you. So we have plenty of clues to the true ontology; the clues are right in front of us; we're subjectively made of these clues. And we will not truly figure things out, unless we remain insistent that these inconvenient realities are in fact real.

Should we admit it when a person/group is "better" than another person/group?

0 adamzerner 16 February 2016 09:43AM

This sort of thinking seems bad:

me.INTRINSIC_WORTH = 99999999; // No matter what I do, this fixed property will remain constant.

This sort of thinking seems socially frowned upon, but accurate:

a.impactOnSociety(time) > b.impactOnSociety(time)

a.qualityOfCharacter > b.qualityOfCharacter // determined by things like altruism, grit, courage, self awareness...

Similar points could be made by replacing a/b with [group of people]. I think it's terrible to say something like:

This race is inherently better than that race. I refuse to change my mind, regardless of the evidence brought before me.

But to me, it doesn't seem wrong to say something like:

Based on what I've seen, I think that the median member of Group A has a higher qualityOfCharacter than the median member of Group B. I don't think there's anything inherently better about Group A. It's just based on what I've observed. If presented with enough evidence, I will change my mind.

Credit and accountability seem like good things to me, and so I want to live in a world where people/groups receive credit for good qualities, and are held accountable for bad qualities.

I'm not sure though. I could see that there are unintended consequences of such a world. For example, such "score keeping" could lead to contentiousness. And perhaps it's just something that we as a society (to generalize) can't handle, and thus shouldn't keep score.

Taking Effective Altruism Seriously

2 Salemicus 07 June 2015 06:59AM

Epistemic status: 90% confident.

Inspiration: Arjun Narayan, Tyler Cowen.

The noblest charity is to prevent a man from accepting charity, and the best alms are to show and enable a man to dispense with alms.

Moses Maimonides.

Background

Effective Altruism (EA) is "a philosophy and social movement that applies evidence and reason to determine the most effective ways to improve the world." Along with the related organisation GiveWell, it often focuses on getting the most "bang for your buck" in charitable donations. Unfortunately, despite their stated aims, their actual charitable recommendations are generally wasteful, such as cash transfers to poor Africans. This leads to the obvious question - how can we do better?

Doing better

One of the positive aspects of EA theory is its attempt to widen the scope of altruism beyond the traditional - for instance, to take into account catastrophic risks and the far future. However, altruism often produces a far-mode bias where intentions matter above results. This can be a particular problem for EA - for example, it is very hard to get evidence about how we are affecting the far future. An effective method needs to rely on a tight feedback loop between action and results, so that continual updates are possible. At the extreme, far mode operates in a manner where no updating on results takes place at all. However, it is also important that those results be of sufficient magnitude to justify the effort. EA has mostly fallen into the latter trap - achieving measurable results, but ones of no great consequence.

The population of sub-Saharan Africa is around 950 million people, and growing. They have been a prime target of aid for generations, but it remains the poorest region of the world. Providing cash transfers mostly just raises consumption, rather than substantially raising productivity. A truly altruistic program would enable the people in these countries to generate their own wealth so that they no longer needed charity - unconditional transfers, by contrast, are an idea so lazy even Bob Geldof could stumble on it. The only novel thing about the GiveWell program is that the transfers are in cash.

Unfortunately, no-one knows how to turn poor African countries into productive Western ones, short of colonization. The problem is emphatically not a shortage of capital, but rather low productivity, and the absence of effective institutions in which that capital can be deployed. Sadly, these conditions and institutions cannot simply be transplanted into those countries.

A greater charity

However, there do exist countries with high productivity, and effective institutions in which that capital can be deployed. That capital then raises world productivity. As F.A. Harper wrote:

Savings invested in privately owned economic tools of production amount to... the greatest economic charity of all.

That is because those tools increase the productivity of labour, and so raise output. The pie has grown. Moreover, the person who invests their portion of the pie into new capital is particularly altruistic, both because they are not taking a share themselves, and because they are making a particularly large contribution to future pies.

In the same way that using steel to build tanks means (on the margin) fewer cars and vice-versa, using craftsmen to build a new home means (on the margin) fewer factories and vice-versa. Investment in capital is foregone consumption. Moreover, you do not need to personally build those economic tools; rather, you can part-finance a range of those tools by investing in the stock market, or other financial mechanisms.

Now, it's true that little of that capital will be deployed in sub-Saharan Africa at present, due to the institutional problems already mentioned. Investing in these countries will likely lead to your capital being stolen or becoming unproductive - the same trap that prevents locals from advancing equally prevents foreign investors from doing so. However, if sub-Saharan Africa ever does fix its culture and institutions, then the availability of that capital will then serve to rapidly raise productivity and then living standards, much as is taking place in China. Moreover, by making the rest of the world richer, this increases the level of aid other countries could provide to sub-Saharan Africa in future, should this ever be judged desirable. It also serves to improve the emigration prospects of individuals within these countries.

Feedback

Another great benefit of capital investment is the sharp feedback mechanism. The market economy in general, and financial markets in particular, serve to redistribute capital from ineffective to effective ventures, and from ineffective to effective investors. As a result, it is no longer necessary to make direct (and expensive) measurements of standards of living in sub-Saharan Africa; as long as your investment fund is gaining in value, you can rest safe in the knowledge that its growth is contributing, in a small way, to future prosperity.

Commitment mechanisms

However, if investment in capital is foregone consumption, then consumption is foregone investment. If I invest in the stock market today (altruistic), then in ten years' time spend my profits on a bigger house (selfish), then some of the good is undone. So the true altruist will not merely create capital; he will make sure that capital never gets spent down. One good way of doing that would be to donate to an institution likely to hold onto its capital in perpetuity, and likely to grow that capital over time. Perhaps the best example of such an institution would be a richly-endowed private university, such as Harvard, which has existed for almost 400 years and is said to have an endowment of $32 billion.
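
As a rough illustration of the arithmetic (the 5% real return and 50-year horizon are my assumptions, not figures from any source): capital left to compound dwarfs the same sum consumed once.

    # Compounding a Paulson-sized gift, assuming a 5% real annual return.
    capital, real_return, years = 400e6, 0.05, 50
    invested = capital * (1 + real_return) ** years
    print("$%.1fB of productive capital after %d years" % (invested / 1e9, years))
    # -> $4.6B, versus $0.4B of one-off consumption if spent immediately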

John Paulson recently gave Harvard $400 million. Unfortunately, this meant he came in for a torrent of criticism from people claiming he should have given the money to poor Africans, etc. I hope to see Effective Altruists defending him, as he has clearly followed through on their concepts in the finest way.

Further thoughts and alternatives

 

  • Some people say that we are currently going through a "savings glut" in which capital is less productive than previously thought. In this case, it may be that Effective Altruists should focus on funding (and becoming!) successful entrepreneurs in different spaces.
  • I am sympathetic to the Thielian critique that innovation is being steadily stifled by hostile forces. I view the past 50 years, and the foreseeable future, as a race between technology and regulation, which technology is by no means certain to win. It may be that Effective Altruists should focus on political activity, to defend and expand economic liberty where it exists - this is currently the focus of my altruism.
  • However, government is not the enemy; rather, the enemy is the cultural beliefs and conditions that create a demand for the destruction of economic liberty. To the extent this critique is correct, it may be that Effective Altruists should focus on promoting a pro-innovation and pro-liberty mindset; for example, through movies and novels.

Conclusion


Effective altruists should be applauded for trying to bring evidence and reason to a subject that is plagued by far-mode thinking. But taking their ideas seriously quickly leads to a much more radical approach.

 

Could auto-generated troll scores reduce Twitter and Facebook harassments?

0 Stefan_Schubert 30 April 2015 02:05PM

There's been a lot of discussion in the last few years on the problem of hateful behaviour on social media such as Twitter and Facebook. How can this problem be solved? Twitter and Facebook could of course start adopting stricter policies towards trolls and haters. They could remove more posts and tweets, and ban more users. So far, they have, however, been relatively reluctant to do that. Another, more principled, problem with this approach is that it could be seen as a restriction on freedom of speech (especially if Twitter and Facebook were ordered to do it by law).

 

There's another possible solution, however. Using sentiment analysis, you could give Twitter and Facebook users a "troll score". Users whose language is hateful, offensive, racist, etc, would get a high troll score.* This score would in effect work as a (negative) reputation/karma score. That would in itself probably incentivize trolls to improve. However, if users would be allowed to block (and make invisible the writings by) any user whose troll score is above a certain cut-off point (of their choice), that would presumably incentivize trolls to improve even more. 

Could this be done? Well, it's already been shown to be possible to infer your big five personality traits, with great accuracy, from what you've written and liked, respectively, on Facebook. The tests are constructed on the basis of correlations between data from standard personality questionnaires (more than 80,000 Facebook users filled in such tests on behalf of YouAreWhatYouLike, who constructed one of the Facebook tests) and Facebook writings or likes. Once it's been established that, e.g., extraverted people tend to like certain kinds of posts, or use certain kinds of words, this knowledge can be used to predict the level of extraversion of Facebook users who haven't taken the questionnaire.

This suggests that there are no principled reasons a reliable troll score couldn't be constructed with today's technology. However, a problem is that while there are agreed criteria for what is to count as an extraverted person, there are no agreed criteria for what counts as a troll. Also, it seems you couldn't use questionnaires, since people who actually do behave like trolls online would be disinclined to admit it in a questionnaire.

One way to proceed could instead be this. First, you could define in rather general and vague terms what is to count as trolling - say "racism", "vicious attacks", "threats of violence", etc. You could then use two different methods to go from this vague definition to a precise score. The first is to let a number of sensible people give their troll scores to different Facebook posts and tweets (using the general and vague definition of what is to count as trolling). You would feed this into your algorithms, which would learn which combinations of words are characteristic of trolls (as judged by these people), and which aren't. The second is to simply list a number of words or phrases which would count as characteristic of trolls, in the sense of the general and vague definition. This latter method is probably less costly - particularly if you can generate the troll-lexicon automatically, say from existing dictionaries of offensive words - but also probably less accurate.
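
As a sketch of the first method, here is roughly what the human-labeled route could look like with off-the-shelf tools; the example posts, labels, and scoring rule are all placeholders of mine, not anything Facebook or Twitter actually does:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Posts scored by sensible human raters: 1 = trolling, 0 = not.
    posts = ["you are subhuman filth", "interesting point, thanks",
             "I will find out where you live", "I disagree, and here's why"]
    labels = [1, 0, 1, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(posts, labels)

    # A user's troll score could be their average predicted probability.
    new_posts = ["thanks for sharing this", "get off the internet or else"]
    print(model.predict_proba(new_posts)[:, 1].mean())

A real system would of course need many thousands of rated posts, but the pipeline is the same: human judgments in, a word-pattern model out.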

 

In any case, I expect it to be possible to solve this problem. The next problem is: who would do this? Facebook and Twitter should be able to construct the troll score, and to add the option of blocking all trolls, but do they want to? The risk is that they will think that the possible downside of this is greater than the possible upside. If people start disliking this rather radical plan, they might leave en masse, whereas if they like it, well, then trolls could potentially disappear, but it's unlikely that this will affect their bottom line drastically. Thus it's not clear that they will be more positive towards this idea than they are towards conventional banning/moderating methods.

Another option is for an outside company to create a troll score using Facebook or Twitter data. I don't know whether that's possible at present - whether you'd need Facebook and Twitter's consent, and whether they'd then be willing to give it. It seems you definitely would need it in order for the troll score to show up on your standard Facebook/Twitter account, and in order to enable users to block all trolls.

This second problem is thus much harder. A troll score could probably be constructed by Facebook and Twitter, but potentially they are not very likely to want to do it. Any suggestions on how to get around this problem would be appreciated.

 

My solution is very similar to the LessWrong solution to the troll problem. Just like you can make low-karma users invisible on LessWrong, you would be able to block (and make invisible the writings by) Facebook and Twitter users with a high troll score. A difference is, though, that whereas karma is manually generated (by voting), the troll score would be automatically generated from your writings (for more on this distinction, see here).

One advantage of this method, as opposed to conventional moderation methods, is that it doesn't restrict freedom of speech in the same way. If trolls were blocked by most users, you'd achieve much the same effect as you would from bannings (the trolls wouldn't be able to speak to anyone), but in a very different way: it would result from lots of blockings from individual users, who presumably have a full right to block anyone, rather than from the actions of a central admin.

 

Let me finish with one last caveat. You could of course extend this scheme, and construct all sorts of scores - such as a "liberal-conservative score", with whose help you could block anyone whose political opinions are insufficiently close to yours. That would be a very bad idea, in my view. Scores of this sort should only be used to combat harassment, threats and other forms of anti-social behaviour, and not to exclude any dissenter from discussion.

 

* I here use "troll" in the wider sense which "equate[s] trolling with online harassment" rather than in the narrower (and original) sense according to which a troll is "a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a newsgroup, forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response or otherwise disrupting normal on-topic discussion" (Wikipedia).

Sortition - Hacking Government To Avoid Cognitive Biases And Corruption

0 Aussiekas 06 May 2014 06:10AM

I've elaborated on this proposed form of government in great detail on my blog here.



The purpose of this post is to make a persuasive argument for my proposed system of democracy. I argue that a legislature chosen by sortition - random selection - is superior to electoral systems, and that it mirrors the advances in overcoming bias currently being pioneered in the sciences.

I. The Problem

It is insane that we allow the same elected officials to cast their eye over society to identify problems, write up the solutions to those problems, and then also vote to approve those solutions. This triple function of government by elected officials isn't simply corruptible; it is inherently flawed as a decision-making process.



II. The Central Committee: overcoming bias, electoral shenanigans, and demographic bias

In my system of sortition there is a mini-referendum conducted by a huge sample of 1,000-5,000 representatives at the highest level. They vote everything up or down and cannot change anything about a bill themselves. They are not congregated in one place and there is no politicking between them; they don't even need to know each other, nor could they. Perhaps they could belong to political parties, but there is no need or money behind this, as members of what I'm calling the Central Committee (C2) are never candidates and can individually serve no more than once per lifetime (or perhaps once per decade), in three-year terms.

Contentious issues can be moved to a general referendum. In a 1,000-member C2, any law decided within the 450-550 margin can be subject to a special second vote, proposed by the disagreeing side: if more than 600 members agree, the item is added to a general monthly or quarterly referendum conducted electronically with the entire population. In this way the average person participates and feels heard by their government on a regular basis.
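A sketch of that escalation rule, as I understand it (the function and its names are mine, written for a 1,000-member C2):

```python
def c2_outcome(yes_votes, escalation_votes=None, total=1000):
    """Outcome of a C2 vote: 'passed', 'failed', or 'referendum'.

    A result in the 450-550 band is contested; the disagreeing side may
    call a second vote, and if more than 600 members back escalation,
    the item goes to the general electronic referendum.
    """
    contested = 450 <= yes_votes <= 550
    if contested and escalation_votes is not None and escalation_votes > 600:
        return "referendum"
    return "passed" if yes_votes > total // 2 else "failed"
```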

The major advantage of this C2 is that it is representative. It will have people from all areas, will be roughly 50% male and 50% female, and will include all minorities. There can be no great misrepresentation or capture of the legislature by a powerful group. This overcomes many of the inherent biases of electoral systems, which in almost every democracy today routinely under-represent minorities.
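As a toy check of the representativeness claim: a uniform random sample of 1,000 tracks population proportions to within a few percentage points, though an exact 50/50 split would require stratified rather than pure random sampling. A sketch:

```python
import random

# Toy population: one million people, roughly half female.
population = [{"id": i, "female": random.random() < 0.5} for i in range(1_000_000)]

committee = random.sample(population, 1000)  # sortition: uniform random draw
share_female = sum(member["female"] for member in committee) / len(committee)
print(f"female share of committee: {share_female:.1%}")  # ~50%, +/- ~3 points
```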

III. The Issue Committees (IC)

The IC is a totally separate body whose sole job is to identify areas of the law that need updating. Each IC comprises 100 citizens, split between 51 Regular Citizens (RCs) and 49 Expert Citizens (ECs), serving single three-year terms. There are around 30 ICs, each covering an area such as defence, environment, food safety, drug safety, telecommunications, changes to government, the finance sector, the banking sector, etc.

These committees will meet in person and discuss what needs exist that the government can address. They do not get to write any laws, nor do they get to vote on any laws. There are in fact more IC members than members of the C2, and the ICs will be the primary face of government, where the average citizen can send in requests or communicate needs. The IC shines a spotlight on the issues facing the country. They also form the law-writing bodies, described next.

IV. The Sub Committee (SC)

These are temporary parts of the legislature that write the laws. They have no authority over which topic area they write laws about; that is determined by the IC and then voted on by the C2. They are composed of 10 RCs and 10 ECs, with the support of 10 Lawyer Citizens (LCs). The LCs do not vote on whether the draft law moves up to the C2 for consideration; they simply help draft sound laws.

These SCs form and dissolve quickly, lasting no more than 3-6 months before a proposed law is produced. Being called up to an SC is much more akin to being drafted for jury duty than service at the IC or C2 level, as it is a short term of service.
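A sketch of an SC decision under these rules (the data structure is hypothetical): the 10 RCs and 10 ECs vote on whether a draft moves up to the C2, while the 10 LCs advise but do not vote.

```python
from dataclasses import dataclass

@dataclass
class Member:
    role: str   # "RC", "EC", or "LC"
    vote: bool  # True = move the draft law up to the C2

def sc_approves(members):
    """Majority of voting members (RCs and ECs only); LCs are excluded."""
    voters = [m for m in members if m.role in ("RC", "EC")]
    return sum(m.vote for m in voters) > len(voters) // 2
```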

V. Conclusions

  • This system is indeed more democratic and more representative than current electoral democracies. It is less prone to corruption, and electioneering is impossible as there are no elections.


  • The C2, IC, and SC are intentionally split in their duties so that no conflict of interest can arise, and there is no legislator bias from pet bills and issues pushed through to benefit specific parts of the country.

  • This system is also less influenced by the views and opinions of the very wealthy, and by the demographic and economic makeup of the people involved.

And that's it. Could it work? Would it work? I'd like to think it has some advantages over the current, outdated mechanisms of democracy, in light of new knowledge about how the human mind works.

EDIT: moved notes to bottom of post

NOTE 1: I anticipate this objection. Regular Citizens (RCs) and Expert Citizens (ECs) have various stipulations on their service and on how often they can serve; see my linked post at the top for details. Suffice it to say, the RCs must have completed high school and cannot be intellectually disabled. Whatever you can think of that might disqualify someone from a jury, think of something along those lines.

NOTE 2: As for the nature of this being different, look at juries. We already use a process of sortition - though heavily and perhaps unfairly constrained in its current form - to determine whether people are guilty or innocent and what punishment they might receive. We even use sortition in committees of experts in various forms, from peer-reviewed journals that select somewhat randomly from a pool of qualified individuals, to the ECs in my system.

NOTE 3: This is not about politics. I often say I am interested in government, but not politics, which confuses a lot of people. If anything, this system would lessen or (too optimistically) eliminate politics. I know there is a general ban on discussion of politics here, and this is not that. I am trying to modify government and democratic systems to reflect advances in the understanding of cognitive bias, decision theory, and computer technology, in order to modernize and further democratize the practice of government.
