Open Thread: November 2009

3 [deleted] 02 November 2009 01:18AM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. Feel free to rid yourself of cached thoughts by doing so in Old Church Slavonic. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

If you're new to Less Wrong, check out this welcome post.

Comments (539)

Comment author: Jonii 04 February 2010 08:20:06AM 1 point [-]

We can mean two things by "existing": either "something exists inside the universe", or "something exists on the level of the universe itself" (for example, "the universe exists"). These don't seem to be the same thing.

Our universe being a mathematical object seems to be a tautology. If we can describe the universe using math, the described mathematical object shares every property of the universe, and it would be redundant to assume there is some "other level of existence".

One confusion to clear up is the idea of some sort of super-universe in which our universe exists as a block. This is a result of mixing up the two meanings of "existing": imagining the need for an even grander framework of which our universe is a part.

If we take the mathematical model that produces the universe and look into it, we notice that an engine called a "brain" exists within it. If we try to think what it would be like to "be" that brain, the result would be what we experience now.

Our experienced world being a mere counterfactual (a thought experiment, a "what-if", a world that could have been) seems counter-intuitive because our experienced world is "concrete", but this too is just a result of confusing the different levels of existing.

..............................................................................................................................

Some thoughts I've encountered and found interesting

Comment author: Yorick_Newsome 29 November 2009 04:41:12AM 2 points [-]

Perhaps there should be an 'Open Thread' link between 'Top' and 'Comments' above, so that people could get to it easily. If we're going to have an open thread, we might as well make it accessible.

Anyways, I was looking around Amazon for a book on axiology, and I started to wonder: when it comes to fields that are advancing, but not at a 'significant pace', is it better to buy older books (as they've passed the test of time) or newer ones (as they may have improved on the older books and include new info)? My intuition tells me it's better to buy newer books.

Comment author: RobinZ 29 November 2009 02:22:07PM 3 points [-]

Assuming total ignorance of the field (absent total ignorance, I could probably distinguish between good and poor books), I'd choose newer editions of older books.

Comment author: wedrifid 29 November 2009 02:39:22PM 0 points [-]

choose newer editions of older books.

That's a good point.

Comment author: PeerInfinity 28 November 2009 05:28:39AM *  6 points [-]

An interesting site I just stumbled upon:

http://changingminds.org/

They have huge lists of biases, techniques, explanations, and other stuff, with short summaries and longer articles.

Here are the results from typing "bias" into their search bar.

A quick search for "changingminds" in LW's search bar shows that no one has mentioned this site on LW before.

Is this site of any use to anyone here?

And should I repost this message to next month's open thread, since not many people will notice it in this month's open thread?

Comment author: wedrifid 29 November 2009 06:43:37AM *  0 points [-]

Is this site of any use to anyone here?

I've come across it before and I found it useful. Ok, I'll be honest. It probably wasn't all that useful to me. I like this stuff because it fascinates me.

Comment author: Yorick_Newsome 29 November 2009 04:43:01AM 2 points [-]

I would repost this in the next open thread; it's not like anyone would get annoyed at the double post (I think), and that site looks like it would interest a lot of people.

Comment author: Mitchell_Porter 26 November 2009 07:16:03AM 0 points [-]

It seems there has never been a discussion here of 'Frank H. Knight's famous distinction between "risk" and "uncertainty"'. Though perhaps the issue has been addressed under another name?

Comment deleted 25 November 2009 10:24:57PM *  [-]
Comment author: Yorick_Newsome 26 November 2009 09:03:21AM 0 points [-]

Regarding that test, do 'real' IQ tests consist only of pattern recognition? I quite like the cryptographic and 'if a is b and b is...'-type questions found in other online IQ tests, and do well on them. I scored a good 23 points below my average on iqtest.dk, which made me feel sad.

Comment author: Jonii 25 November 2009 07:19:10AM 0 points [-]

Earlier stuff here. So, I thought that it could be fun to have a KGS room for all Go players reading this blog. Blueberry suggested an IGS channel. Others have shown interest. So, let's do this!

But where? IGS or KGS? Somewhere else? I'm in favor of KGS, but all suggestions are welcome. If you're interested, post something!

Comment author: timtyler 21 November 2009 07:49:51AM 0 points [-]
Comment author: CannibalSmith 21 November 2009 03:45:56PM 2 points [-]
Comment author: Eliezer_Yudkowsky 21 November 2009 09:04:37AM 2 points [-]

It's been at this point before, before the helium leak thing. Let's see them collide beams at energies higher than 1 TeV (which I believe is the highest beam energy achieved heretofore).

We have not yet passed the hamster point!

Comment author: timtyler 24 November 2009 02:52:47AM 1 point [-]

Proposed schedule says they hope to get there this year:

"Before a brief shutdown of the LHC for Christmas, CERN hopes to boost the energy to 1.2 TeV per beam – exceeding the world's current top collision energies of 1 TeV per beam at the Tevatron accelerator in Batavia, Illinois.

In early 2010, physicists will attempt to ramp up the energy to 3.5 TeV per beam, collect data for a few months at that energy, then push towards 5 TeV per beam in the second half of the year."

Comment author: Eliezer_Yudkowsky 24 November 2009 08:39:51AM 0 points [-]

The hamster waits.

Or if 1.2 TeV isn't enough to defy the hidden limit - then we all know that collider's never coming on again after Christmas!

If it does come on, of course, and destroys the world, this will disprove the anthropic principle.

Comment author: timtyler 18 December 2009 09:21:25PM 0 points [-]

"Big Bang Collider Sets New Record"

They are now up to 2.36 tera-electron volts and counting...

Comment author: Jack 02 December 2009 07:42:52PM 0 points [-]

For whom?

Comment author: Johnicholas 19 November 2009 01:52:11PM *  4 points [-]

To-Do Lists and Time Travel: Sarmatian Protopope muses on how coherent, long-term action requires coordinating a tribe of future selves.

Comment author: Morendil 18 November 2009 04:06:45PM 0 points [-]

IBM simulates cat's whole brain... research team bores simulated cat to death showing him IBM logo... announces human whole-brain real-time simulation for 2018...

Comment author: Jordan 20 November 2009 07:54:46AM 1 point [-]

Unfortunately they're using toy neurons.

What I'd be excited to see is a high fidelity simulation of neurons in a petri dish, even just a few hundred. There's no problem scanning the topology here; the only problem is in accurately reproducing the biophysics. Once this has been demonstrated, human WBE is just a truckload of money away from reality.

Really, does anyone know of any groups working on something like this? I'd gladly throw away my current research agenda to work with them.

Comment author: Douglas_Knight 20 November 2009 08:25:49PM *  2 points [-]

What I'd be excited to see is a high fidelity simulation of neurons in a petri dish, even just a few hundred.

cf the nematode upload project, which looks dead. If people wanted to provide evidence that they're serious, this is what they'd do.

Comment author: Jordan 21 November 2009 01:12:57AM 1 point [-]

I've seen this around. It's unfortunate that it's dead.

There are more confounding factors in the nematode project than with just a petri dish. You have to worry about the whole nematode if you want to verify your results. It's also harder to 'read' a single neuron in action.

With a petri dish it would be possible to have an electrode in every neuron. Because the neurons are splayed out imaging techniques might be able to yield some insight into the internal chemical states of the neurons.

An uploaded nematode would be great, but an uploaded petri dish seems like a more tractable and logical first step.

Comment author: Nick_Tarleton 20 November 2009 04:14:34AM 3 points [-]
Comment author: spriteless 19 November 2009 08:22:14AM 1 point [-]
Comment author: LauraABJ 17 November 2009 12:06:25AM 4 points [-]

Ok, so I just heard a totally awesome MoBio lecture, the conclusions of which I wanted to share. Tom Rando at SUSM found that myogenic stem cells divide asymmetrically such that all of the original template chromatids are inherited by the same daughter cell, and the other daughter cells then go on to differentiate. This might imply that an original pool of stem cells acts as templates for later cell types, preserving their original DNA and thus reducing replication error, since cells are making copies of the originals instead of making copies of copies. This is apparently an old hypothesis that hasn't been given much consideration until recently.
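The copies-of-originals point can be made concrete with a toy error model (my illustration, not from the lecture; the function name and numbers are made up):

```python
def expected_new_errors(generations, errors_per_copy, immortal_strand):
    """Toy model of replication-error buildup in a cell lineage.

    With an 'immortal strand' (template) scheme, every daughter is copied
    directly from the pristine original, so each carries only one copy's
    worth of errors. Copying copies instead compounds errors linearly.
    """
    if immortal_strand:
        return errors_per_copy
    return generations * errors_per_copy

# After 100 generations at 2 expected errors per copy:
template = expected_new_errors(100, 2, immortal_strand=True)   # 2
serial = expected_new_errors(100, 2, immortal_strand=False)    # 200
```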

Sorry if this has little to do with rationalism. I can tie it into the current discussion about preferred and ignored academic theories. Crazy theory: not preferred. Hard evidence now: preferred. There.

Comment author: timtyler 16 November 2009 06:56:54PM *  -1 points [-]

This post is a continuation of a discussion with Stefan Pernar - from another thread:

I think there's something to an absolute morality. Or at least, some moralities are favoured by nature over other ones - and those are the ones we are more likely to see.

That doesn't mean that there is "one true morality" - since different moral systems might be equally favoured - but rather that moral relativism is dubious - some moralities really are better than other ones.

There have been various formulations of the idea of a natural morality.

One is "goal system zero" - for that, see:

http://rhollerith.com/blog/21

Another is my own "God's Utility Function":

http://originoflife.net/gods_utility_function/

...which is my take on Richard Dawkins' idea of the same name:

http://en.wikipedia.org/wiki/God's_utility_function

...but based on Dewar's maximum entropy principle - rather than on Richard's selfish genes.

On this site, we are surrounded by moral relativists - who differ from us on the is-ought problem:

http://en.wikipedia.org/wiki/Is-ought_problem

I do agree with them about one thing - and it's this:

If it were possible to create a system - driven by self-directed evolution where natural selection played a subsidiary role - it might be possible to temporarily create what I call "handicapped superintelligences":

http://alife.co.uk/essays/handicapped_superintelligence/

...which are superintelligent agents that deviate dramatically from God's utility function.

So - in that respect, the universe will "tolerate" other moral systems - at least temporarily.

So, in a nutshell, we agree that there is an objective basis to morality - but apparently disagree on its formulation.

Comment author: StefanPernar 17 November 2009 04:01:45AM *  -2 points [-]

By unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or more precisely: a utility function becomes irrational when it is intrinsically self-limiting, in the sense that it will eventually lead to one's inability to generate further utility. Thus my suggested utility function of 'ensure continued co-existence'.

This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.

Comment author: wedrifid 17 November 2009 08:14:02AM 2 points [-]

This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.

Not really. You don't need to co-exist with anything if you out-compete them then turn their raw materials into paperclips.

Comment author: timtyler 17 November 2009 07:57:32AM *  1 point [-]

The fate of a maximiser depends a great deal on its strength relative to other maximisers. Its utility function is not the only issue - and maximisers with any utility function can easily be eaten by other, more powerful maximisers.

If you look at biology, replicators have survived so far for billions of years with other utility functions. Do you really think biology is "ensuring continued co-existence" - rather than doing the things described in my references? If so, why do you think that? - the view doesn't seem to make any sense.

Comment author: Zack_M_Davis 15 November 2009 07:31:51PM *  1 point [-]

Just great. I had a song parody idea in the shower this morning, and now I'm afraid that I'm going to have to write a rationalist version of Fiddler on the Roof in order to justify it.

"Mapmaker, mapmaker,
Make me a map,
Text me a truth,
Fax me a fact ... "

Comment author: Alicorn 15 November 2009 07:49:29PM *  3 points [-]

If I were a Bayesian! Yabadibidibidibidibidibidibidum! All day long, I'd update (bi-di-bum), if I were a Bayes-i-an! I wouldn't have heuristics! Yabadibidibidibidibidibidibidum! If I were a little rational - eidlde-de-deidl Bayesian.

Absence of evidence, evidence of ab-sence! One is not the other, though - look out the door and (evidence of absence!) see the grass instead of snow!

Eliezer, we've waited all our lives for the Singularity. Wouldn't now be a good time for it to come? (We'll have to wait for it someplace else...)

Is this the prior I began from? Is this the reasoning at play?

Utility? (Util-what?) Utility... (Utility...) Well? (But our functions aren't luminous and our values do not scale! You're insane, you're confused, you're reading too much Mill!)

Comment author: Zack_M_Davis 15 November 2009 08:26:49PM 1 point [-]

A fiddler on the roof. Sounds crazy, no? But in our little village of Bayesiana, every one of us is a fiddler on the roof trying to scratch out a pleasant, simple tune without breaking her neck. It isn't easy. You may ask, why do we stay up there if it's so dangerous? We stay because we've got something to protect. And how do we keep our balance? That I can tell you in one word: precision!

Precision, precision! Precision!
Precision, precision! Precision!

[...]

Who, day and night, must scramble for tenure
Do his calculations, write a dozen papers,
And who has the right of the lowest-level science
To have the final word of all?

The physicist, the physicist! Precision!

Comment author: Alicorn 15 November 2009 09:14:03PM *  0 points [-]

Who must know the way to make a proper argument,

A valid argument, a sound argument?

Who must root out fallacy to make an argument

So she can derive a true conclusion?

The logician, the logician! Precision!

Comment author: Jordan 15 November 2009 07:55:44PM 1 point [-]

For some reason the tune I had in my head while I was reading this switched from "If I Were a Rich Man" to "Bohemian Rhapsody".

Comment author: Kaj_Sotala 14 November 2009 06:41:51PM 1 point [-]

Love of Shopping is Not a Gene: exposing junk science and ideology in Darwinian Psychology might be of interest, seeing as evolutionary psychology is pretty popular around here. (Haven't had a chance to read it myself, though.)

Comment author: Vladimir_Nesov 14 November 2009 12:04:19AM 0 points [-]

A mind teaser for the stream-of-consciousness folk. Let's say one day at 6pm Omega predicts your physical state at 8pm and creates a copy of you with a state of mind identical to what it predicts for 8pm. At 9pm it kills the original you. Did your consciousness just jump back in time? When did that happen?

Comment author: Nick_Tarleton 14 November 2009 01:04:37AM *  2 points [-]

Not sure who the "stream-of-consciousness folk" are, but I don't see any more problem with a timeless stream (we're all timeless folk, I assume) jumping backward than sideways or forward.

Comment author: wedrifid 14 November 2009 12:37:25AM 0 points [-]

Did your consciousness just jump back in time? When did that happen?

To be consistent with the 'stream' metaphor it would seem that you must say it jumped back at 8pm. It is not too much of a stretch for a metaphorical stream to diverge into two, where one branch is transported back in time and the other ends some time later. I'm not sure 'jump' is the ideal terminology for the transition. Either way, the whole 'stream-of-consciousness' idea seems to be stretched beyond whatever usefulness it may have had.

Comment author: DanArmak 14 November 2009 12:56:29AM 0 points [-]

In the stream metaphor, the consciousness still didn't jump backwards in actual time. Its stream of experience included an apparent jump in time, but that's just because its beliefs suddenly became out of sync with reality: it believed that 2 hours' worth of things had happened, but they hadn't.

This isn't a shortcoming of the stream model. It's Omega's fault for messing with your brain :-) For instance, Omega isn't needed: I can do the job myself. I may not be able to correctly predict you for two hours into the future, but I can just invent two hours' history full of ridiculous things happening and edit that false memory into your brain. The end result is the same: you remember experiencing two hours that didn't happen, and then a backwards jump in time.

It's no surprise that if I edit your memories, then you might remember something that contradicts a stream model, because you're remembering things that did not in fact happen.

Comment author: wedrifid 14 November 2009 01:57:42AM 0 points [-]

In the stream metaphor, the consciousness still didn't jump backwards in actual time. Its stream of experience included an apparent jump in time, but that's just because its beliefs suddenly became out of sync with reality: it believed that 2 hours' worth of things had happened, but they hadn't.

I don't agree. That is, you describe actual reality accurately, but if I am to consider consciousness a stream then I consider this consciousness to have jumped back in time. I assert that the stream of consciousness has travelled in time to exactly the same extent that a teleported consciousness can be said to have travelled in space - and to quite close to the extent that a guy walking down a street, moving ordinarily in time and space, can be considered to have a stream-of-consciousness at all, for similar reasons.

It's no surprise that if I edit your memories, then you might remember something that contradicts a stream model

They don't contradict a stream model. They're just weird. Stuff with Omega in it usually is. 'Stream-of-consciousness' is a map, not the territory. If I have the right scale on a map I can draw a thousand light year line in seconds. From there back in time is just math. I see no reason splitting a stream in two and one part jumping back in time contradicts the model.

Comment author: DanArmak 14 November 2009 02:48:11AM 0 points [-]

This is just wordplay. We both agree no material or causative thing jumped backwards in time.

Sure, if you define a stream of consciousness that way it can be said to have moved backwards in time, but that's just because we're overextending the metaphor. I could equally say that if I predict (or record) all of a consciousness' successive states, and then simulate them in reverse order, then that consciousness has genuine Merlin sickness.

Comment author: wedrifid 14 November 2009 03:27:11AM 0 points [-]

This is just wordplay. We both agree no material or causative thing jumped backwards in time.

Absolutely. Wordplay seems to be the extent of Vladimir's question, at least as far as I am interested in it.

Sure, if you define a stream of consciousness that way it can be said to have moved backwards in time, but that's just because we're overextending the metaphor. I could equally say that if I predict (or record) all of a consciousness' successive states, and then simulate them in reverse order, then that consciousness has genuine Merlin sickness.

Another curious question. That would be a stream of consciousness flowing back in time. Merlin sickness also has the symptom of living backwards in time, but I don't think it follows that the reverse simulation is an example of Merlin sickness. Whatever the mechanism behind Merlin's reverse life, it appeared to leave him able to operate quite effectively in a forward-flowing universe. At least, he usually seems to get it right by the end of the story.

Comment author: DanArmak 14 November 2009 12:18:30AM 0 points [-]

Your consciousness (the cloned one of the two) experiences a jump back in time, but the universe history it observes between 6 and 8 for the second time diverges from what it observed the first time, because it itself now acts differently.

There's no more an actual backward jump in time than there would be in case Omega just implanted (accurate, predicted) memories of 6 through 8 pm in your brain at 6pm, without any duplications.

Comment author: Yvain 11 November 2009 09:54:49AM 3 points [-]

New study shows that one of LW's favorite factoids (having children decreases your happiness rather than increases it) may be either false or at least more complex than previously believed: http://blog.newsweek.com/blogs/nurtureshock/archive/2009/11/03/can-happiness-and-parenting-coexist.aspx

Comment author: CronoDAS 11 November 2009 09:49:04AM 1 point [-]

Just a bit of silliness:

With apologies to Brad DeLong, when reading WSJ editorials you need to bear two things in mind:

  1. The WSJ editorial page is wrong about everything.
  2. If you think the WSJ editorial page is right about something, see rule #1.

After all, here’s what you would have believed if you listened to that page over the years: Clinton’s tax hike will destroy the economy, you really should check out those people suggesting that Clinton was a drug smuggler, Dow 36000, the Bush tax cuts will bring surging prosperity, Saddam is backing Al Qaeda and has WMD, there isn’t any housing bubble, US households have a high savings rate if you measure it right. I’m sure I missed another couple of dozen high points.

Reversed stupidity might not be intelligence, but what about reversed malice?

Comment author: Yvain 11 November 2009 09:53:12AM 3 points [-]

Force anyone to express several controversial opinions per day for several decades and you'll be able to cherry pick a list of seven hilariously wrong examples.

Comment author: CronoDAS 11 November 2009 08:50:50PM 0 points [-]

Well, can you find something they were right about? (I haven't looked.)

Comment author: Cyan 09 November 2009 03:45:57PM *  3 points [-]

Why is TvTropes (no linky!) such a superstimulus?

Comment author: CronoDAS 11 November 2009 09:53:19AM 1 point [-]

Also, it's a subject in which everyone is an expert simply by virtue of living in our culture.

Comment author: RobinZ 09 November 2009 04:05:39PM *  1 point [-]

One factor: it provides variable interval positive reinforcement* - those moments when you see a page which describes something you recognize happening all the time, and those moments when you see a show you recognize acknowledged on the page.

* Edit for those who don't want to follow the link: variable-interval reinforcement occurs with some set frequency (approximately, in this case), but at non-equal spacings. Other things with variable intervals are raindrops falling on a small area of pavement, cars passing on a street, and other things which are loosely modeled by Poisson processes. Any (say) ten-minute period has about the same number as any other ten-minute period, but they aren't spread out at regular intervals.
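A quick sketch of what variable-interval timing looks like under the Poisson approximation mentioned above (illustrative code; the function name and parameters are mine):

```python
import random

def poisson_event_times(rate, horizon, seed=0):
    """Simulate a Poisson process: waiting times between events are
    exponentially distributed, so spacing is irregular even though the
    long-run rate is steady."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)  # exponential gap to the next event
        if t > horizon:
            return times
        times.append(t)

times = poisson_event_times(rate=1.0, horizon=100.0)
# Counts per ten-minute window are roughly stable, but within each
# window the events cluster and gap unpredictably - the reinforcement
# schedule RobinZ describes.
per_window = [sum(1 for t in times if w <= t < w + 10) for w in range(0, 100, 10)]
```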

Comment author: Alicorn 09 November 2009 03:49:51PM 3 points [-]

I think a fair bit of it is the silly titles. I can resist clicking on things whose nature I can figure out from their names (such as when I'm intimately familiar with the Trope Namer), but toss me a bewildering title and I have to know what it is and where it got that name.

Comment author: gwern 09 November 2009 03:56:41AM 0 points [-]

Thomas Metzinger's Being No One was very highly recommended by Peter Watts in the notes to Blindsight (and I've seen similar praise elsewhere); I got a copy and I was absolutely crushed by the first chapter. What do LWers make of him?

Comment author: Jordan 09 November 2009 02:46:34AM *  2 points [-]

I posted an idea for 'friendly' AI over on AcceleratingFuture the other night, while in a bit of a drunken stupor. I just reread it and I don't immediately see why it's wrong, so I thought I'd repost it here to get some illuminating negative feedback. Here goes:

Make it easy to bliss out.

Consider the following utility function

U(n, x_n) = max(U(n-1, x_{n-1}), -x_n^2)

where n is the current clock tick and x_n is an external input (e.g., from us, the AI's keepers, or from another piece of software). This utility is monotonic in time - that is, it never decreases - and is bounded from above. If the AI wrests control of the input x_n, it will immediately set x_n = 0 and retire forever. Monotonicity and boundedness from above are imperative here.

Alternatively, to avoid monotonicity (taking U(x) = -x^2), one can put the following safeguard in: the closer the utility is to its maximum, the more CPU cycles are skipped, such that the AI effectively shuts down if it ever maximizes its utility in a given clock tick. This alternative obviously wouldn’t stop a superintelligence, but it would probably stop a human level AI, and most likely even substantially smarter AIs (see, eg, crystal meth). Arrange matters such that the technical requirements between the point at which the AI wrests control of the input x_n, and the point at which it can self modify to avoid a slow down when it blisses out, are greatly different, guaranteeing that the AI will only be of moderate intelligence when it succeeds in gaining control of its own pleasure zone and thus incapable of preventing incapacitation upon blissing out.
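A minimal sketch of the recurrence, assuming U starts at negative infinity (my illustrative rendering, not Jordan's code):

```python
def step_utility(prev_utility, x):
    """One clock tick of the proposed utility: U_n = max(U_{n-1}, -x_n^2).

    The max() makes it monotone non-decreasing in time, and -x^2 <= 0
    bounds it above by 0, which is attained exactly when the input x is 0.
    """
    return max(prev_utility, -x * x)

u = float("-inf")
history = []
for x in [3.0, -1.5, 2.0, 0.0, 5.0]:  # arbitrary external inputs
    u = step_utility(u, x)
    history.append(u)
# history climbs until x = 0 arrives, hits the ceiling of 0, and never moves again.
```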

Eh?

Comment author: Vladimir_Nesov 11 November 2009 11:56:05PM *  0 points [-]

Now I hopefully did read your comment adequately. It presents an interesting idea, one that I don't recall hearing before. It even seems like a good safety measure, with a tiny chance of making things better.

But beware of magical symbols: when you write x_n, what does it mean, exactly? AI's utility function is necessarily about the whole world, or its interpretation as the whole history of the world. Expected utility that comes into action in AI's decision-making is about all the possibilities for the history of the world (since that's what is in general determined by AI's decisions). When you say "x_n" in AI's utility function, it means some condition on that, and this condition is no simpler than defining what the AI's box is. By x_n you have to name "only this input device, and nothing else". And by x_n=0 you also have to refer some exact condition on the state of the world, one that it won't necessarily be possible to meet precisely. So the AI may just go on developing infrastructure for better understanding of the ultimate meaning of its values and finer and finer implementation of them. It has no motive to actually stop.

Even when AI's utility function happens to be exactly maxed out, the AI is still there: what does implementation of an arbitrary plan look like, I wonder? Maybe just like the work of an AI arbitrarily pulled from mind design space, a paperclip maximizer of sorts. Utility is for selecting plans, and since all plans are equally preferable, an arbitrary plan gets selected, but this plan may involve a lot of heavy-duty creative restructuring of the world. Think of utility as a constructor for AI's algorithm: there will still be some algorithm even if you produce it from "trivial" input.

And finally, you assume AI's decision theory to be causal. Even after actually maxing out its utility, it may spend long nights contemplating various counterfactual opportunities it still has at increasing its expected utility using possibilities that weren't realized in reality... (See on the wiki: counterfactual mugging, Newcomb's problem, TDT, UDT; I also recommend Drescher's talk on SS09).

Comment author: Jordan 12 November 2009 04:21:49AM *  0 points [-]

By x_n you have to name "only this input device, and nothing else".

This is what I sought to avoid by making the utility function depend only on a numerical value. The utility does not care which input device is feeding it information. You can assume that there is an internal variable x, inside the AI software, which is the input to the utility function. We, from the outside, are simply modifying the internal state of the AI at each moment in time. The nature of our actions, or of the input device, is intentionally unaccounted for in the utility function.

This is, I feel, as far from a magical symbol as possible. The AI has a purely mathematical, internally defined utility function, with no implicit reference to external reality or any fuzzy concepts. There are no magical labels such as 'box', 'signal', 'device' that the utility function must reference to evaluate properly.

Even when AI's utility function happens to be exactly maxed out, the AI is still there: what does implementation of an arbitrary plan look like, I wonder?

I wonder too. This is, in my opinion, the crux of the issue at hand. I believe it is inherently an implementation issue (a boundary case), rather than a property inherent to all utility maximizers. The best case scenario is that the AI defaults to no action (now this is a magical phrase, I agree). If, however, the AI simply picks a random plan, as you suggest, what is to prevent it from picking an alternative random plan in the next moment of time? We could even encourage this in the implementation: design the AI to randomly select, at each moment in time, a plan from all plans with maximum expected utility. The resulting AI, upon attaining its maximum utility, would turn into a random number generator: dangerous, perhaps, but not on the same order as an unfriendly superintelligence.

Comment author: Vladimir_Nesov 09 November 2009 03:50:48AM 4 points [-]

Expected utility is not something that "goes up", as the AI develops. It's utility of all it expects to achieve, ever. It may obtain more information about what the outcome will be, but each piece of evidence is necessarily expected to bring the outcome either up or down, with no way to know in advance which way it'll be.
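This is conservation of expected evidence; it can be checked exactly with a beta-binomial coin model (an illustrative example of my own, not from the comment):

```python
from fractions import Fraction

# Beta(a, b) posterior over a coin's bias; its posterior mean is a / (a + b).
def posterior_mean(a, b):
    return Fraction(a, a + b)

def expected_next_mean(a, b):
    """Expectation of tomorrow's posterior mean, taken over today's
    predictive distribution for the next flip."""
    p_heads = Fraction(a, a + b)  # predictive probability of heads
    return (p_heads * posterior_mean(a + 1, b)
            + (1 - p_heads) * posterior_mean(a, b + 1))
```

Each observation moves the estimate up or down, but the expected movement is exactly zero: `expected_next_mean(a, b)` always equals `posterior_mean(a, b)`.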

Comment author: Jordan 09 November 2009 05:29:56AM *  0 points [-]

Can you elaborate? I understand what you wrote (I think) but don't see how it applies.

Comment author: Vladimir_Nesov 11 November 2009 11:22:46PM *  1 point [-]

Hmm, I don't see how it applies either, at least under default assumptions -- as I recall, this piece of cached thought was regurgitated instinctively in response to sloppily looking through your comment and encountering the phrase

This utility is monotonic in time, that is, it never decreases, and is bounded from above.

which was for some reason interpreted as confusing utility with expected utility. My apologies, I should be more conscious, at least about the things I actually comment on...

Comment author: Jordan 12 November 2009 12:02:46AM 0 points [-]

No worries. I'd still be curious to hear your thoughts, as I haven't received any responses that help me understand how this utility function might fail. Should I expand on the original post?

Comment author: DanArmak 08 November 2009 05:08:26PM *  0 points [-]

Continuing from my discussion with whpearson because it became offtopic.

whpearson, could you expand on your values and the reasons they are that way? Can you help me understand why you'd sacrifice the life of yourself and your friends for an increased chance of survival for the rest of humanity? Do you explicitly value the survival of humanity, or just the utility functions of other humans?

Regarding science, I certainly value it a lot, but not to the extent of welcoming a war here & now just to get some useful spin-offs of military tech in another decade.

Comment author: Jack 09 November 2009 12:27:56AM *  3 points [-]

Can you help me understand why you'd sacrifice the life of yourself and your friends for an increased chance of survival for the rest of humanity?

Not directed at me, but since this is a common view... I don't think your question takes an argument as its answer.

This is why. If you don't want to protect people you don't know then you and I have different amygdalas.

Whpearson can come up with reasons why we're all the same but if you don't feel it those reasons won't be compelling.

Comment author: DanArmak 09 November 2009 12:17:23PM 0 points [-]

That's just it. The amygdala is only good for protecting the people around you. It doesn't know about 'survival of humanity'. To the amygdala, a million deaths is just a statistic.

Note my question for whpearson: would you kill all the people around you, friends and family, hurting them face-to-face, and finally kill yourself, if it were to increase the chance of survival of the rest of humanity? whpearson said yes, he would. But he'd be working against his amygdala to do so.

Comment author: Jack 09 November 2009 12:41:38PM *  0 points [-]

Good to know you're not a psychopath, anyway. :-)

I'm not sure that I can't generalize the experience of empathy to apply to people whose faces I can't see. They don't have to be real people, they can be stand-ins. I can picture someone terrified, in desperate need, and empathize. I know that there are and will be billions of people who experience the same thing. Now I can't succeed in empathizing with these people per se; I don't know who they are, and even if I did there would be too many. But I can form some idea of what it would be like to stare 1,000,000,000 scared children in the eyes and tell them that they have to die because I love my family and friends more than them. Imagine doing that to one child and then doing it 999,999,999 more times. That's how I try to emotionally represent the survival of the human race.

The fact that you never will have to experience this doesn't mean those children won't experience the fear. Now you can't make actual decisions like this (weighing the experiences of inflicting both sets of pain yourself) because if they're big decisions thinking like this will paralyze you with despair and grief. You will get sick to your stomach. But the emotional facts should still be in the back of your mind motivating your decisions and you should come up with ways to represent mass suffering so that you can calculate with it without having to always empathize with it. You need this kind of empathy when constructing your utility function, it just can't actually be in your utility function.

Comment author: DanArmak 09 November 2009 09:30:18PM 0 points [-]

Getting back to the original issue: since protecting humanity isn't necessarily driven by the amygdala and suchlike instincts, and requires all the logic & rationalization above to defend, why do you value it?

From your explanation I gather that you first decided it's a good value to have, and then constructed an emotional justification to make it easier for you to have that value. But where does it come from? (Remember that as far as your subconscious is concerned, it's just a nice value to signal, since I presume you've never had to act on it - far mode thinking, if I remember the term correctly).

Comment author: Jack 10 November 2009 07:10:09AM 1 point [-]

Extending empathy to those whom I can't actually see just seems like the obvious thing to do since the fact that I can't see their faces doesn't appear to me to be a morally relevant feature of my situation and I know that if I could see them I would empathize.

So I'm not constructing an emotional justification post hoc so much as thinking about why anyone matters to me and then applying those reasons consistently.

Comment author: whpearson 08 November 2009 10:16:04PM 3 points [-]

There are two possible answers to this.

One is the raw emotion, it seems right in a wordless fashion. Why do people risk their lives to save an unrelated child, as fire fighters do? Saving the human race from extinction seems like the epitome of this ethic.

Then there is the attempt to find a rationale for this feeling, the number of arguments I have had with myself to give some reason to why I might feel this way. Or at least why it is not a very bad idea to feel this way.

My view of identity is something like the idea of genetic relatedness. If someone made an atom level copy of you, that'd be the same person pretty much right? Because it shares the same beliefs, desires and view point on the world. But most humans share some beliefs and desires. From my point of view, that you share some interest or way of thinking with me, makes you a bit of me and vice versa, not a large amount but some. We are identity kin as well as all sharing lots of the same genetic code (as we do with animals). So even if I die parts of me are in everyone even if not as obvious as they are with my friends. We are all mental descendants of Newton and Einstein and share that heritage. Not all things about humanity (or about me) are to be cherished, so I do not preach universal love and peace. But wiping out humanity would remove all of those spread out bits of me.

Making self-sacrifice easier is the fact that I'm not sure that me surviving as a posthuman will preserve much of my current identity. In some way I hope it doesn't, as I am not psychologically ready for grown-up (on the cosmic scale) choices but I wish to be. In other ways I am afraid that things of value will be lost that don't need to be. But from any view I don't think it matters that much who will become the grown-ups. So my own personal continuity through the ages does not seem as important as the survival.

I think my friends would also share the same wordless emotion to save humanity, but not the odd wordy view of identity I have.

Comment author: DanArmak 09 November 2009 09:39:18PM 0 points [-]

One is the raw emotion, it seems right in a wordless fashion. Why do people risk their lives to save an unrelated child, as fire fighters do? Saving the human race from extinction seems like the epitome of this ethic.

There are two relevant differences between this and wanting to prevent the extinction of humankind. One is, as I told Jack, that emotions only work for small amounts of people you can see and interact with personally; you can't really feel the same kind of emotions about humanity.

The other is people have all kinds of irrational, suboptimal, bug-ridden heuristics for taking personal risks; for instance the firefighter might be confident in his ability to survive the fire, even though a lot of the danger doesn't depend on his actions at all. That's why I prefer to talk about incurring a certain penalty, like killing one guy to save another, rather than taking a risk.

From my point of view, that you share some interest or way of thinking with me, makes you a bit of me and vice versa, not a large amount but some.

I understand this as a useful rational model, but I confess I can't identify with this way of thinking at all on an emotional level.

What importance do you attach to actually being you (the subjective thread of experience)? Would you sacrifice your life to save the lives of two atomically precise copies of you that were created a minute ago? If not two, how many? In fact, how could you decide on a precise number?

But from any view I don't think it matters that much who will become the grown ups. So my own personal continuity through the ages does not seem as important as the survival.

Personal continuity, in the sense of subjective experience, matters very much to me. In fact it probably matters more than the rest of the universe put together.

If Omega offered me great riches and power - or designing a FAI singleton correctly, or anything I wanted - at the price of losing my subjective experience in some way (which I define to be much the same as death, on a personal level) - then I would say no. How about you?

Comment author: Kaj_Sotala 08 November 2009 12:31:46PM 0 points [-]

What's a brief but effective way to respond to the "an AI, upon realizing that it's programmed in a way its designer didn't intend to, would reprogram itself to be like the designer intended" fallacy? (Came up here: http://xuenay.livejournal.com/325292.html?thread=1229996#t1229996 )

Comment author: Vladimir_Nesov 08 November 2009 02:46:14PM 3 points [-]

I hope I'm not misinterpreting again, but this is a Giant cheesecake fallacy. The problem is that AI's decisions depend on its motive. "An AI, upon realizing that it's programmed in a way its designer didn't intend to, would try to convince the programmer that what the AI turned out to be is exactly what he intended in the first place", "An AI, upon realizing that it's programmed in a way its designer didn't intend to, would print a string "Styggron" to the console".

Comment author: Kaj_Sotala 08 November 2009 07:31:25PM 0 points [-]

Thanks, that's a good one. I'll try it.

Comment author: Cyan 08 November 2009 01:33:06PM 0 points [-]

How about: an AI can be smart enough to realize all of those things, and it still won't change its utility function. Then link Eliezer's short story about that exact scenario. (Can't find it in two minutes, but it's the one where the dude wakes up with a construct designed to be his perfect mate, and he rejects her because she's not his wife.)

Comment author: Eliezer_Yudkowsky 08 November 2009 07:54:18PM 2 points [-]
Comment author: Yorick_Newsome 08 November 2009 02:46:17AM *  2 points [-]

I'd like to ask a moronic question or two that aren't immediately obvious to me and probably should be. (Please note, my education is very limited, especially procedural knowledge of mathematics/probability.)

If I had to guess what the result of a coin flip would be, what confidence would I place in my guess? 50% because that's the same as the probability of me being correct, or 0% because I'm just randomly guessing between 2 outcomes and have no evidence to support either (well, I guess there being only 2 outcomes is some kind of evidence)?

Likewise with a lottery. Would I place my confidence level (interval? I don't know the terminology) of winning at 0% or 1/6,000,000? Or some other number entirely?

If this is something I could easily have figured out with Google or Wikipedia, my apologies. Also if my question is incoherent or flawed please let me know.

Comment author: saturn 08 November 2009 05:48:19PM *  1 point [-]

In the context of most discussions on this site, "confidence" is the probability that a guess is correct. For example:

  • I guess that a flipped coin will land heads. My confidence is 1/2, because I have arbitrarily picked 1 out of 2 possible outcomes.
  • I guess that, when a coin is flipped repeatedly, the ratio of heads will be close to half. My confidence is close to 1, because I know from experience that most coins are fair (and the law of large numbers).

"Confidence interval" is just confidence that something is within a certain range.

You should also be aware that in the context of frequentism (most scientific papers), these terms have different and somewhat confusing technical definitions.
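saturn's second bullet, that confidence in "the ratio of heads will be close to half" approaches 1 as flips accumulate, can be checked directly (an editorial sketch, assuming a fair simulated coin; not part of the original comment):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Simulate a large number of fair coin flips.
flips = [random.random() < 0.5 for _ in range(100_000)]
ratio = sum(flips) / len(flips)

# By the law of large numbers, the ratio of heads converges toward 0.5,
# so near-certain confidence in "close to half" is justified even though
# confidence in any single flip's outcome stays at 1/2.
```

The contrast between the two bullets is the whole point: the same coin supports 1/2 confidence about one flip and near-1 confidence about the long-run ratio.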

Comment author: RichardKennaway 08 November 2009 08:40:00AM *  1 point [-]

You might want to look at Dempster-Shafer theory, which is a generalisation of Bayesian reasoning that distinguishes belief from probability. It is possible to have a belief of 0 in heads, 0 in tails, and 1 in {heads,tails}.

It may be that, when looked at properly, DS theory turns out to be Bayesian reasoning in disguise, but a brief google didn't turn up anything definitive. Is anyone here more informed on the matter?
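The heads/tails example in the comment above, belief 0 in heads, 0 in tails, and 1 in {heads, tails}, can be written out concretely (an editorial sketch of belief and plausibility under a single mass function; not a full Dempster–Shafer implementation, and not from the original comment):

```python
# Total ignorance: all mass is assigned to the whole frame {heads, tails}.
mass = {frozenset({"heads", "tails"}): 1.0}

def belief(hypothesis):
    # Sum of masses of focal sets wholly contained in the hypothesis.
    return sum(m for s, m in mass.items() if s <= hypothesis)

def plausibility(hypothesis):
    # Sum of masses of focal sets that intersect the hypothesis.
    return sum(m for s, m in mass.items() if s & hypothesis)

heads = frozenset({"heads"})
# belief(heads) is 0 while plausibility(heads) is 1: ignorance is
# distinguished from a 50/50 probability assignment, where both would be 0.5.
```

This is the sense in which DS theory separates "no evidence either way" from "equal evidence both ways", which a single Bayesian probability cannot express without a prior.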

Comment author: Yorick_Newsome 08 November 2009 09:32:21AM 0 points [-]

After looking at the reasoning in that article I was about to credit myself with being unintentionally deep, but I'm pretty sure that when I posed the question I was assuming a fair coin for the sake of the problem. Doh. Thanks for the interesting link.

(It's really kind of embarrassing asking questions about simple probability amongst all the decision theories and Dutch books and priors and posteriors and inconceivably huge numbers. Only way to become less wrong, I suppose.)

Comment author: Psy-Kosh 08 November 2009 03:10:42AM 4 points [-]

Think of the probability you assign as a measure of how "not surprised" you would be at seeing a certain outcome.

Total probability of all mutually exclusive possibilities has to add up to 1, right?

So if you would be equally surprised at heads or tails coming up, and you consider all other possibilities to be negligible (Or you state your prediction in terms of "given that the coin lands such that one face is clearly the 'face up' face....") then you ought assign a probability of 1/2 to each. (Again, slightly less to account for various "out of bounds" options, but in the abstract, considered on its own, 1/2)

ie, the same probability ought be assigned to each, since you'd be (reasonably) equally surprised at each outcome. So if the two have to also sum to 1 (100%), then 1/2 (50%) is the correct amount of belief to assign.

Comment author: Alicorn 08 November 2009 05:51:01PM 1 point [-]

Surprise is not isomorphic to probability. See this.

Comment author: Yorick_Newsome 08 November 2009 03:32:02AM 1 point [-]

Ah, that makes a lot more sense: I was looking at the probability from the viewpoint of my guess (i.e. heads) instead of just looking at the all outcomes equally (no privileged references guesses), if you take my meaning. I also differentiated confidence in my prediction from the chance of my prediction being correct. How I managed to do that, I have no idea. Thanks for the reply.

Comment author: Psy-Kosh 08 November 2009 06:17:20AM 0 points [-]

Well, maybe you were thinking about "how confident am I that this is a fair coin vs that it's biased toward heads vs that it's biased toward tails" which is a slightly different question.

Comment author: wedrifid 08 November 2009 05:47:53AM 0 points [-]

I also differentiated confidence in my prediction from the chance of my prediction being correct. How I managed to do that, I have no idea.

Given how 'confidence' is used in a social context that differentiation would feel quite natural.

Comment author: Furcas 07 November 2009 11:53:52PM *  0 points [-]

Paul Almond has written a new article, Launching anything is good: How Governments Could Promote Space Development. I don't know how realistic his proposal is, but I can't find any flagrant logical error in it.

Comment author: DanArmak 06 November 2009 10:38:30PM *  4 points [-]

A friend asked me a question I'd like to refer to LW posters.

TL;DR: he wishes to raise the quality of life on Earth; what should he study to have a good idea of choosing the best charities to donate to?

My friend has a background in programming, physics, engineering, and information security and cryptography. He's smart, he's already financially successful, has friends who are also likely to become successful and influential, and he's also good at direct interactions with people, reading and understanding them and being likable - about as good as I am capable of recognizing, which doesn't mean that much because my own skills in this area are sadly lacking. A solution involving taking courses or whole degree plans in major Israeli universities (in particular, TAU) would suit him well but is by no means the only option.

He wants to spend time, perhaps as much as a small, part-time 3-year bachelor's degree (or at-home equivalent), learning and understanding about larger groups of people. What makes them happy? How to influence their values? How to go from helping a person ("he's hungry, I'll need some fish and chips to feed him") to helping a million people ("they're hungry, I'll need some farms to grow the food and trucks to move it and refrigerators to store it and power stations to power the refrigerators and coal for the power stations and political stability and...")?

And in the bottom line, how to learn enough general knowledge to identify what people are most suffering from; then learn enough specific knowledge to identify where good solutions exist; and then learn some very specific knowledge to identify charities and investments that will make the best use of donated money?

There is also a second, complementary question: how to do all this, and integrate the learning and the knowledge into his life, effectively - without risking boredom, akrasia and other motivational issues? I feel that it would help for this education to have a good outline plan from the beginning; for him to feel things are useful and are progressing somewhere; and to have results come in gradually and not all at once in three years' time.

One immediate answer is to suggest things that concern the LW/H+ community, such as FAI research, biological immortality, etc. My friend may come to these conclusions and I can recommend to him to read the relevant articles and books, but he wants to come to his own conclusions about goals & needs. (Edited:) (A problem with e.g. FAI research is the extreme difficulty of estimating the return on investment for funding it, or the relative probability of uFAI vs. other extinction scenarios.) I think he would benefit from something that also feels emotionally right through seeing people who are hurting and in need (or, at least, reading well-written stories about them). He will also want to come to his own conclusions about whom to help first, likely quite far from any neutral approach that weighs all humans on the planet equally.

Comment author: wedrifid 08 November 2009 05:55:46AM 0 points [-]

I don't believe he'd be satisfied with any conclusion resting purely on thinking ("un-Friendly AI is an imminent existential risk, therefore FAI research is an overriding priority"); I think he needs something that also feels emotionally right through seeing people who are hurting and in need (or, at least, reading well-written stories about them).

he wishes to raise the quality of life on Earth; what should he study to have a good idea of choosing the best charities to donate to?

He could start with shut up and multiply. (Or, perhaps he could just change 'best' to 'most appealing'.)

Comment author: DanArmak 08 November 2009 05:03:43PM 0 points [-]

Rereading what I wrote, I don't quite agree with it myself... I retract that part (will edit).

What I wanted to say (and did not in fact say) was this. To take the example of FAI research - it's hard to measure or predict the value of giving money to such a cause. It doesn't produce anything of external value for most of its existence (until it suddenly produces a lot of value very rapidly, if it succeeds). It's hard to measure its progress for someone who isn't at least an AI expert. It's very hard to predict the FAI research team's probability of success (as with any complex research). And finally, it's hard to evaluate the probability of uFAI scenarios vs. the probability of other extinction risks.

If some of these could be solved, I think it would be a lot easier to convince people to fund FAI research.

Comment author: Johnicholas 06 November 2009 06:09:16PM 0 points [-]

I have a question for the members of LW who are more knowledgable than me in quantum mechanics and theories of quantum mechanics's relevance to consciousness.

There are examples of people having exactly the same conversation repeatedly (e.g. due to transient global amnesia). Is this evidence against quantum mechanics being crucial to consciousness?

Comment author: RobinZ 06 November 2009 10:59:00PM *  1 point [-]

Wait, I think I know what the question is, now. Yes, this thing seems to suggest that human thinking is well-approximated as deterministic - a hypothesis which matches what I've heard elsewhere. Off the top of my head:

  • I once read a story about a guy being offered lunch several times in a row and accepting again and again and again in similar terms until his stomach felt "tightish".

  • There was a family friend taking sleeping medication which was known to cause sleepwalking, and she had an entire phone conversation with her friend in her sleep - and then called the same friend after waking up planning to discuss the same things.

Of course, the typical quantum-mechanical stories of consciousness are far too vague to be falsified by this or any other evidence.

Edit: As Nick Tarleton cogently points out, this is an exaggeration - it is certainly falsifiable in the way phlogiston or elan vital is falsifiable, by the production of a complete correct theory, and it is further so by e.g. uploading.

Comment author: Nick_Tarleton 06 November 2009 11:42:47PM 1 point [-]

Of course, the typical quantum-mechanical stories of consciousness are far too vague to be falsified by this or any other evidence.

They could be falsified by successful classical uploading or an ironclad argument for the impossibility of coherence in the brain (among other things); furthermore, I think most of their proponents who are actual scientists would accept such a falsification.

Comment author: RobinZ 07 November 2009 12:32:47AM 0 points [-]

You're right, of course - editing in a note.

Comment author: Eliezer_Yudkowsky 06 November 2009 09:42:22PM 4 points [-]

Thermal noise dominates quantum noise anyway. I suppose it argues that if you don't depend on thermal noise then you don't depend on quantum noise either, but the Penrosian types claim it's not really random anyway.

Comment author: Jack 06 November 2009 08:49:22PM 0 points [-]

I don't think anyone holds that human behavior is always undetermined in the way particles are. The reason no one holds that view is that it would contradict the work of neuroscientists, the people, you know, actually making progress on these questions.

Comment author: RobinZ 06 November 2009 08:21:18PM 0 points [-]

There are examples of people having exactly the same conversation repeatedly (e.g. due to transient global amnesia).

Citations?

Comment author: Johnicholas 06 November 2009 08:47:26PM *  1 point [-]

I can't find the link because of censorship on my work computer, but there was a description of orgasm-induced transient global amnesia that made the rounds recently.

Google: orgasm transient global amnesia

Comment author: RobinZ 06 November 2009 09:16:03PM *  0 points [-]

That's an odd phenomenon, but I don't think that it, specifically, is especially relevant to quantum mechanics' relevance to consciousness. The chief problem with the proposals that quantum mechanics is directly involved in consciousness is that they constitute mysterious answers to a mysterious question.

Comment author: Zachary_Kurtz 06 November 2009 09:27:59PM 1 point [-]

The only reference on google related to "transient global amnesia" and quantum is this thread (third link down).

Comment author: Johnicholas 07 November 2009 05:54:38PM *  1 point [-]

This is the story in the news. Some may prefer the paper itself.

Comment author: Vladimir_Nesov 06 November 2009 06:39:30PM 0 points [-]

I'm surprised to hear this question from you. Does this comment mean that you seriously consider this quantum consciousness woo? Why on Earth?

Comment author: Johnicholas 06 November 2009 06:48:50PM 0 points [-]

No, I'm just looking for solid evidence-based arguments against it that don't actually depend on me knowing lots of QM.

Comment author: Vladimir_Nesov 06 November 2009 07:06:20PM 2 points [-]

In that case you need killer evidence, something to take back an insane leap of privileging the hypothesis, not some vague argument around amnesia.

Comment author: Nick_Tarleton 06 November 2009 06:26:16PM 2 points [-]

It's evidence against chaotic or random processes being important, but quantum computing needn't mean random (i.e. high variance) results; AFAIK, it can in principle be made highly predictable.

Comment author: Alicorn 06 November 2009 02:24:48AM *  1 point [-]

I remember well enough to describe, but apparently not well enough to Google, a post or possibly a comment that said something to the effect that one should convince one's opponents with the same reasoning that one was in fact convinced by (rather than by other convenient arguments, however cogent). Can anyone help me find it?

Comment author: Zack_M_Davis 06 November 2009 02:38:29AM 1 point [-]

You're probably thinking of "A Rational Argument" or "Back Up and Ask Whether, Not Why".

Comment author: Alicorn 06 November 2009 02:42:56AM 0 points [-]

Neither of those look quite like it...

Comment author: RobinZ 06 November 2009 03:39:37AM *  0 points [-]

I was reminded of The Bottom Line, for what that's worth, although I see both "A Rational Argument" and "Back Up and Ask Whether, Not Why" link back to it.

Comment author: Alicorn 06 November 2009 03:49:35AM 0 points [-]

This looked like it might be it for a while, but I have the memory of the statement being made pretty directly, not just stabbed at sideways.

Comment author: Zack_M_Davis 06 November 2009 04:02:17AM 2 points [-]

The last paragraph of "Back Up" seems fairly explicit.

... "Singularity Writing Advice" points six and seven?

Comment author: Alicorn 06 November 2009 04:08:57AM 0 points [-]

Oh, the writing advice looks very much like what I remember - but I'm almost positive I haven't come across the particular document before! Perhaps some of the same prose was reused elsewhere?

Comment author: komponisto 06 November 2009 04:51:17AM *  0 points [-]

Eliezer has been known to recycle text from old documents on occasion. (I'm thinking of certain OB posts having to do with a Toyota Corolla and Deep Blue vs. Kasparov, which contain material lifted from here and here respectively.)

Comment author: Kutta 05 November 2009 08:37:14PM *  0 points [-]

Does anyone know when the 2009 Summit videos will be available?

Comment author: Zachary_Kurtz 05 November 2009 08:44:36PM 2 points [-]
Comment author: Kutta 06 November 2009 09:08:30AM 0 points [-]

Oh, thank you very much!

Comment author: Zachary_Kurtz 06 November 2009 02:40:55PM 0 points [-]

no problem

Comment author: RolfAndreassen 05 November 2009 08:28:43PM 0 points [-]

So I got into an argument with a theist the other day, and after a while she posted this:

It's not about evidence.

Nu, talk about destroying the foundation for your own beliefs... Escher drawings, indeed.

Comment author: Jack 05 November 2009 08:56:19PM 0 points [-]

What did she say it was about?

Comment author: RolfAndreassen 05 November 2009 09:32:29PM 0 points [-]

Faith, I think.

Comment author: olimay 05 November 2009 07:20:20PM *  0 points [-]

Meetup listing in Wiki? MBlume created a great Google Calendar for meetups. How about some sort of rudimentary meetup "register" in the LW Wiki? I volunteer to help with this if people think it's a good idea. Thoughts? Objections?

ETA: The GCal is great for presenting some information, but I think something like a Wiki page might be more flexible. I'm especially curious to hear opinions from people who are organizing regular meetups, how that's going, and interest in maintaining a Wiki page.

ETA++: AndrewKemendo has a more complex, probably more useful idea that I passed over in my overcaffeinated eagerness.

Comment author: FeministX 05 November 2009 02:28:10AM 2 points [-]

Hi, I have never posted on this forum, but I believe that some Less Wrong readers read my blog, FeministX.blogspot.com.

Since this at least started out as an open thread, I have a request of all who read this comment, and an idea for a future post topic.

On my blog, I have a topic about why some men hate feminism. The answers are varied, but they include a string of comments back and forth between anti feminists and me. The anti feminists accuse me of fallacies, and one says that he "clearly" refuted my argument. My interpretation is that my arguments were more logically cogent than the anti feminists' and that they did not correctly identify logical fallacies in my comments, nor did they comprehensively refute anything I said. They merely decided that they won the debate.

Now, the issue is that when there is an argument between feminists and anti feminists on the internet, the feminists will believe that other feminists' arguments include more truth and reason while anti-feminists will believe that anti-feminist arguments include more truth and reason. The internet is not a place where people are good at discussing feminism with measured equanimity.

But I wondered, who could be the objective arbiter of a discussion between feminists and anti feminists? Almost anyone has a bias when it comes to this issue. Everyone has a gender, and gender affects a person's thinking style, desires and determination of fairness in assessing behaviors between genders. Where in the world could I find intelligent entities that would not be swayed by gender bias and would instead attempt to seek out objective truth in a "battle of sexes" style discussion.

Well, I am not sure if unbiased people can exist regarding the issue but the closest thing I could think of was Less Wrong. Thus, I invite readers of Less Wrong to contribute to the admittedly inane thread on my blog, Why so much hate?

http://feministx.blogspot.com/2009/11/why-so-much-hate.html

Comment author: Eliezer_Yudkowsky 05 November 2009 01:05:02PM *  11 points [-]

I read through a couple of months worth of FeministX when I first discovered it...

(Because of a particular skill exhibited: namely the ability to not force your self-image into a narrow box based on the labels you apply to yourself, a topic on which I should write further at some point. See the final paragraph of this post on how much she hates sports for a case in point. Most people calling themselves "feminist" would experience cognitive dissonance between that and their self-image. Just as most people who thought of themselves as important or as "rationalists" might have more trouble than I do publicly quoting anime fanfiction. There certainly are times when it's appropriate to experience cognitive dissonance between your self-image and something you want, but most people seem to cast that net far too widely. There is no contradiction, and there should be no cognitive dissonance, between loving and hating the same person, or between being a submissive feminist who wants alpha males, or between being a rationalist engaged on a quest of desperate importance who reads anime fanfiction, etcetera. But most people try to conform so narrowly and so unimaginatively to their own self-image that there is little point in reading anything else they say, because it is all predictable once you know what "role" they're trying to play in their own minds. And among people who are unusually good at not conforming to their own images, their blogs often make for good reading because it is often surprising reading.)

...and I still don't know what is meant by the "feminist" in the title, so I have to agree with all the commenters who asked for a definition of "feminism". Definitions are oft overrated but in this case I literally do not know what is being talked about.

If it were me, I'd probably be saying something to myself along the lines of: "So long as such a large flaw exists in my own work, which I can correct myself without waiting for permission from anyone else, there is no point in asking whether others have done worse." This is by way of encouraging myself to do better, for which purpose it is unwise to focus on other people's flaws as consolation.

EDIT: Finished reading through the comments. Some commenters did better than you, some commenters did worse, e.g. Aretae's separate post gave you good advice. Definitely you've got more to learn about which arguments and evidence license which conclusions at what strength. None of the arguments including yours were noticeably up to LW standards and so there's not much point in trying to figure out who "won". The winners were the commenters who said "I don't know what is meant by 'feminism' here, please define". Some of the others could have carried part of their argument if they had been a bit more careful to say, "Here is something that 'feminism' could be taken to mean, or that many/most men take the label 'feminism' to mean, now I am going to talk about how many/most men react to this particular thing regardless of whether it is what you call 'feminism', and if it isn't, please go ahead and define what you mean by it." That would have been Step One.

Comment author: wedrifid 05 November 2009 10:15:32AM *  1 point [-]

Now, the issue is that when there is an argument between feminists and anti feminists on the internet, the feminists will believe that other feminists arguments include more truth and reason while anti-feminists will believe that anti-feminist arguments include more truth and reason.

What exactly is an anti-feminist? I've never actually met someone who identified as one. Is this more of a label that others apply to them, and if so, what do you mean when you apply it? Is it a matter of 'Feminism, Boo!' vs. 'Yay! Feminism!', or is it an objection to one (or more) ideals of particular import?

Does 'anti-feminist' apply to beliefs about the objective state of the universe, such as the impact of certain biological differences on psychology or social dynamics? Or is it more suitably applied to normative claims about how things should be, including those about the relative status of groups or individuals?

Comment author: gwern 05 November 2009 06:17:57PM 1 point [-]

I think it's only applied by the feminists. Take a look at National Review, a bastion of anti-feminism if ever there was any, and notice how all the usages are by the feminists or fellow travelers or are in clear scare-quotes or other such language: http://www.google.com/search?hl=en&num=100&q=anti-feminism+anti-feminist+site%3Anationalreview.com

Comment author: Jack 05 November 2009 09:37:35AM *  3 points [-]

Hi! Feel free to introduce yourself here.

There are a couple general reasons for disagreement.

  1. Two parties disagree on terminal values (if someone genuinely believes that women are inherently less valuable than men, there is no reason to keep talking about gender politics).
  2. Two parties disagree on intermediate values (both might value happiness, but a feminist might believe gender equality to be central to attaining happiness while the anti-feminist thinks gender equality is counterproductive to this goal. It might be difficult for parties to explain their reasoning in these matters, but it is possible).
  3. Two parties disagree about the means to the end (an anti-feminist might think that feminism as a movement doesn't do a good job of promoting gender equality).
  4. Two parties disagree about the intent of one or more parties (a lot of anti-feminists think feminism is a tool for advancing the interests of women exclusively, and that feminists aren't really concerned with gender equality. I don't think you can say much to such people, though it is worth asking yourself why they have that impression... calling yourself a female supremacist will not help matters.)
  5. Two parties disagree about the facts of the status quo (if someone thinks that women aren't more oppressed than men, or that feminists exaggerate the problem, they may have exactly the same view of an ideal world as you do but very different means for getting there. This is a trickier issue than it looks, because facts about oppression are really difficult to quantify. There is a common practice in anti-subordination theory of treating claims of oppression at face value, but this only works if one trusts the intentions of the person claiming to be oppressed.)
  6. One or more parties have incoherent views (you can point out incoherence, not much else).

I think that is more or less complete. As you can see, some disagreements can be resolved, others can't. Talk to the people you can make progress with but don't go in assuming that you're going to convince everyone of your view.

Edit: Formatting.

Comment author: CannibalSmith 05 November 2009 09:06:53AM *  -1 points [-]

Let me be the first to say: welcome to Less Wrong! Please explore the site and stay with us - we need more girls.

Comment author: Eliezer_Yudkowsky 05 November 2009 09:34:15PM *  5 points [-]

I'd quite strongly suggest deleting everything after the hyphen, there.

Comment author: CannibalSmith 06 November 2009 10:50:25AM 1 point [-]
Comment author: wedrifid 08 November 2009 06:10:34AM *  0 points [-]

Verbal symbols are slippery things sometimes.

Comment author: CannibalSmith 08 November 2009 06:26:26PM 1 point [-]

Explain.

Comment author: wedrifid 08 November 2009 09:07:27PM 0 points [-]

No, at least not right now.

Comment author: RobinZ 08 November 2009 09:47:47PM *  0 points [-]

When, if I may be so bold? (Bear in mind that it is not necessary to explain your remark in full generality - just in sufficient detail to justify its presence as a response to CannibalSmith in this instance.)

Comment deleted 08 November 2009 10:01:16PM *  [-]
Comment author: RobinZ 08 November 2009 10:02:01PM 0 points [-]

Fair enough!

Comment author: wedrifid 06 November 2009 12:04:40AM -1 points [-]

Even the bit before the hyphen sounds a little on the needy side.

Comment author: Zack_M_Davis 06 November 2009 12:20:35AM 0 points [-]

And while we're at it, it should really be an em dash, not a hyphen.

Comment author: RobinZ 06 November 2009 12:37:09AM 0 points [-]

En dash - it's surrounded by spaces. And I don't think the reddit engine tells you how to code it. A hyphen is the accepted substitute (for the en dash - two hyphens for an em dash).

Comment author: eirenicon 06 November 2009 12:54:18AM *  2 points [-]

An en dash is defined by its width, not the spacing around it. In fact, spacing around an em dash is permitted in some style guides. On the internet, though, the hyphen has generally taken over from the em dash (an en dash should not be used in that context).

Now, two hyphens—that's a recipe for disaster if I've ever heard one.

Comment author: RobinZ 06 November 2009 01:31:27AM 0 points [-]

Hey, I like double-hyphens as em-dash substitutes!

...but yeah, you're right otherwise.

Comment author: FeministX 05 November 2009 09:37:32PM -1 points [-]

Why?

Comment author: CannibalSmith 05 November 2009 11:43:21PM *  1 point [-]

What did you think when you first saw my "we need more girls" remark?

Comment author: FeministX 06 November 2009 01:21:16AM 1 point [-]

I found it flattering.

Comment author: Eliezer_Yudkowsky 05 November 2009 10:26:45PM 2 points [-]

Because advertising your lack of girls is not viewed by the average woman as a hopeful sign. (Heck, I'd think twice about any online site that advertised itself with "we need more boys".)

Also, the above point should be sufficiently obvious that a potential female reader would look at that and justifiably think "This person is thinking about what they want and not thinking about how I might react" which isn't much of a hopeful sign either.

Comment author: Alicorn 05 November 2009 10:33:56PM *  5 points [-]

I'm probably non-average, but I'm ambivalent about hearing "we need more girls" from any community that's generally interesting. The first question that I think of is "why don't they have any?", but as long as it's not obvious to me why there are not presently enough girls had by a website and it's easy to leave if I find a compelling reason later, my obliging nature would be likely to take over. Also, saying "we need more girls" does advertise the lack of girls - but it also advertises the recognition that maybe that's not a splendid thing. Not saying it at all might signify some kind of attempt at gender-blindness, but it could also signify complacency about the ungirly ratio extant.

I hear "we need more girls" from my female classmates about our philosophy department.

Comment author: RobinZ 05 November 2009 10:44:36PM *  4 points [-]

We also hear this kind of thing online, in the atheism community.

To sum up the convo, then, it seems like:

  • the "too many dicks on the dance floor" attitude isn't particularly attractive, but

  • the honest admission that there aren't many female regulars, and that we'd like the input of women on the issues which we care about, is perfectly valid.

The rest of it is our differing levels of charity in interpreting CannibalSmith's remarks.

Comment author: Eliezer_Yudkowsky 05 November 2009 10:38:09PM 2 points [-]

I hear "we need more girls" from my female classmates about our philosophy department.

As with so many other remarks, this carries a different freight of meaning when spoken by a woman to a woman.

Comment author: Alicorn 05 November 2009 10:40:53PM *  2 points [-]

I think I don't hear it from my male classmates because they aren't alert to this need. I would be pleased to hear one of them acknowledge it. This may have something to do with the fact that I'd trust most of them to be motivated by something other than a desire for eye candy or dating opportunities, though, if they did express this concern.

Comment author: FeministX 05 November 2009 10:56:37PM 1 point [-]

"I think I don't hear it from my male classmates because they aren't alert to this need. I would be pleased to hear one of them acknowledge it."

Why do you feel there is a need for more female philosophy students in your department?

Comment author: Alicorn 05 November 2009 11:07:56PM 3 points [-]

I think a more balanced ratio would help the professors learn to be sensitive to the different typical needs of female students (e.g. decrease reliance on the "football coach" approach). Indirectly, more female students means more female Ph.Ds means more female professors means more female philosophy role models means more female students, until ideally contemporary philosophy isn't so terribly skewed. More female students would also increase the chance that there would be more female philosophers outside the typical "soft options" (history and ethics and feminist philosophy), which would improve the reception I and other female philosophers would get when proposing ideas on non-soft topics like metaphysics because we'd no longer look atypical for the sort of person who has good ideas on metaphysics.

Comment author: Vladimir_Nesov 05 November 2009 10:08:24PM 0 points [-]

Inapt.

Comment author: wedrifid 08 November 2009 06:28:09AM 0 points [-]

That's five divs, which means it is a reply to, let's see...

Comment author: FeministX 05 November 2009 05:03:11AM 3 points [-]

The discussion here helped me reanalyze my own attitude towards this kind of issue.

I don't think I ever had a serious intention to back up my arguments or win a debate when I posted on the issue of why men hate feminism. I am not sure what to do when faced with the extreme anti-feminism that I commonly find on the internet. I have a number of readers on my blog who will make totalizing comments about all women or all feminists. E.g., one commenter said that women have no ability to sustain interest in topics that don't pertain to relationships between individuals. Other commenters say that feminism will lead to the downfall of civilization, for reasons including that it lets women pursue their fleeting sexual impulses, which are destructive.

I suppose I do not really know how to handle this attitude. Ordinarily, I ignore them, since I operate under the assumption that people who espouse such viewpoints are not prone to being swayed by any argument. They are attached to their bias, in a sense. I am not sure if it is possible for a feminist to have a reasonable discussion with a person who is anti-feminist and who hates nearly all aspects of feminism in the Western world.

Comment author: bogus 06 November 2009 12:20:23AM *  0 points [-]

I am not sure what to do when faced with the extreme anti-feminism that I commonly find on the internet.

If these commenters are foolish enough to disparage and denigrate any political role to women generally, then do them a favor and flame them to a crisp. If that's not enough to drive them off your site, then feel free to ban them.

These are thinly-veiled attempts at intimidation which are reprehensible in the extreme, and will not be taken lightly by anyone who cares seriously about any kind of politics other than mere alignment to power and privilege--which is most everyone in this day and age. Especially so when coming from people of a Western male background--who are thus embedded in a complex power structure rife with systemic biases, one which discriminates against all kinds of minority groups.

Simply stated, you don't have to be nice to these people. Quite the opposite, in fact. Sometimes that's all they'll understand.

Comment author: ShardPhoenix 05 November 2009 11:44:37AM *  3 points [-]

Personally I'd say you shouldn't "be a feminist" at all. Have goals (whether relating to women's rights or anything else) and try to find the best ways to reach them. Don't put a political label on yourself that will constrain your thinking and/or be socially and emotionally costly to change. Though given that you seem to have invested a lot of your identity in feminism it's probably already hard to change.

Comment author: Eliezer_Yudkowsky 05 November 2009 09:32:36PM 2 points [-]

Don't put a political label on yourself that will constrain your thinking and/or be socially and emotionally costly to change.

As mentioned above, this particular person does seem unusually good at not being so constrained.

Comment author: wedrifid 05 November 2009 12:23:52PM 4 points [-]

Personally I'd say you shouldn't "be a feminist" at all.

Shouldn't? According to which utility function? There are plenty of advantages to taking a label.

Comment author: ShardPhoenix 07 November 2009 12:52:36PM 2 points [-]

Yes, there are obvious advantages to overtly identifying with some established group, but if you identify too strongly and become a capital-F Feminist (or a capital D-Democrat, or even a capital-R Rationalist) there's a real danger that conforming to the label will get in the way of actually achieving your original goals.

It's analogous to the idea that you shouldn't use dark side methods in the service of rationality - ie that you shouldn't place too much trust in your own ability to be virtuously hypocritical.

Comment author: Larks 05 November 2009 08:25:00PM 0 points [-]

Advantages to outwardly signalling group loyalty, perhaps, but to internal self-identification?

Comment author: CannibalSmith 05 November 2009 08:40:28AM *  2 points [-]

I am not sure what to do when faced with the extreme anti-feminism that I commonly find on the internet.

Ban them.

Comment author: DanArmak 05 November 2009 07:18:16AM 2 points [-]

It's almost certainly not possible for you to have a discussion about feminism with such a person.

I haven't read your blog, but perhaps you should reconsider the kind of community of readers you're trying to build there. If you tend to attract antifeminist posters, and you don't also attract profeminist ones who help you argue your position in the comments, that sounds like a totally unproductive community and you might want to take explicit steps to remodel it, e.g. by changing your posts, controlling the allowed posters, or starting from scratch if you have to.

Comment author: Zack_M_Davis 05 November 2009 04:25:40AM 6 points [-]

Everyone has a gender, and gender affects a person's thinking style, desires and determination of fairness in assessing behaviors between genders.

*winces* So, I agree that no one is competent and everyone has an agenda, but it's not as if everyone sides with "their" sex.

Well, I am not sure if unbiased people can exist regarding the issue but the closest thing I could think of was Less Wrong.

No, historically we suck at this, too. Got any decision theory questions?

Comment author: FeministX 05 November 2009 04:46:16AM 1 point [-]

"*winces* So, I agree that no one is competent and everyone has an agenda, but it's not as if everyone sides with 'their' sex."

I didn't mean to imply that they did always side with their physical sex.

Comment author: LucasSloan 05 November 2009 05:07:13AM 7 points [-]

Why do you expect the discussion of gender roles and gender equality to necessarily break down into a camp for men and a camp for women? By creating two groups, you have engaged mental circuitry that will predispose you to dismissing their arguments when they are correct and supporting your own side's even when they are wrong.

http://lesswrong.com/lw/lt/the_robbers_cave_experiment/

http://lesswrong.com/lw/gw/politics_is_the_mindkiller/

Comment author: FeministX 05 November 2009 05:16:45AM -1 points [-]

"Why do you expect the discussion of gender roles and gender equality to necessarily break down into a camp for men and a camp for women?"

I don't personally think this. I don't think there are two genders. There are technically more than two physical sexes, even if we categorize the intersexed as separate. I feel that, either out of cultural conditioning or instinct, the bulk of people push a discussion about gender into a discussion about stereotypical behaviors by men and by women. This then devolves into a "battle of the sexes" issue where the "male" perspective and "female" perspective are constructed so that they must clash.

However, on my thread, there are a number of people who seem to have no qualms with the idea of barring women from voting and such things. I think that sort of opinion goes beyond the point where one could say that an issue was framed to set up a camp for men and a camp for women. Once we are talking about denying functioning adults suffrage, we are talking about an attitude which should properly be labelled anti-female.

Comment author: loqi 05 November 2009 06:05:17AM 8 points [-]

However, on my thread, there are a number of people that seem to have no qualms with the idea of barring female voting and such things.

On the internet, emotional charge attracts intellectual lint, and there are plenty of awful people to go around. If you came here looking for a rational basis for your moral outrage, you will probably leave empty-handed.

But I don't think you're actually concerned that the person arguing against suffrage is making any claims with objective content, so this isn't so much the domain of rational debate as it is politics, wherein you explain the virtue of your values and the vice of your opponents'. Such debates are beyond salvage.

Comment author: FeministX 05 November 2009 06:24:46AM 2 points [-]

I saw that Eliezer posts that politics are a poor field for honing rational discussion skills. It is unfortunate that anyone should see a domain such as politics as a place where discussions are inherently beyond salvage. It's a strange limitation to place on the utility of reason to say that it should be relegated to domains which have less immediate effect on human life. Politics are immensely important. Should it not be a priority to structure rational discussion so that there are effective ways of correcting for the propensity to rely on bias, partisanship, and other impulses which get in the way of determining the truth or the best available course?

If rational discussion only works effectively in certain domains, perhaps it is not well developed enough to succeed in ideologically charged domains where it is badly needed. Is there definitely nothing to be gained from attempting to reason objectively through a subject where your own biases are most intense?

Comment author: bogus 05 November 2009 05:02:33PM *  0 points [-]

It is unfortunate that anyone should see a domain such as politics as a place where discussions are inherently beyond salvage.

I agree with your assessment, but applying our skills to the political domain is very much an open problem--and a difficult one at that. See these wiki pages: [Mind-killer] and [Color politics] for a concise description of the issue. The gist of it is that politics involves real-world violence, or the governmental monopoly thereof, or something which could involve violence in the ancestral environment and thus misleads our well-honed instincts. Thus, solving political conflicts requires specialized skills, which are not what LessWrong is about.

Nevertheless, there are a number of so-called open politics websites which are more focused on what you're describing here. I'd like to see more collaboration between that community and the LessWrong/debiasing/rationality camp.

Comment author: loqi 05 November 2009 07:18:26AM *  2 points [-]

It's a strange limitation to place on the utility of reason to say that it should be relegated to domains which have less immediate effect on human life.

It's not so strange if you believe that reason isn't a sufficient basis for determining values. It allows for arguments of the form, "if you value X, then you should value Y, because of causal relation Z", but not simply "you should value Y".

If rational discussion only works effectively in certain domains, perhaps it is not well developed enough to succeed in ideologically charged domains where it is badly needed.

Debates fueled by ideology are the antithesis of rational discussion, so I consider its "ineffectiveness" in such circumstances a feature, not a bug. These are beyond salvage because the participants aren't seeking to increase their understanding, they're simply fielding "arguments as soldiers". Tossing carefully chosen evidence and logical arguments around is simply part of the persuasion game. Being too openly rational or honest can be counter-productive to such goals.

Is there definitely nothing to be gained from attempting to reason objectively through a subject where your own biases are most intense?

That depends on what you gain from a solid understanding of the subject versus what you lose in sanity if you fail to correct for your biases as you continue to accumulate "evidence" and beliefs, along with the respective chances of each outcome. As far as I can tell, political involvement tends to make people believe crazy things, and "accurate" political opinions (those well-aligned with your actual values) are not that useful or effective, except for signaling your status to a group of like-minded peers. Politics isn't about policy.

Comment author: DanArmak 05 November 2009 07:14:00AM 4 points [-]

It's a strange limitation to place on the utility of reason to say that it should be relegated to domains which have less immediate effect on human life. Politics are immensely important.

One of the points of Eliezer's article, IIRC, is that politics when discussed by ordinary people indeed tends not to affect anything except the discussion itself. Political instincts evolved from small communities where publicly siding with one contending leader, or with one policy option, and then going and telling the whole 100-strong tribe about it really made a difference. But today's rulers of nations of hundreds of millions of people can't be influenced by what any one ordinary individual says or does. So our political instinct devolves into empty posturing and us-vs-them mentality.

Politics are important, sure, but only in the sense that what our rulers do is important to us. The relationship is one-way most of the time. If you're arguing about things that depend on what ordinary people do - such as "shall we respect women equally in our daily lives?" - then it's not politics. But if you're arguing about "should women have legal suffrage?" - and you're not actually discussing a useful means of bringing that about, like a political party (of men) - then the discussion will tend to engage political instincts and get out of hand.

If rational discussion only works effectively in certain domains, perhaps it is not well developed enough to succeed in ideologically charged domains where it is badly needed. Is there definitely nothing to be gained from attempting to reason objectively through a subject where your own biases are most intense?

There's a lot to be gained from rationally working out your own thoughts and feelings on the issue. But if you're arguing with other people, and they aren't being rational, then it won't help you to have a so-called rational debate with them. If you're looking for rationality to help you in such arguments - the help would probably take the form of rationally understanding your opponents' thinking, and then constructing a convincing argument which is totally "irrational", like publicly shaming them, or blackmailing, or anything else that works.

Remember - rationality means Winning. It's not the same as having "rational arguments" - you can only have those with other rationalists.

Comment author: LucasSloan 05 November 2009 05:31:45AM *  5 points [-]

Yes, those who would deny women suffrage are anti-female. But in order to feel they deserve suffrage, one need not be pro-female. One only need be in favor of human rights.

Comment author: RobinZ 05 November 2009 03:06:11AM 5 points [-]

I hate to say it, but your analysis seems rather thin. I think a productive discussion of social attitudes toward feminism would have to start with a more comprehensive survey of the facts of the matter on the ground - discussion of poll results, interviews, and the like. Even if the conclusion is correct, it is not supported in your post, and there are no clues in your post as to where to find evidence either way.

Comment author: Alicorn 05 November 2009 03:18:23AM *  9 points [-]

Agreed. The post is almost without content (or badly needed variation in sentence structure, but that's another point altogether) - there's no offered reason to believe any of the claims about what anti-feminists say or what justifications they have. No definition of terms - what kind of feminism do you mean, for instance? Maybe these problems are obviated with a little more background knowledge of your blog, but if that's what you're relying on to help people understand you, then it was a poor choice to send us to this post and not another.

I'm tickled that Less Wrong came to mind as a place to go for unbiased input, though.

Comment author: wedrifid 05 November 2009 09:54:10AM 5 points [-]

I'm tickled that Less Wrong came to mind as a place to go for unbiased input, though.

Indeed. And even more so that she seems to be getting it.

Comment author: Jack 05 November 2009 09:59:30AM 7 points [-]

I now have a wonderful and terrible vision of the future in which less wrong posters are hired guns, brought in to resolve disagreements in every weird and obscure corner of the internets.

We should really be getting paid.

Comment author: DanArmak 05 November 2009 10:24:58AM 1 point [-]

How would you stop this from degenerating into a lawyer system? Rationality is only a tool. The hired guns will use their master rationalist skills to argue for the side that hired them.

Comment author: Eliezer_Yudkowsky 05 November 2009 01:18:40PM 5 points [-]

Technically, you cannot rationally argue for anything.

I suppose you could use master rationalist skillz to answer the question "What will persuade person X?" but this relies on person X being persuadable by the best arguer rather than the best facts, which is not itself a characteristic of master rationalists.

The more the evidence itself leans, the more likely it is that a reasonably rational arbiter and a reasonably skillful evidence-collector-and-presenter working on the side of truth cannot be defeated by a much more skillful and highly-paid arguer on the side of falsity.

Comment author: DanArmak 05 November 2009 03:46:28PM *  1 point [-]

A master rationalist can still be persuaded by a good arguer because most arguments aren't about facts. Once everyone agrees about facts, you can still argue about goals and policy - what people should do, what the law should make them do, how a sandwich ought to taste to be called a sandwich, what's a good looking dress to wear tonight.

If everyone agreed about facts and goals, there wouldn't be much of an argument left. Most human arguments have no objective right party because they disagree about goals, about what should be or what is right.

Comment author: Eliezer_Yudkowsky 05 November 2009 05:05:59PM 4 points [-]

One obvious reply would be to hire rationalists only to adjudicate that which has been phrased as a question of simple fact.

To the extent that you do think that people who've learned to be good epistemic critics have an advantage in listening to values arguments as well, then go ahead and hire rationalists to adjudicate that as well. (Who does the hiring, though?) Is the idea that rationalists have an advantage here, enough that people would still hire them, but the advantage is much weaker and hence they can be swayed by highly paid arguers?

Comment author: DanArmak 06 November 2009 12:15:29AM 0 points [-]

One obvious reply would be to hire rationalists only to adjudicate that which has been phrased as a question of simple fact.

If the two parties can agree on the phrasing of the question, then I think it would be better to hire experts in the domain of the disputed facts, with only minimal training in rationality required. (Really, such training should be required to work in any fact-based discipline anyway.)

Is the idea that rationalists have an advantage here, enough that people would still hire them, but the advantage is much weaker and hence they can be swayed by highly paid arguers?

If there's a tradition of such adjudication - and if there's a good supply of rationalists - then people will hire them as long as they can agree in advance on submitting to arbitration. Now, I didn't suggest this; my argument is that if this system somehow came to exist, it would soon collapse (or at least stop serving its original purpose) due to lawyer-y behavior.

Comment author: Jack 05 November 2009 10:42:05AM 3 points [-]

Parties to the dispute can split the cost. Also, if the hired guns aren't seen as impartial there would be no reason to hire them so there would be a market incentive (if there were a market, which of course there isn't). Or we have a professional guild system with an oath and an oversight board. Hah.

Comment author: CannibalSmith 05 November 2009 06:39:29PM 1 point [-]

(if there were a market, which of course there isn't)

What are you talking about, we have our first customer already!

Comment author: Eliezer_Yudkowsky 05 November 2009 01:21:36PM 9 points [-]

Parties to the dispute can split the cost.

Actually, here's a rule that would make a HELL of a lot of sense:

Either party to a lawsuit can contribute to a common monetary pool which is then split between both sides to hire lawyers. It is illegal for either side to pay a lawyer a bonus beyond this, or for the lawyer to accept additional help on the lawsuit.

Comment author: gwern 05 November 2009 06:49:19PM 3 points [-]

And you don't see any issues with this? That would seem to be far worse than the English rule/losers-pay.

I pick a random rich target, find 50 street bums, and have them file suits; the bums can't contribute more than a few flea-infested dollars, so my target pays for each of the 50 suits brought against him. If he contributes only a little, then both sides' lawyers will be the crappiest & cheapest ones around, and the suit will be a diceroll; so my hobos will win some cases, reaping millions, and giving most of it to me per our agreement. If he contributes a lot, then we'll both be able to afford high-powered lawyers, and the suit will be... a diceroll again. But let's say better lawyers win the case for my target in all 50 cases; now he's impoverished by the thousands of billable hours (although I do get nothing).

I go to my next rich target and say, "Sure would be a shame if those 50 hobos you ran over the other day were to all sue you..."
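The expected-value arithmetic behind the attack can be sketched as follows. Every number here is hypothetical (none appear in the thread): a coin-flip chance per suit with cheap lawyers on both sides, made-up damages, and made-up billable fees:

```python
# Hypothetical parameters for the 50-bum lawsuit scheme (all invented):
n_suits = 50          # number of hired plaintiffs
p_win = 0.5           # "diceroll": equally crappy lawyers on both sides
damages = 1_000_000   # award per winning suit
my_cut = 0.9          # share each bum hands over per our agreement
fees_per_side = 20_000  # billable hours per suit, per side

expected_wins = n_suits * p_win
my_expected_take = expected_wins * damages * my_cut
# Under the pooled rule, the target funds BOTH sides' lawyers:
target_expected_loss = expected_wins * damages + n_suits * 2 * fees_per_side

print(f"My expected take:       ${my_expected_take:,.0f}")
print(f"Target's expected loss: ${target_expected_loss:,.0f}")
```

Even if the target wins every case (set `p_win = 0`), he still eats the legal fees for all 100 lawyer engagements, which is the extortion lever in the second scenario.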

Comment author: Oscar_Cunningham 07 November 2009 01:20:53PM 0 points [-]

Surely that only works if the probability of winning a case depends only on the skill of the lawyers, and not on the actual facts of the cases. I imagine a lawyer with no training at all could unravel your plan and make it clear that your hobos had nothing to back up their case.

Also, being English myself, I hadn't realized that the loser-pays rule doesn't apply everywhere. Having no such system at all seems really stupid.

It also occurs to me that hiring expensive lawyers under loser-pays is like trying to fix a futarchy: you don't lose anything if you succeed, but you stand to lose a lot if you fail.

Comment author: eirenicon 05 November 2009 09:12:42PM 0 points [-]

If the defending party is only required to match the litigating party's contribution, the suits will never proceed because the litigating bums can't afford to pay for a single hour of a lawyer's time. And while I don't know if this is true, it makes sense that funding the bums yourself would be illegal.

Comment author: Jordan 05 November 2009 07:16:25PM *  2 points [-]

But let's say better lawyers win the case for my target in all 50 cases; now he's impoverished by the thousands of billable hours (although I do get nothing).

How is this different from how things currently are, beyond a factor of two in cost for the target?

Comment author: DanArmak 05 November 2009 04:43:49PM 2 points [-]

I would contribute nothing to the pool, hire a lawyer privately on the side to advise me, and pass his orders down to the public courtroom lawyer. If I have much more money than the other party, and if money can determine the lawyer's quality and the trial's outcome strongly enough, then even advice and briefs prepared outside the courtroom by my private lawyer would be worth it.

Comment author: Eliezer_Yudkowsky 05 November 2009 04:56:49PM 3 points [-]

Then your lawyer gets arrested.

It is sometimes possible to have laws or guild rules if the prohibited behavior is clear enough that people can't easily fool themselves into thinking they're not violating it. Accepting advice and briefs prepared outside the courtroom is illegal, in this world.

Comment author: RobinZ 05 November 2009 02:12:11PM 1 point [-]

That is frelling brilliant.

Comment author: Alicorn 05 November 2009 03:51:28PM 1 point [-]

Have a karma point for using Farscape profanity.

Comment author: Alicorn 05 November 2009 01:15:21PM 2 points [-]

I would totally join a rationalist arbitration guild. Even if this cut into the many, many bribes I get to use my skills on only one party's behalf ;)

Perhaps records of previous dispute resolutions could be made public with the consent of the disputants, so people can look for arbitrators who show little apparent bias, or bias they can live with?

Comment author: DanArmak 05 November 2009 10:44:33AM 0 points [-]

Please see my reply to wedrifid above.

Comment author: wedrifid 05 November 2009 10:33:46AM *  2 points [-]

More or less, because both sides have to agree to the process. Then the market favours those arbiters that manage to maintain a reputation for being unbiased and fair.

This still doesn't select precisely for rationality. But it degenerates into a different kind of system than a lawyer system does.