by [anonymous]

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. Feel free to rid yourself of cached thoughts by doing so in Old Church Slavonic. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

If you're new to Less Wrong, check out this welcome post.

Open Thread: November 2009
551 comments

I was wondering if Eliezer could post some details on his current progress towards the problem of FAI? Specifically details as to where he is in the process of designing and building FAI. Also maybe some detailed technical work on TDT would be cool.

8cousin_it
This email by Eliezer from 2006 addresses your question about FAI. I'm extremely skeptical that he has accomplished or will accomplish anything at all in that direction, but if he does, we shouldn't expect the intermediate results to be openly published, because half of a friendly AI is a complete unfriendly AI.

Just another example of an otherwise-respectable (though not by me) economist spouting nonsense. I thought you guys might find it interesting, and it seemed too short for a top-level post.

Steven Landsburg has a new book out and a blog for it. In a post about arguments for/against God, he says this:

the most complex thing I’m aware of is the system of natural numbers (0,1,2,3, and all the rest of them) together with the laws of arithmetic. That system did not emerge, by gradual degrees, from simpler beginnings.

If you doubt the complexity of the natural numbers, take note that you can use just a small part of them to encode the entire human genome. That makes the natural numbers more complex than human life.

So how many whoppers is that? Let's see: the maximally compressed encoding of the human genome is not enough data to describe the workings of human life. The natural numbers and the operations on them are extremely simple, because it takes very little to describe how they work; that simplicity is not the same thing as the complexity of a specific model implemented with the natural numbers.

His description of it as emerging all at once is just confused: yes, people use natural numbers to describe... (read more)

1SilasBarta
UPDATE2: Landsburg responds to my criticism on his blog, though without mentioning me :-(
1zaph
I'm probably exposing my ignorance here, but didn't zero have a historical evolution, so to speak? I'm going off vague memories of past reading and a current quick glance at Wikipedia, but it seems like there were separate developments of using placeholders, the concept of nothing, and the use of a symbol, which all eventually converged into the current zero. Seems like the evolution of a number to me. And it may be a just-so story, but I see it as eminently plausible that humans primarily work in base 10 because, for the most part, we have 10 digits, which again would be dictated by the evolutionary process. On his human-life point, if DNA encoding encompasses all of complex numbers (being that it needs that system in order to be described), isn't it then necessarily more complex, since it requires all of complex numbers plus its own set of rules and knowledge base as well? The ban was probably for the best, Silas; you were probably confusing everyone with the facts.
4DanArmak
It sounds like a true story (note etymology of the word "digit"). But lots of human cultures used other bases (some of them still exist). Wikipedia lists examples of bases 4, 5, 8, 12, 15, 20, 24, 27, 32 and 60. Many of these have a long history and are (or were) fully integrated into their originating language and culture. So the claim that "humans work in base 10 because we have 10 digits" is rather too broad - it's at least partly a historical accident that base 10 came to be used by European cultures which later conquered most of the world.
0zaph
That's a good point, Dan. I guess we'd have to check the number of base-10 systems vs. the overall number of systems. Though I would continue to see that as again demonstrating an evolution of complex number theory, as multiple strands joined together as systems interacted with one another. There were probably plenty of historical accidents at work, like you mention, to help bring about the current system of natural numbers.
1SilasBarta
Your recollection is correct: the understanding of math developed gradually. My criticism of Landsburg was mainly that he's not even using a consistent definition of math. And as you note, under reasonable definitions of math, it did develop gradually. Yes, exactly. That's why human life is more complex than the string representing the genome: you also have to know what that (compressed) genome specification refers to, the chemical interactions involved, etc. :-)
0DanArmak
Why does DNA encoding need complex numbers? I'm pretty sure simple integers are enough... Maybe you meant the "complexity of natural numbers" as quoted?
0zaph
Sounds good to me (that's what I get for typing quickly at work).
0SilasBarta
UPDATE: Landsburg replies to me several times on my blog. I had missed the window for comments, but Bob Murphy posted a reply to Landsburg on his (Murphy's) blog, and I expanded my points in the linked post, which drew Landsburg.
0Bo102010
Entertaining read. I'm a Landsburg fan, but he's stepped in it on this one.
0AnlamK
What is the notion of complexity in question? It could, for instance, be the length of the (hypothetical) shortest program needed to produce a given object, i.e. Kolmogorov complexity. In that case, the natural numbers would have a complexity of infinity, which would be much greater than any finite quantity - i.e. a human life. I may be missing something, because the discussion to my eyes seems trivial.
4RobinZ
The complexity doesn't count the amount of data storage required, only the length of the executable code. Something like

    n = 1
    while n > 0:
        print(n)
        n += 1

looks simple to me.
0AnlamK
Yes, but how are you going to represent 'n' under the hood? You are eventually going to need infinitely many bits to represent it? I guess this is what you mean by storage. I should confess that I don't know enough about algorithmic information theory, so I may be in deeper waters than I can swim. I think you are right, though... I had something more in mind like the number of bits required to represent any natural number, which is obviously log(n) (or maybe 2loglog(n) with some clever tricks, I think), and if n can get as big as possible, then the complexity, log(n), also gets arbitrarily big. So maybe the problem of producing every natural number consecutively has a different complexity from producing some arbitrary natural number. Interesting...
3RobinZ
Someone else should be the one to say this (do we have an information theorist in the house?), but my understanding is that Kolmogorov complexity does not account for memory usage problems (e.g. by using Turing machines with infinite tape). And thus producing a single specific sufficiently large arbitrary natural number is more complex than producing the entire list - because "sufficiently large" in this case means "longer than the program which produces the entire list".
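To make that concrete, here is a minimal Python sketch of the contrast (the large literal at the end is just an arbitrary placeholder, not a meaningful number):

    from itertools import islice

    # Program A: enumerates every natural number. Its source length is a small
    # constant, no matter how far you let it run.
    def all_naturals():
        n = 0
        while True:
            yield n
            n += 1

    print(list(islice(all_naturals(), 5)))  # [0, 1, 2, 3, 4]

    # Program B: prints one specific number. If that number has no shorter
    # description, the literal itself dominates the program's length, which
    # therefore grows with the number of digits.
    print(73550612894201736455019)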
4Emile
Yup, Kolmogorov complexity is only concerned with the length of the shortest algorithm. There are other measures (more rarely used, it seems) that take into account things like memory used, or time (number of steps), though I can't remember their names just now. Note that usually the complexity of X is the size of the program that outputs X exactly, not of a program that outputs a lot of things including X. Otherwise you could write a quite short program that outputs, say, all possible ASCII texts, and claim that its size is an upper bound on the complexity of the Bible. Actually, the information needed to generate the Bible is the same as the information needed to locate the Bible in all those texts. Example in Python:

    chars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' + \
        '"!#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r'

    def iter_strings_of_size(n):
        if n <= 0:
            yield ''
        else:
            for string in iter_strings_of_size(n - 1):
                for char in chars:
                    yield string + char

    def iter_all_strings():
        n = 0
        while True:
            for string in iter_strings_of_size(n):
                yield string
            n = n + 1
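And a quick usage sketch, assuming the definitions above (the lookup is hopelessly slow in practice; it is only meant to illustrate that the index of a text in this enumeration carries the same information as the text itself):

    from itertools import islice

    # The first few strings in the enumeration (starting with the empty string).
    print(list(islice(iter_all_strings(), 10)))

    # In principle a text is pinned down by its index in the enumeration,
    # so specifying that index is no cheaper than specifying the text.
    def index_of(target):
        for i, s in enumerate(iter_all_strings()):
            if s == target:
                return i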
1Liron
This has improved my understanding of the Python "yield" statement.
2Emile
Glad I could be of use! Understanding the Power of Yield was a great step forward for me; afterwards I was horrified to reread some old code that was riddled with horrible classes like DoubleItemIterator and IteratorFilter that my C++-addled brain had cooked up, and realized half my classes were useless and the rest could have had their line count divided by ten. And some people still count "lines of code" as a measure of productivity. Sob.
4Mitchell_Porter
So that's why my self-enhancing AI keeps getting bogged down!
2NancyLebovitz
Voted up for being really funny.
1Hook
"Actually, the information needed to generate the Bible is the same as the information to locate the Bible in all those texts." Or to locate it in "The Library of Babel".
0AnlamK
I actually took information theory, but this is more of an issue of algorithmic information theory - something I have not studied all that much. Still, I think you are probably right, since Kolmogorov complexity refers to the descriptive complexity of an object, and here you can give a much shorter description of all the consecutive natural numbers. This is very interesting to me, because intuitively one would think that both are problems involving infinity, and hence I lazily thought that they would both have the same complexity.
[-]kpreid110

Our House, My Rules reminded me of this other article which I saw today: teach your child to argue. This seems to me to be somewhat relevant to the subject of promoting rationality.

Why would any sane parent teach his kids to talk back? Because, this father found, it actually increased family harmony.

...

Those of you who don’t have perfect children will find this familiar: Just as I was withdrawing money in a bank lobby, my 5-year-old daughter chose to throw a temper tantrum, screaming and writhing on the floor while a couple of elderly ladies looked on in disgust. (Their children, apparently, had been perfect.) I gave Dorothy a disappointed look and said, “That argument won’t work, sweetheart. It isn’t pathetic enough.”

She blinked a couple of times and picked herself up off the floor, pouting but quiet.

...

I had long equated arguing with fighting, but in rhetoric they are very different things. An argument is good; a fight is not. Whereas the goal of a fight is to dominate your opponent, in an argument you succeed when you bring your audience over to your side. A dispute over territory in the backseat of a car qualifies as an argument, for example, in the unlikely event that one child attempts to persuade his audience rather than slug it.

2wedrifid
In my house I would also teach that the difference between 'argument' and 'fight' is quite distinct from the difference between 'good' and 'bad'. I'd also teach them that a good response to a persuasive and audience-swaying argument that they should give territory to another is "No. I want it."

Singularity Summit 2009 videos - http://www.vimeo.com/siai/videos

2AngryParsley
I recommend Anna Salamon's presentation How Much it Matters to Know What Matters: A Back of the Envelope Calculation. She did a good job of showing just how important existential risk research is. I thought it would be nice to be able to plug in my own numbers for the calculation, so I quickly threw this together.
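For anyone who wants the flavor without clicking through, here is a minimal sketch of the kind of plug-in-your-own-numbers calculation involved; every figure below is a placeholder assumption, not a number from the talk or from the linked tool.

    # Back-of-the-envelope expected value of existential-risk work.
    # All inputs are placeholder assumptions; substitute your own estimates.
    future_lives = 1e16               # potential future people if no existential catastrophe
    p_catastrophe = 0.2               # subjective probability of existential catastrophe
    risk_reduction_per_career = 1e-6  # fraction of that risk removed by one extra researcher

    expected_lives_saved = future_lives * p_catastrophe * risk_reduction_per_career
    print("Expected lives saved per additional career: %.3g" % expected_lives_saved)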
2whpearson
Interesting. You can do similar calculations for things like asteroid prevention; I wonder which would win out. It also gave me a sickly feeling that you could use the vast numbers to justify killing a few people to guarantee a safe singularity. In effect that is what we do when we divert resources away from efficient charities we know work, towards singularity research.
0Technologos
We make these kinds of tradeoffs, for better or worse, all the time--and sometimes, as when we take money from the fire department or health care to put it into science or education, we are letting people die in the short term as part of a gamble that the long-term outcomes will justify it.

An interesting site I just stumbled upon:

http://changingminds.org/

They have huge lists of biases, techniques, explanations, and other stuff, with short summaries and longer articles.

Here's the results from typing in "bias" into their search bar.

A quick search for "changingminds" in LW's search bar shows that no one has mentioned this site before on LW.

Is this site of any use to anyone here?

And should I repost this message to next month's open thread, since not many people will notice it in this month's open thread?

2Yorick_Newsome
I would repost this in the next open thread, it's not like anyone would get annoyed at the double post (I think), and that site looks like it would interest a lot of people.
0wedrifid
I've come across it before and I found it useful. Ok, I'll be honest. It probably wasn't all that useful to me. I like this stuff because it fascinates me.
[-]Jack60

At its height this poll registered 66 upvotes. As it is meta, no longer useful, and not interesting enough for the top comments page, please downvote it. Upvote the attached karma dump to compensate.

(It looks like CannibalSmith hasn't been on lately so I'll post this) This post tests how much exposure comments to open threads posted "not late" get. If you are reading this then please either comment or upvote. Please don't do both and don't downvote. The exposure count to this comment will then be compared to that of previous comment made "la... (read more)

6Jack
Downvote this comment if you upvoted the above and want to neutralize the karma I get.
4TheOtherDave
Note that voting this down doesn't seem to remove it from the "top" list. As far as I can tell, that seems to sort by the number of upvotes the comment received, not by the (upvotes - downvotes).
2CannibalSmith
Thanks. :)
2Jack
So by my count it is 37 to 71 and that probably overestimates the response a late comment would get given that there was something of a feedback loop.
1MendelSchmiedekamp
Glad to see something like this.
0arundelo
Ack.
0[anonymous]
0JamesAndrix
see/saw
1RobinZ
Huh?
0RobinZ
I see it.

I'll go ahead and predict here that the Higgs boson will not be showing up. As best I can put the reason into words: I don't think the modern field of physics has its act sufficiently together to predict that a hitherto undetected quantum field is responsible for mass. They are welcome to prove me wrong.

(I'll also predict that the LHC will never actually run, but that prediction is (almost entirely) a joke, whereas the first prediction is not.)

Anyone challenging me to bet on the above is welcome to offer odds.

Okay, so I guess I'll be the first person to ask how you've updated your beliefs after today's news.

Physicists have their act together better than I thought. Not sure how much I should update on other scientific fields dissimilar to physics (e.g. "dietary science") or on the state of academia or humanity as a whole. Probably "some but not much" for dietary science, with larger updates for fields more like physics.

Just curious: given that physicists have their act together better than you thought, and conditioning on that fact plus the fact that physicists don't, as a whole, consider MWI to be a slam dunk (though, afaik, many at least consider it a reasonable possibility), does that lead to any update re your view that MWI is all that slam dunk?

5Shmi
That's because physicists, though they clearly enjoy speculating very much, tend to withhold judgment until there is some experimental evidence one way or the other. In that sense they are more instrumentalists than EY. Experimental physicists much more so.
5A1987dM
“A physicist answers all questions with ‘I don't know, but I'll find out.’” -- Nicola Cabibbo (IIRC), as quoted by a professor of mine. (As for “experimental evidence”, in the past couple of years people have managed to put bigger and bigger systems -- some visible with the naked eye -- into quantum superpositions, which is evidence against objective collapse theories.)
5Eliezer Yudkowsky
Nope. That's nailed down way more solidly than anything I know about mere matters of culture and society, so any tension between it and another proposition would move the other, less certain one. It would cause me to update in the direction of believing that more physicists probably see MWI as slam-dunk. :)
7Mitchell_Porter
What exactly is it that you claim to know here? It's not a particular quantitative many-worlds theory that makes predictions, or you wouldn't be asking where the Born probabilities come from. It's not a particular qualitative model of many worlds, or else you wouldn't talk about Robin's mangled worlds in one post, and Barbour's timeless physics in another. What does it boil down to? "I know that quantum mechanics has something to do with parallel worlds"?

I think it comes down to:

(1) The wavefunction is what there is; and

(2) it doesn't collapse.

6wedrifid
Well said; this has seemed to be what Eliezer has tried to argue for in his posts. He even went out of his way to avoid putting the "MWI" label on it a lot of the time.
7Shmi
Every genius is entitled to some eccentricity, and the MWI is EY's. It might be important to remind the regulars why MWI is not required for rationality, but it is pointless to argue about it with EY. For all the dilettantes out there who learned about quantum physics from Eliezer's posts and think that they understand it, despite the clear evidence that understanding a serious scientific topic in depth requires years of study, you know where the karma sink is.
9A1987dM
EY's level of support for cryonics (to the point of saying that people who don't sign their children up for cryo are lousy parents) sounds waaaay more eccentric to me than acceptance of the MWI.
[-]Shmi150

Cryonics is a last-ditch long-shot attempt to cheat death, so I can relate quite easily.

I don't want to achieve immortality through my work; I want to achieve immortality through not dying. I don't want to live on in the hearts of my countrymen; I want to live on in my apartment.

-- Woody Allen

6fubarobfusco
Is that just because it has human-level consequences? Belief in MWI doesn't tell you what to do.
7Jack
No, it's because MWI has broad support among physicists as at least being a very plausible candidate interpretation. Support for cryonics among biologists and neuroscientists is much more limited.
-2Quantumental
Well.... It does not have broad support among physicists for being very plausible. A tiny fraction consider it very plausible. The vast majority consider it very unlikely and downright wrong due to its many problems.
5Jack
You're overstating the extent of the opposition.
0Quantumental
No. If you even just go to the discussion page you will see that the reception section is one of the most erroneous and most objected-to parts of that wiki article. The entire article is in itself a disaster, and most Many Worlds proponents do not endorse it at all. You have to understand that there are literally THOUSANDS of physicists who hold an opinion on the matter; a few polls conducted by proponents do not matter at all. Do you really think that a talk held by Max Tegmark will not attract people who share his views? If someone were to do a global poll, you would see...
2Shmi
Actually, this is not true. Having been in academia for some time, I can vouch that a celebrity talk like that would attract many faculty members regardless of their views on the matter.
0DaFranker
I believe that is an improper phrasing on Quantumental's part. No one thought, ever, (to my knowledge and immediately visible evidence) including someone like me who is completely unrelated to the discussion and has no idea who Max Tegmark is, that such a talk would not attract [any] people who share his views. This is not mutually-exclusive with "people of all distributions will be attracted in a population-representative sample", however. To me, it just seems like an accidental (possibly caused by some bias its writer is insufficiently aware of) breach of the no-ninja-connotation rule.
0Quantumental
Well, the one I watched had like 15 guys in it, 9 pro-MWI. Indicating that this talk definitely attracted more MWI'ers than is typical.
-1Jack
You're making an assertion with zero evidence...
0Quantumental
I pointed you towards the evidence. One of the guys in the talk section did a survey of his own of 30 or so leading physicists. But just the fact that David Deutsch himself says less than 10% believe in any kind of MWI speaks volumes. He has been in the community where these matters are discussed for decades.
3A1987dM
No. Jack apparently read my mind.
-3wedrifid
No, merely by.
0Psy-Kosh
Fair enough. (Well, technically both should move at least a little bit, of course, but I know what you mean.) Hee hee. :)
2[anonymous]
Speaking as someone with an academic background in physics, I don't think the group as a whole is as anti-MWI as you seem to imply. It was taught at my university as part of the standard quantum sequence, and many of my professors were many-worlders... What isn't taught, and what should be taught, is how MWI is in fact the simpler theory, requiring fewer assumptions, and not just an interesting-to-consider alternative interpretation. But yes, as others have mentioned, physicists as a whole are waiting until we have the technology to test which theory is correct. We're a very empirical bunch.
2Psy-Kosh
I don't think I was implying physicists to be anti-MWI, but merely not as a whole considering it to be slam dunk already settled.
0[anonymous]
Interesting. What technology lets you test that?
1Shmi
We have discussed it here. A reading list is here.
4RolfAndreassen
You seem to be conceding that this is in fact the Higgs boson. In fairness I have to point out that, although it is now very certain that there is a particle at 125 GeV, it may not be the predicted Higgs boson. With this in mind, would you like to keep our bet running a while longer while CERN nails down the properties? Or do you prefer to update all at once, and pay me the 25 dollars?
6Eliezer Yudkowsky
I'd rather pay the $25 now. (Paypal data?) My understanding is that besides the mass, there are also supposed to be other characteristics of the particle data that match the predicted Higgs, otherwise I would've waited before fully updating. If the story is retracted I might counter-update and ask for the money back, but my understanding is that this is not supposed to happen.
0wedrifid
What other features (apart from being a particle at 125 GeV) do you consider a necessary part of the specification "Higgs Boson" for the purpose of this bet?
1bogdanb
I notice that in your prediction you welcomed bets, but you did not offer odds, nor give a confidence interval. I’m not sure (haven’t actually checked), but I have an impression that you usually do at least give a number. Since the prediction was in 2009 it might just be that you recently formed the habit. If that’s not the case, not giving odds (even when welcoming offers) might be an indicator that you don’t believe something as much as you think you do. (The last two "you" are meant both as generic people references and to you in particular.) Does that seem plausible on a quick introspection?
0Eliezer Yudkowsky
I did make a bet and pay it.
2bogdanb
Yes, I know. But those were even odds. When someone makes a prediction unprompted, it suggests more confidence than that. (Well, unless they’re just testing what odds other people offer, but I don’t think that was the case here.) That is, it is possible that your inner censor for “don’t predict things that might prove wrong” didn’t trigger (maybe because you’ve trained yourself to ignore embarrassment about people’s opinion of you), but the censor for “don’t bet when you might be wrong” triggered without you noticing it. In other words, it might be an indication of a difference between what you believe and what you think you believe, or even what you want to appear to believe :-) (It might also be that you actually thought the odds were 50:50, and anticipated others to offer much higher odds. How likely did you think it was at the time, anyway?)

I will take up the bet on the Higgs field, with a couple of caveats:

You use the phrase "the Higgs boson", when several theories predict more than one. If more than one are found, I want that to count as a win for me.

If the LHC doesn't run, the bet is off.

Time limit: I suggest that if observation of the Higgs does not appear in the 2014 edition of "Review of Particle Physics", I've lost. "Observation" should be a five-sigma signal, as is standard, either in one channel or smaller observations in several channels.

25 dollars, even odds.

As a side note, this is more of a hedge position than a belief in the Higgs: I'm a particle physicist, and if we don't find the Higgs that will be very interesting and well worth the trivial pain of 25 dollars and even the not-so-trivial pain of losing a public bet. (I'm not a theorist, so strictly speaking it's not my theory on the chopping block.) While if we do find it, I will (assuming Eliezer takes up this offer) have the consolation of having demonstrated the superior understanding and status of my field against outsiders. (It's one thing for me to say "Death to theorists" and laugh at their heads-in-the-clouds attitude and incomprehensible math. It's quite another for one who has not done the apprenticeship to do so.) And 25 dollars, of course.
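As an aside on the five-sigma criterion above, here is a small sketch of what it corresponds to as a one-sided p-value, under the usual Gaussian approximation (scipy is used here only for the normal tail probability):

    from scipy.stats import norm

    # One-sided tail probability of an n-sigma excess, assuming a Gaussian
    # approximation to the background-only test statistic.
    for n_sigma in (3, 5):
        print("%d sigma -> p ~ %.2e" % (n_sigma, norm.sf(n_sigma)))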

8Mitchell_Porter
I've just learned that Stephen Hawking has bet against the Higgs showing up. Here's my argument against Higgs boson(s) showing up: The Higgs boson was just the first good idea we had about how to generate mass. Theory does not say anything about how massive the Higgs itself is, just that there is an upper bound. The years have passed, it hasn't shown up, and the LHC will finally take us into the last remaining region of parameter space. So Higgs believers say "hallelujah, the Higgs will finally show up". But a Higgs skeptic just says this is the end of the line. It's just one idea, it hasn't been confirmed so far, why would we expect it to be confirmed at the last possible chance? Two years ago: I wrote to him at the time expressing interest in the bet, but asking for more details. (No reply.) The rather bold statement that QM itself implies a Higgs "or something like it" I think must be a reference to the breakdown in unitarity of the Standard Model that should occur at 1 TeV - which implies that the Standard Model is incomplete, so something will show up. But does it have to be a new scalar boson? There are Higgsless models of mass generation in string theory. This all leads me to think anew about what's going to happen. The LHC will collide protons and detectors will pick up some of the shrapnel. I think no-one expects new types of particle to be detected directly. They are expected to be heavy and to decay quickly into known particles; the evidence of their existence will be in the shrapnel. The Standard Model makes predictions about the distribution of shrapnel, but breaks down at 1 TeV. So one may predict that what will be observed is a deviation in shrapnel distributions from SM predictions and that is all. Can we infer from this, and from the existing range of physics models, what the likely developments in theory are going to be, even before the experiment is performed? Although I said that totally new particles will not be observed directly, my u
5Eliezer Yudkowsky
I was hoping to make some more money on this :) in a shorter time and hence greater implied interest rate :) but sure, it's a bet.
3RolfAndreassen
Sorry, graduate students can't afford to be flinging around the big bucks. :) If I get the postdoc I'm hoping for, we can up the stakes, if you like.
2soreff
This is a side issue but I'm curious as to what people's reactions are: I'm kind-of hoping that dark matter turns out to be massive neutrinos. Of the various candidates, it seems like the most familiar and comforting. We've even seen neutrinos interact in particle detectors, which is way more than you can say for most of the other alternatives... Compared to axions or supersymmetric particles, or WIMPs, massive neutrinos have more of the comfort of home. Anyone feel similarly?
2rwallace
As I understand it, there is a known upper bound on neutrino mass that is large enough to allow them to account for some of the dark matter, but too small to allow them to account for all or most of it.
5RolfAndreassen
That is correct as far as the known neutrinos go. If there is a fourth generation of matter, however, all bets are off. (I'm too lazy to look up the limits on that search at the moment.) On the other hand, since neutrinos oscillate and the solar neutrino flux is one-third what we expect rather than one-fourth, you need some mechanism to explain why this fourth generation doesn't show up in the oscillations. A large mass is probably helpful for that, though, if I remember correctly. Point of order! A massive neutrino is a WIMP. "Weakly Interacting" - that's neutrino to you - "Massive Particle".
1A1987dM
Well, but “massive” in WIMP usually means very massive (i.e. non-relativistic at T = 2.7 K). As far as gravitational effects, particles with non-zero mass but ultrarelativistic speeds behave very much like photons AFAIK.
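For a rough sense of the scale being compared against, a one-liner (using the standard value of the Boltzmann constant in eV/K):

    k_B = 8.617e-5      # Boltzmann constant, eV per kelvin
    print(k_B * 2.7)    # ~2.3e-4 eV: the thermal energy scale at 2.7 K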
1soreff
Thanks, point taken - I'd been thinking of more exotic WIMPs
1gwern
I've added this bet to PredictionBook at http://predictionbook.com/predictions/1566 based on http://wiki.lesswrong.com/wiki/Bets_registry
5taw
Intrade has a market for the Higgs boson.
3Eliezer Yudkowsky
Too thinly traded, deadline too soon, rules for what counts as "confirmation" too narrow given the deadline.
0Technologos
The market, unfortunately, is only through the end of next year; does anybody know whether all the relevant experiments are slated to be performed by then? I'd like to unwind P(Find Higgs|LHC runs and does the tests) down to just P(Find Higgs) or some approximation thereof.
1taw
There are markets for further years, but they have almost no activity, so I didn't link to them.
3SilasBarta
Semi-OT: It's discussions like these that remind me: Whenever physicists remark about how the laws of nature are wonderfully simple, they mean simple to physicists or compared to most computer programs. For most people, just looking at the list of elementary particles is enough to make their heads blow up. Heck, it nearly does that for me!
5RolfAndreassen
Seriously? Dude, it's a list of names. It should no more make your head asplode than the table of the elements does, and nobody thinks that memorising those is a great feat of intellect. Are you sure you're not allowing modesty-signalling to overcome your actual ability? Now, if you want to get into the math of the actual Lagrangians that describe the interactions, I'll admit that this is a teeny bit difficult. But come on, a list of particles?
8Alicorn
"Antimony, arsenic, aluminum, selenium, and hydrogen and oxygen and nitrogen and rhenium..."
3wedrifid
I followed the link Silas provided. Rather than seeing a list to be memorised, my brain started throwing up all sorts of related facts. The pieces of physics I have acquired from various sources over the years reasserted themselves and I tried to piece together just how charm antiquarks fit into things. And try to remember just why it was that if I finally meet my intergalactic hominid pen pal and she tries to shake hands with her left hand I can be sure that shaking would be a cataclysmically bad idea. I seem to recall being able to test symmetry with cobalt or something. But I think it's about time I listened to Feynman again. Point is, being able to find the list of elementary particles more overwhelming than, say, a list of the world's countries requires a certain amount of knowledge and a desire for a complete intuitive grasp. That's not modesty-signalling in my book.
1DanArmak
Everyone knows what a country is. Few people know what the term "elementary particle" means. (It's not a billiard ball.)
0wedrifid
It's not the billiard balls from the movie they showed? Then surely 'elementary particles' must refer to those things on the Table of the Elements that was on the wall!
1SilasBarta
I have a metaphorical near-head-explosion for different reasons than the average person that I was referring to. For me, it's mainly a matter of the properties shown on the chart being more abstract and not knowing what observations they would map to (as wedrifid noted in his signaling analysis...). Compared to the Periodic Table, the elementary particle chart also has significantly less order. With the PT, I may not know each atomic mass number, but I know in which direction it increases, and I know the significance of its arrangement into rows and columns. The values in the EPC seem more random.
9RolfAndreassen
Granted, but there are also nowhere near as many of them. Besides, fermion mass increases to the right, same as in the PT; charge depends only on the row; and spin is 1/2 for all fermions and 1 for all bosons. This is not very complicated. I would also suggest that the seeming randomness is a sign you're getting closer to the genuinely fundamental stuff: The order in the periodic table is due to (using loose language) repeated interactions of only a few underlying rules - basically just combinations of up and down quarks, with electrons, and electromagnetic interactions only. Nu, mass and charge are hardly abstract for someone who has done basic physics; that leaves spin, which just maps to the observation that a beam of electrons in a magnetic field will split into two. (Although admittedly things then get a bit counter-intuitive if you put one of the split beams through a further magnetic field at a different angle, but that's more the usual QM confusion.)
7SilasBarta
Alright! Point taken! The chart is less daunting than I thought. You mind loosening your grip on my, um, neck? ;-) An especially good point -- maximally compressed data looks like random noise, so at the fundamental level, there should be no regularity left that allows one entry to tell you something about another.
1Psy-Kosh
Oh, a bit off topic, but mind clarifying something for me? My QFT knowledge is very limited at the moment, and I'm certainly not (yet) up to the task of actually trying to really grasp the Standard Model, but... Is it correct to say that the force carriers are, in a sense, illusory? That is, the gauge bosons are kind of an illusion in the same sense that the "force of gravity" is? From what little I managed to pick up, the idea is that instead one starts without them, but assigns certain special kinds of symmetries to the configuration space. These local (aka gauge) symmetries allow interference effects that basically amount to the forces of interaction. One can then "rephrase" those effects in a way that looks more like another quantum field interacting with, well, whatever it's interacting with? ie, can the electromagnetic, strong, and weak forces (as forces) be made to go away and turn into symmetries in configuration space in the same sense that in GR, the force of gravity goes away and all that's left is geometry of spacetime? Or have I rolled a critical fail with regards to attempting to comprehend the notion of gauge fields/bosons? Thanks. Again, I know it's a slight tangent, but since the subject of the Standard Model came up anyways...
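For concreteness, the textbook U(1) example of that "gauging" procedure (standard notation; sign and coupling conventions vary):

    \psi(x) \to e^{i\alpha(x)}\,\psi(x), \qquad
    A_\mu(x) \to A_\mu(x) + \tfrac{1}{e}\,\partial_\mu\alpha(x), \qquad
    D_\mu\psi \equiv (\partial_\mu - i e A_\mu)\,\psi

The covariant derivative D_\mu\psi transforms just like \psi itself, so writing the free-field equations with D_\mu in place of \partial_\mu makes the local symmetry hold, and the interaction with the connection field A_\mu (whose quanta are the photons) comes along automatically.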
4RolfAndreassen
Ok, I'm not touching the ECE thing; as noted, I'm not a theorist. I just measure stuff. I've taken classes in formal QFT, but I don't use it day-to-day, so it's a weak point for me. However, it seems a bit odd to describe things that can be produced in collisions and (at least in principle) fired at your enemies to kill them by radiation poisoning as 'illusory'. If you bang two electrons together, measuring the cross-section as a function of the center-of-mass energy, you will observe a classic 1/s decline interrupted by equally classic resonance bumps. That is, at certain energies the electrons are much more likely to interact with each other; that's because those are the energies that are just right for producing other particles. Increase the CM energy through 80 GeV or so, and you'll find a Breit-Wigner shape like any other particle; that's the W, and if it weren't so short-lived you could make a beam of them to kill your enemies. (With asymmetric electron energies you can produce a relativistic-speed W and get arbitrarily long lifetimes in the lab frame, but that gets on for being difficult engineering. In fact, just colliding two electrons at these energies is difficult, they're too light; that's why CERN used an electron and a proton in LEP.) Now, returning to the math, my memory of this is that particles appear as creation and annihilation operators when field theories with particular gauge symmetries are quantized. If you want to call the virtual particles that appear in Feynman diagrams illusory, I won't necessarily argue with you; they are just a convenient way of expressing a huge path integral. But the math doesn't spring fully-formed from Feynman's brow; the particular gauge symmetry that is quantised is chosen such that it describes particles or forces already known to exist. (Historically, forces, since the theory ran ahead of the experiments in the sixties - we saw beta decay long before we saw actual W bosons.) If the forces were different, the t
1Psy-Kosh
I wasn't bringing up the ECE thing. I meant illusory in the same sense that "sure, the force of gravity can cause me to fall down and get ouchies... but by a bit of a coordinate change and so on, we can see that there really is no 'force', but instead that it's all just geometry and curvature and such. Gravity is real, but the 'force' of gravity is an illusion. There's a deeper physical principle that gives rise to the effect, and the regular 'force' more or less amounts to summing up all the curvature between here and there." My understanding was that gauge bosons are similar "we observe this forces/fields/etc... but actually, we don't need to explicitly postulate those fields as existing. Instead, we can simply state that these other fields obey these symmetries, and that produces the same results. Obviously, to figure out which symmetries are the ones that actually are valid, we have to look at how the universe actually behaves" ie, my understanding is that if you deleted from your mind the knowledge of the electromagnetic and nuclear forces and instead just knew about the quark and lepton fields and the symmetries they obeyed, then the forces of interaction would automatically "pop out". One would then see behaviors that looks like photons, gluons, etc, but the total behavior can be described without explicitly adding them to the theory, but simply taking all the symmetries of the other stuff into account when doing the calculations. That's what I was asking about. Is this notion correct, or did I manage to critically fail to comprehend something? And thanks for taking the time to explain this, btw. :) (I'm just trying to figure out if I've got a serious misconception here, and if so, to clear it up)
3RolfAndreassen
I guess you can think of it that way, but I don't quite see what it gains you. Ultimately the math is the only description that matters. Whether you think of gravity as being a force or a curvature is just words. When you say "there is no force, falling is caused by the curvature of space-time" you haven't explained either falling or forces, you've substituted different passwords, suitable for a more advanced classroom. The math doesn't explain anything either, but at least it describes accurately. At some point - and in physics you can reach that point surprisingly fast - you're going to have to press Ignore (being careful to avoid Worship, thanks), at least for the time being, and concentrate on description rather than explanation.
1Psy-Kosh
Well, my question could be viewed as about the math. ie: "does the math of the standard model have the property that if you removed any explicit mention of electromagnetism, strong force, or weak force and just kept the quark and lepton fields + the math of the symmetries for those, would that be sufficient for it to effectively already contain EM, strong, and weak forces?" And as far as gravity being force or geometry, uh... there's plenty of math associated with that. I mean, how would one even begin to talk about the meaning of the Einstein field equation without interpreting it in terms of geometry? Perhaps there is a deeper underlying principle that gives rise to it, but the Einstein field equation is an equation about how matter shapes the geometry of spacetime. There's no way really (that I know of) to reasonably interpret it as a force equation, although one can effectively solve it and eventually get behaviors that Newtonian gravity approximates (at low energies/etc...) (EDIT: to clarify, I'm trying to figure out how to semivisualize this. ie, with gravity and curvature, I can sorta "see" and get the idea of everything's just moving in geodesics and the behavior of stuff is due to how matter affects the geometry. (though I still can only semi "grasp" what precisely G is. I get the idea of curvature (the R tensor), I get the idea of metric, but I currently only have a semigrasp on what G actually means. (Although I think I now have a bit of a better notion than I used to). Anyways, loosely similar, am trying to understand if the fundamental forces arise similarly, rather than being "forces", they're more an effect of what sorts of symmetries there are, what bits of configuration space count as equivalent to other bits, etc...)
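For reference, since G came up: in conventional notation the Einstein tensor and the field equation are

    G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}, \qquad
    G_{\mu\nu} = \frac{8\pi G_{\mathrm{N}}}{c^{4}}\, T_{\mu\nu}

i.e. G_{\mu\nu} is the particular combination of curvature that the field equation ties directly to the local energy-momentum content T_{\mu\nu} (Newton's constant is written G_N here to keep it distinct from the tensor).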
0RolfAndreassen
I guess I'm not enough of a theorist to answer your question: I do not know whether the symmetries alone are sufficient to produce the observed particles. My intuition says not, for the following reason: First, SU(3) symmetry is broken in the quarks; second, the Standard Model contains parameters which must be hand-tuned, including the electromagnetic/weak separation phase that gives you the massless photon and the very massive weak-force carriers. Theories which spring purely from symmetry ought not to behave like that! But this is hand-waving. As an aside, I seem to recall that GR does not produce our universe from symmetries alone, either; there are many solutions to the equations, and you have to figure out from observation which one you're in. If you like, I can quote our exchange and ask some local theorists if they'd like to comment?
0Psy-Kosh
But GR explains (or explains away, depending on how you look at it) the force of gravity in terms of geometry. I meant "does the standard model do something similar with the gauge bosons via symmetry?" May still leave some tunable parameters, not sure. But does the basic structure of the interactions pop straight out of the symmetries? And yeah, I'd like that, thanks. It's nothing urgent, just am unclear if I have the basic idea or if I have severe misconceptions.
1Mitchell_Porter
It is a while since I thought about this. But... The basic fact about quantum field theory is field-particle duality. Quantum field states can be thought of either as a wavefunction over classical field configuration space, or as a wavefunction over a space of multi-particle states. You can build the particle states out of the field states (out of energy levels of the Fourier modes), so the field description is probably fundamental. But whenever there is a quantum field, there are also particles, the quanta of the field. In classical general relativity, particles follow geodesics, they are guided by the local curvature of space. This geometry is actually an objective, coordinate-independent property of space, though the way you represent it (e.g. the metric you use) will depend on the coordinate system. Something similar applies to the gauge fields which produce all the other forces in the Standard Model. Geometrically, they are "connections" describing "parallel transport" properties, and these connections are not solely an artefact of a coordinate system. See first paragraph here. You will see it said that the equations of motion in a gauge field theory are obtained by taking a global symmetry and making it local. These global symmetries apply to the matter particle (which is usually a complex-valued vector): if the value of the matter vector is transformed in the same way at every point (e.g. multiplied by a unit complex number), it makes no difference to the equation of motion of the "free field", the field not yet interacting with anything. Introducing a connection field allows you to compare different transformations at different points (though the comparison is path-dependent, depending on the path between them that you take), so now you can leave one particle's state vector unmodified, and transform a distant particle's vector however you want, and so long as the intervening gauge connection transforms in a compensatory fashion, you will be talking about
3DanArmak
What is the difference between saying gravity is a force and saying it's a curvature of spacetime? What is your definition of "a force" that makes it inapplicable to gravity? Is electromagnetism a force, or is it a curvature in the universe's phase space? I don't know much about physics, please enlighten me...
1Tyrrell_McAllister
To say that gravity is a curvature of spacetime means that gravity "falls out of" the geometry of spacetime. To say that gravity is something else (e.g., a force) means that, even after you have a complete description of the geometry of spacetime, you can't yet explain the behavior of gravity.
1DanArmak
Isn't it equally valid to say that the geometry of spacetime falls out of gravity? I.e., given a complete description of any one of them, you get the other for free. What is a force by your definition? Something fundamental which can't be explained through something else? But it seems to me that "the curvature of spacetime" is the same thing as gravity, not a separate thing that is linked to gravity by causality or even by logical necessity. They're different descriptions of the same thing. So we can still call gravity a fundamental force, it's not being caused by something else that exists in its own right.
4Psy-Kosh
What I meant is that the notion of gravity as "something that pulls on matter" goes away. There're a couple of concepts that're needed to see this. First is "locality is really important" For instance, you're in an elevator that's freefalling toward the earth... or it's just floating in space. Either way, the overall average net force you feel inside is zero. How do you tell the difference? "look outside and see if there's a planet nearby that you're moving toward"? Excuse me? what's this business about talking about objects far away from you? Alternately, you're either on the surface of the earth, or in space accelerating at 9.8m/s^2 Which one? "look outside" is a disallowed operation. It appeals to nonlocal entities. Once again, return to you being in the box and freefalling toward the ground. What can you say locally? Well... I'm going to appeal to Newtonian gravity briefly just to illustrate a concept, but then we'll sort of get rid of it: Place two test particles in the elevator, one above the other. What do you see? You'd see them accelerating away from each other, right? ie, if one's closer to the earth than the other, then you get tidal force pulling them away from each other. Similarly, placing them side by side, well, the lines connecting each of them to the center of the earth make a bit of an angle to each other. So you'll see them accelerate toward each other. Again, tidal force. From the perspective of locality, tidal force is the fundamental thing, it's the thing that's "right here", rather than far away, and regular gravity is just sort of the sum (well, integral) of tidal force. Now, let's do a bit of a perspective jump to geometry. I'll get back to the above in a moment. To help illustrate this, I'll just summarize the "Parable of the Apple" from Gravitation: Imagine you see ants crawling on the side of the apple. You see them initially seem to move parallel, then as they crawl up, you see them moving toward each other. "hrm... they att
0DanArmak
Edited: I understand what you've said (and thanks for taking the time to write all that out!). But I'm not sure why "the concept of gravity as something that pulls on matter goes away". Is it the case that it's mathematically impossible to define gravity as attraction between matter and still have a correct relativistic physics? Is it impossible to generalize Newton's law that way?
4Psy-Kosh
Well, it goes away in the sense that "this particular theory of physics explains gravity without directly having a 'force' associated with it as such" In GR, one doesn't see any forces locally pulling on objects. One instead sees (if one zooms in closely) objects moving on straight (geodesic) paths through spacetime. It simply happens to be that spacetime is in some cases shaped in ways that alter the relationships between nearby geodesics. I guess it's an attraction, sort of, but once one starts taking locality seriously, that's not that good, is it? "don't tell me what's going on way over there, tell me what's happening right here!" There may be alternate theories, but GR itself is a geometric theory and I wouldn't even know how to interpret the central equations as force equations. Saying "there could be other explanations" or such is a separate issue. What I meant was "In GR, once one has the geometry, nothing more needs to be said, really." (Well, I'm skipping subtleties, stuff gets tricky in that you have shape of space affecting motion of matter, and motion of matter affecting shape of space, but yeah...) Actually, there's really no way for Newtonian stuff to be reasonably extended to describe GR effects without going to geometry in some form. I mean, GR predicts stuff about measured distances not quite obeying the rules that they would in flat spacetime. Measured times too, for that matter. One would have to get really creatively messy to produce a theory that is more an extension of Newtonian gravity, isn't at all based on geometry, curvature, etc... any more than regular Newtonian gravity, yet still produces the same predictions for experimental outcomes that GR does. It would, at best, be rather complicated and messy, I'd expect. If it's even possible. Actually, I don't think it is. More or less no matter what, other stuff would have to be added on that doesn't at all even resemble Newtonian stuff.
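To put equations to that picture (standard form; sign conventions vary): free fall is geodesic motion, and tidal effects are the relative acceleration of nearby geodesics, governed by the curvature tensor rather than by any force term:

    \frac{d^{2}x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}_{\alpha\beta}\,\frac{dx^{\alpha}}{d\tau}\,\frac{dx^{\beta}}{d\tau} = 0, \qquad
    \frac{D^{2}\xi^{\mu}}{D\tau^{2}} = -R^{\mu}_{\;\alpha\nu\beta}\, u^{\alpha}\,\xi^{\nu}\, u^{\beta}

where \xi^{\mu} is the separation between two nearby freely falling particles and u^{\alpha} their four-velocity.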
3CronoDAS
I think the Wikipedia page on Gravitomagnetism might be relevant; it seems to be an approximation to GR that looks an awful lot like classical electromagnetism.
2DanArmak
OK, now I understand better, thanks :-) Incidentally, what about electromagnetism and the other fundamental forces? Can they be described the same way as gravity? In classical mechanics they're the same kind of thing as gravity, except they can be repulsive as well. And a lot of popsci versions of modern physics research seem to postulate the same kind of properties for gravity as we know from electromagnetism: like repulsive gravity, or gravitational shields, or effects due to gravitational waves propagating at the speed of light, or artificial gravity. And all forces are related through inertial mass. So is there a description of all these things, including gravity, in the same terms? Either all of them "forces" or fields with mediating particles, or all of them affecting some kind of geometry?
2Oscar_Cunningham
Scott Aaronson has a nice post about the differences between gravity and electromagnetism. It seems his thoughts were running along the same lines as yours when he wrote it; he asks almost all the same questions. http://www.scottaaronson.com/blog/?p=244
0DanArmak
That was very interesting and relevant. Thanks.
0Psy-Kosh
Gravitational waves come straight out of GR. (Actually, weak gravitational waves show up in the linearized theory (the linearized theory of GR being a certain approximation of it that's easier to deal with, good for low energies and such).) And that was part of what I was asking about. Well, others have tried to find that sort of thing, but I was asking something like "in the standard model and such, are the forces really aspects of what would amount to the geometry (specifically the symmetries) of configuration space rather than additional dimensions in the config space?" And, of course, one of the BIG questions for modern physics is how to get a quantum description of gravity or to otherwise find a model of reality which contains both QM and GR in a "natural" way. So, basically, at this point, all I can say is "I don't really know." :) (well, also, I guess depending on how you look at it, curvature either explains or explains away tidal force. It explains the effects/behaviors, but explains away any apparent "forces" being involved.)
0RobinZ
...but forces fall out of something - electromagnetic interactions, for example. As an engineer, I am inclined to call something a force if it goes on the "force" side of the equation in the domain I'm modeling, and not worry about whether to call it "real". (Then again, as an engineer, I rarely need to exceed Newtonian mechanics.)
-10sbannist
2Vladimir_Nesov
I guess hardly anybody here even knows what the question means, exactly, so it's all a bead jar guess.
2CronoDAS
Well, the Standard Model hasn't been wrong yet. If you want to bet against it, I'll take you up on it. I assert that the LHC will not establish the non-existence of the Higgs boson. Will you wager $20 at even odds against that proposition?
2Eliezer Yudkowsky
I'll bet that the LHC will not establish existence. It's not clear to me what would count as establishing non-existence.
5whpearson
There are papers that establish upper bounds on the energy of the Higgs boson: http://arxiv.org/abs/hep-ph/9212305 If the LHC can make particles up to those energy bounds (I don't know and don't have the time to figure it out), and it can be run for sufficient time to make it very unlikely that one wouldn't be created, then you could establish probable non-existence.
-3Thomas
It is also quite possible that the Higgs boson will come out and it will be utterly useless, as most of those particles are. You can't do a thing with them and they don't tell you very much. Of course, the euphoria will be massive. Still, most likely, there will be nothing to see.
6mormon2
What? Who voted this up? "It is also quite possible that the Higgs boson will come out and it will be utterly useless, as most of those particles are." So understanding the sub-atomic level for things like nano-scale technology is, in your books, a complete waste of time? Understanding the universe I can only assume is also a waste of time, since the discovery of the Higgs Boson in your books is essentially meaningless in all probability. "You can't do a thing with them and they don't tell you very much. Of course, the euphoria will be massive." Huh? From someone who studies particle physics to one (you) who obviously doesn't (and I am going to be hard on you): you should refrain from making such comments in nearly total ignorance. The fact that you don't understand the significance of the Higgs Boson or particle physics should have been a cue that you have nothing to contribute to this thread. Sorry, but there it is...
2Thomas
No ato-tech in sight, no use for already-discovered particles, and you are telling me how valuable the Higgs boson will be. Not only you, but the whole CERN-affiliated community and most of the media. I remain a skeptic, if you don't mind.
-1soreff
You have a point. I have a somewhat similar view of elements above perhaps Einsteinium. I'll be more impressed with physics' control over the electroweak interaction when I see the weak nuclear force equivalent of an electromagnet :-) I wonder what is the maximum particle energy that someone has actually used in a non-elementary-particle-physics-research application? Maybe the incoming beam for a spallation neutron source, somewhere in the MeV range?
6mormon2
Ok, I am going to reply to both soreff and Thomas: Particle physics isn't about making technology, at least at the moment. Particle physics is concerned with understanding the fundamental elements of our world. As for the details of the relevance of particle physics, I won't waste the time to explain. Obviously neither of you has any real experience in the field. So this concludes the comments I am going to make on this topic until someone with real physics knowledge decides to comment.

Another danger of unfriendly AI: It doesn't invite you to the orgy.

2Technologos
I feel like this particular danger should be the primary research topic for FAI researchers. Intermediate discoveries might be a good source of funding.

To-Do Lists and Time Travel: Sarmatian Protopope muses on how coherent, long-term action requires coordinating a tribe of future selves.

[-][anonymous]50

So, I'm having one of those I-don't-want-to-go-to-school moments again. I'm in my first year at a university, and, as often happens, I feel like it's not worth my time.

As far as math goes, I feel like I could learn all the facts my classes teach on Wikipedia in a tenth of the time--though procedural knowledge is another matter, of course. I have had the occasional fun chat with a professor, but the lecture was never it.

As far as other subjects go, I think forces conspired to make me not succeed. I had a single non-math class, though it was twice the length... (read more)

I feel like I could learn all the facts my classes teach on Wikipedia in a tenth of the time--though procedural knowledge is another matter, of course.

Take it from me (as a dropout-cum-autodidact in a world where personal identity is not ontologically fundamental, I'm fractionally one of your future selves): procedural knowledge is really, really important. It's just too easy to fall into the trap of "Oh, I'm a smart person who reads books and Wikipedia; I'm fine just the way I am." Maybe you can do better than most college grads, simply by virtue of being smart and continuing to read things, but life (unlike many schools) is not graded on a curve. There are so many levels above you that you're in mortal danger of missing out on them entirely if you think you can get it all from Wikipedia, if you ever let yourself believe that you're safe at your current level. If you think school isn't worth your time, that's great, quit. But know that you don't have to be just another dropout who likes to read; you can quit and hold yourself to a higher standard.

You want to learn math? Here's what I do. Get textbooks. Get out a piece of paper, and divide it into two columns. Read or... (read more)

5komponisto
It's amazing how rarely people -- including textbook authors -- actually bother to point this out. (Admittedly, it's only true over an algebraically closed field such as the complex numbers.) Were you by any chance using Axler? While I certainly agree with the main point of your comment, I nevertheless think that this particular comparison illustrates mainly that the mathematical Wikipedia articles still have a way to go. (Indeed, the property of determinants mentioned above is buried in the middle of the "Further Properties" section of the article, whereas I think it ought to be prominently mentioned in the introduction; in Axler it's the definition of the determinant [in the complex case]!)
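A hedged aside, not part of the original comment: the determinant property being referenced is presumably the identity, valid over an algebraically closed field such as the complex numbers,

\[ \det A \;=\; \prod_{i=1}^{n} \lambda_i , \]

where \(\lambda_1, \dots, \lambda_n\) are the eigenvalues of the n-by-n matrix A counted with algebraic multiplicity; Axler takes this product as his definition of the determinant in the complex case.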
1Zack_M_Davis
Mostly Bretscher, but checking out Axler's vicious anti-determinant screed the other month certainly influenced my comment.
2Jack
I upvoted this, but I just wanted to follow this tangent. This isn't true in all worlds where personal identity is not ontologically fundamental. It is a reasonable thing to say if certain versions of the psychological continuity theory are true. But those theories don't exhaust the set of theories in which personal identity isn't ontologically fundamental. For example, if personal identity supervenes on human animal identity, then you are not one of Warrigal's future selves, even fractionally.

I think you should ask yourself this: if you drop out, what realistically are you going to do with your time? If you don't have a very good answer to that question, stay where you are.

View university in the same way as you would view a long lap-swimming workout. Boring as hell, maybe, but you'll be better off and feel better when you're done. Sure, you could skip your pool workout and go do something Really Important, but most people skip their workouts and then go watch TV instead.

5DanArmak
Suppose you have an idea or desire for something to do instead of university. You should create a gradual, reversible transition. For instance if you want to work and earn some money, find a job first (telling them you've dropped out), work for a couple of weeks, make sure you like it, and only then actually drop out. Or if you want to study alone at home, start doing it for 10 hours every week, then 20, drop just one or two classes to free the time, and when you see it's working out, go all out.
0Desrtopa
This may not be convenient if working for a couple weeks requires time you'll only have if you drop out. If you end up not wanting to drop out after all, you can't necessarily afford to miss the classes.

Is school worth it for the learning? How about for the little piece of paper I get at the end?

In the comment section of this post, "Doug S." gives the most salient analysis I have seen. After stating, "the job of a university professor is to do research and bring in grant money for said research, not to teach! Teaching is incidental," he was asked why parents would pay upward of $40,000 annually for such a service. His parsimonious reply: "In most cases, it’s not the education that’s worth $40,000+. It’s the diploma. Earning a diploma demonstrates that you are willing to suffer in exchange for vague promises of future reward, which is a trait that employers value."

7Technologos
Before I started college, I read this professor's speech, which attempted to explain, given your concerns, why an education may nevertheless be valuable. It's biased towards its audience (UChicago students) but I think its relevant point can be summarized as: few jobs allow you to continue practicing the diversity of skills employed by academic work, and having a degree keeps your options wide open for a longer period. However, the real thesis of the speech is that university is uniquely a place to devote oneself to practicing the Art, broadly construed, of generating knowledge and beauty from everything. Other considerations:
* Becoming an academic is very hard without an undergraduate degree, so if you want that life, stick with it.
* It takes a great deal of luck to pull a Bill Gates. It is otherwise hard to convince people that your reasons for not having a degree are genuine and not ex post.
* At least in my case, it has been hard to find anywhere near as high a concentration of intelligent and interesting people outside the university as in the one I attended.
Hope something in there helps!
7[anonymous]
What do you plan on spending your time on if you don't go to school? Most jobs largely consist of being forced to do some assignment that you feel isn't worth your time; you're not going to escape that by dropping out. And I'd wager that a college degree is one of the best ways to snag a job that you DO actually enjoy. I suspect the REAL value of a college degree, aside from the basic intelligence indication, is that it says you can handle 4 years of doing largely unpleasant work.
9Zack_M_Davis
I can't speak for all people or all jobs, but in my experience, there's a certain dignity and autonomy in paid work that I never got out of school. After quitting University, I worked in a supermarket for nineteen months. Sure, it was low-paying, low-status, and largely boring, but I was much happier at the store, and I think a big reason for this was that I had a function other than simply to obey. At University, I had spent a lot of time worrying that I wasn't following the professor's instructions exactly to the letter, and being terrified that this made me a bad person. Whereas at the store, it didn't matter so much if I incidentally broke a dozen company rules in the course of doing my job, because what mattered was that the books were balanced and the customers were happy. It's not so bad, nominally having a boss, as long as there's some optimization criterion other than garnering the boss's approval: you can tell if you couldn't solve a customer's problem, or if the safe is fifty dollars short, or if the latte you made is too foamy. And when the time comes, you can clock out, and walk to the library, with no one to tell you what to study. Kind of idyllic, really.
4CronoDAS
I worked at a supermarket for three days, and was fired for insubordination. (I wanted to read a book when there were no customers coming to my register, and the boss told me not to...)
1gwern
I have a similar story, except in my case I was fired because my shirt was insufficiently black.
2Douglas_Knight
Could you elaborate? Were you fired for once not having a black shirt, or for not being able to acquire / evaluate black shirts? or, if it's possible to tell, having a bad attitude about the shirt rule?
4gwern
I had a shirt I felt was black and met the dress code; the manager felt that it didn't. I felt that since I had already spent something like $60 on new clothes to meet the dress code, and since I didn't interact with the customers at all, I wasn't going to go and buy a new black shirt. The manager felt I no longer needed to work there.
3DanArmak
Letting your future employer know you're willing to do all the unpleasant stuff you feel isn't worth your time. If you did it for a piece of paper, then surely you'll do it for a paycheck... right?
0[anonymous]
If I'm not actually willing to put up with pointless stuff for a paycheck, would I benefit from signaling that I am? Or would I just lose a useful filter on potential employers?
5wedrifid
Inasmuch as most people require the motivational structure, and if you consider the material worth learning: yes.
4[anonymous]
Well... that isn't the answer I wanted. I wanted "no".
1billswift
The correct answer should have been "It depends". Mostly on what you might want to do with that paper. I would mostly say "No". Unless you want a fairly boring, routine job working for someone else.
-1denisbider
In general, the answer is: If you intend to always work for yourself, owning your own companies, being your own boss, then a diploma is a waste of time. Diplomas are for people who want to work for others. But if you want to work for others, then get a degree, by all means. If you work for yourself, your customers are generally going to be moved by most other factors prior to being moved by the owner's formal education. Bosses and owners, however, are going to be moved by degrees. Owners like to see their underlings have degrees because it demonstrates a certain irrational loyalty and a lack of business savvy. This assures the owner that he will remain in charge - that you won't negotiate too hard for your benefits, or run away with his business plans and start a competing company, etc. Bosses like to see their underlings have degrees because they had to get one as well, so why shouldn't you suffer at least as much? By getting a degree, you signal your acceptance of your humble status in the pecking order. This is a prerequisite if you want to find your place in the hierarchy, but pointless if you want to be at the top.
6Jordan
There are some people who prefer to work for others, and some who prefer to work for themselves; however, the vast majority of people prefer neither, and for them college is neither a waste of time nor a means to signal: it is a stay of execution.
4Jordan
My two cents: If you're excelling in math, move up to a higher level. Math departments are usually very flexible in this regard (engineering departments not always so). My freshman year I signed up for a couple of graduate level math classes, and believe me, the knowledge I gained is not to be found in Wikipedia, or any other written form. You have to struggle for an understanding of higher math, and the setup for the struggle is greatly helped by having fellow students, a professor to guide you, and hard deadlines to motivate you.

I also felt a lot of classes I was forced to take were incredibly lame. I dropped a few classes throughout my undergrad, including two English classes. All I cared about was math as an undergrad, and because of that the education I got was incredibly impoverished. Looking back, I think this was simply a defense mechanism. I knew I was a hot shot at math, so whenever I felt challenged in another subject it was easier to simply say, "This is trivial, I just can't be bothered! I'm clearly intelligent anyway." Don't let the knowledge of your own intelligence prevent you from undertaking things that challenge your supposed intelligence! In particular, writing papers is hard, but is often misidentified by science oriented people as being lame or stupid.

Now, as a graduate student, I fantasize about being an undergrad again and having the luxury of being coerced into studying a variety of different topics. Yes, there are still lame aspects to many classes, but that is largely a factor in lower division work. If you can teach yourself then do so! Leverage your intelligence, learn more, and get yourself into upper division classes in multiple subjects where you can interact with intelligent people who are passionate about the subject, and where the professor will treat you like a valuable resource to be developed rather than simply a chore.
4JamesAndrix
This depends on a lot of things: How much debt will you be in at the end? If you press on now, will you actually finish? Do you have the personality to make money without a diploma? I made the mistake of pressing on early and incurring extra debt, but not pushing through to get a diploma. Not having a diploma is hard if you want the kinds of jobs that often require one arbitrarily. Doing something freelance or taking a non-degree job are hard in other ways. Fortunately you can test this with some time away from college. There's also a difference between what you CAN learn on your own and what you will actually take the time to learn. I know there are things that I would have been forced to learn which I have neglected to. If you're probably not going to finish, then cut your losses now, but make a clean break that will make it easy to go back. Finish the semester well.
4mormon2
This is going to sound horrible, but here goes: In my experience, school's value depends on how smart you are. For example, if you can teach yourself math, you can often test out of classes. If you're really smart, you may be able to get out of everything but grad school. Depending on what you want to do, you may or may not need grad school. Do you have a preferred career path? If so, have you tried getting into it without further schooling? The other question is what you have done outside of school. Have you started any businesses or published papers? With a little more detail, I think the question can be better answered.
4zaph
I reservedly second wedrifid's comment that the little piece of paper at the end is worth it. I know people who have gone far in life without one, and I don't mean amazing genius-savants either, just folks who spent time in industry, the military, etc. and progressed along. But I've also seen a number who got stuck at some point for lacking a degree. This was more a lack of signaling cred than smarts or ability. The statistics show that people with degrees on average earn more than those without, if that's of interest to you. But degrees don't instantly grant jobs, and some degrees are better preparation than others for the real world. It sounds like you're interested in a degree in math, which carries over into a lot of different fields. I think it's great that you're taking stock of what your education experience is giving you. As wedrifid mentioned, the motivation is an important part of schooling, and if you're in a program that is known to be rigorous, the credentials are definitely worth it. But those have to be weighed against current employment options. I'd encourage you to consider working with professors on research, investigating internships, etc., so that you get the full educational experience that you're looking for, and not be one of those graduates who only took classes and then expected a job to be waiting for them when they graduated.
1Zack_M_Davis
Correlation is not causation. Graduates as a group are smarter and more ambitious than nongraduates. The question is not whether people with a degree do better; the question is what the degree itself is buying you, if you're already a smart ambitious person who knows how to study.
2Technologos
I recall some studies (I hate not remembering authors or links) that tried to control for the effect of the degree itself by comparing those who got into a particular school but graduated from somewhere else to those who graduated from that school. Controls for the general "graduate" characteristic, but still misses the reasons for the choice. Upshot was that there wasn't much difference in income, though I believe that was in part because the highest-level schools send a substantial part of their undergraduates on to become academics.
3gwern
http://www.psychologytoday.com/blog/freedom-learn/200810/reasons-consider-less-selective-less-expensive-college-saving-money-is-jus
2Douglas_Knight
That quote asserts that SAT scores are the same as prestige. The 1998 and 1999 drafts of the paper looked at both, with different results, finding that average SAT score didn't matter, but various measures of prestige did. They have three versions of prestige: variance of SAT scores, Barron's ratings, and tuition. Variance is dropped in the 2002 published version. Tuition still predicts income. The most direct measure of prestige, rankings, seems to be quietly dropped in the few months between the 1998 and 1999 versions (am I missing something?). The final version seems to say on 1515, in a weirdly off-hand manner, that it doesn't matter, but I'm not sure if it's the same measure.
0CronoDAS
I remember reading about that study in the New York Times. I think that they said that they only found evidence of an income effect for black students...
0gwern
http://www.nytimes.com/2008/04/19/business/19money.html I suppose black students do tend to be poorer...
0Technologos
There it is. Thanks!
0wedrifid
Thanks for that link. I had wondered.
2Cyan
Off-topic*: Someone recently made the suggestion that it should be standard practice to link the Welcome Thread in the body of all Open Thread posts going forward, and I think that's a great idea.
* ...but made as a reply to Warrigal to bring it to the attention of the owner of this open thread; not a PM, so as to throw it open to general comment.
1[anonymous]
I read your comment when you posted it. I wonder why it took me until now to realize that by "the owner of this open thread", you meant me.
1Jack
That non-math class sounds dreadful. Are you really into classics or something? Also, I don't know where you go to school, but a lot of places allow students to do independent study in an area with the guidance of a professor. This is a really good option if the best non-math course you can find involves reading the Iliad. Also, I'm really just replying to this so that I can congratulate you on this sentence: This is maybe the best sentence I have read in the last few months.
1[anonymous]
Well, there's a choice of nine "Arts & Humanities" sequences I could be taking. Each one covers a single civilization (e.g. ancient Greece and Rome, early Europe, the Islamic Middle East) in detail, including history and paper-writing. Each consists of one double class each semester for a year. This sequence is the biggest component of the general education requirements here. Perhaps dreadfulness is mandatory. Awesome! Now, if only I could figure out why.
1Jack
Perhaps some is. But that requirement sounds especially bad. It definitely isn't a universal requirement. Any particular reason you are at this university? I know some schools have gotten rid of core requirements altogether (though if you aren't in the US you probably have fewer options). It is simple. And the notion that we should celebrate unwieldy discussions (and do so by expanding them!) perfectly encapsulates the culture of Less Wrong. But "celebrating" and "unwieldy" are two words that are never related in this way, which makes the sentence seem fresh and counter to prevailing custom.
1dominov
Oh god, this is still an issue for people in college? And here I was assuming that after I got out of high school I wouldn't think along these tempting-yet-ultimately-ruinous lines ever again.
1Matt_Simpson
It depends. The first few years may be like this, as you take a bunch of classes in areas you probably aren't interested in, but if you choose a major you like, it gets better as your schedule becomes dominated by those classes. Your own personality is another important factor here.
1dominov
Ach, I had not realized that required classes in college might feel as useless as required classes in high school. But perhaps college classes will be more rigorous and less likely to induce I-Could-Learn-This-On-Wikipedia Syndrome. I can but hope.

Sorry if this is getting annoying, but I recently thought of two new ideas that might make interesting video games, and I couldn't resist posting them here:


The first idea I had is an adventure game where you have a reality-distorting device that you must use before you try to do anything that wouldn't work in real life, but that you must not use before you do anything that would work in real life.

If you fail to use the device before doing something that wouldn't work in real life, then the consequences will be realistic, and disastrous. For example, if y... (read more)

6wedrifid
I am mentally cringing at the idea of being forced to guess the game developer's password. The first time I am punished for something that should work but doesn't, I would have to discard the game. For a game of any significant depth or breadth I would be shocked if I couldn't come up with a strategy that the developer hadn't considered and that is penalised inappropriately. I suspect I would find a more conventional game a more useful (and enjoyable) challenge to my rational thinking. Not that a game designed to teach some chemistry (gunpowder, etc.) and engineering (what happens when the gunpowder takes out that post?) is useless. I just think it is an inferior tool for training rationality specifically than, say, ADOM is.
3PeerInfinity
Perhaps I didn't explain clearly: In the game, whenever you make any significant action, you must choose whether to do so with the reality-distorting device on or off. You make this decision based on whether you expect that the plan would work in real life or not. This means that if there is a "game developer's password", then it's only one bit long for each decision, and can be guessed by trial and error. Perhaps this is a feature, rather than a bug. If you save your game before making the decision, then you don't even lose any time. Perhaps the game could have an "easy mode" where the game just shows you the results of your choice, and then continues as if you had made the right choice, rather than forcing you to restart or reload from a saved game. And I agree that the game shouldn't require advanced knowledge of chemistry and engineering. The gunpowder/pillar thing was just the first example I thought of. Anyway, this game was just a random idea I had, and your criticism is welcome. And is this the ADOM you're referring to? http://en.wikipedia.org/wiki/Ancient_Domains_of_Mystery I suppose that pretty much any game (not just video games) can be better for training rationality than more passive forms of entertainment, like watching TV. Pretty much any game is based on objective criteria that tell you when you made a bad decision. Though it's not always easy to figure out what the bad decision was, or what you should have done instead, or even if there was anything you could have done better.
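A minimal sketch of the one-bit-per-decision mechanic described above, written as illustrative Python; the names (Action, resolve, easy_mode) and the example action are my own assumptions, not anything from an actual game design:

```python
# Illustrative sketch only -- names and structure are assumptions, not a real design.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    works_in_real_life: bool  # ground truth known to the game, hidden from the player

def resolve(action: Action, device_on: bool, easy_mode: bool = False) -> bool:
    """Return True if play continues, False if the realistic consequences end the run."""
    # The device should be ON exactly when the plan would NOT work in real life.
    correct = device_on != action.works_in_real_life
    if correct:
        print(f"'{action.description}' plays out as intended.")
        return True
    if easy_mode:
        print(f"Wrong call on '{action.description}': showing the realistic outcome, then continuing.")
        return True
    print(f"'{action.description}' fails realistically and disastrously; reload a saved game.")
    return False

# Lighting gunpowder with a torch would work in real life, so the device stays OFF here.
resolve(Action("light the gunpowder with a torch", works_in_real_life=True), device_on=False)
```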

Ok, so I just heard a totally awesome MoBio lecture, the conclusions of which I wanted to share. Tom Rando at SUSM found that myogenic stem cells divide asymmetrically, such that all of the original template chromatids are inherited by the same daughter cell, and then the other daughter cells go on to differentiate. This might imply that an original pool of stem cells acts as templates for later cell types, preserving their original DNA, and thus reducing error in replications, since cells are making copies of the originals instead of making copies of copies... (read more)

New study shows that one of LW's favorite factoids (having children decreases your happiness rather than increases it) may be either false or at least more complex than previously believed: http://blog.newsweek.com/blogs/nurtureshock/archive/2009/11/03/can-happiness-and-parenting-coexist.aspx

I've been trying to ease some friends into basic rationality materials but am running into a few obstacles. Is there a quick and dirty way to deal with the "but I don't want to be rational" argument without seeming like Mr. Spock? Also, what's a good source on the rational use of emotions?

I've been trying to ease some friends into basic rationality materials but am running into a few obstacles.

I suggest the same techniques that work with any kind of evangelism. Convey that you are extremely sexually attractive and otherwise high in status by virtue of your rationalist identity. Let there be an unspoken threat in the background that if they don't come to share your beliefs someone out there somewhere may just kill them or limit their mating potential.

2SilasBarta
Sad thought, but that explains what makes evangelism successful. To whoever modded wedrifid down: was it because of the implicit endorsement of bad behavior, or because you have some reason to believe this is not how evangelism often works?
1jimmy
I think it's worth distinguishing between two possible reasons to be against endorsement. One is that this is bad epistemic hygiene. The other is the possibility of a lost purpose, so that the person ends up trying to "act" rational rather than be rational. In response to the former: epistemic hygiene is good and should be practiced when possible, but is not necessary. Bullets kill good guys just as easily as bad guys, but guns remain a valuable tool if you're sufficiently careful. I'm surprised there hasn't been more discussion of when usage of the 'dark arts' is acceptable. In response to the latter: how might we make sure we don't end up achieving the wrong goal here?
1RobinZ
"Implicit" endorsement?
0wedrifid
Given that I think evangelising rationalist culture is possibly a net negative to the culture itself, even the implication is gone.
0RobinZ
Could you rephrase that? I'm not sure what you think I should assume or what that assumption implies with regards to your original statement. My objection was similar to jimmy's, if that helps.
0wedrifid
I also implied a similarity between the in-group and out-groups, in particular a similarity to the out-group 'religious believers'. Then there is the fact that my suggestions just don't really help Bindbreaker in a practical actionable way. Not that my suggestions weren't an effective recipe for influence. It's just that they are too general to be useful. Of course, I could be more specific about just what techniques Bindbreaker could use to generate the social dominance and influence he desires but that is just asking for trouble! ;)
1SilasBarta
Don't sell yourself short! The first part (about conveying sexual attractiveness) might not be actionable, since people are generally already doing whatever they know how to do to maximize this or are okay with its current level. But the second part (about the implied threat of not joining) certainly converts easily into actionable advice. At least, it's far more specific and usable than most dating advice I've seen!
0wedrifid
Interesting. My intuition would be that the 'convey sexual attractiveness' part is more actionable than the implied-threat part. I think the amount of influence that can be gained by increasing personal status is greater per unit of effort than what can be expected from attempting to socially engineer an infrastructure that coercively penalises irrationality. Maybe that is just because I haven't spent as much time researching the latter!
1LauraABJ
That's an interesting proposition you have going. In order to convey the superior sexual attractiveness of rationality we need some sexy rationalists to proselytize. Thank you Carl Sagan! But seriously, the problem might be that basic rationality doesn't translate easily into sexuality, threat, or other emotional appeal. Those things need to be brought in from other skill sets. Rationality can help apply skills and techniques to a given end, but it doesn't give you those techniques or skills.
0[anonymous]
A significant part of my point is that rational persuasion isn't the most effective way of influencing them or of drawing them into a belief system.
1SilasBarta
To achieve this, it's only necessary to convincingly give the impression that failure to join will have those negative consequences. You don't need to actually move society in this direction! What I had in mind for Bindbreaker's case was something like, "If you're not familiar with rationality, you leave yourself open to being turned into a money pump. I know a ton of people who know exactly how to do this [probably a lie], and I'd really hate for one of them to take advantage of you like that [truth]! I'd never forgive myself for not doing more to teach you about being a rationalist! [half-truth]" Not that I'd advocate lying like that, of course :-(
1Technologos
Danger, Will Robinson: "If you're not familiar with [Jesus], you leave yourself open for [going to hell]. I know [the devil] knows exactly how to [send you there], and I'd really hate for [the devil] to take advantage of you like that. I'd never forgive myself for not doing more to teach you about [Jesus]!" At least the argument for rationalism would be in terms they are familiar with, I suppose.
3SilasBarta
I said it would be more convincing, not that it would necessarily be a better argument. And I think the money pump is just a little more demonstrable than the devil. In any case, the way that you would achieve the subtle threats when evangelizing standard, popular religions wouldn't be with any kind of direct argument like that one. Rather, you would innocently drop references to how popular it already is, how the social connections provided by the religion help its members, how they have strength in numbers and in members' fanaticism (hinting at how it can be deployed against those it deems a threat) ... you get the idea.
4JamesAndrix
I've asked this before: Why don't rationalists run money pumps? As far as I know, none of us are exploiting biases or irrationality for profit in any systematic way, which is itself irrational if we really believe this is an option. We're either an incredibly ethical group, or money pumping isn't as easy as it would seem from reading the research.
2SilasBarta
I think you've answered your own question. Let me elaborate: 1) Rationalists significantly overestimate people's vulnerability to money pumps, often based on mistaken views about how e.g. religious irrationality "must" spill over into other areas. 2) Even if you don't care about ethics, scamming people will just make the population more suspicious of people claiming mastery of rationalist ideas.
2gwern
To elaborate your elaboration: To do money pumping on a grand scale, you have to be in the financial markets; but there are no money pumps there which aren't being busily pumped away. ('Bears make money, bulls make money; pigs get slaughtered'.) This is true for pumps like casinos, too - lots of competition. And most ways to make a money pump in other areas have been outlawed or are regulated; working money pumps like Swoopo (see http://www.codinghorror.com/blog/archives/001196.html ) are usually walking a fine line.
0wedrifid
The potential for a money pump is an indication that the preference system is inconsistent, and so open to exploitation to some extent. It does not mean that the agent in question must be incapable of altering their preferences in the face of blatant exploitation. 'Patching' of a utility function in the face of the inconsistencies that are easiest to exploit comes naturally to humans. With that in mind, I observe that bizarre and irrational preferences, and the exploitation thereof, are extremely prevalent, and I would go as far as to say a significant driver of the economy. Of course, it isn't only rationalists that enjoy the benefits of exploiting suckers.
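For readers unfamiliar with the term, here is a minimal sketch (my own illustration, not from the thread) of the classic money pump against an agent with cyclic preferences A over B, B over C, C over A: the agent will pay a small fee for each "upgrade", and a trader can cycle it indefinitely. The function names and the fee are assumptions for illustration.

```python
# Hypothetical illustration of a money pump exploiting cyclic (intransitive) preferences.

PREFERRED = {("A", "B"): "A", ("B", "C"): "B", ("C", "A"): "C"}  # winner of each pair

def prefers(x, y):
    """True if the agent prefers item x to item y under the cyclic preference table."""
    return PREFERRED.get((x, y), PREFERRED.get((y, x))) == x

def pump(holding="A", fee=1, trades=6):
    """Repeatedly offer the agent whichever item it prefers to its current holding, for a fee."""
    extracted = 0
    for _ in range(trades):
        holding = next(x for x in "ABC" if x != holding and prefers(x, holding))
        extracted += fee
    return holding, extracted

print(pump())  # ('A', 6): after six trades the agent holds what it started with, minus six fees
```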
1Technologos
I'm not disagreeing with you, I think. Until rationalists start showing tangible social benefits like the ones backing the subtle threats you mentioned, it will be hard to get people in the door who aren't already predisposed. Though I have had trouble developing a demonstrable money pump that can't be averted by saying "I would simply be too suspicious of somebody who offered me repeated deals to allow them to continually take my money." Of course, the standard retort might be "you play the lottery," but then that's not a great way to make people like rationalists/rationalism.
1SilasBarta
Okay, we're in agreement; I just wasn't sure what your ultimate point was, and used the opportunity to point out how the technique is used in other contexts.
1wedrifid
The comparison I'm concerned with is "people who don't conform to the beliefs of the orthodoxy are burned as heretics".
8Scott Alexander
To Eliezer's list, I would add "Something To Protect" and the very end of "Circular Altruism". When a friend of mine said something similar during a discussion of health care about not really wanting to be rational, I linked him to those two and summarized them like this (goes off and finds the discussion): If you're using a different example with something less important than saving lives, maybe switch to something more important in the cosmic scheme of things. I'm very sympathetic to people who say good feelings are more important to them than a few extra bucks, and I don't even think they're being irrational most of the time. The more important the outcome, the more proportionately important rationality becomes than happy feelings.
3clay
I thought that this may be of interest to some. There was an IAMA posted on reddit from a person that suffers from alexithymia, or a lack of emotions, recently. Check it out. http://www.reddit.com/r/IAmA/comments/9xea8/i_am_unable_to_feel_most_emotion_i_have/
3zaph
Are they saying that they don't want to be rational, or just not emotionless? I think that people do want to be rational, in some sense, when dealing with emotions, but they're just never going to have an interest in, say, Kahneman and Tversky, or other formal theory. I've noticed that some women I know have read "He's Just Not That Into You", which, from how they describe it, sounds like strategies for rationally dealing with strong emotions. I know it sounds hokey, but people have read that book and were able to put their emotions in a different light when it comes to romantic relationships. I couldn't tell you if the advice was good or not, but I think it does sound like there's at least an audience for what you're talking about.
2LauraABJ
People don't want to go through the formal processes of being rational in many emotional situations (and they are often right not to). I think it helps to let people know that sometimes it's rational not to go through the formal routes, because the outcome will be better if they don't (and it's rational to want the best outcome). For example, if you just met a person you might want a relationship with, don't make said person fill out a questionnaire and subject them to a pros-and-cons list of starting said relationship. (I know this sounds absurd, but I know someone who did just this to all her boyfriends. Perhaps fittingly, she ended up engaged to an impotent Husserlian phenomenologist twice her age.)
1Bindbreaker
Usually they seem to think that being rational is the same as being emotionless, despite my efforts to convince them otherwise. I think this may again be thanks largely to that dreaded Mr. Spock.
6Psy-Kosh
Just keep saying (with your voice clearly pained, no need to hide the feeling) "ugh... Spock, or vulcans in general, are NOT rational. They are what silly not so rational scriptwriters imagine rationality to be", I guess?
3jimmy
I'd try playing taboo with the word "rational". You both agree that being Spock-like is bad, so instead of fighting with those connotations, just try to point out that there's a third alternative and why it's better.
3Eliezer Yudkowsky
http://lesswrong.com/lw/hp/feeling_rational/ http://lesswrong.com/lw/go/why_truth_and/ http://yudkowsky.net/rational/virtues

Perhaps there should be an 'Open Thread' link between 'Top' and 'Comments' above, so that people could get to it easily. If we're going to have an open thread, we might as well make it accessible.

Anyways, I was looking around Amazon for a book on axiology, and I started to wonder: when it comes to fields that are advancing, but not at a 'significant pace', is it better to buy older books (as they've passed the test of time) or newer ones (as they may have improved on the older books and include new info)? My intuition tells me it's better to buy newer books.

3RobinZ
Assuming total ignorance of the field (absent total ignorance, I could probably distinguish between good and poor books), I'd choose newer editions of older books.
0wedrifid
That's a good point.
[-]Cyan30

Why is TvTropes (no linky!) such a superstimulus?

4Alicorn
I think a fair bit of it is the silly titles. I can resist clicking on things whose nature I can figure out from their names (such as when I'm intimately familiar with the Trope Namer), but toss me a bewildering title and I have to know what it is and where it got that name.
1CronoDAS
Also, it's a subject in which everyone is an expert simply by virtue of living in our culture.
1RobinZ
One factor: it provides variable-interval positive reinforcement* - those moments when you see a page which describes something you recognize happening all the time, and those moments when you see a show you recognize acknowledged on the page.
* Edit for those who don't want to follow the link: variable-interval reinforcement occurs with some set frequency (approximately, in this case), but at non-equal spacings. Other things with variable intervals are raindrops falling on a small area of pavement, cars passing on a street, and other things which are loosely modeled by Poisson processes. Any (say) ten-minute period has about the same number as any other ten-minute period, but they aren't spread out at regular intervals.
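A small illustrative simulation (my addition, with assumed parameters and function names) of why a variable-interval schedule feels steady on average but unpredictable in the moment: gaps between Poisson events are exponentially distributed, yet any long window contains roughly rate-times-window-length events.

```python
# Hypothetical illustration: a Poisson process has irregular gaps between events
# but a roughly constant count per fixed-length window.

import random

def poisson_event_times(rate_per_min=1.0, total_minutes=60.0, seed=0):
    """Return event times of a Poisson process with the given rate over [0, total_minutes]."""
    random.seed(seed)
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate_per_min)  # exponentially distributed waiting time
        if t > total_minutes:
            return times
        times.append(t)

times = poisson_event_times()
gaps = [round(b - a, 2) for a, b in zip([0.0] + times, times)]
print(len(times), "events in 60 minutes (about 60 expected)")
print("first few gaps (minutes):", gaps[:5])
```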

I posted an idea for 'friendly' AI over on AcceleratingFuture the other night, while in a bit of a drunken stupor. I just reread it and I don't immediately see why it's wrong, so I thought I'd repost it here to get some illuminating negative feedback. Here goes:

Make it easy to bliss out.

Consider the following utility function

U(n, x_n) = max(U(n-1, x_{n-1}), -x_n^2)

where n is the current clock tick, x_n is an external input (aka, from us, the AI’s keepers, or from another piece of software). This utility is monotonic in time, that is, it never decreases, a... (read more)
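A minimal sketch of my reading of the proposal (not the poster's own code): the recursion U(n, x_n) = max(U(n-1, x_{n-1}), -x_n^2) is non-decreasing over clock ticks and is capped at 0, which is reached exactly when some input x_n equals 0. The function name and starting value below are assumptions for illustration.

```python
# Hypothetical illustration of the proposed monotone, capped-at-zero utility.

def utility_trace(inputs, u_start=float("-inf")):
    """Apply U(n) = max(U(n-1), -x_n**2) over a sequence of external inputs x_n."""
    u, trace = u_start, []
    for x in inputs:
        u = max(u, -x**2)  # never decreases; maximum possible value is 0
        trace.append(u)
    return trace

print(utility_trace([3.0, 1.5, 2.0, 0.0, 5.0]))
# [-9.0, -2.25, -2.25, 0.0, 0.0] -- once the keepers send x = 0, utility is maxed forever
```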

4Vladimir_Nesov
Expected utility is not something that "goes up" as the AI develops. It's the utility of all it expects to achieve, ever. It may obtain more information about what the outcome will be, but each piece of evidence is necessarily expected to bring the outcome either up or down, with no way to know in advance which way it'll be.
0Jordan
Can you elaborate? I understand what you wrote (I think) but don't see how it applies.
1Vladimir_Nesov
Hmm, I don't see how it applies either, at least under default assumptions -- as I recall, this piece of cached thought was regurgitated instinctively in response to sloppily looking through your comment and encountering the phrase which was for some reason interpreted as confusing utility with expected utility. My apologies, I should be more conscious, at least about the things I actually comment on...
0Jordan
No worries. I'd still be curious to hear your thoughts, as I haven't received any responses that help me understand how this utility function might fail. Should I expand on the original post?
0Vladimir_Nesov
Now I hopefully did read your comment adequately. It presents an interesting idea, one that I don't recall hearing before. It even seems like a good safety measure, with a tiny chance of making things better. But beware of magical symbols: when you write x_n, what does it mean, exactly? AI's utility function is necessarily about the whole world, or its interpretation as the whole history of the world. Expected utility that comes into action in AI's decision-making is about all the possibilities for the history of the world (since that's what is in general determined by AI's decisions). When you say "x_n" in AI's utility function, it means some condition on that, and this condition is no simpler than defining what the AI's box is. By x_n you have to name "only this input device, and nothing else". And by x_n=0 you also have to refer some exact condition on the state of the world, one that it won't necessarily be possible to meet precisely. So the AI may just go on developing infrastructure for better understanding of the ultimate meaning of its values and finer and finer implementation of them. It has no motive to actually stop. Even when AI's utility function happens to be exactly maxed out, the AI is still there: what does implementation of an arbitrary plan look like, I wonder? Maybe just like the work of an AI arbitrarily pulled from mind design space, a paperclip maximizer of sorts. Utility is for selecting plans, and since all plans are equally preferable, an arbitrary plan gets selected, but this plan may involve a lot of heavy-duty creative restructuring of the world. Think of utility as a constructor for AI's algorithm: there will still be some algorithm even if you produce it from "trivial" input. And finally, you assume AI's decision theory to be causal. Even after actually maxing out its utility, it may spend long nights contemplating various counterfactual opportunities it still has at increasing its expected utility using possibilities that weren't r
0Jordan
This is what I sought to avoid by making the utility function depend only on a numerical value. The utility does not care which input device is feeding it information. You can assume that there is an internal variable x, inside the AI software, which is the input to the utility function. We, from the outside, are simply modifying the internal state of the AI at each moment in time. The nature of our actions, or of the input device, is intentionally unaccounted for in the utility function. This is, I feel, as far from a magical symbol as possible. The AI has a purely mathematical, internally defined utility function, with no implicit reference to external reality or any fuzzy concepts. There are no magical labels such as 'box', 'signal', 'device' that the utility function must reference to evaluate properly.

I wonder too. This is, in my opinion, the crux of the issue at hand. I believe it is inherently an implementation issue (a boundary case), rather than a property inherent to all utility maximizers. The best case scenario is that the AI defaults to no action (now this is a magical phrase, I agree). If, however, the AI simply picks a random plan, as you suggest, what is to prevent it from picking an alternative random plan in the next moment of time? We could even encourage this in the implementation: design the AI to randomly select, at each moment in time, a plan from all plans with maximum expected utility. The resulting AI, upon attaining its maximum utility, would turn into a random number generator: dangerous, perhaps, but not on the same order as an unfriendly superintelligence.

A friend asked me a question I'd like to refer to LW posters.

TL;DR: he wishes to raise the quality of life on Earth; what should he study to get a good idea of which charities are best to donate to?

My friend has a background in programming, physics, engineering, and information security and cryptography. He's smart, he's already financially successful, has friends who are also likely to become successful and influential, and he's also good at direct interactions with people, reading and understanding them and being likable - about as good as I am capa... (read more)

0wedrifid
He could start with shut up and multiply. (Or, perhaps he could just change 'best' to 'most appealing'.)
0DanArmak
Rereading what I wrote, I don't quite agree with it myself... I retract that part (will edit). What I wanted to say (and did not in fact say) was this. To take the example of FAI research - it's hard to measure or predict the value of giving money to such a cause. It doesn't produce anything of external value for most of its existence (until it suddenly produces a lot of value very rapidly, if it succeeds). It's hard to measure its progress for someone who isn't at least an AI expert. It's very hard to predict the FAI research team's probability of success (as with any complex research). And finally, it's hard to evaluate the probability of uFAI scenarios vs. the probability of other extinction risks. If some of these could be solved, I think it would be a lot easier to convince people to fund FAI research.
[-]Jack30

I'd like to start talking about scientific explanation here. This is the particular problem I have been working on recently:

A plausible hypothesis is that scientific explanations are answers to "why" questions about phenomena. Suppose I hear a "cawing" noise and I ask my friend why I hear this cawing. This is a familiar enough situation that most of us would have our curiosity satisfied by an answer as simple as "there is a crow". But say the situation was unfamiliar (perhaps the question is asked by a child). In that case "th... (read more)

1jscn
The problem is even worse than that, because "Sometimes, crows caw" predicts both the hearing of a caw and the non-hearing of a caw. So it does not explain either (at least, based on the default model of scientific explanation). If we go with "Crows always caw and only crows caw" (along with your extra premises regarding lungs, sound and ears etc), then we might end up with a different model of explanation, one which takes explanation to be showing that what happened had to happen. The overall problem you seem to have is that neither of these kinds of explanation gives a causal story for the event (which is a third model for scientific explanations). (I wrote an essay on these models of scientific explanation earlier in the year for a philosophy of science course which I could potentially edit and post if there's interest.) Some good, early papers on explanation (i.e., ones which set the future debate going) are: The Value of Laws: Explanation and Prediction (by Rudolf Carnap), Two Basic Types of Scientific Explanation, The Thesis of Structural Identity and Inductive-Statistical Explanation (all by Carl Hempel).
1Jack
This issue actually came up while I was reading Hempel's "Aspects of Scientific Explanation". It can be seen as a specific objection to the covering law model as well as a general problem for all explanation. Think of it as a poorly specified inductive-statistical explanation. Not at all. One problem with Hempel is that there are covering-law predictions that aren't causal stories and therefore don't look like explanations. For example, if some event X always causes Y and Z then we can have a covering law model predicting Z from Y and Laws. But that model doesn't result in an explanation for Z. But even a causal explanation is going to have general laws which aren't reducible. Thus, the problem would remain. And actually, "crows caw" is a causal explanation so I'm not sure why you would think my problem was the absence of causation. If you did see my last two paragraphs in this reply I think they do a better job explaining the problem than this first post. And by all means, post anything you think would be insightful.
0byrnema
Great topic. I would enjoy seeing this as a top level post. So what is the answer that would satisfy the child? For an adult, saying "that was a crow, and crows sometimes caw" seems like a fairly complete answer because we already possess a lot of contextual information about animals and why they make noises.

A lot of the lower-level 'explanation' you described above was really about how the crow cawed (lungs, vibrations, etc.). With this information, you could build a mechanical crow that did in fact caw -- but the mechanism for how the mechanical crow caws would have nothing to do with why the organic crow caws. The real explanation for the crow's caw is in the biology of the crow. The crow caws to communicate something to other crows, because it is part of a social group.

As reductionists, we all accept that biology would be reducible to interacting layers of complicated physics, but this example about the crow really gives us a concrete example to see that it may not be immediately straightforward how reductionism is supposed to work. No amount of detail regarding how the crow caws is going to get at why it does, because we can build the mechanical crow that caws for no reason. On the other hand, we can make a little mechanical gadget that beeps for the same reason that the crow caws -- to let other gadgets know it's around or needs something. The gadget and the crow don't have to have much of anything in common material-wise in order to have the same 'why'. What they do have in common is something more abstract, a type of relational identity as a member of a group that exchanges information.

Later edit: I think we could inflate 'physics' to include this type of information, because physics has mathematics (and algebra). So we'll be able to define things that really depend on relationships and interactions, rather than actual material properties, but I wonder to some extent if this is what was envisioned as going to be eliminated by reductionism?
1Jack
Ok, but what is the form that contextual information takes? I'm skeptical that most adults actually have a well-formed set of beliefs about the causes or biophysics of animal behavior. I think my mind includes a function that tells me what sorts of things are acceptable hypotheses about animal behavior, and I have on hand a few particular facts about why particular animals do particular things. But I don't think anyone can actually proceed along the different levels of abstraction I outlined. I'm worried that what actually makes us think "there is a crow and crows caw" is that it just connects the observation to something we're familiar with. We're used to animals and the things that they do, and we probably have a few norms that guide our expectations with animals. But rarely if ever are we reducing or explaining things in terms of concepts we already fully understand. Rather, we just render familiar the unfamiliar. Think of explanations as translations, for example. If I say "a plude is what a voom does" no one will have any idea what I mean. But if I tell you that a plude is a mind and a voom is a brain, suddenly people will think they understand what I mean even if they don't actually know what a mind is or anything about a brain. I wonder if a taxonomy of explanations would be worthwhile. We have physical reduction, token history explanation (how that crow got outside my window and what led it to caw at that moment), type historical explanation (the evolutionary explanation for why crows caw)... I'm sure we can come up with more. Note though that any historical explanation is going to be incomplete in the way explained above, because it will appeal to concepts and entities that need to be reduced. Anyway, the "explanation" in my above comment was never for "why do crows caw" but why do I hear cawing. And I posited that there was a crow and that crows caw. These assumptions are sufficient to predict the possibility of cawing. But even though they are pre
1loqi
Why is this the privileged "real explanation"? For example, the real explanation is that evolution produces complex social assemblies in need of signaling mechanisms. Or the real explanation is that an asteroid or comet disrupted the previous biological configuration, allowing crow-like birds to evolve. Etc... We don't expect that reductionist approaches have the magical potential to successfully answer all possible questions. It's possible (and necessary) for information to be irretrievably lost. So how it's supposed to work is actually straight-forward: seek evidence that distinguishes between different causal hypotheses for the crow's caw. Depending on what you're looking for, there may be no meaningful explanation, as in the case of chaotic systems. For example, the most concise explanation for a particular cloud being of a particular shape may just be the entire mountain of data comprising the positions and velocities of the air and water particles involved. I think this is an overstatement. The only hard upper bound we have on how much information might be contained in a crow's vocal system is the number of possible states in the physical system comprising it, which is huge. It even seems conceivable that significant portions of a crow's DNA might be reconstructible from a detailed enough understanding of its vocal system. I'd say the mechanical crow caws because it was built that way. Then you're faced with the question of how something can possibly be built for no reason. But "this type of information" is stated in terms of physically observable phenomena. We can reason logically and mathematically about the things we observe without new physics, as long as the observations themselves have believable reductions to known physics. I don't see what you're looking for that isn't captured by a reductionist model of the crows, their communication mechanisms, their brains, and their evolutionary history.
3byrnema
I'm certain there is not enough information in how the crow caws. For example, there is not enough information even in the DNA, even if the base pairs could be deduced from its caw (which I would guess is impossible), because the full explanation will involve its environment and the other crows. (For example, if you cloned a horse in a sterile laboratory, you wouldn't know why it swished its tail without also cloning a fly.) We know there is enough information in the whole universe. The crow, its environment and its entire evolutionary history do explain everything about it. So our different answers to 'why' a crow caws are different ways and angles of summarizing the limited story that we know about the whole universe. I agree with Jack that it would be useful to have a 'good' classification of these answers. However, it's not a project I would be interested in following, generally, because the quality of the outcome is too subjective. I would enjoy reading the classification of someone who thought the way I did, and would find it frustrating to read that of someone who thought differently, with no tools to distinguish 'quality' beyond this feeling of accord or frustration. Exactly. At least, we may agree that the crow did caw.
0loqi
I'd agree that there probably isn't enough information, but I think your certainty is misplaced. I'm guessing the crow's DNA contains quite a lot of information about its environment and social habits. I have yet to be convinced that a Bayesian superintelligence couldn't infer the existence of fly-like organisms from a horse's DNA.
0byrnema
Actually, it seems we agree. I'd agree that there could be enough information in the horse DNA to deduce many salient features about the fly. In fact, I might even put a higher probability on the information being in there somewhere than you would. But I thought we were trying to determine where such information is coded ... in other words, how large a swathe of information would you need to guarantee that you have enough? But I see the conversation has drifted over time. What I was saying at the beginning, which I believe you disagreed with, was that the answer was mathematical in some way (algebraic, actually, because my favored answer to the 'why' was about relationships among the crows rather than about the materials the crow is made of) while you were pressing that it should still be answered in the physicality of the universe: So by now I've changed my view. I agree with you that all the answers do ultimately lie in the materials: the crows and their material environment. At the time of my first post, I had preferred to answer that the crow had a "purpose" (to speak with other crows), but of course this is a story which would actually reduce to a bunch of statistics over time showing that crows had better fitness when they communicated in effective ways.
0[anonymous]
Well, sure. A Bayesian superintelligence would probably guess that a crow caws to communicate with other crows even without the crow's DNA. There's a lot of similarity and pattern in the universe, and you can infer much by analogy. What we're debating, however, isn't what a superpower might be able to infer but where the information is coded for why the crow caws. Perhaps the universe is deterministic and everything can be deduced by a superintelligence from the periodic table of the elements and the number of pigeons born in Maine on Sunday. Only in this sense would the DNA of the crow contain information about the caw and the DNA of the horse contain information about the fly. This is why I am so confident: The DNA base pairs of life are random, except for the fact that they need to code information that leads to better fitness. Yet coding information provided by the environment itself would be redundant information-wise. So while the information could be there, by accident, there's no reason that it would be there necessarily. I imagine that if efficient fly-swatting leads to some genetic advantage, then one might deduce the size and weight of a fly from the length and motion dynamics of the tail. That would be neat. But unlikely, because what are the chances that the tail is so tuned? Why should the information necessarily be there?

Just great. I had a song parody idea in the shower this morning, and now I'm afraid that I'm going to have to write a rationalist version of Fiddler on the Roof in order to justify it.

"Mapmaker, mapmaker,
Make me a map,
Text me a truth,
Fax me a fact ... "

3Alicorn
2Zack_M_Davis
0Alicorn
1Jordan
For some reason the tune I had in my head while I was reading this switched from "If I Were a Rich Man" to "Bohemian Rhapsody".

I'd like to ask a moronic question or two whose answers aren't immediately obvious to me and probably should be. (Please note, my education is very limited, especially procedural knowledge of mathematics/probability.)

If I had to guess what the result of a coin flip would be, what confidence would I place in my guess? 50%, because that's the same as the probability of me being correct, or 0%, because I'm just randomly guessing between 2 outcomes and have no evidence to support either (well, I guess there being only 2 outcomes is some kind of evidence)?

Likewise with a lo... (read more)

4Psy-Kosh
Think of the probability you assign as a measure of how "not surprised" you would be at seeing a certain outcome. The total probability of all mutually exclusive possibilities has to add up to 1, right? So if you would be equally surprised at heads or tails coming up, and you consider all other possibilities to be negligible (or you state your prediction in terms of "given that the coin lands such that one face is clearly the 'face up' face..."), then you ought to assign a probability of 1/2 to each. (Again, slightly less to account for various "out of bounds" options, but in the abstract, considered on its own, 1/2.) That is, the same probability ought to be assigned to each, since you'd be (reasonably) equally surprised at either outcome. So if the two also have to sum to 1 (100%), then 1/2 (50%) is the correct amount of belief to assign.
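A minimal sketch of that bookkeeping in Python, assuming we give heads and tails equal "surprise" weights and ignore the out-of-bounds cases (the names and weights here are purely illustrative):

```python
# Purely illustrative: equal "surprise" weight for heads and tails, nothing else considered.
surprise_weights = {"heads": 1.0, "tails": 1.0}

total = sum(surprise_weights.values())
probabilities = {outcome: w / total for outcome, w in surprise_weights.items()}

print(probabilities)  # {'heads': 0.5, 'tails': 0.5}
# Mutually exclusive, exhaustive outcomes must sum to 1.
assert abs(sum(probabilities.values()) - 1.0) < 1e-9
```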
1Alicorn
Surprise is not isomorphic to probability. See this.
1Yorick_Newsome
Ah, that makes a lot more sense: I was looking at the probability from the viewpoint of my guess (i.e. heads) instead of just looking at all the outcomes equally (no privileged guesses), if you take my meaning. I also differentiated confidence in my prediction from the chance of my prediction being correct. How I managed to do that, I have no idea. Thanks for the reply.
0Psy-Kosh
Well, maybe you were thinking about "how confident am I that this is a fair coin vs that it's biased toward heads vs that it's biased toward tails" which is a slightly different question.
0wedrifid
Given how 'confidence' is used in a social context that differentiation would feel quite natural.
1saturn
In the context of most discussions on this site, "confidence" is the probability that a guess is correct. For example:

* I guess that a flipped coin will land heads. My confidence is 1/2, because I have arbitrarily picked 1 out of 2 possible outcomes.
* I guess that, when a coin is flipped repeatedly, the ratio of heads will be close to half. My confidence is close to 1, because I know from experience that most coins are fair (and the law of large numbers).

"Confidence interval" is just confidence that something is within a certain range. You should also be aware that in the context of frequentism (most scientific papers), these terms have different and somewhat confusing technical definitions.
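A rough simulation of the difference between those two confidences, assuming a fair coin and an arbitrary "close to half" window of 0.48-0.52 (the window and the sample sizes are just illustrative):

```python
import random

def flip():
    # Fair coin assumed; results vary from run to run.
    return "heads" if random.random() < 0.5 else "tails"

trials = 10_000

# Guess 1: a single flip lands heads -- correct about half the time.
hits = sum(flip() == "heads" for _ in range(trials))
print(hits / trials)  # roughly 0.5

# Guess 2: over many flips, the ratio of heads is close to half -- almost always correct.
experiments = 200
close = sum(
    0.48 < sum(flip() == "heads" for _ in range(trials)) / trials < 0.52
    for _ in range(experiments)
)
print(close / experiments)  # very close to 1
```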
1Richard_Kennaway
You might want to look at Dempster-Shafer theory, which is a generalisation of Bayesian reasoning that distinguishes belief from probability. It is possible to have a belief of 0 in heads, 0 in tails, and 1 in {heads,tails}. It may be that, when looked at properly, DS theory turns out to be Bayesian reasoning in disguise, but a brief google didn't turn up anything definitive. Is anyone here more informed on the matter?
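For what it's worth, here is a small Python sketch of that "total ignorance" assignment, using my reading of the standard belief/plausibility definitions rather than any particular DS library:

```python
FRAME = frozenset({"heads", "tails"})

# Total ignorance: all mass on the whole frame, none on the singletons.
mass = {
    frozenset({"heads"}): 0.0,
    frozenset({"tails"}): 0.0,
    FRAME: 1.0,
}

def belief(hypothesis):
    # Sum of masses of focal sets wholly contained in the hypothesis.
    return sum(m for s, m in mass.items() if s <= hypothesis)

def plausibility(hypothesis):
    # Sum of masses of focal sets that intersect the hypothesis.
    return sum(m for s, m in mass.items() if s & hypothesis)

print(belief(frozenset({"heads"})))        # 0.0 -- no committed belief in heads
print(plausibility(frozenset({"heads"})))  # 1.0 -- but heads remains fully plausible
print(belief(FRAME))                       # 1.0 -- the coin certainly lands on one of them
```

Unlike a Bayesian probability assignment, the beliefs in heads and tails don't have to sum to 1; the gap between belief and plausibility is exactly the unassigned ignorance.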
0Yorick_Newsome
After looking at the reasoning in that article I was about to credit myself with being unintentionally deep, but I'm pretty sure that when I posed the question I was assuming a fair coin for the sake of the problem. Doh. Thanks for the interesting link. (It's really kind of embarrassing asking questions about simple probability amongst all the decision theories and Dutch books and priors and posteriors and inconceivably huge numbers. Only way to become less wrong, I suppose.)

We can mean two things by "existing": either "something exists inside the universe", or "something exists on the level of the universe itself" (for example, "the universe exists"). These things don't seem to be the same.

Our universe being a mathematical object seems to be a tautology. If we can describe the universe using math, the described mathematical object shares every property of the universe, and it would be redundant to assume there is some "other level of existence".

One confusion to clear up is some sor... (read more)

Love of Shopping is Not a Gene: exposing junk science and ideology in Darwinian Psychology might be of interest, seeing as evolutionary psychology is pretty popular around here. (Haven't had a chance to read it myself, though.)

Just a bit of silliness:

With apologies to Brad DeLong, when reading WSJ editorials you need to bear two things in mind:

  1. The WSJ editorial page is wrong about everything.
  2. If you think the WSJ editorial page is right about something, see rule #1.

After all, here’s what you would have believed if you listened to that page over the years: Clinton’s tax hike will destroy the economy, you really should check out those people suggesting that Clinton was a drug smuggler, Dow 36000, the Bush tax cuts will bring surging prosperity, Saddam is backing Al Qae

... (read more)
4Scott Alexander
Force anyone to express several controversial opinions per day for several decades and you'll be able to cherry pick a list of seven hilariously wrong examples.
0CronoDAS
Well, can you find something they were right about? (I haven't looked.)

I remember well enough to describe, but apparently not well enough to Google, a post or possibly a comment that said something to the effect that one should convince one's opponents with the same reasoning that one was in fact convinced by (rather than by other convenient arguments, however cogent). Can anyone help me find it?

1Zack_M_Davis
You're probably thinking of "A Rational Argument" or "Back Up and Ask Whether, Not Why".
0Alicorn
Neither of those look quite like it...
0RobinZ
I was reminded of The Bottom Line, for what that's worth, although I see both "A Rational Argument" and "Back Up and Ask Whether, Not Why" link back to it.
0Alicorn
This looked like it might be it for a while, but I have the memory of the statement being made pretty directly, not just stabbed at sideways.
3Zack_M_Davis
The last paragraph of "Back Up" seems fairly explicit. ... "Singularity Writing Advice" points six and seven?
0Alicorn
Oh, the writing advice looks very much like what I remember - but I'm almost positive I haven't come across the particular document before! Perhaps some of the same prose was reused elsewhere?
0komponisto
Eliezer has been known to recycle text from old documents on occasion. (I'm thinking of certain OB posts having to do with a Toyota Corolla and Deep Blue vs. Kasparov, which contain material lifted from here and here respectively.)

Hi, I have never posted on this forum, but I believe that some Less Wrong readers read my blog, FeministX.blogspot.com.

Since this at least started out as an open thread, I have a request of all who read this comment, and an idea for a future post topic.

On my blog, I have a topic about why some men hate feminism. The answers are varied, but they include a string of comments back and forth between anti feminists and me. The anti feminists accuse me of fallacies, and one says that he "clearly" refuted my argument. My interpretation is that my argu... (read more)

I read through a couple of months worth of FeministX when I first discovered it...

(Because of a particular skill exhibited: namely the ability to not force your self-image into a narrow box based on the labels you apply to yourself, a topic on which I should write further at some point. See the final paragraph of this post on how much she hates sports for a case in point. Most people calling themselves "feminist" would experience cognitive dissonance between that and their self-image. Just as most people who thought of themselves as important or as "rationalists" might have more trouble than I do publicly quoting anime fanfiction. There certainly are times when it's appropriate to experience cognitive dissonance between your self-image and something you want, but most people seem to cast that net far too widely. There is no contradiction, and there should be no cognitive dissonance, between loving and hating the same person, or between being a submissive feminist who wants alpha males, or between being a rationalist engaged on a quest of desperate importance who reads anime fanfiction, etcetera. But most people try to conform so narrowly and so unimaginati... (read more)

6Zack_M_Davis
*winces* So, I agree that no one is competent and everyone has an agenda, but it's not as if everyone sides with "their" sex. No, historically we suck at this, too. Got any decision theory questions?
1FeministX
"winces* So, I agree that no one is competent and everyone has an agenda, but it's not as if everyone sides with "their" sex." I didn't mean to imply that they did always side with their physical sex.
8LucasSloan
Why do you think the discussion of gender roles and gender equality must necessarily break down into a camp for men and a camp for women? By creating two groups you have engaged mental circuitry that will predispose you to dismissing the other side's arguments when they are correct and supporting your own side's even when they are wrong. http://lesswrong.com/lw/lt/the_robbers_cave_experiment/ http://lesswrong.com/lw/gw/politics_is_the_mindkiller/
-2FeministX
"Why do you think of the discussion of gender roles and gender equality to necessary break down into a camp for men and a camp for women?" I don't personally think this. I don't think there are two genders. There are technically more than two physical sexes even if we categorize the intersexed as separate. I feel that either out of cultural conditioning or instincts, the bulk of people push a discussion about gender into a discussion about steryotypical behaviors by men and by women. This then devolves into a "battle of the sexes" issue where the "male" perspective and "female" perspective are constructed so that they must clash. However, on my thread, there are a number of people that seem to have no qualms with the idea of barring female voting and such things. I think that sort of opinion goes beyond the point where one could say that an issue was framed to set up a camp for men and a camp for women. Once we are talking about denying functioning adults sufferage, then we are talking about an attitude which should be properly labelled as anti-female.
[-]loqi100

However, on my thread, there are a number of people that seem to have no qualms with the idea of barring female voting and such things.

On the internet, emotional charge attracts intellectual lint, and there are plenty of awful people to go around. If you came here looking for a rational basis for your moral outrage, you will probably leave empty-handed.

But I don't think you're actually concerned that the person arguing against suffrage is making any claims with objective content, so this isn't so much the domain of rational debate as it is politics, wherein you explain the virtue of your values and the vice of your opponents'. Such debates are beyond salvage.

2FeministX
I saw Eliezer's posts saying that politics are a poor field for honing rational discussion skills. It is unfortunate that anyone should see a domain such as politics as a place where discussions are inherently beyond salvage. It's a strange limitation to place on the utility of reason to say that it should be relegated to domains which have less immediate effect on human life. Politics are immensely important. Should it not be a priority to structure rational discussion so that there are effective ways of correcting for the propensity to rely on bias, partisanship and other impulses which get in the way of determining truth or the best available course? If rational discussion only works effectively in certain domains, perhaps it is not well developed enough to succeed in ideologically charged domains where it is badly needed. Is there definitely nothing to be gained from attempting to reason objectively through a subject where your own biases are most intense?
4DanArmak
One of the points of Eliezer's article, IIRC, is that politics when discussed by ordinary people indeed tends not to affect anything except the discussion itself. Political instincts evolved from small communities where publicly siding with one contending leader, or with one policy option, and then going and telling the whole 100-strong tribe about it really made a difference. But today's rulers of nations of hundreds of millions of people can't be influenced by what any one ordinary individual says or does. So our political instinct devolves into empty posturing and us-vs-them mentality. Politics are important, sure, but only in the sense that what our rulers do is important to us. The relationship is one-way most of the time. If you're arguing about things that depend on what ordinary people do - such as "shall we respect women equally in our daily lives?" - then it's not politics. But if you're arguing about "should women have legal suffrage?" - and you're not actually discussing a useful means of bringing that about, like a political party (of men) - then the discussion will tend to engage political instincts and get out of hand. There's a lot to be gained from rationally working out your own thoughts and feelings on the issue. But if you're arguing with other people, and they aren't being rational, then it won't help you to have a so-called rational debate with them. If you're looking for rationality to help you in such arguments - the help would probably take the form of rationally understanding your opponents' thinking, and then constructing a convincing argument which is totally "irrational", like publicly shaming them, or blackmailing, or anything else that works. Remember - rationality means Winning. It's not the same as having "rational arguments" - you can only have those with other rationalists.
2loqi
It's not so strange if you believe that reason isn't a sufficient basis for determining values. It allows for arguments of the form, "if you value X, then you should value Y, because of causal relation Z", but not simply "you should value Y". Debates fueled by ideology are the antithesis of rational discussion, so I consider its "ineffectiveness" in such circumstances a feature, not a bug. These are beyond salvage because the participants aren't seeking to increase their understanding, they're simply fielding "arguments as soldiers". Tossing carefully chosen evidence and logical arguments around is simply part of the persuasion game. Being too openly rational or honest can be counter-productive to such goals. That depends on what you gain from a solid understanding of the subject versus what you lose in sanity if you fail to correct for your biases as you continue to accumulate "evidence" and beliefs, along with the respective chances of each outcome. As far as I can tell, political involvement tends to make people believe crazy things, and "accurate" political opinions (those well-aligned with your actual values) are not that useful or effective, except for signaling your status to a group of like-minded peers. Politics isn't about policy.
0bogus
I agree with your assessment, but applying our skills to the political domain is very much an open problem--and a difficult one at that. See these wiki pages: [Mind-killer] and [Color politics] for a concise description of the issue. The gist of it is that politics involves real-world violence, or the governmental monopoly thereof, or something which could involve violence in the ancestral environment and thus misleads our well-honed instincts. Thus, solving political conflicts requires specialized skills, which are not what LessWrong is about. Nevertheless, there are a number of so-called open politics websites which are more focused on what you're describing here. I'd like to see more collaboration between that community and the LessWrong/debiasing/rationality camp.
6LucasSloan
Yes, those who would deny women suffrage are anti-female. But in order to feel they deserve suffrage, one need not be pro-female. One only need be in favor of human rights.
6RobinZ
I hate to say it, but your analysis seems rather thin. I think a productive discussion of social attitudes toward feminism would have to start with a more comprehensive survey of the facts of the matter on the ground - discussion of poll results, interviews, and the like. Even if the conclusion is correct, it is not supported in your post, and there are no clues in your post as to where to find evidence either way.

Agreed. The post is almost without content (or badly needed variation in sentence structure, but that's another point altogether) - there's no offered reason to believe any of the claims about what anti-feminists say or what justifications they have. No definition of terms - what kind of feminism do you mean, for instance? Maybe these problems are obviated with a little more background knowledge of your blog, but if that's what you're relying on to help people understand you, then it was a poor choice to send us to this post and not another.

I'm tickled that Less Wrong came to mind as a place to go for unbiased input, though.

5wedrifid
Indeed. And even more so that she seems to be getting it.
7Jack
I now have a wonderful and terrible vision of the future in which less wrong posters are hired guns, brought in to resolve disagreements in every weird and obscure corner of the internets. We should really be getting paid.
2wedrifid
Did Robin make a post on how free market judicial systems could work or am I just pattern matching on what I would expect him to say, if he got around to it?
1Jack
I don't know if Robin has said anything on this but it is a well-tread issue in anarcho-capitalist/individualist literature. Also, there already are pseudo-free market judicial systems. Like this. And this!
1DanArmak
How would you stop this from degenerating into a lawyer system? Rationality is only a tool. The hired guns will use their master rationalist skills to argue for the side that hired them.
5Eliezer Yudkowsky
Technically, you cannot rationally argue for anything. I suppose you could use master rationalist skillz to answer the question "What will persuade person X?" but this relies on person X being persuadable by the best arguer rather than the best facts, which is not itself a characteristic of master rationalists. The more the evidence itself leans, the more likely it is that a reasonably rational arbiter and a reasonably skillful evidence-collector-and-presenter working on the side of truth cannot be defeated by a much more skillful and highly-paid arguer on the side of falsity.
0DanArmak
A master rationalist can still be persuaded by a good arguer because most arguments aren't about facts. Once everyone agrees about facts, you can still argue about goals and policy - what people should do, what the law should make them do, how a sandwich ought to taste to be called a sandwich, what's a good looking dress to wear tonight. If everyone agreed about facts and goals, there wouldn't be much of an argument left. Most human arguments have no objective right party because they disagree about goals, about what should be or what is right.
4Eliezer Yudkowsky
One obvious reply would be to hire rationalists only to adjudicate that which has been phrased as a question of simple fact. To the extent that you do think that people who've learned to be good epistemic critics have an advantage in listening to values arguments as well, then go ahead and hire rationalists to adjudicate that as well. (Who does the hiring, though?) Is the idea that rationalists have an advantage here, enough that people would still hire them, but the advantage is much weaker and hence they can be swayed by highly paid arguers?
0DanArmak
If the two parties can agree on the phrasing of the question, then I think it would be better to hire experts in the domain of the disputed facts, with only minimal training in rationality required. (Really, such training should be required to work in any fact-based discipline anyway.) If there's a tradition of such adjudication - and if there's a good supply of rationalists - then people will hire them as long as they can agree in advance on submitting to arbitrage. Now, I didn't suggest this; my argument is that if this system somehow came to exist, it would soon collapse (or at least stop serving its original purpose) due to lawyer-y behavior.
0wedrifid
You know, this actually makes (entirely unintended) sense. If the rationalists are obliged to express their evaluations in the form of carefully designed and discrete bets then they are vulnerable to exploitation by others extracting arbitrage.
0Technologos
Presumably, "arbitration"--and that's a good point, and with clear precedents in the physical world. Nevertheless, "lawyer-y" behavior hasn't prevented a similar mutual-agreement-based system from flourishing, at least in the USA. The biggest difference is that arbitrators are applying a similarly mutually-agreed-upon law, where rationalists mediating a non-rationalist dispute would be applying expertise outside the purview of the parties involved. That's where your point about advocacy-like behavior becomes important.
3Jack
Parties to the dispute can split the cost. Also, if the hired guns aren't seen as impartial there would be no reason to hire them so there would be a market incentive (if there were a market, which of course there isn't). Or we have a professional guild system with an oath and an oversight board. Hah.

Parties to the dispute can split the cost.

Actually, here's a rule that would make a HELL of a lot of sense:

Either party to a lawsuit can contribute to a common monetary pool which is then split between both sides to hire lawyers. It is illegal for either side to pay a lawyer a bonus beyond this, or for the lawyer to accept additional help on the lawsuit.

4gwern
And you don't see any issues with this? That would seem to be far worse than the English rule/losers-pay. I pick a random rich target, find 50 street bums, and have them file suits; the bums can't contribute more than a few flea infested dollars, so my target pays for each of the 50 suits brought against him. If he contributes only a little, then both sides' lawyers will be the crappiest & cheapest ones around, and the suit will be a diceroll; so my hobos will win some cases, reaping millions, and giving most of it to me per our agreement. If he contributes a lot, then we'll both be able to afford high-powered lawyers, and the suit will be... a diceroll again. But let's say better lawyers win the case for my target in all 50 cases; now he's impoverished by the thousands of billable hours (although I do get nothing). I go to my next rich target and say, sure would be a shame if those 50 hobos you ran over the other day were to all sue you...
3Jordan
How is this different from how things currently are, beyond a factor of two in cost for the target?
2gwern
It's not an issue of weakening the defense/target, but a massive strengthening of the offense. Aside from the doubling of the target's defense expenses (what, like that's irrelevant or chump change?), I can launch 50 or 100 suits against my target for nothing. At that point, a judge having a bad day is enough for me to become a millionaire. Any system which is so trivially exploitable is a seriously bad idea, and I'm a little surprised Eliezer thinks it's an improvement at all. (I could try to do this with contingency-fees, but no sane firm would take my 100 frivolous suits on contingency payment and so I couldn't actually do this.)
0Jordan
Good point. My initial response to your comment was short sighted.
0Oscar_Cunningham
Surely that only works if the probability of winning a case depends only on the skill of the lawyers, and not on the actual facts of the cases. I imagine a lawyer with no training at all could unravel your plan and make it clear that your hobos had nothing to back up their case. Also, being English myself, it hadn't dawned on me that the losers-pay rule doesn't apply everywhere. Having no such system at all seems really stupid. It also occurs to me that hiring expensive lawyers under losers-pay is like trying to fix a futarchy: you don't lose anything if you succeed, but you stand to lose a lot if you fail.
1gwern
If facts totally determine the case, then my exploit doesn't work but Eliezer's radical change is equally irrelevant. If facts have no bearing on who wins or loses, and it is purely down to the lawyers, then Eliezer's system turns lawsuits into a coin flip, which is only an improvement if you think that the current system gets things right less than 50% of the time, and you'd also have to show there would be no negating side-effects like people using my exploit. If facts determine somewhere in-between, then there is a substantial area where my exploit will still work. Suppose I have to put up a minimum of 10k for each hobo lawsuit asking for 1 million; then I need only have a 1% chance of winning to break even. So if cases with lousy lawyers on both sides wind up with the wrong verdict even 2% of the time, I'm laughing all the way to the bank. And it's very easy for bad lawyering work to lose an otherwise extremely solid judgement. A small slipup might result in the defendant not even showing up, in which case the defendant gets screwed over by the default judgement against him. Even the biggest multinational can mess up: consider this recent case where Pepsi is contesting a $1.26 billion default judgement which was assessed because a secretary forgot a letter. They probably won't have to pay, but even if they settle for a tiny fraction of 1.26 billion, how many frivolous lawsuits do you think one fluke like that could fund? For that matter, consider patent trolls; they have limited funds and currently operate quite successfully, despite the fact that they are generally suing multinationals who can spend far more than the troll on any given case. How much more effective would they be if those parasites could force their hosts to mount a far less lavish & effective defense than they would otherwise? I did some reading; apparently it's long-standing tradition all the way back to colonial times. The author said the Americans likely wanted to discourage litigation, w
0eirenicon
If the defending party is only required to match the litigating party's contribution, the suits will never proceed because the litigating bums can't afford to pay for a single hour of a lawyer's time. And while I don't know if this is true, it makes sense that funding the bums yourself would be illegal.
1gwern
Well, the original said you could only not fund the legal defense; I don't see anything there stopping you from putting the bums up in a hotel or something during the lawsuits. But even if defendants were required to spend the same as the plaintiff, we still run into the issue I already mentioned: So now I simply need to put up 5 or 10k for each bum, guaranteeing me a very crappy legal team but also guaranteeing my target a very crappy legal team. The less competent the 2 lawyers are, the more the case becomes a roll of the dice. (Imagine taking it down to the extreme where the lawyers are so stupid or incompetent they are replaceable by random number generators.) The most unpredictable chess game in the world is between the 2 rankest amateurs, not the current World Champion and #2. But maybe your frivolous win-rate remains the same regardless of whether you put in 10k or a few million. There's still a problem: people already use frivolous lawsuits as weapons: forcing discovery, intrusive subpoenas, the sheer hassle, and so on. Those people, and many more, would regard this as a massive enhancement of lawsuits as a weapon. You have an enemy? File a lawsuit, put in 20k, say, and now you can tell your crappy lawyer to spend an hour on it every so often just to keep it kicking. If your target blows his allotted 20k trying to get the lawsuit ended despite your delaying & harassing tactics, now you can sic your lawyer on the undefended target; if he measures out his budget to avoid this, then he has given in to suffering this death of a thousand cuts. And if he goes without? As they say, someone who represents himself in court has a fool for a client....
3Eliezer Yudkowsky
Well, a lot of what you're pointing out here is the result of other systemic problems that need other systemic fixes. Judges may not be fast enough to toss out foolish complaints. One might need a two-tier system whereby cheap lawyers and reasonably sane judges could quickly toss almost all the lawsuits, and any that make it past the first bar can get more expensive lawyers. One may need a basic cost of a dismissed suit to the litigant, or some higher degree of loser-pays. Lawsuits are already weapons. This isn't obviously a massive enhancement. At most, it increases costs by a factor of 2 for rich defendants, while greatly improving (if it works as planned!) the position of poor defendants.
1gwern
OK. So the English rule is a weakened version of this; we should expect to see great improvements from it, since between it and contingency-fees and class-actions, poor defendants have much greater financial wherewithal than their poverty would allow. Do we see great improvements? If we don't, why would we expect your full-strength treatment to work? And if we can't justify it on any empirical grounds, why on earth would you put it forward on theoretical grounds when a minute's thought shows multiple issues with it, to say nothing of how one would actually enforce equitable spending? (The issue would seem to be as difficult & tricky as enforcing campaign finance laws...) And if I, an utter layman to the law, can come up with flaws you seem to acknowledge as real, how many ways could a legal eagle come up with to abuse it? That's pretty lame. Reminds me of Yvain's "Solutions to Political Problems As Counterfactuals":
3DanArmak
I would contribute nothing to the pool, hire a lawyer privately on the side to advise me, and pass his orders down to the public courtroom lawyer. If I have much more money than the other party, and if the money can strongly enough determine the lawyer's quality and the trial's outcome, then even advice and briefs prepared outside the courtroom by my private lawyer would be worth it.
3Eliezer Yudkowsky
Then your lawyer gets arrested. It sometimes is possible to have laws or guild rules if the prohibited behavior is clear enough that people can't easily fool themselves into thinking they're not violating them. Accepting advice and briefs prepared outside the courtroom is illegal, in this world.
1DanArmak
I agree with Alicorn. Even if you pass the law, there's no practical way to stop people from getting private advice secretly, especially in advance of the court date. If you try real hard, private lawyers will go underground (and as the saying goes, only criminals will have lawyers :-) People will pass along illegal samizdat manuals of how to behave in court, half of them actually presenting harmful advice and none of them properly attributed. Congratulations: you have just forced lawyering to become a secret Dark Art.
3Eliezer Yudkowsky
And this is not an improvement over the current status quo because...?
0DanArmak
Do you think this is an improvement? As described, it looks like it's a repeat of a similar system with similar problems. (And how much of that is because we already know those failings and are best at describing them?)
0Nick_Tarleton
How would you have legal advice outside of a court case (to ensure predictability) handled?
1Vladimir_Nesov
You seem to be engaging in motivated skepticism. Consider how much easier it becomes to get a good professional support for the poor side in Eliezer's setup. There is just too much trouble with "underground" professional representation. A significant portion of expensive lawyers may simply not like the idea of going "underground", because it hurts their self-image and lowers their status within the community of "white-book" lawyers. Respect trivial inconveniences.
2DanArmak
You're right, I was deliberately playing devil's advocate. I should reconsider how likely the failure mode I described is, although I do believe its probability isn't very small.
1Alicorn
Any other advice? What if I want to go to my Ethical Culture Society leader to ask him or her about whether something my in-court lawyer suggests would be right? What if my spouse is a lawyer? What if I'm a lawyer - a really expensive one?
0Eliezer Yudkowsky
Okay, suppose a lawyer is not allowed to accept briefs. In the Least Convenient case where you happen to be a really expensive lawyer, how much can actually be accomplished courtroom-wise if you talk for a few hours with a much less expensive lawyer? Would any lawyers care to weigh in?
1wedrifid
I'm tempted to suggest 'about the same amount a professional dancer can teach an amateur, and for similar reasons'.
1Alicorn
Why would you need to do anything with the inexpensive lawyer? Contribute nothing to the fund - maybe even forfeit your half of whatever the other party contributes - and then represent yourself.
4MBlume
I suspect that the only real solution to the Lawyer Problem is to remove the necessity of the profession -- ie, either simplify the law, or cognitively enhance the people to the point where any person who can not hold the whole of the law in his/her head can be declared legally incompetent.
6DanArmak
If possible, that would certainly be a great solution. The original (our-world) Lawyer Problem goes beyond what we've discussed here: it involves (ex-) lawyers both deliberately making the law and the case law more and more complex, to increase the value of their services.
2RobinZ
That is frelling brilliant.
1Alicorn
Have a karma point for using Farscape profanity.
0Technologos
I'm just waiting for LW to develop its own variants of profanities.
0[anonymous]
"Have at thee, accursed frequentist!"
0Alicorn
There's "woo".
0Technologos
Mhm. Not enough hard consonants, though--"woo that wooing wooer" doesn't sound especially angry. I suggest "materd" as a general insult, meaning "one whose MAp and TERritory Diverge."
1Alicorn
"Morong" might work as an allusion to "moron" and compression of "more wrong". It's a little more pronunciation-friendly than "materd".
0Technologos
Like it.
0Jack
Very likely everyone's map diverges from the territory. An insult needs to have a more limited scope.
0Technologos
Hm. Agreed. I suppose I was thinking in terms of pointing out a specific, obvious case. "Being a materd" or something similar.
2Alicorn
I would totally join a rationalist arbitration guild. Even if this cut into the many, many bribes I get to use my skills on only one party's behalf ;) Perhaps records of previous dispute resolutions can be made public with the consent of the disputants, so people can look for arbitrators who have apparently little bias or bias they can live with?
1CannibalSmith
What are you talking about, we have our first customer already!
0DanArmak
Please see my reply to wedrifid above.
2wedrifid
More or less, because both sides have to agree to the process. Then the market favours those arbiters that manage to maintain a reputation for being unbiased and fair. This still doesn't select for rationality precisely. But it degenerates into a different system to that of a lawyer system.
1DanArmak
Yes, but if a side can hire a rationalist to argue their case before the judge, then that rationalist will degenerate into a lawyer. (And how could you forbid assistance in arguments, precisely? Offline assistance at least will always be present.) And since the lawyer-like rationalists can be paid as much as the richest party can afford, while the arbiter's fees are probably capped (so that anyone can ask for arbitration), the market will select the best performing lawyers and reward them with the greatest fees, and the best rationalists who seek money (which is such a cliched rational thing to do :-) will prefer being lawyers and not judges. Edit: added: the market will also select the judges who are least swayed by lawyers. It still needs to be shown that the market will have good information as to whether a judge had decided because the real rational evidence leaned one way, or because a smart lawyer had spun it appropriately. It's not clear to me what this will collapse to, or whether there's one inevitable outcome at all.
0wedrifid
Would a lawyer by any other name still speak bullshit? Yes. But why are we talking about lawyers and judges? You have explained well the reason that capping is a terrible idea. Now it is time to update the 'probably capped' part. It's not clear to me either. I also add that I rather doubt that the market, even with full information, would select for the most rational decisionmakers. That's just not what it wants.
0DanArmak
Because I think that in the proposed scenario, where people hire master rationalists to arbitrate disputes, these arbitrators and other rationalists who would be hired by each side independently for advice would start behaving like judges and lawyers do, respectively. (Although case law probably wouldn't become important.) I didn't mean they would be capped by guild rules or something like that, but rather, that the effective market prices would stay low. I've no proof of this, economics is not my strong suit, but here are the reasons I think that's likely to happen:

1. If arbitration is so expensive that some people can't afford it (and it needs to be affordable by the poorer party in a conflict), that's an untapped market someone could profit from. Whenever your argument is with a poor party, you have to have an arbitrator whose fee is at most twice the fee the other party can or is willing to afford, and there's no effective low limit here. (State-provided judges and loser pays winner's fees do have something going for them.)
2. Being a better arbitrator doesn't require direct investment of money on the part of the arbitrator. So a good arbitrator who's not getting enough work can lower his prices.
3. Third parties interested in seeing a dispute resolved - if only to achieve peace and unity - might contribute money towards the fee, or send volunteer arbitrators, in exchange for the parties to the dispute agreeing to arbitration. Finally, competition between arbitrators (for money and work) would eventually draw the fees down, assuming a reasonable supply of arbitrators.
4. What would make people choose an arbitrator that wasn't the cheapest available? Assuming some kind of minimal standard or accreditation (e.g., LW karma > 1000), an arbitrator is inferior if he cannot properly comprehend your rational argument or might be swayed by your opponent's master rationalist lawyer. You then have a choice: invest your money in a costlier and fairer arbitrator, or in a
0Jack
I see, so rationalist arbitration of a dispute raises the stakes in status and neutral-party persuasion, and that would lead to a market for lawyers? In what sense would this be damaging/harmful? Edit: Obviously it would be really bad if lawyers were causing the arbitrators to make bad decisions. But presumably the arbitrators are trained to avoid biasing information, fallacies, etc. If you want to persuade an arbitrator you have to argue well--referencing verifiable facts, not inflaming emotions or appealing to fallacies. If all online disagreements did that then the internet would be a much better place! If arbitration leads to better standards, all the better. Now, the system might be unfair to those who couldn't afford a lawyer and so can't present as much data to further their cause. But 1) presumably the arbitrator does some independent fact checking and 2) there are already huge class barriers in arguments. Arguments between an average high school dropout and an average PhD are already totally one-sided. The presence of arbitration wouldn't change this.
0DanArmak
If that's so then there's no point to arbitration. He with the best lawyer wins. Put it this way: take, in the general (average) case, any decision made by an arbitrator. For simplicity, suppose it's "A is right, B is wrong." Now suppose party B had employed the services of the very best rationalist ever to live as a lawyer. What is the probability the arbitrator would have given the opposite judgment instead? How high a probability are you willing to accept before giving up on the system? And how high a probability do you estimate, in practice?
3Jack
The purpose of arbitration isn't to establish the truth of a question. If that were the case there would be no reason for the arbitrator to even listen to the disagreeing parties. She would be better off just going off and looking for the answer on her own. This would also take much, much longer since she wouldn't want to leave any information out of the calculation. Rather, the purpose of arbitration is to facilitate agreement. Not just any agreement but a kind of pseudo-Aumann-style agreement between the two parties. The idea is that since people aren't natural Bayesian calculators and have all kinds of biases and incentives that keep them from agreeing, they'll hire one to do the calculating for them. This means we want the result to be skewed toward the side with better arguments. If the side with weak arguments doesn't end up closer to the side with strong arguments then we're doing it wrong. This is true even if one side puts a lot more time or money into their arguments. Otherwise you'd have to conclude that arguing never has a point because the outcomes of arguments are skewed toward those who are the smartest, have done the most research and thought up the best arguments.
3DanArmak
If agreement is more important to you than objective truth, then sure, that method will work. I just happen to think a system that optimizes for agreement at the expense of truth and facts tends to lead to a lot of pain in the end. You end up with Jesuits masterfully arguing the number of angels that can dance on the head of a pin.
0Jack
Uh... like I said: Edit: If a rationalist is hired to arbitrate a dispute between two Jesuits regarding the number of angels she isn't going to start complaining that there are no angels. That isn't what she was hired to do. If the Jesuits want to read some atheistic arguments they can find those on their own. The task of the arbitrator is applying rationalist method to whatever shared premises the disputing parties have. But the system as a whole still tends toward truth because an arbitration between a Jesuit and an atheist will generate a ruling in favor of atheism (assuming the Jesuit believes in God because of evidence and not Kierkegaardian faith or "grace").
0Eliezer Yudkowsky
Think we've got some fundamental disagreements here about just what it is that rationalists do. You cannot just hire them to argue anything. The ideal rationalist is the one who only ends up arguing true beliefs, and who, when presented with anything else, throws up their hands and says "How am I supposed to make that sound plausible?" That which can be used to argue for any side is not distinguishing evidence, whether "that" is a strategy, a person, an outlook on life, whatever.
3DanArmak
I reply: I'm paying you a lot of money. You'll find a way. When I say or hear "rationality", I think of the tool, not of the noble "ideal rationalist" whose only pursuit is truth, not money or other personal interest. Rationality is winning. I'm hiring a master rationalist to make me win my court case. What's not to like? A rational debate and agreeing on objective truth may be what the arbitration system wants. But what the individual disputant wants, in the end, in an important enough court case, is to win. If I have to game the system to win, I will. (It doesn't help when we create legal entities like corporations, which are liable to get into many more trials and also to treat many more trials as all-out war where winning is paramount.)
2CannibalSmith
Also, the irony of a feminist coming to an overwhelmingly male community for advice. :)
0FeministX
Oh, sorry. To clarify, I know my original post was never substantiated with any evidence based analysis for the true motivations behind anti-feminism. What I was referring to was the latter part of the comment thread between a commenter, Sabril and a few other commenters and me. I think their attacks on my capacity for objective reasoning are a bit hypocritical.
6CannibalSmith
You should rectify that as soon as possible. Hypocrisy doesn't make one wrong. An assertion that murder is wrong is not falsified by it being said by a murderer.
1wedrifid
Especially if you catch a hint of a sinister, sadistic pleasure in his eyes.
0FeministX
" An assertion that murder is wrong is not falsified by it being said by a murderer." No, but saying that there is no point in arguing with a woman because women are not capable to discerning objective truth is an instance of making an assertion which is not based on objective truth (unless you can provide evidence that being female necessarily prevents capacity for objective reasoning in all cases and subsequently prevents the ability to arrive at objective truth). It is like saying, "you rely on personal attacks, therefore your perspective on the environment is not correct"
0Larks
This is a bit strong: a more reasonable interpretation is that women are simply much less capable or liable to discern the truth than men.
0CannibalSmith
What I'm saying is that you should make sure you're right before calling other people wrong lest you be a hypocrite just like them.
1FeministX
That's not an argument against anyone even if it is true. The relative likelihood of one person vs another arriving at a correct outcome is irrelevant when you see the actual argument and conclusion before you. At that point, you must evaluate only on the merits of the argument and the conclusion. Secondly, that's not a reasonable interpretation because it is too vague to determine whether it is true or not. Less capable or reliable on average? At the extreme ends of capability? Less capable or reliable in what percentage of endeavors? What kind of endeavors? I would not define this behavior as hypocrisy. Being wrong does not make an accusation of a logical fallacy erroneous, nor does it make it hypocritical. And being wrong does not mean the opponent is correct, so calling them wrong is truthful and perhaps a demonstration of superior rationality. What I call hypocrisy is relying on the very logical error you accuse another person of when you accuse them. The merit of the ultimate conclusion is not what I am discussing. I am only referring to the argumentation.
1CannibalSmith
... Actually, forget the whole hypocrisy thing. Forget about the commenters. Correct your mistakes, learn the facts, put more effort into writing clearly. If you do all that, your next post will be much more persuasive and will consequently attract comments of higher quality. Heck, it might even attract us! :)
0Cyan
Just out of curiosity, were you familiar with this post before you wrote the above? (And who wrote "This is a bit strong: a more reasonable interpretation..."? It doesn't currently appear in the parent to your post.)
0FeministX
Cyan, the poster Larks wrote that response. I had not read that post before I made the comment. Eliezer says that authority is not 100% irrelevant in an argument. I think this is true because 100% of reliance on authority can't ordinarily be removed. Unless the issue is pure math or directly observable phenomena. But removal of reliance on a particular individual's authority/competence/biological state etc. is one of the first steps in achieving objective rationality.
5wedrifid
tu quoque, it's like ad hominem light.
4Alicorn
*finds name "sabril" and reads from there* This first comment, and the later ones, betray a repulsive attitude, and I wouldn't blame you for being furious and therefore slightly off your game thereafter. That said, Sabril makes several moderately cogent points - the numbered items in particular are things I've noticed with disapproval before. I'm about to go to bed, so I'm not going to delve too deeply into the history of your blog to find an exhaustive list or lots of context, but it looks like he also has a legitimate complaint or three about your data regarding the Conservative Party in the UK, your failure to cite some data, the apparently undefended implication about war, the anecdote-based unfavorable comparison of arranged marriage versus non-arranged, and your tendency to cite... uh... nothing that I've run across so far. Also, this seems to beg your own question: And now I've gotten to this part of the page and I've decided I don't want to read anything else you have to say:
-2FeministX
"And now I've gotten to this part of the page and I've decided I don't want to read anything else you have to say: I am a female supremacist, not a true feminist " Why does this bother you so much? Why would it invalidate everything I have to say or render everything I say uninteresting? It is indeed impossible to find someone who will remain detatched from the issue of feminism.
5LucasSloan
May I ask the moral difference between a female supremacist and a male supremacist? Your pre-existing bias against males calls into doubt everything you say afterward. If you have already decided that men are oppressive pigs and women are heroic repressed figures who would be able to run the world better (I assume that is what female supremacist means, correct me if I'm wrong), you will search for arguments in favor of your view and dismiss those contrary to your opinion. Have you ever seen an academic article discussing gender and dismissed it as "typical of the male-dominated academic community?" These articles might explain further: http://lesswrong.com/lw/js/the_bottom_line/ http://lesswrong.com/lw/ju/rationalization/ http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/
-2FeministX
"May I ask the moral difference between a female supremacist and a male supremacist?" What I call female supremacism does not mean that females should rule. I feel that the concept of needing a ruler is one based on male status hierarchies where an alpha rules over a group or has the highest status and most priviliges in a group. To me, female supremacism means that female social hierarchies should determine overall status differences between all people. In my mind, female social hierarchies involve less power/resource differentials between the most and least advantaged persons. A "leader' is a person who organically grows into a position of more responsibility, but this person isn't seen as better, richer, more powerful or particularly enviable. They are not seen as an authority figure to be venerated and obeyed. I associated those characteristics with male hierarchies.
5LucasSloan
I think you overestimate the differences between male and female interpretations of status. Can you provide an example of one your female social hierarchies? Also, what is a leader other than an authority figure to be obeyed?
0FeministX
" Also, what is a leader other than an authority figure to be obeyed? " In our world, that is what a leader must be. In the general human concept of an ideal world, I do not know if this is the case. I actually think that humans have some basic agreement about what an ideal world would be like. The ideal world is based on priorities from our instincts as mortal animals, but it is not subjected to the confines of natural experience. I think the concept of heaven illustrates the general human fantasy of the ideal world. I get the impression that almost everyone's concept of heaven includes that there are no rich and poor- everyone has plenty. There is no battle of the sexes, and perhaps even no gendered personalities. There is no unhappiness, pain, sickness or death. I personally think there are no humans that hold authority over other humans in heaven (to clarify, I know that a theological heaven cannot actually exist). What this means to me is that to have a more ideal world, the power differential between leaders and the led should be minimized. I understand that humans with their propensities for various follies aren't as they are necessarily suited for the ideal world they'd like to inhabit, but striving for an ideal world would to me mean that human nature would in some ways be corrected so that the ideal world became more in tune with human desires for that state. " Can you provide an example of one your female social hierarchies?" Say a nursing floor. There is such a thing as a nurse with the most authority, but the status differential between head nurse and other floor nurses is sometimes imperceptible to all but the nurses that work there. The pay difference is not that great either. Sometimes the nurse who makes the most decisions is the one that chooses to invest the most time and has the longest experience, not necessarily one who is chosen to be obeyed. This is entirely unlike a traditionaly male structure like an army where the difference between g
5DanArmak
You must not know your way around the actual heavens of the big religions (as officially described). For instance, an important and (according to many Christian theologists) necessary part of the Christian heaven is being able to view the Christian Hell and enjoy the torture of the evil sinners there. And an important part of Muslim Heaven, according to some, is a certain thing about female virgins you may have heard of. I could go on for a while in this vein if you want real examples... because I happen to have a thing for completely un-academically reading popular history of religion & thought in my free time. Really, if we're going to get into religious (historical & contemporary) conceptions of heaven, the best one-line summary I can come up with is - heaven is just like Earth ought to be according to your cleric of choice and taken to an appropriate extreme. And most people's conception of how things "ought to be" is horrible to most other people. One of the most common issues for idealists to face is that most people don't want any part of their ideal world, no matter what that ideal happens to be. If the difference is imperceptible, even to people who have experience with similar hierarchies but don't happen to work inside this one, then why is the difference at all important? Why are we even talking about such a minute difference? It sounds to me like "there are no real status hierarchies and no leader" is a pretty good summary of this situation.
1komponisto
Some think the opposite, such as the pastor of a church I attended as a child. Apparently there was concern about the knowledge of loved ones' suffering in Hell interfering with the ability to experience pleasure in Heaven, so he claimed in a sermon once that God must somehow "shield us" from that knowledge.
4LucasSloan
I was hoping for an example of a large-scale usage of your ideal. It seems to me that as social systems get larger, coordination gets more difficult, necessitating that more power be concentrated in the hands of those who lead. Much as communism can work in a small village but not on a national scale, I suspect your ideal fails at the large scale.
3wedrifid
No gendered personalities? How many people strap bombs to themselves, working themselves into a frenzy by reminding themselves of their heavenly reward of 40 androgynous virgins?
-1FeministX
But those are celestial virgins. I mean the real women that die and go to heaven. What happens to them? Perhaps they also enjoy the celestial virgins.
6spriteless
They are united with their husbands. If they were widows and had had multiple husbands, they can choose the best husband to be with. They are also, being real women and therefore superior to creatures that have never been mortal, the bosses of their husbands' 40 virgins. What, you think they didn't think of this?
4DanArmak
So... do wives try to make their husbands sin just a bit, so they don't get the 40 virgins and the wives can have them all to themselves in heaven?
0wedrifid
The husband acquiring more wives raises her status. It is not unheard of for a wife in some cultures to nag or disrespect her husband for being unable to support more wives, leaving her with a lesser role than she hoped for.
3Vladimir_Nesov
A theological heaven can actually exist, but shouldn't. See Fun Theory, and for this point in particular, Visualizing Eutopia and Eutopia is Scary.
2wedrifid
I suggest that this is to constrain the natural dynamics of leadership, not to formalise it. It saves on the killing.
0[anonymous]
Re: heaven: http://lesswrong.com/lw/y0/31_laws_of_fun/
1FeministX
And I should add that it was foolish of me to present that post, which was possibly my most biased, as an introduction to my blog. Actually, my blog gets more insightful than this. Please don't take that post, about the motivations for a visceral reaction against feminists, as indicative of what my blog is usually about, or dismiss my entire blog based on it. That particular post was designed to spur emotional reactions from a specific set of readers I have.
3saturn
Of what use is rationality, then?
2Eliezer Yudkowsky
Eh, just say "Oops" and get it over with. Excuses slow down life. Never expend effort on defending something you could just change.
0[anonymous]
It would have been a good idea to link to that thread as the inspiration for the post, if that's what's going on.
3Jack
Hi! Feel free to introduce yourself here. There are a couple of general reasons for disagreement:

1. Two parties disagree on terminal values (if someone genuinely believes that women are inherently less valuable than men, there is no reason to keep talking about gender politics).
2. Two parties disagree on intermediate values (both might value happiness, but a feminist might believe gender equality to be central to attaining happiness while the anti-feminist thinks gender equality is counterproductive to this goal. It might be difficult for parties to explain their reasoning in these matters, but it is possible).
3. Two parties disagree about the means to the end (an anti-feminist might think that feminism as a movement doesn't do a good job of promoting gender equality).
4. Two parties disagree about the intent of one or more parties (a lot of anti-feminists think feminism is a tool for advancing the interests of women exclusively and that feminists aren't really concerned with gender equality. I don't think you can say much to such people, though it is worth asking yourself why they have that impression... calling yourself a female supremacist will not help matters).
5. Two parties disagree about the facts of the status quo (if someone thinks that women aren't more oppressed than men, or that feminists exaggerate the problem, they may have exactly the same view of an ideal world as you do but very different means for getting there. This is a trickier issue than it looks because facts about oppression are really difficult to quantify. There is a common practice in anti-subordination theory of treating claims of oppression at face value, but this only works if one trusts the intentions of the person claiming to be oppressed).
6. One or more parties have incoherent views (you can point out incoherence, not much else).

I think that is more or less complete. As you can see, some disagreements can be resolved, others can't. Talk to the people you can make progress with but don
3FeministX
The discussion here helped me reanalyze my own attitude towards this kind of issue. I don't think I ever had a serious intention to back up my arguments or win a debate when I posted on the issue of why men hate feminism. I am not sure what to do when faced with the extreme anti-feminism that I commonly find on the internet. I have a number of readers on my blog who will make totalizing comments about all women or all feminists. For example, one commenter said that women have no ability to sustain interest in topics that don't pertain to relationships between individuals. Other commenters say that feminism will lead to the downfall of civilization, for reasons including that it lets women pursue their fleeting sexual impulses, which are destructive.

I suppose I do not really know how to handle this attitude. Ordinarily, I ignore them, since I operate under the assumption that people who espouse such viewpoints are not prone to being swayed by any argument. They are attached to their bias, in a sense. I am not sure if it is possible for a feminist to have a reasonable discussion with a person who is anti-feminist and hates nearly all aspects of feminism in the western world.
3ShardPhoenix
Personally I'd say you shouldn't "be a feminist" at all. Have goals (whether relating to women's rights or anything else) and try to find the best ways to reach them. Don't put a political label on yourself that will constrain your thinking and/or be socially and emotionally costly to change. Though given that you seem to have invested a lot of your identity in feminism it's probably already hard to change.
4wedrifid
Shouldn't? According to which utility function? There are plenty of advantages to taking a label.
2ShardPhoenix
Yes, there are obvious advantages to overtly identifying with some established group, but if you identify too strongly and become a capital-F Feminist (or a capital-D Democrat, or even a capital-R Rationalist) there's a real danger that conforming to the label will get in the way of actually achieving your original goals. It's analogous to the idea that you shouldn't use dark side methods in the service of rationality - i.e., that you shouldn't place too much trust in your own ability to be virtuously hypocritical.
0Larks
Advantages to outwardly signalling group loyalty, perhaps, but to internal self-identification?
2Eliezer Yudkowsky
As mentioned above, this particular person does seem unusually good at not being so constrained.
3DanArmak
It's almost certainly not possible for you to have a discussion about feminism with such a person. I haven't read your blog, but perhaps you should reconsider the kind of community of readers you're trying to build there. If you tend to attract antifeminist posters, and you don't also attract profeminist ones who help you argue your position in the comments, that sounds like a totally unproductive community and you might want to take explicit steps to remodel it, e.g. by changing your posts, controlling the allowed posters, or starting from scratch if you have to.
2CannibalSmith
Ban them.
0bogus
If these commenters are foolish enough to disparage and denigrate any political role for women generally, then do them a favor and flame them to a crisp. If that's not enough to drive them off your site, then feel free to ban them. These are thinly-veiled attempts at intimidation which are reprehensible in the extreme, and will not be taken lightly by anyone who cares seriously about any kind of politics other than mere alignment to power and privilege--which is most everyone in this day and age. Especially so when coming from people of a Western male background--who are thus embedded in a complex power structure rife with systemic biases, which discriminates against all kinds of minority groups. Simply stated, you don't have to be nice to these people. Quite the opposite, in fact. Sometimes that's all they'll understand.
1wedrifid
What exactly is an anti-feminist? I've never actually met someone who identified as one. Is this more of a label that others apply to them, and if so, what do you mean when you apply it? Is it a matter of 'Feminism, Boo!' vs 'Yay! Feminism!', or is it the objection to one (or more) ideals that are of particular import? Does 'anti-feminist' apply to beliefs about the objective state of the universe, such as the impact of certain biological differences on psychology or social dynamics? Or is it more suitably applied to normative claims about how things should be, including those about the relative status of groups or individuals?
1gwern
I think it's only applied by the feminists. Take a look at National Review, a bastion of anti-feminism if ever there was any, and notice how all the usages are by the feminists or fellow travelers or are in clear scare-quotes or other such language: http://www.google.com/search?hl=en&num=100&q=anti-feminism+anti-feminist+site%3Anationalreview.com
-2CannibalSmith
Let me be the first to say: welcome to Less Wrong! Please explore the site and stay with us - we need more girls.
5Eliezer Yudkowsky
I'd quite strongly suggest deleting everything after the hyphen, there.
1CannibalSmith
No.
0wedrifid
Verbal symbols are slippery things sometimes.
1CannibalSmith
Explain.
1wedrifid
No, at least not right now.
0RobinZ
When, if I may be so bold? (Bear in mind that it is not necessary to explain your remark in full generality - just in sufficient detail to justify its presence as a response to CannibalSmith in this instance.)
0[anonymous]
This afternoon. Pardon me. That means ~7 hours and a bit.
0RobinZ
Fair enough!
-1FeministX
Why?
2Eliezer Yudkowsky
Because advertising your lack of girls is not viewed by the average woman as a hopeful sign. (Heck, I'd think twice about any online site that advertised itself with "we need more boys".) Also, the above point should be sufficiently obvious that a potential female reader would look at that and justifiably think "This person is thinking about what they want and not thinking about how I might react" which isn't much of a hopeful sign either.
5Alicorn
I'm probably non-average, but I'm ambivalent about hearing "we need more girls" from any community that's generally interesting. The first question that I think of is "why don't they have any?", but as long as it's not obvious to me why a website doesn't presently have enough girls, and it's easy to leave if I find a compelling reason later, my obliging nature would be likely to take over. Also, saying "we need more girls" does advertise the lack of girls - but it also advertises the recognition that maybe that's not a splendid thing. Not saying it at all might signify some kind of attempt at gender-blindness, but it could also signify complacency about the ungirly ratio extant. I hear "we need more girls" from my female classmates about our philosophy department.
6RobinZ
We also hear this kind of thing online, in the atheism community. To sum up the convo, then, it seems like:

* the "too many dicks on the dance floor" attitude isn't particularly attractive, but
* the honest admission that there aren't many female regulars, and that we'd like the input of women on the issues which we care about, is perfectly valid.

The rest of it is our differing levels of charity in interpreting CannibalSmith's remarks.
1Eliezer Yudkowsky
As with so many other remarks, this carries a different freight of meaning when spoken by a woman to a woman.
2Alicorn
I think I don't hear it from my male classmates because they aren't alert to this need. I would be pleased to hear one of them acknowledge it. This may have something to do with the fact that I'd trust most of them to be motivated by something other than a desire for eye candy or dating opportunities, though, if they did express this concern.
1FeministX
"I think I don't hear it from my male classmates because they aren't alert to this need. I would be pleased to hear one of them acknowledge it." Why do you feel there is a need for more female philosophy students in your department?
4Alicorn
I think a more balanced ratio would help the professors learn to be sensitive to the different typical needs of female students (e.g. decrease reliance on the "football coach" approach). Indirectly, more female students means more female Ph.Ds means more female professors means more female philosophy role models means more female students, until ideally contemporary philosophy isn't so terribly skewed. More female students would also increase the chance that there would be more female philosophers outside the typical "soft options" (history and ethics and feminist philosophy), which would improve the reception I and other female philosophers would get when proposing ideas on non-soft topics like metaphysics because we'd no longer look atypical for the sort of person who has good ideas on metaphysics.
2DanArmak
That's what we hoped for in the physical/biological sciences. Then we discovered we had a glass ceiling problem. We have more than enough female grads now, more than half in some programs, but not enough become PhD students. Last I heard, people who were looking into solving this problem had come to a conclusion that it wouldn't just resolve itself with time and needed active intervention, in part by deliberately creating female role models. (They're trying but so far no very notable successes on the statistical level.) This is true for my university (Hebrew U of Jerusalem) and other Israeli universities, and from what I'd heard in many other parts of the Western world as well. Is your philosophy dept. different?
3Alicorn
When I talk about "my department", I mean the grad students - we don't interact very much with any but the most avid undergrads, except in the capacity of the TA/student relationship. So by saying that we need more girls, I mean we need more female Ph.D students.
2DanArmak
Oh right, sorry :-) I assume the undergrad's POV too easily because I am one. When existing grad students, who will eventually become professors, want more girls, that should be the best and most direct solution. I wish your department all success in this.
2Alicorn
There seems, thankfully, to be some new attention by the admissions people to the issue. I was the only girl admitted in my year, but this year we got two. (Also, two of the new admits were minority races, while I don't think that's the case with any but perhaps a couple of ABDs who were here already.)
1DanArmak
Out of how many total people admitted each year?
1Alicorn
I was one of five; I think this year there were seven total. Edit: Total for the whole department, we have 43 students, eight of whom are female (counting me).
1CannibalSmith
What did you think when you first saw my "we need more girls" remark?
1FeministX
I found it flattering.
0Vladimir_Nesov
Inapt.
0wedrifid
That's five divs, which means it is a reply to, let's see...
-2wedrifid
Even the bit before the hyphen sounds a little on the needy side.
-1Zack_M_Davis
And while we're at it, it should really be an em dash, not a hyphen.
0RobinZ
En dash - it's surrounded by spaces. And I don't think the reddit engine tells you how to code it. A hyphen is the accepted substitute (for the en dash - two hyphens for an em dash).
2eirenicon
An en dash is defined by its width, not the spacing around it. In fact, spacing around an em dash is permitted in some style guides. On the internet, though, the hyphen has generally taken over from the em dash (an en dash should not be used in that context). Now, two hyphens—that's a recipe for disaster if I've ever heard one.
0RobinZ
Hey, I like double-hyphens as em-dash substitutes! ...but yeah, you're right otherwise.
-5CannibalSmith

I'm working through Jaynes' /Probability Theory/ (the online version). My math has apparently gotten a bit rusty and I'm getting stuck on exercise 3.2, "probability of a full set" (Google that exact phrase for the pdf). I'd appreciate it if anyone who's been through it before, or finds this stuff easy, would drop a tiny hint, rot13'd if necessary.

V'ir pbafvqrerq jbexvat bhg gur cebonovyvgl bs "abg trggvat n shyy frg", ohg gung qbrfa'g frrz gb yrnq naljurer.

V unir jbexrq bhg gung jura z=x (gur ahzore bs qenjf = gur ahzore bs pbybef) gur shy... (read more)
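(Not part of the original comment: since the exercise statement above is truncated and partly rot13'd, here is a rough Monte Carlo sketch of the general "full set" setup as I understand it: drawing m balls without replacement from an urn with several colors and asking for the probability that every color appears. The urn composition and parameter values are made-up illustrations, and this is only meant for sanity-checking a candidate analytic answer, e.g. one derived by inclusion-exclusion.)

```python
# Hedged sketch (my own, not from Jaynes): estimate the probability that a
# sample of m balls, drawn without replacement from an urn containing
# n_per_color balls of each of k colors, contains at least one ball of
# every color.  Parameter values are illustrative only.
import random

def prob_full_set(k=5, n_per_color=10, m=12, trials=100_000):
    urn = [color for color in range(k) for _ in range(n_per_color)]
    hits = 0
    for _ in range(trials):
        draw = random.sample(urn, m)      # sampling without replacement
        if len(set(draw)) == k:           # all k colors represented?
            hits += 1
    return hits / trials

print(prob_full_set())
```

Comparing the simulated value against whatever closed form the exercise asks for is a quick way to catch algebra slips.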

1RHollerith
There already exists (an extremely low-traffic) mailing list with that mission: etjaynesstudy@yahoogroups.com Note that the objection that an existing mailing list would be populated by people who have not been exposed to Eliezer's writings on rationality does not apply here, because (1) the current population consists of only a handful of people, and (2) what I have seen of the current population over the last 3 or 4 years is that it consists mostly of a few people posting (relevant) faculty-position and conference announcements, plus experts in Bayesian statistics.
0Morendil
Thanks for the info!
0mtraven
Yes! I've been wanting a virtual place to help me learn probabilistic reasoning in general; a group focused on Jaynes would be a good start.
0Morendil
So far it seems to be only the two of us, which seems rather surprising. In probabilistic terms, I was assigning a significant probability to receiving N>>1 favorable replies to the suggestion above. I'm not sure yet how I should update on the observation of only one taker. One hypothesis is that the Open Thread isn't an effective way to float such suggestions, so I could consider a top-level post instead. Another is that all LWers are much more advanced than we are and consider Jaynes' book elementary. What other hypotheses might I be missing?
2RobinZ
That within the set of those interested in studying Jaynes, the set of those interested in doing so through a virtual book study group is small. Some people find virtual study groups ineffective. That'd be my reason for not responding.
0Morendil
OK. Who wants to study Jaynes - at all? If you find virtual study groups ineffective, then - ineffective compared to what? To study some material, two things are quite useful: access to the material, and access to someone who can help you over difficult spots in the material. Even if you intend to study alone, having the latter as an option can reasonably be expected to increase your chances. (Modulo the objection "I'll expect too much help from outside and that'll degrade my learning", which I could understand.) In this case Jaynes' book is a free PDF; on the other hand, the LW readership probably doesn't have access to a formal teacher for this material, and I'd expect occasions to meet others interested in it IRL to be fairly rare. Given all this I'd still expect more of a response than has been the case so far.
1arundelo
I'd like to someday, but unfortunately not now. :-/
0RobinZ
I'd like to study Jaynes, although it's not on the top of my priority list - and I'm under the impression that the free PDF has been taken down at the moment. Wasn't making a comparison, actually - just saying that joining a group of people online to study something hasn't actually led to me studying in the past. Ineffective compared to taking a course, I suppose.
1gwern
The PDF may've been taken down by whoever was hosting it, but it's easily found: http://omega.albany.edu:8008/JaynesBook.html for example, to say nothing of all the download sites or P2P sources you could use.
1gwern
There're more than that, over time anyway: http://tech.groups.yahoo.com/group/etjaynesstudy/ (Personally, my problem is that Jaynes is difficult, my calculus weak, and I have no particular application to study using it. It's like programming - you learn best trying to solve problems, not just trying to memorize what map is or whatever. Even though I have the book, I haven't gone past chapter 2.)
0wedrifid
I am quite interested, but I know from experience that such study would seem too much like work. I would probably stop doing it until I actually needed to expand my skills for some purpose, practical or otherwise. That being said, I would quite probably follow along with such a group and almost certainly get sucked into answering questions people posed. That changes it from 'homework' to 'curious problem someone put up and I can't resist solving'.
0mtraven
Yes, probably it deserves a top-level post, or going outside of this community and advertising more widely.
0Vladimir_Golovin
Perhaps there are more, but they just don't want to signal that they are newbies at probabilistic reasoning.
1Yorick_Newsome
I may be the only one of my kind here, but I know absolutely nothing about probabilistic reasoning (I am envious of all you Bayesians and assume you're right about everything. Down with the frequentists!); thus, I think Jaynes would be too far over my head. Maybe there's a dichotomy between philosophy / psychology / high-school Lesswrongers and computer science / physics / math Lesswrongers that makes the group of people at Jaynes level a small one.
0AdeleneDawner
You're not the only one. I'm not bad at math and logic, but very rusty, and almost completely uneducated when it comes to probabilities. (Oddly enough, the junior high school I went to did offer a probabilities course - to the students who were in the track below me. We who tested highest were given trigonometry a year earlier, instead.) You might be right about the divide, too - I'm more in the former category than the latter, for all that I'm a programmer, and it doesn't seem like I'd have much opportunity to use the math even if I took the time to learn it, so there's very little motivation for me to do so.

To resurrect the Pascal's mugging problem:

Robin Hanson has suggested penalizing the prior probability of hypotheses which argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us. Since only one in 3^^^^3 people can be in a unique position to ordain the existence of at least 3^^^^3 other people who are not symmetrically in such a situation themselves, the prior probability would be penalized by a factor on the same order as the utility. ( http://wiki.lesswrong.com/mediawiki/index.php?

... (read more)
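(Not part of the original comment: a compact way to state the penalty argument as summarized above. The symbols c and u are illustrative placeholders of mine, not anything from Hanson's proposal.)

```latex
% Hedged sketch of the leverage-penalty argument: if the prior probability of
% being able to determine the fate of N other people is penalized roughly in
% proportion to N, the expected utility of the mugger's offer stays bounded.
P\big(\text{I can determine the fate of } N \text{ others}\big) \;\le\; \frac{c}{N}
\quad\Longrightarrow\quad
\mathbb{E}[U] \;\le\; \frac{c}{N}\cdot N\,u \;=\; c\,u
```

so the expected payoff no longer grows with N, no matter how large 3^^^^3 is made.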

On the subject of creating a function/predicate able to identify a person. It seems that it is another non-localiseable function. My reasoning goes something like this.

1) We want the predicate to be able to identify paused humans (cryostasis), so that the FAI doesn't destroy them accidentally.

2) With sufficient scanning technology we could make a digital scan of a human that has the same value as a frozen head, and encrypt it with a one-time pad, making it indistinguishable from the output of /dev/random.

From 1 and 2 it follows that the AI will have to look at... (read more)
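(Not part of the original comment: a minimal sketch, using my own made-up placeholder data, of why point 2 holds. XOR-ing any bit string with a uniformly random one-time pad yields ciphertext whose bytes are themselves uniformly distributed, so no predicate that inspects only the stored bits can tell an encrypted scan from noise.)

```python
# Hedged illustration: a one-time-pad-encrypted "scan" is statistically
# indistinguishable from random bytes.  The plaintext below is an arbitrary
# stand-in for highly structured scan data.
import os
from collections import Counter

plaintext = b"highly structured data standing in for a brain scan " * 1000
pad = os.urandom(len(plaintext))                       # the one-time pad
ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))

# Byte frequencies of the ciphertext come out flat, up to sampling noise,
# just like output from the OS's random-number source.
counts = Counter(ciphertext)
expected = len(ciphertext) / 256
max_dev = max(abs(c - expected) for c in counts.values()) / expected
print(f"max relative deviation from uniform byte frequency: {max_dev:.2%}")
```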

3Jack
Poorly labeled encrypted persons may well be destroyed. I'm not sure this matters too much.
0whpearson
It depends on when the singularity occurs. It also indicates that there might be other problems. Let us say that an AI might be able to recreate some (famous) people from their work, habitation, and memories in other people, along with a thorough understanding of human biology. If an AI can do this, it should preserve as much of the human environment as possible (no turning it into computronium) until it gains that ability. However, it doesn't know which bits of the world will be useful for that purpose (hardened footprints in mud) until it has lots of computronium.
2Jack
This problem just looks like the usual question of how much of our resources should be conserved and how much should be used. There is some optimal combination of physical conservation and virtual conservation that leaves enough memory and computronium for other things. We're always deciding between immediate economic growth and long term access to resources (fossil fuels, clean air, biodiversity, fish). In this case the resource is famous person memorabilia/ human environment. But this isn't a tricky conceptual issue, just a utility calculation, and the AI will get better at making this calculation the more information it has. The only programming question is how much we value recreating famous people relative to other goods. I also don't see how this issue is indicated by the 'functional definition of a person' issue.
2Nick_Tarleton
Besides what gwern said, it could just scan and save at appropriate resolution everything that gets turned into computronium. This seems desirable even before you get into possibly reconstructing people.
1whpearson
Every qubit might be precious, so you would need more matter than the Earth to do it (if you wanted to accurately simulate things like when and how volcanoes and typhoons happened, so that the memories would be correct). Possibly the rest of the solar system would be useful as well, so you can rewind the clock on solar flares etc. I wonder what a non-disruptive biosphere scan would look like.
0gwern
If it's that concerned, it could just blast off into space, couldn't it? Might slow down development, but the hypothetical mud footprints ought to be fine... No harm done by computronium in the sun.
1whpearson
The question is: should we program it to be that concerned? The human predicate is necessary for CEV, if I remember correctly; you would want to extrapolate the volition of everyone currently informationally smeared across the planet as well as that of the more concentrated humans. I can't find the citation at the moment; I'll hunt tomorrow.
0Nick_Tarleton
I think the (non)person predicate is necessary for CEV only to avoid stomping on persons while running it. It may not be essential to try to make the initial dynamic as expansive as possible, since a less-expansive one can always output "learn enough to do a broader CEV, and do so superseding this".
0whpearson
Hmm, I think you are right. We still need to have some estimate of what it will do though so we can predict its speed somewhat.

I'm currently writing a science fiction story set around the time of the singularity. What newsworthy events might you expect in the weeks, days, or hours prior to the singularity (and in particular prior to friendly AI)?

This story is from the perspective of someone not directly involved with any research.

Example: For the purpose of the story, I'm having the FAI team release to the public a 'visualization of human morality' a few days before they go live with it.

2wedrifid
I would expect astute but untrusting parties to hit the FAI team with every weapon and resource they have at their disposal.
1blogospheroid
Singularity via FAI, intelligence explosion, right? Not much need actually happen, unless the story is set in a sim universe with a delayed-reaction god. If so, then imagine the disaster movie of your choice :-)

If it is set in our world: after the visualization-of-morality announcement, I think that for a short while there might be some typical tribalist blather. "You're not considering other cultures, etc. Eurocentric (if the team is in the anglosphere), etc."

I agree with djcb rather than wedrifid. They will not take an SIAI-like body seriously. However, if it's Stanford University or a major Chinese university, then there will be a serious reaction. Arrests and interrogations are likely; bombing of facilities, unlikely. Most humans will still believe the AI to be only a tool which they can manipulate to their ends. Little do they know! (cue evil laugh, except done for GOOD this time!)

Events in the weeks leading up:

* publishing of a complete mapping of how a human understands a concept, done via brain emulation in some other place. This yields the last piece of the puzzle for the FAI team. But to the rest of the world, it is just another cool concept. Software firms are looking at semantic web applications in 1 year tops.

Days or hours before: not sure. Anything could be happening.
1djcb
Interesting... most technology gets around only slowly, with the impact becoming clear only after a while. The biggest exception must be the atomic bomb - at least for the outside world. That's one way to think of it: what would have happened if they'd announced the A-bomb a few months in advance? Alternatively, if the development is done by some relatively unknown group, the reception may be more like what you'd get if you announced that you'd built a machine that will solve the world's energy problem -- disbelief and skepticism.

It seems there has never been a discussion here of 'Frank H. Knight's famous distinction between "risk" and "uncertainty"'. Though perhaps the issue has been addressed under another name?

[-][anonymous]00

I try to avoid the temptation of IQ tests on the internet since they make me feel cocky for the rest of the day. Anyway:

www.iqtest.dk

I feel much more awake after going through it. The only question I had trouble with was the last one. I'm sure I have the right answer, but it feels like there's more to the question that I'm missing. So here's what I got out of it (spoiler): every shape is a shape-shifting entity moving one square to the right each turn.

If someone could fill me in on the rest, that'd be great. This has been killing me for about an hour.

0Yorick_Newsome
Regarding that test, do 'real' IQ tests consist only of pattern recognition? I quite like the cryptographic and 'if a is b and b is...'-type questions found in other online IQ tests, and do well on them. I scored a good 23 points below my average on iqtest.dk, which made me feel sad.
0[anonymous]
This IQ test is trying to be fair to as many people as possible. As long as a person understands the idea behind multiple-choice questions, no one should have an upper hand. There were no verbal questions, which is good because I'm crap at those. I would think that a real IQ test wouldn't be multiple choice at all. Maybe a 1-on-1 setting... I wouldn't worry about the 23 points. This test is not perfect. On some questions, it is easy to understand the pattern and still pick the wrong answer. Also, I think fatigue plays a huge role.

Earlier stuff here. So, I thought that it could be fun to have a KGS room for all Go players reading this blog. Blueberry suggested an IGS channel. Others have shown interest. So, let's do this!

But where? IGS or KGS? Some other? I'm in favor of KGS, but all suggestions are welcome. If you're interested, post something!

3Eliezer Yudkowsky
It's been at this point before, before the helium leak thing. Let's see them collide beams at energies higher than 1 TeV (which I think is the highest beam energy heretofore). We have not yet passed the hamster point!
1timtyler
Proposed schedule says they hope to get there this year: "Before a brief shutdown of the LHC for Christmas, CERN hopes to boost the energy to 1.2 TeV per beam – exceeding the world's current top collision energies of 1 TeV per beam at the Tevatron accelerator in Batavia, Illinois. In early 2010, physicists will attempt to ramp up the energy to 3.5 TeV per beam, collect data for a few months at that energy, then push towards 5 TeV per beam in the second half of the year." * http://www.newscientist.com/article/dn18186-lhc-smashes-protons-together-for-first-time.html
0Eliezer Yudkowsky
The hamster waits. Or if 1.2 TeV isn't enough to defy the hidden limit - then we all know that collider's never coming on again after Christmas! If it does come on, of course, and destroys the world, this will disprove the anthropic principle.
0timtyler
"Big Bang Collider Sets New Record" * http://abcnews.go.com/Technology/wireStory?id=9372376 They are now up to 2.36 tera-electron volts and counting...
0Jack
For whom?
-4timtyler
Pushing back the proposed DOOM-date (when DOOM fails to materialise) is a classic trick, though. For example, Wayne Bent / Michael Travesser employs it at the very end of this documentary: "The End of The World Cult Pt.5" * http://www.youtube.com/watch?v=c4KDGgaO5Bo
0Eliezer Yudkowsky
Clearly you haven't heard about the Movable Dates of Prophecy.
0[anonymous]
This might be relevant if anyone here were seriously proposing that the LHC is likely to be catastrophic.
2CannibalSmith
http://twitter.com/CERN

Thomas Metzinger's Being No One was very highly recommended by Peter Watts in the notes to Blindsight (and I've seen similar praise elsewhere); I got a copy and I was absolutely crushed by the first chapter. What do LWers make of him?

Continuing from my discussion with whpearson because it became offtopic.

whpearson, could you expand on your values and the reasons they are that way? Can you help me understand why you'd sacrifice the life of yourself and your friends for an increased chance of survival for the rest of humanity? Do you explicitly value the survival of humanity, or just the utility functions of other humans?

Regarding science, I certainly value it a lot, but not to the extent of welcoming a war here & now just to get some useful spin-offs of military tech in another decade.

3Jack
Not directed at me, but since this is a common view... I don't think your question takes an argument as its answer. This is why: if you don't want to protect people you don't know, then you and I have different amygdalas. Whpearson can come up with reasons why we're all the same, but if you don't feel it, those reasons won't be compelling.
0DanArmak
That's just it. The amygdala is only good for protecting the people around you. It doesn't know about 'survival of humanity'. To the amygdala, a million deaths is just a statistic. Note my question for whpearson: would you kill all the people around you, friends and family, hurting them face-to-face, and finally kill yourself, if it were to increase the chance of survival of the rest of humanity? whpearson said yes, he would. But he'd be working against his amygdala to do so.
0Jack
Good to know you're not a psychopath, anyway. :-) I'm not sure that I can't generalize the experience of empathy to apply to people whose faces I can't see. They don't have to be real people, they can be stand-ins. I can picture someone terrified, in desperate need, and empathize. I know that there are and will be billions of people who experience the same thing. Now I can't succeed in empathizing with these people per se - I don't know who they are, and even if I did there would be too many.

But I can form some idea of what it would be like to stare 1,000,000,000 scared children in the eyes and tell them that they have to die because I love my family and friends more than them. Imagine doing that to one child and then doing it 999,999,999 more times. That's how I try to emotionally represent the survival of the human race. The fact that you never will have to experience this doesn't mean those children won't experience the fear.

Now you can't make actual decisions like this (weighing the experiences of inflicting both sets of pain yourself) because, if they're big decisions, thinking like this will paralyze you with despair and grief. You will get sick to your stomach. But the emotional facts should still be in the back of your mind motivating your decisions, and you should come up with ways to represent mass suffering so that you can calculate with it without having to always empathize with it. You need this kind of empathy when constructing your utility function; it just can't actually be in your utility function.
0DanArmak
Getting back to the original issue: since protecting humanity isn't necessarily driven by the amygdala and suchlike instincts, and requires all the logic & rationalization above to defend, why do you value it? From your explanation I gather that you first decided it's a good value to have, and then constructed an emotional justification to make it easier for you to have that value. But where does it come from? (Remember that as far as your subconscious is concerned, it's just a nice value to signal, since I presume you've never had to act on it - far mode thinking, if I remember the term correctly).
1Jack
Extending empathy to those whom I can't actually see just seems like the obvious thing to do since the fact that I can't see their faces doesn't appear to me to be a morally relevant feature of my situation and I know that if I could see them I would empathize. So I'm not constructing an emotional justification post hoc so much as thinking about why anyone matters to me and then applying those reasons consistently.
3whpearson
There are two possible answers to this. One is the raw emotion: it seems right in a wordless fashion. Why do people risk their lives to save an unrelated child, as firefighters do? Saving the human race from extinction seems like the epitome of this ethic. Then there is the attempt to find a rationale for this feeling - the many arguments I have had with myself to give some reason why I might feel this way, or at least why it is not a very bad idea to feel this way.

My view of identity is something like the idea of genetic relatedness. If someone made an atom-level copy of you, that'd be pretty much the same person, right? Because it shares the same beliefs, desires and viewpoint on the world. But most humans share some beliefs and desires. From my point of view, that you share some interest or way of thinking with me makes you a bit of me, and vice versa - not a large amount, but some. We are identity kin as well as all sharing lots of the same genetic code (as we do with animals). So even if I die, parts of me are in everyone, even if not as obviously as they are with my friends. We are all mental descendants of Newton and Einstein and share that heritage. Not all things about humanity (or about me) are to be cherished, so I do not preach universal love and peace. But wiping out humanity would remove all of those spread-out bits of me.

Making self-sacrifice easier is the fact that I'm not sure that my surviving as a posthuman will preserve much of my current identity. In some ways I hope it doesn't, as I am not psychologically ready for grown-up (on the cosmic scale) choices, but I wish to be. In other ways I am afraid that things of value will be lost that don't need to be. But from any view I don't think it matters that much who will become the grown-ups. So my own personal continuity through the ages does not seem as important as the survival. I think my friends would also share the same wordless emotion to save humanity, but not the odd wordy view of identity.
0DanArmak
There are two relevant differences between this and wanting to prevent the extinction of humankind. One is, as I told Jack, that emotions only work for small numbers of people you can see and interact with personally; you can't really feel the same kind of emotions about humanity. The other is that people have all kinds of irrational, suboptimal, bug-ridden heuristics for taking personal risks; for instance the firefighter might be confident in his ability to survive the fire, even though a lot of the danger doesn't depend on his actions at all. That's why I prefer to talk about incurring a certain penalty, like killing one guy to save another, rather than taking a risk.

I understand this as a useful rational model, but I confess I can't identify with this way of thinking at all on an emotional level. What importance do you attach to actually being you (the subjective thread of experience)? Would you sacrifice your life to save the lives of two atomically precise copies of you that were created a minute ago? If not two, how many? In fact, how could you decide on a precise number?

Personal continuity, in the sense of subjective experience, matters very much to me. In fact it probably matters more than the rest of the universe put together. If Omega offered me great riches and power - or designing a FAI singleton correctly, or anything I wanted - at the price of losing my subjective experience in some way (which I define to be much the same as death, on a personal level) - then I would say no. How about you?
[-][anonymous]00

"Test".

(Feel free to ignore - just looking at the way comment previews get truncated in the Recent Comments.)

What's a brief but effective way to respond to the "an AI, upon realizing that it's programmed in a way its designer didn't intend to, would reprogram itself to be like the designer intended" fallacy? (Came up here: http://xuenay.livejournal.com/325292.html?thread=1229996#t1229996 )

3Vladimir_Nesov
I hope I'm not misinterpreting again, but this is a Giant cheesecake fallacy. The problem is that an AI's decisions depend on its motive. "An AI, upon realizing that it's programmed in a way its designer didn't intend to, would try to convince the programmer that what the AI turned out to be is exactly what he intended in the first place"; "An AI, upon realizing that it's programmed in a way its designer didn't intend to, would print a string "Styggron" to the console".
0Kaj_Sotala
Thanks, that's a good one. I'll try it.
0Cyan
How about: an AI can be smart enough to realize all of those things, and it still won't change its utility function. Then link Eliezer's short story about that exact scenario. (Can't find it in two minutes, but it's the one where the dude wakes up with a construct designed to be his perfect mate, and he rejects her because she's not his wife.)
2Eliezer Yudkowsky
http://lesswrong.com/lw/xu/failed_utopia_42/

Paul Almond has written a new article, Launching anything is good: How Governments Could Promote Space Development. I don't know how realistic his proposal is, but I can't find any flagrant logical error in it.

I have a question for the members of LW who are more knowledgable than me in quantum mechanics and theories of quantum mechanics's relevance to consciousness.

There are examples of people having exactly the same conversation repeatedly (e.g. due to transient global amnesia). Is this evidence against quantum mechanics being crucial to consciousness?

4Eliezer Yudkowsky
Thermal noise dominates quantum noise anyway. I suppose it argues that if you don't depend on thermal noise then you don't depend on quantum noise either, but the Penrosian types claim it's not really random anyway.
2Nick_Tarleton
It's evidence against chaotic or random processes being important, but quantum computing needn't mean random (i.e. high variance) results; AFAIK, it can in principle be made highly predictable.
1RobinZ
Wait, I think I know what the question is, now. Yes, this thing seems to suggest that human thinking is well-approximated as deterministic - a hypothesis which matches what I've heard elsewhere. Off the top of my head:

* I once read a story about a guy being offered lunch several times in a row and accepting again and again and again in similar terms until his stomach felt "tightish".
* There was a family friend taking sleeping medication which was known to cause sleepwalking, and she had an entire phone conversation with her friend in her sleep - and then called the same friend after waking up, planning to discuss the same things.

Of course, the typical quantum-mechanical stories of consciousness are far too vague to be falsified by this or any other evidence.

Edit: As Nick Tarleton cogently points out, this is an exaggeration - it is certainly falsifiable in the way phlogiston or elan vital is falsifiable, by the production of a complete correct theory, and it is further so by e.g. uploading.
1Nick_Tarleton
They could be falsified by successful classical uploading or an ironclad argument for the impossibility of coherence in the brain (among other things); furthermore, I think most of their proponents who are actual scientists would accept such a falsification.
0RobinZ
You're right, of course - editing in a note.
0Jack
I don't think anyone holds that human behavior is always undetermined in the way particles are. The reason no one holds that view is that it would contradict the work of neuroscientists, the people, you know, actually making progress on these questions.
0RobinZ
Citations?
1Johnicholas
I can't find the link because of censorship on my work computer, but there was a description of orgasm-induced transient global amnesia that made the rounds recently. Google: orgasm transient global amnesia
0RobinZ
That's an odd phenomenon, but I don't think that it, specifically, is especially relevant to quantum mechanics' relevance to consciousness. The chief problem with the proposals that quantum mechanics is directly involved in consciousness is that they constitute mysterious answers to a mysterious question.
1Zachary_Kurtz
The only reference on google related to "transient global amnesia" and quantum is this thread (third link down).
1Johnicholas
This is the story in the news. Some may prefer the paper itself.
-1Vladimir_Nesov
I'm surprised to hear this question from you. Does this comment mean that you seriously consider this quantum consciousness woo? Why on Earth?
0Johnicholas
No, I'm just looking for solid evidence-based arguments against it that don't actually depend on me knowing lots of QM.
2Vladimir_Nesov
In that case you need killer evidence, something to take back an insane leap of privileging the hypothesis, not some vague argument around amnesia.

Does anyone know when will the 2009 Summit videos be available?

2Zachary_Kurtz
Already are! http://vimeo.com/siai
0Kutta
Oh, thank you very much!
0Zachary_Kurtz
no problem

So I got into an argument with a theist the other day, and after a while she posted this:

It's not about evidence.

Nu, talk about destroying the foundation for your own beliefs... Escher drawings, indeed.

0Jack
What did she say it was about?
0RolfAndreassen
Faith, I think.

Meetup listing in Wiki? MBlume created a great Google Calendar for meetups. How about some sort of rudimentary meetup "register" in the LW Wiki? I volunteer to help with this if people think it's a good idea. Thoughts? Objections?

ETA: The GCal is great for presenting some information, but I think something like a Wiki page might be more flexible. I'm especially curious to hear opinions from people who are organizing regular meetups, how that's going, and interest in maintaining a Wiki page.

ETA++: AndrewKemendo has a more complex, probably more us... (read more)

IBM simulates cat's whole brain... research team bores simulated cat to death showing him IBM logo... announces human whole-brain real-time simulation for 2018...

4Nick_Tarleton
Michael Anissimov deflates hype.
2spriteless
Beat me to it. Here's IBM's own press release.
0Jordan
Unfortunately they're using toy neurons. What I'd be excited to see is a high-fidelity simulation of neurons in a petri dish, even just a few hundred. There's no problem scanning the topology here; the only problem is in accurately reproducing the biophysics. Once this has been demonstrated, human WBE is just a truckload of money away from reality. Really, does anyone know of any groups working on something like this? I'd gladly throw away my current research agenda to work with them.
3Douglas_Knight
cf the nematode upload project, which looks dead. If people wanted to provide evidence that they're serious, this is what they'd do.
1Jordan
I've seen this around. It's unfortunate that it's dead. There are more confounding factors in the nematode project than with just a petri dish. You have to worry about the whole nematode if you want to verify your results. It's also harder to 'read' a single neuron in action. With a petri dish it would be possible to have an electrode in every neuron. Because the neurons are splayed out, imaging techniques might be able to yield some insight into the internal chemical states of the neurons. An uploaded nematode would be great, but an uploaded petri dish seems like a more tractable and logical first step.

A mind teaser for the stream-of-consciousness folk. Let's say one day at 6pm Omega predicts your physical state at 8pm and creates a copy of you with a state of mind identical to what it predicts for 8pm. At 9pm it kills the original you. Did your consciousness just jump back in time? When did that happen?

2Nick_Tarleton
Not sure who the "stream-of-consciousness folk" are, but I don't see any more problem with a timeless stream (we're all timeless folk, I assume) jumping backward than sideways or forward.
0wedrifid
To be consistent with the 'stream' metaphor it would seem that you must say it jumped back at 8pm. It is not too much of a stretch for a metaphorical stream to diverge into two, where one branch can be transported back in time and the other end some time later. I'm not sure if 'jump' is the ideal terminology for the transition. Either way, the whole 'stream-of-consciousness' idea seems to be stretched beyond whatever usefulness it may have had.
0DanArmak
In the stream metaphor, the consciousness still didn't jump backwards in actual time. Its stream of experience included an apparent jump in time, but that's just because its beliefs suddenly became out of sync with reality: it believed that 2 hours' worth of things had happened, but they hadn't. This isn't a shortcoming of the stream model. It's Omega's fault for messing with your brain :-) For instance, Omega isn't needed: I can do the job myself. I may not be able to correctly predict you for two hours into the future, but I can just invent two hours' history full of ridiculous things happening and edit that false memory into your brain. The end result is the same: you remember experiencing two hours that didn't happen, and then a backwards jump in time. It's no surprise that if I edit your memories, then you might remember something that contradicts a stream model, because you're remembering things that did not in fact happen.
0wedrifid
I don't agree. That is, you describe actual reality accurately, but if I am to consider consciousness a stream, then I consider this consciousness to have jumped back in time. I assert that the stream of consciousness has travelled in time to exactly the same extent that a teleported consciousness can be said to have travelled in space. Also quite close to the extent that a guy walking down a street, moving ordinarily in time and space, or standing stationary in time and space, can be considered to have a stream of consciousness at all - and for similar reasons. They don't contradict a stream model. They're just weird. Stuff with Omega in it usually is. 'Stream-of-consciousness' is a map, not the territory. If I have the right scale on a map I can draw a thousand-light-year line in seconds. From there, back in time is just math. I see no reason why splitting a stream in two, with one part jumping back in time, contradicts the model.
0DanArmak
This is just wordplay. We both agree no material or causative thing jumped backwards in time. Sure, if you define a stream of consciousness that way it can be said to have moved backwards in time, but that's just because we're overextending the metaphor. I could equally say that if I predict (or record) all of a consciousness' successive states, and then simulate them in reverse order, then that consciousness has genuine Merlin sickness.
0wedrifid
Absolutely. Wordplay seems to be the extent of Vladimir's question, at least as far as I am interested in it. Another curious question. That would be a stream of consciousness flowing back in time. Merlin sickness also has the symptom of living backwards in time, but I don't think it follows that the reverse simulation is an example of Merlin sickness. Whatever the mechanism is behind Merlin's reverse life, it appeared to result in him being able to operate quite effectively in a forward-flowing universe. At least, he usually seems to get it right by the end of the story.
0DanArmak
Your consciousness (the cloned one of the two) experiences a jump back in time, but the universe history it observes between 6 and 8 for the second time diverges from what it observed the first time, because it itself now acts differently. There's no more an actual backward jump in time than there would be in case Omega just implanted (accurate, predicted) memories of 6 through 8 pm in your brain at 6pm, without any duplications.

This post is a continuation of a discussion with Stefan Pernar - from another thread:

I think there's something to an absolute morality. Or at least, some moralities are favoured by nature over other ones - and those are the ones we are more likely to see.

That doesn't mean that there is "one true morality" - since different moral systems might be equally favoured - but rather that moral relativism is dubious - some moralities really are better than other ones.

There have been various formulations of the idea of a natural morality.

One is "goal ... (read more)

-4StefanPernar
With unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or more precisely: a utility function becomes irrational when it is intrinsically self-limiting, in the sense that it will eventually lead to one's inability to generate further utility. Thus my suggested utility function of 'ensure continued co-existence'. This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.
3wedrifid
Not really. You don't need to co-exist with anything if you out-compete them then turn their raw materials into paperclips.
-5StefanPernar
0timtyler
The fate of a maximiser depends a great deal on its strength relative to other maximisers. Its utility function is not the only issue - and maximisers with any utility function can easily be eaten by other, more powerful maximisers. If you look at biology, replicators have survived so far for billions of years with other utility functions. Do you really think biology is "ensuring continued co-existence" - rather than doing the things described in my references? If so, why do you think that? The view doesn't seem to make any sense.
-5StefanPernar
[-]Thanos-30

Millennial Challenges / Goals

What should we have accomplished by 3010?

(A long-term iteration of the Shadow Question.)

4wedrifid
Extinction or, well, just about everything.