
Open thread, September 2-8, 2013

0 Post author: David_Gerard 02 September 2013 02:07PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comments (376)

Comment author: sixes_and_sevens 02 September 2013 03:50:00PM 3 points [-]

I would like recommendations for an Android / web-based to-do list / reminder application. I was happily using Astrid until a couple of months ago, when they were bought up and mothballed by Yahoo. Something that works with minimal setup, where I essentially stick my items in a list, and it tells me when to do them.

Comment author: Metus 02 September 2013 05:20:38PM 1 point [-]

I want to tack onto this and ask for a solution that provides some privacy, that is, one where I can run my own server.

Comment author: diegocaleiro 03 September 2013 02:39:09PM 1 point [-]

Wunderlist 2 has an Android app (it only speaks English in the phone app, but it does Portuguese in the normal online version).

It puts your tasks in the cloud so you can catch up with what you wrote in other services.

I'm amazed by David Allen's GTD at the moment, so I want to recommend it, despite still being under the honeymoon effect.

Comment author: sixes_and_sevens 03 September 2013 02:52:55PM 1 point [-]

Looking into Wunderlist now.

Don't worry. I read GTD several years ago, and stole plenty of stuff from it.

Comment author: Ben_LandauTaylor 03 September 2013 09:56:48PM 0 points [-]

I've been happily using http://www.rememberthemilk.com/ to manage my GTD system. It's got a simple, intuitive interface, both on desktop and on Android. I'm not sure if it has the reminder features you're after, since that's not something I've ever wanted.

Comment author: Vladimir_Golovin 04 September 2013 01:46:15PM *  0 points [-]

I was on Astrid too. I switched to Wunderlist mostly because their import from Astrid worked correctly. Wunderlist is OK, though I can't say I'm completely satisfied with it. Its UI is laggy (on a Nexus 4!) and unreliable: for example, the auto-sync often destroys the last task I just typed in, and when I accidentally tap outside the task entry box the text I just typed is lost forever.

I'm looking at alternatives, and the one I like the most so far is Remember the Milk. Last time I tried it (probably a year ago) it was rubbish, but the latest version has a clean and fast native Android GUI and some nice extra functionality (e.g. geofencing). I'm thinking about switching, but it doesn't have import from Wunderlist, so I'll have to move about 200 tasks manually.

Comment author: twanvl 02 September 2013 04:22:19PM 4 points [-]

Are old humans better than new humans?

This seems to be a hidden assumption of cryonics / transhumanism / anti-deathism: We should do everything we can to prevent people from dying, rather than investing these resources into making more or more productive children.

Comment author: Oscar_Cunningham 02 September 2013 04:29:17PM 18 points [-]

The usual argument (which I agree with) is that "Death events have a negative utility". Once a human already exists, it's bad for them to stop existing.

Comment author: twanvl 02 September 2013 10:11:34PM 6 points [-]

So every human has a right to their continued existence. That's a good argument. Thanks.

Comment author: diegocaleiro 02 September 2013 10:30:45PM *  3 points [-]

Complement it with the fact that it costs about 800 thousand dollars to raise a mind, and an adult mind might be able to create value at rates high enough to continue existing.

Macaulay Culkin and Haley Joel Osment notwithstanding, that is a good argument against children.

Comment author: twanvl 02 September 2013 10:44:53PM 2 points [-]

Complement it with the fact that it costs about 800 thousand dollars to raise a mind, and an adult mind might be able to create value at rates high enough to continue existing.

An adult, yes. But what about the elderly? Of course this is an argument for preventing the problems of old age.

that is a good argument against children.

Is it? It just says that you should value adults over children, not that you should value children over no children. To get one of these valuable adult minds you have to start with something.

Comment author: Mestroyer 03 September 2013 03:54:37AM 1 point [-]

How does that negative utility vary over time, though? Because if it stays the same (or increases), and we know now that it's impossible to live 3^^^3 years, then the disutility from dying sooner than that is counterbalanced (or outweighed) by the averted disutility from dying later, meaning decisions come out basically the same as if you didn't disvalue death (or as if you valued it).

Comment author: Oscar_Cunningham 03 September 2013 08:54:28AM 6 points [-]

I think that part of the badness of death is the destruction of that person's accumulated experience. Thus the negative utility of death does indeed increase over time. However this is counterbalanced by the positive utility of their continued existence. If someone lives to 70 rather than 50 then we're happy because the 20 extra years of life were worth more than the worsening of the death event.

Comment author: Mestroyer 03 September 2013 10:56:51PM *  0 points [-]

So if Bob is cryopreserved, and I can res him for N dollars, or create a simulation of a new person and run them quickly enough to catch up a number of years equal to Bob's age at death, for N - 1 dollars, I should spend all available dollars on the latter?

Edit: to clarify why I think this is implied by your answer, what this trade does is incur a death at Bob's current age, but gain a life of experience up to Bob's current age. If a life ending at Bob's current age is net utility positive, this has to be net utility positive too.
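
One compact way to write that trade (my own notation, assuming utilities simply add): let U(A) be the utility of a life lived up to age A and D(A) the disutility of a death at age A. Relative to resurrecting Bob, the swap adds one such life while leaving one such death in place, so

\[ \Delta = U(A) - D(A) > 0 \iff \text{a life ending at age } A \text{ is net-positive.} \]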

Comment author: drethelin 03 September 2013 11:03:08PM 2 points [-]

Broadly: yes, though "all available dollars" is actually all available dollars (for making people), and you're ignoring considerations like keeping promises to people unable to enforce them, such as the cryopreserved or asleep or unconscious, etc.

Comment author: drethelin 02 September 2013 05:05:53PM -1 points [-]

Yes.

Comment author: twanvl 02 September 2013 10:09:40PM 1 point [-]

Because?

Comment author: drethelin 02 September 2013 10:44:10PM 5 points [-]

A level 5 character is more valuable than a level 1 character.

A person who is older has more to give the world and has been more invested in than a baby. They're a lot less replaceable.

Also, I like 'em more.

Comment author: Izeinwinter 02 September 2013 07:04:33PM 7 points [-]

Existing people take priority over theoretical people. Infinitely so. This should be obvious, as the reverse conclusion ends up with utter absurdities of the "Every sperm is sacred" variety.

Mad grin

Once a child is born, it has as much claim on our consideration as every other person in our light cone, but there is no obligation to have children. Not any specific child, nor any at all. Reject this axiom and you might as well commit suicide over the guilt of the billions of potential children you could have that are never going to be born. Right now.

Even if you stay pregnant till you die/never masturbate, this would effectively not help at all - each conception moves one potential from the space of "could be" to the space of "is", but at the same time eliminates at least several hundred million other potential children from the possibility space - that is just how human reproduction works.

TL;DR: yes, yes they are. It is a silly question.

Comment author: twanvl 02 September 2013 10:09:22PM 5 points [-]

Existing people take priority over theoretical people. Infinitely so.

Does this mean that I am free to build a doomsday weapon that, 100 years from now, kills everyone born after September 4th 2013, if that gets me a cookie?

This should be obvious, as the reverse conclusion ends up with utter absurdities of the "Every sperm is sacred" variety.

Not necessarily. It would merely be your obligation to have as many children as possible, while still ensuring that they are healthy and well cared for. At some point having an extra child will make all your children less well off.

Once a child is born, it has as much claim on our consideration as every other person in our light cone

Why is there a threshold at birth? I agree that it is a convenient point, but it is arbitrary.

Reject this axiom and you might as well commit suicide over the guilt of the billions of potentials children you could have that are never going to be born.

Why should I commit suicide? That reduces the number of people. It would be much better to start having children. (Note that I am not saying that this is my utility function).

Comment author: Eliezer_Yudkowsky 02 September 2013 11:21:55PM 3 points [-]

The "infinitely so" part seems wrong, but the idea is that 4D histories which include a sentient being coming into existence, and then dying, are dispreferred to 4D world-histories in which that sentient being continues. Since the latter type of such histories may not be available, we specify that continuing for a billion years and then halting is greatly preferable to continuing for 10 years then halting. Our degree of preference for such is substantially greater than the degree to which we feel morally obligated to create more people, especially people who shall themselves be doomed to short lives.

Comment author: Alejandro1 03 September 2013 05:31:32AM 2 points [-]

The switch from consequentialist language ("4D histories which include… are dispreferred") to deontological language ("…the degree to which we feel morally obligated to create more people") is confusing. I agree that saving the lives of existing people is a stronger moral imperative than creating new ones, at the level of deontological rules and virtuous conduct which is a large part of everyday human moral reasoning. I am much less clear that when evaluating 4D histories I assign higher utility to one with few people living long lives than to one with more people living shorter lives. Actually, I tend towards the opposite intuition, preferring a world with more people who live shorter lives (as long as their lives are still well worth living, etc.)

Comment author: Eliezer_Yudkowsky 02 September 2013 11:23:38PM 12 points [-]

Assuming Rawls's veil of ignorance, I would prefer to be randomly born in a world where a trillion people lead billion-year lifespans than one in which a quadrillion people lead million-year lifespans.

Comment author: Alejandro1 03 September 2013 03:02:11AM *  9 points [-]

I agree, but is this the right comparison? Isn't this framing obscuring the fact that in the trillion-people world, you are much less likely to be born in the first place, in some sense?

Let us try this framing instead: Assume there are a very large number Z of possible different human "persons" (e.g. given by combinatorics on genes and formative experiences). There is a Rawlsian chance of 1/Z that a newly created human will be "you". Behind the veil of ignorance, do you prefer the world to be one with X people living N years (where your chance of being born is X/Z) or the one with 10X people living N/10 years (where your chance of being born is 10X/Z)?

I am not sure this is the right intuition pump, but it seems to capture an aspect of the problem that yours leaves out.

Comment author: [deleted] 04 September 2013 08:25:51PM 4 points [-]

I agree, but is this the right comparison? Isn't this framing obscuring the fact that in the trillion-people world, you are much less likely to be born in the first place, in some sense?

Rawls's veil of ignorance + self-sampling assumption = average utilitarianism, Rawls's veil of ignorance + self-indication assumption = total utilitarianism (so to speak)? I had already kind-of noticed that, but hadn't given much thought to it.
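
A rough sketch of that correspondence, using Alejandro1's framing above (my simplifying assumptions: Z possible persons, a candidate world W with X_W people who each live N_W years, and a constant utility u per life-year):

\[ \mathbb{E}_{\mathrm{SIA}}[U] \propto \frac{X_W}{Z} \, N_W u \propto X_W N_W u \quad \text{(ranks worlds by total utility)} \]
\[ \mathbb{E}_{\mathrm{SSA}}[U] = N_W u \quad \text{(conditions on existing, so it ranks worlds by the average person's utility)} \]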

Comment author: Mestroyer 03 September 2013 03:46:29AM 5 points [-]

Doesn't Rawls's veil of ignorance prove too much here though? If both worlds would exist anyway, I'd rather be born into a world where a million people lived 101 year lifetimes than a world where 3^^^3 people lived 100 year lifetimes.

Comment author: ShardPhoenix 03 September 2013 04:55:53AM *  0 points [-]

Would you? A million probably isn't enough to sustain a modern economy, for example. (Although in the 3^^^3 case it depends on the assumed density since we can only fit a negligible fraction of that many people into our visible universe).

Comment author: Mestroyer 03 September 2013 05:01:38AM 4 points [-]

If the economies would be the same, then yes. Don't fight the hypothetical.

Comment author: ShardPhoenix 03 September 2013 11:25:04PM 1 point [-]

I think "fighting the hypothetical" is justified in cases where the necessary assumptions are misleadingly inaccurate - which I think is the case here.

Comment author: Creutzer 04 September 2013 05:40:13AM 3 points [-]

But compared to 3^^^3, it doesn't matter whether it's a million people, a billion, or a trillion. You can certainly find a number that is sufficient to sustain an economy and is still vastly smaller than 3^^^3, and you will end up preferring the smaller number for a single additional year of lifespan. Of course, for Rawls, this is a feature, not a bug.

Comment author: TrE 03 September 2013 05:47:05PM *  1 point [-]

So then, Rawls's veil has to be modified such that you are randomly chosen to be one of a quadrillion people. In scenario A, you live a million years. In scenario B, one trillion people live for one billion years each, the rest are fertilized eggs which for some reason don't develop.

I'd still choose B over A.

Comment author: Alsadius 04 September 2013 07:52:23PM 0 points [-]

Death isn't just a negative for the dead person - it also causes paperwork and expenses, destruction of relationships, and grief among the living.

Comment author: MugaSofer 04 September 2013 07:57:58PM 1 point [-]

This is true, but in my experience it's usually used to massage models that don't consider death a disutility into giving the right answers. In fact, I can't think of ever hearing this argument used for any other reason in meatspace.

(Replying to this comment out of context on the Recent Comments.)

Comment author: Alsadius 04 September 2013 08:25:38PM 0 points [-]

The context is someone asking whether it's better to stop existing people from dying or just make new people.

Comment author: [deleted] 04 September 2013 08:32:17PM 0 points [-]

If by “old humans” you mean healthy adults, yes. If you mean this, no. (IMO -- YMMV.)

Comment author: Desrtopa 02 September 2013 05:05:45PM *  4 points [-]

The following query is sexual in nature, and is rot13'ed for the sake of those who would either prefer not to encounter this sort of content on Less Wrong, or would prefer not to recall information of such nature about my private life in future interactions.

V nz pheeragyl va n eryngvbafuvc jvgu n jbzna jub vf fvtavsvpnagyl zber frkhnyyl rkcrevraprq guna V nz. Juvyr fur cerfragyl engrf bhe frk nf "njrfbzr," vg vf abg lrg ng gur yriry bs "orfg rire," juvpu V ubcr gb erpgvsl.

Sbe pynevsvpngvba, V jbhyq fnl gung gur trareny urnygu naq fgnovyvgl bs bhe eryngvbafuvc vf rkgerzryl uvtu; guvf chefhvg vf n znggre bs erperngvba naq crefbany cevqr, abg n arprffnel vagreiragvba gb fnir gur eryngvbafuvc.

V'ir nyernql frnepurq bayvar sbe nyy gur vasbezngvba V pna svaq ba vzcebivat gur dhnyvgl bs frk, ohg fhpu vasbezngvba vf birejuryzvatyl rvgure gnetrgrq ng oevatvat crbcyr jvgu fbzr frevbhf qrsvpvrapl va gurve frk yvirf hc gb gur yriry bs abeznyvgl be fngvfslvat gurz gung gur abez vf yrff fcrpgnphyne guna gurl guvax naq gurl qba'g unir gb yvir hc gb vasyngrq fgnaqneqf, engure guna crbcyr gelvat gb npuvrir frk jnl bhg ba gur sne raq bs gur oryy pheir, be gnetrgrq ng crbcyr jub jbhyqa'g xabj jung "rzcvevpny onpxvat" jnf vs lbh uvg gurz va gur snpr jvgu vg.

V'z nyernql snzvyvne jvgu gur zbfg boivbhf, ybj unatvat sehvg vagreiragvbaf fhpu nf "pbageby fgerff," "qb xrtryf," rgp, naq jr pbzzhavpngr nobhg bhe frkhny cersreraprf naq npgvivgvrf rkgrafviryl. V'z nyfb nppbhagvat sbe snpgbef fhpu nf gur rzbgvbany pbagrkg bs bhe rapbhagref naq ubezbany plpyrf. Jung V'z ybbxvat sbe ng guvf cbvag ner rkprcgvbany zrnfherf sbe guvatf yvxr envfvat zl frkhny fgnzvan gb hahfhny yriryf, vapernfvat ure yriry bs nebhfny naq/be frafvgvivgl, naq fb sbegu. Obgu purzvpny naq abapurzvpny zrnfherf ner npprcgnoyr, ohg V jbhyq yvxr gb nibvq nalguvat yvxryl gb pneel qnatrebhf fvqr rssrpgf, naq vs cbffvoyr V jbhyq cersre abg gb erfbeg gb guvatf gung jbhyq erdhver zr gb trg n cerfpevcgvba sebz n qbpgbe juvyr qvfpybfvat gung V'z hfvat vg ba n cheryl erperngvbany onfvf.

Nal nqivpr jbhyq or nccerpvngrq.

Comment author: Locaha 02 September 2013 05:18:55PM 4 points [-]

To be honest, you sound a bit like a person who made a billion dollars and now tries to crowd-source a way to make ten billion. :-)

Comment author: Desrtopa 02 September 2013 07:22:41PM 14 points [-]

Well, I'm flattered that you think my position is so enviable, but I also think this would be a pretty reasonable course of action for someone who made a billion dollars.

Comment author: drethelin 02 September 2013 05:21:17PM 1 point [-]

Practice makes perfect. I think a lot of good sex is intuitively reading your partner's signals and ramping things up/down with good timing in response to them. I think this is something you might be able to learn via logos but I think it's much more likely to be something you need to experience before you can get good at it. When to pull hair, when to thrust deeper, etc.

In general I and whoever I'm with have had more fun when I felt I had a good idea of what they wanted in the moment, which I think I've gotten better at mainly through practice.

Comment author: Desrtopa 02 September 2013 05:28:13PM 1 point [-]

I suspect that I can continue to improve with practice, but I'd like to be able to set out every option available to me on the table.

Even if I can attain the status of "best" without taking such extraordinary measures, this is something I'm genuinely competitive on, which at least to me means that simply taking first place isn't sufficient if I can still see avenues to top myself.

Comment author: NancyLebovitz 02 September 2013 05:41:39PM *  1 point [-]

Slow Sex seems to help at least some people move from good to great.

Comment author: Desrtopa 02 September 2013 05:48:35PM 1 point [-]

Does that entail sex literally done slowly? We could try it out, but that doesn't seem to match her preferences.

Comment author: NancyLebovitz 02 September 2013 06:16:15PM 0 points [-]

It involves learning to pay more attention as a meditative practice, but not (I think) a recommendation to always go slowly.

Comment author: bbleeker 03 September 2013 06:48:11AM *  0 points [-]

This book pbhyq uryc jvgu gur fgnzvan. Vg jbexrq sbe zl uhfonaq, jura ur gevrq vg n srj lrnef ntb.

Comment author: Desrtopa 03 September 2013 09:35:22PM 0 points [-]

Are the instructions anything simple enough that I could replicate them without needing to buy the entire book?

Comment author: bbleeker 04 September 2013 11:33:32AM *  -1 points [-]

Maybe, but then I'd have to read it to find out, and I have many other books I'd like to read. Maybe you can find it in the library?

Comment author: Desrtopa 04 September 2013 02:04:28PM 0 points [-]

I'll check; I'm pretty sure my own library doesn't have a Sex section, but it might be in network.

Asking to order it would be pretty embarrassing, I have to admit, especially at my own library where a lot of the people who work there know me by name.

Comment author: Douglas_Knight 04 September 2013 05:17:06PM *  0 points [-]

If you're too cheap to spend $4 at amazon, pirate it.

Comment author: khafra 05 September 2013 12:51:31PM 0 points [-]

Dewey Decimal number 613.96, IIRC from my internet-deprived adolescence.

Comment author: Adele_L 02 September 2013 05:17:43PM 31 points [-]

There has recently been some speculation that life started on Mars, and then got blasted to Earth by an asteroid or something. Molybdenum is very important to life (eukaryote evolution was delayed by 2 billion years because it was unavailable), and the origin of life is easier to explain if Molybdenum is available. The problem is that Molybdenum wasn't available in the right time frame on Earth, but it was on Mars.

Anyway, assuming this speculation is true, Mars had the best conditions for starting life, but Earth had the best conditions for life existing, and it is unlikely conscious life would have evolved without either of these planets being the way they are. Thus, this could be another part of the Great Filter.

Side note: I find it amusing that Molybdenum is very important in the origin/evolution of life, and is also element 42.

Comment author: Will_Newsome 03 September 2013 03:38:13AM 0 points [-]

(Are you Adele Lack Cotard?)

Comment author: Adele_L 03 September 2013 04:04:29AM 0 points [-]

Like in Synecdoche, New York? No... it is an abbreviation of my real name.

Comment author: curiousepic 05 September 2013 06:13:41PM *  3 points [-]

As someone pointed out to me when I mentioned this to them, to be a candidate for the Great Filter there would need to be something intrinsic about how planets are formed that causes these two types of environments to be mutually exclusive; otherwise it seems like there isn't a sufficient reduction in the probability of their availability. Is this actually the case? Perhaps user:CellBioGuy can elucidate.

Comment author: Metus 02 September 2013 05:26:22PM 14 points [-]

The ancient Stoics apparently had a lot of techniques for habituation and changing cognitive processes. Some of those live on in the form of modern CBT. One of the techniques is to write a personal handbook with advice and sayings to carry around at all times so as to never be without guidance from a calmer self. Indeed, Epictetus advises learning this handbook by rote to further internalisation. So I plan to write such a handbook for myself, once in long form with anything relevant to my life and lifestyle, and once in a short form that I update with things that are difficult at that time, be it strong feelings or being deluded by some biases.

In this book I intend to include a list of all known cognitive biases and logical fallacies. I know that some biases are mitigated by simply knowing about them; does anyone have a list of those? And should I complete the books, or at least have a clear concept of their contents, would you be interested in reading about the process of creating one and the perceived benefits?

Comment author: palladias 02 September 2013 07:05:40PM 7 points [-]

I'm also interested in hearing from you again about this project if you decide to not complete it. Rock on, negative data!

Comment author: Metus 02 September 2013 07:27:16PM 0 points [-]

Though lack of motivation or laziness is not a particularly interesting answer.

Comment author: Vaniver 02 September 2013 10:38:28PM 6 points [-]

Though lack of motivation or laziness is not a particularly interesting answer.

I have found "I thought X would be awesome, and then on doing X realized that the costs were larger than the benefits" to be useful information for myself and others. (If your laziness isn't well modeled by that, that's also valuable information for you.)

Comment author: Metus 02 September 2013 05:36:51PM 2 points [-]

Is the layout weird for anyone else? The thread titles are more spaced out, like three times as much. Maybe something broke during my last Firefox upgrade.

Comment author: Username 03 September 2013 08:33:51PM *  1 point [-]

Site layout hasn't changed for me. Chrome on Windows and Safari on iPhone.

Comment author: Jayson_Virissimo 04 September 2013 04:49:25AM *  1 point [-]

It looks fine on Safari for the iPhone.

Comment author: gwern 02 September 2013 05:37:05PM *  9 points [-]

To maybe help others out and solve the trust bootstrapping involved, I'm offering for sale <=1 bitcoin at the current Bitstamp price (without the usual premium) in exchange for Paypal dollars to any LWer with at least 300 net karma. (I would prefer if you register with #bitcoin-otc, but that's not necessary.) Contact me on Freenode as gwern.

EDIT: as of 9 September 2013, I have sold to 2 LWers.

Comment author: linkhyrule5 06 September 2013 01:34:52AM 3 points [-]

Pardon me, but - what is the trust bootstrapping involved?

Comment author: gwern 06 September 2013 02:25:04AM 4 points [-]

Paypal allows clawbacks for months, hence it's difficult to sell for Paypal to anyone who is not already in the -otc web of trust; but by restricting sales to high-karma LWers, I am putting their reputation here at risk if they scam me, which enables me to sell to them. Hence, they can acquire bitcoins & get bootstrapped into the -otc web of trust based on LW.

Comment author: iDante 02 September 2013 06:31:49PM *  3 points [-]

I learned about Egan's Law, and I'm pretty sure it's a less-precise restatement of the correspondence principle. Anyone have any thoughts on that similarity?

Comment author: gwern 02 September 2013 06:38:04PM 6 points [-]

The term is also used more generally, to represent the idea that a new theory should reproduce the results of older well-established theories in those domains where the old theories work.

Sounds good to me, although that's not what I would have guessed from a name like 'correspondence principle'.

Comment author: shminux 02 September 2013 08:54:58PM 2 points [-]

I suppose some minor difference is that this "law" is also applicable to meta-ethics, not just to physics. It's probably worth adding a link to the standard terminology to the LW wiki page.

Comment author: Adele_L 02 September 2013 08:37:16PM 4 points [-]

Is there a good way to avoid HPMOR spoilers on prediction book?

Comment author: gwern 02 September 2013 08:56:07PM 5 points [-]

Since PB users' calibrations are not yet good enough to see the future, you can easily avoid MoR spoilers by subscribing to the email or RSS alerts for new chapters & reading them as appropriate.

Comment author: Adele_L 03 September 2013 04:08:46AM 3 points [-]

This is the obvious solution, but I want to reread what I've currently read, and have some time to think about the story and try creating an accurate causal model of events and such in the story as I read new!Adele material (Eliezer says it's supposed to be a solvable puzzle). I don't have time to do this right now, so in the meantime, I try to avoid spoilers.

Comment author: Jayson_Virissimo 02 September 2013 09:57:56PM *  2 points [-]

Is there a good way to avoid HPMOR spoilers on prediction book?

If you are skilled in the art of Ruby, then yes. Otherwise, maybe. People (myself included) have been complaining about the lack of tagging/sorting system on PB for quite some time, but so far, no one has played the hero.

Comment author: Douglas_Knight 04 September 2013 06:28:02AM *  2 points [-]

I used feed43 to create an rss feed out of recent predictions. Then I used feedrinse to filter out references to hpmor resulting in a safe feed. (Update: chaining unreliable services makes something even less reliable.)

You could do the same for the pages of recently judged or future or users you follow. I think feedrinse offers to merge feeds (into a "channel") before or after doing the filtering. But if you find someone new and just want to click on the username, you'll leave the safe zone. Even if you see someone you have processed, the username will take you to the unsafe page.

A better solution would be to write a greasemonkey script that modified each predictionbook page as you look at it.

The final feedrinse feed works in a couple of my browsers, but not Chrome. Probably sending it through feedburner would fix it.

feed43 was finicky. The item search pattern was:
<li class="prediction{%}">{_}<p>{_}<span class='title'><a href="{%}">{%}</a></span>{%}</li>

The regexp I used in feedrinse was /hp.?mor/
It is case insensitive and manages to eliminate "HP MoR:", "[HPMOR]", etc. It won't work if they spell it out, or just predict "Harry is orange" without indicating which story they're predicting about. In that case, someone will probably leave a hpmor comment, but this doesn't see such comments.
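
For what it's worth, a minimal local alternative to chaining feed43 and feedrinse, sketched in Python. The feedparser library and the feed URL are my assumptions (the URL is a placeholder, not taken from anything above); the regexp is the same one quoted above, and it has the same blind spots (a prediction that spells the title out, or just says "Harry is orange", slips through).

import re
import feedparser  # assumed third-party library

HPMOR = re.compile(r"hp.?mor", re.IGNORECASE)  # same pattern as above

def safe_entries(feed_url):
    # Keep only entries whose title and summary don't mention HPMoR.
    feed = feedparser.parse(feed_url)
    return [e for e in feed.entries
            if not HPMOR.search(e.get("title", "") + " " + e.get("summary", ""))]

# Hypothetical usage -- the feed URL is a placeholder:
# for entry in safe_entries("http://predictionbook.com/predictions.rss"):
#     print(entry.title)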

Comment author: lsparrish 02 September 2013 09:36:13PM *  8 points [-]

Abstract

What makes money essential for the functioning of modern society? Through an experiment, we present evidence for the existence of a relevant behavioral dimension in addition to the standard theoretical arguments. Subjects faced repeated opportunities to help an anonymous counterpart who changed over time. Cooperation required trusting that help given to a stranger today would be returned by a stranger in the future. Cooperation levels declined when going from small to large groups of strangers, even if monitoring and payoffs from cooperation were invariant to group size. We then introduced intrinsically worthless tokens. Tokens endogenously became money: subjects took to reward help with a token and to demand a token in exchange for help. Subjects trusted that strangers would return help for a token. Cooperation levels remained stable as the groups grew larger. In all conditions, full cooperation was possible through a social norm of decentralized enforcement, without using tokens. This turned out to be especially demanding in large groups. Lack of trust among strangers thus made money behaviorally essential. To explain these results, we developed an evolutionary model. When behavior in society is heterogeneous, cooperation collapses without tokens. In contrast, the use of tokens makes cooperation evolutionarily stable.

http://www.pnas.org/content/early/2013/08/21/1301888110

Comment author: tut 03 September 2013 12:22:27PM 9 points [-]

Does this also work with macaques, crows or some other animals that can be taught to use money, but didn't grow up in a society where this kind of money use is taken for granted?

Comment author: Alsadius 04 September 2013 08:06:26PM 1 point [-]

Not strictly the same, but there have been monkey money experiments. And the results are hilarious. www.zmescience.com/research/how-scientists-tught-monkeys-the-concept-of-money-not-long-after-the-first-prostitute-monkey-appeared/

Comment author: Gunnar_Zarncke 02 September 2013 10:03:17PM 1 point [-]

Just had a discussion with my in-law about the singularity. He's a physicist and his immediate response was: There are no singularities. They appear mathematically all the time, and it only means that there is another effect taking over. Correspondingly, a quick Google search brought up this:

http://www.askamathematician.com/2012/09/q-what-are-singularities-do-they-exist-in-nature/

So my question is: What are the 'obvious' candidates for limits that take over before everything optimizable is optimized by runaway technology?

Comment author: Vaniver 02 September 2013 10:43:14PM 0 points [-]

So my question is: What are the 'obvious' candidates for limits that take over before everything optimizable is optimized by runaway technology?

There aren't any that I'm aware of, except for "a disaster happens and everyone dies," but that's bad luck, not a hard limit. I would respond with something along the lines of "exponential growth can't continue forever, but where it levels out has huge implications for what life will look like, and it seems likely it will level out far above our current level, rather than just above our current level."

Comment author: CellBioGuy 02 September 2013 11:43:33PM 6 points [-]

Lack of cheap energy.

Ecological disruption.

Diminishing returns of computation.

Diminishing returns of engineering.

Inability to precisely manipulate matter below certain size thresholds.

All sorts of 'boring' engineering issues by which things that get more and more complicated get harder and harder faster than their benefits increase.

Comment author: Adele_L 03 September 2013 12:22:05AM 14 points [-]

On LW, 'singularity' does not refer to a mathematical singularity, and does not involve or require physical infinities of any kind. See Yudkowsky's post on the three major meanings of the term singularity. This may resolve your physicist friend's disagreement. In any case, it is good to be clear about what exactly is meant.

Comment author: diegocaleiro 02 September 2013 10:28:01PM 1 point [-]

Fighting (in the sense of arguing loudly, as well as showing physical strength or using it) seems to be bad the vast majority of the time.

When is fighting good? When does fighting lead you to Win, TDT style (which instances of input should trigger the fighting instinct and pay off well)?

There is an SSA argument to be made for fighting, in that taller people are stronger, stronger people are dominant, and bigger skulls correlate with intelligence. But it seems to me that this factor alone is far, far from being a sufficient justification for fighting, given the possible consequences.

Comment author: drethelin 02 September 2013 10:37:27PM 1 point [-]

Fighting makes a lot more sense in a tribe or in small groups/individuals of humans than it does now. A big argument with someone now will very rarely keep you from starving and will probably never get you a child. On the other hand, showing dominance in a situation where the women around you are choosing a mate out of 5 guys will get you laid a lot more.

Comment author: diegocaleiro 02 September 2013 10:47:37PM *  0 points [-]

I haven't seen people who can get laid frequently getting into dominance disputes/fights.

There is a distinction between dominance which is assertive and aversive, and prestige, which is recognized and non-aversive.

Guys like Keanu Reeves, Tom Cruise, Brad Pitt have prestige which gets them (potentially) laid.

Women have more reason to be attracted to a man if he is universally recognized to be awesome, than if he is all the time showing his power through small agonistic interactions with other people - males and females.

If Caesar had been universally prestigious instead of agonistically powerful, Brutus wouldn't have had reason to kill him, leaving an unassisted widow and children.

Comment author: wedrifid 02 September 2013 11:07:26PM *  3 points [-]

I haven't seen people who can get laid frequently getting into dominance disputes/fights.

I agree with your central point but I think this claim is something of an overstatement (since I don't wish to accuse you of being sheltered). Crudely speaking, it tends to be sexier to win without fighting than to fight and win, but fighting (social status battles) and winning is still more than sufficiently sexy.

I also note that it is hard to become the kind of person who does not need to engage in any dominance disputes and still maintain high social status without engaging in many dominance disputes on the way. To a certain extent the process can be munchkined, since much of the record of who is dominant is stored in the individual, but some actual dominance disputes will still be inevitable.

Comment author: diegocaleiro 03 September 2013 02:43:20PM 1 point [-]

Yes, also keep in mind that human cognition related to hierarchies of prestige and dominance is flexible enough that it may be worth more to step up in a different hierarchy than try to save yourself in this one by agonistic dispute. We don't have the problem of being "stuck" with the same group forever, which facilitates a lot.

Comment author: Lumifer 03 September 2013 04:49:57PM 0 points [-]

I haven't seen people who can get laid frequently getting into dominance disputes/fights.

To put it crudely, alpha males very rarely get into dominance fights because part of being an alpha male is being acknowledged as an alpha male.

Betas and gammas status-fight more often since their position on the ladder is less stable.

A large part of having status is not having to constantly prove it.

Comment author: ChristianKl 02 September 2013 11:01:59PM 2 points [-]

If everyone agrees about how power is distributed, fighting is unnecessary.

Fighting can be necessary when another person claims to have power that they actually don't have.

Comment author: Emile 04 September 2013 12:49:38PM 0 points [-]

If everyone agrees about how power is distributed, fighting is unnecessary.

Surely it's in nearly everyone's interest to have more power distributed to themselves!

But while fighting to get more power may have positive utility for oneself, it usually has negative utility for others, so it's in everybody's interest that everybody agrees not to fight for more power. This agreement can take the form of alternative ways of getting power (elections, money), or making power less important to one's happiness (the rule of law).

Comment author: ChristianKl 05 September 2013 12:24:10PM 0 points [-]

But while fighting to get more power may have positive utility for oneself, it usually has negative utility for others, so it's in everybody's interest that everybody agrees not to fight for more power.

If you don't have enough power to win a fight, fighting also has negative utility for yourself. If everyone predicts that you would win a fight, you usually don't actually have to fight it to get what you want.

Comment author: blashimov 06 September 2013 01:44:28AM 3 points [-]

Fighting has a huge signalling component: when viewed in isolation, a fight might be trivially, obviously, a net negative for both participants. However, either (or both!) participants might in the future win more concessions from their demonstrated willingness to fight than they lost in the fight itself. As humans are adaptation executers, a certain willingness to fight, to seek revenge, etc. is pretty common. At least, this seems to be the dominant theory, and it is sensible to me.

Comment author: wedrifid 02 September 2013 11:24:06PM 0 points [-]

When is fighting good? When does fighting lead you to Win TDT style (which instances of input should trigger the fighting instinct and payoff well?)

Or even just CDT style. Human interaction is approximately an iterated prisoner's dilemma without a fixed duration. Reputation concerns are sufficient to account for most of the (perceived and actual) benefit among humans. Then more can be attributed to ethical inhibitions on the 'pride' ethic.
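
To make the "iterated, without fixed duration" point concrete, a toy calculation (stock textbook payoffs, my own numbers, nothing from the comment above): with continuation probability delta, staying cooperative beats a one-shot defection once delta is high enough, which is the standard reason reputation can carry most of the weight.

# R = mutual cooperation, T = temptation to defect, P = mutual punishment
R, T, P = 3, 5, 1
delta = 0.9  # probability the interaction continues for another round

cooperate_forever = R / (1 - delta)              # keep cooperating (e.g. grim trigger)
defect_once       = T + delta * P / (1 - delta)  # one-shot gain, then mutual punishment

print(cooperate_forever, defect_once)  # 30.0 vs 14.0 -> cooperation wins at delta = 0.9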

Comment author: mare-of-night 03 September 2013 01:44:41AM 3 points [-]

I recently realized that I have something to protect (or perhaps a smaller version of the same concept). I also realized that I've been spending too much time thinking about solutions that should have been obviously unworkable. And I've been avoiding thinking about the real root problem because it was too scary, and working on peripheral things instead.

Does anyone have any advice for me? In particular, being able to think about the problem without getting so scared of it would be helpful.

Comment author: ChristianKl 03 September 2013 01:12:51PM 2 points [-]

Talk about it with other people. Ask a good friend to sit down with you and listen to you talking about the issue.

Comment author: Emily 03 September 2013 08:47:13AM 2 points [-]

Has anyone got a recommendation for a nice RSS reader? Ideally I'm looking for one that runs on the desktop rather than in-browser (I'm running Ubuntu). I still haven't found a replacement that I like for Lightread for Google Reader.

Comment author: mstevens 03 September 2013 11:50:25AM 1 point [-]

I used to like liferea, but I don't have an up to date opinion as I switched to non-desktop RSS reading options.

Comment author: Emily 03 September 2013 12:47:40PM 0 points [-]

Thanks! Will try it.

Comment author: diegocaleiro 03 September 2013 02:47:33PM 13 points [-]

(mild exaggeration) Has anyone else transitioned from "I only read Main posts" to "I nearly only read discussion posts" to "actually, I'll just take a look at the open thread and people who responded to what I wrote" during their interactions with LW?

To be more specific, is there a relevant phenomenon about LW or is it just a characteristic of my psyche and history that explain my pattern of reading LW?

Comment author: diegocaleiro 03 September 2013 02:51:26PM 2 points [-]

I predict that some people will have been through the sequences, which are Main posts, but then mainly cared about discussion. I suspect it has to do with Morning Newspaper Bias - the bias of thinking that new stuff is more relevant, when actually it is just pointless to read most of the time, only scrambles your mind, and loses value very quickly.

Comment author: RolfAndreassen 03 September 2013 06:17:46PM 5 points [-]

I read the Sequences as they were posted; Main posts now rarely hold my interest the same way. Eliezer's writing is just better than most people's.

Comment author: shminux 03 September 2013 06:34:08PM *  27 points [-]

Honestly, I don't know why Main is even an option for posting. It should really be just an automatically labeled/generated "Best of LW" section, where Discussion posts with, say, 30+ karma are linked. This is easy to implement, and easy to do manually using the Promote feature until it is. The way it is now, it's mostly used by people who think that they are making an important contribution to the site, which is more of a statement about their ego than about the quality of their posts.

Comment author: drethelin 03 September 2013 07:41:27PM 10 points [-]

I read the sequences and a bunch of other great old main posts but now mostly read discussion. It feels like Main posts these days are either repetitive of what I've read before, simply wrong or not even wrong, or decision theory/math that's above my head. Discussion posts are more likely to be novel things I'm interested in reading.

Comment author: CAE_Jones 04 September 2013 08:02:40AM 3 points [-]

This describes how my use of LW has wound up pretty accurately.

Comment author: Username 03 September 2013 08:33:15PM 1 point [-]

I've definitely noticed this in my use of LW. I find that the open threads/media threads with their consistent high-quality novelty in a wide range of subject areas are far more enjoyable than the more academic main threads. Decision theory is interesting, but it's going to be hard to hold my attention for a 3,000 word post when there are tasty 200-word bites of information over here.

Comment author: David_Gerard 03 September 2013 08:39:27PM 0 points [-]

Well, chat's always more fun.

Comment author: ygert 04 September 2013 10:13:39AM 0 points [-]

My experience is similar. I read the sequences as they were published on OB; then when the move over to LW happened, I just subscribed to the RSS feed and only read Promoted posts for quite a few years. Only about a year ago did I actually sign up for an account here and start posting and reading Discussion and the Open Thread.

Comment author: tgb 04 September 2013 12:34:32PM 10 points [-]

Selection bias alert: asking people whether they have transitioned to reading mostly discussion and then to mostly just open threads in an open thread isn't likely to give you a good perspective on the entire population, if that is in fact what you were looking for.

Comment author: private_messaging 04 September 2013 07:38:11PM 1 point [-]

There would be far more selection bias if he asked about it outside an open thread, though.

Comment author: blashimov 06 September 2013 01:29:15AM 0 points [-]

Really? Why?

Comment author: private_messaging 06 September 2013 09:20:45AM 3 points [-]

Because he's asking about people who only read the open thread. Here he could get responses from the people who read LW in general, inclusive of the open thread, and people who read only the open thread (he'll miss the people who don't read the open thread). Outside the open thread, he gets no response at all from people who only read the open thread.

Comment author: niceguyanon 04 September 2013 02:49:51PM 0 points [-]

I'll admit that much of the Main sequence material is too heavy to understand without prior knowledge, so I find discussions much easier to take in, and many times I end up reading a sequence post because it was linked in a discussion comment. For me, discussion posts are like the gateway to Main.

Comment author: David_Gerard 05 September 2013 07:45:38PM 0 points [-]

The lower the barrier to entry, the more the activity. Thus, more posts are on Discussion. My hypothesis is that this has worked well enough to make Discussion where stuff happens. c.f. how physics happens on arXiv these days, not in journals. (OTOH, it doesn't happen on viXra, whose barrier to entry may be too low.)

Comment author: niceguyanon 03 September 2013 05:24:54PM 5 points [-]

Is there a name for taking someone being wrong on A as evidence that they are wrong on B? Is this a generally sound heuristic to have? In the case of crank magnetism, should I take someone's crank ideas as evidence against an idea of theirs that is new and unfamiliar to me?

Comment author: Adele_L 03 September 2013 07:34:41PM 0 points [-]

Bayes' theorem to the rescue! Consider a crank C, who endorses idea A. Then the probability of A being true given that C endorses it equals the probability of C endorsing A given that A is true, times the probability that A is true, over the probability that C endorses A.

In equations: P(A being true | C endorsing A) = P(C endorsing A | A being true)*P(A being true)/P(C endorsing A).

Since C is known to be a crank, our probability for C endorsing A given that A is true is rather low (cranks have an aversion to truth), while our probability for C endorsing A in general is rather high (i.e. compared to a more sane person). So you are justified in being more skeptical of A, given that C endorses A.
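
Plugging in some toy numbers (all made up, purely to show the direction of the update):

p_A = 0.5                    # prior probability that idea A is true
p_endorse_given_A = 0.1      # a crank is unlikely to endorse A if A is true
p_endorse_given_not_A = 0.4  # cranks endorse plenty of false ideas

p_endorse = p_endorse_given_A * p_A + p_endorse_given_not_A * (1 - p_A)
p_A_given_endorse = p_endorse_given_A * p_A / p_endorse
print(p_A_given_endorse)     # 0.2 < 0.5: the crank's endorsement lowered P(A)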

Comment author: shminux 03 September 2013 08:11:38PM *  1 point [-]

I don't know if there is a name for it, but there ought to be one, since this heuristic is so common: the reliability prior of an argument is the reliability of the arguer. For example, one reason I am not a firm believer in the UFAI doomsday scenarios is Eliezer's love affair with MWI.

Comment author: Salemicus 03 September 2013 10:44:05PM 2 points [-]

I don't know if there's a name for this, but I definitely do it. I think it's perfectly legitimate in certain circumstances. For example, the more B is a subject of general dispute within the relevant grouping, and the more closely-linked belief in B is to belief in A, the more sound the heuristic. But it's not a short-cut to truth.

For example, suppose that you don't know anything about healing crystals, but are aware that their effectiveness is disputed. You might notice that many of the same people who (dis)believe in homeopathy also (dis)believe in healing crystals, that the beliefs are reasonably well-linked in terms of structure, and you might already know that homeopathy is bunk. Therefore it's legitimate to conclude that healing crystals are probably not a sound medical treatment - although you might revise this belief if you got more evidence. On the other hand, note that reversed stupidity is not truth - healing crystals being bunk doesn't indicate that conventional medicine works well.

The place where I find this heuristic most useful is politics, because the sides are well-defined - effectively, you have a binary choice between A and ~A, regardless of whether hypothetical alternative B would be better. If I stopped paying attention to current affairs, and just took the opposite position to Bob Crow on every matter of domestic political dispute, I don't think I'd go far wrong.

Comment author: satt 03 September 2013 11:02:02PM 0 points [-]

Comment author: CellBioGuy 05 September 2013 06:51:36AM 0 points [-]

Horrifically misnamed.

Comment author: Douglas_Knight 04 September 2013 03:06:29AM *  0 points [-]

ad hominem

Not that there's anything wrong with that.

Comment author: Mestroyer 04 September 2013 03:27:09AM 8 points [-]

It's evidence against them being a person whose opinion is strong evidence of B, which means it is evidence against B, but it's probably weak evidence, unless their endorsement of B is the main thing giving it high probability in your book.

Comment author: [deleted] 04 September 2013 06:34:32PM 1 point [-]

Yes, but in many cases it's very weak evidence. Overweighting it leads to the "reversed stupidity" failure mode.

Comment author: linkhyrule5 04 September 2013 01:50:42AM 2 points [-]

So.... Thinking about using Familiar, and realizing that I don't actually know what I'd do with it.

I mean, some things are obvious - when I get to sleep, how I feel when I wake up, when I eat, possibly a datadump from RescueTime... then what? All told that's about 7-10 variables, and while the whole point is to find surprising correlations I would still be very surprised if there were any interesting correlations in that list.

Suggestions? Particularly from someone already trying this?
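
A sketch of the "find surprising correlations" step a tool like Familiar presumably automates (assuming a hypothetical daily log, log.csv, with one row per day and one column per tracked variable, and using pandas):

import pandas as pd

# Hypothetical daily log: date, sleep_hours, wakeup_mood, rescuetime_minutes, ...
df = pd.read_csv("log.csv", parse_dates=["date"]).set_index("date")

corr = df.corr()       # pairwise Pearson correlations between the tracked variables
print(corr.round(2))   # scan the off-diagonal entries for surprisingly large values

With only 7-10 variables, the whole matrix is small enough to eyeball.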

Comment author: niceguyanon 04 September 2013 02:32:26PM 4 points [-]

I have updated on how important it is for Friendly AI to succeed (more now). I did this by changing the way I thought about the problem. I used to think in terms of the chance of Unfriendly AI; this led me to assign a chance to whether a fast, self-modifying, indifferent AI or an FAI was possible at all.

Instead of thinking of the risk of UFAI, I started thinking of the risk of ~FAI. The more I think about it the more I believe that a Friendly Singleton AI is the only way for us humans to survive. FAI mitigates other existential risks: nature, unknowns, failures of human cooperation (Mutually Assured Destruction is too risky), as well as hostile intelligences, both human and self-modifying trans-human. My credence – that without FAI, existential risks will destroy humanity within 1,000 years – is 99%.

Is this flawed? If not then I'm probably really late to this idea, but I thought I would mention it because it's taken considerable time for me to see it like this. And if I were to explain the AI problem to someone who is uninitiated, I would be tempted to lead with "~FAI is bad", rather than "UFAI is bad". Why? Because intuitively, the dangers of UFAI feel "farther" than ~FAI. First people have to consider whether AI is even possible, then consider why UFAI is bad; this is a future problem. Whereas ~FAI is now, it feels nearer, it is happening – we have come close to annihilating ourselves before and technology is just getting better at accidentally killing us, therefore let's work on FAI urgently.

Comment author: Lumifer 04 September 2013 03:15:11PM 1 point [-]

The more I think about it the more I believe that a Friendly Singleton AI is the only way for us humans to survive.

So you want a god to watch over humanity -- without it we're doomed?

Comment author: niceguyanon 04 September 2013 03:35:41PM 2 points [-]

As of right now, yes. However, I could be persuaded otherwise.

Comment author: Lumifer 04 September 2013 03:58:36PM *  3 points [-]

A Singularity conference around a project financed by a Russian oligarch seems to be mostly about uploading and ems.

Looks curious.

Comment author: Thomas 04 September 2013 05:39:21PM 8 points [-]

Comment author: wadavis 06 September 2013 02:53:13PM 1 point [-]

Thank you. All I need is a hand-held spray thermos to make Australia a viable working vacation. I have a strong irrational aversion to spiders. This is much more acceptable than the home-made flamer.

Comment author: JMiller 04 September 2013 07:21:12PM 2 points [-]

Hi, I am taking a course in Existentialism. It is required for my degree. The primary authors are Sartre, de Beauvoir and Merleau-Ponty. I am wondering if anyone has taken a similar course, and how they prevented the material from driving them insane (I have been warned this may happen). Is there any way to frame the material to make sense to a naturalist/reductionist?

Comment author: Mitchell_Porter 05 September 2013 12:55:23AM 10 points [-]

This could be a Lovecraft horror story: "The Existential Diary of JMiller."

Week 3: These books are maddeningly incomprehensible. Dare I believe that it all really is just nonsense?

Week 8: Terrified. Today I "saw" it - the essence of angst - and yet at the same time I didn't see it, and grasping that contradiction is itself the act of seeing it! What will become of my mind?

Week 12: The nothingness! The nothingness! It "is" everywhere in its not-ness. I can not bear it - oh no, "not", the nothingness is even constitutive of my own reaction to it - aieee -

(Here the manuscript breaks off. JMiller is currently confined in the maximum security wing of the Asylum for the Existentially Inane.)

Comment author: kalium 05 September 2013 06:02:57PM 0 points [-]

I suspect that warning was intended as a joke.

Comment author: fubarobfusco 06 September 2013 05:05:15AM *  0 points [-]

All of those weird books were written by humans.
Those humans were a lot like other humans.
They had noses and butts and toes.
They ate food and they breathed air.
They could add numbers and spell words.
They knew how to have conversations and how to use money.
They had girlfriends or boyfriends or both.
Why did they write such weird books?
Was it because they saw other humans kill each other in wars?
Was it because writing weird books can get you a lot of attention and money?
Was it because they remembered feeling weird about their moms and dads?
People talk a lot about that.
Why do they talk a lot about that?

Comment author: pragmatist 06 September 2013 09:35:36AM *  1 point [-]

When reading Merleau-Ponty it might help to also read the work of contemporary phenomenologists whose work is much more rooted in cognitive science and neuroscience. A decent example is Shaun Gallagher's book How the Body Shapes the Mind, or perhaps his introductory book on naturalistic phenomenology, which I haven't read. Gallagher has a more or less Merleau-Pontyesque view on a lot of stuff, but explicitly connects it to the naturalistic program and expresses things in a much clearer manner. It might help you read Merleau-Ponty sympathetically.

Comment author: Ichneumon 04 September 2013 07:23:59PM 2 points [-]

In the effective animal altruism movement, I've heard a bit (on LW) about wild animal suffering - that is, since raised animals are vastly outnumbered by wild animals (who encounter a fair bit of suffering on a frequent basis), we should be more inclined to prevent wild suffering than to worry about spreading vegetarianism.

That said, I think I've heard it sometimes as a reason (in itself!) not to worry about animal suffering at all, but has anyone tried to solve or come up with solutions for that problem? Where can I find those? Alternatively, are there more resources I can read on wild animal altruism in general?

Comment author: Oscar_Cunningham 04 September 2013 07:57:14PM 3 points [-]

since raised animals are vastly outnumbered by wild animals

That doesn't sound true if you weight by intelligence (which I think you should since intelligent animals are more morally significant). Surely the world's livestock outnumber all the other large mammals.

Comment author: blashimov 06 September 2013 01:26:56AM 1 point [-]

Large mammals only? Is a domesticated cow smarter than a rat? A pigeon? Tough call.

Comment author: MugaSofer 04 September 2013 07:52:30PM *  5 points [-]

This may be an odd question, but what (if anything) is known on turning NPCs into PCs? (Insert your own term for this division here, it seems to be a standard thing AFAICT.)

I mean, it's usually easier to just recruit existing PCs, but ...

Comment author: blashimov 06 September 2013 01:13:26AM 2 points [-]

Take the Leadership feat, and hope your GM is lazy enough to let you level them. More practically, is it a skills problem or, as I would guess, an agency problem? Can you impress on them the importance of acting vs. not? Lend them The Power of Accountability? The 7 Habits of Highly Effective People? Can you compliment them every time they show initiative? Etc. I think the solution is too specific to individuals for general advice, nor do I know a general advice book beyond those in the same theme as those mentioned.

Comment author: topynate 04 September 2013 10:40:12PM 2 points [-]

Yet another article on the terribleness of schools as they exist today. It strikes me that Methods of Rationality is in large part a fantasy of good education. So is the Harry Potter/Sherlock Holmes crossover I just started reading. Alicorn's Radiance is a fair fit to the pattern as well, in that it depicts rapid development of a young character by incredible new experiences. So what solutions are coming out of the rational community? What concrete criteria would we like to see satisfied? Can education be 'solved' in a way that will sell outside this community?

Comment author: RomeoStevens 04 September 2013 11:40:04PM *  1 point [-]

I recently read Luminosity/Radiance; was there ever a discussion thread on here about it?

SPOILERS for the end

V jnf obgurerq ol gur raq bs yhzvabfvgl. Abg gb fnl gung gur raq vf gur bayl rknzcyr bs cbbe qrpvfvba znxvat bs gur punenpgref, naq cresrpgyl engvbany punenpgref jbhyq or obevat naljnl. Ohg vg frrzf obgu ernfbanoyr nf fbzrguvat Oryyn jbhyq unir abgvprq naq n terng bccbeghavgl gb vapyhqr n engvbanyvgl yrffba. Anzryl, Oryyn artrypgrq gb fuhg hc naq zhygvcyl. Fur vf qribgvat yvzvgrq erfbheprf gbjneqf n irel evfxl cyna bs unygvat nyy uhzna zheqre ol inzcverf vzzrqvngryl. Fbyivat guvf vffhr vf cynhfvoyl rzbgvbanyyl eryrinag, ohg Oryyn fubhyq unir abgvprq gung vg qbrfa'g znggre nyy gung zhpu ubj lbh trg xvyyrq, vg vf ebhtuyl rdhnyyl gentvp ab znggre gur sbez bs qrngu vs vzzbegnyvgl rkpvfgf. Juvpu vg qbrf. Inzcverf qb abg ercerfrag n fvmrnoyr senpgvba bs nyy qrnguf. Nf n crefba va gur nccnerag cbfvgvba gb raq qrngu bar fubhyq or n ovg zber pnershy jvgu frphevgl. Bs pbhefr Oryyn naq pb. zvtug srry vaivapvoyr sbyybjvat gurve ivpgbel. Qba'g gurl unir npprff gb nyy gur zbfg cbjreshy jvgpurf? Vfa'g Nyyvaern(fc?) gur hygvzngr ahyyvsvre? Jryy, znlor. Ohg gur sbezre Iraghev unq n ybg ybatre gb cyna naq n ybg srjre pbafgenvagf ba gurve orunivbe, zbenyyl fcrnxvat, naq gurl frrzrq gb gernq pnershyyl. Vs gur Iraghev unq nyy gung svercbjre, jul qvq gurl obgure gb znvagnva perqvovyvgl jvgu gur trareny inzcver cbchyngvba? Guvf fubhyq or n erq synt. Oryyn vf va gur cbfvgvba gb raq qrngu pbaqvgvbany ba ure erznvavat va cbjre. Fur fubhyq or gernqvat yvtugyl urer naq qribgvat erfbheprf gb fbyivat gur ceboyrz bs flagurgvp inzcver sbbq nf dhvpxyl nf cbffvoyr. Nalguvat gung cbfrf n frphevgl guerng gb guvf vavgvngvir fubhyq or pbafvqrerq vafnavgl.

Comment author: drethelin 05 September 2013 03:32:51AM 1 point [-]

Raqvat inzcver zheqref VF gernqvat yvtugyl: Vg'f n cbyvgvpny zbir. Gur orfg jnl gb trg uhznavgl abg gb ungr naq srne lbh vf gb or noyr gb pbasvqragyl gryy gurz gung gurl unir ab ernfba gb, naq gung lbh jnag bayl jung'f orfg sbe gurz. Zhpu nf Nzrevpn vf zvfgehfgrq va gur zvqqyr rnfg orpnhfr jr obzo jvgu unaq naq qvfgevohgr sbbq fhccyvrf jvgu gur bgure, crbcyr naq tbireazragf jvyy or zhpu yrff jvyyvat gb gehfg inzcverf vs gurl'er fgvyy xvyyvat crbcyr ng jvyy. Gur uhzna cbyvgvpny cbjref naq nyy gur uhznaf va gur jbeyq ner ahzrebhf rabhtu gung vs gur znfxrenqr oernxf, gur inzcverf jbhyq or va ZNWBE gebhoyr. Gur ovttrfg rkgnag guerng gb gur znfdhrenqr vf uhznaf orvat xvyyrq ol inzcverf. Fb fgbccvat xvyyvat uhznaf vf gur arkg fnsr fgrc jurgure lbh ner gelvat gb erirny inzcverf be pbaprny gurz.

Comment author: tgb 05 September 2013 02:22:48AM 12 points [-]

Background: "The genie knows, but doesn't care" and then this SMBC comic.

The joke in that comic annoys me (and it's a very common one on SMBC; there must be at least five strips there with approximately the same setup). Human values aren't obliged to align with the forces of natural selection. We happen to be the product of natural selection, and, yes, that made us have some values which are approximately aligned with long-term genetic fitness. But studying biology does not make us change our values to suddenly become those of evolution!

In other words, humans are a 'genie that knows, but doesn't care'. We have understood the driving pressures that created us. We have understood what they 'want', if that can really be applied here. But we still only care about the things which the mechanics of our biology happened to have made us care about, even though we know these don't always align with the things that 'evolution cares about.'

(Please if someone can think of a good way to say this all without anthropomorphising natural selection, help me. I haven't thought enough about this subject to have the clarity of mind to do that and worry that I might mess up because of such metaphors.)

Comment author: MileyCyrus 05 September 2013 04:18:23PM 5 points [-]

If anyone wants to teach English in China, my school is hiring. The pay is higher than the market rate and the management is friendly and trustworthy. Must have a Bachelor's degree and a passport from an English-speaking country. If you are at all curious, PM me for details.

Comment author: [deleted] 05 September 2013 04:55:44PM 0 points [-]

LWers seem to be pretty concerned about reducing suffering through vegetarianism, charity, utilitarianism, etc., which I completely don't understand. Can anybody explain to me what the point of reducing suffering is?

Thanks.

Comment author: drethelin 05 September 2013 06:15:17PM 5 points [-]

Commonly, humans have an amount of empathy that means that when they know about the suffering of entities within their circle of interest, they also suffer. E.g., I can feel sad because my friend is sad. Some people have really vast circles, and feel sad when they think about animals suffering.

Do you understand suffering yourself? If so, presumably when you suffer you act to reduce it, by not holding your hand in a fire or whatnot. Working to end the suffering of others can end your own empathic suffering.

Comment author: Oscar_Cunningham 05 September 2013 06:23:34PM 3 points [-]

I don't help people because of empathy for them. I just want to help them. It's a terminal value for me that other people be happy. I do feel empathy, but that's not why I help people.

Your utility function needn't be your own personal happiness! It can be anything you want!

Comment author: drethelin 05 September 2013 06:33:53PM 3 points [-]

No it can't. You don't get to choose your utility function.

But anyway I was responding to rationalnoodles as someone who clearly doesn't seem to understand wanting to help people.

Comment author: Jayson_Virissimo 05 September 2013 08:19:05PM 0 points [-]

Are you implying that utility functions don't change or that they do, but you can't take actions that will make it more likely to change in a given direction, or something else?

Comment author: drethelin 05 September 2013 09:09:20PM 2 points [-]

More that any decision you make about trying to change your utility function is not "choosing a utility function" but is actually just your current utility function expressing itself.

Comment author: Oscar_Cunningham 05 September 2013 08:30:46PM 6 points [-]

My point was that you should never feel constrained by your utility function. You should never feel like it's telling you to do something that isn't what you want. But if you thought that utility=happiness then you might very well end up feeling this way.

Comment author: drethelin 05 September 2013 09:09:53PM 2 points [-]

That's fair. I think a better way to put it is not to place too much weight on any explicit attempt to state your own utility function.

Comment author: Oscar_Cunningham 05 September 2013 09:19:38PM 0 points [-]

Yeah.

Comment author: [deleted] 06 September 2013 01:24:21AM *  -1 points [-]

I understand wanting to help people. I have empathy and I feel all the things you've mentioned. What I'm trying to say is: if you suffer when you think about the suffering of others, why not try to stop thinking (caring) about it and donate to science, instead of spending your time and money on reducing suffering?

Comment author: Schlega 06 September 2013 03:51:12AM 1 point [-]

In my experience, trying to choose what I care about does not work well, and has only resulted in increasing my own suffering.

Is the problem that thinking about the amount of suffering in the world makes you feel powerless to fix it? If so, then you can probably make yourself feel better by focusing on what you can do to have some positive impact, even if it is small. If you think "donating to science" is the best way to have a positive impact on the future, then by all means do that, and think about how the research you are helping to fund will one day reduce the suffering that all future generations would otherwise have to endure.

Comment author: [deleted] 06 September 2013 04:18:58AM 0 points [-]

It could be the problem, but, actually, the main one is that I see no point in reducing suffering and it looks like nobody can explain it to me.

Comment author: alex_zag_al 05 September 2013 09:10:07PM *  4 points [-]

Has anyone here read up through ch. 18 of Jaynes' PT:LoS? I just spent two hours trying to derive 18.11 from 18.10. That step is completely opaque to me; can anybody who's read it help?

You can explain in a comment, or we can have a conversation. I've got gchat and other stuff. If you message me or comment we can work it out. I probably won't take long to reply, I don't think I'll be leaving my computer for long today.

EDIT: I'm also having trouble with 18.15. Jaynes claims that P(F|A_p E_aa) = P(F|A_p) but justifies it with 18.1... I just don't see how that follows from 18.1.

EDIT 2: It hasn't answered my question, but there's online errata for this book: http://ksvanhorn.com/bayes/jaynes/ . Chapter 18 has a very unfinished feel, and I think the errata will help with other confusions I get into about it.

Comment author: Oscar_Cunningham 06 September 2013 07:00:16AM *  2 points [-]

I've just looked and I have no idea either. If anyone wants to help there's a copy of the book here.

EDIT: The numbers in that copy are off by 1 from the book. "18.10" = "18-9" and so on.

Comment author: alex_zag_al 06 September 2013 03:21:02PM *  0 points [-]

Yeah, so to add some redundancy for y'all, here's the text surrounding the equations I'm having trouble with.

The 18.10 to 18.11 jump I'm having trouble with is the one in this part of the text:

But suppose that, for a given E_b, (18.8) holds independently of what E_a might be; call this 'strong irrelevance'. Then we have [18.10, equation not reproduced here]. But if this is to hold for all (A_p|E_a), the integrands must be the same: [18.11, equation not reproduced; this is the one I can't derive].

And equation 18.15, which I can't justify, is in this part of the text:

But then, by definition (18.1) of A_p, we can see that A_p automatically cancels out E_aa in the numerator: (F|A_p E_aa) = (F|A_p). And so we have (18.13) reduced to [18.15, equation not reproduced; this is the step whose justification I don't follow].
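Since the equation images may not come through, here is a plain-text sketch of the machinery I understand the step to rest on (my own transcription/reconstruction, not a verbatim quote from the book, so please check it against the text). The defining property of A_p is

(A|A_p E) = p for any additional evidence E, (18.1)

and any probability can be expanded over the A_p density:

(A|E) = ∫ p (A_p|E) dp, with p running from 0 to 1.

If I'm reading the structure right, the 18.10 -> 18.11 step writes both (A|E_a E_b) and (A|E_b) as integrals of this form, uses strong irrelevance to equate them for every choice of E_a, and concludes that the integrands, i.e. the densities (A_p|E_a E_b) and (A_p|E_b), must themselves be equal.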

Comment author: Vladimir_Nesov 05 September 2013 10:21:38PM 3 points [-]

Framing effects (causing cognitive biases) can be thought of as a consequence of the absence of logical transparency in System 1 thinking. Different mental models that represent the same information are psychologically distinct, and moving from one model to another requires thought. If this thought was not expended, the equivalent models don't get constructed, and intuition doesn't become familiar with these hypothetical mental models.

This suggests that framing effects might be counteracted by explicitly imagining alternative framings in order to present a better sample to intuition; or, alternatively, focusing on an abstract model that has abstracted away the irrelevant details of the framing.
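As a concrete illustration (my own toy example, borrowing the numbers of the classic Tversky-Kahneman disease problem, not something taken from the comment above): "program A saves 200 of 600 people" and "under program A, 400 of 600 people die" are the same model in a gain frame and a loss frame; only the presentation differs. A few lines of Python just to spell out that the two framings carry identical information:

    # Two framings of the same policy, after the classic Tversky & Kahneman
    # "disease problem"; the numbers are the textbook ones, used only for
    # illustration.
    TOTAL_AT_RISK = 600

    def survivors_gain_frame(saved=200):
        # "200 people will be saved"
        return saved

    def survivors_loss_frame(deaths=400):
        # "400 people will die"
        return TOTAL_AT_RISK - deaths

    # Extensionally the two descriptions are identical ...
    assert survivors_gain_frame() == survivors_loss_frame() == 200
    # ... yet they are psychologically distinct mental models, which is the
    # framing effect described above.

The effort of checking that the two descriptions are the same is exactly the "moving from one model to another" step that System 1 doesn't do for free.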

Comment author: jooyous 06 September 2013 01:27:18AM 3 points [-]

How do you pronounce "Yvain"?

Comment author: fubarobfusco 06 September 2013 07:36:47AM 0 points [-]

An awful lot of politics seems to be variations on the theme of "let's you and him fight".

Comment author: Risto_Saarelma 06 September 2013 09:05:24AM *  4 points [-]

An Open Letter to Friendly AI Proponents by Simon Funk (who wrote the After Life novel):

No law or even good idea is going to stop various militaries around the world, including our own, from working as fast as they can to create Skynet. Even if they tell you they've put the brakes on and are cautiously proceeding in perfect accordance with your carefully constructed rules of friendly AI, that's just their way of telling you you're stupid.

There are basically two outcomes possible here: They succeed in your lifetime, and you are killed by a Terminator, or they don't succeed in your lifetime and you die of old age.

I suggest choosing option three: Have one last party with your navel, then get off your sofa and grab a computer or a pad of paper and start working on solving AI as fast as you can. Contrary to singularity b.s., the AI you invent isn't going to rewrite the laws of physics and destroy the universe before you can hit control-C. Basic space, time, and energy limitations will likely confound your laptop's ambitions to take over the world for quite some time--plenty of time for those who best understand it to toy with what it really takes to make it friendly. That's assuming it's you and me, and not SAIC. And maybe, just maybe, if we work together and make enough progress in our lifetimes, that AI can help us live long enough to live even longer still...

But it starts now, and the first step is admitting that AI is hard and accepting that you have no fucking clue how to do it. If you can't do that, you'll never be able to leave that sofa comfort zone. Have an idea? Try it. Code it up. Nothing will teach you more about what you do (and mostly don't) know than that. Share your results, positive or negative. Look for more ideas. Don't be attached to anything--wear failures with pride. Today's good idea is tomorrow's nonsense, and two years later may prove the solution after all. Stir the pot and dive in. Make it happen.

I promise you that by default the next thirty years of your life will go by in a blink and you will look around you horrified at how little progress has happened--and you'll wish you'd been working on the other side of the equation.

Comment author: somervta 06 September 2013 09:25:03AM 5 points [-]

So, in other words, absolutely no engagement with the actual ideas/arguments of the people the 'letter' is addressed to.