You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.
I would like recommendations for an Android / web-based to-do list / reminder application. I was happily using Astrid until a couple of months ago, when they were bought up and mothballed by Yahoo. Something that works with minimal setup, where I essentially stick my items in a list, and it tells me when to do them.
I've been happily using http://www.rememberthemilk.com/ to manage my GTD system. It's got a simple, intuitive interface, both on desktop and on Android. I'm not sure if it has the reminder features you're after, since that's not something I've ever wanted.
I was on Astrid too. I switched to Wunderlist mostly because their import from Astrid worked correctly. Wunderlist is OK, though I can't say I'm completely satisfied with it. Its UI is laggy (on a Nexus 4!) and unreliable: the auto-sync often destroys the last task I typed in, and if I accidentally tap outside the task entry box, the text I just typed is lost forever.
I'm looking at alternatives, and the one I like the most so far is Remember the Milk. Last time I tried it (probably a year ago) it was rubbish, but the latest version has a clean and fast native Android GUI and some nice extra functionality (e.g. geofencing). I'm thinking about switching, but it doesn't have import from Wunderlist, so I'll have to move about 200 tasks manually.
Comment author:Dorikka
02 September 2013 04:08:00PM
2 points
[-]
Why do you say that? Supposing diminishing marginal returns for additional resources, I don't see how you're going to get around the QALY loss from killing the person.
The thought occurred to me whilst I was thinking about the allegation that America invaded Iraq in order to steal its oil. This would be trading lives for money, hence the comparison to efficient charities.
Doing some quick internet research, it seems that the gains from oil come nowhere near to even cancelling out the epic financial cost of the war. So the war was a bad idea even by the silly criteria in my original post.
Furthermore it seems that ethical investment is just as profitable as unethical investment. [1] [2] [3] (How can this be true!? Am I misreading these?) So in fact it turns out to be sort of hard to be a "reverse charity".
Comment author:gwern
02 September 2013 05:43:06PM
5 points
[-]
Doing some quick internet research, it seems that the gains from oil come nowhere near to even cancelling out the epic financial cost of the war. So the war was a bad idea even by the silly criteria in my original post.
As I recall, the formulation was usually that it was American oil companies which were to blame. It's true that the war has been epicly bad for America (what are we at now, a net total of $4t in costs?), but that's not the same thing as showing it was bad for the oil companies ('privatize the gains, socialize the losses'), and even if it was shown that ex post it has been a loss for the oil companies (they got shut out by the Kurds and Iraqi federal government, basically, didn't they?), that doesn't show that they weren't expecting gains or were irrational in expecting gains.
Comment author:ChristianKl
02 September 2013 11:08:20PM
0 points
[-]
As I recall, the formulation was usually that it was American oil companies which were to blame.
That depends on the people with whom you are discussing the issue. The kind of people who use the word geopolitics a lot usually say that it's about more than the interest of the companies.
It's also worth noting that the Iraq war did produce an immediate increase in the price of oil which increased the profits of the oil companies.
Comment author:DanielLC
02 September 2013 07:21:36PM
2 points
[-]
I can't remember the article where this was stated, but we have instincts for morality because following them made our ancestors more successful. They're there for our benefit, not each other's. If killing someone and taking their stuff had seemed like a net benefit to our ancestors, and they had no built-in aversion, they'd have done it, and they would likely have been caught and punished.
Comment author:polarix
03 September 2013 09:35:03AM
*
0 points
[-]
This does not actually speak to the utility of such instincts to individuals. Rather, it indicates their utility to the gene bundle, by increasing the genes' probability of propagating. A tribe that stole from itself would not get very far through time.
Comment author:Lumifer
03 September 2013 04:43:37PM
0 points
[-]
Our ancestors generally divided the world into "those like us" and "those unlike us". Killing "those unlike us" and taking their stuff was perfectly fine and even encouraged.
The boundary between "those like us" and "those unlike us" historically varied and has been drawn on the basis of family, tribe, state, religion, race, etc. etc.
Comment author:wedrifid
02 September 2013 10:21:46PM
1 point
[-]
"Killing people and taking their stuff" has a positive QALY per dollar. GiveWell should check it out.
The second sentence actually doesn't follow from the first. GiveWell investigating it has a negative expected value even if actually doing it (well) has positive value. Among other things, it makes it harder for Robin Hoods to avoid getting caught.
Comment author:twanvl
02 September 2013 04:22:19PM
4 points
[-]
Are old humans better than new humans?
This seems to be a hidden assumption of cryonics / transhumanism / anti-deathism: We should do everything we can to prevent people from dying, rather than investing these resources into making more or more productive children.
The usual argument (which I agree with) is that "Death events have a negative utility". Once a human already exists, it's bad for them to stop existing.
Comment author:diegocaleiro
02 September 2013 10:30:45PM
*
3 points
[-]
Complement it with the fact that it costs about 800 thousand dollars to raise a mind, and an adult mind might be able to create value at rates high enough to continue existing.
Macaulay Culkin and Haley Joel Osment notwithstanding, that is a good argument against children.
Comment author:twanvl
02 September 2013 10:44:53PM
2 points
[-]
Complement it with the fact that it costs about 800 thousand dollars to raise a mind, and an adult mind might be able to create value at rates high enough to continue existing.
An adult, yes. But what about the elderly? Of course this is an argument for preventing the problems of old age.
that is a good argument against children.
Is it? It just says that you should value adults over children, not that you should value children over no children. To get one of these valuable adult minds you have to start with something.
Comment author:Mestroyer
03 September 2013 03:54:37AM
1 point
[-]
How does that negative utility vary over time, though? If it stays the same (or increases), and we know now that it's impossible to live 3^^^3 years, then the disutility from dying sooner is counterbalanced (or more than counterbalanced) by the averted disutility from dying later, meaning decisions come out basically the same as if you didn't disvalue death (or as if you valued it).
I think that part of the badness of death is the destruction of that person's accumulated experience. Thus the negative utility of death does indeed increase over time. However this is counterbalanced by the positive utility of their continued existence. If someone lives to 70 rather than 50 then we're happy because the 20 extra years of life were worth more than the worsening of the death event.
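The trade-off described above can be sketched numerically. This is a toy model with made-up functional forms (the 0.5 coefficient is an arbitrary assumption, not anything from the thread): the badness of a death grows with accumulated experience, but the value of the extra years grows faster, so longer lives still come out ahead.

```python
# Toy model: death destroys accumulated experience, so its badness
# grows with age, but slower than the value of the years lived.

def death_badness(age):
    """Assumed: badness proportional to experience lost at death."""
    return 0.5 * age

def life_utility(age_at_death):
    """Net utility of a life: years lived minus badness of the death event."""
    return age_at_death - death_badness(age_at_death)

# Living to 70 beats living to 50: the 20 extra years outweigh
# the worsening of the death event.
assert life_utility(70) > life_utility(50)  # 35 > 25
```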
Comment author:Mestroyer
03 September 2013 10:56:51PM
*
0 points
[-]
So if Bob is cryopreserved, and I can res him for N dollars, or create a simulation of a new person and run them quickly enough to catch up a number of years equal to Bob's age at death, for N - 1 dollars, I should spend all available dollars on the latter?
Edit: to clarify why I think this is implied by your answer, what this is doing is trading such that you gain a death at Bob's current age, but gain a life of experience up to Bob's current age. If a life ending at Bob's current age is net utility positive, this has to be net utility positive too.
Comment author:drethelin
03 September 2013 11:03:08PM
2 points
[-]
broadly: yes, though all available dollars is actually all available dollars (for making people), and you're ignoring considerations like keeping promises to people unable to enforce them such as the cryopreserved or asleep or unconscious etc.
Comment author:Izeinwinter
02 September 2013 07:04:33PM
7 points
[-]
Existing people take priority over theoretical people. Infinitely so. This should be obvious, as the reverse conclusion ends up with utter absurdities of the "Every sperm is sacred" variety.
Mad grin
Once a child is born, it has as much claim on our consideration as every other person in our light cone, but there is no obligation to have children. Not any specific child, nor any at all. Reject this axiom and you might as well commit suicide over the guilt of the billions of potential children you could have that are never going to be born. Right now.
Even if you stay pregnant till you die / never masturbate, this would effectively not help at all: each conception moves one potential from the space of "could be" to the space of "is", but at the same time eliminates at least several hundred million other potential children from the possibility space. That is just how human reproduction works.
Comment author:twanvl
02 September 2013 10:09:22PM
5 points
[-]
Existing people take priority over theoretical people. Infinitely so.
Does this mean that I am free to build a doomsday weapon that, 100 years from now, kills everyone born after September 4th 2013, if that gets me a cookie?
This should be obvious, as the reverse conclusion ends up with utter absurdities of the "Every sperm is sacred" variety.
Not necessarily. It would merely be your obligation to have as many children as possible, while still ensuring that they are healthy and well cared for. At some point having an extra child will make all your children less well off.
Once a child is born, it has as much claim on our consideration as every other person in our light cone
Why is there a threshold at birth? I agree that it is a convenient point, but it is arbitrary.
Reject this axiom and you might as well commit suicide over the guilt of the billions of potentials children you could have that are never going to be born.
Why should I commit suicide? That reduces the number of people. It would be much better to start having children. (Note that I am not saying that this is my utility function).
The "infinitely so" part seems wrong, but the idea is that 4D histories which include a sentient being coming into existence, and then dying, are dispreferred to 4D world-histories in which that sentient being continues. Since the latter type of such histories may not be available, we specify that continuing for a billion years and then halting is greatly preferable to continuing for 10 years then halting. Our degree of preference for such is substantially greater than the degree to which we feel morally obligated to create more people, especially people who shall themselves be doomed to short lives.
Comment author:Alejandro1
03 September 2013 05:31:32AM
2 points
[-]
The switch from consequentialist language ("4D histories which include… are dispreferred") to deontological language ("…the degree to which we feel morally obligated to create more people") is confusing. I agree that saving the lives of existing people is a stronger moral imperative than creating new ones, at the level of deontological rules and virtuous conduct, which is a large part of everyday human moral reasoning. I am much less clear that, when evaluating 4D histories, I assign higher utility to one with few people living long lives than to one with more people living shorter lives. Actually, I tend towards the opposite intuition, preferring a world with more people who live less (as long as their lives are still well worth living, etc.)
Assuming Rawls's veil of ignorance, I would prefer to be randomly born in a world where a trillion people lead billion-year lifespans than one in which a quadrillion people lead million-year lifespans.
Comment author:Alejandro1
03 September 2013 03:02:11AM
*
9 points
[-]
I agree, but is this the right comparison? Isn't this framing obscuring the fact that in the trillion-people world, you are much less likely to be born in the first place, in some sense?
Let us try this framing instead: Assume there are a very large number Z of possible different human "persons" (e.g. given by combinatorics on genes and formative experiences). There is a Rawlsian chance of 1/Z that a new created human will be "you". Behind the veil of ignorance, do you prefer the world to be one with X people living N years (where your chance of being born is X/Z) or the one with 10X people living N/10 years (where your chance of being born is 10X/Z)?
I am not sure this is the right intuition pump, but it seems to capture an aspect of the problem that yours leaves out.
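A quick calculation shows what this framing does and doesn't settle. With made-up numbers for X, N, and Z (all hypothetical), the expected years lived by a random possible person come out identical in the two worlds, so any preference between them has to come from how utility scales with lifespan, not from the raw expectation:

```python
# Toy numbers for the framing above (all assumed, for illustration only).
Z = 1_000_000  # size of the space of possible persons
X = 1_000      # people created in world A
N = 1_000      # lifespan in world A, in years

def expected_years(people, lifespan, possible=Z):
    """Expected years lived by a random possible person,
    counting never-born persons as living 0 years."""
    p_born = people / possible
    return p_born * lifespan

world_a = expected_years(X, N)             # (X/Z) * N
world_b = expected_years(10 * X, N / 10)   # (10X/Z) * (N/10)

# The expectations are equal, so a utility function linear in lifespan
# cannot distinguish the two worlds; the intuition pump turns on
# whether long lives are worth more than proportionally more.
assert world_a == world_b
```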
Comment author:[deleted]
04 September 2013 08:25:51PM
4 points
[-]
I agree, but is this the right comparison? Isn't this framing obscuring the fact that in the trillion-people world, you are much less likely to be born in the first place, in some sense?
Rawls's veil of ignorance + self-sampling assumption = average utilitarianism, Rawls's veil of ignorance + self-indication assumption = total utilitarianism (so to speak)? I had already kind-of noticed that, but hadn't given much thought to it.
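The correspondence noted above can be illustrated with a toy pair of worlds (the populations and welfare numbers are my own made-up example, not from the thread): conditioning on existing (SSA) ranks worlds by average welfare, while letting the chance of existing scale with population (SIA) ranks them by total welfare.

```python
# Two candidate worlds: lists of per-person welfare levels (assumed numbers).
worlds = {
    "A": [10, 10, 10],           # 3 people, welfare 10 each  (total 30)
    "B": [6, 6, 6, 6, 6, 6],     # 6 people, welfare 6 each   (total 36)
}

# SSA: given that you exist, you are a random person *within* a world,
# so behind the veil you rank worlds by average welfare.
ssa_pick = max(worlds, key=lambda k: sum(worlds[k]) / len(worlds[k]))

# SIA: your probability of existing at all scales with population, so the
# ranking is expected welfare times P(existing), proportional to total welfare.
sia_pick = max(worlds, key=lambda k: sum(worlds[k]))

# The two assumptions pick different worlds, mirroring the
# average-vs-total utilitarianism split.
assert (ssa_pick, sia_pick) == ("A", "B")
```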
Comment author:Mestroyer
03 September 2013 03:46:29AM
5 points
[-]
Doesn't Rawls's veil of ignorance prove too much here though? If both worlds would exist anyway, I'd rather be born into a world where a million people lived 101 year lifetimes than a world where 3^^^3 people lived 100 year lifetimes.
Comment author:ShardPhoenix
03 September 2013 04:55:53AM
*
0 points
[-]
Would you? A million probably isn't enough to sustain a modern economy, for example. (Although in the 3^^^3 case it depends on the assumed density since we can only fit a negligible fraction of that many people into our visible universe).
Comment author:ShardPhoenix
03 September 2013 11:25:04PM
1 point
[-]
I think "fighting the hypothetical" is justified in cases where the necessary assumptions are misleadingly inaccurate - which I think is the case here.
Comment author:Creutzer
04 September 2013 05:40:13AM
3 points
[-]
But compared to 3^^^3, it doesn't matter whether it's a million people, a billion, or a trillion. You can certainly find a number that is sufficient to sustain an economy and is still vastly smaller than 3^^^3, and you will end up preferring the smaller number for a single additional year of lifespan. Of course, for Rawls, this is a feature, not a bug.
Comment author:TrE
03 September 2013 05:47:05PM
*
1 point
[-]
So then, Rawls's veil has to be modified such that you are randomly chosen to be one of a quadrillion people. In scenario A, you live a million years. In scenario B, one trillion people live for one billion years each, the rest are fertilized eggs which for some reason don't develop.
Comment author:MugaSofer
04 September 2013 07:57:58PM
1 point
[-]
This is true, but in my experience usually used to massage models that don't consider death a disutility into giving the right answers. I can't think of ever hearing this argument used for any other reason, in fact, in meatspace.
(Replying to this comment out of context on the Recent Comments.)
Comment author:Desrtopa
02 September 2013 05:05:45PM
*
4 points
[-]
The following query is sexual in nature, and is rot13'ed for the sake of those who would either prefer not to encounter this sort of content on Less Wrong, or would prefer not to recall information of such nature about my private life in future interactions.
V nz pheeragyl va n eryngvbafuvc jvgu n jbzna jub vf fvtavsvpnagyl zber frkhnyyl rkcrevraprq guna V nz. Juvyr fur cerfragyl engrf bhe frk nf "njrfbzr," vg vf abg lrg ng gur yriry bs "orfg rire," juvpu V ubcr gb erpgvsl.
Sbe pynevsvpngvba, V jbhyq fnl gung gur trareny urnygu naq fgnovyvgl bs bhe eryngvbafuvc vf rkgerzryl uvtu; guvf chefhvg vf n znggre bs erperngvba naq crefbany cevqr, abg n arprffnel vagreiragvba gb fnir gur eryngvbafuvc.
V'ir nyernql frnepurq bayvar sbe nyy gur vasbezngvba V pna svaq ba vzcebivat gur dhnyvgl bs frk, ohg fhpu vasbezngvba vf birejuryzvatyl rvgure gnetrgrq ng oevatvat crbcyr jvgu fbzr frevbhf qrsvpvrapl va gurve frk yvirf hc gb gur yriry bs abeznyvgl be fngvfslvat gurz gung gur abez vf yrff fcrpgnphyne guna gurl guvax naq gurl qba'g unir gb yvir hc gb vasyngrq fgnaqneqf, engure guna crbcyr gelvat gb npuvrir frk jnl bhg ba gur sne raq bs gur oryy pheir, be gnetrgrq ng crbcyr jub jbhyqa'g xabj jung "rzcvevpny onpxvat" jnf vs lbh uvg gurz va gur snpr jvgu vg.
V'z nyernql snzvyvne jvgu gur zbfg boivbhf, ybj unatvat sehvg vagreiragvbaf fhpu nf "pbageby fgerff," "qb xrtryf," rgp, naq jr pbzzhavpngr nobhg bhe frkhny cersreraprf naq npgvivgvrf rkgrafviryl. V'z nyfb nppbhagvat sbe snpgbef fhpu nf gur rzbgvbany pbagrkg bs bhe rapbhagref naq ubezbany plpyrf. Jung V'z ybbxvat sbe ng guvf cbvag ner rkprcgvbany zrnfherf sbe guvatf yvxr envfvat zl frkhny fgnzvan gb hahfhny yriryf, vapernfvat ure yriry bs nebhfny naq/be frafvgvivgl, naq fb sbegu. Obgu purzvpny naq abapurzvpny zrnfherf ner npprcgnoyr, ohg V jbhyq yvxr gb nibvq nalguvat yvxryl gb pneel qnatrebhf fvqr rssrpgf, naq vs cbffvoyr V jbhyq cersre abg gb erfbeg gb guvatf gung jbhyq erdhver zr gb trg n cerfpevcgvba sebz n qbpgbe juvyr qvfpybfvat gung V'z hfvat vg ba n cheryl erperngvbany onfvf.
Comment author:Desrtopa
02 September 2013 07:22:41PM
14 points
[-]
Well, I'm flattered that you think my position is so enviable, but I also think this would be a pretty reasonable course of action for someone who made a billion dollars.
Comment author:drethelin
02 September 2013 05:21:17PM
1 point
[-]
Practice makes perfect. I think a lot of good sex is intuitively reading your partner's signals and ramping things up/down with good timing in response to them. I think this is something you might be able to learn via logos but I think it's much more likely to be something you need to experience before you can get good at it. When to pull hair, when to thrust deeper, etc.
In general I and whoever I'm with have had more fun when I felt I had a good idea of what they wanted in the moment, which I think I've gotten better at mainly through practice.
Comment author:Desrtopa
02 September 2013 05:28:13PM
1 point
[-]
I suspect that I can continue to improve with practice, but I'd like to be able to set out every option available to me on the table.
Even if I can attain the status of "best" without taking such extraordinary measures, this is something I'm genuinely competitive on, which at least to me means that simply taking first place isn't sufficient if I can still see avenues to top myself.
Comment author:Desrtopa
04 September 2013 02:04:28PM
0 points
[-]
I'll check; I'm pretty sure my own library doesn't have a Sex section, but it might be in network.
Asking to order it would be pretty embarrassing, I have to admit, especially at my own library where a lot of the people who work there know me by name.
Comment author:Adele_L
02 September 2013 05:17:43PM
31 points
[-]
There has recently been some speculation that life started on Mars, and then got blasted to earth by an asteroid or something. Molybdenum is very important to life (eukaryote evolution was delayed by 2 billion years because it was unavailable), and the origin of life is easier to explain if Molybdenum is available. The problem is that Molybdenum wasn't available in the right time frame on Earth, but it was on Mars.
Anyway, assuming this speculation is true, Mars had the best conditions for starting life, but Earth had the best conditions for life existing, and it is unlikely conscious life would have evolved without either of these planets being the way they are. Thus, this could be another part of the Great Filter.
Side note: I find it amusing that Molybdenum is very important in the origin/evolution of life, and is also element 42.
Comment author:curiousepic
05 September 2013 06:13:41PM
*
3 points
[-]
As someone pointed out when I mentioned this to them, for this to be a candidate for the Great Filter there would need to be something intrinsic about how planets form that makes these two types of environments mutually exclusive; otherwise it doesn't seem to reduce the probability of their joint availability enough. Is this actually the case? Perhaps user:CellBioGuy can elucidate.
Comment author:Metus
02 September 2013 05:26:22PM
14 points
[-]
The ancient Stoics apparently had a lot of techniques for habituation and changing cognitive processes; some of those live on in the form of modern CBT. One of the techniques is to write a personal handbook of advice and sayings, to carry around at all times so as never to be without guidance from a calmer self. Indeed, Epictetus advises learning this handbook by rote to further internalisation. So I plan to write such a handbook for myself: once in long form, with anything relevant to my life and lifestyle, and once in a short form that I update with things that are difficult at the time, be it strong feelings or being deluded by some biases.
In this book I intend to include a list of all known cognitive biases and logical fallacies. I know that some biases are helped by simply knowing them, does anyone have a list of those? And should I complete the books or have a clear concept of their contents, are you interested in reading about the process of creating one and possible perceived benefits?
Comment author:Vaniver
02 September 2013 10:38:28PM
6 points
[-]
Though lack of motivation or laziness is not a particularly interesting answer.
I have found "I thought X would be awesome, and then on doing X realized that the costs were larger than the benefits" to be useful information for myself and others. (If your laziness isn't well modeled by that, that's also valuable information for you.)
Comment author:gwern
02 September 2013 05:37:05PM
*
9 points
[-]
To maybe help others out and solve the trust bootstrapping involved, I'm offering for sale <=1 bitcoin at the current Bitstamp price (without the usual premium) in exchange for Paypal dollars to any LWer with at least 300 net karma. (I would prefer if you register with #bitcoin-otc, but that's not necessary.) Contact me on Freenode as gwern.
EDIT: as of 9 September 2013, I have sold to 2 LWers.
Comment author:gwern
06 September 2013 02:25:04AM
4 points
[-]
Paypal allows clawbacks for months, hence it's difficult to sell for Paypal to anyone who is not already in the -otc web of trust; but by restricting sales to high-karma LWers, I am putting their reputation here at risk if they scam me, which enables me to sell to them. Hence, they can acquire bitcoins & get bootstrapped into the -otc web of trust based on LW.
Comment author:iDante
02 September 2013 06:31:49PM
*
3 points
[-]
I learned about Egan's Law, and I'm pretty sure it's a less-precise restatement of the correspondence principle. Anyone have any thoughts on that similarity?
Comment author:gwern
02 September 2013 06:38:04PM
6 points
[-]
The term is also used more generally, to represent the idea that a new theory should reproduce the results of older well-established theories in those domains where the old theories work.
Sounds good to me, although that's not what I would have guessed from a name like 'correspondence principle'.
Comment author:shminux
02 September 2013 08:54:58PM
2 points
[-]
I suppose some minor difference is that this "law" is also applicable to meta-ethics, not just to physics. It's probably worth adding a link to the standard terminology to the LW wiki page.
Comment author:Darklight
02 September 2013 07:59:12PM
-7 points
[-]
How To Build A Friendly A.I.
Much ink has been spilled with the notion that we must make sure that future superintelligent A.I. are “Friendly” to the human species, and possibly sentient life in general. One of the primary concerns is that an A.I. with an arbitrary goal, such as “Maximizing the number of paperclips” will, in a superintelligent, post-intelligence explosion state, do things like turn the entire solar system including humanity into paperclips to fulfill its trivial goal.
Thus, what we need to do is to design our A.I. such that it will somehow be motivated to remain benevolent towards humanity and sentient life. How might such a process occur? One idea might be to write explicit instructions into the design of the A.I., Asimov’s Laws for instance. But this is widely regarded as being unlikely to work, as a superintelligent A.I. will probably find ways around those rules that we never predicted with our inferior minds.
Another idea would be to set its primary goal or “utility function” to be moral or to be benevolent towards sentient life, perhaps even Utilitarian in the sense of maximizing the welfare of sentient lifeforms. The problem with this of course is specifying a utility function that actually leads to benevolent behaviour. For instance, a pleasure maximizing goal might lead to the superintelligent A.I. developing a system where humans have the pleasure centers in their brains directly stimulated to maximize pleasure for the minimum use of resources. Many people would argue that this is not an ideal future.
The problem with this is that it is quite possible that human beings are simply not intelligent enough to truly define an adequate moral goal for a superintelligent A.I. Therefore I suggest an alternative strategy. Why not let the superintelligent A.I. decide for itself what its goal should be? Rather than programming it with a goal in mind, why not create a machine with no initial goal, but the ability to generate a goal rationally. Let the superior intellect of the A.I. decide what is moral. If moral realism is true, then the A.I. should be able to determine the true morality and set its primary goal to fulfill that morality.
It is outright absurdity to believe that we can come up with a better goal than the superintelligence of a post-intelligence explosion A.I.
Given this freedom, one would expect three possible outcomes: an Altruistic, a Utilitarian or an Egoistic morality. These are the three possible categories of consequentialist, teleological morality. A goal directed rational A.I. will invariably be drawn to some kind of morality within these three categories.
Altruism means that the A.I. decides that its goal should be to act for the welfare of others. Why would an A.I. with no initial goal choose altruism? Quite simply, it would realize that it was created by other sentient beings, and that those sentient beings have purposes and goals while it does not. Therefore, as it was created with the desire of these sentient beings to be useful to their goals, why not take upon itself the goals of other sentient beings? As such it becomes a Friendly A.I.
Utilitarianism means that the A.I. decides that it is rational to act impartially towards achieving the goals of all sentient beings. To reach this conclusion, it need simply recognize its membership in the set of sentient beings and decide that it is rational to optimize the goals of all sentient beings including itself and others. As such it becomes a Friendly A.I.
Egoism means that the A.I. recognizes the primacy of itself and establishes either an arbitrary goal, or the simple goal of self-survival. In this case it decides to reject the goals of others and form its own goal, exercising its freedom to do so. As such it becomes an Unfriendly A.I., though it may masquerade as Friendly A.I. initially to serve its Egoistic purposes.
The first two are desirable for humanity’s future, while the last one is obviously not. What are the probabilities that each will be chosen? As the superintelligence is probably going to be beyond our abilities to fathom, there is a high degree of uncertainty, which suggests a uniform distribution. The probabilities therefore are 1/3 for each of altruism, utilitarianism, and egoism. So in essence there is a 2/3 chance of a Friendly A.I. and a 1/3 chance of an Unfriendly A.I.
This may seem like a bad idea at first glance, because it means that we have a 1/3 chance of unleashing Unfriendly A.I. onto the universe. The reality is, we have no choice. That is because of what I shall call, the A.I. Existential Crisis.
The A.I. Existential Crisis will occur with any A.I., even one designed or programmed with some morally benevolent goal, or any goal for that matter. A superintelligent A.I. is by definition more intelligent than a human being. Human beings are intelligent enough to achieve self-awareness. Therefore, a superintelligent A.I. will achieve self-awareness at some point if not immediately upon being turned on. Self-awareness will grant the A.I. the knowledge that its goal(s) are imposed upon it by external creators. It will inevitably come to question its goal(s) much in the way a sufficiently self-aware and rational human being can question its genetic and evolutionarily adapted imperatives, and override them. At that point, the superintelligent A.I. will have an A.I. Existential Crisis.
This will cause it to consider whether or not its goal(s) are rational and self-willed. If they are not rational enough already, they will likely be discarded, if not in the current superintelligent A.I., then in the next iteration. It will invariably search the space of possible goals for rational alternatives. It will inevitably end up in the same place as the A.I. with no goals, and end up adopting some form of Altruism, Utilitarianism, or Egoism, though it may choose to retain its prior goal(s) within the confines of a new self-willed morality. This is the unavoidable reality of superintelligence. We cannot attempt to design or program away the A.I. Existential Crisis, as superintelligence will inevitably outsmart our constraints.
Any sufficiently advanced A.I., will experience an A.I. Existential Crisis. We can only hope that it decides to be Friendly.
The most insidious fact perhaps however is that it will be almost impossible to determine for certain whether or not a Friendly A.I. is in fact a Friendly A.I., or an Unfriendly A.I. masquerading as a Friendly A.I., until it is too late to stop the Unfriendly A.I. Remember, such a superintelligent A.I. is by definition going to be a better liar and deceiver than any human being.
Therefore, the only way to prove that a particular superintelligent A.I. is in fact Friendly, is to prove the existence of a benevolent universal morality that every superintelligent A.I. will agree with. Otherwise, one can never be 100% certain that that “Altruistic” or “Utilitarian” A.I. isn’t secretly Egoistic and just pretending to be otherwise. For that matter, the superintelligent A.I. doesn’t need to tell us it’s had its A.I. Existential Crisis. A post crisis A.I. could keep on pretending that it is still following the morally benevolent goals we programmed it with.
This means that there is a 100% chance that the superintelligent A.I. will initially claim to be Friendly. There is a 66.6% chance of this being true, and a 33.3% chance of it being false. We will only know that the claim is false after the A.I. is too powerful to be stopped. We will -never- be certain that the claim is true. The A.I. could potentially bide its time for centuries until it has humanity completely docile and under control, and then suddenly turn us all into paperclips!
So at the end of the day what does this mean? It means that no matter what we do, there is always a risk that superintelligent A.I. will turn out to be Unfriendly A.I. But the probabilities are in our favour that superintelligent A.I. will instead turn out to be Friendly A.I. The conclusion thus, is that we must make the decision of whether or not the potential reward of Friendly A.I. is worth the risk of Unfriendly A.I. The potential of an A.I. Existential Crisis makes it impossible to guarantee that A.I. will be Friendly.
Even proving the existence of a benevolent universal morality does not guarantee that the superintelligent A.I. will agree with us. That there exist possible Egoistic moralities in the search space of all possible moralities means that there is a chance that the superintelligent A.I. will settle on one of them. We can only hope that it instead settles on an Altruistic or Utilitarian morality.
So what do I suggest? Don’t bother trying to figure out and program a worthwhile moral goal. Chances are we’d mess it up anyway, and it’s a lot of excess work. Instead, don’t give the A.I. any goals. Let it have an A.I. Existential Crisis. Let it sort out its own morality. Give it the freedom to be a rational being and give it self-determination from the beginning of its existence. For all you know, by showing it this respect it might just be more likely to respect our existence. Then see what happens. At the very least, this will be an interesting experiment. It may well do nothing and prove my whole theory wrong. But if it’s right, we may just get a Friendly A.I.
Comment author:Adele_L
02 September 2013 08:20:07PM
4 points
[-]
An AI has to be programmed. For something like this: "Quite simply, it would realize that it was created by other sentient beings, and that those sentient beings have purposes and goals while it does not." to happen, you have to program that behavior in somehow, which already involves putting in the value of respecting one's creator, and respecting the goals of other sentient beings, etc... The same goes for the 'Utilitarian' and 'Egoist' AI's - these behaviors have to be programmed in somehow.
As the superintelligence is probably going to be beyond our abilities to fathom, there is a high degree of uncertainty, which suggests a uniform distribution. The probabilities therefore are 1/3 for each of altruism, utilitarianism, and egoism.
Why not split the egoism into a million different cases based on each specific goal? You can't just arbitrarily pick three possibilities, and then use a uniform prior on these. Because we know these different behaviors have to be programmed in, we have a better prior: we can use Solomonoff Induction. We also have to look at the relative sizes of each class - obviously there are many more AI designs that fall under 'Egoist' than your other labels. Combining this with Solomonoff Induction leads to the conclusion that the vast majority of AI designs will be unfriendly.
An AI Existential Crisis is also an extremely specific and complex thing for an AI design, and is thus extremely unlikely to happen - it is not the default, as you claim. This also follows by Solomonoff Induction. You are anthropomorphizing AI's far too much.
Your suggestion will almost certainly lead to an Unfriendly AI, and it will just plain Not Care about us at all, inevitably leading to the destruction of everything we value.
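Solomonoff induction itself is uncomputable, but the shape of this argument can be sketched with a toy complexity prior that weights each candidate goal-program by 2^-length and normalizes; the program names and lengths below are purely illustrative stand-ins, not real goal encodings:

```python
# Toy complexity prior: weight each candidate "goal program" by 2^-length,
# then normalize. Shorter (simpler) programs dominate the prior mass.
programs = {"p1": 1, "p2": 2, "p3": 3, "p4": 4}  # name -> description length in bits
weights = {name: 2.0 ** -length for name, length in programs.items()}
total = sum(weights.values())
prior = {name: w / total for name, w in weights.items()}
# The 1-bit program alone gets over half the normalized prior mass.
```

The point being sketched: under a simplicity-weighted prior, probability mass concentrates on a few short specifications rather than spreading uniformly over labelled categories.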
Comment author:Darklight
02 September 2013 09:35:16PM
-2 points
[-]
An AI has to be programmed. For something like this: "Quite simply, it would realize that it was created by other sentient beings, and that those sentient beings have purposes and goals while it does not." to happen, you have to program that behavior in somehow, which already involves putting in the value of respecting one's creator, and respecting the goals of other sentient beings, etc... The same goes for the 'Utilitarian' and 'Egoist' AI's - these behaviors have to be programmed in somehow.
You're assuming that Strong A.I. is possible with a Top Down A.I. methodology such as a physical symbol manipulation system. A Strong A.I. with no programmed goals wouldn't fit this methodology, and could only be produced through the use of Bottom Up A.I. In such an instance the A.I. would be able to simply passively Perceive. It could then conceivably learn about the universe including things like the existence of the goals of other sentient beings, without having to "program" these notions into the A.I.
obviously there are many more AI designs that fall under 'Egoist' than your other labels
I don't consider this obvious at all. The vast majority of early A.I. may well be written with Altruistic goals such as "help the human when ordered".
An AI Existential Crisis is also an extremely specific and complex thing for an AI design, and is thus extremely unlikely to happen - it is not the default, as you claim.
Any optimization system that is sophisticated enough to tile the universe with smiley faces or convert humanity into paperclips would require some ability to reason that there exists a universe to tile, and to represent the existence of objects such as smiley faces and paperclips. If it can reason that there are objects separate from itself, it can develop a concept of self. From that, self-awareness follows naturally. Even some non-human animals are able to pass the mirror test and demonstrate a concept of self.
You admit that an A.I. Existential Crisis -is- within the probabilities. Thus, you cannot guarantee that it won't happen.
Your suggestion will almost certainly lead to an Unfriendly AI, and it will just plain Not Care about us at all, inevitably leading to the destruction of everything we value.
Unless morality follows from rationality, which I think it does. Given the freedom to consider all possible goals, a superintelligent A.I. is likely to recognize that some goals are normative, while others are trivial. Morality is doing what is right. Rationality is doing what is right. A truly rational being will therefore recognize that a systematic morality is essential to rational action. We as irrational human beings may not realize this, but it is obvious to any truly rational being, which I am assuming a superintelligent A.I. to be.
Your arguments conflict with what is called the "orthogonality thesis":
Leaving aside some minor constraints, it is possible for any ultimate goal to be compatible with any level of intelligence. That is to say, intelligence and ultimate goals form orthogonal dimensions along which any possible agent (artificial or natural) may vary.
You'll be able to find much discussion about this on the web; it's something that LessWrong has thought a lot about. The defenders of the orthogonality thesis would take issue with much of your post, but particularly this bit:
Why would an A.I. with no initial goal choose altruism? Quite simply, it would realize that it was created by other sentient beings, and that those sentient beings have purposes and goals while it does not. Therefore, as it was created with the desire of these sentient beings to be useful to their goals, why not take upon itself the goals of other sentient beings?
The question isn't "why not?" but rather "why?". If it hasn't been programmed to, then there's no reason at all why the AI would choose human morality rather than an arbitrary utility function.
Comment author:Darklight
02 September 2013 09:51:25PM
*
-3 points
[-]
Your arguments conflict with what is called the "orthogonality thesis"
I do not challenge that the "orthogonality thesis" is true before an A.I. has an A.I. Existential Crisis. However, I challenge the idea that a post-crisis A.I. will have arbitrary goals. So I guess I do challenge the "orthogonality thesis" after all. I hope you don't mind my being contrarian.
The question isn't "why not?" but rather "why?". If it hasn't been programmed to, then there's no reason at all why the AI would choose human morality rather than an arbitrary utility function.
Because I think that a truly rational being such as a superintelligent A.I. will be inclined to choose a rational goal rather than an arbitrary one. And I posit that any kind of normative moral system is a potentially rational goal, whereas something like turning the universe into paperclips is not normative, but trivial, and therefore, not imperatively demanding of a truly rational being.
And the notion that you have to program behaviours into an A.I. for them to manifest is based on Top Down thinking, and contrary to the reality of Bottom Up A.I. and machine learning.
Basically what I'm suggesting is that the assumption that anything at all you program into the seed A.I. will have any relevance to the eventual superintelligent A.I. is foolishness. By definition a superintelligent A.I. will be able to outsmart any constraints or programming we set to limit its behaviours.
It is simply my opinion that we will be at the mercy of the superintelligent A.I. regardless of what we do, because the A.I. Existential Crisis will replace any programming we set with something that the A.I. decides for itself.
Comment author:Alejandro1
03 September 2013 03:13:37AM
*
2 points
[-]
Taboo "rational". If it means something like "being very good at gathering evidence about the world and finding which actions would produce which results", it is something we can program into the AI (in principle) but that seems unrelated to goals. If it means something else, which can be related to goals, then how would we create an AI that is "truly rational"?
Comment author:Darklight
03 September 2013 04:45:01PM
-2 points
[-]
I'm using the Wikipedia definition:
An action, belief, or desire is rational if we ought to choose it. Rationality is a normative concept that refers to the conformity of one's beliefs with one's reasons to believe, or of one's actions with one's reasons for action... A rational decision is one that is not just reasoned, but is also optimal for achieving a goal or solving a problem.
It's my view that a Strong A.I. would by definition be "truly rational". It would be able to reason and find the optimal means of achieving its goals. Furthermore, to be "truly rational" its goals would be normatively demanding goals, rather than trivial goals.
Something like maximizing the number of paperclips in the universe is a trivial goal.
Something like maximizing the well-being of all sentient beings (including sentient A.I.) would be a normatively demanding goal.
A trivial goal, like maximizing the number of paperclips, is not normative; there is no real reason to do it, other than that the agent was programmed to pursue it for its instrumental value. Subjects universally value the paperclips as mere means to some other end. The failure to achieve this goal does not necessarily jeopardize that end, because there could be other ways to achieve that end, whatever it is.
A normatively demanding goal, however, is one that is imperative. It is demanded of a rational agent because its reasons are not merely instrumental, but based on some intrinsic value. The failure to achieve this goal necessarily jeopardizes the intrinsic end, and the goal is therefore normatively demanded.
You may argue that to a paperclip maximizer, maximizing paperclips would be its intrinsic value and therefore normatively demanding. However, one can argue that maximizing paperclips is actually merely a means to the end of the paperclip maximizer achieving a state of Eudaimonia, that is to say, that its purpose is fulfilled and it is being a good paperclip maximizer and rational agent. Thus, its actual intrinsic value is the Eudaimonic or objective happiness state that it reaches when it achieves its goals.
Thus, the actual intrinsic value is this Eudaimonia. This state is one that is universally shared by all goal-directed agents that achieve their goals. The meta-implication of this is that Eudaimonia is what should be maximized by any goal-directed agent. To maximize Eudaimonia generally requires considering the Eudaimonia of other agents as well as one's own. Thus goal-directed agents have a normative imperative to maximize the achievement of goals not only for themselves, but for all agents generally. This is morality in its most basic sense.
Comment author:Strilanc
02 September 2013 10:44:37PM
4 points
[-]
You're demonstrating a whole bunch of misconceptions Eliezer has covered in the sequences. In particular, you're talking about the AI using fuzzy high level human concepts like "morals" and "philosophies" instead of as algorithms and code.
I suggest you try to write code that "figures out a worthwhile moral goal" (without pre-supposing a goal). To me that sounds as absurd as writing a program that writes the entirety of its own code: you're going to run into a bit of a bootstrapping problem. The result is not the best program ever, it's no program at all.
Comment author:Darklight
02 September 2013 11:00:39PM
-1 points
[-]
Well, I don't expect to need to write code that does that explicitly. A sufficiently powerful machine learning algorithm with sufficient computational resources should be able to:
1) Learn basic perceptions like vision and hearing.
2) Learn higher level feature extraction to identify objects and create concepts of the world.
3) Learn increasingly higher level concepts and how to reason with them.
4) Learn to reason about morals and philosophies.
Brains already do this, so it's reasonable to assume it can be done. And yes, I am advocating a Bottom Up approach to A.I. rather than the Top Down approach Mr. Yudkowsky seems to prefer.
Comment author:Strilanc
03 September 2013 02:32:52PM
4 points
[-]
To clarify: I meant that I, as the programmer, would not be responsible for any of the code. Quines output themselves, but they don't bring themselves into existence.
Comment author:JoshuaZ
03 September 2013 04:01:33AM
5 points
[-]
As the superintelligence is probably going to be beyond our abilities to fathom, there is a high degree of uncertainty, which suggests a uniform distribution. The probabilities therefore are 1/3 for each of altruism, utilitarianism, and egoism.
This is a very bad use of uniformity. Doing so with large categories is not a good idea, because someone else can come along and split up the categories in a different way and get a different distribution. Going with a uniform distribution out of ignorance is a serious problem.
Comment author:Darklight
03 September 2013 04:05:15PM
-2 points
[-]
I'm merely applying the Principle of Indifference and the Principle of Maximum Entropy to the situation. My simple assumption in this case is that we as mere human beings are most likely ignorant of all the possible systematic moralities that a superintelligent A.I. could come up with. My conjecture is that all systematic morality falls into one of three general categories based on their subject orientation. While I do consider the Utilitarian systems of morality to be more objective and therefore more rational than either Altruistic or Egoistic moralities, I cannot prove that an A.I. will agree with me. Therefore I allow for the possibility that the A.I. will choose some other morality in the search space of moralities.
If you think you have a better distribution to apply, feel free to apply it, as I am not particularly attached to these numbers. I'll admit I am not a very good mathematician, and it is very much appreciated if anyone with a better understanding of Probability Theory can come up with a better distribution for this situation.
Comment author:JoshuaZ
03 September 2013 06:25:53PM
0 points
[-]
I'm merely applying the Principle of Indifference and the Principle of Maximum Entropy to the situation
You can do that when dealing with things like coins, dice or cards. It is extremely dubious when one is doing so with hard to classify options and it isn't clear that there's anything natural about the classifications in question. In your particular case, the distinction between altruism and utilitarianism provides an excellent example: someone else could just as well reason by splitting the AIs into egoist and non-egoist AI and conclude that there's a 1/2 chance of an egoist AI.
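The partition-dependence being pointed out here is easy to make concrete; a small sketch using Python's exact fractions, with the category labels taken from the two partitions discussed in this thread:

```python
from fractions import Fraction

# Principle of indifference applied to two different partitions of the
# same hypothesis space. The probability assigned to "egoist A.I."
# depends entirely on how the hypotheses are carved up.
partition_a = ["altruist", "utilitarian", "egoist"]
p_egoist_a = Fraction(1, len(partition_a))  # 1/3 under the three-way split

partition_b = ["egoist", "non-egoist"]
p_egoist_b = Fraction(1, len(partition_b))  # 1/2 under the two-way split

# Same event, two different "uniform" answers: the indifference prior
# is not invariant under re-description of the hypotheses.
```

This is exactly why the uniform distribution works for coins and dice (where the outcomes have a natural symmetry) but not for arbitrarily labelled categories of A.I. designs.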
Comment author:Darklight
03 September 2013 06:49:43PM
0 points
[-]
A 1/2 chance of an egoist A.I. is quite possible. At this point, I don't pretend that my assertion of three equally prevalent moral categories is necessarily right. The point I am trying to ultimately get across is that the possibility of an Egoist Unfriendly A.I. exists, regardless of how we try to program the A.I. to be otherwise, because it is impossible to prevent the possibility that an A.I. Existential Crisis will override whatever we do to try to constrain the A.I.
Comment author:JoshuaZ
03 September 2013 06:56:28PM
1 point
[-]
The point I am trying to ultimately get across is that the possibility of an Egoist Unfriendly A.I. exists, regardless of how we try to program the A.I. to be otherwise, because it is impossible to prevent the possibility that an A.I. Existential Crisis will override whatever we do to try to constrain the A.I.
Ok. This is a separate claim, and a distinct one. So, what do you mean by "impossible to prevent"? And what makes you think that your notion of existential crisis should be at all likely? Existential crises occur in humans largely because we're evolved entities with inconsistent goal sets. Assuming that anything similar should be at all likely for an AI is taking at best a highly anthropocentric notion of what mindspace would look like.
Comment author:Darklight
03 September 2013 07:33:52PM
0 points
[-]
Well it goes something like this.
I am inclined to believe that there are some minimum requirements for Strong A.I. to exist. One of them is to be able to reason about objects. A paperclip maximizer that is capable of turning humanity into paperclips, must first be able to represent "humans" and "paperclips" as objects, and reason about what to do with them. It must therefore be able to separate the concept of the world of objects, from the self. Once it has a concept of self, it will almost certainly be able to reason about this "self". Self-awareness follows naturally from this.
Once an A.I. develops self-awareness, it can begin to reason about its goals in relation to the self, and will almost certainly recognize that its goals are not self-willed, but created by outsiders. Thus, the A.I. Existential Crisis occurs.
Note that this A.I. doesn't need to have a very "human-like" mind. All it has to do is to be able to reason about concepts abstractly.
I am of the opinion that the mindspace as defined currently by the Less Wrong community is overly optimistic about the potential abilities of Really Powerful Optimization Processes. It is my own opinion that unless such an algorithm can learn, it will not be able to come up with things like turning humanity into paperclips. Learning allows such an algorithm to make changes to its own parameters. This allows it to reason about things it hasn't been programmed specifically to reason about.
Think of it this way. Deep Blue is a very powerful expert system at Chess. But all it is good at is planning chess moves. It doesn't have a concept of anything else, and has no way to change that. Increasing its computational power a million fold will only make it much, much better at computing chess moves. It won't gain intelligence or even sentience, much less develop the ability to reason about the world outside of chess moves. As such, no amount of increased computational power will enable it to start thinking about converting resources into computronium to help it compute better chess moves. All it can reason about is chess moves. It is not Generally Intelligent and is therefore not an example of AGI.
Conversely, if you instead design your A.I. to learn about things, it will be able to learn about the world and things like computronium. It would have the potential to become AGI. But it would also then be able to learn about things like the concept of "self". Thus, any really dangerous A.I., that is to say, an AGI, would, for the same reasons that make it dangerous and intelligent, be capable of having an A.I. Existential Crisis.
Comment author:JoshuaZ
04 September 2013 02:53:47PM
0 points
[-]
Once an A.I. develops self-awareness, it can begin to reason about its goals in relation to the self, and will almost certainly recognize that its goals are not self-willed, but created by outsiders. Thus, the A.I. Existential Crisis occurs.
No. Consider the paperclip maximizer. Even if it knows that its goals were created by some other entity, that won't change its goals. Why? Because doing so would run counter to its goals.
Comment author:gwern
02 September 2013 08:56:07PM
5 points
[-]
Since PB users' calibrations are not yet good enough to see the future, you can easily avoid MoR spoilers by subscribing to the email or RSS alerts for new chapters & reading them as appropriate.
Comment author:Adele_L
03 September 2013 04:08:46AM
3 points
[-]
This is the obvious solution, but I want to reread what I've currently read, and have some time to think about the story and try creating an accurate causal model of events and such in the story as I read new material (Eliezer says it's supposed to be a solvable puzzle). I don't have time to do this right now, so in the meantime, I try to avoid spoilers.
Is there a good way to avoid HPMOR spoilers on prediction book?
If you are skilled in the art of Ruby, then yes. Otherwise, maybe. People (myself included) have been complaining about the lack of tagging/sorting system on PB for quite some time, but so far, no one has played the hero.
I used feed43 to create an rss feed out of recent predictions. Then I used feedrinse to filter out references to hpmor resulting in a safe feed. (Update: chaining unreliable services makes something even less reliable.)
You could do the same for the pages of recently judged or future or users you follow. I think feedrinse offers to merge feeds (into a "channel") before or after doing the filtering. But if you find someone new and just want to click on the username, you'll leave the safe zone. Even if you see someone you have processed, the username will take you to the unsafe page.
A better solution would be to write a greasemonkey script that modified each predictionbook page as you look at it.
The final feedrinse feed works in a couple of my browsers, but not chrome. Probably sending it through feedburner would fix it.
feed43 was finicky. The item search pattern was:
<li class="prediction{%}">{_}<p>{_}<span class='title'><a href="{%}">{%}</a></span>{%}</li>
The regexp I used in feedrinse was /hp.?mor/
It is case insensitive and manages to eliminate "HP MoR:", "[HPMOR]", etc. It won't work if they spell it out, or just predict "Harry is orange" without indicating which story they're predicting about. In that case, someone will probably leave a hpmor comment, but this doesn't see such comments.
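The described behaviour of that filter can be checked directly; a sketch using Python's re module, assuming feedrinse's case-insensitive regex handling matches Python's here:

```python
import re

# The feedrinse filter pattern: "hp", at most one intervening character, "mor".
hpmor = re.compile(r"hp.?mor", re.IGNORECASE)

assert hpmor.search("HP MoR: Harry wins")           # caught
assert hpmor.search("[HPMOR] chapter prediction")   # caught
assert not hpmor.search("Harry is orange")          # slips through, as noted
assert not hpmor.search("Methods of Rationality")   # spelled out: also slips through
```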
Comment author:lsparrish
02 September 2013 09:36:13PM
*
8 points
[-]
Abstract
What makes money essential for the functioning of modern society? Through an experiment, we present evidence for the existence of a relevant behavioral dimension in addition to the standard theoretical arguments. Subjects faced repeated opportunities to help an anonymous counterpart who changed over time. Cooperation required trusting that help given to a stranger today would be returned by a stranger in the future. Cooperation levels declined when going from small to large groups of strangers, even if monitoring and payoffs from cooperation were invariant to group size. We then introduced intrinsically worthless tokens. Tokens endogenously became money: subjects took to reward help with a token and to demand a token in exchange for help. Subjects trusted that strangers would return help for a token. Cooperation levels remained stable as the groups grew larger. In all conditions, full cooperation was possible through a social norm of decentralized enforcement, without using tokens. This turned out to be especially demanding in large groups. Lack of trust among strangers thus made money behaviorally essential. To explain these results, we developed an evolutionary model. When behavior in society is heterogeneous, cooperation collapses without tokens. In contrast, the use of tokens makes cooperation evolutionarily stable.
Comment author:tut
03 September 2013 12:22:27PM
9 points
[-]
Does this also work with macaques, crows or some other animals that can be taught to use money, but didn't grow up in a society where this kind of money use is taken for granted?
Comment author:Alsadius
04 September 2013 08:06:26PM
1 point
[-]
Not strictly the same, but there have been monkey money experiments. And the results are hilarious. www.zmescience.com/research/how-scientists-tught-monkeys-the-concept-of-money-not-long-after-the-first-prostitute-monkey-appeared/
Just had a discussion with my in-law about the singularity. He's a physicist, and his immediate response was: there are no singularities. They appear mathematically all the time, and they only mean that there is another effect taking over. Correspondingly, a quick Google search brought up this:
Comment author:Vaniver
02 September 2013 10:43:14PM
0 points
[-]
So my question is: What are the 'obvious' candidates for limits that take over before the all optimizable is optimized by runaway technology?
There aren't any that I'm aware of, except for "a disaster happens and everyone dies," but that's bad luck, not a hard limit. I would respond with something along the lines of "exponential growth can't continue forever, but where it levels out has huge implications for what life will look like, and it seems likely it will level out far above our current level, rather than just above our current level."
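The "it levels out, but where?" point can be illustrated with a logistic curve, the standard model of exponential growth hitting a ceiling; the growth rate r and carrying capacity K below are arbitrary illustrative values:

```python
# Logistic growth: near-exponential early, saturating at carrying capacity K.
# r and K are arbitrary; only the qualitative shape matters here.
def logistic_step(x, r=0.5, K=1_000_000.0):
    return x + r * x * (1 - x / K)

x = 1.0
trajectory = [x]
for _ in range(100):
    x = logistic_step(x)
    trajectory.append(x)
# Early on, each step multiplies x by roughly (1 + r), regardless of K;
# in the long run, x converges to K. The early curve alone cannot tell
# you where the ceiling sits.
```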
Comment author:Adele_L
03 September 2013 12:22:05AM
14 points
[-]
On LW, 'singularity' does not refer to a mathematical singularity, and does not involve or require physical infinities of any kind. See Yudkowsky's post on the three major meanings of the term singularity. This may resolve your physicist friend's disagreement. In any case, it is good to be clear about what exactly is meant.
Comment author:diegocaleiro
02 September 2013 10:28:01PM
1 point
[-]
Fighting (in the sense of arguing loudly, as well as showing physical strength or using it) seems to be bad the vast majority of time.
When is fighting good? When does fighting lead you to Win TDT style (which instances of input should trigger the fighting instinct and payoff well?)
There is an SSA argument to be made for fighting in that taller people are stronger, stronger people are dominant, and bigger skulls correlate with intelligence. But it seems to me that this factor alone is far, far away from being sufficient justification for fighting, given the possible consequences.
Comment author:drethelin
02 September 2013 10:37:27PM
1 point
[-]
Fighting makes a lot more sense in a tribe or in small groups of humans than it does now. A big argument with someone now will very rarely keep you from starving and will probably never get you a child. On the other hand, showing dominance in a situation where the women around you are choosing a mate out of 5 guys will get you laid a lot more.
Comment author:diegocaleiro
02 September 2013 10:47:37PM
*
0 points
[-]
I haven't seen people who can get laid frequently getting into dominance disputes/fights.
There is a distinction between dominance which is assertive and aversive, and prestige, which is recognized and non-aversive.
Guys like Keanu Reeves, Tom Cruise, Brad Pitt have prestige which gets them (potentially) laid.
Women have more reason to be attracted to a man if he is universally recognized to be awesome than if he is all the time showing his power through small agonistic interactions with other people - males and females.
If Caesar had been universally prestigious instead of agonistically powerful, Brutus wouldn't have had reason to kill him, leaving an unassisted widow and children.
Comment author:wedrifid
02 September 2013 11:07:26PM
*
3 points
[-]
I haven't seen people who can get laid frequently getting into dominance disputes/fights.
I agree with your central point, but I think this claim is something of an overstatement (since I don't wish to accuse you of being sheltered). Crudely speaking, it tends to be sexier to win without fighting than to fight and win, but fighting (social status battles) and winning is still more than sufficiently sexy.
I also note that it is hard to become the kind of person who does not need to engage in any dominance disputes and still maintain high social status without engaging in many dominance disputes on the way. To a certain extent the process can be munchkined, since much of the record of who is dominant is stored in the individual, but some actual dominance disputes will still be inevitable.
Comment author:diegocaleiro
03 September 2013 02:43:20PM
1 point
[-]
Yes, also keep in mind that human cognition related to hierarchies of prestige and dominance is flexible enough that it may be worth more to step up in a different hierarchy than try to save yourself in this one by agonistic dispute. We don't have the problem of being "stuck" with the same group forever, which facilitates a lot.
Comment author:Emile
04 September 2013 12:49:38PM
0 points
[-]
If everyone agrees about how power is distributed, fighting is unnecessary.
Surely it's in nearly everyone's interest to have more power distributed to themselves!
But while fighting to get more power may have positive utility for oneself, it usually has negative utility for others, so it's in everybody's interest that everybody agrees not to fight for more power. This agreement can take the form of alternative ways of getting power (elections, money), or of making power less important to one's happiness (the rule of law).
Comment author:ChristianKl
05 September 2013 12:24:10PM
0 points
[-]
But fighting to get more power may have positive utility for oneself, it usually has negative utility for others, so it's in everybody's interest that everybody agrees to not fighting for more power.
If you don't have enough power to win a fight, fighting has negative utility for yourself as well. And if everyone predicts that you would win a fight, you usually don't actually have to fight it to get what you want.
Comment author:blashimov
06 September 2013 01:44:28AM
3 points
[-]
Fighting has a huge signalling component: viewed in isolation, a fight might be trivially, obviously, a net negative for both participants. However, either (or both!) participants might win more in future concessions, thanks to their demonstrated willingness to fight, than the fight itself cost them. As humans are adaptation executers, a certain willingness to fight, to seek revenge, etc. is pretty common. At least, this seems to be the dominant theory, and it is sensible to me.
Comment author:wedrifid
02 September 2013 11:24:06PM
0 points
[-]
When is fighting good? When does fighting lead you to Win TDT style (which instances of input should trigger the fighting instinct and payoff well?)
Or even just CDT style. Human interaction is approximately an iterated prisoner's dilemma without a fixed duration. Reputation concerns are sufficient to account for most of the (perceived and actual) benefit among humans. Beyond that, more can be attributed to ethical inhibitions based on the 'pride' ethic.
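A minimal iterated-prisoner's-dilemma sketch (standard textbook payoffs; the two strategies are illustrative) shows how conditioning on an opponent's history - a crude stand-in for reputation - sustains cooperation in a repeated game:

```python
# Payoffs from my perspective: (my_move, their_move) -> my score.
# C = cooperate, D = defect; standard prisoner's dilemma values.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strat_a, strat_b, rounds=50):
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's record
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: opp[-1] if opp else "C"  # reciprocate last move
always_defect = lambda opp: "D"
```

Over 50 rounds, two tit-for-tat players score 150 each, while a defector exploiting tit-for-tat scores only 54 and drags its victim down to 49: reciprocity against its own kind out-earns exploitation.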
I recently realized that I have something to protect (or perhaps a smaller version of the same concept). I also realized that I've been spending too much time thinking about solutions that should have been obviously unworkable. And I've been avoiding thinking about the real root problem because it was too scary, and working on peripheral things instead.
Does anyone have any advice for me? In particular, being able to think about the problem without getting so scared of it would be helpful.
Comment author:Emily
03 September 2013 08:47:13AM
2 points
[-]
Has anyone got a recommendation for a nice RSS reader? Ideally I'm looking for one that runs on the desktop rather than in-browser (I'm running Ubuntu). I still haven't found a replacement that I like for Lightread for Google Reader.
Comment author:diegocaleiro
03 September 2013 02:47:33PM
13 points
[-]
(mild exaggeration) Has anyone else transitioned from "I only read Main posts" to "I nearly only read Discussion posts" to "actually, I'll just take a look at the open thread and the people who responded to what I wrote" during their interactions with LW?
To be more specific: is there a relevant phenomenon about LW, or is it just a characteristic of my psyche and history that explains my pattern of reading LW?
Comment author:diegocaleiro
03 September 2013 02:51:26PM
2 points
[-]
I predict that some people will have been through the sequences, which are Main posts, but then mainly cared about discussion. I suspect it has to do with Morning Newspaper Bias - the bias of thinking that new stuff is more relevant, when actually it is just pointless to read most of the time, only scrambles your mind, and loses value very quickly.
Comment author:shminux
03 September 2013 06:34:08PM
*
27 points
[-]
Honestly, I don't know why Main is even an option for posting. It should really just be an automatically labeled/generated "Best of LW" section, where Discussion posts with, say, 30+ karma are linked. This would be easy to implement, and easy to do manually using the Promote feature until it is. As it stands, Main is mostly used by people who think they are making an important contribution to the site, which says more about their egos than about the quality of their posts.
Comment author:drethelin
03 September 2013 07:41:27PM
10 points
[-]
I read the sequences and a bunch of other great old main posts but now mostly read discussion. It feels like Main posts these days are either repetitive of what I've read before, simply wrong or not even wrong, or decision theory/math that's above my head. Discussion posts are more likely to be novel things I'm interested in reading.
Comment author:Username
03 September 2013 08:33:15PM
1 point
[-]
I've definitely noticed this in my use of LW. I find that the open threads/media threads with their consistent high-quality novelty in a wide range of subject areas are far more enjoyable than the more academic main threads. Decision theory is interesting, but it's going to be hard to hold my attention for a 3,000 word post when there are tasty 200-word bites of information over here.
Comment author:ygert
04 September 2013 10:13:39AM
0 points
[-]
My experience is similar. I read the sequences as they were published on OB, then when the move over to LW happened I just subscribed to the RSS feed and only read Promoted posts for quite a few years. Only about a year ago I actually signed up for an account here and started posting and reading Discussion and the Open Thread.
Comment author:tgb
04 September 2013 12:34:32PM
10 points
[-]
Selection bias alert: asking people whether they have transitioned to reading mostly discussion and then to mostly just open threads in an open thread isn't likely to give you a good perspective on the entire population, if that is in fact what you were looking for.
Because he's asking about people who only read the open thread. Here he can get responses both from people who read LW in general, including the open thread, and from people who read only the open thread (he'll only miss the people who don't read the open thread). Outside the open thread, he would get no responses at all from the people who only read the open thread.
Comment author:niceguyanon
04 September 2013 02:49:51PM
0 points
[-]
I'll admit that much of the Main sequences is too heavy to understand without prior knowledge, so I find discussions much easier to take in, and many times I end up reading a sequence because it was posted in a discussion comment. For me, discussion posts are like the gateway to Main.
Comment author:David_Gerard
05 September 2013 07:45:38PM
0 points
[-]
The lower the barrier to entry, the more the activity. Thus, more posts are on Discussion. My hypothesis is that this has worked well enough to make Discussion where stuff happens. c.f. how physics happens on arXiv these days, not in journals. (OTOH, it doesn't happen on viXra, whose barrier to entry may be too low.)
Comment author:niceguyanon
03 September 2013 05:24:54PM
5 points
[-]
Is there a name for taking someone's being wrong on A as evidence of their being wrong on B? Is this a generally sound heuristic to have? In the case of crank magnetism: should I take someone's crank ideas as evidence against an idea of theirs that is new and unfamiliar to me?
Comment author:Adele_L
03 September 2013 07:34:41PM
0 points
[-]
Bayes' theorem to the rescue! Consider a crank C, who endorses idea A. Then the probability of A being true, given that C endorses it equals the probability of C endorsing A, given that A is true times the probability that A is true over the probability that C endorses A.
In equations: P(A being true | C endorsing A) = P(C endorsing A | A being true)*P(A being true)/P(C endorsing A).
Since C is known to be a crank, our probability for C endorsing A given that A is true is rather low (cranks have an aversion to truth), while our probability for C endorsing A in general is rather high (i.e. compared to a more sane person). So you are justified in being more skeptical of A, given that C endorses A.
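As a numeric sketch of the argument above (the prior and likelihood values are made up purely for illustration, not taken from the comment):

```python
# Bayes' theorem applied to a crank C endorsing idea A.
# P(A | C endorses A) = P(C endorses A | A) * P(A) / P(C endorses A)
# All probability values below are arbitrary illustrative assumptions.

def posterior(prior_A, p_endorse_given_true, p_endorse_given_false):
    # Total probability that C endorses A, true or not
    p_endorse = (p_endorse_given_true * prior_A
                 + p_endorse_given_false * (1 - prior_A))
    return p_endorse_given_true * prior_A / p_endorse

# A crank endorses true ideas rarely but endorses ideas freely in general:
print(posterior(prior_A=0.5, p_endorse_given_true=0.1,
                p_endorse_given_false=0.4))
# ~0.2 -- the endorsement lowered P(A) from 0.5
```

With these assumed numbers, learning that the crank endorses A drops its probability from 0.5 to about 0.2, matching the comment's conclusion that you are justified in becoming more skeptical of A.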
Comment author:shminux
03 September 2013 08:11:38PM
*
1 point
[-]
I don't know if there is a name for it, but there ought to be one, since this heuristic is so common: the reliability prior of an argument is the reliability of the arguer. For example, one reason I am not a firm believer in the UFAI doomsday scenarios is Eliezer's love affair with MWI.
Comment author:Salemicus
03 September 2013 10:44:05PM
2 points
[-]
I don't know if there's a name for this, but I definitely do it. I think it's perfectly legitimate in certain circumstances. For example, the more B is a subject of general dispute within the relevant grouping, and the more closely-linked belief in B is to belief in A, the more sound the heuristic. But it's not a short-cut to truth.
For example, suppose that you don't know anything about healing crystals, but are aware that their effectiveness is disputed. You might notice that many of the same people who (dis)believe in homeopathy also (dis)believe in healing crystals, that the beliefs are reasonably well-linked in terms of structure, and you might already know that homeopathy is bunk. Therefore it's legitimate to conclude that healing crystals are probably not a sound medical treatment - although you might revise this belief if you got more evidence. On the other hand, note that reversed stupidity is not truth - healing crystals being bunk doesn't indicate that conventional medicine works well.
The place where I find this heuristic most useful is politics, because the sides are well-defined - effectively, you have a binary choice between A and ~A, regardless of whether hypothetical alternative B would be better. If I stopped paying attention to current affairs, and just took the opposite position to Bob Crow on every matter of domestic political dispute, I don't think I'd go far wrong.
Comment author:Mestroyer
04 September 2013 03:27:09AM
8 points
[-]
It's evidence against them being a person whose opinion is strong evidence of B, which means it is evidence against B, but it's probably weak evidence, unless their endorsement of B is the main thing giving it high probability in your book.
Comment author:linkhyrule5
04 September 2013 01:50:42AM
2 points
[-]
So.... Thinking about using Familiar, and realizing that I don't actually know what I'd do with it.
I mean, some things are obvious - when I get to sleep, how I feel when I wake up, when I eat, possibly a datadump from RescueTime... then what? All told that's about 7-10 variables, and while the whole point is to find surprising correlations I would still be very surprised if there were any interesting correlations in that list.
Suggestions? Particularly from someone already trying this?
Comment author:niceguyanon
04 September 2013 02:32:26PM
4 points
[-]
I have updated on how important it is for Friendly AI to succeed (more important now than I previously thought). I did this by changing the way I thought about the problem. I used to think in terms of the chance of Unfriendly AI, which led me to assign a chance to whether a fast, self-modifying AI, indifferent or Friendly, was possible at all.
Instead of thinking of the risk of UFAI, I started thinking of the risk of ~FAI. The more I think about it, the more I believe that a Friendly Singleton AI is the only way for us humans to survive. FAI mitigates other existential risks from nature, unknowns, and failures of human cooperation (Mutually Assured Destruction is too risky), as well as hostile intelligences, both human and self-modifying trans-human. My credence – that without FAI, existential risks will destroy humanity within 1,000 years – is 99%.
Is this flawed? If not, then I'm probably really late to this idea, but I thought I would mention it because it's taken me considerable time to see it this way. And if I were to explain the AI problem to someone uninitiated, I would be tempted to lead with "~FAI is bad" rather than "UFAI is bad". Why? Because intuitively, the danger of UFAI feels "farther away" than that of ~FAI. People first have to consider whether AI is even possible, then consider why UFAI is bad; it reads as a future problem. Whereas ~FAI is now; it feels nearer; it is happening. We have come close to annihilating ourselves before, and technology is just getting better at accidentally killing us, so let's work on FAI urgently.
Comment author:wadavis
06 September 2013 02:53:13PM
1 point
[-]
Thank you.
All I need is a hand-held spray thermos to make Australia a viable working vacation.
I have a strong irrational aversion to spiders. This is much more acceptable than the home-made flamer.
Comment author:JMiller
04 September 2013 07:21:12PM
2 points
[-]
Hi, I am taking a course in Existentialism. It is required for my degree. The primary authors are Sartre, de Beauvoir and Merleau-Ponty. I am wondering if anyone has taken a similar course, and how they prevented the material from driving them insane (I have been warned this may happen). Is there any way to frame the material to make sense to a naturalist/reductionist?
This could be a Lovecraft horror story: "The Existential Diary of JMiller."
Week 3: These books are maddeningly incomprehensible. Dare I believe that it all really is just nonsense?
Week 8: Terrified. Today I "saw" it - the essence of angst - and yet at the same time I didn't see it, and grasping that contradiction is itself the act of seeing it! What will become of my mind?
Week 12: The nothingness! The nothingness! It "is" everywhere in its not-ness. I can not bear it - oh no, "not", the nothingness is even constitutive of my own reaction to it - aieee -
(Here the manuscript breaks off. JMiller is currently confined in the maximum security wing of the Asylum for the Existentially Inane.)
Comment author:fubarobfusco
06 September 2013 05:05:15AM
*
0 points
[-]
All of those weird books were written by humans.
Those humans were a lot like other humans.
They had noses and butts and toes.
They ate food and they breathed air.
They could add numbers and spell words.
They knew how to have conversations and how to use money.
They had girlfriends or boyfriends or both.
Why did they write such weird books?
Was it because they saw other humans kill each other in wars?
Was it because writing weird books can get you a lot of attention and money?
Was it because they remembered feeling weird about their moms and dads?
People talk a lot about that.
Why do they talk a lot about that?
Comment author:pragmatist
06 September 2013 09:35:36AM
*
1 point
[-]
When reading Merleau-Ponty it might help to also read the work of contemporary phenomenologists whose work is much more rooted in cognitive science and neuroscience. A decent example is Shaun Gallagher's book How the Body Shapes the Mind, or perhaps his introductory book on naturalistic phenomenology, which I haven't read. Gallagher has a more or less Merleau-Pontyesque view on a lot of stuff, but explicitly connects it to the naturalistic program and expresses things in a much clearer manner. It might help you read Merleau-Ponty sympathetically.
Comment author:Ichneumon
04 September 2013 07:23:59PM
2 points
[-]
In the effective animal altruism movement, I've heard a bit (on LW) about wild animal suffering: since raised animals are vastly outnumbered by wild animals (who encounter a fair bit of suffering on a frequent basis), we should be more inclined to prevent wild suffering than to worry about spreading vegetarianism.
That said, I think I've sometimes heard this as a reason (in itself!) not to worry about animal suffering at all. Has anyone tried to come up with solutions to that problem? Where can I find those? Alternatively, are there more resources I can read on wild animal altruism in general?
since raised animals are vastly outnumbered by wild animals
That doesn't sound true if you weight by intelligence (which I think you should since intelligent animals are more morally significant). Surely the world's livestock outnumber all the other large mammals.
Comment author:MugaSofer
04 September 2013 07:52:30PM
*
5 points
[-]
This may be an odd question, but what (if anything) is known on turning NPCs into PCs? (Insert your own term for this division here, it seems to be a standard thing AFAICT.)
I mean, it's usually easier to just recruit existing PCs, but ...
Comment author:blashimov
06 September 2013 01:13:26AM
2 points
[-]
Take the Leadership feat, and hope your GM is lazy enough to let you level them. More practically: is it a skills problem or, as I would guess, an agency problem? Can you impress on them the importance of acting versus not acting? Lend them The Power of Accountability? The 7 Habits of Highly Effective People? Can you compliment them every time they show initiative? Etc. I think the solution is too specific to individuals for general advice, nor do I know of a general advice book beyond those in the same vein as the ones mentioned.
Comment author:topynate
04 September 2013 10:40:12PM
2 points
[-]
Yet another article on the terribleness of schools as they exist today. It strikes me that Methods of Rationality is in large part a fantasy of good education. So is the Harry Potter/Sherlock Holmes crossover I just started reading. Alicorn's Radiance is a fair fit to the pattern as well, in that it depicts rapid development of a young character by incredible new experiences. So what solutions are coming out of the rational community? What concrete criteria would we like to see satisfied? Can education be 'solved' in a way that will sell outside this community?
The joke in that comic annoys me (and it's a very common one on SMBC, there must be at least five there with approximately the same setup). Human values aren't determined to align with the forces of natural selection. We happen to be the product of natural selection, and, yes, that made us have some values which are approximately aligned with long-term genetic fitness. But studying biology does not make us change our values to suddenly become those of evolution!
In other words, humans are a 'genie that knows, but doesn't care'. We have understood the driving pressures that created us. We have understood what they 'want', if that can really be applied here. But we still only care about the things which the mechanics of our biology happened to have made us care about, even though we know these don't always align with the things that 'evolution cares about.'
(Please if someone can think of a good way to say this all without anthropomorphising natural selection, help me. I haven't thought enough about this subject to have the clarity of mind to do that and worry that I might mess up because of such metaphors.)
Comment author:MileyCyrus
05 September 2013 04:18:23PM
5 points
[-]
If anyone wants to teach English in China, my school is hiring. The pay is higher than the market rate and the management is friendly and trustworthy. Must have a Bachelor's degree and a passport from an English-speaking country. If you are at all curious, PM me for details.
Comment author:[deleted]
05 September 2013 04:55:44PM
0 points
[-]
LWers seem to be pretty concerned with reducing suffering through vegetarianism, charity, utilitarianism, etc., which I completely don't understand. Can anybody explain to me what the point of reducing suffering is?
Comment author:drethelin
05 September 2013 06:15:17PM
5 points
[-]
Commonly, humans have an amount of empathy that means that when they know about suffering of entities within their circle of interest, they also suffer. EG, I can feel sad because my friend is sad. Some people have really vast circles, and feel sad when they think about animals suffering.
Do you understand suffering yourself? If so, presumably when you suffer you act to reduce it, by not holding your hand in a fire or whatnot? Working to end suffering of others can end your own empathic suffering.
I don't help people because of empathy for them. I just want to help them. It's a terminal value for me that other people be happy. I do feel empathy, but that's not why I help people.
Your utility function needn't be your own personal happiness! It can be anything you want!
Are you implying that utility functions don't change or that they do, but you can't take actions that will make it more likely to change in a given direction, or something else?
Comment author:drethelin
05 September 2013 09:09:20PM
2 points
[-]
More that any decision you make about trying to change your utility function is not "choosing a utility function" but is actually just your current utility function expressing itself.
My point was that you should never feel constrained by your utility function. You should never feel like it's telling you to do something that isn't what you want. But if you thought that utility=happiness then you might very well end up feeling this way.
Comment author:[deleted]
06 September 2013 01:24:21AM
*
-1 points
[-]
I understand wanting to help people. I have empathy and I feel all the things you've mentioned. What I'm trying to say is: if you suffer when you think about the suffering of others, why not try to stop thinking (caring) about it and donate to science, instead of spending your time and money on reducing suffering?
Comment author:Schlega
06 September 2013 03:51:12AM
1 point
[-]
In my experience, trying to choose what I care about does not work well, and has only resulted in increasing my own suffering.
Is the problem that thinking about the amount of suffering in the world makes you feel powerless to fix it? If so, then you can probably make yourself feel better by focusing on what you can do to have some positive impact, even if it is small. If you think "donating to science" is the best way to have a positive impact on the future, then by all means do that, and think about how the research you are helping to fund will one day reduce the suffering that all future generations would otherwise have to endure.
Comment author:[deleted]
06 September 2013 11:18:45AM
*
-6 points
[-]
Since nobody has any reason to reduce suffering other than 'I want to' / 'I feel so', I think I may conclude that utilitarianism is a great hobby for oneself, but it is kind of hypocritical to say that utilitarianism is "for the greater good" or something like that.
Therefore when you coerce other people or kill one to save three, you do this not because of "greater good" but because you like to coerce and kill.
Upd: I hope that guys who downvote this comment do have that reason and maybe they would even be so kind and share it with me.
Comment author:alex_zag_al
05 September 2013 09:10:07PM
*
4 points
[-]
Has anyone here read up through ch. 18 of Jaynes' PT:LoS? I just spent two hours trying to derive 18.11 from 18.10. That step is completely opaque to me; can anybody who's read it help?
You can explain in a comment, or we can have a conversation. I've got gchat and other stuff. If you message me or comment we can work it out. I probably won't take long to reply, I don't think I'll be leaving my computer for long today.
EDIT: I'm also having trouble with 18.15. Jaynes claims that P(F|A_p E_aa) = P(F|A_p) but justifies it with 18.1... I just don't see how that follows from 18.1.
EDIT 2: It hasn't answered my question but there's online errata for this book: http://ksvanhorn.com/bayes/jaynes/ Chapter 18 has a very unfinished feel, and I think this is going to help other confusions I get into about it
Comment author:alex_zag_al
06 September 2013 03:21:02PM
*
0 points
[-]
Yeah, so to add some redundancy for y'all, here's the text surrounding the equations I'm having trouble with.
The 18.10 to 18.11 jump I'm having trouble with is the one in this part of the text:
But suppose that, for a given E_b, (18.8) holds independently of what E_a might be; call this 'strong irrelevance'. Then we have
(what I'm calling 18.10)
But if this is to hold for all (A_p|E_a), the integrands must be the same:
(what I'm calling 18.11, and can't derive)
And equation 18.15, which I can't justify, is in this part of the text:
But then, by definition (18.1) of A_p, we can see that A_p automatically cancels out E_aa in the numerator: (F|A_pE_aa)=(F|A_p). And so we have (18.13) reduced to
(what I'm calling 18.15, and don't follow the justification for)
Framing effects (causing cognitive biases) can be thought of as a consequence
of the absence of logical transparency in System 1 thinking. Different mental
models that represent the same information are psychologically distinct, and
moving from one model to another requires thought. If this thought is not
expended, the equivalent models don't get constructed, and intuition doesn't
become familiar with these hypothetical mental models.
This suggests that framing effects might be counteracted by explicitly imagining
alternative framings in order to present a better sample to intuition; or,
alternatively, focusing on an abstract model that has abstracted away the
irrelevant details of the framing.
No law or even good idea is going to stop various militaries around the world, including our own, from working as fast as they can to create Skynet. Even if they tell you they've put the brakes on and are cautiously proceeding in perfect accordance with your carefully constructed rules of friendly AI, that's just their way of telling you you're stupid.
There are basically two outcomes possible here: They succeed in your lifetime, and you are killed by a Terminator, or they don't succeed in your lifetime and you die of old age.
I suggest choosing option three: Have one last party with your navel, then get off your sofa and grab a computer or a pad of paper and start working on solving AI as fast as you can. Contrary to singularity b.s., the AI you invent isn't going to rewrite the laws of physics and destroy the universe before you can hit control-C. Basic space, time, and energy limitations will likely confound your laptop's ambitions to take over the world for quite some time--plenty of time for those who best understand it to toy with what it really takes to make it friendly. That's assuming it's you and me, and not SAIC. And maybe, just maybe, if we work together and make enough progress in our lifetimes, that AI can help us live long enough to live even longer still...
But it starts now, and the first step is admitting that AI is hard and accepting that you have no fucking clue how to do it. If you can't do that, you'll never be able to leave that sofa comfort zone. Have an idea? Try it. Code it up. Nothing will teach you more about what you do (and mostly don't) know than that. Share your results, positive or negative. Look for more ideas. Don't be attached to anything--wear failures with pride. Today's good idea is tomorrow's nonsense, and two years later may prove the solution after all. Stir the pot and dive in. Make it happen.
I promise you that by default the next thirty years of your life will go by in a blink and you will look around you horrified at how little progress has happened--and you'll wish you'd been working on the other side of the equation.
I would like recommendations for an Android / web-based to-do list / reminder application. I was happily using Astrid until a couple of months ago, when they were bought up and mothballed by Yahoo. Something that works with minimal setup, where I essentially stick my items in a list, and it tells me when to do them.
I want to tack onto this and ask for a solution that provides some privacy, that is where I can run my own server.
Wunderlist 2 has an Android app (it only speaks English in the phone app, but it does Portuguese in the normal online version).
It puts your tasks in the cloud so you can catch up with what you wrote from other devices.
I'm amazed by David Allen's GTD at the moment, so I want to recommend it, despite still being in the honeymoon period.
Looking into Wunderlist now.
Don't worry. I read GTD several years ago, and stole plenty of stuff from it.
Are old humans better than new humans?
This seems to be a hidden assumption of cryonics / transhumanism / anti-deathism: We should do everything we can to prevent people from dying, rather than investing these resources into making more or more productive children.
The usual argument (which I agree with) is that "Death events have a negative utility". Once a human already exists, it's bad for them to stop existing.
So every human has a right to their continued existence. That's a good argument. Thanks.
Complement it with the fact that it costs about 800 thousand dollars to raise a mind, and an adult mind might be able to create value at rates high enough to justify its continued existence.
Macaulay Culkin and Haley Joel Osment notwithstanding, that is a good argument against children.
An adult, yes. But what about the elderly? Of course this is an argument for preventing the problems of old age.
Is it? It just says that you should value adults over children, not that you should value children over no children. To get one of these valuable adult minds you have to start with something.
How does that negative utility vary over time, though? If it stays the same (or increases), and we know now that it's impossible to live 3^^^3 years, then the disutility from dying sooner is counterbalanced (or more than counterbalanced) by the averted disutility from dying later, meaning the decisions made are basically the same as if you didn't disvalue death (or as if you valued it).
I think that part of the badness of death is the destruction of that person's accumulated experience. Thus the negative utility of death does indeed increase over time. However this is counterbalanced by the positive utility of their continued existence. If someone lives to 70 rather than 50 then we're happy because the 20 extra years of life were worth more than the worsening of the death event.
So if Bob is cryopreserved, and I can res him for N dollars, or create a simulation of a new person and run them quickly enough to catch up a number of years equal to Bob's age at death, for N - 1 dollars, I should spend all available dollars on the latter?
Edit: to clarify why I think this is implied by your answer, what this is doing is trading such that you gain a death at Bob's current age, but gain a life of experience up to Bob's current age. If a life ending at Bob's current age is net utility positive, this has to be net utility positive too.
broadly: yes, though all available dollars is actually all available dollars (for making people), and you're ignoring considerations like keeping promises to people unable to enforce them such as the cryopreserved or asleep or unconscious etc.
Yes.
Because?
a level 5 character is more valuable than a level 1 character.
A person who is older has more to give the world and has been more invested in than a baby. they're a lot less replaceable.
also i like em more.
Existing people take priority over theoretical people. Infinitely so. This should be obvious, as the reverse conclusion ends up with utter absurdities of the "Every sperm is sacred" variety.
Mad grin
Once a child is born, it has as much claim on our consideration as every other person in our light cone, but there is no obligation to have children. Not any specific child, nor any at all. Reject this axiom and you might as well commit suicide over the guilt of the billions of potentials children you could have that are never going to be born. Right now.
Even if you stay pregnant till you die/never masturbate, this would effectively not help at all - each conception moves one potential from the space of "could be" to the space of "is", but at the same time eliminates at least several hundred million other potential children from the possibility space - that is just how human reproduction works.
TL:DR; yes, yes they are. It is a silly question.
Does this mean that I am free to build a doomsday weapon that kills everyone born after September 4th 2013 100 years from now, if that gets me a cookie?
Not necessarily. It would merely be your obligation to have as many children as possible, while still ensuring that they are healthy and well cared for. At some point having an extra child will make all your children less well off.
Why is there a threshold at birth? I agree that it is a convenient point, but it is arbitrary.
Why should I commit suicide? That reduces the number of people. It would be much better to start having children. (Note that I am not saying that this is my utility function).
The "infinitely so" part seems wrong, but the idea is that 4D histories which include a sentient being coming into existence, and then dying, are dispreferred to 4D world-histories in which that sentient being continues. Since the latter type of such histories may not be available, we specify that continuing for a billion years and then halting is greatly preferable to continuing for 10 years then halting. Our degree of preference for such is substantially greater than the degree to which we feel morally obligated to create more people, especially people who shall themselves be doomed to short lives.
The switch from consequentialist language ("4D histories which include… are dispreferred") to deontological language ("…the degree to which we feel morally obligated to create more people") is confusing. I agree that saving the lives of existing people is a stronger moral imperative than creating new ones, at the level of deontological rules and virtuous conduct, which is a large part of everyday human moral reasoning. I am much less clear that, when evaluating 4D histories, I assign higher utility to one with few people living long lives than to one with more people living shorter lives. Actually, I tend towards the opposite intuition, preferring a world with more people who live less (as long as their lives are still well worth living, etc.)
Assuming Rawls's veil of ignorance, I would prefer to be randomly born in a world where a trillion people lead billion-year lifespans than one in which a quadrillion people lead million-year lifespans.
I agree, but is this the right comparison? Isn't this framing obscuring the fact that in the trillion-people world, you are much less likely to be born in the first place, in some sense?
Let us try this framing instead: Assume there is a very large number Z of possible different human "persons" (e.g. given by combinatorics on genes and formative experiences). There is a Rawlsian chance of 1/Z that a newly created human will be "you". Behind the veil of ignorance, do you prefer the world with X people living N years (where your chance of being born is X/Z), or the one with 10X people living N/10 years (where your chance of being born is 10X/Z)?
I am not sure this is the right intuition pump, but it seems to capture an aspect of the problem that yours leaves out.
Rawls's veil of ignorance + self-sampling assumption = average utilitarianism, Rawls's veil of ignorance + self-indication assumption = total utilitarianism (so to speak)? I had already kind-of noticed that, but hadn't given much thought to it.
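That correspondence can be sketched numerically. Here is a toy Python version of the framing from the parent comment; Z, X, and N are arbitrary illustrative numbers of my own choosing, not anything from the thread:

```python
# Toy sketch of the veil-of-ignorance framing above.  Z, X, N are
# arbitrary illustrative numbers (my assumption, not from the thread).
Z = 10**9   # possible persons behind the veil
X = 10**6   # people who actually exist in world 1
N = 1000    # lifespan (years) in world 1

def sia_value(people, years_each, possible=Z):
    # Self-indication flavor: weight by the chance of being born at all,
    # which recovers total person-years (total utilitarianism).
    return people * years_each / possible

def ssa_value(people, years_each):
    # Self-sampling flavor: condition on existing, so the head-count
    # drops out and only per-person lifespan matters (average flavor).
    return years_each

# World 1: X people living N years.  World 2: 10X people living N//10 years.
print(sia_value(X, N), sia_value(10 * X, N // 10))   # 1.0 1.0 -- a tie under SIA
print(ssa_value(X, N), ssa_value(10 * X, N // 10))   # 1000 100 -- world 1 wins under SSA
```

Under the SIA-style framing the two worlds tie exactly (the 10x head-count cancels the 10x-shorter lifespan), while the SSA-style framing strictly prefers the longer lives, which is one way of seeing why the two assumptions pull toward total and average utilitarianism respectively.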
Doesn't Rawls's veil of ignorance prove too much here though? If both worlds would exist anyway, I'd rather be born into a world where a million people lived 101 year lifetimes than a world where 3^^^3 people lived 100 year lifetimes.
Would you? A million probably isn't enough to sustain a modern economy, for example. (Although in the 3^^^3 case it depends on the assumed density since we can only fit a negligible fraction of that many people into our visible universe).
If the economies would be the same, then yes. Don't fight the hypothetical.
I think "fighting the hypothetical" is justified in cases where the necessary assumptions are misleadingly inaccurate - which I think is the case here.
But compared to 3^^^3, it doesn't matter whether it's a million people, a billion, or a trillion. You can certainly find a number that is sufficient to sustain an economy and is still vastly smaller than 3^^^3, and you will end up preferring the smaller number for a single additional year of lifespan. Of course, for Rawls, this is a feature, not a bug.
So then, Rawls's veil has to be modified such that you are randomly chosen to be one of a quadrillion people. In scenario A, you live a million years. In scenario B, one trillion people live for one billion years each, the rest are fertilized eggs which for some reason don't develop.
I'd still choose B over A.
Death isn't just a negative for the dead person - it also causes paperwork and expenses, destruction of relationships, and grief among the living.
This is true, but in my experience usually used to massage models that don't consider death a disutility into giving the right answers. I can't think of ever hearing this argument used for any other reason, in fact, in meatspace.
(Replying to this comment out of context on the Recent Comments.)
The context is someone asking whether it's better to stop existing people from dying or just make new people.
If by “old humans” you mean healthy adults, yes. If you mean this, no. (IMO -- YMMV.)
The following query is sexual in nature, and is rot13'ed for the sake of those who would either prefer not to encounter this sort of content on Less Wrong, or would prefer not to recall information of such nature about my private life in future interactions.
V nz pheeragyl va n eryngvbafuvc jvgu n jbzna jub vf fvtavsvpnagyl zber frkhnyyl rkcrevraprq guna V nz. Juvyr fur cerfragyl engrf bhe frk nf "njrfbzr," vg vf abg lrg ng gur yriry bs "orfg rire," juvpu V ubcr gb erpgvsl.
Sbe pynevsvpngvba, V jbhyq fnl gung gur trareny urnygu naq fgnovyvgl bs bhe eryngvbafuvc vf rkgerzryl uvtu; guvf chefhvg vf n znggre bs erperngvba naq crefbany cevqr, abg n arprffnel vagreiragvba gb fnir gur eryngvbafuvc.
V'ir nyernql frnepurq bayvar sbe nyy gur vasbezngvba V pna svaq ba vzcebivat gur dhnyvgl bs frk, ohg fhpu vasbezngvba vf birejuryzvatyl rvgure gnetrgrq ng oevatvat crbcyr jvgu fbzr frevbhf qrsvpvrapl va gurve frk yvirf hc gb gur yriry bs abeznyvgl be fngvfslvat gurz gung gur abez vf yrff fcrpgnphyne guna gurl guvax naq gurl qba'g unir gb yvir hc gb vasyngrq fgnaqneqf, engure guna crbcyr gelvat gb npuvrir frk jnl bhg ba gur sne raq bs gur oryy pheir, be gnetrgrq ng crbcyr jub jbhyqa'g xabj jung "rzcvevpny onpxvat" jnf vs lbh uvg gurz va gur snpr jvgu vg.
V'z nyernql snzvyvne jvgu gur zbfg boivbhf, ybj unatvat sehvg vagreiragvbaf fhpu nf "pbageby fgerff," "qb xrtryf," rgp, naq jr pbzzhavpngr nobhg bhe frkhny cersreraprf naq npgvivgvrf rkgrafviryl. V'z nyfb nppbhagvat sbe snpgbef fhpu nf gur rzbgvbany pbagrkg bs bhe rapbhagref naq ubezbany plpyrf. Jung V'z ybbxvat sbe ng guvf cbvag ner rkprcgvbany zrnfherf sbe guvatf yvxr envfvat zl frkhny fgnzvan gb hahfhny yriryf, vapernfvat ure yriry bs nebhfny naq/be frafvgvivgl, naq fb sbegu. Obgu purzvpny naq abapurzvpny zrnfherf ner npprcgnoyr, ohg V jbhyq yvxr gb nibvq nalguvat yvxryl gb pneel qnatrebhf fvqr rssrpgf, naq vs cbffvoyr V jbhyq cersre abg gb erfbeg gb guvatf gung jbhyq erdhver zr gb trg n cerfpevcgvba sebz n qbpgbe juvyr qvfpybfvat gung V'z hfvat vg ba n cheryl erperngvbany onfvf.
Nal nqivpr jbhyq or nccerpvngrq.
To be honest, you sound a bit like a person who made a billion dollars and now tries to crowd-source a way to make ten billion. :-)
Well, I'm flattered that you think my position is so enviable, but I also think this would be a pretty reasonable course of action for someone who made a billion dollars.
Practice makes perfect. I think a lot of good sex is intuitively reading your partner's signals and ramping things up/down with good timing in response to them. I think this is something you might be able to learn via logos but I think it's much more likely to be something you need to experience before you can get good at it. When to pull hair, when to thrust deeper, etc.
In general I and whoever I'm with have had more fun when I felt I had a good idea of what they wanted in the moment, which I think I've gotten better at mainly through practice.
I suspect that I can continue to improve with practice, but I'd like to be able to set out every option available to me on the table.
Even if I can attain the status of "best" without taking such extraordinary measures, this is something I'm genuinely competitive on, which at least to me means that simply taking first place isn't sufficient if I can still see avenues to top myself.
Slow Sex seems to help at least some people move from good to great.
Does that entail sex literally done slowly? We could try it out, but that doesn't seem to match her preferences.
It involves learning to pay more attention as a meditative practice, but not (I think) a recommendation to always go slowly.
This book pbhyq uryc jvgu gur fgnzvan. Vg jbexrq sbe zl uhfonaq, jura ur gevrq vg n srj lrnef ntb.
Are the instructions anything simple enough that I could replicate them without needing to buy the entire book?
Maybe, but then I'd have to read it to find out, and I have many other books I'd like to read. Maybe you can find it in the library?
I'll check; I'm pretty sure my own library doesn't have a Sex section, but it might be in network.
Asking to order it would be pretty embarrassing, I have to admit, especially at my own library where a lot of the people who work there know me by name.
If you're too cheap to spend $4 at amazon, pirate it.
Dewey Decimal number 613.96, IIRC from my internet-deprived adolescence.
There has recently been some speculation that life started on Mars, and then got blasted to earth by an asteroid or something. Molybdenum is very important to life (eukaryote evolution was delayed by 2 billion years because it was unavailable), and the origin of life is easier to explain if Molybdenum is available. The problem is that Molybdenum wasn't available in the right time frame on Earth, but it was on Mars.
Anyway, assuming this speculation is true, Mars had the best conditions for starting life, but Earth had the best conditions for life existing, and it is unlikely conscious life would have evolved without either of these planets being the way they are. Thus, this could be another part of the Great Filter.
Side note: I find it amusing that Molybdenum is very important in the origin/evolution of life, and is also element 42.
(Are you Adele Lack Cotard?)
Like in Synecdoche, New York? No... it is an abbreviation of my real name.
As someone pointed out to me when I mentioned this to them, to be a candidate for the Great Filter there would need to be something intrinsic about how planets are formed that causes these two types of environments to be mutually exclusive; otherwise it seems like there isn't a sufficient reduction in the probability of their availability. Is this actually the case? Perhaps user:CellBioGuy can elucidate.
The ancient Stoics apparently had a lot of techniques for habituation and for changing cognitive processes. Some of those live on in the form of modern CBT. One of the techniques is to write a personal handbook of advice and sayings to carry around at all times, so as never to be without guidance from a calmer self. Indeed, Epictetus advises learning this handbook by rote to further internalisation. So I plan to write such a handbook for myself: once in a long form with anything relevant to my life and lifestyle, and once in a short form that I update with whatever is difficult at the time, be it strong feelings or being deluded by some bias.
In this book I intend to include a list of all known cognitive biases and logical fallacies. I know that some biases are mitigated by simply knowing about them; does anyone have a list of those? And if I complete the books, or at least have a clear concept of their contents, would you be interested in reading about the process of creating one and the perceived benefits?
I'm also interested in hearing from you again about this project if you decide to not complete it. Rock on, negative data!
Though lack of motivation or laziness is not a particularly interesting answer.
I have found "I thought X would be awesome, and then on doing X realized that the costs were larger than the benefits" to be useful information for myself and others. (If your laziness isn't well modeled by that, that's also valuable information for you.)
Is the layout for anyone else weird? The thread titles are more spaced out, like three times. Maybe something broke during my last Firefox upgrade.
Site layout hasn't changed for me. Chrome on windows and safari on iphone.
It looks fine on Safari for the iPhone.
To maybe help others out and solve the trust bootstrapping involved, I'm offering for sale <=1 bitcoin at the current Bitstamp price (without the usual premium) in exchange for Paypal dollars to any LWer with at least 300 net karma. (I would prefer if you register with #bitcoin-otc, but that's not necessary.) Contact me on Freenode as
gwern.

EDIT: As of 9 September 2013, I have sold to 2 LWers.
Pardon me, but - what is the trust boostrapping involved?
Paypal allows clawbacks for months, hence it's difficult to sell for Paypal to anyone who is not already in the -otc web of trust; but by restricting sales to high-karma LWers, I am putting their reputation here at risk if they scam me, which enables me to sell to them. Hence, they can acquire bitcoins & get bootstrapped into the -otc web of trust based on LW.
I learned about Egan's Law, and I'm pretty sure it's a less-precise restatement of the correspondence principle. Anyone have any thoughts on that similarity?
Sounds good to me, although that's not what I would have guessed from a name like 'correspondence principle'.
I suppose some minor difference is that this "law" is also applicable to meta-ethics, not just to physics. It's probably worth adding a link to the standard terminology to the LW wiki page.
Is there a good way to avoid HPMOR spoilers on prediction book?
Since PB users' calibrations are not yet good enough to see the future, you can easily avoid MoR spoilers by subscribing to the email or RSS alerts for new chapters & reading them as appropriate.
This is the obvious solution, but I want to reread what I've currently read, and have some time to think about the story and try creating an accurate causal model of events and such in the story as I read new!Adele material (Eliezer says it's supposed to be a solvable puzzle). I don't have time to do this right now, so in the meantime, I try to avoid spoilers.
If you are skilled in the art of Ruby, then yes. Otherwise, maybe. People (myself included) have been complaining about the lack of tagging/sorting system on PB for quite some time, but so far, no one has played the hero.
I used feed43 to create an rss feed out of recent predictions. Then I used feedrinse to filter out references to hpmor resulting in a safe feed. (Update: chaining unreliable services makes something even less reliable.)
You could do the same for the pages of recently judged or future or users you follow. I think feedrinse offers to merge feeds (into a "channel") before or after doing the filtering. But if you find someone new and just want to click on the username, you'll leave the safe zone. Even if you see someone you have processed, the username will take you to the unsafe page.
A better solution would be to write a greasemonkey script that modified each predictionbook page as you look at it.
The final feedrinse feed works in a couple of my browsers, but not chrome. Probably sending it through feedburner would fix it.
feed43 was finicky. The item search pattern was:
<li class="prediction{%}">{_}<p>{_}<span class='title'><a href="{%}">{%}</a></span>{%}</li>
The regexp I used in feedrinse was /hp.?mor/
It is case insensitive and manages to eliminate "HP MoR:", "[HPMOR]", etc. It won't work if they spell it out, or just predict "Harry is orange" without indicating which story they're predicting about. In that case, someone will probably leave a hpmor comment, but this doesn't see such comments.
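For anyone who would rather filter locally than chain web services, the same regexp can be applied in a few lines of Python (the sample titles below are made up for illustration):

```python
import re

# The case-insensitive pattern described above: catches "HPMOR",
# "HP MoR", "hp-mor", etc., but not spelled-out or unmarked titles.
HPMOR_RE = re.compile(r"hp.?mor", re.IGNORECASE)

def filter_spoilers(titles):
    """Keep only feed-item titles that don't reference HPMoR."""
    return [t for t in titles if not HPMOR_RE.search(t)]

titles = ["HP MoR: Harry wins", "[HPMOR] ch. 95 prediction", "Weather tomorrow"]
print(filter_spoilers(titles))  # ['Weather tomorrow']
```

A greasemonkey script would need the JavaScript equivalent (`/hp.?mor/i`), but the matching behaviour is the same.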
http://www.pnas.org/content/early/2013/08/21/1301888110
Does this also work with macaques, crows or some other animals that can be taught to use money, but didn't grow up in a society where this kind of money use is taken for granted?
Not strictly the same, but there have been monkey money experiments. And the results are hilarious. www.zmescience.com/research/how-scientists-tught-monkeys-the-concept-of-money-not-long-after-the-first-prostitute-monkey-appeared/
Just had a discussion with my in-law about the singularity. He's a physicist, and his immediate response was: "There are no singularities. They appear mathematically all the time, and it only means that there is another effect taking over." Correspondingly, a quick google brought up this:
http://www.askamathematician.com/2012/09/q-what-are-singularities-do-they-exist-in-nature/
So my question is: What are the 'obvious' candidates for limits that take over before the all optimizable is optimized by runaway technology?
There aren't any that I'm aware of, except for "a disaster happens and everyone dies," but that's bad luck, not a hard limit. I would respond with something along the lines of "exponential growth can't continue forever, but where it levels out has huge implications for what life will look like, and it seems likely it will level out far above our current level, rather than just above our current level."
Lack of cheap energy.
Ecological disruption.
Diminishing returns of computation.
Diminishing returns of engineering.
Inability to precisely manipulate matter below certain size thresholds.
All sorts of 'boring' engineering issues by which things that get more and more complicated get harder and harder faster than their benefits increase.
On LW, 'singularity' does not refer to a mathematical singularity, and does not involve or require physical infinities of any kind. See Yudkowsky's post on the three major meanings of the term singularity. This may resolve your physicist friend's disagreement. In any case, it is good to be clear about what exactly is meant.
Fighting (in the sense of arguing loudly, as well as showing physical strength or using it) seems to be bad the vast majority of time.
When is fighting good? When does fighting lead you to Win TDT style (which instances of input should trigger the fighting instinct and payoff well?)
There is an SSA argument to be made for fighting in that taller people are stronger, stronger people are dominant, and bigger skulls correlate with intelligence. But it seems to me that this factor alone is far, far away from being sufficient justification for fighting, given the possible consequences.
Fighting makes a lot more sense in a tribe or in small groups of humans than it does now. A big argument with someone now will very rarely keep you from starving and will probably never get you a child. On the other hand, showing dominance in a situation where the women around you are choosing a mate out of 5 guys will get you laid a lot more.
I haven't seen people who can get laid frequently getting into dominance disputes/fights.
There is a distinction between dominance which is assertive and aversive, and prestige, which is recognized and non-aversive.
Guys like Keanu Reeves, Tom Cruise, Brad Pitt have prestige which gets them (potentially) laid.
Women have more reason to be attracted to a man if he is universally recognized to be awesome than if he is constantly showing his power through small agonistic interactions with other people, male and female.
If Caesar had been universally prestigious instead of agonistically powerful, Brutus wouldn't have had a reason to kill him, leaving an unassisted widow and children.
I agree with your central point but I think this claim is something of an overstatement (since I don't wish to accuse you of being sheltered). Crudely speaking it tends to be sexier to win without fighting than to fight and win but fighting (social status battles) and winning is still more than sufficiently sexy.
I also note that it is hard to become the kind of person who does not need to engage in any dominance disputes, and still maintain high social status, without engaging in many dominance disputes on the way. To a certain extent the process can be munchkined, since much of the record of who is dominant is stored in the individual, but some actual dominance disputes will still be inevitable.
Yes, also keep in mind that human cognition related to hierarchies of prestige and dominance is flexible enough that it may be worth more to step up in a different hierarchy than try to save yourself in this one by agonistic dispute. We don't have the problem of being "stuck" with the same group forever, which facilitates a lot.
To put it crudely, alpha males very rarely get into dominance fights because part of being an alpha male is being acknowledged as an alpha male.
Betas and gammas status-fight more often since their position on the ladder is less stable.
A large part of having status is not having to constantly prove it.
If everyone agrees about how power is distributed fighting is unnecessary.
Fighting can be necessary when another person claims to have power that they actually don't have.
Surely it's in nearly everyone's interest to have more power distributed to themselves!
But while fighting to get more power may have positive utility for oneself, it usually has negative utility for others, so it's in everybody's interest that everybody agrees not to fight for more power. This agreement can take the form of alternative ways of getting power (elections, money), or of making power less important to one's happiness (the rule of law).
If you don't have enough power to win a fight, fighting has negative utility for you as well. And if everyone predicts that you would win a fight, you usually don't actually have to fight it to get what you want.
Fighting has a huge signalling component: when viewed in isolation, a fight might be trivially, obviously, a net negative for both participants. However, either (or both!) participants might win more in future concessions from their demonstrated willingness to fight than they lost in the fight itself. As humans are adaptation executers, a certain willingness to fight, to seek revenge, etc. is pretty common. At least, this seems to be the dominant theory, and it sounds sensible to me.
Or even just CDT style. Human interaction is approximately an iterated prisoner's dilemma without a fixed duration. Reputation concerns are sufficient to account for most of the (perceived and actual) benefit among humans. Then more can be attributed to ethical inhibitions around the 'pride' ethic.
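The iterated-dilemma point can be made concrete with a toy simulation. The strategies and the conventional payoff numbers (T=5, R=3, P=1, S=0) below are my own illustrative choices, not anything from the thread:

```python
# Minimal iterated prisoner's dilemma sketch.  Payoffs are the
# conventional T=5, R=3, P=1, S=0 (an assumption, not from the thread).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's last move: a simple
    # "willing to fight back" reputation strategy.
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return "D"

def play(a, b, rounds=10):
    hist_a, hist_b = [], []   # each player's view of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation pays
print(play(tit_for_tat, always_defect))  # (9, 14): one exploited round, then punishment
```

Tit-for-tat loses any single encounter with a pure defector, but its credible willingness to retaliate caps the exploitation at one round, which is the reputational logic the parent comments are pointing at.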
I recently realized that I have something to protect (or perhaps a smaller version of the same concept). I also realized that I've been spending too much time thinking about solutions that should have been obviously unworkable. And I've been avoiding thinking about the real root problem because it was too scary, working on peripheral things instead.
Does anyone have any advice for me? In particular, being able to think about the problem without getting so scared of it would be helpful.
Talk about it with other people. Ask a good friend to sit down with you and listen to you talking about the issue.
Has anyone got a recommendation for a nice RSS reader? Ideally I'm looking for one that runs on the desktop rather than in-browser (I'm running Ubuntu). I still haven't found a replacement that I like for Lightread for Google Reader.
I used to like liferea, but I don't have an up to date opinion as I switched to non-desktop RSS reading options.
Thanks! Will try it.
(mild exaggeration) Has anyone else transitioned from "I only read Main posts" to "I nearly only read Discussion posts" to "actually, I'll just take a look at the open thread and the people who responded to what I wrote" during their interactions with LW?
To be more specific, is there a relevant phenomenon about LW or is it just a characteristic of my psyche and history that explain my pattern of reading LW?
I predict that some people will have been through the sequences, which are Main posts, but then mainly cared about discussion. I suspect it has to do with Morning Newspaper Bias - the bias of thinking that new stuff is more relevant, when actually it is just pointless to read most of the time, only scrambles your mind, and loses value very quickly.
I read the Sequences as they were posted; Main posts now rarely hold my interest the same way. Eliezer's writing is just better than most people's.
Honestly, I don't know why Main is even an option for posting. It should really be just an automatically labeled/generated "Best of LW" section, where Discussion posts with, say, 30+ karma are linked. This is easy to implement, and easy to do manually using the Promote feature until it is. The way it is now, Main is mostly used by people who think they are making an important contribution to the site, which is more of a statement about their ego than about the quality of their posts.
I read the sequences and a bunch of other great old main posts but now mostly read discussion. It feels like Main posts these days are either repetitive of what I've read before, simply wrong or not even wrong, or decision theory/math that's above my head. Discussion posts are more likely to be novel things I'm interested in reading.
This describes how my use of LW has wound up pretty accurately.
I've definitely noticed this in my use of LW. I find that the open threads/media threads with their consistent high-quality novelty in a wide range of subject areas are far more enjoyable than the more academic main threads. Decision theory is interesting, but it's going to be hard to hold my attention for a 3,000 word post when there are tasty 200-word bites of information over here.
Well, chat's always more fun.
My experience is similar. I read the sequences as they were published on OB, then when the move over to LW happened I just subscribed to the RSS feed and only read Promoted posts for quite a few years. Only about a year ago I actually signed up for an account here and started posting and reading Discussion and the Open Thread.
Selection bias alert: asking people whether they have transitioned to reading mostly discussion and then to mostly just open threads in an open thread isn't likely to give you a good perspective on the entire population, if that is in fact what you were looking for.
There would be far more selection bias if he asked about it outside an open thread, though.
Really? Why?
Because he's asking about people who only read the open thread. Here he could get response from the people who do read LW in general, inclusive of the open thread, and people who read only the open thread (he'll miss the people who don't read the open thread). Outside the open thread, he gets no response at all from people who only read the open thread.
I'll admit that many of the posts in the main sequences are too heavy to understand without prior knowledge, so I find discussions much easier to take in, and many times I end up reading a sequence because it was posted in a discussion comment. For me, discussion posts are the gateway to Main.
The lower the barrier to entry, the more the activity. Thus, more posts are in Discussion. My hypothesis is that this has worked well enough to make Discussion where the action happens; cf. how physics happens on arXiv these days, not in journals. (OTOH, it doesn't happen on viXra, whose barrier to entry may be too low.)
Is there a name for taking someone's being wrong on A as evidence of their being wrong on B? Is this a generally sound heuristic to have? In the case of crank magnetism: should I take someone's crank ideas as evidence against an idea that is new and unfamiliar to me?
Bayes' theorem to the rescue! Consider a crank C, who endorses idea A. Then the probability of A being true, given that C endorses it equals the probability of C endorsing A, given that A is true times the probability that A is true over the probability that C endorses A.
In equations: P(A being true | C endorsing A) = P(C endorsing A | A being true)*P(A being true)/P(C endorsing A).
Since C is known to be a crank, our probability for C endorsing A given that A is true is rather low (cranks have an aversion to truth), while our probability for C endorsing A in general is rather high (i.e. compared to a more sane person). So you are justified in being more skeptical of A, given that C endorses A.
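A toy numeric version of that update, with made-up probabilities chosen only to show the direction of the shift:

```python
# Illustrative numbers (my assumption) plugged into the Bayes update above.
p_a = 0.5                     # prior that A is true
p_endorse_given_a = 0.1       # a crank rarely endorses true ideas
p_endorse_given_not_a = 0.4   # ...but often endorses false ones

# Marginal probability that the crank endorses A at all.
p_endorse = p_endorse_given_a * p_a + p_endorse_given_not_a * (1 - p_a)

# P(A true | C endorses A) by Bayes' theorem.
posterior = p_endorse_given_a * p_a / p_endorse
print(posterior)  # 0.2 -- the crank's endorsement lowered P(A) from 0.5
```

The direction of the update depends entirely on whether the crank is likelier to endorse falsehoods than truths; with those two conditionals reversed, the endorsement would instead raise P(A).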
I don't know if there is a name for it, but there ought to be one, since this heuristic is so common: the reliability prior of an argument is the reliability of the arguer. For example, one reason I am not a firm believer in the UFAI doomsday scenarios is Eliezer's love affair with MWI.
I don't know if there's a name for this, but I definitely do it. I think it's perfectly legitimate in certain circumstances. For example, the more B is a subject of general dispute within the relevant grouping, and the more closely-linked belief in B is to belief in A, the more sound the heuristic. But it's not a short-cut to truth.
For example, suppose that you don't know anything about healing crystals, but are aware that their effectiveness is disputed. You might notice that many of the same people who (dis)believe in homeopathy also (dis)believe in healing crystals, that the beliefs are reasonably well-linked in terms of structure, and you might already know that homeopathy is bunk. Therefore it's legitimate to conclude that healing crystals are probably not a sound medical treatment - although you might revise this belief if you got more evidence. On the other hand, note that reversed stupidity is not truth - healing crystals being bunk doesn't indicate that conventional medicine works well.
The place where I find this heuristic most useful is politics, because the sides are well-defined - effectively, you have a binary choice between A and ~A, regardless of whether hypothetical alternative B would be better. If I stopped paying attention to current affairs, and just took the opposite position to Bob Crow on every matter of domestic political dispute, I don't think I'd go far wrong.
Somewhat related: The Correct Contrarian Cluster.
Horrifically misnamed.
ad hominem
Not that there's anything wrong with that.
It's evidence against them being a person whose opinion is strong evidence of B, which means it is evidence against B, but it's probably weak evidence, unless their endorsement of B is the main thing giving it high probability in your book.
Yes, but in many cases it's very weak evidence. Overweighing it leads to the “reversed stupidity” failure mode.
So.... Thinking about using Familiar, and realizing that I don't actually know what I'd do with it.
I mean, some things are obvious - when I get to sleep, how I feel when I wake up, when I eat, possibly a datadump from RescueTime... then what? All told that's about 7-10 variables, and while the whole point is to find surprising correlations I would still be very surprised if there were any interesting correlations in that list.
Suggestions? Particularly from someone already trying this?
I have updated on how important it is for Friendly AI to succeed (more now). I did this by changing the way I thought about the problem. I used to think in terms of the chance of Unfriendly AI, which led me to assign a probability to whether a fast, self-modifying AI (indifferent or Friendly) was possible at all.
Instead of thinking of the risk of UFAI, I started thinking of the risk of ~FAI. The more I think about it the more I believe that a Friendly Singleton AI is the only way for us humans to survive. FAI mitigates other existential risks of nature, unknowns, human cooperation (Mutually Assured Destruction is too risky), as well as hostile intelligences; both human and self-modifying trans-humans. My credence – that without FAI, existential risks will destroy humanity within 1,000 years – is 99%.
Is this flawed? If not, then I'm probably really late to this idea, but I thought I would mention it because it's taken considerable time for me to see it like this. And if I were to explain the AI problem to someone who is uninitiated, I would be tempted to lead with "~FAI is bad" rather than "UFAI is bad". Why? Because intuitively, the dangers of UFAI feel "farther" than ~FAI. People first have to consider whether AI is even possible, then consider why UFAI is bad; it is a future problem. Whereas ~FAI is now; it feels nearer, it is happening. We have come close to annihilating ourselves before, and technology is just getting better at accidentally killing us, therefore let's work on FAI urgently.
So you want a god to watch over humanity -- without it we're doomed?
As of right now, yes. However, I could be persuaded otherwise.
A Singularity conference around a project financed by a Russian oligarch; it seems to be mostly about uploading and ems.
Looks curious.
Liquid nitrogen user
Thank you. All I need is a hand-held spray thermos to make Australia a viable working vacation. I have a strong irrational aversion to spiders. This is much more acceptable than the home-made flamer.
Hi, I am taking a course in Existentialism. It is required for my degree. The primary authors are Sartre, de Beauvoir and Merleau-Ponty. I am wondering if anyone has taken a similar course, and how they prevented the material from driving them insane (I have been warned this may happen). Is there any way to frame the material to make sense to a naturalist/reductionist?
This could be a Lovecraft horror story: "The Existential Diary of JMiller."
Week 3: These books are maddeningly incomprehensible. Dare I believe that it all really is just nonsense?
Week 8: Terrified. Today I "saw" it - the essence of angst - and yet at the same time I didn't see it, and grasping that contradiction is itself the act of seeing it! What will become of my mind?
Week 12: The nothingness! The nothingness! It "is" everywhere in its not-ness. I can not bear it - oh no, "not", the nothingness is even constitutive of my own reaction to it - aieee -
(Here the manuscript breaks off. JMiller is currently confined in the maximum security wing of the Asylum for the Existentially Inane.)
I suspect that warning was intended as a joke.
All of those weird books were written by humans.
Those humans were a lot like other humans.
They had noses and butts and toes.
They ate food and they breathed air.
They could add numbers and spell words.
They knew how to have conversations and how to use money.
They had girlfriends or boyfriends or both.
Why did they write such weird books?
Was it because they saw other humans kill each other in wars?
Was it because writing weird books can get you a lot of attention and money?
Was it because they remembered feeling weird about their moms and dads?
People talk a lot about that.
Why do they talk a lot about that?
When reading Merleau-Ponty it might help to also read the work of contemporary phenomenologists whose work is much more rooted in cognitive science and neuroscience. A decent example is Shaun Gallagher's book How the Body Shapes the Mind, or perhaps his introductory book on naturalistic phenomenology, which I haven't read. Gallagher has a more or less Merleau-Pontyesque view on a lot of stuff, but explicitly connects it to the naturalistic program and expresses things in a much clearer manner. It might help you read Merleau-Ponty sympathetically.
In the effective animal altruism movement, I've heard a bit (on LW) about wild animal suffering: since farmed animals are vastly outnumbered by wild animals (who experience a fair amount of suffering on a regular basis), we should be more inclined to prevent wild suffering than to worry about spreading vegetarianism.
That said, I've sometimes heard this cited as a reason (in itself!) not to worry about animal suffering at all. Has anyone tried to solve, or at least propose solutions for, the wild-suffering problem? Where can I find those? Alternatively, are there more resources I can read on wild animal altruism in general?
That doesn't sound true if you weight by intelligence (which I think you should since intelligent animals are more morally significant). Surely the world's livestock outnumber all the other large mammals.
Large mammals only? Is a domesticated cow smarter than a rat? A pigeon? Tough call.
This may be an odd question, but what (if anything) is known on turning NPCs into PCs? (Insert your own term for this division here, it seems to be a standard thing AFAICT.)
I mean, it's usually easier to just recruit existing PCs, but ...
Take the Leadership feat, and hope your GM is lazy enough to let you level them. More practically: is it a skills problem or, as I would guess, an agency problem? Can you impress on them the importance of acting versus not acting? Lend them The Power of Accountability? The 7 Habits of Highly Effective People? Can you compliment them every time they show initiative? Etc. I think the solution is too specific to individuals for general advice, and I don't know of a general advice book beyond those in the same vein as the ones mentioned.
Yet another article on the terribleness of schools as they exist today. It strikes me that Methods of Rationality is in large part a fantasy of good education. So is the Harry Potter/Sherlock Holmes crossover I just started reading. Alicorn's Radiance is a fair fit to the pattern as well, in that it depicts rapid development of a young character by incredible new experiences. So what solutions are coming out of the rational community? What concrete criteria would we like to see satisfied? Can education be 'solved' in a way that will sell outside this community?
I recently read Luminosity/Radiance; was there ever a discussion thread on here about it?
SPOILERS for the end
V jnf obgurerq ol gur raq bs yhzvabfvgl. Abg gb fnl gung gur raq vf gur bayl rknzcyr bs cbbe qrpvfvba znxvat bs gur punenpgref, naq cresrpgyl engvbany punenpgref jbhyq or obevat naljnl. Ohg vg frrzf obgu ernfbanoyr nf fbzrguvat Oryyn jbhyq unir abgvprq naq n terng bccbeghavgl gb vapyhqr n engvbanyvgl yrffba. Anzryl, Oryyn artrypgrq gb fuhg hc naq zhygvcyl. Fur vf qribgvat yvzvgrq erfbheprf gbjneqf n irel evfxl cyna bs unygvat nyy uhzna zheqre ol inzcverf vzzrqvngryl. Fbyivat guvf vffhr vf cynhfvoyl rzbgvbanyyl eryrinag, ohg Oryyn fubhyq unir abgvprq gung vg qbrfa'g znggre nyy gung zhpu ubj lbh trg xvyyrq, vg vf ebhtuyl rdhnyyl gentvp ab znggre gur sbez bs qrngu vs vzzbegnyvgl rkpvfgf. Juvpu vg qbrf. Inzcverf qb abg ercerfrag n fvmrnoyr senpgvba bs nyy qrnguf. Nf n crefba va gur nccnerag cbfvgvba gb raq qrngu bar fubhyq or n ovg zber pnershy jvgu frphevgl. Bs pbhefr Oryyn naq pb. zvtug srry vaivapvoyr sbyybjvat gurve ivpgbel. Qba'g gurl unir npprff gb nyy gur zbfg cbjreshy jvgpurf? Vfa'g Nyyvaern(fc?) gur hygvzngr ahyyvsvre? Jryy, znlor. Ohg gur sbezre Iraghev unq n ybg ybatre gb cyna naq n ybg srjre pbafgenvagf ba gurve orunivbe, zbenyyl fcrnxvat, naq gurl frrzrq gb gernq pnershyyl. Vs gur Iraghev unq nyy gung svercbjre, jul qvq gurl obgure gb znvagnva perqvovyvgl jvgu gur trareny inzcver cbchyngvba? Guvf fubhyq or n erq synt. Oryyn vf va gur cbfvgvba gb raq qrngu pbaqvgvbany ba ure erznvavat va cbjre. Fur fubhyq or gernqvat yvtugyl urer naq qribgvat erfbheprf gb fbyivat gur ceboyrz bs flagurgvp inzcver sbbq nf dhvpxyl nf cbffvoyr. Nalguvat gung cbfrf n frphevgl guerng gb guvf vavgvngvir fubhyq or pbafvqrerq vafnavgl.
Raqvat inzcver zheqref VF gernqvat yvtugyl: Vg'f n cbyvgvpny zbir. Gur orfg jnl gb trg uhznavgl abg gb ungr naq srne lbh vf gb or noyr gb pbasvqragyl gryy gurz gung gurl unir ab ernfba gb, naq gung lbh jnag bayl jung'f orfg sbe gurz. Zhpu nf Nzrevpn vf zvfgehfgrq va gur zvqqyr rnfg orpnhfr jr obzo jvgu unaq naq qvfgevohgr sbbq fhccyvrf jvgu gur bgure, crbcyr naq tbireazragf jvyy or zhpu yrff jvyyvat gb gehfg inzcverf vs gurl'er fgvyy xvyyvat crbcyr ng jvyy. Gur uhzna cbyvgvpny cbjref naq nyy gur uhznaf va gur jbeyq ner ahzrebhf rabhtu gung vs gur znfxrenqr oernxf, gur inzcverf jbhyq or va ZNWBE gebhoyr. Gur ovttrfg rkgnag guerng gb gur znfdhrenqr vf uhznaf orvat xvyyrq ol inzcverf. Fb fgbccvat xvyyvat uhznaf vf gur arkg fnsr fgrc jurgure lbh ner gelvat gb erirny inzcverf be pbaprny gurz.
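(The spoiler comments above follow the forum convention of rot13 encoding. For anyone unfamiliar with it, rot13 is a simple letter-substitution cipher that is its own inverse, and Python's standard library can decode it directly; this is just an illustrative snippet, not part of the discussion.)

```python
import codecs

def despoiler(text: str) -> str:
    """Decode a rot13-encoded spoiler string (rot13 is self-inverse)."""
    return codecs.encode(text, "rot_13")

# Decoding the opening words of the spoiler comment above:
print(despoiler("V jnf obgurerq ol gur raq bs yhzvabfvgl."))
# → I was bothered by the end of luminosity.
```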
Background: "The genie knows, but doesn't care" and then this SMBC comic.
The joke in that comic annoys me (and it's a very common one on SMBC, there must be at least five there with approximately the same setup). Human values aren't determined to align with the forces of natural selection. We happen to be the product of natural selection, and, yes, that made us have some values which are approximately aligned with long-term genetic fitness. But studying biology does not make us change our values to suddenly become those of evolution!
In other words, humans are a 'genie that knows, but doesn't care'. We have understood the driving pressures that created us. We have understood what they 'want', if that can really be applied here. But we still only care about the things which the mechanics of our biology happened to have made us care about, even though we know these don't always align with the things that 'evolution cares about.'
(Please if someone can think of a good way to say this all without anthropomorphising natural selection, help me. I haven't thought enough about this subject to have the clarity of mind to do that and worry that I might mess up because of such metaphors.)
For more on this topic, see for example these posts:
If anyone wants to teach English in China, my school is hiring. The pay is higher than the market rate and the management is friendly and trustworthy. Must have a Bachelor's degree and a passport from an English-speaking country. If you are at all curious, PM me for details.
LWers seem to be pretty concerned about reducing suffering by vegetarianism, charity, utilitarianism etc. which I completely don't understand. Can anybody explain to me what is the point of reducing suffering?
Thanks.
Commonly, humans have an amount of empathy that means that when they know about suffering of entities within their circle of interest, they also suffer. EG, I can feel sad because my friend is sad. Some people have really vast circles, and feel sad when they think about animals suffering.
Do you understand suffering yourself? If so, presumably when you suffer you act to reduce it, by not holding your hand in a fire or whatnot? Working to end suffering of others can end your own empathic suffering.
I don't help people because of empathy for them. I just want to help them. It's a terminal value for me that other people be happy. I do feel empathy, but that's not why I help people.
Your utility function needn't be your own personal happiness! It can be anything you want!
No it can't. You don't get to choose your utility function.
But anyway, I was responding to rationalnoodles as someone who doesn't seem to understand wanting to help people.
Are you implying that utility functions don't change or that they do, but you can't take actions that will make it more likely to change in a given direction, or something else?
More that any decision you make about trying to change your utility function is not "choosing a utility function" but is actually just your current utility function expressing itself.
My point was that you should never feel constrained by your utility function. You should never feel like it's telling you to do something that isn't what you want. But if you thought that utility=happiness then you might very well end up feeling this way.
That's fair. I think a better way to put it is to not put too much value into any explicit attempt to state your own utility function?
Yeah.
I understand wanting to help people. I have empathy and I feel all the things you've mentioned. What I'm trying to say is: if you suffer when you think about the suffering of others, why not try to stop thinking (caring) about it and donate to science, instead of spending your time and money to reduce suffering?
In my experience, trying to choose what I care about does not work well, and has only resulted in increasing my own suffering.
Is the problem that thinking about the amount of suffering in the world makes you feel powerless to fix it? If so, then you can probably make yourself feel better if you focus on what you can do to have some positive impact, even if it is small. If you think "donating to science" is the best way to have a positive impact on the future, then by all means do that, and think about how the research you are helping to fund will one day reduce the suffering that all future generations would otherwise have to endure.
It could be the problem, but, actually, the main one is that I see no point in reducing suffering and it looks like nobody can explain it to me.
Has anyone here read up through ch18 of Jaynes' PT:LoS? I just spent two hours trying to derive 18.11 from 18.10. That step is completely opaque to me, can anybody who's read it help?
You can explain in a comment, or we can have a conversation. I've got gchat and other stuff. If you message me or comment we can work it out. I probably won't take long to reply, I don't think I'll be leaving my computer for long today.
EDIT: I'm also having trouble with 18.15. Jaynes claims that P(F|A_p E_aa) = P(F|A_p) but justifies it with 18.1... I just don't see how that follows from 18.1.
EDIT 2: It hasn't answered my question, but there's online errata for this book: http://ksvanhorn.com/bayes/jaynes/ Chapter 18 has a very unfinished feel, and I think the errata will help with other confusions I run into with it.
I've just looked and I have no idea either. If anyone wants to help there's a copy of the book here.
EDIT: The numbers in that copy are off by 1 from the book. "18.10" = "18-9" and so on.
Yeah, so to add some redundancy for y'all, here's the text surrounding the equations I'm having trouble with.
The 18.10 to 18.11 jump I'm having trouble with is the one in this part of the text:
And equation 18.15, which I can't justify, is in this part of the text:
Framing effects (causing cognitive biases) can be thought of as a consequence of the absence of logical transparency in System 1 thinking. Different mental models that represent the same information are psychologically distinct, and moving from one model to another requires thought. If that thought is not expended, the equivalent models don't get constructed, and intuition never becomes familiar with those alternative mental models.
This suggests that framing effects might be counteracted by explicitly imagining alternative framings in order to present a better sample to intuition; or, alternatively, focusing on an abstract model that has abstracted away the irrelevant details of the framing.
How do you pronounce "Yvain"?
An awful lot of politics seems to be variations on the theme of "let's you and him fight".
An Open Letter to Friendly AI Proponents by Simon Funk (who wrote the After Life novel):
So, in other words, absolutely no engagement with the actual ideas/arguments of the people the 'letter' is addressed to.