Comment author: roland 10 October 2016 12:20:15PM 3 points [-]

Is the following a rationality failure? When I make a stupid mistake that caused some harm I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good thing is that I analyze what I did wrong and learn something from it. The bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?

Comment author: pcm 10 October 2016 04:44:24PM 3 points [-]

I suspect attempted telekinesis is relevant.

Comment author: Ozyrus 26 September 2016 11:25:21PM *  1 point [-]

I've been meditating lately on the possibility of an advanced artificial intelligence modifying its value function, and have even written some excerpts on the topic.

Is it theoretically possible? Has anyone of note written anything about this -- or anyone at all? This question is so, so interesting for me.

My thoughts led me to believe that modifying it is certainly possible in theory, but I could not come to any conclusion about whether an AI would want to do so. I seriously lack a good definition of a value function and an understanding of how it is enforced on the agent. I really want to tackle this problem from a human-centric point of view, but I don't really know if anthropomorphization will work here.

Comment author: pcm 27 September 2016 03:04:58PM 2 points [-]

See ontological crisis for an idea of why it might be hard to preserve a value function.

Comment author: DataPacRat 19 September 2016 06:35:24PM 10 points [-]

As a cryonicist, I'm drafting out a text describing my revival preferences and requests, to be stored along with my other paperwork. (Oddly enough, this isn't a standard practice.) The current draft is here. I'm currently seeking suggestions for improvement, and a lot of the people around here seem to have good heads on their shoulders, so I thought I'd ask for comments here. Any thoughts?

Comment author: pcm 22 September 2016 07:23:33PM 1 point [-]

My equivalent of this document focused more on the risks of unreasonable delays in uploading me. Cryonics organizations have been designed to focus on preservation, which seems likely to bias them toward indefinite delays. This might be especially undesirable in an "Age of Em" scenario.

Instead of your request for a "neutral third-party", I listed several specific people, who I know are comfortable with the idea of uploading, as people whose approval would be evidence that the technology is adequate to upload me. I'm unclear on how hard it would be to find a genuinely neutral third party.

My document is 20 years old now, and I don't have a copy handy. I suppose I should update it soon.

Comment author: WhySpace 23 August 2016 06:26:08PM *  2 points [-]

(1) Given: AI risk comes primarily from AI optimizing for things besides human values.

(2) Given: humans already are optimizing for things besides human values. (or, at least besides our Coherent Extrapolated Volition)

(3) Given: Our world is okay.^[CITATION NEEDED!]

(4) Therefore, imperfect value loading can still result in an okay outcome.

This is, of course, not necessarily always the case for any given imperfect value loading. However, our world serves as a single counterexample to the rule that all imperfect optimization will be disastrous.

(5) Given: A maxipok strategy is optimal. ("Maximize the probability of an okay outcome.")

(6) Given: Partial optimization for human values is easier than total optimization. (Where "partial optimization" is at least close enough to achieve an okay outcome.)

(7) ∴ MIRI should focus on imperfect value loading.

Note that I'm not convinced of several of the givens, so I'm not certain of the conclusion. However, the argument itself looks convincing to me. I've also chosen to leave assumptions like "imperfect value loading results in partial optimization" unstated as part of the definitions of those two terms. However, I'll try to add details to any specific areas, if questioned.

Comment author: pcm 24 August 2016 03:29:20PM 0 points [-]

I expect that MIRI would mostly disagree with claim 6.

Can you suggest something specific that MIRI should change about their agenda?

When I try to imagine problems for which imperfect value loading suggests different plans from perfectionist value loading, I come up with things like "don't worry about whether we use the right set of beings when creating a CEV". But MIRI gives that kind of problem low enough priority that they're acting as if they agreed with imperfect value loading.

Comment author: Wei_Dai 22 July 2016 03:54:22PM 2 points [-]

Anyone else worried about Peter Thiel's support for Donald Trump discrediting Thiel in a lot of people's eyes, and MIRI and AI safety/risk research in general by association?

Comment author: pcm 22 July 2016 06:18:58PM 5 points [-]

No, mainly because Elon Musk's concern about AI risk added more prestige than Thiel had.

Comment author: MrMind 15 July 2016 06:54:26AM 0 points [-]

I regard reading a book as a not-so-trivial investment of time and energy, given the huge quantity of possible books I could be reading right now.
Is there any particular reason to believe Hanson's predictions, so that it makes sense to anticipate the future the way he does?

Comment author: pcm 15 July 2016 06:45:26PM 1 point [-]

There's no particular reason to believe all of his predictions. But that's also true of anyone else who makes as many predictions as the book does (on similar topics).

When you say "anticipate the future the way he does", are you asking whether you should believe there's a 10% chance of his scenario being basically right?

Nobody should have much confidence in such predictions, and when Robin talks explicitly about his confidence, he doesn't sound very confident.

Good forecasters consider multiple models before making predictions (see Tetlock's work). Reading the book is a better way for most people to develop an additional model of how the future might be than reading new LW comments.

Comment author: hofmannsthal 17 June 2016 06:31:21AM 0 points [-]

I appreciate the breadth of topics here, and I trust the readership to recommend something interesting.

I'm looking for an introductory book on non-democratic political systems. I'd be particularly interested in a book that argues some of the core issues in democracy, and proposes alternative solutions.

I often find myself critical of democratic systems ("we shouldn't be voting, I don't trust these people"), but have little to argue for the alternatives when needed. I often hear neoreaction / anarchism thrown around, but I'd actually like to read beyond a Wikipedia article.

Thoughts?

Comment author: pcm 17 June 2016 06:16:48PM 1 point [-]

See Seasteading. No good book on it yet, but one will be published in March (by Joe Quirk and LWer Patri Friedman).

Comment author: Viliam 30 May 2016 08:44:53AM *  -1 points [-]

Some people believe that altruism evolved through helping your relatives, or through helping others so that they help you in return. I was thinking about this; on the surface the idea looks good -- if you already have this system in place, it is easy to see how it benefits those involved -- but that doesn't explain how the system could have appeared in the first place. Does anyone know the standard answer?

Imagine that you are literally the first organism who by random mutation achieved a gene for "helping those who help you". How specifically does this gene increase your fitness, if there is no one else to reciprocate?

Or imagine that you are literally the first organism who by random mutation achieved a gene for "helping your siblings". How specifically does this gene increase your fitness, or the fitness of the gene itself, if your siblings do not have a copy of this gene?

In other words, it seems simple to explain how these kinds of altruism can work when they are already an established system, but it is more difficult to explain how it could work when it is new.

And all this is a huge simplification; for example, I doubt that "helping those who help you" could be achieved by a single mutation, since it involves multiple parts like "noticing that someone helped you", "remembering the individual who helped you", and "helping the individual who helped you in the past". Plus there's the problem of how to start this chain of mutual cooperation.

My guess is that... nygehvfz pbhyq unir ribyirq guebhtu frkhny fryrpgvba. Yrg'f rkcynva vg ol funevat sbbq jvgu bguref. Svefg, vaqvivqhnyf abgvpr jub vf tbbq ng tngurevat sbbq, naq gurl ribyir nggenpgvba gbjneqf tbbq sbbq pbyyrpgbef. Gung znxrf vzzrqvngr frafr orpnhfr vg vapernfrf fheiviny bs gur puvyqera, vs gurl nyfb trg gur trarf tbbq sbe tngurevat sbbq. Nsgre guvf nggenpgvba rkvfgf jvguva gur fcrpvrf, gur arkg fgrc pbhyq or fvtanyyvat: vs lbh unir fbzr rkgen sbbq lbh qba'g npghnyyl arrq, oevat vg naq ivfvoyl qebc vg arne bgure vaqvivqhnyf, fb gung bguref abgvpr lbh unir zber sbbq guna lbh pna rng. Ntnva, guvf znxrf vzzrqvngr frafr, orpnhfr vg znxrf lbh zber nggenpgvir. Abgvpr ubj arvgure "urycvat lbh eryngvirf" abe "urycvat gubfr jub uryc lbh" jnf arprffnel gb ribyir urycvat vaqvfpevzvangryl. Npghnyyl, gubfr pbhyq unir ribyirq yngre, nf shegure vzcebirzragf bs be nqqvgvbaf gb gur vaqvfpevzvangr urycvat.
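(The guess above is rot13-encoded, per the usual spoiler convention. For anyone who wants to decode it programmatically rather than by hand, here is a minimal Python sketch using the standard library's `codecs` module; the `spoiler` string is just the first clause of the encoded paragraph.)

```python
import codecs

# rot13 is an involution: applying it twice returns the original text,
# so codecs.decode and codecs.encode with "rot_13" do the same thing.
spoiler = "nygehvfz pbhyq unir ribyirq guebhtu frkhny fryrpgvba"
decoded = codecs.decode(spoiler, "rot_13")
print(decoded)  # -> altruism could have evolved through sexual selection
```

The same call works on the full paragraph; punctuation and spaces pass through unchanged.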

Comment author: pcm 30 May 2016 06:15:08PM -1 points [-]

I suggest reading Henrich's book The Secret of our Success. It describes a path to increased altruism that doesn't depend on any interesting mutation. It involves selection pressures acting on culture.

Comment author: pcm 24 March 2016 02:04:11AM 0 points [-]

There used to be important differences between stocks and futures (back when futures exchanges used open outcry) that (I think) enabled futures brokers to delay decisions about which customer got which trade price.

Comment author: Lumifer 09 February 2016 04:30:17PM 6 points [-]

A cautionary statement about betting on your beliefs from Tyler Cowen:

Bryan Caplan is pleased that he has won his bet with me, about whether unemployment will fall under five percent. ... The Benthamite side of me will pay Bryan gladly, as I don’t think I’ve ever had a ten dollar expenditure of mine produce such a boost in the utility of another person.

That said, I think this episode is a good example of what is wrong with betting on ideas. Betting tends to lock people into positions, gets them rooting for one outcome over another, it makes the denouement of the bet about the relative status of the people in question, and it produces a celebratory mindset in the victor. That lowers the quality of dialogue and also introspection, just as political campaigns lower the quality of various ideas — too much emphasis on the candidates and the competition.

Comment author: pcm 09 February 2016 07:36:09PM 2 points [-]

It has nearly the opposite effects for ideas I haven't yet bet on but might feel tempted or obligated to bet on.

The bad effects are weaker if I can get out of the bet easily (as is the case on a high-volume prediction market).
