
Comment author: polarix 19 January 2017 02:58:19PM 0 points

Maybe a tangent, but: Are we humans corrigible?

I think about this a lot -- it seems that no matter what I do, I'm not able to prevent a sufficiently motivated attacker from ending my life.

Comment author: polarix 19 January 2017 01:36:58AM *  0 points

I often observe that we don't all share the same meaning for the word, and that the discrepancy is significant.

YES! This is the study of ethics, I think: "by what rules can we generate an ideal society?"

Do we have a shared meaning for this word?

NO!

This is why ethical formalisms have historically been so problematic.

Overconfident projections of value, based on proxies extrapolated far beyond their region of relevance (generally in the service of "legibility"), are the root cause of so much avoidable suffering: http://www.ribbonfarm.com/2010/07/26/a-big-little-idea-called-legibility/
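To make the failure mode concrete, here is a toy sketch (the numbers and the functional form are entirely mine) of a proxy calibrated in a narrow regime and then extrapolated far outside it:

```python
# Toy illustration with made-up numbers: a linear "proxy" for value,
# fit where it is locally accurate, breaks down badly when extrapolated.

def true_value(x):
    # The real relationship saturates (diminishing returns).
    return x / (1.0 + x)

def proxy(x):
    # Near x = 0 the true curve has slope 1, so "value = x" looks fine there.
    return x

for x in [0.1, 1.0, 10.0, 100.0]:
    v, p = true_value(x), proxy(x)
    print(f"x={x:6.1f}  true={v:5.2f}  proxy={p:6.1f}  error={p - v:6.1f}")

# Near x = 0.1 the proxy is almost exact; by x = 100 it overstates value
# roughly a hundredfold -- far outside its region of relevance.
```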

This hits fairly close to home in the rest of the tech industry as our proxies are stressed way beyond their rated capacity: http://timewellspent.io and http://nxhx.org/maximizing/

Moreover, even if we did nail it at one point in time, this thing called "ideal" drifts with progress; see also "value drift".

Will Buckingham suggests that simply sharing stories is the most responsible way forward in https://www.amazon.com/Finding-Our-Sea-legs-Experience-Stories/dp/1899999485 -- digested ad nauseam by https://meaningness.com/

I hope these citations are convincing. Let's continue to talk about what's ideal, but once we throw some god-value-proxy in underneath, we're just as screwed as if we gave up on CEV.

Comment author: Error 19 December 2014 02:49:08AM 2 points

The open threads have always seemed terribly inefficient to me. Most forums have a board for "stuff that doesn't belong anywhere else." That seems to be the purpose that the OT is being used for, but it's not terribly effective at it.

Any topic-thread that's posted regularly should really be a subreddit, IMO.

Comment author: polarix 19 December 2014 03:06:08PM 3 points

Absolutely. The proper response to this confusion should be: "fix the site to have a third, lower priority level", not "increase the frequency of our hack".

In response to Podcasts?
Comment author: [deleted] 26 October 2014 12:44:47PM 1 point

Why do you listen to all but a few of the podcasts at 1.5x or 2x speed? I know there's another person on here who does the same with audiobooks (can't remember who). Does it improve retention/enjoyment, or does it simply allow you to get through more items quickly?

In response to comment by [deleted] on Podcasts?
Comment author: polarix 26 October 2014 04:00:35PM *  2 points

I find broadcast speech in general, and especially recorded narration such as audiobooks, so slow as to provoke distraction.

On top of the additional focal intensity, there's double the time bandwidth. Of course, it's sensitive to my mental state -- sometimes when I'm de-energized I need to slow it down to 1.5x, but I'd ideally hover around 2.5x (though software rarely goes above 2x yet).

To get started, listen to something at 1.25x, and crank it up further as you get accustomed to the density.
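If you want to see what you're buying, the arithmetic is trivial; a throwaway sketch (the one-hour episode length is just an assumption for illustration):

```python
# How much wall-clock time each playback speed costs for one episode.
episode_minutes = 60.0  # assumed length, purely illustrative

for speed in [1.0, 1.25, 1.5, 2.0, 2.5]:
    listening = episode_minutes / speed
    saved = episode_minutes - listening
    print(f"{speed:4.2f}x: {listening:5.1f} min listening, {saved:5.1f} min saved")

# At 2x, the hour-long episode takes 30 minutes -- the "double the time
# bandwidth" mentioned above.
```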

Comment author: polarix 14 April 2014 02:39:49PM 0 points

Essentially, this sounds like temporal sampling bias. The points about ease of recombination and augmentation bespeak a lack of infrastructure investment in post-text media, not a fundamental property. Yes, communication media begin with text. But the low emotional bandwidth (and low availability of presence in real-time interactions) concretely limits the kinds of transmissions that can be made.

Your writing, however, does raise a spectacular question.

How can we increase the bandwidth of text across the machine/brain barrier?

Comment author: lmm 30 January 2014 12:21:03PM 11 points

Most things are easier than they look, but writing software that's free of bugs seems to be an exception: people are terrible at it. So I don't share your hope.

In response to comment by lmm on Humans can drive cars
Comment author: polarix 01 February 2014 07:57:06AM 0 points

The biggest difference I see is that driving overloads (or extends) fairly deeply embedded/evolved neural pathways: motion, just with a different set of actuators.

Intelligence is as yet only lightly embedded, and the substrate differs greatly between software and wetware.

Comment author: polarix 25 January 2014 07:53:57PM *  0 points

I find this an immensely valuable insight: continuity, or "haecceity", is the critical element of self that naive uploading scenarios dismiss. Our current rational conception of self as concept-in-brain has no need for continuity, which is counterintuitive.

We know a good deal about the universe, but we do not yet know it in its entirety. If there were an observer outside of physics, we might suspect they care a great deal about continuity, or their laws might. Depending on your priors, and your willingness to accept that current observational techniques cannot access all-that-there-is, it might be worth embedding some value for haecceity near your value of self.

Contrast grow-and-prune uploading with slice-and-scan uploading: the latter will be anathema to the vast majority of humanity; they may "get over it", but it'll be a long battle. And slice-and-scan will probably be much slower to market. Start with Glass and EEGs: we'll get there in our lifetime using grow-and-prune, and our AIs will grow up with mentors they can respect.

Comment author: Mestroyer 15 January 2014 10:16:00AM 10 points

If we pick an appropriate value for the "not alive anymore" penalty, then it won't be so large as to outweigh all other considerations, but enough that situations with unnecessary death will be evaluated as clearly worse than ones where that death could have been prevented.

Under your solution, every life created implies infinite negative utility. Due to thermodynamics or whatever (big rip? other cosmological disaster that happens before heat death?) we can't keep anyone alive forever. No matter how slow the rate of disutility accumulation, the infinite time after the end of all sentience makes it dominate everything else.
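One way to see the divergence at a glance; the notation here is mine, with u(t) the welfare while alive, T the time of death, and ε > 0 the per-unit-time "not alive anymore" penalty:

```latex
U_{\text{total}}
  = \underbrace{\int_{0}^{T} u(t)\,dt}_{\text{finite: welfare while alive}}
  - \underbrace{\int_{T}^{\infty} \varepsilon\,dt}_{\text{divergent for any } \varepsilon > 0}
```

However small ε is, the second term is infinite, so it swamps every finite consideration.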

If I understand you correctly, then your solution is that the utility function actually changes every time someone is created, so before that person is created, you don't care about their death. One weird result of this is that if there will soon be a factory that rapidly creates and then painlessly destroys people, we don't object (and while the factory is running, we feel terrible about everything that has happened in it so far, yet we still don't care to stop it). Or to put it in less weird terms, we won't object to spreading some kind of poison which affects newly developing zygotes, reducing their future lifespan painlessly.
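A minimal sketch of that dynamic, to make the factory example concrete (the function and the encoding are my own, not anything from the thread):

```python
# The utility function in force at a given moment only penalizes deaths of
# people it already cared about when it was adopted.

def utility(events, cared_about):
    score = 0
    for kind, person in events:
        if kind == "destroy" and person in cared_about:
            score -= 1  # death penalty applies only to pre-existing people
    return score

factory_history = [("create", "p1"), ("destroy", "p1"),
                   ("create", "p2"), ("destroy", "p2")]

# Evaluated by the function we hold *before* the factory runs: no objection.
print(utility(factory_history, cared_about=set()))          # -> 0
# Evaluated by the function we hold *afterwards*: the same history is awful,
# yet at no point did the then-current function motivate stopping the factory.
print(utility(factory_history, cared_about={"p1", "p2"}))   # -> -2
```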

There's also the incentive for an agent with this system to self-modify to stop changing their utility function over time.

Comment author: polarix 15 January 2014 03:56:08PM 0 points

Yes, this is at first glance in conflict with our current understanding of the universe. However, it is probably one of the strategies with the best hope of finding a way out of that universe.

Comment author: polarix 14 December 2013 03:53:08PM *  2 points

It seems the disconnect is between B & C for most people.

But why is the generative simulation (B) not morally equivalent to the replay simulation (C)?

Perhaps because the failure modes are different. Imagine a system sensitive to cosmic rays. In the replay simulation, the Everett bundle is locally stable; isolated blips are largely irrelevant. When each frame causally determines the subsequent steps, the system exhibits a very different signature.
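To pin down that difference in failure modes, here is a toy model of my own (not anything from the thread): the same one-bit "cosmic ray" hits a replayed recording versus a generative computation:

```python
# A simple deterministic (and chaotic) update rule standing in for physics.
def step(state):
    return 3.9 * state * (1.0 - state)  # logistic map, chaotic at r = 3.9

# Ground truth: forty frames of history.
N = 40
history = [0.5]
for _ in range(N - 1):
    history.append(step(history[-1]))

# Replay simulation: frames are read back from the record, so a corrupted
# frame is an isolated blip -- no later frame depends on it.
replayed = list(history)
replayed[5] += 1e-6  # the cosmic ray

# Generative simulation: each frame causally determines the next, so the
# same perturbation propagates through every subsequent frame.
generated = history[:6]
generated[5] += 1e-6  # the same cosmic ray
while len(generated) < N:
    generated.append(step(generated[-1]))

for i in (5, 6, 15, 25, 39):
    print(i, abs(replayed[i] - history[i]), abs(generated[i] - history[i]))
# Replay error: 1e-6 at frame 5, exactly zero everywhere else.
# Generative error: nonzero at every frame after 5, and the chaotic dynamics
# amplify it to macroscopic size within a few dozen frames.
```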

Comment author: solipsist 04 October 2013 02:39:17PM 2 points

I'm floating in abstraction. Could you give a concrete story where a society that fixes akrasia suffers? I won't hold you to the particulars of the story, but I'd appreciate a place to plant my feet and generalize from.

Comment author: polarix 05 October 2013 03:33:32AM 3 points

Unfortunately, no, what you ask for is not a permissible thing to do on LessWrong.
