
Maybe a tangent, but: Are we humans corrigible?

I think about this a lot -- it seems that no matter what I do, I'm not able to prevent a sufficiently motivated attacker from ending my life.

I often observe that people don't all share the same meaning for this word, and that the discrepancy is significant.

YES! This is the study of ethics, I think: "by what rules can we generate an ideal society?"

Do we have a shared meaning for this word?

NO!

This is why ethical formalisms have historically been so problematic.

Overconfident projections of value, based on proxies extrapolated way out of their region of relevance (generally in the service of "legibility"), are the root cause of so much avoidable suffering: http://www.ribbonfarm.com/2010/07/26/a-big-little-idea-called-legibility/
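As a toy illustration of the failure mode (my own sketch, not from the linked essay; the sine "true value" and the linear proxy are invented stand-ins): a proxy fit inside a narrow region can look excellent there and be confidently, wildly wrong once extrapolated outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true value" function, observable only in a narrow region.
def true_value(x):
    return np.sin(x)

# Fit a linear proxy to samples drawn from [0, 1] alone.
x_obs = rng.uniform(0.0, 1.0, 50)
slope, intercept = np.polyfit(x_obs, true_value(x_obs), 1)

def proxy(x):
    return slope * x + intercept

# Inside the fitted region the proxy tracks the truth;
# extrapolated far outside it, it diverges badly.
for x in (0.5, 2.0, 6.0):
    print(f"x={x}: true={true_value(x):+.3f}  proxy={proxy(x):+.3f}")
```

At x=0.5 the proxy is nearly exact; at x=6.0 it predicts a large positive value where the truth is negative. Nothing in the proxy itself warns you when you've left its region of relevance.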

This hits fairly close to home in the rest of the tech industry as our proxies are stressed way beyond their rated capacity: http://timewellspent.io and http://nxhx.org/maximizing/

Moreover, even if we did nail it at one point in time, this thing called "ideal" drifts with progress; see also "value drift".

Will Buckingham suggests that simply sharing stories is the most responsible way forward in https://www.amazon.com/Finding-Our-Sea-legs-Experience-Stories/dp/1899999485 -- digested ad nauseam by https://meaningness.com/

I hope these citations are convincing. Let's continue to talk about what's ideal, but the moment we slip some god-value-proxy in underneath, we're just as screwed as if we'd given up on CEV.


Absolutely. The proper response to this confusion should be: "fix the site to have a third, lower priority level", not "increase the frequency of our hack".


I find broadcast speech in general, and especially recorded narration such as audiobooks, so slow that it provokes distraction.

On top of the added focal intensity, speeding playback up to 2x doubles the time bandwidth. Of course, it's sensitive to my mental state -- sometimes when I'm de-energized I need to slow down to 1.5x, but I'd ideally hover around 2.5x (though software rarely goes above 2x yet).

To get started, listen to something at 1.25x, and crank it up further as you get accustomed to the density.
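If your player tops out at 2x, one workaround is to pre-process the file. A minimal sketch, assuming ffmpeg is installed (the file names are hypothetical): ffmpeg's atempo audio filter is commonly limited to a 0.5-2.0 range per stage, so higher factors are built by chaining stages.

```python
import subprocess

def speed_up(src: str, dst: str, factor: float) -> None:
    """Re-encode an audio file at `factor`x speed using ffmpeg's atempo
    filter, chained so every stage stays within the safe 0.5-2.0 range."""
    stages = []
    while factor > 2.0:
        stages.append("atempo=2.0")
        factor /= 2.0
    stages.append(f"atempo={factor:g}")
    subprocess.run(
        ["ffmpeg", "-i", src, "-filter:a", ",".join(stages), dst],
        check=True,
    )

# 2.5x becomes the filter chain "atempo=2.0,atempo=1.25"
speed_up("audiobook.mp3", "audiobook_2.5x.mp3", 2.5)
```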


Essentially, this sounds like temporal sampling bias. The points about ease of recombination and augmentation bespeak a lack of infrastructure investment in post-text media, not a fundamental property. Yes, communication media begin with text. But the low emotional bandwidth (and low availability of presence in real-time interactions) concretely limits the kinds of transmissions that can be made.

Your writing, however, does raise a spectacular question.

How can we increase the bandwidth of text across the machine/brain barrier?


The biggest difference I see is that driving overloads (or extends) fairly deeply embedded/evolved neural pathways: motion, just with a different set of actuators.

Intelligence is as yet only lightly embedded, and the substrate differs enormously between software and wetware.


I find this an immensely valuable insight: continuity, or "haecceity", is the critical element of self that naive uploading scenarios dismiss. Our current rational conception of self as concept-in-brain has no need for continuity, which is counterintuitive.

We know a good deal about the universe, but we do not yet know it in its entirety. If there were an observer outside of physics, we might suspect they care a great deal about continuity, or their laws might. Depending on your priors, and your willingness to accept that current observational techniques cannot access all-that-there-is, it might be worth placing some value on haecceity near your value of self.

Contrast grow-and-prune uploading with slice-and-scan uploading: the latter will be anathema to the vast majority of humanity; they may "get over it", but it'll be a long battle. And slice-and-scan will probably be much slower to market. Start with Glass and EEGs: we'll get there in our lifetime using grow-and-prune, and our AIs will grow up with mentors they can respect.


Yes, this is at first glance in conflict with our current understanding of the universe. However, it is probably one of the strategies with the best hope of finding a way out of that universe.


It seems the disconnect is between B & C for most people.

But why is the generative simulation (B) not morally equivalent to the replay simulation (C)?

Perhaps because the failure modes are different. Imagine a system sensitive to cosmic rays. In the replay simulation, the Everett bundle is locally stable; isolated blips are largely irrelevant. When each frame causally determines the subsequent steps, the system exhibits a very different signature.
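A toy sketch of that signature difference (my own illustration; the chaotic logistic-map update is an invented stand-in for the simulated physics): in replay mode a blip corrupts exactly one stored frame and nothing downstream, while in generative mode the same blip feeds into every subsequent step and is amplified.

```python
def step(x):
    # Toy chaotic update (logistic map), so small errors compound quickly.
    return 3.9 * x * (1.0 - x)

def generative_run(x0, n, flip_at=None):
    """Each frame is computed from the previous one; a perturbation
    at flip_at propagates into every later frame."""
    xs, x = [], x0
    for i in range(n):
        if i == flip_at:
            x += 1e-9  # simulated cosmic-ray blip
        x = step(x)
        xs.append(x)
    return xs

def replay_run(trace, flip_at=None):
    """Each frame is read back from a stored trace; a perturbation
    corrupts that one frame and nothing downstream."""
    return [x + 1e-9 if i == flip_at else x for i, x in enumerate(trace)]

clean     = generative_run(0.4, 50)
replayed  = replay_run(clean, flip_at=10)
perturbed = generative_run(0.4, 50, flip_at=10)

# Replay: only frame 10 differs, by ~1e-9.
print(max(abs(a - b) for a, b in zip(clean, replayed)))
# Generative: by the final frame the divergence has grown to order 1.
print(abs(clean[-1] - perturbed[-1]))
```

The replayed trace stays pinned to the recording everywhere but the flipped frame; the generative run diverges exponentially after it. The two systems respond to identical noise in categorically different ways.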


Unfortunately, no, what you ask for is not a permissible thing to do on LessWrong.
