
Comment author: Psy-Kosh 10 September 2014 07:39:12PM 3 points [-]

I'm not sure the commission/omission distinction is really the key here. This becomes clearer by inverting the situation a bit:

Some third party is about to forcibly wirehead all of humanity. How should your moral agent reason about whether to intervene and prevent this?

Comment author: Stuart_Armstrong 15 September 2014 03:06:52PM 0 points [-]

That's interesting - basically here we're trying to educate an AI into human values, but human values are going to swiftly be changed to something different (and bad from our perspective).

I think there's no magical solution - either we build a FAI properly (which is very, very hard), and it would stop the third party, or we have an AI that we're value loading, and we try to prevent our values from being changed while the loading happens.

The omission/commission thing applies to value loading AIs, not to traditional FAI. But I admit it's not the best analogy.

Comment author: ciphergoth 13 September 2014 07:15:25PM 0 points [-]

What do you make of Katja Grace's SIA-based argument for a late Filter?

Comment author: Stuart_Armstrong 15 September 2014 03:02:05PM 1 point [-]

I no longer believe that anthropic probabilities make sense (see http://lesswrong.com/lw/891/anthropic_decision_theory_i_sleeping_beauty_and/ and subsequent posts - search "anthropic decision theory" on Less Wrong); only anthropic decisions do. Applying this to these situations, total utilitarians should (roughly) act as if there was a late filter, while average utilitarians and selfish beings should act as if there was an early filter.
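
Roughly, a toy expected-utility sketch (with invented numbers, not taken from those posts) of how the two come apart without ever assigning an anthropic probability:

    # Toy setup (invented numbers): two hypotheses with equal non-anthropic prior.
    # "Early filter": few civilizations ever reach our stage.
    # "Late filter": many reach our stage, but most are destroyed afterwards.
    P_EARLY, P_LATE = 0.5, 0.5
    N_EARLY, N_LATE = 1, 100      # civilizations at our stage under each hypothesis

    # Each such civilization can pay cost c for a benefit b that only pays off
    # if the filter is late (e.g. guarding against late-filter risks).
    b, c = 3.0, 1.0

    # Total utilitarian: sums utility over every civilization, so the
    # many-civilization (late filter) branch dominates - SIA-like behaviour.
    total_act = P_EARLY * N_EARLY * (-c) + P_LATE * N_LATE * (b - c)   # 99.5

    # Average utilitarian: utility per civilization, so the population size
    # cancels and only the raw prior matters - SSA-like behaviour.
    avg_act = P_EARLY * (-c) + P_LATE * (b - c)                        # 0.5

    print(total_act > 0)   # True: acts as if the late filter were ~99% likely
    print(avg_act > 0)     # True only when b > 2c: acts on the bare 50/50 prior

Here the total utilitarian ends up betting as though the late filter had probability P_LATE*N_LATE / (P_EARLY*N_EARLY + P_LATE*N_LATE) ≈ 0.99 - the SIA-style weighting - while the average utilitarian just acts on the bare prior.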

Comment author: cameroncowan 11 September 2014 07:28:56PM 1 point [-]

That doesn't work, because not wireheading humanity is not the same as doing it and has different implications - and, as you say, whether it was right or wrong won't matter once it is done. Whereas if someone else is about to do it, NOT stopping them is (ostensibly) morally worse than agreeing with it. Inaction is itself an action: if a wrong is occurring, deciding not to stop it is just as bad as doing it. However, choosing not to do something bad is not the same as doing it.

Comment author: Stuart_Armstrong 15 September 2014 02:47:06PM 0 points [-]

I'm arguing against some poorly-thought-out motivations, e.g. "don't do anything that most people would disagree with most of the time". This falls apart if you can act to change what "people would disagree with" through some means other than preference satisfaction.

Comment author: JoshuaMyer 11 September 2014 03:45:24AM *  2 points [-]

Why?

Anything massive traveling between stars would almost certainly be either very slow-turning, constantly in search of fuel, or unconstrained by widely accepted (though possibly non-immutable) physical limitations ... Would we be a fuel source? Perhaps we would represent a chance to learn about life, something we believe to be a relatively rare phenomenon ... There's just not enough information to say why an entity would seek us out without assuming something about its nature ... intelligence wants to be seen? To reformat the universe to suit its needs? An interesting concept. It could certainly evolve as an imperative (probably in a more specific form).

Perhaps you could refer me to more writing on the subject. I've been imagining von Neumann machines crawling through asteroid belts -- Arthur C. Clarke chases them away from a first contact scenario by convincing them we will never conquer the stars. Clearly, I'm missing some links.

Oh and thank you for engaging me. The way you deal with concepts makes me happy.

Comment author: Stuart_Armstrong 15 September 2014 02:43:55PM 0 points [-]

We argue that travelling between galaxies - let alone between stars - is very "easy", for some values of "easy". See http://www.fhi.ox.ac.uk/intergalactic-spreading.pdf or https://www.youtube.com/watch?v=zQTfuI-9jIo&list=UU_qqMD08PFrDfPREoBEL6IQ

Major cosmic restructuring would be trivial (under the assumptions we made) for any star-spanning civilization.

Comment author: chaosmage 12 September 2014 12:32:46PM 3 points [-]

if we make a simple replicator and have it successfully reach another solar system (with possibly habitable planets) then that would seem to demonstrate that the filter is behind us.

Excellent! So, wouldn't that mean that the best way to eliminate x-risk would be to do exactly that?

It is counterintuitive, because "eliminating x-risk" implies some activity, some fixing of something. But we eliminated the risk of devastating asteroid impact not by nuking any dangerous ones, but by mapping all of them and concluding the risk didn't exist. As it happens, that was also much cheaper than any asteroid deflection could have been.

If sending out an interstellar replicator was proof we're further ahead (i.e. less vulnerable) than anything that could have evolved inside this galaxy since the dawn of time, it seems mightily important to become more certain we can do that (without AI). If some variant of our interstellar replicator was capable of enabling intergalactic travel, that'd raise our expectation of comparative invulnerability because we'd know we've gone past obstacles that nothing inside some fraction of our light cone even outside our galaxy has been able to master.

Ideally we'd actually demonstrate that of course, but for the purpose of eliminating (perceived) x-risk, a highly evolved and believable model of how it could be done should go much of the way.

Of course we might find out that self-replicating spacecraft are a lot harder than they look, but that too would be information that is valuable for the long-term survival of our species.
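
As a rough illustration (a toy Bayes update with invented numbers, not a calculation anyone in this thread has made): how much reassurance the demonstration buys depends entirely on how strongly one expects a late filter to have stopped us before reaching that milestone.

    # Toy Bayes update (invented numbers): how much would successfully sending
    # a self-replicating probe to another star shift our credence about where
    # the Great Filter sits?
    p_late = 0.5                  # prior: the filter is still ahead of us
    p_early = 1.0 - p_late        # prior: the filter is behind us

    # Assumed likelihoods of "we demonstrably pull this off": a late filter,
    # whatever it is, would probably have stopped us before that milestone.
    p_demo_given_late = 0.05
    p_demo_given_early = 0.9

    posterior_late = (p_late * p_demo_given_late) / (
        p_late * p_demo_given_late + p_early * p_demo_given_early)
    print(round(posterior_late, 2))   # ~0.05: credence in a filter ahead of us drops sharply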

Armstrong and Sandberg claim the feasibility of self-replicating spacecraft has been a settled matter since the Freitas design in 1980. But that paper, while impressively detailed and a great read, glosses over the exact computing abilities such a system would need, does not mention hardening against interstellar radiation, and probably has a bunch of other problems that I'm not qualified to discover. I haven't looked at all the papers that cite it (yet), but the ones I've seen seem to agree self-replicating spacecraft are plausible.

I posit that greater certainty on that point would be of outsized value to our species. So why aren't we researching it? Am I overlooking something?

Comment author: Stuart_Armstrong 15 September 2014 02:40:45PM 1 point [-]

Armstrong and Sandberg claim the feasibility of self-replicating spacecraft has been a settled matter since the Freitas design in 1980.

Actually, bacteria, seeds and acorns are our strongest arguments for self-replication, along with the fact that humans can generally copy or co-opt natural processes for our own uses.

Comment author: private_messaging 11 September 2014 01:52:00PM *  1 point [-]

If it looks too different, we won't see them in space, though.

Our own intelligence is at the level where it's just barely sufficient to build a civilization when you've got hands, fire, and so on. Note that orcas have much larger brains than humans, and have had those larger brains for quite a long time, yet we're where we are, and they're where they are.

Comment author: Stuart_Armstrong 15 September 2014 02:37:25PM 2 points [-]

Our own intelligence is at the level where it's just barely sufficient to build a civilization when you've got hands, fire, and so on.

Likely because the first beings that could do that, did do that - no need to wait for the evolution of higher intelligence (so, in particular, this doesn't show that higher intelligence couldn't evolve).

Comment author: JoshuaMyer 10 September 2014 01:59:26AM 1 point [-]

Something which cannot be observed and tested lies beyond the realm of science - so how big a signal are we looking for? A pattern in quasar flashes, perhaps? Maybe the existence of unexplained engineering feats from civilizations long dead? The idea that advanced technology would want us to observe it, the existence of vague entities with properties yet to be determined ... these exist as speculations. To attempt to discern a reason for the absence of evidence on these matters is even more speculative.

Perhaps I should clarify: none of the data discussed really helps us narrow down a location for the filter, because we aren't really discussing methods of testing the filter. Its existence is speculative by design. You can't test for something as vaguely defined as intelligent technology.

I do agree that examining other species may yield a better conceptualization of intelligence. I very much like that the discussion has drifted in that direction.

Comment author: Stuart_Armstrong 10 September 2014 03:23:54PM 2 points [-]

A pattern in quasar flashes perhaps?

I'm more thinking of mega-engineering projects - the reforming of galaxies to suit the needs of a civilization - rather than the messy randomness and waste of negentropy that we seem to see.

I'm not assuming that advanced technology would want us to observe it - I'm assuming that advanced technology has no reason to stay hidden from us at a tremendous opportunity cost to itself.

Omission vs commission and conservation of expected moral evidence

2 Stuart_Armstrong 08 September 2014 02:22PM

Consequentialism traditionally doesn't distinguish between acts of commission and acts of omission. Not flipping the lever to the left is equivalent to flipping it to the right.

But there seems to be one clear case where the distinction is important. Consider a moral learning agent. It must act in accordance with human morality and desires, which it is currently uncertain about.

For example, it may consider whether to forcibly wirehead everyone. If it does so, then everyone will agree, for the rest of their existence, that the wireheading was the right thing to do. Therefore, across the whole future span of human preferences, humans agree that wireheading was correct, apart from a very brief period of objection in the immediate future. Given that human preferences are known to be inconsistent, this seems to imply that forcible wireheading is the right thing to do (if you happen to personally approve of forcible wireheading, replace that example with some other forcible rewriting of human preferences).

What went wrong there? Well, this doesn't respect "conservation of moral evidence": the AI got the moral values it wanted, but only through the actions it took. This is very close to the omission/commission distinction. We'd want the AI not to take actions (commission) that determine the (expectation of the) moral evidence it gets. Instead, we'd want the moral evidence to accrue "naturally", without interference and manipulation from the AI (omission).
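
As a toy numerical sketch (invented numbers, and only one crude way of cashing the idea out - not a full proposal), here is the difference between scoring actions by the endorsement they themselves produce and scoring them against the evidence that would accrue without interference:

    # Toy model (invented numbers, one crude formalisation - not a full proposal).
    # P(future humans endorse the action | that action was taken): forcible
    # wireheading manufactures its own endorsement.
    p_endorse_after = {"wirehead": 0.99, "do_nothing": 0.30}

    # Naive value-loading rule: maximise expected future endorsement of
    # whatever you did. The manipulated verdict wins.
    naive_choice = max(p_endorse_after, key=p_endorse_after.get)
    print("naive agent picks:", naive_choice)                  # -> wirehead

    # "Conservation of moral evidence": the verdict above is caused by the
    # action, so it isn't evidence about what human values actually favour.
    # One crude fix: score every candidate action against the verdict humans
    # would reach if the agent did not interfere (the omission baseline).
    p_endorse_unmanipulated = {"wirehead": 0.05, "do_nothing": 0.30}
    conservative_choice = max(p_endorse_unmanipulated, key=p_endorse_unmanipulated.get)
    print("conservative agent picks:", conservative_choice)    # -> do_nothing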

Comment author: JoshuaMyer 07 September 2014 11:38:40PM 1 point [-]

I'm sorry but I think this article's line of reasoning is irreparably biased by the assumption that we don't see any evidence of complex technological life in the universe. It's entirely possible we see it and don't recognize it as such because of the considerable difficulties humans experience when sorting through all the data in the universe looking for a pattern they don't recognize yet.

Technology is defined, to a certain extent, by its newness. What could make us think we would recognize something we've never seen before and had no hand in creating? Most of what we believe to be true about the universe is experimentally verifiable only from our tiny corner of the universe in which we run our experiments. How do we know there aren't intelligent creatures out there just as unaware of us?

All we know for sure is that we (well ... most of us) have not recognized the existence of life-like technology.

Comment author: Stuart_Armstrong 08 September 2014 08:47:46AM 5 points [-]

It's entirely possible we see it and don't recognize it as such because of the considerable difficulties humans experience when sorting through all the data in the universe looking for a pattern they don't recognize yet.

Yes, it's possible. But that argument proves too much: any observation could be advanced technology that we "don't recognize [...] as such". The fossil record could be that, as far as we can tell, etc. We have to reason with what we can reason with, and the setup of the universe - galaxies losing gas and stars, stars burning pointlessly into the void, supernovas incredibly wasteful of resources, etc. - all points to natural forces rather than artificial ones.

It still could be artificial, but there's no evidence of it.

Comment author: MugaSofer 05 September 2014 04:36:16PM 1 point [-]

I'm extremely curious: how did you come to conclude that the Great Filter was probably a particular evolutionary leap?

Comment author: Stuart_Armstrong 05 September 2014 08:14:38PM 1 point [-]

Using bad reasoning: intuition and subjective judgement. The chances of a late great filter just don't seem high enough...
