Friendly AI strikes me as an admirable goal. While I'm not yet sure an intelligence explosion is likely, or whether FAI is possible, I've found myself often thinking about it, and I'd like my first post to share a few of those thoughts on FAI with you.

Safe AGI vs Friendly AGI
-Let's assume for now that an Intelligence Explosion is possible, and that an AGI with the ability to somehow improve itself is enough to achieve it.
-Let's define a safe AGI as an above-human general AI that does not threaten humanity or terran life (e.g. FAI, Tool AGI, possibly Oracle AGI).
-Let's define a Friendly AGI as one that *ensures* the continuation of humanity and terran life.
-Let's say an unsafe AGI is any other AGI.
-Safe AGIs must suppress unsafe AGIs in order to be considered Friendly. Here's why:

-If we can build a safe AGI, we probably have the technology to build an unsafe AGI too.
-An unsafe AGI is likely to be built at that point because:
-It's very difficult to conceive of a way that humans alone could permanently stop all humans from developing an unsafe AGI once the steps are known**
-Some people will find the safe AGI's goals unacceptable
-Some people will rationalise, or simply be mistaken in believing, that their AGI design is safe when it is not
-Some people will not care whether their AGI design is safe, because they do not care about other people, or because they hold some extreme beliefs
-Most imaginable unsafe AGIs would outcompete safe AGIs, because they would not necessarily be "hamstrung" by complex goals such as protecting us meatbags from destruction. Tool or Oracle AGIs would obviously not stand a chance, due to their restrictions.
-Therefore, if a safe AGI does not prevent unsafe AGIs from coming into existence, humanity will very likely be destroyed.

-The AGI most likely to prevent unsafe AGIs from being created is one that actively predicts their development and terminates it before or on completion.
-So, to summarise:

-An AGI is very likely only a Friendly AI if it actively suppresses unsafe AGI.
-Oracle and Tool AGIs are not Friendly AIs, just safe AIs, because they don't suppress anything.
-Oracle and Tool AGIs are a bad plan for AI if we want to prevent the destruction of humanity, because hostile AGIs will surely follow.

(**On reflection I cannot be certain of this specific point, but I assume it would take a fairly restrictive regime for this to be wrong. Further comments on this very welcome.)

Other minds problem - Why we should be philosophically careful when attempting to theorise about FAI

I read quite a few comments in AI discussions that I'd characterise as "the best utility function for an FAI is one that values all consciousness". I'm quite concerned that this persists as a deeply held and largely unchallenged assumption amongst some FAI supporters. In general I find consciousness to be an extremely contentious, vague and inconsistently defined concept, but here I want to talk about some specific philosophical failures.

My first concern is that while many AI theorists like to say that consciousness is a physical phenomenon, which seems to imply Monist/Physicalist views, they at the same time don't seem to understand that consciousness is a Dualist concept that is coherent only in a Dualist framework. A Dualist believes there is a thing called a "subject" (very crudely this equates with the mind) and then things called objects (the outside "empirical" world interpreted by that mind). Most of this reasoning begins with Descartes' cogito ergo sum or similar starting points ( https://en.wikipedia.org/wiki/Cartesian_dualism ). Subjective experience, qualia and consciousness make sense if you accept that framework. But if you're a Monist, this arbitrary distinction between a subject and object is generally something you don't accept. In the case of a Physicalist, there's just matter doing stuff. A proper Physicalist doesn't believe in "consciousness" or "subjective experience", there's just brains and the physical human behaviours that occur as a result. Your life exists from a certain point of view, I hear you say? The Physicalist replies, "well a bunch of matter arranged to process information would say and think that, wouldn't it?".

I don't really want to get into whether Dualism or Monism is correct/true, but I want to point out that even if you try to avoid this by deciding Dualism is right and consciousness is a thing, there's yet another, more dangerous problem. The core of the problem is that logically or empirically establishing the existence of minds other than your own is extremely difficult (impossible, according to many). They could just be physical things walking around acting similar to you, but by virtue of something purely mechanical - without actual minds. In philosophy this is called the "other minds problem" ( https://en.wikipedia.org/wiki/Problem_of_other_minds or http://plato.stanford.edu/entries/other-minds/ ). I recommend a proper read of it if the idea seems crazy to you. It's a problem that's been around for centuries, and to date we don't really have any convincing solution (there are some attempts, but they are highly contentious and IMHO also highly problematic). I won't get into it more than that for now; suffice to say that not many people accept that there is a logical/empirical solution to this problem.

Now extrapolate that to an AGI, and the design of its "safe" utility functions. If your AGI is designed as a Dualist (which is necessary if you wish to incorporate "consciousness", "experience" or the like into your design), then you build in a huge risk that the AGI will decide that other minds are unprovable or do not exist. In this case your friendly utility function designed to protect "conscious beings" fails, and the AGI wipes out humanity because it poses a non-zero threat to the only consciousness it can confirm - its own. For this reason I feel "consciousness", "awareness" and "experience" should be left out of FAI utility functions and designs, regardless of the truth of Monism/Dualism, in favour of more straightforward definitions of organisms, intelligence, and observable emotions and intentions. (I personally favour conceptualising any AGI as a sort of extension of biological humanity, but that's a discussion for another day.) My greatest concern is that there is such strong cultural attachment to the concept of consciousness that researchers will be unwilling to properly question the concept at all.
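To make the contrast concrete, here's a toy sketch in Python. Every predicate in it is a hypothetical stand-in for a very hard measurement problem; this shows the shape of the two designs, not a real proposal.

```python
# Toy sketch only: each predicate is a hypothetical stand-in for a
# (very hard) measurement problem. The point is the shape of the two
# designs, not any predicate's implementation.

def is_conscious(entity: dict) -> bool:
    # Placeholder for exactly the judgement the other minds problem
    # says an AGI may never be able to make about anyone but itself.
    return entity.get("conscious", False)

def value_if_conscious(entity: dict) -> float:
    # Fragile: if the AGI concludes other minds are unprovable, this
    # term silently evaluates to zero for everyone but the AGI itself.
    return 1.0 if is_conscious(entity) else 0.0

def value_if_observable(entity: dict) -> float:
    # Grounded in third-person observables; nothing here requires
    # settling whether another mind "really" exists.
    return (1.0 * entity.get("living_organism", False)
            + 1.0 * entity.get("intelligent_behaviour", False)
            + 1.0 * entity.get("observable_emotion", False))

# A human, as seen by an AGI that cannot prove other minds exist:
human = {"living_organism": True, "intelligent_behaviour": True,
         "observable_emotion": True}  # no provable "conscious" key
print(value_if_conscious(human))   # 0.0 - the failure mode described above
print(value_if_observable(human))  # 3.0
```

The point is that the second function never has to settle the other minds problem: every input to it is, at least in principle, third-person observable.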

What if we're not alone?

It seems a little unusual to throw alien life into the mix at this point, but I think it's justified, because an intelligence explosion really puts an interstellar existence well within our civilisation's grasp. Because an intelligence explosion implies a very high rate of change, it makes sense to start considering even the long-term implications early, particularly if the consequences are very serious, as I believe they may be here.

Let's say we successfully achieve an FAI. In order to fulfil its mission of protecting humanity and the biosphere, it begins expanding, colonising and terraforming other planets for potential habitation by Earth-originating life. I would expect this expansion wouldn't really have a limit, because the more numerous the colonies, the less likely it is we could be wiped out by some interstellar disaster.

Of course, we can't really rule out the possibility that we're not alone in the universe, or even the galaxy. If we make it as far as AGI, then it's possible another alien civilisation might reach a very high level of technological advancement too. Or there might be many. If our FAI is friendly to us but basically treats them as paperclip fodder, then potentially that's a big problem. Why? Well:

-Firstly, while a species' first loyalty is to itself, we should consider that it might be morally undesirable to wipe out alien civilisations, particularly as they might be in some distant way "related" (see panspermia) to our own biosphere.
-Secondly, there are conceivable scenarios where alien civilisations might respond to this by destroying our FAI/Earth/the biosphere/humanity. The reason is fairly obvious when you think about it: an expansionist AGI could reasonably be viewed as an attack, or possibly an act of war.

Let's go into a tiny bit more detail. Given that we've not been destroyed by any alien AGI just yet, I can think of a number of possible interstellar scenarios:

(1) There is no other advanced life
(2) There is advanced life, but it is inherently non-expansive (it expands inwards, or refuses to develop dangerous AGI)
(3) There is advanced life, but they have not discovered AGI yet. There could potentially be a race-to-the-finish (FAI) scenario on.
(4) There are already expanding AGIs, but due to physical limits on the expansion rate, we are not aware of them yet (this could use further analysis)
Or one civilisation, or an allied group of civilisations, has developed FAIs and is dominant in the galaxy. They could be:

(5) Whack-a-mole civilisations that destroy all potential competitors as soon as they are identified
(6) Dominators that tolerate civilisations so long as they remain primitive and non-threatening by comparison.
(7) Some sort of interstellar community that allows safe civilisations to join (this community still needs to stomp on dangerous potential rival AGIs)

In the case of (6) or (7), developing a FAI that isn't equipped to deal with alien life will probably result in us being liquidated, or at least partially sanitised in some way. In (1), (2) or (5), it probably doesn't matter what we do in this regard, though in (2) we should consider being nice. In (3) and probably (4), we're going to need a FAI capable of expanding very quickly and disarming potential AGIs (or at least ensuring they are FAIs from our perspective).

The upshot of all this is that we probably want to design safety features into our FAI so that it doesn't destroy alien civilisations/life unless it's a significant threat to us. I think the understandable reaction to this is something along the lines of "create an FAI that values all types of life" or "all intelligent life". I don't exactly disagree, but I think we must be cautious in how we formulate this too.

Say there are many different civilisations in the galaxy. What sort of criteria would ensure that, given some sort of zero-sum scenario, Earth life wouldn't be destroyed? Let's say there was some tiny but non-zero probability that humanity could evade the FAI's efforts to prevent further AGI development. Or perhaps there was some loophole in the types of AGIs that humans were allowed to develop. Wouldn't it be sensible, in this scenario, for a universalist FAI to wipe out humanity to protect the countless other civilisations? Perhaps that is acceptable? Or perhaps not? Or, less drastically, how does the FAI police warfare or other competition between civilisations? A slight change in the way life is quantified and valued could drastically change the outcome for humanity. I'd probably suggest we want to weight the FAI's values to start with human and Earth-biosphere primacy, but then still give some non-zero weighting to other civilisations. There is probably more thought to be done in this area too.
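To make that weighting suggestion concrete, here's a crude sketch in Python. The weights and values are purely illustrative assumptions of mine, not a proposal.

```python
# Crude sketch of the weighting idea: human/Earth-biosphere primacy,
# plus a small but strictly non-zero weight on other civilisations.
# Both weights are illustrative assumptions, not proposals.

W_EARTH = 1.0    # primacy: humanity and the Earth biosphere
W_ALIEN = 0.01   # non-zero, so alien life is never just paperclip fodder

def total_value(earth_value: float, alien_values: list[float]) -> float:
    # A zero-sum conflict only resolves against Earth if the aliens'
    # combined value exceeds ours by more than the weight ratio
    # (here 100:1) - and choosing that ratio is exactly the hard part.
    return W_EARTH * earth_value + W_ALIEN * sum(alien_values)

# Fifty equally valuable alien civilisations still don't outweigh Earth
# under these weights; two hundred would. That sensitivity is the worry.
print(total_value(1.0, [1.0] * 50))   # 1.5, Earth term dominates
print(total_value(1.0, [1.0] * 200))  # 3.0, alien term dominates
```

The sensitivity around that crossover point is exactly the "slight change in the way life is quantified" problem described above.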

Simulation

I want to also briefly note one conceivable way we might safely test Friendly AI designs: simulate worlds/universes of less complexity than our own, make it likely that their inhabitants invent an AGI or FAI, and then closely study the results of these simulations. We could then study failed FAI attempts with much greater safety. It also occurred to me that if we consider the possibility of our universe being a simulated one, then this is a conceivable scenario under which our simulation might have been created. After all, if you're going to simulate something, why not something vital like modelling existential risks? I'm not yet sure of the implications exactly. Maybe we need to consider how it relates to our universe's continued existence, or perhaps it's just another case of Pascal's Mugging. Anyway, I thought I'd mention it and see what people say.
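For what it's worth, the test protocol I'm imagining looks roughly like this sketch (every function and probability here is hypothetical; nothing like run_world exists or is near existing):

```python
# Hypothetical sketch of the proposed protocol: run many
# reduced-complexity simulated worlds, keep the ones whose inhabitants
# built an AGI that failed to be friendly, and study those failures
# from outside, in safety. The probabilities are made up.
import random

def run_world(seed: int) -> dict:
    # Stand-in for simulating a world far simpler than our own and
    # watching whether its inhabitants build an AGI, and whether that
    # AGI turns out friendly.
    rng = random.Random(seed)
    agi_built = rng.random() < 0.3
    friendly = agi_built and rng.random() < 0.1
    return {"agi_built": agi_built, "agi_was_friendly": friendly}

failed_fai_attempts = []
for seed in range(10_000):
    outcome = run_world(seed)
    if outcome["agi_built"] and not outcome["agi_was_friendly"]:
        # The interesting cases: failures we can examine from outside.
        failed_fai_attempts.append(outcome)

print(f"{len(failed_fai_attempts)} failed FAI attempts to study")
```

Even as a toy, it makes the appeal clear: the failures pile up inside the simulation, where studying them costs us nothing.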

A playground for FAI theories

I want to lastly mention this link (https://www.reddit.com/r/LessWrongLounge/comments/2f3y53/the_ai_game/). Basically it's a challenge for people to briefly describe an FAI goal-set, and for others to respond by telling them how it will all go horribly wrong. I want to suggest this is a very worthwhile discussion - not because its content will include rigorous theories that are directly translatable into utility functions, because very clearly it won't, but because a well-developed thread of this kind would be a mixing pot of ideas and a good introduction to common known mistakes in thinking about FAI. We should encourage a slightly more serious version of this.

Thanks

FAI and AGI are very interesting topics. I don't consider myself able to really discern whether such things will occur, but it's an interesting and potentially vital topic. I'm looking forward to a bit of feedback on my first LW post. Thanks for reading!

Comments

My first concern is that while many AI theorists like to say that consciousness is a physical phenomenon, which seems to imply Monist/Physicalist views, they at the same time don't seem to understand that consciousness is a Dualist concept that is coherent only in a Dualist framework.

Maybe they are just less pessimistic.

A Dualist believes there is a thing called a "subject"

It's not that non dualists don't.

But if you're a Monist, this arbitrary distinction between a subject and object is generally something you don't accept. In the case of a Physicalist, there's just matter doing stuff.

Which is to say that physicalists accept consciousness and subjects so long as they are just matter doing stuff.

A proper Physicalist doesn't believe in "consciousness" or "subjective experience",

An eliminativist doesn't. Other physicalists do, and consider themselves proper.

your AGI is designed as a Dualist (which is necessary if you wish to incorporate "consciousness", "experience" or the like into your design),

That doesn't follow. Taking consciousness seriously isn't that exclusive. Even qualiaphobes don't think torture is OK.

Thanks for engagement on the philosophical side of things. I'll politely beg to differ on a couple of points:

Maybe they are just less pessimistic.

Pessimism/optimism doesn't seem like an appropriate emotion for rational thought on the nature of universe or mind? Perhaps I misunderstand.

It's not that non dualists don't.

I'm fairly certain strict Monists don't believe in a subject as a metaphysical category the way Dualists do.

Other physicalists do

I don't think they do. For example, in the case of "subjective experience" they'd not regard the subjective part as meaningful in the way a Dualist does. Only Dualists see the subject and object as legitimate metaphysical categories - that's what the "Dual" in Dualism is!

Even qualiaphobes don't think torture is OK.

There's plenty of other reasons for that beyond accepting the subject. It doesn't follow that opposition to torture implies belief in consciousness.

Again thanks for engaging on the philosophy, it is actually much appreciated.

I can (and do) believe that consciousness and subjective experience are things that exist, and are things that are important, without believing that they are in some kind of separate metaphysical category.

I understand, but I just want to urge you to examine the details of that really closely, starting with examining "consciousness"'s place in Dualist thought. What I'm suggesting is that many of us have got a concept from a school of thought you explicitly disagree with embedded in your thinking, and that's worth looking into. It's always alluring to dismiss things that run contrary to the existence of something we feel is important, but it's sometimes in those rare times when we question our core values and thinking that we make the most profound leaps forward.

I urge you to be less of a dismissive, lecturing dick when talking about consciousness.

What I'm suggesting is that many of us have got a concept from a school of thought you explicitly disagree with embedded in your thinking,

What concept? The concept of consciousness or the concept of consciousness as fundamental?

Maybe we have a concept of consciousness because we are conscious.

Maybe we have a concept of gods because we are gods? I don't think that logic works. If someone is a physicalist then they can't assume consciousness a priori. In which case, how can observation of brains and behaviours justify a concept like consciousness? The only way it can arise is out of a mind-body separation (Dualism).

Maybe we have a concept of rocks, because there are rocks.

It isn't a question of all sentences of the form "we have a concept of X because X exists" being analytically true. It is a question of having evidence of specific things. The other minds problem is the other minds problem because we all have evidence of our own minds.

If someone is physicalist then they can't assume consciousness a priori. In which case, how can observation of brains and behaviours justify a concept like consciousness?

1 I am aware of my own consciousness

2 my own consciousness must be an outcome of the physical operation of my brain

3 similarly operating brains must be similarly conscious

See other post. Cheers for discussion.

I mean pessimism about the prospect of physicalistically explaining consciousness.

Materialists and physicalists don't believe in subjectivity and consciousness the way that dualists and idealists do... and that doesn't add up to only dualists believing in consciousness. What is unclear is why an AI would need to believe in consciousness in the way a dualist does, as a separate ontological category, in order to be ethical.

I wouldn't say that AI needs to refer to consciousness at all to be ethical. I think it will be much safer if we design it without reference to the problematic concept. Consciousness is a word so deeply embedded in our culture's thinking that we don't remember the philosophical context from which it is derived. I just want to get people to ask: where does the concept come from? Until people decide to ask that, this is probably all crazy talk to them.

I can't claim to have read everyone's thoughts ever, so perhaps someone claiming to be a Monist believes that, but I do know you can't both be a Monist and believe in a Mind-Body duality. That's by definition. And I also know you can't justify something along the lines of "consciousness" from observation of people's behaviour and brains - if we do, it's because we're bringing the concept along with us as part of a Dualist perspective, or as part of latent Dualism. The only way you can establish a need for the concept is through Mind-Body separation - otherwise you've already got all the stuff you need to explain humans - brains, neurons, behaviours etc. The need to plonk "consciousness" on the top of all that is the latent Dualism I'm talking about in some Physicalists.

The reason an AI would have to believe in consciousness in a Dualist way is the same - because it will not be able to induct such a thing as a "consciousness" from observations. If somehow we managed to cram it in there and give the AI faulty logic, apart from the general unpredictability that implies (an AI with faulty logic?), the AI may realise the same philosophical problem at some point and classify itself or others as without consciousness (some variant of the other minds problem), thus reducing them to the same importance as paperclips or whatever else.

I do know you can't both be a Monist and believe in a Mind-Body duality.

Nobody said otherwise. You keep conflating consciousness with ontologically fundamental consciousness.

And I also know you can't justify something along the lines of "consciousness" from observation of people's behaviour and brains - if we do, it's because we're bringing the concept along with us as part of a Dualist perspective, or as part of latent Dualism

Justify which concept of consciousness? Justify how? We believe others are conscious because we are wired up to, via mirror neurons and so on. But that's a kind of compulsion. The belief can be justified in a number of ways. A physicalist can argue that a normally functioning brain will have a normal consciousness, because that is all that is needed; there is no nonphysical element to go missing. Dualism is no help at all with the other minds problem, because it posits an undetectable, nonphysical element that could go missing, leaving a zombie.

The only way you can establish a need for the concept is through Mind-Body separation - otherwise you've already got all the stuff you need to explain humans - brains, neurons, behaviours etc. The need to plonk "consciousness" on the top of all that is the latent Dualism I'm talking about in some Physicalists.

You are conflating consciousness as a posit needed to explain something else with consciousness as a phenomenon to explain. Whatever I believe, it seems to me that I am conscious, and that needs explaining.

The reason an AI would have to believe in consciousness in a Dualist way is the same - because it will not be able to induct such a thing as a "consciousness" from observations.

Because observations can't give even probabilistic support? Because the physicalist argument doesn't work? Because it wouldn't have a concept of consciousness? Because it isn't conscious itself?

faulty logic

About what? If it judged you to be conscious, would it be making a mistake?

You keep conflating consciousness with ontologically fundamental consciousness.

I'm saying that the only sound logic justifying belief in consciousness arises out of Dualism. (Please note I'm not trying to convince you to be a Monist or a Dualist.) Or to put it another way, Physicalism offers no justification for belief in consciousness of either type.

If consciousness is a thing we should be able to forget about it, and then rederive the concept, right? So in that spirit, if you're a Monist, ask yourself what was the point where you discovered or learnt about consciousness. What moment did you think, "that thing there, let's call it a consciousness"? You didn't look into a brain and find an organ or electrical pattern and then later decide to give it a name, right? If you're like 99.99% of people, you learnt about it much more philosophically. Yet, if you're a Physicalist, your belief in objects is derived from empirical data about matter. You observe the matter, and identify objects and processes through that observation. Study of the brain doesn't yield that unless you bring the concept of consciousness along with you beforehand, so consciousness for Physicalists is really in the same class as other hidden objects which we can also imagine and can't disprove (I'm looking at you, Loki/Zeus etc).

I'll leave it at that and let you get the last word in, because even if you're willing to consider this, I appear to be offending some other people who are becoming kinda aggressive. Thanks for discussion.

And I keep saying that being conscious means being aware that you are conscious, and that is empirical evidence of consciousness.

I can abandon and then recover the concept of consciousness, because there is stuff of which I am aware, but of which other people are not aware, stuff that is private to me, and "consciousness" is the accustomed label for the awareness, and "subjective" is the accustomed label for the privacy.

Metaphysical beliefs, physicalism and so on, are not relevant. What is relevant is where you are willing to look for evidence. Most physicalists are willing to accept the introspective evidence for consciousness. You seem to think that the concept of consciousness cannot be recovered on the basis of evidence, and what you seem to mean by that is that you cannot detect someone else's consciousness. You have implicitly decided that your own introspective, subjective first person evidence doesn't count. That's an unusual attitude, which is why you have ended up with a minority opinion.

I don't have external OR introspective evidence of Loki and Zeus. The fact that you consider consciousness to fall into the same category as Zeus is another indication of your disregard of introspection.

I want to reply so bad it hurts, but I'll resist. Thanks for the convo.

Yeah, well, replying could lead to updating or something crazy like that.

*sigh* I agree with that sentiment. However, conversations where parties become entrenched should probably be called off. Do you really feel this could end in opinions being changed? I perceive your tone as slightly dismissive - am I wrong to think this might indicate non-willingness to move at all on the issue?

I don't mean to imply anything personal. I still feel you're overlooking an important point about the ideas you're referring to having fundamentally dualist foundations when you take a proper look at their epistemology. You refer to introspection, but introspection (aside from in casual lay usage) is properly a Dualist concept - it implies mind-body separation, and it is not in any way empirical knowledge. Even more prominently, the use of the word "subjective" is almost the definition of bringing Dualism into the discussion, because the subject in "subjective" comes directly from Descartes' separation of mind and body.

If someone wishes to be a Monist, wouldn't they start by not assuming Dualist concepts? They'd start with the reliable empirical evidence (neuroscience etc) and approach thought as interactions between internal brain states. They wouldn't conceptualise those interactions using Dualist terminology like "consciousness", at least in any discussion where there was precise science or important issues to be considered.

I thought this was a well-explained and epistemologically straightforward part of my post. The general reaction has appeared to me to be immediate rejection, without questions or attempts at clarification. Actually, I'm disappointed that the part people most want to reject so utterly is the only part getting much attention. Every sense I get is that the people who have replied can't even consider the possibility that there is a problem with the way consciousness is thought about in FAI discussions. That worries me, but I can't see the possibility of movement at this point, so I'm not terribly enthusiastic about continuing this repetition of the various positions. I'm happy to continue if you find the concept interesting, and I guess I'm at least getting comments, but if you feel that you've already made up your mind, let's not waste any more time.

People rarely change their minds outright in a discussion, but often update on the meta level... for instance about how contentious their claims are, or how strong the arguments for or against are.

Since you are putting forward the minority opinion here...you're aware of that, right?...you are prima facie more likely to be the one who is missing something. No, that is not my sole argument.

Obviously, I think I am doing the epistemology right...but I have only been studying philosophy for 35 years, so I am sure I have plenty to learn.

Who told you that introspection implies separation of mind and body? For one thing, it seems to work whatever you believe. For another, the majority of physicalists who aren't eliminativists don't see the problem: they see human introspection as a more sophisticated version of a computer's ability to report on its free disk space or whatever.

But then you say you are talking about proper intuition... which would be a no-true-Scotsman argument.

Who told you that subjectivity comes from Descartes? I've seen it explained by physicalists as due to Loebian limitations. In any case, a historical premise, that X comes from Y, does not support a conceptual claim that belief in X is not rationally consistent without belief in Y.

Who told you that introspection isn't empirical evidence? Introspective evidence is a complex subject that doesn't summarize down that way.

I can consider the idea that there is something wrong with the concept of consciousness ... I am... but I am not seeing good arguments...it all boils down to you supporting minority opinions with idiosyncratic definitions.

Ok thanks for this comment.

studying philosophy for 35 years

Stealthy appeal to authority, but ok. I can see you're a good philosopher; I wouldn't seek to question your credibility as a philosopher, but I do wish to call into question this particular position, and I hope you'll come with me on this :-)

Who told you that introspection implies separation of mind and body?

I wrote on this topic at uni, but you'll have to forgive me if I haven't got proper sources handy...

"The sharp distinction between subject and object corresponds to the distinction, in the philosophy of René Descartes, between thought and extension. Descartes believed that thought (subjectivity) was the essence of the mind, and that extension (the occupation of space) was the essence of matter." [wikipedia omg]

I'll see if I can find a better source. I hope you'll resist the temptation to attack the source for now, as it's pretty much the same as a range of explanations I ran into at uni. "Subject" can be directly linked back to the Cartesian separation.

But then you say you are talking about proper intuition

I didn't mention intuition. You're right that "proper" isn't the proper language to use here :-) I should have said thorough. However, I think my point is clear either way - it's a characterisation of the literature. I guess we do perceive that literature very differently for now. I wasn't aware that my position was a minority one in wider philosophy, or do you mean on LW?

Who told you that introspection isn't empirical evidence? Introspective evidence is a complex subject that doesn't summarize down that way.

I haven't actually seen introspection discussed much by name in philosophy, usually it's consciousness, subject/object etc. I infer that it implies subjective experience and is by definition not empirical. So my position, to clarify, is not that introspection is false, but rather that introspection in the way we are talking about it here is framing a Monist perception of the world by arbitrarily "importing" a Dualist term (or a term that at least very strongly implies subjectivity). Though we might argue for its "usefulness", this is ultimately unhelpful because we are tempted to make another "useful" leap to "consciousness", which compounds otherwise small problems.

I believe that if one is a consistent physicalist, then empirical evidence (the vast majority of definitions of this refer to sensory data as a prerequisite of "empirical") would be examined as reliable, i.e. primary, without framing that evidence using a posteriori or idealist concepts. So you get the brain and behaviours. The introspective aspects, which you are right, a physicalist does not need to entirely deny, are then framed using the reliable (as claimed by physicalists) empirically established concepts. So in that sense the brain can interact with itself, but there is no particular thing in the study of the brain to suggest reference to concepts that have historically been used in Dualism, such as consciousness. Any usage of those terms is merely rhetorical (communicating with lay people, or discussing issues with Dualists), and they are not treated as legitimate categories for philosophical thought. Those concepts might loosely stretch over the same areas, but for the physicalist they are very different from categories like "the brain", which are non-arbitrary because they appear to be suggested "by the evidence".

Again, I don't wish to claim Dualism or Monism or whatever is true (can of worms). I also don't wish to claim that all Physicalists think what I just described - I haven't met them all or read everyone's work. What I wish to claim is that a consistently physicalist position implies the rejection of justifications for concepts that are epistemologically Dualist, and I also wish to claim that acceptance of consciousness, because it does not emerge from empirical sense data, relies on acceptance of the "subject", which can be directly traced to Descartes' separation of mind and body (the "subject" and its tools for interacting with "objects"), i.e. Dualism. Therefore the consistent Physicalist does not accept consciousness.

I am trying not to appeal to authority. I like unconventional claims. I also like good arguments. I am trying to get you to give a good argument for your unconventional claim.

I wasn't aware that my position was a minority one in wider philosophy, or do you mean on LW?

Both. Well, the claim that consciousness is ontologically fundamental is a dualist/idealist claim. The claim that consciousness exists at all isn't. You don't seem to put much weight on the qualification "ontologically fundamental".

" René Descartes, between thought and extension. Descartes believed that thought (subjectivity) was the essence of the mind, and that extension (the occupation of space) was the essence of matter." [wikipedia omg]"

What you need is evidence that monists don't or can't or shouldn't believe in consciousness or subjectivity or introspection. Evidence that dualists do is not equivalent.

I haven't actually seen introspection discussed much by name in philosophy, usually it's consciousness, subject/object etc.

There's an article on SEP.

[introspection] is by definition not empirical.

Where did you see the definition? In any case, introspection is widely used in psychology.

So in that sense the brain can interact with itself, but there is no particular thing in the study of the brain to suggest reference to concepts that have historically been used in Dualism, such as consciousness.

There are plenty, because of the way it is defined... as self awareness or higher order thought. Its use in dualism doesn't counteract that... particularly as it is not exclusive of its use in physicalism.

Any usage of those terms is merely rhetorical (communicating with lay people, or discussing issues with Dualists), and they are not treated as legitimate categories for philosophical thought.

Says who?

What I wish to claim is that a consistently physicalist position implies the rejection of justifications for concepts that are epistemologically Dualist

You haven't demonstrated that any concepts are inherently dualist, and physicalists clearly do use terms like consciousness.

I also wish to claim that acceptance of consciousness, because it does not emerge from empirical sense data,

Here's an experiment:

Stand next to someone.

Without speaking, think about something hard to guess.

Ask them what it is.

If they don't know, you have just proved you have private thoughts, of which you are aware.

:-( I'm disappointed at this outcome. I think you're mentally avoiding the core issue here - but I guess my saying so is not going to achieve much. I'll answer some of your points quickly and make this my last post in this subthread.

What you need is evidence that monists don't or can't or shouldn't believe in consciousness or subjectivity or introspection. Evidence that dualists do is not equivalent.

You're twisting my claim. Someone can't disprove a pure concept or schema - asking them to do so is a red herring. Instead one ought to prove rather than assume the appropriateness of a concept. I've pointed out that in order to derive a concept of "consciousness", you have to rely on an understanding of it as "subjective", and that subjective is a Dualist term derived directly from mind-body separation. As you've basically agreed to the first part of that, and haven't mounted any substantial objection to the second, I honestly cannot fathom your further insistence that consciousness can be Physicalist.

If they don't know, you have just proved you have private thoughts, of which you are aware.

I didn't claim there were no private thoughts. A Physicalist might accept there is something like private thought, but they wouldn't then conceptualise private thought using arguments that rely on "subjective" and therefore Dualism. They'd seek to develop a schema arising out of physical reality.

Where did you see the definition? In any case, introspection is widely used in psychology.

That in no way shows that it's empirical. Empirical means sense data. You can't get sense-data for your own thoughts, or your consciousness. Sure you can operate under the assumption that thoughts are happening, but to select a priori formulations based on purely subjective experience is treating your subjective thought as categorically different from sense data (concepts created independent of sense-data). Those different categories - THAT'S DUALISM.

Clarify for me - are you denying or accepting that "subject" is directly tied into the Cartesian separation? It's simply not possible for claims to rely on subject/object distinctions and not to rely on the mind-body distinction.

We've really exhausted the productive side of this discussion long ago, so let me conclude my posts on this by suggesting a new way to look at this - say you had someone who had never heard of "consciousness", and who was also a Physicalist. They have thoughts, they accept that they have thoughts, but because they feel they are a physical organism with a physical brain, they also state that as a physical system their brain cannot include a reliable map of itself. This means one's own mind's structure is inaccessible - accurate introspection is unreliable at best. What are you possibly going to say to them to convince them that consciousness is a thing, without treating the mind as reliable over and beyond their empirical knowledge / sense data? It's not enough to say that consciousness might be there - so might Freud's "ego" and "id" or all sorts of historical concepts based on "subjective experience". You instead have to go to the empirical evidence without conceptual preconceptions and work backward from there. To work from a priori thought simply isn't reliable in the way it is for a Dualist.

TLDR; A Physicalist conceptualises the mental in terms of the physical - not the other way around.

If you ever want to really explore the concept, play the Physicalist with no knowledge of consciousness and take a highly skeptical view of the concept. Refuse to believe in it unless proof is given and see what happens. Who knows - you might be able to mount a more influential enunciation of its impossibility than me.

That's all from me for now.

You're twisting my claim. Someone can't disprove a pure concept or schema - asking them to do so is a red herring

I didn't say anything about disproving a concept. What you need to disprove is the claim that physicalists employ the concept of consciousness. That is not the concept itself.


Instead one ought to prove rather than assume the appropriateness of a concept.

One ought to provide evidence for extraordinary claims.

I've pointed out that in order to derive a concept of "consciousness", you have to rely on an understanding of it as "subjective", and that subjective is a Dualist term derived directly from mind-body separation

In order to support your claim about consciousness, you have made an identical claim about subjectivity, which is equally in need of support, and equally unsupported. That is going in circles.

As you've basically agreed to the first part of that, and haven't mounted any substantial objection to the second,

What is the second claim even asserting? Subjective is a term used by dualists? Yes. It is only used by dualists? No. I've lost count of the number of physicalists who have informed me of the "fact" that morality is subjective...

I honestly cannot fathom your further insistence that consciousness can be Physicalist.

See your own claims that the MENTAL can be explained physically, below.

I didn't claim there were no private thoughts. A Physicalist might accept there is something like private thought, but they wouldn't then conceptualise private thought using arguments that rely on "subjective" and therefore Dualism.

How do you know? As it happens, the meaning of "subjective" is closer to "private mental event" than it is to "non-physical mind stuff". You haven't really argued against that, since vague claims that the two terms have a common origin, or are used by the same people, don't establish synonymity.

They'd seek to develop a schema arising out of physical reality.

As opposed to what? "Subjective" has a primarily epistemological meaning... you know that, right? Epistemology is largely orthogonal to ontology... you know that, right?

In any case, introspection is widely used in psychology.

That in no way shows that it's empirical. Empirical means sense data. You can't get sense-data for your own thoughts, or your consciousness.

If your computer tells you it is low on memory, is that not empirical?

In any case, coming up with an idiosyncratic definition of empirical that is narrower than the definition actual physicalists and scientists use proves nothing.

Sure you can operate under the assumption that thoughts are happening,

!!!

There are no thoughts happening to you?

but to select a priori formulations based on purely subjective experience

Whatever that means.

is treating your subjective thought as categorically different from sense data (concepts created independent of sense-data). Those different categories - THAT'S DUALISM.

No, dualism is not having different categories, or philosophers would be arguing about Pepsi-Coke dualism.

Dualism is about ONTOLOGICAL categories.

Clarify for me - are you denying or accepting that "subject" is directly tied into the Cartesian separation?

I would summarize that as hopelessly vague.

It's simply not possible for claims to rely on subject/object distinctions and not to rely on the mind-body distinction.

You need to argue that point. I can't see any connection at all. Define the subject as the perceiver of states of affairs external to itself (like the observer in physics)... where is the immaterial mind there?

We've really exhausted the productive side of this discussion long ago, so let me conclude my posts on this by suggesting a new way to look at this - say you had someone who had never heard of "consciousness", and who was also a Physicalist. They have thoughts, they accept that they have thoughts, but because they feel they are a physical organism with a physical brain, they also state that as a physical system their brain cannot include a reliable map of itself.

I've never heard them do that. There are reasons why one wouldn't expect a combination of complete reliability and total accuracy. But that would be setting the bar too high anyway. Maps can be judged reliable enough, and accurate enough, even though they don't go down to the blade-of-grass level.

In fact it would be bad news if all mental content were accessible to introspection: the existence of unconscious mentality is part of the standard theory of consciousness, which almost everyone believes in, including most physicalists.

This means one's own mind's structure is inaccessible - accurate introspection is unreliable at best.

So? You can set the bar at a place where introspection undershoots it. For that matter, you can set the bar at a place where conventional empiricism undershoots, since that isn't 100% reliable either... ask Pons and Fleischmann.

Backtrack: the point was to demonstrate that consciousness, in some sense, exists. Since consciousness is self awareness, any level of introspection indicates some nonzero level of consciousness. Introspection about anything in particular is not required. Consciousness is not supposed to be all embracing - it is contrasted with unconscious mentality, after all - so all encompassing introspection is not needed as evidence for it.

What are you possibly going to say to them to convince them that consciousness is a thing

I am not in the business of talking about consciousness as a thing. That is your terminology. In fact there are at least three claims here:

1 Consciousness doesn't exist at all. (Eliminativism)
2 Consciousness exists as a reducible physical phenomenon. (Physicalism)
3 Consciousness exists as an irreducible non-physical phenomenon. (Dualism)

You keep trying to squeeze those three propositions into a scheme that has only two slots, "thing" and "not thing", and it keeps not working. It inevitably loses information. Three doesn't go into two. To understand what someone else is saying, you need to interpret it in terms of their categories, not yours.

without treating the mind as reliable over and beyond their empirical knowledge / sense data.

Nobody has at any stage said anything to imply that introspection is, or needs to be, more reliable than conventional empiricism.

It's not enough to say that consciousness might be there - so might Freud's "ego" and "id" or all sorts of historical concepts based on "subjective experience".

Who said otherwise? I have conceded that eliminativism, position 1, exists, so I have tacitly conceded that it is a challenge to position 2. I am not treating position 2 as an unproblematic default. But you HAVE been treating position 1 as a default.

Why do you insist that evidence for position 2 must be very strong, that nothing short of 100% accurate and reliable data can support it? Could it be that you are conflating position 2 with position 3? Position 3, the claim that consciousness requires its own ontology, IS an extraordinary claim, requiring strong evidence. Is that what you are doing when you read "ontologically fundamental" as " "?

You instead have to go to the empirical evidence without conceptual preconceptions and work backward from there.

You don't have evidence that that is possible at all, nor that it leads to the result you expect.

To work from a priori thought simply isn't reliable in the way it is for a Dualist. TLDR; A Physicalist conceptualises the mental in terms of the physical - not the other way around.

A type 2 position is to conceptualise the mental in terms of the physical, and to conceptualise consciousness in terms of the physical, and to conceptualise the subjective in terms of the physical...

That is not the same as rejecting the mental (etc) wholesale.

This isn't productive. As you've insisted on a long and highly critical response again, I sadly feel I need to briefly reply.

What you need to disprove is the claim that physicalists employ the concept of consciousness

No. I merely show that the core claims of physicalism and this use of consciousness are incompatible. That's what I've done. Whether some particular physicalists choose to employ some concept is a different question and a massive red herring, as I already said.

One ought to provide evidence for extraordinary claims.

Whereas ordinary claims don't need evidence? Could you be presenting your claims as "ordinary" to avoid the burden of evidence?

standard theory of consciousness, which almost everyone believes in, including most physicalists.

A "standard" theory of consciousness, that almost everyone believes in, including most physicalist, and presumably most dualists too? Dualists and physicalists have agreed on the nature of consciousness? I think you've gone waaaaay into the realms of fantasy on this one.

1 Consciousness doesn't exist at all. (Eliminativism)

You have warped my position again. I didn't argue it didn't exist, I argued that it is a concept that is rooted in dualism. I explained why. I argued that a physicalist would be consistent if they instead used concepts drawn from empirical investigations of the brain. You seem to feel that the physical should be conceptualised based upon the mental, rather than the other way around. That position isn't compatible with physicalism, because it implies treating the mental as categorically superior. Doing so is not the ontologically neutral position you are presenting it as.

Nobody has at any stage said anything to imply that introspection is, or needs to be, more reliable than conventional empiricism.

If so then why are you treating concepts derived from pure introspection as the superior schema to categorise empirical evidence?

You need to argue that point ... [etc.]

I did; you selectively ignored the arguments. This conversation has become largely pointless. Perhaps you feel you can achieve some goal of "winning" by merely repeating yourself, achieving "victory" when the other person loses faith in the merit of the conversation. Your basic approach seems to be: (1) find some minor points that you can rephrase and attack; (2) simply ignore the main points and claim over and over again that the main proposition hasn't been proven; (3) deny the need to support your own claims when asked, because they are conventional or "ordinary". As such I think it would be a mistake for me to see you as honestly engaged with my propositions here. That's a shame, because I think you would be a very interesting person to talk to if you weren't so eager to "win". Good luck and goodbye.

Whereas ordinary claims don't need evidence?

They don't need the re-presentation of existing evidence.

A "standard" theory of consciousness, that almost everyone believes in, including most physicalist, and presumably most dualists too? Dualists and physicalists have agreed on the nature of consciousness?

They disagree about some things, and they agree enough to be talking about the same thing. Disagreement requires commonalities, otherwise it's just miscommunication.

1 Consciousness doesn't exist at all. (Eliminativism)

You have warped my position again. I didn't argue it didn't exist,

I didn't say you were an eliminativist. I said you were shoehorning three categories into two. What is your response to that?

I explained why. I argued that a physicalist would be consistent if they instead used concepts drawn from empirical investigations of the brain. You seem to feel that the physical should be conceptualised based upon the mental, rather than the other way around.

Both versions are naive. The explanatory process doesn't start with a perfect set of concepts... reality isn't pre-labelled. The explanatory process starts with a set of "folk" or prima facie concepts, each of which may be retained, modified, or discarded as things are better understood. You can't start from nothing, because you have to be able to state your explanandum - you have to state which phenomenon you are trying to explain. But having to have a starting point does not prejudice the process forever, since the modification and abandonment options are available. For instance, the concept of phlogiston was abandoned, whereas the concept of the atom was modified to no longer require indivisibility. Heat is a favourite example of a reductive explanation. The concept of heat as something to be explained was retained, but the earlier, non-reductive explanation of heat as a kind of substance was abandoned, in favour of identifying heat with molecular motion. Since molecular motion exists, heat exists, but it doesn't exist separately - dualistically - from everything else. This style of explanation is what non-eliminative physicalists, the type 2 position, are aiming at.

That position isn't compatible with physicalism, because it implies treating the mental as categorically superior.

Your background assumptions are wrong. There aren't any inherently, unchangeably, mental concepts. If you can reduce something to physics, like heat, then it's physical. You don't know in advance what you can reduce. The different positions on the nature of consciousness are different guesses or bets on the outcome. Non-eliminative physicalism, the type 2 position, is a bet on the outcome that consciousness will be identified with some physical process, at which point it will no longer be a "dualistic concept".

Nobody has at any stage said anything to imply that introspection is, or needs to be, more reliable than conventional empiricism.

If so then why are you treating concepts derived from pure introspection as the superior schema to categorise empirical evidence?

I am not using concepts derived from introspection. The very fact of introspection indicates that consciousness, by a fairly minimal definition, is existent. Why do you ignore introspection? Why not look down the telescope?

There aren't any inherently, unchangeably, mental concepts.

From what I can observe of your position, it seems like you are treating consciousness in exactly this way. For example, could you explain how it could possibly be challenged by evidence? How could it change or be refined if we say "introspection therefore consciousness"?

The very fact of introspection indicates that consciousness, by a fairly minimal definition, is existent.

I don't see how this follows. As there are a whole host of definitions of consciousness, could you explicitly select a definition, and explain why you feel that introspection proves that particular definition (not just a general sense of introspectionyness) must follow? Consciousness definitions usually imply some form of discrete mental agent, endowed with certain fairly significant properties. I don't see how that follows from "person A can't see what person B is thinking", unless you invoke dualism. We need to understand what thought is first, and we would need a very compelling reason a physicalist would seek to derive concepts to deal with thought from disembodied thought itself rather than the physical world as they observe it.

Positions:

1 Consciousness doesn't exist at all. (Eliminativism)
2 Consciousness exists as a reducible physical phenomenon. (Physicalism)
3 Consciousness exists as an irreducible non-physical phenomenon. (Dualism)

I'm not conflating these positions as I feel you probably think I am, merely holding that (2) is not logically consistent. If (2) was "when we observe the brain we see a discrete phenomenon that we call consciousness", I would say that it is more logically consistent, though I would call for a different word that isn't historically associated with dualism.

Why do you ignore introspection? Why not look down the telescope?

I don't wish to ignore it. I merely think a consistent physicalist would categorise it, like everything else, as a physical process, and therefore seek to understand and explain it using empirical evidence rather than purely mental concepts that don't seem to exist in physical space.

I finally note you refuse again to accept any burden of evidence for your claims, and merely say the field generally supports your position. Anyone can say that for any position. I think you should drop claims of conventionality and stick to the reasoning and refutations that you propose. No one expects references for logical statements, but claims that you have the support of most philosophers should be supported.

EDIT> Reply bait oh man

There aren't any inherently, unchangeably, mental concepts.

From what I can observe of your position, it seems like you are treating consciousness in exactly this way. For example, could you explain how it could possibly be challenged by evidence?

I have put forward the existence of introspection as evidence for the existence of consciousness. It is therefore logically possible for the existence of consciousness to be challenged by the non-existence of introspection. It's not actually possible, because introspection actually exists. The empirical claim that consciousness exists is supported by the empirical evidence, like any other. (Not empirical in your gerrymandered sense, of course, but empirical in the sense of not being a priori or tautologous.)

The very fact of introspection indicates that consciousness, by a fairly minimal definition, is existent.

I don't see how this follows.

As there are a whole host of definitions of consciousness, could you explicitly select a definition, and explain why you feel that introspection proves that particular definition

Already answered; again:

Consciousness =def self awareness

Introspection =def self awareness

(not just a general sense of introspectionyness) must follow? Consciousness definitions usually imply some form of discrete mental agent, endowed with certain fairly significant properties.

Is the ability to introspect not an unusual property? Are we actually differing, apart from your higher level of vagueness?

I don't see how that follows from "person A can't see what person B is thinking",

Person B can tell what person B is thinking, as well. That is important.

unless you invoke dualism. We need to understand what thought is first, and we would need a very compelling reason why a physicalist would seek to derive concepts to deal with thought from disembodied thought itself

Who said anything about disembodied thought?

rather than from the physical world as they observe it.

Positions:

1. Consciousness doesn't exist at all. (Eliminativism)
2. Consciousness exists as a reducible physical phenomenon. (Physicalism)
3. Consciousness exists as an irreducible non-physical phenomenon. (Dualism)

I'm not conflating these positions as I feel you probably think I am, merely holding that (2) is not logically consistent.

So what is the actual contradiction?

If (2) were "when we observe the brain we see a discrete phenomenon that we call consciousness", I would say that it is more logically consistent, though I would call for a different word that isn't historically associated with dualism.

Why a discrete phenomenon?

Is a historical association enough to make an inconsistency?

Why do you ignore introspection? Why not look down the telescope?

I don't wish to ignore it. I merely think a consistent physicalist would categorise it, like everything else, as a physical process, and therefore seek to understand and explain it using empirical evidence rather than purely mental concepts that don't seem to exist in physical space.

I have given a detailed explanation as to why consciousness is not an inherently mental concept. You need to respond to that, and not just repeat your claim.

I finally note you refuse again to accept any burden of evidence for your claims,

False. Here is the explanation again:

"Both versions are naive. The explanatory process doesn't start with a perfect set of concepts...reality isn't pre-labelled. The explanatory process start with a set of "folk" or prima facie concepts, each of which may be retained, modified, or discarded as things are better understood. You cant start from nothing because you have to be able to state your explanandum, you have to state which phenomenon you are trying to explain. But having to have to a starting point does not prejudice the process forever, since the modification and abandonment options are available. For instance, the concept of phlogiston was abandoned, whereas the concept the atom was modified to no longer require indivisibility. Heat is a favourite example of a reductive explanation. The concept of heat as something to be explained was retained, but the earlier, non reductive explanation of heat as a kind of substance was abandoned. This style of explanation is what non eliminate physicalists, the type 2 position, are aiming at."

Your engagement here is insincere. You argue by cherry-picking and distorting my statements. You simply ignore the explanations given and say "you haven't given justification", then you give off-hand, vague answers to my own queries and state "already answered". I'm done with this.

Most imaginable unsafe AGIs would outcompete safe AGIs, because they would not necessarily be "hamstrung" by complex goals such as protecting us meatbags from destruction.

I imagine having an AGI that is safe but not friendly would involve seriously limiting its abilities. However, a friendly AGI would be able to compete just fine with an unsafe AGI. Whether or not the FAI wins the fight is vastly more important than what it does during the fight, so its goal system will do little to change how it fights.

Interesting comment, cheers. What about a situation where the first action of a hostile AI would be to irradiate the Earth, killing all life and removing the FAI's reason to continue struggling, even though the FAI itself was tough enough to survive? It seems options like this would tip the balance very much in the UFAI's favour.

The FAI would precommit to struggle fruitlessly in an irradiated world, in order to prevent the UFAI from having a reason to irradiate the world.

If by struggle fruitlessly you mean do whatever it can to hurt the UFAI, then you have a point.
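To make the precommitment point concrete, here's a toy model (all payoff numbers are invented purely for illustration, not claims about real AGI capabilities): if the FAI's policy is to give up once Earth is lost, irradiating strictly improves the UFAI's odds, but if the FAI credibly precommits to fight at full strength regardless, irradiating gains the UFAI nothing.

```python
# Toy model of the precommitment argument. All probabilities are
# made-up illustrative numbers, not estimates about real AGIs.
# The UFAI picks an opening move; the FAI has fixed its policy
# (its response to losing Earth) in advance.

P_UFAI_WINS = {
    ("irradiate", "give_up"): 0.9,   # FAI has nothing left to protect
    ("irradiate", "fight_on"): 0.45, # FAI fights on; opening move wasted
    ("hold_fire", "give_up"): 0.5,
    ("hold_fire", "fight_on"): 0.5,
}

def best_ufai_move(fai_policy: str) -> str:
    """Return the UFAI opening move that maximises its win probability,
    given the FAI policy it expects to face."""
    return max(("irradiate", "hold_fire"),
               key=lambda move: P_UFAI_WINS[(move, fai_policy)])

print(best_ufai_move("give_up"))   # -> "irradiate"
print(best_ufai_move("fight_on"))  # -> "hold_fire"
```

The whole argument rests on the precommitment being credible: the UFAI must believe the FAI really will fight on after its reason for fighting is gone.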

It would recreate humanity. I imagine that if the UFAI does enough damage that the FAI can't remember what humanity was, it would be enough damage to lobotomize any AI and defeat it easily.

I guess that depends on the level of capability that an AGI had at that point. It supposes the level of knowledge available was enough for recreation, which may be much higher than what's required for mere protection. It's hard to conceive of, as that level of capability is way beyond our own. I'll certainly give it some thought.

It also depends on how you define "human". I'd hope the FAI is willing to upload us instead of wasting vast amounts of resources just so we're instantiated in a physical universe instead of a virtual one.

It's worth noting that the AI only has to be advanced enough to store the information. Once it's beaten the UFAI, it has plenty of time to build up the resources and intelligence necessary to rebuild humanity.

I personally imagine that AGI will arrive well before it's possible to store a full down-to-the-subatomic-level map of a person in a space that's any smaller than the person. "Just store the humans and bring them back" implies such a massive storage requirement that it's basically not much different from making a full copy of them anyway, so I wonder if such a massive storage device wouldn't be equally vulnerable to attack.
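For a rough sense of the scale involved, here's a back-of-envelope sketch; the atom count is the commonly cited order of magnitude, and the bits-per-atom figure is a pure assumption:

```python
# Back-of-envelope: storage for an atom-by-atom map of one person.
# Both figures are rough assumptions for illustration only.
ATOMS_IN_HUMAN_BODY = 7e27   # commonly cited order-of-magnitude estimate
BITS_PER_ATOM = 100          # assumed: position, element, bonding, state

total_bytes = ATOMS_IN_HUMAN_BODY * BITS_PER_ATOM / 8
print(f"{total_bytes:.1e} bytes")  # ~8.8e28 bytes

# A 10 TB consumer drive holds ~1e13 bytes, so this is ~1e16 such
# drives - consistent with the worry that an atom-level copy is not
# obviously more compact (or less vulnerable) than the person it maps.
```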

I'm also keen to see us continue as a biological species, even if we also run simulated brains or people in parallel. Ideally I can see us doing both if we can establish a FAI. The best bet I can see so far is to make sure a FAI arrives first :-)

You don't need accuracy down to the subatomic level. You just need a human. The same human would be nice, since it means the FAI managed to keep all of those people from dying, but unless it's programmed to only value currently alive people, that's not a big deal.

Also, you make it sound like you're saying we won't develop that storage capability until well after we develop the AGI. It's the AGI that will be developing technology. What we can do just before we make it is not a good indicator of what it can do.

Because neural pathways and other structures of the brain are pretty small, I think you'd need an extremely high resolution. However, I guess what you're saying is that a breeding population would be enough to at least keep the species going, so I acknowledge that. Still, I'm hoping we can make something that does something in addition to that.

Your second point depends on how small the AGI can make reliable storage tech, I guess.

In the end, perhaps this whole point is moot, because it's unlikely an intelligence explosion will take long enough for there to be time for other researchers to construct an alternative AGI.

Still, I'm hoping we can make something that does something in addition to that.

Their children will be fine. You don't even need a breeding population. You just need to know how to make an egg, a sperm, and an artificial uterus.
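At the genetic level, at least, the information needed is tiny. A hedged back-of-envelope only, ignoring epigenetic, developmental, and cultural information:

```python
# Back-of-envelope: raw storage for one human genome.
BASE_PAIRS = 3.1e9   # approximate length of the human genome
BITS_PER_BASE = 2    # four possible bases -> 2 bits each

genome_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"{genome_bytes / 1e6:.0f} MB")  # ~775 MB, under a gigabyte

# Even thousands of genomes for a genetically diverse seed population
# would fit comfortably on a single modern drive.
```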

In the end, perhaps this whole point is moot, because it's unlikely an intelligence explosion will take long enough for there to be time for other researchers to construct an alternative AGI.

It might encounter another AGI as it spreads, although I don't think this point will matter much in the ensuing war (or treaty, if they decide on that).

Basically it's a challenge for people to briefly describe an FAI goal-set, and for others to respond by telling them how that will all go horribly wrong. ... We should encourage a slightly more serious version of this.

Thanks for the link. I reposted the idea currently on my mind hoping to get some criticism.

But more importantly, what features would you be looking for in a more serious version of that game?

I think I'd like the comments to be broadly organised and developed as the common themes and main arguments emerge. Apart from that, a little more detail. I don't think it has to go into many implementation specifics, because that's a separate issue and requires a more highly developed set of math/CS skills. But I think we can make use of a broader set of smart brains by having this kind of discussion.

I would have liked it if the first part of the post (up to "Other minds problem") were posted separately. It has a clear independent point and could/should be discussed independently, don't you think?

Fair point. Because I'm new here, I think I was concerned my content would not be of enough interest as separate posts. Probably this wasn't the best decision. I might break parts of the article off into more developed pieces if there's enough interest.

I think another important point is how simulations are treated ethically. This is currently irrelevant, since we only have the one level of reality we are aware of, but once AGIs exist it will become a completely new field of ethics.

  • Do simulated people have the same ethical value as real ones?
  • When an AGI just thinks about a less sophisticated sophont in detail, can its internal representation of that entity become complex enough to fall under ethical criteria on its own? (this would mean that it would be unethical for an AGI to even think about humans being harmed if the thoughts are too detailed)
  • What are the ethical implications of copies in simulations? Do a million identical simulations carry the same ethical importance as a single one? A million times as much? Something in between? What if the simulations are not identical, but very similar? What differences would be important here?

And perhaps most importantly: when people disagree on how these questions should be answered, how do you react? You can't really find a middle ground here, since the decision about which views to follow itself decides which entities' ethical views should be considered in future deliberations, creating something like a feedback loop.

Yeah, that's an important topic we're going to have to think about. I think it's our natural inclination to give the same rights to simulated brains as to us meatbags, but there are some really odd, perverse outcomes to that to consider too. Basically, virtual people could become tools for real people to exploit our legal and ethical systems - creating virtual populations for voting, etc. I've written a little on that halfway down this article: http://citizensearth.wordpress.com/2014/08/23/is-placing-consciousness-at-the-heart-of-futurist-ethics-a-terrible-mistake-are-there-alternatives/

I think we'll need to have some sort of split system - a new system of virtual rights in the virtual world for virtual people, and meatbag-world rights for us meatbags - basically to account for the profound physical differences between the two worlds. That way we can preserve the species and still have an interesting virtual world. Waaay easier said than done, though. This is probably going to be one of the trickiest problems since someone first said "so, this democracy thing, how's it going to work exactly?"