AspiringKnitter comments on Welcome to Less Wrong! - Less Wrong
It stands for evaporative cooling and I'm not offended. It's a pretty valid point.
(Laoch: I expect God not to abuse his power, hence I wouldn't classify him as a whimsical tyrant. And part of my issue is with being turned into a computer, which sounds even worse than making a computer that acts like me and thinks it is me.)
I can't decide which of MixedNuts's hypotheses is more awesome.
I'd be interested to hear more about your understanding of what a computer is, that drives your confidence that being turned into one is a bad thing.
Relatedly, how confident are you that God will never make a computer that acts like you and thinks it is you? How did you arrive at that confidence?
(this is totally off-topic, but is there a "watch comment" feature hidden around the LW UI somewhere ? I am also interested to see AspiringKnitter's opinion on this subject, but I just know I'll end up losing track of it without technological assistance...)
Every LW comment has its own RSS feed. You can find it by going to the comment's permalink URL and then clicking on "Subscribe to RSS Feed" from the right column or by adding "/.rss" to the end of the aforementioned URL, whichever is easier for you. The grandparent's RSS feed is here.
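For anyone who wants to automate this, here is a minimal sketch of polling such a feed (the permalink URL below is a made-up placeholder; any comment permalink with "/.rss" appended should work the same way):

```python
# Toy sketch: poll a LessWrong comment's RSS feed and list reply titles.
# The permalink below is a hypothetical placeholder, not a real comment.
import urllib.request
import xml.etree.ElementTree as ET

PERMALINK = "http://lesswrong.com/lw/example/welcome_to_less_wrong/abc1"  # hypothetical
FEED_URL = PERMALINK + "/.rss"

def fetch_reply_titles(feed_url):
    """Download the feed and return the title of each <item> (i.e., each reply)."""
    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)
    # Standard RSS 2.0 layout: <rss><channel><item><title>...</title></item>...
    return [item.findtext("title") for item in tree.iter("item")]

if __name__ == "__main__":
    for title in fetch_reply_titles(FEED_URL):
        print(title)
```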
Not that I know of, but http://lesswrong.com/user/AspiringKnitter/ is one way to monitor that if you like.
For one thing, I'm skeptical that an em would be me, but aware that almost everyone here thinks it would be. If it thought it was me, and they thought it was me, but I was already dead, that would be really bad. And if I somehow wasn't dead, there could be two of us, both claiming to be the real person. God would never blunder into it by accident, believing he was prolonging my life.
And if it really was me, and I really was a computer, whoever made the computer would have access to all of my brain and could embed whatever they wanted in it. I don't want to be programmed to, just as an implausible example, worship Eliezer Yudkowsky. More plausibly, I don't want to be modified without my consent, which might be even easier if I were a computer. (For God to do it, it would be no different from the current situation, of course. He has as much access to my brain as he wants.)
And if the computer was not me but was sentient (wouldn't it be awful if we created nonsentient ems that emulated everyone and ended up with a world populated entirely by beings with no qualia that pretend to be real people?), then I wouldn't want it to be vulnerable to involuntary modification, either. I'd feel a great deal of responsibility for it if I were alive, and if I were not alive, then it would essentially be the worst of both worlds. God doing this would not expose it to any more risk than all other living beings already face.
Does this seem rational to you, or have I said something that doesn't make sense?
I'm going to scoop TheOtherDave on this topic, I hope he doesn't mind :-/
But first of all, what do you mean by "an em" ? I think I know the answer, but I want to make sure.
From my perspective, a machine that thinks it is me, and that behaves identically to myself, would, in fact, be myself. Thus, I could not be "already dead" under that scenario, until someone destroys the machine that comprises my body (which they could do with my biological body, as well).
There are two scenarios I can think of that help illustrate my point.
1). Let's pretend that you and I know each other relatively well, though only through Less Wrong. But tomorrow, aliens abduct me and replace me with a machine that makes the same exact posts as I normally would. If you ask this replica what he ate for breakfast, or how he feels about walks on the beach, or whatever, it will respond exactly as I would have responded. Is there any test you can think of that will tell you whether you're talking to the real Bugmaster, or the replica ? If the answer is "no", then how do you know that you aren't talking to the replica at this very moment ? More importantly, why does it matter ?
2). Let's say that a person gets into an accident, and loses his arm. But, luckily, our prosthetic technology is superb, and we replace his arm with a perfectly functional prosthesis, indistinguishable from the real arm (in reality, our technology isn't nearly as good, but we're getting there). Is the person still human ? Now let's say that one of his eyes gets damaged, and similarly replaced. Is the person still human ? Now let's say that the person has epilepsy, but we are able to implant a chip in his brain that will stop the epileptic fits (such implants do, in fact, exist). What if part of the person's brain gets damaged -- let's say, the part that's responsible for color perception -- but we are able to replace it with a more sophisticated chip ? Is the person still human ? At what point do you draw the line from "augmented human" to "inhuman machine", and why do you draw the line just there and not elsewhere ?
Two copies of me would both be me, though they would soon begin to diverge, since they would have slightly different perceptions of the world. If you don't believe that two identical twins are the same person, why would you believe that two copies are ?
Sure, it might be, or it might not; this depends entirely on implementation. Today, there exist some very sophisticated encryption algorithms that safeguard valuable data from modification by third parties; I would assume that your mind would be secured at least as well. On the flip side, your (and my, and everyone else's) biological brain is currently highly susceptible to propaganda, brainwashing, indoctrination, and a whole slew of hostile manipulation techniques, and thus switching out your biological brain for an electronic one won't necessarily be a step down.
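(Pedantically, what guards data against modification is an authentication code rather than encryption per se, which guards it against reading. A toy sketch of the tamper-detection property, with all names made up:)

```python
# Toy sketch: a keyed MAC lets the data's owner detect any third-party
# modification of a stored snapshot. Illustrative only; the "mind-state"
# payload is obviously a stand-in.
import hashlib
import hmac

def seal(key: bytes, data: bytes) -> bytes:
    """Return an authentication tag binding `data` to the secret `key`."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, tag: bytes) -> bool:
    """True iff `data` is unchanged since it was sealed with `key`."""
    return hmac.compare_digest(seal(key, data), tag)

key = b"owner-held secret"
snapshot = b"serialized mind-state (stand-in payload)"
tag = seal(key, snapshot)

assert verify(key, snapshot, tag)             # untouched: passes
assert not verify(key, snapshot + b"!", tag)  # modified: detected
```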
So, you don't want your mind to be modified without your consent, but you give unconditional consent to God to do so ?
I personally would answer "no", because I believe that the concept of qualia is a bit of a red herring. I might be in the minority on this one, though.
That's a REALLY good response.
An em would be a computer program meant to emulate a person's brain and mind.
If you create such a mind that's just like mine at this very moment, and take both of us and show the construct something, then ask me what you showed the construct, I won't know the answer. In that sense, it isn't me. If you then let us meet each other, it could tell me something.
Because this means I could believe that Bugmaster is comfortable and able to communicate with the world via the internet, but it could actually be true that Bugmaster is in an alien jail being tortured. The machine also doesn't have Bugmaster's soul-- it would be important to ascertain whether or not it did have a soul, though I'd have some trouble figuring out a test for that (but I'm sure I could-- I've already got ideas, pretty much along the lines of "ask God")-- and if it doesn't, then it's useless to worry about preaching the Gospel to the replica. (It's probably useless to preach it to Bugmaster anyway, since Bugmaster is almost certainly a very committed atheist.) This has implications for, e.g., reunions after death. Not to mention that if I'm concerned about the state of Bugmaster's soul, I should worry about Bugmaster in the alien ship. And if both of them (the replica and the real Bugmaster) accept Jesus (a soulless robot couldn't do that), it's two souls saved rather than one.
That's a really good question. How many grains of sand do you need to remove from a heap of sand for it to stop being a heap? I suppose what matters is whether the soul stays with the body. I don't know where the line is. I expect there is one, but I don't know where it is.
Of course, what do we mean by "inhuman machine" in this case? If it truly thought like a human brain, and FELT like a human, was really sentient and not just a good imitation, I'd venture to call it a real person.
And who does the programming and encrypting? That only one person (who has clearly not respected my wishes to begin with since I don't want to be a computer, so why should xe start now?) can alter me at will to be xyr peon does not actually make me feel significantly better about the whole thing than if anyone can do it.
I feel like being sarcastic here, but I remembered the inferential distance, so I'll try not to. There's a difference between a human, whose extreme vulnerability to corruption has been extensively demonstrated, and who doesn't know everything, and may or may not love me enough to die for me... and God, who is incorruptible, knows all and has been demonstrated already to love me enough to die and go to hell for me. This bothers me a lot less than an omniscient person without God's character. (God has also demonstrated a respect for human free will that surpasses his desire for humans not to suffer, making it very unlikely he'd modify a human against the human's will.)
True. I consider the risk unacceptably high. I just think it'd be even worse as a computer. We have to practice our critical thinking as well as we can and avoid mind-altering chemicals like drugs and coffee. (I suppose you don't want to hear me say that we have to pray for discernment, too?) A core tenet of utilitarianism is that we compare possibilities to alternatives. This is bad. The alternatives are worse. Therefore, this is the best.
Upvoted for dismissing the inclination to respond sarcastically after remembering the inferential distance.
That's what I thought, cool.
Agreed; that is similar to what I meant earlier about the copies "diverging". I don't see this as problematic, though -- after all, there currently exists only one version of me (as far as I know), but that version is changing all the time (even as I type this sentence), and that's probably a good thing.
Ok, that's a very good point; my example was flawed in this regard. I could've made the aliens more obviously benign. For example, maybe the biological Bugmaster got hit by a bus, but the aliens snatched up his brain just in time, and transcribed it into a computer. Then they put that computer inside of a perfectly realistic synthetic body, so that neither Bugmaster nor anyone else knows what happened (Bugmaster just thinks he woke up in a hospital, or something). Under these conditions, would it matter to you that you were talking to the replica or the biological Bugmaster ?
But, in the context of my original example, with the (possibly) evil aliens: why aren't you worried that you are talking to the replica right at this very moment ?
I agree that the issue of the soul would indeed be very important; if I believed in souls, as well as a God who answers specific questions regarding souls, I would probably be in total agreement with you. I don't believe in either of those things, though. So I guess my next two questions would be as follows:
a). Can you think of any non-supernatural reasons why an electronic copy of you wouldn't count as you, and/or
b). Is there anything other than faith that causes you to believe that souls exist ?
If the answers to (a) and (b) are both "no", then we will pretty much have to agree to disagree, since I lack faith, and faith is (probably) impossible to communicate.
Well, yes, preaching to me or to any other atheist is very unlikely to work. However, if you manage to find some independently verifiable and faith-independent evidence of God's (or any god's) existence, I'd convert in a heartbeat. I confess that I can't imagine what such evidence would look like, but just because I can't imagine it doesn't mean it can't exist.
Do you believe that a machine could, in principle, "feel like a human" without having a soul ? Also, when you say "feel", are you implying some sort of a supernatural communication channel, or would it be sufficient to observe the subject's behavior by purely material means (f.ex. by talking to him/it, reading his/its posts, etc.) in order to obtain this feeling ?
That's a good point: if you are trusting someone with your mind, how do you know they won't abuse that trust ? But this question applies to your biological brain, as well, I think. Presumably, there exist people whom you currently trust; couldn't the person who operates the mind transfer device earn your trust in a similar way ?
Oh, in that scenario, obviously you shouldn't trust anyone who wants to upload your mind against your will. I am more interested in finding out why you don't want to "be a computer" in the first place.
You're probably aware of this already, but just in case: atheists (myself included) would say (at the very minimum) that your first sentence contains logical contradictions, and that your second sentence is contradicted by evidence and most religious literature, even if we assume that God does exist. That is probably a topic for a separate thread, though; I acknowledge that, if I believed what you do about God's existence and his character, I'd agree with you.
Guilty as charged; I'm drinking some coffee right now :-/
I only want to hear you say things that you actually believe...
That said, let's assume that your electronic brain would be at least as resistant to outright hacking as your biological one. IMO this is a reasonable assumption, given what we currently know about encryption, and assuming that the person who transferred your brain into the computer is trustworthy. Anyway, let's assume that this is the case. If your computerized mind under this scenario was able to think faster, and remember more, than your biological mind, wouldn't that mean that your critical skills would greatly improve ? If so, you would be more resistant to persuasion and indoctrination, not less.
Okay, but if both start out as me, how do we determine which one ceases to be me when they diverge? My answer would be the one who was here first is me, which is problematic because I could be a replica, but only conditional on machines having souls or many of my religious beliefs being wrong. (If I learn that I am a replica, I must update on one of those.)
Besides being electronic and the fact that I might also be currently existing (can there be two ships of Theseus?), no. Oh, wait, yes; it SHOULDN'T count as me if we live in a country which uses deontological morality in its justice system. Which isn't really the best idea for a justice system anyway, but if so, then it's hardly fair to treat the construct as me in that case because it can't take credit or blame for my past actions. For instance, if I commit a crime, it shouldn't be blamed if it didn't commit the crime. (If we live in a sensible, consequentialist society, we might still want not to punish it, but if everyone believes it's me, including it, then I suppose it would make sense to do so. And my behavior would be evidence about what it is likely to do in the future.)
If by "faith" you mean "things that follow logically from beliefs about God, the afterlife and the Bible" then no.
No, but it could act like one.
When I say "feel like a human" I mean "feel" in the same way that I feel tired, not in the same way that you would be able to perceive that I feel soft. I feel like a human; if you touch me, you'll notice that I feel a little like bread dough. I cannot perceive this directly, but I can observe things which raise the probability of it.
But something acting like a person is sufficient reason to treat it like one. We should err on the side of extending kindness where it's not needed, because the alternative is to err on the side of treating people like unfeeling automata.
Since I can think of none that I trust enough to, for instance, let them chain me to the wall of a soundproof cell in their basement, I feel no compulsion to trust anyone in a situation where I would be even more vulnerable. Trust has limits.
I'm past underestimating you enough not to know that. I'm aware that believing something is a necessary condition for saying it; I just don't know if it's a sufficient condition.
Those are some huge ifs, but okay.
Yes, and if we can prove that my soul would stay with this computer (as opposed to a scenario where it doesn't but my body and physical brain are killed, sending the real me to heaven about ten decades sooner than I'd like, or a scenario where a computer is made that thinks like me only smarter), and if we assume all the unlikely things stated already, and if I can stay in a corporeal body where I can smell and taste and hear and see and feel (and while we're at it, can I see and hear and smell better?) and otherwise continue being the normal me in a normal life and normal body (preferably my body; I'm especially partial to my hands), then hey, it sounds neat. That's just too implausible for real life.
EDIT: oh, and regarding why I'm not worried now, it's because I think it's unlikely for it to happen right now.
So... hm.
So if I'm parsing you correctly, you are assuming that if an upload of me is created, Upload_Dave necessarily differs from me in the following ways:
it doesn't have a soul, and consequently is denied the possibility of heaven,
it doesn't have a sense of smell, taste, hearing, sight, or touch,
it doesn't have my hands, or perhaps hands at all,
it is easier to hack (that is, to modify without its consent) than my brain is.
Yes?
Yeah, I think if I believed all of that, I also wouldn't be particularly excited by the notion of uploading.
For my own part, though, those strike me as implausible beliefs.
I'm not exactly sure what your reasons for believing all of that are... they seem to come down to a combination of incredulity (roughly speaking, no computer program in your experience has ever had those properties, so it feels ridiculous to assume that a computer program can ever have those properties) and that they contradict your existing religious beliefs. Have I understood you?
I can see where, if I had more faith than I do in the idea that computer programs will always be more or less like they are now, and in the idea that what my rabbis taught me when I was a child was a reliable description of the world as it is, those beliefs about computer programs would seem more plausible.
Mostly.
More like "it doesn't have a soul, therefore there's nothing to send to heaven".
I have a great deal of faith in the ability of computer programs to surprise me by using ever-more-sophisticated algorithms for parsing data. I don't expect them to feel. If I asked a philosopher what it's like for a bat to be a bat, they'd understand the allusion I'd like to make here, but that's awfully jargony. Here's an explanation of the concept I'm trying to convey.
I don't know whether that's something you've overlooked or whether I'm asking a wrong question.
If it helps, I've read Nagel, and would have gotten the bat allusion. (Dan Dennett does a very entertaining riff on "What is it like to bat a bee?" in response.)
But I consider the physics of qualia to be kind of irrelevant to the conversation we're having.
I mean, I'm willing to concede that in order for a computer program to be a person, it must be able to feel things in italics, and I'm happy to posit that there's some kind of constraint -- label it X for now -- such that only X-possessing systems are capable of feeling things in italics.
Now, maybe the physics underlying X is such that only systems made of protoplasm can possess X. This seems an utterly unjustified speculation to me, and no more plausible than speculating that only systems weighing less than a thousand pounds can possess X, or only systems born from wombs can possess X, or any number of similar speculations. But, OK, sure, it's possible.
So what? If it turns out that a computer has to be made of protoplasm in order to possess X, then it follows that for an upload to be able to feel things in italics, it has to be an upload running on a computer made of protoplasm. OK, that's fine. It's just an engineering constraint. It strikes me as a profoundly unlikely one, as I say, but even if it turns out to be true, it doesn't matter very much.
That's why I started out by asking you what you thought a computer was. IF people have to be made of protoplasm, AND IF computers can't be made of protoplasm, THEN people can't run on computers... but not only do I reject the first premise, I reject the second one as well.
Song
lyrics
story
article - really much more a discussion than a lesson.
I would say that they both cease to be you, just as the current, singular "you" ceases to be that specific "you" the instant you see some new sight or think some new thought.
Agreed, though I would put it something like, "if a person diverged into two separate versions who then became two separate people, then one version shouldn't be blamed for the crimes of the other version".
On a separate note, I'm rather surprised to hear that you prefer consequentialist morality to deontological morality; I was under the impression that most Christians followed the Divine Command model, but it looks like I was wrong.
I mean something like, "whatever it is that causes you to believe in God, the afterlife, and the Bible in the first place", but point taken.
Ooh, I see, I totally misunderstood what you meant. By feel, you mean "experience feelings", thus something akin to qualia, right ? But in this case, your next statement is problematic:
In this case, wouldn't it make sense to conclude that mind uploading is a perfectly reasonable procedure for anyone (possibly other than yourself) to undergo ? Imagine that Less Wrong was a community where mind uploading was common. Thus, at any given point, you could be talking to a mix of uploaded minds and biological humans; but you'd strive to treat them all the same way, as human, since you don't know which is which (and it's considered extremely rude to ask).
This makes sense to me, but this would seem to contradict your earlier statement that you could, in fact, detect whether any particular entity had a soul (by asking God), in which case it might make sense for you to treat soulless people differently regardless of what they acted like.
On the other hand, if you're willing to treat all people the same way, even if their ensoulment status is in doubt, then why would you not treat yourself the same way, regardless of whether you were using a biological body or an electronic one ?
Good point. I should point out that some people do trust select individuals to do just that, and many more people trust psychiatrists and neurosurgeons enough to give them at least some control over their minds and brains. That said, the hypothetical technician in charge of uploading your mind would have a much greater degree of access than any modern doctor, so your objection makes sense. I personally would likely undergo the procedure anyway, assuming the technician had some way of proving that he has a good track record, but it's possible I'm just being uncommonly brave (or, more likely, uncommonly foolish).
Haha yes, that's a good point, you should probably stick to saying things that are actually relevant to the topic, otherwise we'd never get anywhere :-)
FWIW, this is one of the main goals of transhumanists, if I understand them correctly: to be able to experience the world much more fully than their current bodies would allow.
Oh, I agree (well, except for that whole soul thing, obviously). As I said before, I don't believe that anything like full mental uploading, not to mention the Singularity, will occur during my lifetime; and I'm not entirely convinced that such things are possible (the Singularity seems especially unlikely). Still, it's an interesting intellectual exercise.
I typed up a response to this. It wasn't a great one, but it was okay. Then I hit the wrong button and lost it and I'm not in the mood to write it over again because I woke up early this morning to get fresh milk. (By "fresh" I mean "under a minute from the cow to me", if you're wondering why I can't go shopping at reasonable hours.) It turns out that four hours of sleep will leave you too tired to argue the same point twice.
That said,
Deciding whether or not to get uploaded is a choice I make trying to minimize the risk of dying by accident or creating multiple copies of me. Reacting to other people is a choice I make trying to minimize the risk of accidentally being cruel to someone. No need to act needlessly cruel anyway. Plus it's good practice, since our justice system won't decide personhood by asking God...
Upvoted in empathy for the feeling of losing a large, well-written comment; and soldiering on to extract at least one relevant point from memory.
In recognition of your effort, I looked up the joke you couldn't find.
That sounds ecolicious to a city-slicker such as myself, but all right :-)
Fair enough, though I would say that if we assume that souls do not exist, then creating copies is not a problem (other than that it might be a drain on resources, etc.), and uploading may actually dramatically decrease your risk of dying. But if we assume that souls do exist, then your objections are perfectly reasonable.
That makes sense, but couldn't you ask God somehow whether the person you're talking to has a soul or not, and then act accordingly ? Earlier you indicated that you could do this, but it's possible I misunderstood.
If you'll allow me to butt into this conversation, I have to say that on the assumption that consciousness and identity depend not on algorithms executed by the brain (which could be executed just as well by transistors), but on a certain special identity attached to your body which cannot be transferred to another - granting that premise - it seems perfectly rational to not want to change hardware. But when you say:
do you mean that you would like the justice system to decide personhood by asking God?
Please define "soul".
I realize that theological debate has a pretty tenuous connection to the changing of minds, but sometimes one is just in the mood....
Suppose that tonight I lay a minefield all around your house. In the morning, I tell you the minefield is there. Then I send my child to walk through it. My kid gets blown up, but this shows you a safe path out of your house and allows you to go about your business. If I then suggest that you should express your gratitude to me every day for the rest of your life, would you think that reasonable? ... According to your theology, was hell not created by God?
I once asked my best friend, who is a devout evangelical, how he could be sure that the words of the Bible as we have it today are correct, given the many iterations of transcription it must have gone through. According to him, God's general policy of noninterference in free will didn't preclude divinely inspiring the writers of the Bible to transcribe it inerrantly. At least according to one theist's account, then, God was willing to interfere as long as it was something really important for man's salvation. And even if you don't agree with that particular interpretation, I'd like to hear your explanation of how the points at which God "hardened Pharaoh's heart", for example, don't amount to interfering with free will.
When I feel the urge, I go to r/debatereligion. The standards of debate aren't as high as they are here, of course; but I don't have to feel guilty about lowering them.
I have nothing to say to your first point because I need to think that over and study the relevant theology (I never considered that God made hell and now I need to ascertain whether he did before I respond or even think about responding, a question complicated by being unsure of what hell is). With regard to your second point, however, I must cordially disagree with anyone who espouses the complete inerrancy of all versions of the Bible. (I must disagree less cordially with anyone who espouses the inerrancy of only the King James Version.) I thought it was common knowledge that the King James Version suffered from poor translation and the Vulgate was corrupt. A quick glance at the disagreements even among ancient manuscripts could tell you that.
I suppose if I complain about people with illogical beliefs making Christianity look bad, you'll think it's a joke...
I don't really have a dog in this race. That said, Matthew 25:41 seems to point in that direction, although "prepared" is perhaps a little weaker than "made". It does seem to imply control and deliberate choice.
That's the first passage that comes to mind, anyway. There's not a whole lot on Hell in the Bible; most of the traditions associated with it are part of folk as opposed to textual Christianity, or are derived from essentially fanfictional works like Dante's or Milton's.
That made me laugh. Calling Dante "fanfiction" of the Bible was just so unexpected and simultaneously so accurate.
Upvoted for self-awareness.
The more general problem, of course, is that if you don't believe in textual inerrancy (of whatever version of the Bible you happen to prefer), you still aren't relying on God to decide which parts are correct.
As Prismattic said, if you discard inerrancy, you run into the problem of classifications. How do you know which parts of the Bible are literally true, which are metaphorical, and which have been superseded by the newer parts ?
I would also add that our material world contains many things that, while they aren't as bad as Hell, are still pretty bad. For example, most animals eat each other alive in order to survive (some insects do so in truly terrifying ways); viruses and bacteria ravage huge swaths of the population, human, animal and plant alike; natural disasters routinely cause death and suffering on the global scale, etc. Did God create all these things, as well ?
That's not a very good argument. "If you accept some parts are metaphorical, how do you know which are?" is, but if you only accept transcription and translation errors, you just treat it like any other historical document.
My bad; for some reason I thought that when AK said,
She meant that some parts of the Bible are not meant to be taken literally, but on second reading, it's obvious that she is only referring to transcription and translation errors, like you said. I stand corrected.
Well, that really depends on what your translation criteria are. :) Reading KJV and, say, NIV side-by-side is like hearing Handel in one ear and Creed in the other.
Also: transcranial magnetic stimulation, pharmaceuticals and other chemicals, physical damage...
Makes sense enough.
For my own part, two things:
I entirely agree with you that various forms of mistaken and fraudulent identity, where entities falsely claim to be me or are falsely believed to be me, are problematic. Indeed, there are versions of that happening right now in the real world, and they are a problem. (That last part doesn't have much to do with AI, of course.)
I agree that people being modified without their consent is problematic. That said, it's not clear to me that I would necessarily be more subject to being modified without my consent as a computer than I am as whatever I am now -- I mean, there's already a near-infinite assortment of things that can modify me without my consent, and there do exist techniques for making accidental/malicious modification of computers difficult, or at least reversible. (I would really have appreciated error-correction algorithms after my stroke, for example, or at least the ability to restore my mind from backup afterwards. So the idea that the kind of thing I am right now is the ne plus ultra of unmodifiability rings false for me.)
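(For concreteness, one of the simplest such techniques is triple modular redundancy: keep three copies of each value, and a single corrupted copy gets outvoted and repaired on read. A toy sketch, purely illustrative:)

```python
# Toy sketch of triple modular redundancy: a lone corrupted copy is
# outvoted by the other two and silently repaired on the next read.
from collections import Counter

class Redundant:
    def __init__(self, value):
        self.copies = [value, value, value]

    def read(self):
        """Majority-vote across the copies, then restore any dissenter."""
        value, _ = Counter(self.copies).most_common(1)[0]
        self.copies = [value, value, value]  # repair the corrupted copy
        return value

cell = Redundant("memory of breakfast")
cell.copies[1] = "garbled"                   # simulate accidental corruption
assert cell.read() == "memory of breakfast"  # error corrected on read
```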
Who wants to turn you into a computer? I'm confused. I don't want to turn anybody into anything, I have no sovereignty there nor would I expect it.
EY and Robin Hanson approve of emulating people's brains on computers.
Approving of something in principle doesn't necessarily translate into believing it should be mandatory regardless of the subject's feelings on the matter, or even into advocating it in any particular case. I'd be surprised if EY in particular ever made such an argument, given the attitude toward self-determination expressed in his Metaethics and Fun Theory sequences; I am admittedly extrapolating from only tangentially related data, though. Not sure I've ever read anything of his dealing with the ethics of brain simulation, aside from the specific and rather unusual case given in Nonperson Predicates and related articles.
Robin Hanson's stance is a little different; his emverse is well-known, but as best I can tell he's founding it on grounds of economic determinism rather than ethics. I'm hardly an expert on the subject, nor an unbiased observer (from what I've read I think he's privileging the hypothesis, among other things), but everything of his that I've read on the subject parses much better as a Cold Equations sort of deal than as an ethical imperative.
And? Does that mean forcing you to be emulated?
Good point.
I'm sure you're pro self-determination, right? Or are you? One of the things that pushed me away from religion in the beginning was that there was no space for self-determination (not that there is much from a natural perspective); the idea of being owned is not a nice one to me. Some of us don't want to watch ourselves rot in a very short space of time.
Um, according to the Bible, the Abrahamic God's supposed to have done some pretty awful things to people on purpose, or directed humans to do such things. It's hard to imagine anything more like the definition of a petty tyrant than wiping out nearly all of humanity because they didn't act as expected; exhorting people to go wipe out other cultures; legislating victim blame into the ethics around rape; sending actual fragging bears to mutilate and kill irreverent children?
I'm not the sort of person who assumes Christians are inherently bad people, but it's a serious point of discomfort with me that some nontrivial portion of humanity believes that a being answering to that description and those actions a) exists and b) is any kind of moral authority.
If a human did that stuff, they'd be described as a whimsical tyrant at the most charitable. Why's God supposed to be different?
While I agree with some of your other points, I'm not sure about this:
We shouldn't be too harsh until we are faced with either deleting a potentially self-improving AI that is not provably friendly or risking the destruction of not just our species but the destruction of all that we value in the universe.
That.... is a surprisingly good answer.
I don't understand the analogy. I see how deleting a superhuman AI with untold potential is a lot like killing many humans, but isn't it a point of God's omnipotence that humans can never even theoretically present a threat to Him or His creation (a threat that he doesn't approve of, anyway)?
Within the fictional universe of the Old and New Testaments, it seems clear that God has certain preferences about the state of the world, and that for some unspecified reason God does not directly impose those preferences on the world. Instead, God created humans and gave them certain instructions which presumably reflect or are otherwise associated with God's preferences, then let them go do what they would do, even when their doing so destroys things God values. And then every once in a while, God interferes with their doing those things, for reasons that are unclear.
None of that presupposes omnipotence in the sense that you mean it here, although admittedly many fans of the books have posited the notion that God possesses such omnipotence.
That said, I agree that the analogy is poor. Then again, all analogies will be poor. A superhumanly powerful entity doing and refraining from doing various things for undeclared and seemingly pointless and arbitrary motives is difficult to map to much of anything.
Yeah, I kind of realize that the problems of omnipotence, making rocks that one can't lift and all that, only really became part of the religious discourse in a more mature and reflection-prone culture, the ways of which would already have felt alien to the OT's authors.
Taking the Old Testament god as he is in the book of Genesis, this isn't clear at all. At least when talking about the long-term threat potential of humans.
or
The whole idea of what exactly God is varied during the long centuries in which the stories were written.
Do you have an opinion about whether an AI that wasn't an em could have a soul?
No. I haven't tested it. I haven't ever seen an AI or anything like that. I don't know what basis I'd have for theorizing.