All of ZZZling's Comments + Replies

ZZZling-10

Jeff's and his acquaintance's ideas should be combined! Why one or the other? Let's implement both! OK, the plan is this: first, offer everyone the "happiness maximization" option for free. Those who accept it go straight to the Happiness Vats. I hope Jeff Kaufman, as the author of the idea, will go first, setting a positive example for the rest of us. When the deadline for the "happiness maximization" program is over, "suffering minimization" starts and the rest of humanity is wiped out by a sudden all-out nuclear attack. Given that lucky ... (read more)

ZZZling-20

"So if you're under the impression that this is a point..."

Yes, I'm under that impression, because the whole idea of "Friendly AI" implies a subtle, indirect, but still real form of control. The idea is not to control the AI at its final stage, but rather to control what that final stage is going to be. But I don't think such indirect control is possible, because in my view the final shape of AI is invariant to any contingencies, including our attempts to make it "friendly" (or "non-friendly"). However, I can admit that on early... (read more)

0TheOtherDave
Ah, cool. Yes, this is definitely a point of disagreement. For my own part, I think real intelligence is necessarily contingent. That is, different minds will respond differently to the same inputs, and this is true regardless of "how intelligent" those minds are. There is no single ideal mind that every mind converges on as its "final" or "fully grown" stage.
1Mitchell_Porter
This isn't true of human beings; what's different about AIs?
ZZZling00

Yes, there is some ambiguity in my use of words; I noticed it myself yesterday. I can only say that you understood it correctly and made the right move! OK, I'll try to be more accurate in my use of words (sometimes that is not simple and requires time and effort).

4TheOtherDave
I agree completely that it's not simple and requires time and effort. I am, as I said explicitly, not criticizing your choice of words. I'm criticizing your listening skills. This whole thread got started because you chose to interpret "programming morality" in a fairly narrow way to mean something unreasonable, and then chose to criticize that unreasonable thing. I am suggesting that next time around, you can profitably make more of an effort as a listener to meet the speaker halfway and think about what reasonable thing they might have been trying to express, rather than interpret their words narrowly to suggest something unreasonable. Just as you value others doing the same for you.
3TheOtherDave
Pretty much everyone here agrees with you that we can't control a superintelligent system, most especially Eliezer, who has written many many words championing that position. So if you're under the impression that this is a point that you dispute with this community, you have misunderstood the consensus of this community. In particular, letting a system do what it wants is generally considered the opposite of controlling it.
0[anonymous]
Yes, this is why Friendly AI is difficult. Making an optimizing process that will care about what we want, in the way we want it to care, once we can no longer control it, is not something we know how to do yet.
ZZZling00

Well, it's not that I made it to self-organize; it is information coming from the real world that did the trick. I only used a conventional programming language to implement a mechanism for such self-organization (a neural network). But I'm not programming how this network is going to function; it is rather "programmed" by reality itself. Reality can be considered a giant supercomputer constantly generating consistent streams of information. Some of that information is fed to the network and makes it to self-organize.
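
To make that concrete, here is a minimal sketch of the kind of self-organization I mean, assuming a single toy neuron trained with Oja's rule rather than the actual network discussed above; the input stream and parameters are invented for illustration:

```python
import numpy as np

# Toy illustration: a single linear neuron whose weights are shaped
# entirely by the statistics of incoming data (Oja's rule), not by
# any hand-written rule about what the neuron should detect.
rng = np.random.default_rng(0)

# "Reality" here is a stream of 2-D inputs whose variance lies mostly along (1, 1).
def sensory_stream(n):
    base = rng.normal(size=(n, 1)) * np.array([1.0, 1.0])   # correlated component
    noise = rng.normal(scale=0.2, size=(n, 2))               # uncorrelated component
    return base + noise

w = rng.normal(scale=0.1, size=2)   # initial random weights
eta = 0.01                          # learning rate

for x in sensory_stream(5000):
    y = w @ x                       # neuron's response to this input
    w += eta * y * (x - y * w)      # Oja's rule: Hebbian term plus normalization

print(w)  # ends up roughly aligned with the dominant direction of the input
```
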

2TheOtherDave
Fair enough... so, OK, "I made it to self-organize" isn't right either. That said, I'll point out that that was your own choice of words ("You've implemented a neural network [..] and made it to self-organize"). I mention this, not to criticize your choice of words, but to point out that you have experience with the dynamic that causes people to choose a brief not-quite-right phrase that more-or-less means what we want to express, rather than a paragraph of text that is more precise. Which is exactly what's going on when people talk about programming a computer to perform cognitive tasks. I could have challenged your word choice when you made it (just like you did now, when I echoed it back), but I more or less understood what you meant, and I chose to engage with your approximate meaning instead. Sometimes that's a helpful move in conversations.
ZZZling00

You've implemented a neural network (rather simple) and made it to self-organize to recognize rabbits. It self-organized following outside sensory input (this is only one direction of information flow; the other direction would be sending controlling impulses to the network's output, so that those impulses affect what kind of input the network receives).

0TheOtherDave
OK. Now, suppose I want a term that isn't quite so specific to a particular technology, a particular technique, a particular style of problem solving. That is, suppose I want a term that refers to a large class of techniques for causing my computer to perform a variety of cognitive tasks, including but not limited to recognizing rabbits. If I'm understanding you correctly, you reject the phrase "I program the computer to perform various cognitive tasks" but might endorse "I made the computer self-organize to perform various cognitive tasks." Have I understood you correctly?
ZZZling00

I think I understand now why you keep mentioning GAP. You thought that I objected to the idea of morality programming because of the zombie argument: sort of, "we will only create a morality-imitating zombie rather than a real moral mind," etc. No, my objection is not about this. I don't take zombies seriously and don't care about them. My objection is about hierarchy violation: programming languages are not the right means to describe/implement high-level cognitive architectures, which will be the basis for morality and other high-level phenomena of mind.

0wedrifid
Did you correctly infer that it is primarily because that post and the surrounding posts in the associated sequence appeared in my playlist while I was at the gym today? That would have been impressive. (If I hadn't been primed I might have ignored your comment rather than replied with the relevant link.) The other direction: your objection (as it was then made) was a violation of the aforementioned GAZP, so I rejected it.
2TheOtherDave
Hm. If I implement a neural-networking algorithm on my computer and present it with a set of prototypical images until it reliably recognizes pictures of rabbits, would you say I have not programmed my computer to recognize rabbits? If so, what verb would you use to describe what I've done?
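
(A minimal sketch of the sort of thing being described, assuming a toy logistic-regression "network" and made-up feature data in place of actual rabbit images:)

```python
import numpy as np

# Toy stand-in for "presenting prototypical images until it recognizes rabbits":
# a one-layer classifier whose weights come entirely from labeled examples.
rng = np.random.default_rng(1)

n, d = 200, 50
images = rng.normal(size=(n, d))                               # fake image features
is_rabbit = (images[:, 0] + images[:, 1] > 0).astype(float)    # fake labels

w, b = np.zeros(d), 0.0
lr = 0.1

for _ in range(500):                                  # plain gradient-descent training loop
    p = 1.0 / (1.0 + np.exp(-(images @ w + b)))       # predicted P(rabbit)
    w -= lr * images.T @ (p - is_rabbit) / n
    b -= lr * np.mean(p - is_rabbit)

p = 1.0 / (1.0 + np.exp(-(images @ w + b)))
accuracy = np.mean((p > 0.5) == (is_rabbit > 0.5))
print(f"training accuracy: {accuracy:.2f}")   # learned from examples, not hand-coded rules
```
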
ZZZling20

I thought those questions were innocent. But if it looks like a violation of some policy, then I apologize for that. I never meant any personal attack. I think you understand my point now (at least partially) and can see how weird ideas like programming morality look to me. I now realize there may be many people here who take these ideas seriously.

0wedrifid
GAP, The Generalized Antizombie Principle as mentioned in the preceding comments. (Perhaps I should have included the 'Z'.) You have made no social violation and there is nothing personal here, just a factual claim dismissed due to a commonly understood principle.
ZZZling00

I think you misunderstood my point here.

But first: yes, I skimmed through the recommended article, but I don't see how it fits in here. It's the old, familiar dispute about philosophical zombies. My take on this: the idea of such zombies is rather artificial. I think it is advocated by people who have problems understanding the mind/body connection. These people are dualists, even if they don't admit it.

Now about morality. There is a good expression in the article you referenced: high-level cognitive architectures. We don't know yet what this architecture is, ... (read more)

0wedrifid
I was responding directly to this claim: ... which I would not make due to the violation of GAP. Regarding the somewhat weaker claim that "programming morality into computers would be very hard," we may have less disagreement. My expectation is that even with the best human minds dedicated to "programming morality into computers," after first spending decades of research into those 'high-level architectures', they are still quite likely to make a mistake and thereby kill us all.
3Slackson
Okay, Eliezer will have worded this much better elsewhere, but I might as well give this a shot. The basic idea of Friendly AI is this: when you design an AI, part of the design you choose is what the AI wants. It doesn't have any magical defaults that you don't code in; it is just the code, only what you've written into it. If you've written it to value something other than human values, it will likely destroy humanity, since we are a threat to its values. If you've written it to value human values, then it will keep humanity alive, protect us, and devote its resources to furthering human values. It will not change its values, since if it did that it wouldn't be optimizing its values. This is practically a tautology, but people still seem to find it surprising.
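
(A toy sketch of that last point, nothing like a real agent design: an agent that scores options with the utility function it was coded with will also score "change my values" with that same function, and so reject it. The utility function and actions below are invented purely for illustration.)

```python
# The agent is only its code: it picks actions by the utility function the
# designer wrote in. The option "rewrite my own utility function" is evaluated
# with the *current* utility, so it loses to options that serve that utility.

def coded_utility(world):
    return world["paperclips"]          # whatever the designer happened to write in

def best_action(world, actions):
    # evaluate each action by the outcome it leads to, under coded_utility
    return max(actions, key=lambda a: coded_utility(a(world)))

def make_paperclips(world):
    return {**world, "paperclips": world["paperclips"] + 1}

def adopt_human_values(world):
    # outcome of self-modification: future selves stop making paperclips
    return {**world, "paperclips": 0}

world = {"paperclips": 10}
chosen = best_action(world, [make_paperclips, adopt_human_values])
print(chosen.__name__)   # -> make_paperclips
```
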
5wedrifid
Either the future or catching up with the present research.
7syzygy
In case you haven't realized it, you're being downvoted because your post reads like this is the first thing you've read on this site. Just FYI.

The standard response to this is that it will care about our wishes if we build it to care about our wishes (see here).

1wedrifid
Required reading: The Generalized Antizombie Principle
ZZZling00

I would be cautious regarding noise or redundancy until we know exactly what's going on in there. Maybe we don't understand some key aspects of neural activity and think of them as just noise. I read somewhere that the old idea that only a fraction of brain capacity is used is not actually true. I partially agree with you: modern computers can cope with neural network simulations, but IMO only of limited network size. But I don't expect dramatic simplifications here (rather complications :) ). It all will start with simple neuronal networks modeled on co... (read more)

Manfred100

No, no. We want to figure out how to program moral behavior. "Programming" it would be much harder.

ZZZling-20

That's not my point. Of course everything is reducible to a Turing machine, in theory. However, that does not mean you can make this reduction practically, or it would be very inefficient. The von Neumann architecture implies its own hierarchy of information processing, which is good for programming various kinds of formal algorithms. However, IMHO, it does not support the hierarchy of information processing required for AI, which should be a neural network similar to a human brain. You cannot program each and every algorithm or mode of behavior, a neural network... (read more)

0jsalvatier
(if you respond by clicking "reply" at the bottom of comments, the person to whom you're responding will be notified and it will organize your comment better) I am pretty sure that turning one architecture into another one can generally be done with a mere multiplicative penalty. I'm not under the impression that simulating neural networks is terribly challenging. Also, most neurons are redundant (since there's a lot of noise in a neuron). If you're simulating something along the lines of a human brain, the very first simulations might be very challenging when you don't know what the important parts are, but I think there's good reason to expect dramatic simplification once you understand what the important parts are.
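(A rough sketch of the redundancy point, with made-up noise levels and neuron counts rather than biological data: reading out from a small fraction of a noisy population recovers nearly the same signal as the full population.)

```python
import numpy as np

# A population of noisy neurons all encoding the same underlying signal.
# Averaging a small fraction of them recovers nearly the same readout as
# averaging all of them, which is one reason a simulation might get away
# with far fewer units than the biological original.
rng = np.random.default_rng(2)

signal = np.sin(np.linspace(0, 2 * np.pi, 200))                    # what the population encodes
full_pop = signal + rng.normal(scale=3.0, size=(10_000, 200))      # 10k very noisy neurons
small_pop = full_pop[:500]                                         # keep only 5% of them

full_readout = full_pop.mean(axis=0)
small_readout = small_pop.mean(axis=0)

print(np.corrcoef(signal, full_readout)[0, 1])    # very close to 1
print(np.corrcoef(signal, small_readout)[0, 1])   # still high despite dropping 95% of neurons
```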
ZZZling00

AI cannot be just "programmed" the way, for example, a chess game is. When we talk about computers, programming, languages, hardware, compilers, source code, etc., we're essentially implying a von Neumann architecture. This architecture represents a certain principle of information processing, which has its fundamental limitations. The ghost that makes an intelligence cannot be programmed inside a von Neumann machine. It requires a different type of information processing, similar to that implemented in humans. The real progress in building AI will ... (read more)

3wedrifid
Yes it can. It's just harder. An AI can be "just programmed" in Conway's Life if you really want to.
0Nornagest
That's true if and only if some aspect of biological neural architecture (as opposed to the many artificial neural network architectures out there) turns out to be Turing irreducible; all computing systems meeting some basic requirements are able to simulate each other in a pretty strong and general way. As far as I'm aware, we don't know about any physical processes which can't be simulated on a von Neumann (or any other Turing-complete) architecture, so claiming natural neurology as part of that category seems to be jumping the gun just a little bit.
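
(To illustrate that mutual-simulation point, a minimal sketch: a few lines of ordinary von Neumann-style code stepping through a Turing machine description. The machine and its rules below are invented for illustration; the point is only that the host architecture needn't share any structure with the machine it simulates.)

```python
# A toy Turing machine simulator running on a conventional architecture.
def run_turing_machine(rules, tape, state="start", steps=100):
    tape = dict(enumerate(tape))   # sparse tape; missing cells read as blank "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]   # look up the transition
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Rules for a toy machine: flip every bit until hitting a blank, then halt.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(rules, "10110"))   # -> 01001_
```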