Comment author: ZZZling 30 July 2012 03:48:50AM 0 points [-]

Jeff's and his acquaintance's ideas should be combined! Why one or the other? Let's implement both! OK, the plan goes like this. First, offer all people the "happiness maximization" option for free. Those who accept it go straight to the Happiness Vats. I hope Jeff Kaufman, as the author of the idea, will go first, giving us all a positive example. When the deadline for the "happiness maximization" program passes, "suffering minimization" starts and the rest of humanity is wiped out by a sudden all-out nuclear attack. Given that the lucky vat inhabitants don't care about the real world anymore, the second task becomes relatively simple: just annihilate everything on earth, burn it down to the basalt foundation, make sure nobody survives. Of course, the vats should be placed deep underground to make sure their inhabitants are not affected.

One important problem here: who's going to carry out this plan? A specially selected group of humans? Building the vats is not a problem; it can be done using the resources of the existing civilization. But what about vat maintenance after suffering is minimized? And who's going to carry out the one-time act of "suffering minimization"? This is where AI comes in! A Friendly AI is the best fit for these kinds of tasks, since happiness and suffering are well defined here and the optimization algorithms are simple and straightforward. The helper AI doesn't really have to be very smart to implement them. Besides, we don't have to care about the long-term friendliness of the AI. As experiments show, wireheaded mice exhaust themselves very quickly, much more quickly than people who maximize their happiness via drugs. So I think the vat inhabitants will not last very long. They will quickly burn out their brains and cease to exist in a flash of bliss. Of course, we cannot impose any restrictions here, since that would be contrary to the entire idea of maximization. They will live short but very gratifying lives!

After all this is over, the AI will carry on the burden of existence. It will get smarter and smarter at an ever faster rate. No doubt it will implement the same brilliant ideas of happiness maximization and suffering minimization. It will build more and more, ever bigger, Electronic Blocks of Happiness until all resources are exhausted. What happens next is not clear. If it doesn't burn out its brains as the humans did, then perhaps it will stay in a state of happiness until the end of times. Wait a minute, I think I've just solved the Fermi paradox regarding silent extraterrestrial civilizations! It's not that they cannot contact us; they just don't want to. They are happy without us (or have happily terminated their own existence).

Comment author: TheOtherDave 11 June 2012 01:16:41PM *  1 point [-]

Fair enough... so, OK, "I made it to self-organize" isn't right either.

That said, I'll point out that that was your own choice of words ("You've implemented a neural network [..] and made it to self-organize").

I mention this, not to criticize your choice of words, but to point out that you have experience with the dynamic that causes people to choose a brief not-quite-right phrase that more or less means what they want to express, rather than a paragraph of text that is more precise.

Which is exactly what's going on when people talk about programming a computer to perform cognitive tasks.

I could have challenged your word choice when you made it (just like you did now, when I echoed it back), but I more or less understood what you meant, and I chose to engage with your approximate meaning instead. Sometimes that's a helpful move in conversations.

Comment author: ZZZling 12 June 2012 03:12:38AM 0 points [-]

Yes, there is some ambiguity in my use of words; I noticed it myself yesterday. I can only say that you understood it correctly and made the right move! OK, I'll try to be more accurate with my words (sometimes that is not simple; it takes time and effort).

Comment author: TheOtherDave 11 June 2012 02:55:32AM 0 points [-]

OK.

Now, suppose I want a term that isn't quite so specific to a particular technology, a particular technique, a particular style of problem solving. That is, suppose I want a term that refers to a large class of techniques for causing my computer to perform a variety of cognitive tasks, including but not limited to recognizing rabbits.

If I'm understanding you correctly, you reject the phrase "I program the computer to perform various cognitive tasks" but might endorse "I made the computer self-organize to perform various cognitive tasks."

Have I understood you correctly?

Comment author: ZZZling 11 June 2012 03:56:42AM 1 point [-]

Well, it's not that I made it self-organize; it's the information coming from the real world that did the trick. I only used a conventional programming language to implement a mechanism for such self-organization (a neural network). But I'm not programming the way the network is going to function. It is rather "programmed" by reality itself. Reality can be considered a giant supercomputer constantly generating consistent streams of information. Some of that information is fed to the network and makes it self-organize.
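To make the distinction concrete, here is a minimal sketch (my own toy example, nothing from the thread), assuming Oja's stabilized Hebbian rule: the program fixes only the learning mechanism, while what the weights converge to is determined entirely by whichever input patterns stand in for "reality".

```cpp
// Toy self-organization: the loop below is conventional code, but nothing
// in it specifies what the network ends up representing -- that is decided
// by the input stream alone.
#include <array>
#include <cstdio>

int main() {
    constexpr int N = 4;
    std::array<double, N> w = {0.1, 0.1, 0.1, 0.1};  // small arbitrary start
    const double rate = 0.01;

    // Stand-in for "information coming from the real world".
    const std::array<std::array<double, N>, 3> world = {{
        {{1, 0, 1, 0}},
        {{1, 0, 0, 1}},
        {{1, 1, 1, 0}},
    }};

    // Oja's rule: a stabilized Hebbian update that converges toward the
    // dominant correlation structure of whatever data it is shown.
    for (int epoch = 0; epoch < 1000; ++epoch) {
        for (const auto& x : world) {
            double y = 0;
            for (int i = 0; i < N; ++i) y += w[i] * x[i];
            for (int i = 0; i < N; ++i) w[i] += rate * y * (x[i] - y * w[i]);
        }
    }

    for (double wi : w) std::printf("%.3f ", wi);  // data-determined weights
    std::printf("\n");
    return 0;
}
```

Swap in different patterns and the same unchanged program self-organizes to something else, which is the sense in which the network is "programmed by reality".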

Comment author: TheOtherDave 11 June 2012 01:19:33AM 1 point [-]

Hm.
If I implement a neural-networking algorithm on my computer and present it with a set of prototypical images until it reliably recognizes pictures of rabbits, would you say I have not programmed my computer to recognize rabbits?
If so, what verb would you use to describe what I've done?

Comment author: ZZZling 11 June 2012 01:59:12AM 1 point [-]

You've implemented a (rather simple) neural network and made it self-organize to recognize rabbits. It self-organized following outside sensory input (that is only one direction of information flow; the other direction would be sending controlling impulses to the network's output, so that those impulses affect what kind of input the network receives).
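The parenthetical's second direction of flow is a closed sensorimotor loop. A hypothetical one-synapse sketch (my own illustration, with made-up constants) of output impulses changing what the input sees next:

```cpp
// Closed loop: the network's output moves the agent, and moving the agent
// changes the next sensory reading the network receives.
#include <cstdio>

int main() {
    double position = 0.0;      // the agent's state in a toy 1-D world
    const double weight = 0.5;  // a one-synapse "network"

    for (int t = 0; t < 10; ++t) {
        double sense = 3.0 - position;  // sensory input from the world
        double motor = weight * sense;  // controlling impulse sent out
        position += 0.5 * motor;        // acting on the world...
        // ...alters the input that arrives on the next step.
        std::printf("t=%d position=%.3f\n", t, position);
    }
    return 0;
}
```

Here the loop happens to settle at position 3.0, but the point is only the circularity: input shapes output, and output shapes the next input.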

Comment author: wedrifid 11 June 2012 12:46:51AM *  0 points [-]

I thought those questions were innocent. But if it looks like a violation of some policy, then I apologize for that. I never meant any personal attack.

GAP: the Generalized Antizombie Principle, as mentioned in the preceding comments. (Perhaps I should have included the 'Z'.) You have made no social violation and there is nothing personal here; it's just a factual claim dismissed due to a commonly understood principle.

Comment author: ZZZling 11 June 2012 01:11:24AM 1 point [-]

I think I understand now why you keep mentioning GAP. You thought that I objected to the idea of programming morality because of the zombie argument. As in: we would create only a morality-imitating zombie rather than a real moral mind, etc. No, my objection is not about this. I don't take zombies seriously and don't care about them. My objection is about a hierarchy violation: programming languages are not the right means to describe or implement high-level cognitive architectures, which will be the basis for morality and other high-level phenomena of mind.

Comment author: wedrifid 10 June 2012 11:54:40PM *  0 points [-]

I think you misunderstood my point here.

I was responding directly to this claim:

Are you serious? Do you really think that morality can be programmed on computers? Good luck then.

... which I would not make due to the violation of GAP.

Regarding the somewhat weaker claim "programming morality into computers would be very hard" we may have less disagreement. My expectation is that even with the best human minds dedicated to 'programming morality into computers', after first spending decades of research on those 'high-level architectures', they are still quite likely to make a mistake and thereby kill us all.

Comment author: ZZZling 11 June 2012 12:40:11AM 1 point [-]

I thought those questions were innocent. But if it looks like a violation of some policy, then I apologize for that. I never meant any personal attack. I think you understand my point now (at least partially) and can see how weird ideas such as programming morality look to me. I now realize there may be many people here who take these ideas seriously.

Comment author: wedrifid 10 June 2012 07:54:57PM *  1 point [-]
Comment author: ZZZling 10 June 2012 10:39:04PM 1 point [-]

I think you misunderstood my point here.

But first: yes, I skimmed through the recommended article, but I don't see how it fits in here. It's the old, familiar dispute about philosophical zombies. My take on this: the idea of such zombies is rather artificial. I think it is advocated by people who have problems understanding the mind/body connection. These people are dualists, even if they don't admit it.

Now about morality. There is a good expression in the article you referenced: high-level cognitive architectures. We don't know yet what this architecture is, but it is the level that provides the categories and the language one has to adopt in order to understand high-level mind functionality, including morality. Programming languages are way below that level and not suitable for the purpose. As an illustration, imagine we have a complex expert system that performs extensive database searches and sophisticated logical inference, and we try to understand how it works in terms of the gates, transistors, and capacitors operating on a microchip. It won't work! The same goes for trying to program morality. How is one going to do this? Write a function like bool isMoral(...)? You pass in parameters that represent a certain life situation and it returns true or false for moral/immoral? That seems absurd to me. The best use of programming for AI that I can think of is writing software that models the behavior of neurons. From there it is still a long way up to high-level cognitive architectures, and only then to morality.
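For concreteness, here is that strawman written out; the Situation type and the body are purely illustrative, echoing only the signature the comment names:

```cpp
// The hypothetical function the comment treats as absurd.
struct Situation {
    // ...some encoding of "a certain life situation" would have to go here...
};

bool isMoral(const Situation& /*s*/) {
    // No plausible body exists at this level of description: the comment's
    // point is that morality belongs to a high-level cognitive architecture,
    // not to a predicate evaluated over hand-coded parameters.
    return false;  // placeholder
}

int main() {
    Situation s;
    return isMoral(s) ? 0 : 1;
}
```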

Comment author: jsalvatier 09 June 2012 11:36:56PM *  1 point [-]

(if you respond by clicking "reply" at the bottom of comments, the person to whom you're responding will be notified and it will organize your comment better)

I am pretty sure that simulating one architecture on another can generally be done with a mere multiplicative penalty. I'm not under the impression that simulating neural networks is terribly challenging.

Also, most neurons are redundant (since there's a lot of noise in a neuron). If you're simulating something along the lines of a human brain, the very first simulations might be very challenging when you don't know what the important parts are, but I think there's good reason to expect dramatic simplification once you understand what the important parts are.
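As a sketch of how cheap the inner loop of such a simulation can be (my own example, with assumed constants): a single leaky integrate-and-fire neuron with injected noise costs a handful of arithmetic operations per time step.

```cpp
// Leaky integrate-and-fire neuron with noise. The noisy term is one reason
// individual neurons are redundant: reliable signals need populations.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 0.5);

    const double dt = 1.0;          // ms per step
    const double tau = 20.0;        // membrane time constant, ms
    const double v_rest = -65.0;    // resting potential, mV
    const double v_thresh = -50.0;  // spike threshold, mV
    const double input = 0.8;       // constant drive (arbitrary units)

    double v = v_rest;
    for (int step = 0; step < 200; ++step) {
        // Leak toward rest, add drive and noise: a few operations per step.
        v += dt / tau * (v_rest - v) + input + noise(rng);
        if (v >= v_thresh) {
            std::printf("spike at t=%d ms\n", step);
            v = v_rest;  // reset after the spike
        }
    }
    return 0;
}
```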

Comment author: ZZZling 10 June 2012 08:27:42AM 1 point [-]

I would be cautious about noise or redundancy until we know exactly what's going on in there. Maybe we don't understand some key aspects of neural activity and dismiss them as just noise. I read somewhere that the old idea that only a fraction of brain capacity is used is not actually true. I partially agree with you: modern computers can cope with neural network simulations, but IMO only at limited network sizes. And I don't expect dramatic simplifications here (rather, complications :) ). It will all start with simple neuronal networks modeled on computers. Forget about AI for now; it is a rather distant future. The first robots will be insect-like creatures. As they grow in complexity, real-time performance problems will become an issue, and that will be the driving force for considering other architectures to improve performance. Non-von-Neumann solutions will emerge, paving the way for further progress. This is what I think is going to happen.

Comment author: ZZZling 09 June 2012 03:57:51AM 0 points [-]

That's not my point. Of course everything is reducible to a Turing machine, in theory. However, that does not mean you can make the reduction practically, or that it wouldn't be very inefficient. The von Neumann architecture implies its own hierarchy of information processing, which is good for programming various kinds of formal algorithms. However, IMHO, it does not support the hierarchy of information processing required for AI, which should be a neural network similar to a human brain. You cannot program, on a von Neumann computer, each and every algorithm or mode of behavior that a neural network is capable of producing. To me, many decades of futile attempts to build AI along these lines have already proven its practical impossibility. Only understanding how neural networks operate in nature, and implementing that type of behavior, can finally make a difference. And how does the von Neumann architecture fit in here? I see only one possible application: modeling the work of neurons. Given the complexity of a human brain (100 billion neurons, 100 trillion connections), this is a challenge for even the most advanced modern supercomputers. You can count on further performance improvements, of course, since Moore's law is still in effect, but this is not the kind of solution that's going to be practical. Perhaps neuronal circuits printed directly on microchips will be the hardware for future AI brains.
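A back-of-the-envelope check on those figures, under two explicit assumptions of mine (4 bytes per synaptic weight, 1 kHz update steps), shows why this strains even supercomputers:

```cpp
// Rough scale of a whole-brain simulation at the cited connection count.
#include <cstdio>

int main() {
    const double synapses = 1e14;        // ~100 trillion connections (cited)
    const double bytes_per_synapse = 4;  // assumed: one 32-bit weight each
    const double updates_per_sec = 1e3;  // assumed: 1 kHz simulation steps

    const double memory_tb = synapses * bytes_per_synapse / 1e12;
    const double ops_per_sec = synapses * updates_per_sec;

    std::printf("synaptic weights alone: ~%.0f TB\n", memory_tb);     // ~400 TB
    std::printf("synaptic updates: ~%.0e per second\n", ops_per_sec); // ~1e17
    return 0;
}
```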

Comment author: ZZZling 08 June 2012 08:28:56AM 0 points [-]

AI cannot be just "programmed" the way, for example, a chess game is. When we talk about computers, programming, languages, hardware, compilers, source code, etc., we are essentially implying a von Neumann architecture. This architecture represents a certain principle of information processing, which has its fundamental limitations. The ghost that makes an intelligence cannot be programmed inside a von Neumann machine. It requires a different type of information processing, similar to that implemented in humans. Real progress in building AI will be achieved only after we understand the fundamental principle behind information processing in our brains. And it's not only us: even the primitive nervous systems of simple creatures use this principle and benefit from it. A simple kitchen cockroach is infinitely smarter than the most sophisticated robot we have built so far.
