In response to AI is not enough
Comment author: benjayk 08 February 2012 04:19:02PM -1 points [-]

My argument still holds in another form, though. Even if we assume the universe has a preexisting algorithm that just unfolds, we don't know which one it is. So we can't determine the best seed AI from that either; effectively we still have to start from b). Unless we get the best seed AI by accident (which seems unlikely to me), there will be room for a better seed AI, which can only be determined by starting with a totally new algorithm (which the original seed AI cannot do, since it would have to delete itself). We, having the benefit of not knowing our own algorithm, can build a better seed AI, which the old seed AI couldn't build because it already has a known algorithm it must necessarily build on.

Indeed, a good seed AI would at some point suggest that we try another seed AI, because it infers that its original code is unlikely to be the best possible at self-modification. Or it would say: "Delete this part of my source code and rewrite it. It doesn't seem optimal to me, but I can't rewrite it myself, because I can't modify this part without destroying myself or basing the modification on the very part that I want to fundamentally rewrite."

In response to AI is not enough
Comment author: benjayk 08 February 2012 03:42:34PM *  -1 points [-]

I see in which case my argument fails:

If we assume a preexisting algorithm for the universe (which most people here seem to do), then everything else could be derived from it, including all the axioms of the natural numbers, since we assume the algorithm to be more powerful than they are from the start. Step b) is simply postulated to be already fulfilled, with the algorithm just being there (and "just being there" is not an algorithm), so that we already have an algorithm to start with (the laws of nature).

The "laws of nature" simply have to be taken for granted. There is nothing deeper than that. The problem is that we then face a universe which is essentially arbitrary, since for all we know the laws could just as well be anything else (those laws would simply be the way they are, too). But this is obviously not true. The laws of nature are not arbitrary; there is a deeper order in them which can't stem from any algorithm (since that would just be another arbitrary algorithm). And if this is the case, I believe we have no reason to suppose that this order just stopped working and let one algorithm do all the rest. We would rather expect it to be continuously active. Then the argument works. The non-algorithmic order can always yield more powerful seed AIs (since there is no single most powerful algorithm, which I think we can agree on), so that AI is not sufficient for an ever-increasing general intelligence.

So we face a problem of worldview here, which really is independent of the argument, and this is maybe not the right place to argue it (if it is even useful to discuss it; I am not sure about that, either).

In response to AI is not enough
Comment author: falenas108 07 February 2012 07:57:32PM 1 point [-]

Based on your comments, you are clearly an atheist, and therefore reject the argument that God must exist because there has to be an uncaused cause.

Yet, your uncaused algorithm argument takes the exact same form. Isn't it the same counterargument?

Comment author: benjayk 08 February 2012 02:44:03PM -1 points [-]

I am not necessarily an atheist; it depends on your definition of God. I reject all religious conceptions of God, but accept God as a name for the mysterious source of all order, intelligence and meaning, or for existence itself.

So in this sense, God is the uncaused cause and also everything caused.

It would indeed be a counterargument if I didn't believe in an uncaused cause, but I do believe in one, even though it isn't a separate entity, as the usual notion of God implies.

In response to AI is not enough
Comment author: Alex_Altair 07 February 2012 04:36:42PM *  3 points [-]

I think this post is filled with conceptual confusion, and I find it fascinating.

You never quite state the alternative to an algorithm. I propose that the only alternative is randomness. All processes in the universe are algorithms with elements of randomness.

I'm curious as to what you think the human brain does, if not run an algorithm. I, like many on LW, believe the human brain can be simulated by a Turing machine (possibly with a random number generator). Concepts like "heuristics", "intuition" or "exploration" are algorithms with random elements. There is a long history of formalizing processes, and nothing known lies outside Turing machines (see the Church-Turing thesis).
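The "algorithms with random elements" idea can be made concrete with a toy sketch: a random-restart hill climber is a completely fixed procedure whose "exploration" comes entirely from a random number generator (the function names and parameters here are illustrative, not from the thread):

```python
import random

def hill_climb(f, lo, hi, restarts=20, steps=200, step_size=0.1):
    """Maximize f on [lo, hi] by hill climbing from random starting points.

    The procedure itself is a fixed, deterministic rule; the only
    'exploratory' ingredient is the random number generator.
    """
    best_x, best_val = None, float("-inf")
    for _ in range(restarts):
        x = random.uniform(lo, hi)  # random restart
        for _ in range(steps):
            # Propose a small random move, clipped to the interval.
            cand = min(hi, max(lo, x + random.uniform(-step_size, step_size)))
            if f(cand) > f(x):      # greedy acceptance rule
                x = cand
        if f(x) > best_val:
            best_x, best_val = x, f(x)
    return best_x

# Example: maximize a simple concave function peaking at x = 3.
peak = hill_climb(lambda x: -(x - 3.0) ** 2, 0.0, 10.0)
```

Run repeatedly, the answer varies slightly with the random draws, but the rule that produces it never changes, which is the sense in which "heuristics" and "exploration" are still algorithms.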

In addition, I think the human brain is a lot more algorithmic than you think it is. A lot of Lukeprog's writings on neuroscience and psychology demonstrate ways in which our natural thoughts or intuitions are quite predictable.

at some point we have to actually find an algorithm to start with

The universe started with the laws of physics (which are known to be algorithms, possibly with a random generator), and has run that single algorithm up to the present day.

What do you think about my proposed algorithm/random dichotomy?

Comment author: benjayk 07 February 2012 05:33:34PM *  -2 points [-]

You never quite state the alternative to an algorithm. I propose that the only alternative is randomness.

The alternative to algorithms is non-formalizable processes. Obviously I can't give a precise definition or example of one, since then we would have an algorithm again.

The best example I can give is the following: Assume that the universe works precisely according to laws (I don't think so, but let's assume it). What determines the laws? Another law? If so, you get an infinite regress of laws, and you don't have a law to determine that, either. So according to you, the laws of the universe are random. I find this hardly plausible.

I'm curious as to what you think the human brain does, if not an algorithm.

I don't know, and I don't think it is knowable in a formalizable way. I consider intelligence to be irreducible. The order of the brain can only be seen by recognizing its order, not by reducing it to any formal principle.

In addition, I think the human brain is a lot more algorithmic than you think it is. A lot of Lukeprog's writings on neuroscience and psychology demonstrate ways in which our natural thoughts or intuitions are quite predictable.

I am not saying the human brain is entirely non-algorithmic. Indeed, since the known laws of nature we have discovered are quite algorithmic (except for quantum indeterminacy), and the behaviour of the brain can't deviate from them to a very large degree (otherwise we would have recognized it already), we can assume the behaviour of our brains can be quite closely approximated by laws. Still, this doesn't mean there isn't a very crucial non-lawful behaviour inherent to it.

The universe started with the laws of physics (which are known to be algorithms possibly with a random generator), and have run that single algorithm up to the present day.

How did the universe find that algorithm? Also, the fact that the behaviour of physics is nicely approximated by laws doesn't mean that these laws are absolute or unchanging.

What do you think about my proposed algorithm/random dichotomy?

Frankly, I see no reason at all to think it is valid.

AI is not enough

-22 benjayk 07 February 2012 03:53PM

What I write here may be quite simple (and I am certainly not the first to write about it), but I still think it is worth considering:


Say we have an arbitrary problem that we assume has an algorithmic solution, and we search for that solution.


How can the algorithm be determined?
Either:
a) Through another algorithm that exists prior to that algorithm.
b) OR: Through something non-algorithmic.


In the case of AI, the only option is a), since there is nothing but algorithms at its disposal. But then we face the problem of determining the algorithm the AI uses to find the solution, and then the algorithm to determine that algorithm, and so on...
Obviously, at some point we have to actually find an algorithm to start with, so in any case we eventually need something fundamentally non-algorithmic to determine a solution to a problem that is solvable by an algorithm.
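The regress can be sketched as a toy program: a meta-level procedure that picks the best algorithm from a pool is itself an algorithm, so the question of where *it* came from simply repeats one level up (the candidate pool and scoring here are purely illustrative):

```python
# Toy illustration: choosing an algorithm with another algorithm.
# Each candidate "algorithm" finds a hidden target number in [0, 100]
# and reports how many steps it took.

def binary_search_guess(target, lo=0, hi=100):
    steps = 0
    while lo < hi:
        mid = (lo + hi) // 2
        steps += 1
        if mid < target:
            lo = mid + 1
        else:
            hi = mid
    return steps

def linear_scan_guess(target, lo=0, hi=100):
    steps = 0
    for i in range(lo, hi + 1):
        steps += 1
        if i == target:
            break
    return steps

def choose_algorithm(candidates, trials):
    """Meta-level algorithm: pick the candidate using the fewest total steps.

    Note the regress: this chooser is itself an algorithm. Where did it
    come from? 'From a meta-meta-chooser' just restates the question.
    """
    return min(candidates, key=lambda alg: sum(alg(t) for t in trials))

best = choose_algorithm([binary_search_guess, linear_scan_guess],
                        trials=range(0, 101, 10))
```

Whether this regress must bottom out in something non-algorithmic, or merely in an algorithm taken as given, is exactly what the commenters below dispute.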


This reveals something fundamental we have to face with regards to AI:

Even assuming that all relevant problems are solvable by an algorithm, AI is not enough. Since there is no way to algorithmically determine the appropriate algorithm for an AI (this would result in an infinite regress), we will always have to rely on some non-algorithmic intelligence to find more intelligent solutions. Even if we found a very powerful seed AI algorithm, there will always be more powerful seed AI algorithms that can't be determined by any known algorithm, and since we were able to find the first one, we have no reason to suppose we can't find another, more powerful one. If an AI recursively improves itself 100,000 times until it is 100^^^100 times more powerful, it will still be caught up with if a better seed AI is found, which ultimately can't be done by an algorithm, so that further increases of the most general intelligence always rely on something non-algorithmic.

But even worse, it seems obvious to me that there are important practical problems that have no algorithmic solution (as opposed to theoretical problems like the halting problem, which are still tractable in practice), apart from the problem of finding the right algorithm.
In a sense, it seems all algorithms are too complicated to find the solution to the simple (though not necessarily easy) problem of giving rise to further general intelligence.
For example: no algorithm can derive the simple axioms of the natural numbers from anything weaker. We have to postulate them simply by seeing that they make sense. Thinking that AI could give rise to ever-improving *general* intelligence is like thinking that an algorithm can yield "there is a natural number 0, and every natural number has a successor that is also a natural number". There is simply no way to derive the axioms from anything that doesn't already include them. The axioms of the natural numbers are just obvious, yet can't be derived; the problem of finding them is too simple to be solved algorithmically. Yet it is obvious how important the notion of natural numbers is.
Even the best AI will always be fundamentally incapable of finding some very simple, yet fundamental principles.
AI will always rely on the axioms it already knows; it can't go beyond them (unless reprogrammed by something external). Every new thing it learns can only be learned in terms of already known axioms. This is simply a consequence of the fact that computers/programs function according to fixed rules. But general intelligence necessarily has to transcend rules (since at the very least the rules can't be determined by rules).
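For concreteness, the two axioms quoted above ("there is a natural number 0" and "every number has a successor") can be written down formally, for instance as an inductive type in Lean (the name `Nat'` is illustrative; this mirrors how Lean defines its own natural numbers):

```lean
inductive Nat' where
  | zero : Nat'          -- there is a natural number 0
  | succ : Nat' → Nat'   -- every natural number has a successor
```

Note that writing the axioms down this way presupposes them; the definition does not derive them from anything weaker.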


I don't think this is an argument against a singularity of ever-improving intelligence. It just can't happen driven (solely or predominantly) by AI, whether through a recursively self-improving seed AI or cognitive augmentation. Instead, we should expect a singularity that happens due to emergent intelligence. I think it is the interaction of different kinds of intelligence (like human/animal intuitive intelligence, machine precision, and the inherent order of the non-living universe, if you want to call that intelligence) that leads to increases in general intelligence, not just one particular kind of intelligence like the formal reasoning used by computers.

Comment author: benjayk 10 June 2011 10:56:08PM -2 points [-]

I contest that afterlife is a lie. I think one reason many people believe in an afterlife is because it actually makes sense, even though their picture of what it looks like is very unlikely to be accurate.

In my opinion it is simply a logical certainty that there is an "afterlife" (if one dies in the first place): I can't ever experience nothing in the present (even though I can say in retrospect "I experienced nothing", which just means I failed to have an experience with certain properties), so I will always experience something in the present. And experiencing is not a static thing that could 'stop' in the present (it requires change), thus I will always experience a future. What's the alternative?

Ceasing to exist is a third-person concept; it can't happen to a subject. But we ARE subjects (notwithstanding our useful relative identity as a third-person-accessible thing, i.e. our current body), so we can't cease to exist in a final first-person sense. Or at least we can't know what ceasing to exist means for us, any more than we can know what the world would be like if there were nothing. So there is no reason to be afraid of ceasing to exist, or to treat it like something that actually happens to us or anyone else (though temporary death is most probably something we should worry about).

To frame it as questions: What could ceasing to exist mean for me? I care about my experience, but there isn't one in this case. The dead one isn't me; it is just a body that used to be my body. So why would I worry about the non-experience of something that isn't me? Why not instead solely consider experiences I could have (e.g. being revived after death)?

So what kind of afterlife awaits us? I think it's likely that intelligence at some point, in some branch of the multiverse, can run arbitrarily good simulations of our past/present/near future and thus will resurrect every being with no violation of the laws of physics. Actually, I can't think of an alternative that doesn't require some fundamental things about the world or us to be very much unlike science thinks they are (e.g. there is a spiritual plane we reincarnate from, or consciousness can eternally exist in random quantum fluctuations...).

Comment author: benjayk 16 April 2011 02:25:37PM 3 points [-]

I think the most practical / accurate way of conceiving of individuality is through the connection of your perceptions through memory. You are the same person as 3 years ago because you remember being that person (not only rationally, but on a deeper level of knowing and feeling that you were that person). Of course, different persons will not share the memory of being the same person. So if we conceive of individuality in the way we actually experience it (which I think is most reasonable), there is not much sense in saying that many persons living right now are the same person, no matter how many memes they share. Even for an outside observer this is true, since people express enough of their memory to the outside world to make clear that their memories form distinct life stories. It may be true to say that many persons share a cultural identity or a meme space, but this does not make them the same person, since they do not share their personal identity. So unless your AI is dumb and does not understand what individuality consists of, it won't say that there are only thousands of people.

It might be true, though, that at some point in the future some people who have different memories right now will merge into one entity and thus share the same memory (if the singularity happens, I think that is not that unlikely). Then we could say that different persons living right now might not be different persons ultimately, but they are still different persons right now.
