Critical Rationalism (CR) is being discussed on some threads here at Less Wrong (e.g., here, here, and here). It is something that Critical Rationalists such as myself think contributors to Less Wrong need to understand much better. Critical Rationalists claim that CR is the only viable fully-fledged epistemology known, and that current attempts to specify a Bayesian/Inductivist epistemology are not only incomplete but cannot work at all. The purpose of this post is not to argue these claims in depth but to summarize the Critical Rationalist view on AI and also how this speaks to things like the Friendly AI Problem.

Some of the ideas here may conflict with ideas you think are true, but understand that these ideas have been worked on by some of the smartest people on the planet, both now and in the past. They deserve careful consideration, not a drive-by dismissal. Less Wrong holds that making progress on AI is one of the urgent problems of the world. If smart people in the know are saying that CR is needed to make that progress, and if you are an AI researcher who ignores them, then you are not taking the AI urgency problem seriously.


Universal Knowledge Creators

Critical Rationalism [1] says that human beings are universal knowledge creators. This means we can create any knowledge which it is possible to create. As Karl Popper first realized, the way we do this is by guessing ideas and by using criticism to find errors in our guesses. Our guesses may be wrong, in which case we try to make better guesses in the light of what we know from the criticisms so far. The criticisms themselves can be criticized, and we can and do change those. All of this constitutes an evolutionary process: like biological evolution, it proceeds by variation (guessing) and selection (criticism). This process is fallible: guaranteed certain knowledge is not possible because we can never know how an error might be exposed in the future. The best we can do is accept a guessed idea which has withstood all known criticisms. If we cannot find such an idea, then we have a new problem situation about how to proceed, and we try to solve that. [2]
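
To make the shape of this process concrete, here is a minimal illustrative sketch of the guess-and-criticize loop described above. This is my own illustration, not a formalization of CR; generate_guess and find_criticism are hypothetical placeholders standing in for the hard, unformalized creative steps.

```python
# Illustrative only: a bare-bones conjecture-and-criticism loop.
# generate_guess and find_criticism are hypothetical placeholders for the
# parts CR does not claim to have formalized (creativity and criticism).

def solve(problem, generate_guess, find_criticism, max_rounds=1000):
    criticisms = []  # criticisms found so far; they inform later guesses
    for _ in range(max_rounds):
        guess = generate_guess(problem, criticisms)  # conjecture a solution
        criticism = find_criticism(guess, problem)   # try hard to refute it
        if criticism is None:
            # Tentatively accept: the guess has withstood all known criticism.
            # It remains fallible; a future criticism could still overturn it.
            return guess
        criticisms.append(criticism)
    # No surviving guess: a new problem situation about how to proceed.
    return None
```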


Critical Rationalism says that an entity is either a universal knowledge creator or it is not. There is no such thing as a partially universal knowledge creator. So animals such as dogs are not universal knowledge creators — they have no ability whatsoever to create knowledge. What they have are algorithms pre-programmed by biological evolution that can be, roughly speaking, parameter-tuned. These algorithms are sophisticated and clever and beyond what humans can currently program, but they do not confer any knowledge creation ability. Your pet dog will not move beyond its repertoire of pre-programmed abilities and start writing posts to Less Wrong. Dogs' brains are universal computers, however, so it would be possible in principle to reprogram your dog’s brain so that it becomes a universal knowledge creator. This would be a remarkable feat because it would require knowledge of how to program an AI and also of how to physically carry out the reprogramming, but your dog would no longer be confined to its pre-programmed repertoire: it would be a person.

 

The reason there are no partially universal knowledge creators is similar to the reason there are no partially universal computers. Universality is cheap. It is why washing machines have general-purpose chips and dogs' brains are universal computers. Making a partially universal device is much harder than making a fully universal one, so it is better just to make a universal one and program it. The CR method described above for how people create knowledge is universal because there are no limits to the problems it applies to. How would one limit it to just a subset of problems? To implement that would be much harder than implementing the fully universal version. So if you meet an entity that can create some knowledge, it will have the capability for universal knowledge creation.


These ideas imply that AI is an all-or-none proposition. It will not come about by degrees where there is a progression of entities that can solve an ever widening repertoire of problems. There will be no climb up such a slope. Instead, it will happen as a jump: a jump to universality. This is in fact how intelligence arose in humans. Some change - it may have been a small change - crossed a boundary and our ancestors went from having no ability to create knowledge to a fully universal ability. This kind of jump to universality happens in other systems too. David Deutsch discusses examples in his book The Beginning of Infinity.

 

People will point to systems like AlphaGo, the Go playing program, and claim it is a counter-example to the jump-to-universality idea. They will say that AlphaGo is a step on a continuum that leads to human level intelligence and beyond. But it is not. Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot create knowledge in even a subset of contexts. It cannot learn how to ride a bicycle or post to Less Wrong. If it could do such things it would already be fully universal, as explained above. Like the dog’s brain, AlphaGo uses knowledge that was put there by something else: for the dog it was by evolution, and for AlphaGo it was by its programmers; they expended the creativity. 

 

As human beings are already universal knowledge creators, no AI can exist at a higher level. AIs may have better hardware and more memory, etc., but they will not have better knowledge creation potential than us. Even the hardware/memory advantage is not much of an advantage, for human beings already augment their intelligence with devices such as pencil and paper and computers, and we will continue to do so.


Becoming Smarter

Critical Rationalism, then, says AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have. It will be able to become smarter through learning but only in the same way that humans are able to become smarter: by acquiring knowledge and, in particular, by acquiring knowledge about how to become smarter. And, most of all, by learning good philosophy, for it is in that field that we learn how to think better and how to live better. All this knowledge can only be learned through the creative process of guessing ideas and error-correction by criticism, for that is the only known way intelligences can create knowledge.

 

It might be argued that AIs will become smarter much faster than we can because they will have much faster hardware. In regard to knowledge creation, however, there is no direct connection between the speed of knowledge creation and the underlying hardware speed. Humans do not use the computational resources of their brains to the maximum. That is not the bottleneck to us becoming smarter faster, and it will not be for AIs either. How fast you can create knowledge depends on things like what other knowledge you have, and some ideas may be blocking other ideas. You might have a problem with static memes (see The Beginning of Infinity), for example, and these could be causing bias, self-deception, and other issues. AIs will be susceptible to static memes too, because memes are highly adapted ideas evolved to replicate via minds.


Taking Children Seriously

One implication of the arguments above is that AIs will need parenting, just as we must parent our children. CR has a parenting theory called Taking Children Seriously (TCS). It should not be surprising that CR has such a theory, for CR is, after all, about learning and how we acquire knowledge. Unfortunately, TCS is not itself taken seriously by most people who first hear about it because it conflicts with a lot of conventional wisdom about parenting. It gets dismissed as "extremist" or "nutty", as if these were good criticisms rather than just the smears they actually are. Nevertheless, TCS is important, and it is important for those who wish to raise an AI.

 

One idea TCS has is that we must not thwart our children’s rationality, for example by pressuring them and making them do things they do not want to do. This is damaging to their intellectual development and can lead to them disrespecting rationality. We must persuade using reason, and this implies being prepared for the possibility that we are wrong about whatever matter is in question. Common parenting practices today are far from optimally rational and are damaging to children’s rationality.

 

AIs will face the same problem of bad parenting practices, and this will harm their intellectual development too. So AI researchers should be thinking right now about how to prevent this. They need to learn how to parent their AIs well, for if they do not, AIs will be beset by the same problems our children currently face. CR says we already have the solution: TCS. CR and TCS are in fact necessary to do AI in the first place.

 

Critical Rationalism and TCS say you cannot upload knowledge into an AI. The idea that you can is a version of the bucket theory of the mind, which says that "there is nothing in our intellect which has not entered it through the senses". The bucket theory is false because minds are not passive receptacles into which knowledge is poured. Minds must always selectively and actively think. They must create ideas and criticism, and they must actively integrate their ideas. Editing the memory of an AI to give it knowledge means none of this would happen. You cannot upload knowledge into an AI or otherwise make it acquire knowledge; the best you can do is present something to it for its consideration and persuade the AI to recreate the knowledge afresh in its own mind through guessing and criticism about what was presented.


Formalization and Probability Theory

Some reading this will object because CR and TCS are not formal enough — there is not enough maths for Critical Rationalists to have a true understanding! The CR reply is that it is too early for formalization. CR warns that you should not have a bias about formalization: there is high-quality knowledge in the world that we do not know how to formalize, but it is high-quality knowledge nevertheless. Not yet being able to formalize this knowledge does not reflect on its truth or rigor.

 

At this point you might be waving your E. T. Jaynes in the air or pointing to ideas like Bayes' Theorem, Occam's Razor, Kolmogorov Complexity, and Solomonoff Induction, and saying that you have achieved some formal rigor and that you can program something. Critical Rationalists say that you are fooling yourself if you think you have a workable epistemology there. For one thing, you confuse the probability of an idea being true with an idea about the probability of an event. We have no problem with ideas about the probabilities of events but it is a mistake to assign probabilities to ideas. The reason is that you have no way to know how or if an idea will be refuted in the future. Assigning a probability is to falsely claim some knowledge about that. Furthermore, an idea that is in fact false can have no objective prior probability of being true. The extent to which Bayesian systems work at all is dependent on the extent to which they deal with the objective probability of events (e.g., AlphaGo). In CR, the status of ideas is either "currently not problematic" or "currently problematic"; there are no probabilities of ideas. CR is a digital epistemology.
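
To illustrate the distinction being drawn here, consider a standard worked example of an idea about the probability of an event, the kind of claim CR has no problem with. The numbers are my own illustrative assumptions, not from the post: suppose a test detects a condition 90% of the time, gives a false positive 5% of the time, and the condition has a 1% base rate. Bayes' Theorem then gives the probability of the condition given a positive result:

$$P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)} = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.05 \times 0.99} \approx 0.15$$

This is a claim about event frequencies. Assigning, say, "80% probability" to the truth of a scientific theory is, on the CR view described above, a different kind of move: it claims knowledge about how or whether the theory will be refuted in the future, which the post argues we cannot have.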

 

Induction is a Myth

Critical Rationalists also ask: what epistemology are you using to judge the truth of Bayes, Occam, Kolmogorov, and Solomonoff? What you are actually using is the method of guessing ideas and subjecting them to criticism: it is CR, but you haven't crystallized it out. And nowhere, in any of what you are doing, are you using induction. Induction is impossible. Human beings do not do induction, and neither will AIs. Karl Popper explained why induction is a myth many decades ago and wrote extensively about it. He answered many criticisms of his position, but despite all this, people today still cling to the illusory idea of induction. In his book Objective Knowledge, Popper wrote:

 

Few philosophers have taken the trouble to study -- or even to criticize -- my views on this problem, or have taken notice of the fact that I have done some work on it. Many books have been published quite recently on the subject which do not refer to any of my work, although most of them show signs of having been influenced by some very indirect echoes of my ideas; and those works which take notice of my ideas usually ascribe views to me which I have never held, or criticize me on the basis of straightforward misunderstandings or misreading, or with invalid arguments.

 

And so, scandalously, it continues today.

 

Like the bucket theory of mind, induction presupposes that theory proceeds from observation. This assumption can be clearly seen in Less Wrong's An Intuitive Explanation of Solomonoff Induction:

 

The problem of induction is this: We have a set of observations (or data), and we want to find the underlying causes of those observations. That is, we want to find hypotheses that explain our data. We’d like to know which hypothesis is correct, so we can use that knowledge to predict future events. Our algorithm for truth will not listen to questions and answer yes or no. Our algorithm will take in data (observations) and output the rule by which the data was created. That is, it will give us the explanation of the observations; the causes.

 

Critical Rationalists say that all observation is theory-laden. You first need ideas about what to observe -- you cannot just have, a priori, a set of observations. You don't induce a theory from the observations; the observations help you find out whether a conjectured prior theory is correct or not. Observations help you to criticize the ideas in your theory, and the theory itself originated in your attempts to solve a problem. It is the problem context that comes first, not observations. The "set of observations" in the quote, then, is guided by and laden with knowledge from your prior theory, but that is not acknowledged.

 

Also not acknowledged is that we judge the correctness of theories not just by criticizing them via observations but also, and primarily, by all types of other criticism. Not only does the quote neglect this, but it over-emphasizes prediction and says that what we want to explain is data. Critical Rationalists say what we want to do, first and foremost, is solve problems -- all life is problem solving -- and we do that by coming up with explanations that solve the problems, or that explain why they cannot be solved. Prediction is therefore secondary to explanation. Without the latter you cannot do the former.


The "intuitive explanation" is an example of the very thing Popper was complaining about above -- the author has not taken the trouble to study or to criticize Popper's views.

 

There is a lot more to be said here but I will leave it because, as I said in the introduction, it is not my purpose to discuss this in depth, and Popper already covered it anyway. Go read him. The point I wish to make is that if you care about AI you should care to understand CR to a high standard because it is the only viable epistemology known. And you should be working on improving CR because it is in this direction of improving the epistemology that progress towards AI will be made. Critical Rationalists cannot at present formalize concepts such as "idea", "explanation", "criticism" etc, let alone CR itself, but one day, when we have deeper understanding, we will be able to write code. That part will be relatively easy.

 

Friendly AI

Let’s see how all this ties in with the Friendly AI Problem. I have explained how AIs will learn as we do — through guessing and criticism — and how they will have no more than the universal knowledge creation potential we humans already have. They will be fallible like us. They will make mistakes. They will be subjected to bad parenting. They will inherit their culture from ours, for it is in our culture that they must begin their lives. They will acquire all the memes our culture has, both the rational memes and the anti-rational memes. They will have the same capacity for good and evil that we do. They will become smarter faster through things like better philosophy and not primarily through hardware upgrades. It follows from all of this that they would be no more of a threat than evil humans currently are. But we can make their lives better by following things like TCS.

 

Human beings must respect the right of AI to life, liberty, and the pursuit of happiness. It is the only way. If we do otherwise, then we risk war and destruction and we severely compromise our own rationality and theirs. Similarly, they must respect our right to the same.

 

 

[1] The version of CR discussed is an update to Popper's version and includes ideas by the quantum-physicist and philosopher David Deutsch.

[2] For more detail on how this works see Elliot Temple's yes-or-no philosophy.

Comments

You are really confused about statistics and learning, and possibly also about formal languages in theoretical CS. I neither want nor have time to get into this with you, just wanted to point this out for your potential benefit.

I am summarizing a view shared by other Critical Rationalists, including Deutsch. Do you think they are confused too?

It's pretty common for groups of people to band together around confused beliefs.

Millions of people have incorrect beliefs about vaccines, millions more are part of new age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term "Observer" as used in physics), and millions more have banded together around incorrect beliefs about biology. Are you smarter than all of those people combined? Are you smarter than every single individual in those groups? Probably not, but...

The man who replaced me on the commission said, “That book was approved by sixty-five engineers at the Such-and-such Aircraft Company!”

I didn’t doubt that the company had some pretty good engineers, but to take sixty-five engineers is to take a wide range of ability–and to necessarily include some pretty poor guys! It was once again the problem of averaging the length of the emperor’s nose, or the ratings on a book with nothing between the covers. It would have been far better to have the company decide who their better engineers were, and to have them look at the book. I couldn’t claim that I was smarter than sixty-five other guys–but the average of sixty-five other guys, certainly!

I couldn’t get through to him, and the book was approved by the board.

— from “Surely You’re Joking, Mr. Feynman” (Adventures of a Curious Character)

One of my favorite examples of a smart person being confused about something is ET Jaynes being confused about Bell inequalities.

Smart people are confused all the time, even (perhaps especially) in their area.

Critical Rationalists think that E. T. Jaynes is confused about a lot of things. There has been discussion about this on the Fallible Ideas list.

FYI, Feynman was a critical rationalist.

Hahahahaha

https://www.youtube.com/watch?v=0KmimDq4cSU

Everything he says in that video is in accord with CR and with what I wrote about how we acquire knowledge. Note how the audience laughs when he says you start with a guess. What he says is in conflict with how LW thinks the scientific method works (like in the Solomonoff guide I referenced).

Millions of people have incorrect beliefs about vaccines, millions more are part of new age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term "Observer" as used in physics) ...

You are indirectly echoing ideas that come from David Deutsch. FYI, Deutsch is a proponent of the Many Worlds Explanation of quantum physics and he invented the idea of the universal quantum computer, founding quantum information theory. He talks about them in BoI.

AlphaGo is a remarkable algorithm, but it cannot create knowledge

Funny you should mention that. AlphaGo has a successor, AlphaZero. Let me quote:

The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

Note: "given no domain knowledge except the game rules"

They chose a limited domain and then designed and used an algorithm that works in that domain – which constitutes domain knowledge. The paper's claim is blatantly false; you are gullible and appealing to authority.

You sound less and less reasonable with every comment.

It doesn't look like your conversion attempts are working well. Why do you think this is so?

Unreason is accepting the claims of a paper at face value, appealing to its authority, and, then, when this is pointed out to you, claiming the other party is unreasonable.

I was aware of AlphaGo Zero before I posted -- check out my link. Note that it can't even learn the rules of the game. Humans can. They can learn the rules of all kinds of games. They have a game-rule learning universality. That AlphaGo Zero can't learn the rules of one game is indicative of how much domain knowledge the developers actually put into it. They are fooling themselves if they think AlphaGo Zero has superhuman learning ability or that it is progress towards AI.

Unreason is accepting the claims of a paper at face value, appealing to its authority

Which particular claim that the paper makes I accepted at face value and which you think is false? Be specific.

I was aware of AlphaGo Zero before I posted -- check out my link

AlphaGo Zero and AlphaZero are different things -- check out my link.

In any case, are you making the claim that if a neural net were able to figure out the rules of the game by examining a few million games, you would accept that it's a universal knowledge creator?

In any case, are you making the claim that if a neural net were able to figure out the rules of the game by examining a few million games, you would accept that it's a universal knowledge creator?

If it could figure out the rules of any game that would be remarkable. That logic would also really help to find bugs in programs or beat the stock market.

If they wanna convince anyone it isn't using domain-specific knowledge created by the programmers, why don't they demonstrate it in the straightforward way? Show results in 3 separate domains. But they can't.

If it really has nothing domain specific, why can't it work with ANY domain?

Show results in 3 separate domains.

  • Chess
  • Go
  • Shogi

You're describing what's known as General game playing.

You program an AI which will play a set of games, but you don't know in advance what the rules of the games will be. Build an AI which can accept a set of rules for a game and then teach itself to play.

This is in fact a field in AI.

also note recent news that AlphaGo Zero has been converted to AlphaZero, which can handle other games and rapidly taught itself how to play Chess, Shogi, and Go (beating its ancestor AlphaGo Zero), hinting that they're generalising it very successfully.

Here are some examples of domains other than game playing: architecture, chemistry, cancer research, website design, cryonics research, astrophysics, poetry, painting, political campaign running, dog toy design, knitting.

The fact that the self-play method works well for chess but not poetry is domain knowledge the programmers had, not something AlphaZero figured out for itself.

This again feels like one of those things that creeps the second anyone points you to examples.

If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you'd just declare that those weren't different enough domains because they're all science and then demand that it also be able to control pianist robots and scuba dive and run a nail salon.

Nothing to see here everyone.

This is just yet another boring iteration of the forever shifting goalposts of AI.

Nothing to see here; just another boring iteration of the absurd idea of "shifting goalposts."

There really is a difference between a general learning algorithm and specifically focused ones, and indeed, anything that can generate and test and run experiments will have the theoretical capability to control pianist robots and scuba dive and run a nail salon.

The Adam and Eve AIs. The pair are designed such that they can automatically generate large numbers of hypotheses, design experiments that could falsify the maximum possible number of hypotheses, and then run those experiments in an automated lab.

Rather than being designed to do X with yeast, it's basically told "go look at yeast" and then develops hypotheses about yeast and yeast biology, and it successfully re-discovered a number of elements of cell biology. Later iterations were given access to databases of already known genetic information and discovered new information about a number of genes.

http://www.dailygalaxy.com/my_weblog/2009/04/1st-artificially-intelligent-adam-and-eve-created.html

https://www.newscientist.com/article/dn16890-robot-scientist-makes-discoveries-without-human-help/

It's a remarkable system and could be extremely useful for scientists in many sectors but it's a 1.1 on the 1 to 10 scale where 10 is a credible paperclipper or Culture-Mind style AI.

This AI is not a pianist robot and doesn't play chess but has broad potential applications across many areas of science.

It blows a hole in the side of the "Universal Knowledge Creator" idea since it's a knowledge creator beyond most humans in a number of areas but is never going to be controlling a pianist robot or running a nail salon, because the belief that there's some magical UKC line or category (which humans technically don't qualify for yet anyway) is based on literally nothing except feelings. There's not an ounce of logic or evidence behind it.

If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you'd just declare that those weren't different enough domains because they're all science and then demand that it also be able to control pianist robots and scuba dive and run a nail salon.

We have given you criteria by which you can judge an AI: whether it is a UKC or not. As I explained in the OP, if something can create knowledge in some disparate domains then you have a UKC. We will be happy to declare it as such. You are under the false idea that AI will arrive by degrees, that there is such a thing as a partial UKC, and that knowledge creators lie on a continuum with respect to their potential. AI will no more arrive by degrees than our universal computers did. Universal computation came about through Turing in one fell swoop, and very nearly by Babbage a century before.

You underestimate the difficulties facing AI. You do not appreciate how truly different people are to other animals and to things like Alpha Zero.

EDIT: That was meant to be in reply to HungryHobo.

I basically agree with this, although 1) you are expressing it badly, 2) you are incorporating a true fact about the world into part of a nonsensical system, and 3) you should not be attempting to proselytize people.

Can we agree that I am not trying to proselytize anyone? I think people should use their own minds and judgment and I do not want people just to take my word for something. In particular, I think:

(1) All claims to truth should be carefully scrutinised for error.

(2) Claiming authority or pointing skyward to an authority is not a road to truth.

These claims should themselves be scrutinised for error. How could I hold these consistently with holding any kind of religion? I am open to the idea that I am wrong about these things too or that I am inconsistent.

I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.

Can we agree that I am not trying to proselytize anyone?

No, I do not agree. You have been trying to proselytize people from the beginning and are still trying.

(2) Claiming authority or pointing skyward to an authority is not a road to truth.

This is why you need to stop pointing to "Critical Rationalism" etc. as the road to truth.

I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.

First, you are wrong. You should not mention truths that it is harmful to mention in situations where it is harmful to mention them. Second, you are not "not watering down the truth". You are making many nonsensical and erroneous claims and presenting them as though they were a unified system of absolute truth. This is quite definitely proselytism.

Yes, there are situations where it can be harmful to state the truth. But there is a common social problem where people do not say what they think or water it down for fear of causing offense. Or because they are looking to gain status. That was the context.

The truth that curi and myself are trying to get across to people here is that you are doing AI wrong and are wasting your lives. We are willing to be ridiculed for stating that, but it is the unvarnished truth. AI has been stuck in a rut for decades with no progress. People kid themselves that the latest shiny toy like AlphaZero is progress, but it is not.

AI research has bad epistemology at its heart and this is holding back AI in the same way that quantum physics was held back by bad epistemology. David Deutsch had a substantial role in clearing that problem up in QM (although there are many who still do not accept multiple universes). He needed the epistemology of CR to do that. See The Fabric of Reality.

Curi, Deutsch, and myself know far more about epistemology than you. That again is an unvarnished truth. We are saying we have ideas that can help get AI moving. In particular CR. You are blinded by things you think are so but that cannot be. The myth of Induction for one.

AI is blocked -- you have to consider that some of your deeply held ideas are false. How many more decades do you want to waste? These problems are too urgent for that.

The truth that curi and myself are trying to get across to people here is... it is the unvarnished truth... know far more about epistemology than you. That again is an unvarnished truth

In which way all these statements are different from claiming that Jesus is Life Everlasting and that Jesus dying for our sins is an unvarnished truth?

Lots of people claim to have access to Truth -- what makes you special?

AlphaZero clearly isn't general purpose. What are we even debating?

This sentence from the OP:

Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot create knowledge in even a subset of contexts.

A bit more generally, the claim that humans are UKCs and nothing else can create knowledge, which is defined as a way to solve a problem.

If you want to debate that you need an epistemology which says what "knowledge" is. References to where you have that with full details to rival Critical Rationalism?

Or are you claiming the OP is mistaken even within the CR framework..? Or do you have no rival view, but think CR is wrong and we just don't have any good philosophy? In that case the appropriate thing to do would be to answer this challenge that no one even tried to answer: https://www.lesserwrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology

If you want to debate that you need an epistemology which says what "knowledge" is. References to where you have that with full details to rival Critical Rationalism?

Oh, get stuffed. I tried debating you and the results were... discouraging.

Yes, I obviously think that CR is deluded.

I feel the term "domain" is doing a lot of work in these replies. Define domain, what is the size limit of a domain? Might all of reality be a domain and thus a domain-specific algorithm be sufficient for anything of interest?

Four hours of self-play and it's the strongest in the world. Soon the machines will be parenting us.

I started this post off trying to be charitable but gradually became less so.

"This means we can create any knowledge which it is possible to create."

Is there any proof that this is true? anything rigorous? The human mind could have some notable blind spots. For all we know there could be concepts that happen to cause normal human minds to suffer lethal epileptic fits, similar to how certain patterns of flashing light do in some people. Or simple concepts that could be incredibly inefficient to encode in a normal human mind but that could be easily encoded in a mind of a similar scale with a different architecture.

"There is no such thing as a partially universal knowledge creator."

What is this based upon? Some animals can create novel tools to solve problems. Some humans can solve very simple problems but are quickly utterly stumped beyond a certain point. Dolphins can be demonstrated to be able to form hypotheses and test them, but stop at simple hypotheses.

Is a human a couple of standard deviations below average who refuses to entertain hypotheticals a "universal knowledge creator"? Can the author point to any individuals on the border or below it, either due to brain damage or developmental problems?

Just because a Turing machine can in theory run any computable computation, that doesn't mean that a given mind can solve all problems that that Turing machine could, just because it can understand the basics of how a Turing machine works. The programmer is not just a superset of their programs.

"These ideas imply that AI is an all-or-none proposition."

You've not really established that very well at all. You've simply claimed it with basically no support.

Your arguments seem to be poorly grounded and poorly supported; simply stating things as if they were fact does not make them so.

"Humans do not use the computational resources of their brains to the maximum."

Interesting claim. So these ruthlessly evolved brains aren't being used even when our lives and the lives of our progeny are in jeopardy? Odd to evolve all that expensive excess capacity then not use it.

"Critical Rationalism, then, says AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have. It will be able to become smarter through learning but only in the same way that humans are able to become smarter"

Ok, here's a challenge. We both set up a chess AI but I get to use the hardware that was recently used to run AlphaZero while you only get to use a 486. We both get to use the same source code. Standard tournament chess rules with time limits.

You seem to be mentally modeling all potential AI as basically just a baby based on literally... nothing whatsoever.

Your TCS link seems to be fluff and buzzwords irrelevant to AI.

"Some reading this will object because CR and TCS are not formal enough — there is not enough maths"

That's an overly charitable way of putting it. Backing up none of your claims and then building a gigantic edifice of argument on thin air is not great for formal support of something.

"Not yet being able to formalize this knowledge does not reflect on its truth or rigor."

"We have no problem with ideas about the probabilities of events but it is a mistake to assign probabilities to ideas. The reason is that you have no way to know how or if an idea will be refuted in the future. Assigning a probability is to falsely claim some knowledge about that. Furthermore, an idea that is in fact false can have no objective prior probability of being true. The extent to which Bayesian systems work at all is dependent on the extent to which they deal with the objective probability of events (e.g., AlphaGo). In CR, the status of ideas is either "currently not problematic" or "currently problematic", there are no probabilities of ideas. CR is a digital epistemology. "

The space of potentially true things that are actually completely false is infinite. If you just pick ideas out of the air and don't bother with testing them and showing them to be correct, you provide about as much useful insight to those around you as the average screaming madman on the street corner preaching that the Robot Lizardmen are working with the CIA to put radio transmitters in his teeth to hide the truth about 9/11.

Proving your claims to actually be true or to have some meaningful chance of being true matters.

"This means we can create any knowledge which it is possible to create."

Is there any proof that this is true?

are you asking for infallible proof, or merely argument?

anything rigorous?

see this book http://beginningofinfinity.com (it also addresses most of your subsequent questions)

...ok so I don't get to find the arguments out unless I buy a copy of the book?

right... looking at a pirated copy of the book, the phrase "universal knowledge creator" appears nowhere in it nor "knowledge creator"

But lets have a read of the chapter "Artificial Creativity"

big long spiel about ELIZA being crap. Same generic qualia arguments as ever.

One minor gem in there for which the author deserves to be commended:

"I have settled on a simple test for judging claims, including Dennett’s, to have explained the nature of consciousness (or any other computational task): if you can’t program it, you haven’t understood it"

...

Claim that genetic algorithms and similar learning systems aren't really inventing or discovering anything because they reach local maxima and thus the design is really just coming from the programmer. (Presumably, then, the developers of AlphaGo must be the world's best grandmaster Go players.)

I see the phrase "universal constructors" where the author claims that human bodies are able to turn anything into anything. This argument appears to rest squarely on the idea that while there may be some things we actually can't do or ideas we actually can't handle we should, one day, be able to either alter ourselves or build machines (AI's?) that can handle it. Thus we are universal constructors and can do anything.

On a related note, I am in fact an office block, because while I may not actually be 12 stories tall and covered in glass, I could in theory build machines which build machines which could be used to build an office block, and thus by this book's logic, that makes me an office block, and from this point forward in the comments we can make arguments based on the assumption that I can contain at least 75 office workers along with their desks and equipment.

The fact that we haven't actually managed to create machines that can turn anything into anything yet strangely doesn't get a look in on the argument about why we're currently universal constructors but dolphins are not.

The author brings up the idea of things we may genuinely simply not be able to understand and just dismisses it with literally nothing except the objection that it's claiming things could be inexplicable and hence should be dismissed. (on a related note the president of the tautology club is the president of the tautology club)

Summary: I'd give it a C- but upgrade it to C for being better than the geocities website selling it.

Also, the book doesn't actually address my objections.

The author brings up the idea of things we may genuinely simply not be able to understand and just dismisses it with literally nothing except the objection that it's claiming things could be inexplicable and hence should be dismissed. (on a related note the president of the tautology club is the president of the tautology club)

Deutsch gives arguments that people are universal explainers/constructors (this requires that they be computationally universal as well). What is your argument that there are some things that a universal explainer could never be able to understand? Alternatively, what is your argument that people are not universal explainers? Deutsch talks about the “reach” of knowledge. Knowledge created to solve a problem in one domain can solve problems in other domains too. What is your argument that the knowledge we create could never reach into this inexplicable realm you posit?

First: If I propose that humans can sing any possible song, or that humans are universal jumpers and can jump any height, the burden is not upon everyone else to prove that humans cannot, because I'm the one making the absurd proposition.

He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans as they currently are cannot construct; as we currently are, we cannot actually arbitrarily order atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie.

He merely makes the guess that we'll be able to do so in the future, or that we'll be able to build something that will be able to build something in the future that will be able to, but that border collies never will. (That is based on little more than faith.)

From this he concludes we're "universal constructors" despite us quite trivially falling short of the definition of 'universal constructor' he proposes.

When you start talking about "reach" you utterly utterly cancel out all the claims made about AI in the OP. If a superhuman AI with a brain the size of a planet made of pure computation can just barely manage to comprehend some horribly complex problem and there's a slim chance that humans might one day be able to build AI's which might be able to build AI's which might be able to build AI's that might be able to build that AI that doesn't mean that humans have fully comprehended that thing or could fully comprehend that thing any more than slime mould could be said to comprehend the building of a nuclear power station because they could potentially produce offspring which produce offspring which produce offspring.....[repeat many times] who could potentially design and build a nuclear power station.

His arguments are full of gaping holes. How does this not jump out at other readers?

He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans as they currently are cannot construct; as we currently are, we cannot actually arbitrarily order atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie.

Our human ancestors on the African savannah could not construct a nuclear reactor, nor the skyline of Manhattan, nor an 18-core microprocessor. They had no idea how. But they had in them the potential, and that potential has been realized today. To do that, we created deep knowledge about how our universe works. Why do you think that is not going to continue? Why should we not be able to construct a von Neumann probe at some point in the future? Note that most of the advances I am talking about occurred in the last few hundred years. Humans had a big problem with static memes preventing progress for millennia (see BoI). If not for those memes, we may well be at the stars by now. While humans made all this progress, dolphins and border collies did what?

Yes, our ancestors could not build a nuclear reactor; the Australian natives spent 40 thousand years without constructing a bow and arrow. Neither the Australian natives nor anyone else has built a cold fusion reactor. Running halfway doesn't mean you've won the race.

Putting ourselves in the category of "entities who can build anything" is like putting yourself in the category "people who've been on the moon" when you've never actually been to the moon but really really want to be an astronaut one day. You might even one day become an astronaut but aspirations don't put you in the category with Armstrong until you actually do the thing.

Your pet collie might dream vaguely of building cars; perhaps in 5,000,000 years its descendants might have self-selected for intelligence and we'll have collie engineers, but that doesn't make it an engineer today.

Currently, by the definition in that book, humans are not universal constructors; at best we might one day be universal constructors if we don't all get wiped out by something first. It would be nice if we became such one day. But right now we're merely closer to being universal constructors than unusually bright ravens and collies.

Feelings are not fact. Hopes are not reality.

Assuming that nothing will stop us based on a thin sliver of history is shaky extrapolation:

https://xkcd.com/605/

"Critical Rationalism says that an entity is either a universal knowledge creator or it is not. There is no such thing as a partially universal knowledge creator. So animals such as dogs are not universal knowledge creators — they have no ability whatsoever to create knowledge"

From what you say prior to the quoted bit, I don't even know why one needs to say anything about dogs. The claim that something either is or is not a universal knowledge creator (UKC) is largely (or should this be binary as well?) a tautological statement. It's not clear that you could prove dogs are or are not in either of the two buckets. The status of dogs with regard to UKC certainly doesn't follow from the binary claim statement.

Perhaps this is a false premise embedded in your thinking that helps you get to (I didn't read to the end) some conclusion about how an AI must also be a universal knowledge creator, so it is on par with humans (in your/the CR assessment), and so humans must respect the AI as enjoying the same rights as a human.

Note the "There is no such thing as a partially universal knowledge creator.". That means an entity either is a UKC or it has no ability, or approximately zero ability, to create knowledge. Dogs are in the latter bucket.

That conclusion -- "dogs are not UKC" doesn't follow from the binary statement about UKC. You're being circular here and not even in a really good way.

While you don't provide any argument for your conclusion about the status of dogs as UKCs, one might make guesses. However, all the guesses I can make are 1) just that, and have nothing to do with what you might be thinking, and 2) all result in me coming to the conclusion that there are NO UKCs. That would hardly be a conclusion you would want to aim at.

I would be happy to rewrite the first line to say: An entity is either a UKC or it has zero -- or approximately zero -- potential to create knowledge. Does that help?

Well, it's better than jumping to an unsupported conclusion, I suppose; that should help at some level. Not sure it really helps with regard to either 1 or 2 in my response, but that's a different matter, I think.

after the first few lines I wanted to comment that seeing almost religious fervor in combination with self-named CRITICAL anything reminds me of all sorts of "critical theorists", also quite "religiously" inflamed... but I waited till the end, and got a nice confirmation from that "AI rights" line... looking forward to seeing happy paperclip maximizers pursuing their happiness, which is their holy right (and subsequent #medeletedtoo)

otherwise, no objections to Popper and induction, nor to the suggestion that AGIs will most probably think like we do (and yes, "friendly" AI is not really a rigorous scientific term, rather a journalistic or even "propagandistic" one)

also, it's quite likely that at least in the short-term horizon, humANIMALs are a more serious threat than AIs (a deadly combination of "natural stupidity" and DeepAnimal brain parts - having all those powers given to them by the Memetic Supercivilization of Intelligence, living currently on a humanimal substrate, though <1%)

but this "impossibility of uploading" is a tricky thing - who knows what can or cannot be "transferred" and to what extent will this new entity resemble the original one, not talking about subsequent diverging evolution(in any case, this may spell the end of CR if the disciples forbid uploading for themselves... and others will happily upload to this megacheap and gigaperformant universal substrate)

and btw., it's nice to postulate that "AI cannot recursively improve itself" while many research and applied narrow AIs are actually doing it right at this moment (though probably not "consciously")

sorry for my heavily nonrigorous, irrational and nonscientific answers, see you in the uploaded self-improving Brave New World

and btw., it's nice to postulate that "AI cannot recursively improve itself" while many research and applied narrow AIs are actually doing it right at this moment (though probably not "consciously")

Please quote me accurately. What I wrote was:

AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have

I am not against the idea that an AI can become smarter by learning how to become smarter and recursing on that. But that cannot lead to more knowledge creation potential than humans already have.

The purpose of this post is not to argue these claims in depth but to summarize the Critical Rationalist view on AI and also how this speaks to things like the Friendly AI Problem.

Unfortunately that makes this post not very useful. It's definitely interesting, but you're just making a bunch of assertions with very little evidence (mostly just that smart people like Ayn Rand and a quantum physicist agree with you).

My intent was to summarise the CR view on AI. I've provided links so you can read more.

EDIT: BTW I disagree that I have made "a bunch of assertions". I have provided arguments, for example, about induction. I suspect, also, that you think observation - or evidence - comes first and I have argued against that.


An illogical and imperfect intellect is trying to create a perfect AI. What if human-like AI is the best we can do?

P.S. Good point of view.

How is knowledge defined in CR?

In CR, knowledge is information which solves a problem. CR criticizes the justified-true-belief idea of knowledge. Knowledge cannot be justified, or shown to be certain, but this doesn't matter, for if it solves a problem, it is useful. Justification is problematic because it is ultimately authoritarian. It requires that you have some base, which itself cannot be justified except by an appeal to authority, such as the authority of the senses or the authority of self-evidence, or suchlike. We cannot be certain of knowledge because we cannot say whether an error will be exposed in the future. This view is contrary to most people's intuition, and for this reason they can easily misunderstand the CR view, which commonly happens.

CR accepts something as knowledge if it solves a problem and has no known criticisms. Such knowledge is currently unproblematic but may become problematic in the future if an error is found.

Critical Rationalists are fallibilists: they don't look for justification; they try to find errors, and they accept anything they cannot find an error in. Fallibilists, then, expose their knowledge to tough criticism. Contrary to popular opinion, they are not wishy-washy, hedging, or uncertain. They often have strong opinions.

Has a dog that learns to open a box to get access to a food item not created knowledge according to this definition? What about a human child that has learned the same?

As I explained in the post, dog genes contain behavioural algorithms pre-programmed by evolution. The algorithms have some flexibility -- akin to parameter tuning -- and the knowledge contained in the algorithms is general-purpose enough that it can be tuned for dogs to do things like open boxes. So it might look like the dog is learning something, but the knowledge was created by biological evolution, not by the individual dog. The knowledge in the dog's genes is an example of what Popper calls knowledge without a knowing subject. Note that all dogs have approximately the same behavioural repertoire. They are kind of like characters in a video game. Some boxes a dog will never open, though a human will learn to do it.

A child is a UKC so when a child learns to open a box, the child creates new knowledge afresh in their own mind. It was not put there by biological evolution. A child's knowledge of box-opening will grow, unlike a dog's, and they will learn to open boxes in ways a dog never can. And different children can be very different in terms of what they know how to do.

This argument seems chosen to make it utterly unfalsifiable.

If someone provides examples of animal X solving novel problems in creative ways you can just say "that's just the 'some flexibility' bit"

It's also rank nonsense -- this bit in particular:

dog genes contain behavioural algorithms pre-programmed by evolution

Some orcas hunt seal pups by temporarily stranding themselves on the beaches in order to reach their prey. Is that behaviour programmed in their genes? The genes of all orcas?

yes that'd be my first guess – that it's caused by something in the gene pool of orcas. why not? and what else would it be?

The problem is that very, very few orcas do that -- only two pods in the world, as far as we know. Orcas which live elsewhere (e.g., the Pacific Northwest orcas, which are very well-observed) do not do anything like this. Moreover, there is evidence that the technique is taught by adults to juvenile orcas. See e.g. here or here.

genetic algorithms often write and later read data, just like e.g. video game enemies. your examples are irrelevant b/c you aren't addressing the key intellectual issues. this example also adds nothing new over examples that have already been addressed.

you are claiming it's a certain kind of writing and reading data (learning) as opposed to other kinds (non-learning), but aren't writing or referencing anything which discusses this matter. you present some evidence as if no analysis of it was required, and you don't even try to discuss the key issues. i take it that, as with prior discussion, you're simply ignorant of what the issues are (like you simply take an unspecified common sense epistemology for granted, rather than being able to discuss the field). and that you won't want to learn or seriously discuss, and you will be hostile to the idea that you need a framework in which to interpret the evidence (and thus go on using your unquestioned framework that is one of the cultural defaults + some random and non-random quirks).

genetic algorithms often write and later read data, just like e.g. video game enemies

Huh? First, the expression "genetic algorithms" doesn't mean what you think it means. Second, I don't understand the writing and reading data part. Write which data to what substrate?

your examples are irrelevant b/c you aren't addressing the key intellectual issues

I like dealing with reality. You like dealing with abstractions in your head. We talked about this -- we disagree. You know that.

But if you are uninterested in empirical evidence, why bother discussing it at all?

you won't want to learn or seriously discuss

Yes, I'm not going to do what you want me to do. You know that as well.

you will be hostile to the idea that you need a framework in which to interpret the evidence

I will be hostile to the idea that I need your framework to interpret the evidence, yes. You know that, too.

You need any framework, but never provided one. I have a written framework, you don't. GG.

LOL. You keep insisting that people have to play by your rules but really, they don't.

You can keep inventing your own games and declaring yourself winner by your own rules, but it doesn't look like a very useful activity to me.

People are overly impressed by things that animals can do, such as dogs opening doors, and think the only explanation is that they must be learning. Conversely, people think children being good at something means they have an inborn natural talent. The child is doing something way more remarkable than the dog but does not get to take credit. The dog does.

Quick feedback:

Something about the text formatting, paragraph density, and paragraph size uniformity makes this difficult to read.

I have added in some sub-headings, if that helps.