
Comment author: entirelyuseless 11 December 2017 12:57:23AM 0 points

Can we agree that I am not trying to proselytize anyone?

No, I do not agree. You have been trying to proselytize people from the beginning and are still trying.

(2) Claiming authority or pointing skyward to an authority is not a road to truth.

This is why you need to stop pointing to "Critical Rationalism" etc. as the road to truth.

I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.

First, you are wrong. You should not mention truths in situations where it is harmful to mention them. Second, you are not "not watering down the truth". You are making many nonsensical and erroneous claims and presenting them as though they were a unified system of absolute truth. This is quite definitely proselytism.

Comment author: Fallibilist 11 December 2017 10:22:26AM 0 points

Yes, there are situations where it can be harmful to state the truth. But there is a common social problem where people do not say what they think, or water it down, for fear of causing offense or because they are looking to gain status. That was the context.

The truth that curi and I are trying to get across to people here is that you are doing AI wrong and are wasting your lives. We are willing to be ridiculed for stating that, but it is the unvarnished truth. AI has been stuck in a rut for decades with no progress. People kid themselves that the latest shiny toy, like AlphaZero, is progress, but it is not.

AI research has bad epistemology at its heart, and this is holding back AI in the same way that quantum physics was held back by bad epistemology. David Deutsch had a substantial role in clearing up that problem in QM (although there are many who still do not accept multiple universes). He needed the epistemology of CR to do that. See The Fabric of Reality.

Curi, Deutsch, and I know far more about epistemology than you. That again is an unvarnished truth. We are saying we have ideas that can help get AI moving, in particular CR. You are blinded by things you think are so but that cannot be. The myth of Induction, for one.

AI is blocked -- you have to consider that some of your deeply held ideas are false. How many more decades do you want to waste? These problems are too urgent for that.

Comment author: curi 10 December 2017 03:06:36AM *  0 points

Genetic algorithms often write and later read data, just like e.g. video game enemies. Your examples are irrelevant because you aren't addressing the key intellectual issues. This example also adds nothing new over examples that have already been addressed.

You are claiming it's a certain kind of writing and reading of data (learning) as opposed to other kinds (non-learning), but you aren't writing or referencing anything which discusses this matter. You present some evidence as if no analysis of it were required, and you don't even try to discuss the key issues. I take it that, as with prior discussion, you're simply ignorant of what the issues are (e.g. you take an unspecified common-sense epistemology for granted, rather than being able to discuss the field), that you won't want to learn or seriously discuss, and that you will be hostile to the idea that you need a framework in which to interpret the evidence (and will thus go on using your unquestioned framework, which is one of the cultural defaults plus some random and non-random quirks).
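To make "write and later read data" concrete, here is a minimal sketch of a toy genetic algorithm that saves its candidate solutions to a file and reloads them later; the target string, fitness function, and file name are illustrative assumptions, not anything specified in the discussion.

```python
import json
import random

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Count how many characters match the fixed target string.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Replace one randomly chosen character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def evolve(population, generations=200):
    for _ in range(generations):
        population = sorted(population, key=fitness, reverse=True)
        parents = population[: len(population) // 2]
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return population

# "Write data": run the GA for a while and persist its candidate solutions.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(20)]
population = evolve(population)
with open("population.json", "w") as f:
    json.dump(population, f)

# "Later read data": reload the saved candidates and continue evolving them,
# much as a game enemy reloads its saved state.
with open("population.json") as f:
    population = json.load(f)
population = evolve(population)
print(max(population, key=fitness))
```

Whether that save-and-reload step counts as learning, or is merely the same kind of storage a game enemy also does, is exactly the question at issue.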

Comment author: Fallibilist 10 December 2017 06:46:27AM 0 points

People are overly impressed by things that animals can do, such as dogs opening doors, and think the only explanation is that they must be learning. Conversely, people think children being good at something means they have an inborn natural talent. The child is doing something far more remarkable than the dog but does not get to take the credit. The dog does.

Comment author: jmh 08 December 2017 02:31:08PM 0 points

That conclusion -- "dogs are not UKC" -- doesn't follow from the binary statement about UKC. You're being circular here, and not even in a really good way.

While you don't provide any argument for your conclusion about the status of dogs as UKC, one might make guesses. However, all the guesses I can make are 1) just that, with nothing to go on as to what you might actually be thinking, and 2) ones that all lead me to the conclusion that there are NO UKC. That would hardly be a conclusion you would want to aim at.

Comment author: Fallibilist 10 December 2017 06:27:57AM 0 points

I would be happy to rewrite the first line to say: An entity is either a UKC or it has zero -- or approximately zero -- potential to create knowledge. Does that help?

Comment author: entirelyuseless 09 December 2017 03:00:09PM 0 points

I basically agree with this, although 1) you are expressing it badly, 2) you are incorporating a true fact about the world into part of a nonsensical system, and 3) you should not be attempting to proselytize people.

Comment author: Fallibilist 09 December 2017 09:06:20PM *  0 points

Can we agree that I am not trying to proselytize anyone? I think people should use their own minds and judgment, and I do not want people to just take my word for something. In particular, I think:

(1) All claims to truth should be carefully scrutinised for error.

(2) Claiming authority or pointing skyward to an authority is not a road to truth.

These claims should themselves be scrutinised for error. How could I hold these consistently with holding any kind of religion? I am open to the idea that I am wrong about these things too or that I am inconsistent.

I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.

Comment author: HungryHobo 08 December 2017 09:48:15PM *  0 points

First: If I propose that humans can sing any possible song, or that humans are universal jumpers and can jump any height, the burden is not upon everyone else to prove that humans cannot, because I'm the one making the absurd proposition.

He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans, as they currently are, cannot construct; as we currently are, we cannot actually arrange atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie.

He merely guesses that we'll be able to do so in the future, or that we'll be able to build something that will be able to build something that will be able to, but that border collies never will. (That is based on little more than faith.)

From this he concludes we're "universal constructors" despite us quite trivially falling short of the definition of 'universal constructor' he proposes.

When you start talking about "reach" you utterly, utterly cancel out all the claims made about AI in the OP. If a superhuman AI with a brain the size of a planet, made of pure computation, can just barely manage to comprehend some horribly complex problem, and there's a slim chance that humans might one day be able to build AIs which might be able to build AIs which might be able to build AIs that might be able to build that AI, that doesn't mean that humans have fully comprehended that thing, or could fully comprehend that thing, any more than slime mould could be said to comprehend the building of a nuclear power station because it could potentially produce offspring which produce offspring which produce offspring.....[repeat many times] who could potentially design and build a nuclear power station.

His arguments are full of gaping holes. How does this not jump out at other readers?

Comment author: Fallibilist 09 December 2017 03:52:22AM 0 points

He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans, as they currently are, cannot construct; as we currently are, we cannot actually arrange atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie.

Our human ancestors on the African savannah could not construct a nuclear reactor, nor the skyline of Manhattan, nor an 18-core microprocessor. They had no idea how. But they had the potential in them, and that potential has been realized today. To do that, we created deep knowledge about how our universe works. Why do you think that is not going to continue? Why should we not be able to construct a von Neumann probe at some point in the future? Note that most of the advances I am talking about occurred in the last few hundred years. Humans had a big problem with static memes preventing progress for millennia (see BoI). If not for those memes, we might well be at the stars by now. While humans made all this progress, dolphins and border collies did what?

Comment author: entirelyuseless 09 December 2017 02:28:25AM 0 points

Nothing to see here; just another boring iteration of the absurd idea of "shifting goalposts."

There really is a difference between a general learning algorithm and specifically focused ones, and indeed, anything that can generate and test and run experiments will have the theoretical capability to control pianist robots and scuba dive and run a nail salon.

Comment author: Fallibilist 09 December 2017 03:22:43AM *  0 points

If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you'd just declare that those weren't different enough domains because they're all science, and then demand that it also be able to control pianist robots and scuba dive and run a nail salon.

We have given you a criterion by which you can judge an AI: whether or not it is a UKC. As I explained in the OP, if something can create knowledge in some disparate domains, then you have a UKC. We will be happy to declare it as such. You are under the false idea that AI will arrive by degrees, that there is such a thing as a partial UKC, and that knowledge creators lie on a continuum with respect to their potential. AI will no more arrive by degrees than our universal computers did. Universal computation came about through Turing in one fell swoop, and very nearly through Babbage a century before.

You underestimate the difficulties facing AI. You do not appreciate how truly different people are to other animals and to things like AlphaZero.

EDIT: That was meant to be in reply to HungryHobo.

Comment author: IlyaShpitser 08 December 2017 10:54:55PM *  1 point

One of my favorite examples of a smart person being confused about something is E. T. Jaynes being confused about Bell inequalities.

Smart people are confused all the time, even (perhaps especially) in their area.

Comment author: Fallibilist 09 December 2017 01:49:37AM *  0 points

Critical Rationalists think that E. T. Jaynes is confused about a lot of things. There has been discussion about this on the Fallible Ideas list.

Comment author: Elo 09 December 2017 12:35:31AM 0 points

Hahahahaha

Comment author: Fallibilist 09 December 2017 01:30:42AM *  0 points

https://www.youtube.com/watch?v=0KmimDq4cSU

Everything he says in that video is in accord with CR and with what I wrote about how we acquire knowledge. Note how the audience laughs when he says you start with a guess. What he says is in conflict with how LW thinks the scientific method works (like in the Solomonoff guide I referenced).

Comment author: HungryHobo 08 December 2017 10:15:18PM 5 points

It's pretty common for groups of people to band together around confused beliefs.

Millions of people have incorrect beliefs about vaccines, millions more are part of New Age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term "Observer" as used in physics), and millions more have banded together around incorrect beliefs about biology. Are you smarter than all of those people combined? Are you smarter than every single individual in those groups? Probably not, but...

The man who replaced me on the commission said, “That book was approved by sixty-five engineers at the Such-and-such Aircraft Company!”

I didn’t doubt that the company had some pretty good engineers, but to take sixty-five engineers is to take a wide range of ability–and to necessarily include some pretty poor guys! It was once again the problem of averaging the length of the emperor’s nose, or the ratings on a book with nothing between the covers. It would have been far better to have the company decide who their better engineers were, and to have them look at the book. I couldn’t claim that I was smarter than sixty-five other guys–but the average of sixty-five other guys, certainly!

I couldn’t get through to him, and the book was approved by the board.

— from “Surely You’re Joking, Mr. Feynman” (Adventures of a Curious Character)

Comment author: Fallibilist 09 December 2017 12:25:54AM 0 points

FYI, Feynman was a critical rationalist.

Comment author: Fallibilist 09 December 2017 12:12:35AM *  0 points

Millions of people have incorrect beliefs about vaccines, millions more are part of New Age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term "Observer" as used in physics) ...

You are indirectly echoing ideas that come from David Deutsch. FYI, Deutsch is a proponent of the Many Worlds Explanation of quantum physics and he invented the idea of the universal quantum computer, founding quantum information theory. He talks about them in BoI.
