
Comment author: Lumifer 12 December 2017 03:27:26PM 0 points [-]

LW is kinda dead (not entirely, there is still some shambling around happening, but the brains are in short supply) and is due to be replaced by a shinier reincarnated version, referred to as LW 2.0, which is now in open beta at www.lesserwrong.com

LW 1.0 is still here, but if you're looking for active discussion, LW 2.0 might be a better bet.

Re qualia, I suggest that you start by trying to set up hard definitions for the terms "qualia" and "exists". Once you do, you may find the problem disappears -- see e.g. this.

Re simulation, let me point out that the simulation hypothesis is conventionally known as "creationism". As to the probability not being calculable, I agree.

Comment author: Shrikey 12 December 2017 01:40:26PM 0 points [-]

Hey there,

Just joined. My only exposure to LW has been reading about it on other websites, and reading a short story by Yudkowsky (I think) about baby eating aliens, which was a fun read. (Though I prefer the original ending to the "real" one.)

I have no idea what I plan to get out of joining the site, other than looking around. I know I do have an itch to write out my thoughts about a few topics on some public forum, but no idea if they're at all novel or interesting.

So, I do have questions about what the prevalent view (assuming there is one) is on LW about a couple of topics, and where I can find how people have arrived at that view.

  1. Qualia. I don't believe they exist. Or, equivalently, qualia being something "special" is an illusion, just like free will. Is there a consensus here about that? Or has the topic been beaten to death? Also, would the perception of having free will itself count as qualia?

  2. The possibility that we're in a simulation. I believe it's basically not calculable at present, given what we know. This follows from my finding no compelling reason to believe that the capabilities of technology either end shortly beyond our current capabilities or are unimaginably limitless. It's simply not predictable where they end, but obvious that they do end somewhere. Any of that interest anyone?

Comment author: jmh 12 December 2017 01:13:36PM 0 points [-]

Well, it's better than jumping to unsupported conclusions, I suppose; that should help at some level. Not sure it really helps with regard to either 1 or 2 in my response, but that's a different matter, I think.

Comment author: Mark-Mills 12 December 2017 10:11:08AM 0 points [-]

Interesting

Comment author: Luke_A_Somers 12 December 2017 01:41:29AM 0 points [-]

… you don't think that pissing away credibility could weaken the arguments? I think presenting those particular arguments is more likely to do that than it is to work.

Comment author: Lumifer 11 December 2017 03:30:03PM *  2 points [-]

The truth that curi and myself are trying to get across to people here is... it is the unvarnished truth... know far more about epistemology than you. That again is an unvarnished truth

In what way are all these statements different from claiming that Jesus is Life Everlasting and that Jesus dying for our sins is an unvarnished truth?

Lots of people claim to have access to Truth -- what makes you special?

Comment author: ChristianKl 11 December 2017 12:41:19PM 0 points [-]

You said you can't deduce something. This means that there's a puzzle that you couldn't solve and it's not a hard problem to solve.

Comment author: HungryHobo 11 December 2017 11:58:08AM *  0 points [-]

Yes, our ancestors could not build a nuclear reactor; the Australian natives spent 40 thousand years without constructing a bow and arrow. Neither the Australian natives nor anyone else has built a cold fusion reactor. Running halfway doesn't mean you've won the race.

Putting ourselves in the category of "entities who can build anything" is like putting yourself in the category "people who've been on the moon" when you've never actually been to the moon but really really want to be an astronaut one day. You might even one day become an astronaut but aspirations don't put you in the category with Armstrong until you actually do the thing.

Your pet collie might dream vaguely of building cars; perhaps in 5,000,000 years its descendants might have self-selected for intelligence and we'll have collie engineers, but that doesn't make it an engineer today.

Currently, by the definition in that book, humans are not universal constructors; at best we might one day be universal constructors, if we don't all get wiped out by something first. It would be nice if we became such one day. But right now we're merely closer to being universal constructors than unusually bright ravens and collies are.

Feelings are not facts. Hopes are not reality.

Assuming that nothing will stop us based on a thin sliver of history is shaky extrapolation:

https://xkcd.com/605/

Comment author: HungryHobo 11 December 2017 11:43:56AM *  0 points [-]

Adam and Eve AIs. The pair are designed such that they can automatically generate large numbers of hypotheses, design experiments that could falsify the maximum possible number of hypotheses, and then run those experiments in an automated lab.

Rather than being designed to do X with yeast, it's basically told "go look at yeast"; it then develops hypotheses about yeast and yeast biology, and it successfully re-discovered a number of elements of cell biology. Later iterations were given access to databases of already-known genetic information and discovered new information about a number of genes.

http://www.dailygalaxy.com/my_weblog/2009/04/1st-artificially-intelligent-adam-and-eve-created.html

https://www.newscientist.com/article/dn16890-robot-scientist-makes-discoveries-without-human-help/
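
A minimal sketch of that loop (my own illustration, not the actual Adam/Eve code; hypotheses are modelled as functions that predict an experiment's outcome, and run_experiment stands in for the automated lab):

    def best_experiment(hypotheses, experiments):
        # Pick the experiment that falsifies as many live hypotheses as
        # possible in the worst case: the one whose possible outcomes
        # split the hypothesis pool most evenly.
        def worst_case_survivors(e):
            outcomes = {}
            for h in hypotheses:
                outcomes.setdefault(h(e), []).append(h)
            return max(len(group) for group in outcomes.values())
        return min(experiments, key=worst_case_survivors)

    def investigate(hypotheses, experiments, run_experiment):
        hypotheses, experiments = list(hypotheses), list(experiments)
        while len(hypotheses) > 1 and experiments:
            e = best_experiment(hypotheses, experiments)
            experiments.remove(e)
            result = run_experiment(e)  # the automated lab
            # Keep only hypotheses consistent with the observed result.
            hypotheses = [h for h in hypotheses if h(e) == result]
        return hypotheses  # surviving candidate explanation(s)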

It's a remarkable system and could be extremely useful for scientists in many sectors but it's a 1.1 on the 1 to 10 scale where 10 is a credible paperclipper or Culture-Mind style AI.

This AI is not a pianist robot and doesn't play chess but has broad potential applications across many areas of science.

It blows a hole in the side of the "Universal Knowledge Creator" idea: it's a knowledge creator beyond most humans in a number of areas, but it is never going to be controlling a pianist robot or running a nail salon. The belief that there's some magical UKC line or category (which humans technically don't qualify for yet anyway) is based on literally nothing except feelings. There's not an ounce of logic or evidence behind it.

Comment author: Fallibilist 11 December 2017 10:22:26AM 0 points [-]

Yes, there are situations where it can be harmful to state the truth. But there is a common social problem where people do not say what they think, or water it down, for fear of causing offense. Or because they are looking to gain status. That was the context.

The truth that curi and myself are trying to get across to people here is that you are doing AI wrong and are wasting your lives. We are willing to be ridiculed for stating that but it is the unvarnished truth. AI has been stuck in a rut for decades with no progress. People kid themselves that the latest shiny toy like Alpha Zero is progress but it is not.

AI research has bad epistemology at its heart and this is holding back AI in the same way that quantum physics was held back by bad epistemology. David Deutsch had a substantial role in clearing that problem up in QM (although there are many who still do not accept multiple universes). He needed the epistemology of CR to do that. See The Fabric of Reality.

Curi, Deutsch, and I know far more about epistemology than you. That again is an unvarnished truth. We are saying we have ideas that can help get AI moving. In particular CR. You are blinded by things you think are so but that cannot be. The myth of Induction, for one.

AI is blocked -- you have to consider that some of your deeply held ideas are false. How many more decades do you want to waste? These problems are too urgent for that.

Comment author: entirelyuseless 11 December 2017 12:57:23AM 0 points [-]

Can we agree that I am not trying to proselytize anyone?

No, I do not agree. You have been trying to proselytize people from the beginning and are still trying.

(2) Claiming authority or pointing skyward to an authority is not a road to truth.

This is why you need to stop pointing to "Critical Rationalism" etc. as the road to truth.

I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.

First, you are wrong. You should not mention truths that it is harmful to mention in situations where it is harmful to mention them. Second, you are not "not watering down the truth". You are making many nonsensical and erroneous claims and presenting them as though they were a unified system of absolute truth. This is quite definitely proselytism.

Comment author: Lumifer 10 December 2017 07:43:38PM 0 points [-]

LOL. You keep insisting that people have to play by your rules but really, they don't.

You can keep inventing your own games and declaring yourself winner by your own rules, but it doesn't look like a very useful activity to me.

Comment author: curi 10 December 2017 09:02:15AM 0 points [-]

You need a framework, but you never provided one. I have a written framework, you don't. GG.

Comment author: Lumifer 10 December 2017 07:23:48AM *  0 points [-]

genetic algorithms often write and later read data, just like e.g. video game enemies

Huh? First, the expression "genetic algorithms" doesn't mean what you think it means. Second, I don't understand the writing and reading data part. Write which data to what substrate?

your examples are irrelevant b/c you aren't addressing the key intellectual issues

I like dealing with reality. You like dealing with abstractions in your head. We talked about this -- we disagree. You know that.

But if you are uninterested in empirical evidence, why bother discussing it at all?

you won't want to learn or seriously discuss

Yes, I'm not going to do what you want me to do. You know that as well.

you will be hostile to the idea that you need a framework in which to interpret the evidence

I will be hostile to the idea that I need your framework to interpret the evidence, yes. You know that, too.

Comment author: Fallibilist 10 December 2017 06:46:27AM 0 points [-]

People are overly impressed by things that animals can do such as dogs opening doors and think the only explanation is that they must be learning. Conversely, people think children being good at something means they have an in-born natural talent. The child is doing something way more remarkable than the dog but does not get to take credit. The dog does.

Comment author: Fallibilist 10 December 2017 06:27:57AM 0 points [-]

I would be happy to rewrite the first line to say: An entity is either a UKC or it has zero -- or approximately zero -- potential to create knowledge. Does that help?

Comment author: curi 10 December 2017 03:06:36AM *  0 points [-]

genetic algorithms often write and later read data, just like e.g. video game enemies. your examples are irrelevant b/c you aren't addressing the key intellectual issues. this example also adds nothing new over examples that have already been addressed.

you are claiming it's a certain kind of writing and reading data (learning) as opposed to other kinds (non-learning), but aren't writing or referencing anything which discusses this matter. you present some evidence as if no analysis of it was required, and you don't even try to discuss the key issues. i take it that, as with prior discussion, you're simply ignorant of what the issues are (like you simply take an unspecified common sense epistemology for granted, rather than being able to discuss the field). and that you won't want to learn or seriously discuss, and you will be hostile to the idea that you need a framework in which to interpret the evidence (and thus go on using your unquestioned framework that is one of the cultural defaults + some random and non-random quirks).
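
For concreteness, a toy genetic algorithm with the write/read steps marked (an illustrative sketch of the generic technique, not code from this discussion):

    import random

    def mutate(genome, rate=0.05):
        return [1 - bit if random.random() < rate else bit for bit in genome]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def evolve(fitness, genome_length=8, pop_size=20, generations=50):
        # write: store an initial population of bit-string genomes
        pop = [[random.randint(0, 1) for _ in range(genome_length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            # read: score the stored genomes and select parents
            parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
            # write: store the next generation built from the old one
            pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
                   for _ in range(pop_size)]
        return max(pop, key=fitness)

    # e.g. evolve(fitness=sum) searches for the all-ones genome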

Comment author: Lumifer 10 December 2017 12:39:49AM 1 point [-]

The problem is that very, very few orcas do that -- only two pods in the world, as far as we know. Orcas which live elsewhere (e.g. the Pacific Northwest orcas, which are very well-observed) do not do anything like this. Moreover, there is evidence that the technique is taught by adults to juvenile orcas. See e.g. here or here.

Comment author: Fallibilist 09 December 2017 09:06:20PM *  0 points [-]

Can we agree that I am not trying to proselytize anyone? I think people should use their own minds and judgment and I do not want people just to take my word for something. In particular, I think:

(1) All claims to truth should be carefully scrutinised for error.

(2) Claiming authority or pointing skyward to an authority is not a road to truth.

These claims should themselves be scrutinised for error. How could I hold these consistently with holding any kind of religion? I am open to the idea that I am wrong about these things too or that I am inconsistent.

I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.

Comment author: entirelyuseless 09 December 2017 03:00:09PM 0 points [-]

I basically agree with this, although 1) you are expressing it badly, 2) you are incorporating a true fact about the world into part of a nonsensical system, and 3) you should not be attempting to proselytize people.

Comment author: turchin 09 December 2017 11:11:24AM 0 points [-]

Why? If we have a full-blown paperclipper which is coming of age, we are already doomed, and starting its bullshit detector will not make our situation any worse.

Comment author: turchin 09 December 2017 11:07:10AM 0 points [-]

In fact, I am not describing here how to build such a trap; I am expressing our commitment to build the trap, in order to instill uncertainty into any future AI about whether the trap exists and what it could be.

I think that the real constructors of the trap will find the correct ways to make it effective and balance false positives with the needed level of control.

In response to Memory is Everything
Comment author: Arthur12 09 December 2017 06:42:02AM 0 points [-]

Memory is everything, explains neurologist Eric Kandel, who received his Nobel Prize in 2000 for his pioneering research on the physiology of brain memory. "Memory is the glue that links the mind and ensures continuity," says Kandel. His mentor, the respected neurologist Harry Grundfest, told him: "You must follow a reductionist approach, one cell at a time."

Comment author: Lumifer 09 December 2017 05:15:33AM *  0 points [-]

If you want to debate that you need an epistemology which says what "knowledge" is. References to where you have that with full details to rival Critical Rationalism?

Oh, get stuffed. I tried debating you and the results were... discouraging.

Yes, I obviously think that CR is deluded.

Comment author: Fallibilist 09 December 2017 03:52:22AM 0 points [-]

He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans as they currently are cannot construct; as we currently are, we cannot actually arbitrarily order atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie.

Our human ancestors on the African savannah could not construct a nuclear reactor, nor the skyline of Manhattan, nor an 18-core microprocessor. They had no idea how. But they had in them the potential, and that potential has been realized today. To do that, we created deep knowledge about how our universe works. Why do you think that is not going to continue? Why should we not be able to construct a von Neumann probe at some point in the future? Note that most of the advances I am talking about occurred in the last few hundred years. Humans had a big problem with static memes preventing progress for millennia (see BoI). If not for those memes, we may well be at the stars by now. While humans made all this progress, dolphins and border collies did what?

Comment author: Fallibilist 09 December 2017 03:22:43AM *  0 points [-]

If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you'd just declare that those weren't different enough domains because they're all science and then demand that it also be able to control pianist robots and scuba dive and run a nail salon.

We have given you criteria by which you can judge an AI: whether it is a UKC or not. As I explained in the OP, if something can create knowledge in some disparate domains then you have a UKC. We will be happy to declare it as such. You are under the false idea that AI will arrive by degrees, that there is such a thing as a partial UKC, and that knowledge creators lie on a continuum with respect to their potential. AI will no more arrive by degrees than our universal computers did. Universal computation came about through Turing in one fell swoop, and very nearly by Babbage a century before.

You underestimate the difficulties facing AI. You do not appreciate how truly different people are to other animals and to things like Alpha Zero.

EDIT: That was meant to be in reply to HungryHobo.

Comment author: entirelyuseless 09 December 2017 02:28:25AM 0 points [-]

Nothing to see here; just another boring iteration of the absurd idea of "shifting goalposts."

There really is a difference between a general learning algorithm and specifically focused ones, and indeed, anything that can generate and test and run experiments will have the theoretical capability to control pianist robots and scuba dive and run a nail salon.

Comment author: Fallibilist 09 December 2017 01:49:37AM *  0 points [-]

Critical Rationalists think that E. T. Jaynes is confused about a lot of things. There has been discussion about this on the Fallible Ideas list.

Comment author: Fallibilist 09 December 2017 01:30:42AM *  0 points [-]

https://www.youtube.com/watch?v=0KmimDq4cSU

Everything he says in that video is in accord with CR and with what I wrote about how we acquire knowledge. Note how the audience laughs when he says you start with a guess. What he says is in conflict with how LW thinks the scientific method works (like in the Solomonoff guide I referenced).

Comment author: curi 09 December 2017 12:50:26AM *  0 points [-]

If you want to debate that you need an epistemology which says what "knowledge" is. References to where you have that with full details to rival Critical Rationalism?

Or are you claiming the OP is mistaken even within the CR framework..? Or do you have no rival view, but think CR is wrong and we just don't have any good philosophy? In that case the appropriate thing to do would be to answer this challenge that no one even tried to answer: https://www.lesserwrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology

Comment author: Elo 09 December 2017 12:35:31AM 0 points [-]

Hahahahaha

Comment author: Lumifer 09 December 2017 12:34:00AM 0 points [-]

This sentence from the OP:

Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot create knowledge in even a subset of contexts.

A bit more generally, the claim that humans are UKCs and nothing else can create knowledge which is defined as a way to solve a problem.

Comment author: Fallibilist 09 December 2017 12:25:54AM 0 points [-]

FYI, Feynman was a critical rationalist.

Comment author: Fallibilist 09 December 2017 12:12:35AM *  0 points [-]

Millions of people have incorrect beliefs about vaccines, millions more are part of new age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term "Observer" as used in physics) ...

You are indirectly echoing ideas that come from David Deutsch. FYI, Deutsch is a proponent of the Many Worlds Explanation of quantum physics and he invented the idea of the universal quantum computer, founding quantum information theory. He talks about them in BoI.

Comment author: IlyaShpitser 08 December 2017 10:54:55PM *  1 point [-]

One of my favorite examples of a smart person being confused about something is ET Jaynes being confused about Bell inequalities.

Smart people are confused all the time, even (perhaps especially) in their area.

Comment author: curi 08 December 2017 10:22:51PM 0 points [-]

AlphaZero clearly isn't general purpose. What are we even debating?

Comment author: HungryHobo 08 December 2017 10:15:18PM 3 points [-]

It's pretty common for groups of people to band together around confused beliefs.

Millions of people have incorrect beliefs about vaccines, millions more are part of new age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term "Observer" as used in physics) and millions more have banded together around incorrect beliefs about biology. Are you smarter than all of those people combined? Are you smarter than every single individual in those groups? Probably not, but...

The man who replaced me on the commission said, “That book was approved by sixty-five engineers at the Such-and-such Aircraft Company!”

I didn’t doubt that the company had some pretty good engineers, but to take sixty-five engineers is to take a wide range of ability -- and to necessarily include some pretty poor guys! It was once again the problem of averaging the length of the emperor’s nose, or the ratings on a book with nothing between the covers. It would have been far better to have the company decide who their better engineers were, and to have them look at the book. I couldn’t claim that I was smarter than sixty-five other guys -- but the average of sixty-five other guys, certainly!

I couldn’t get through to him, and the book was approved by the board.

— from “Surely You’re Joking, Mr. Feynman” (Adventures of a Curious Character)

Comment author: HungryHobo 08 December 2017 10:04:11PM 0 points [-]

This again feels like one of those things that creeps the second anyone points you to examples.

If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you'd just declare that those weren't different enough domains because they're all science and then demand that it also be able to control pianist robots and scuba dive and run a nail salon.

Nothing to see here everyone.

This is just yet another boring iteration of the forever shifting goalposts of AI.

Comment author: curi 08 December 2017 09:56:36PM 0 points [-]

yes that'd be my first guess – that it's caused by something in the gene pool of orcas. why not? and what else would it be?

Comment author: HungryHobo 08 December 2017 09:48:15PM *  0 points [-]

First: If I propose that humans can sing any possible song, or that humans are universal jumpers and can jump any height, the burden is not upon everyone else to prove that humans cannot, because I'm the one making the absurd proposition.

He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans as they currently are cannot construct; as we currently are, we cannot actually arbitrarily order atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie.

He merely makes the guess that we'll be able to do so in future, or that we'll be able to build something that will be able to build something in future that will be able to, but that border collies never will. (That is based on little more than faith.)

From this he concludes we're "universal constructors" despite us quite trivially falling short of the definition of 'universal constructor' he proposes.

When you start talking about "reach" you utterly, utterly cancel out all the claims made about AI in the OP. If a superhuman AI with a brain the size of a planet, made of pure computation, can just barely manage to comprehend some horribly complex problem, and there's a slim chance that humans might one day be able to build AIs which might be able to build AIs which might be able to build AIs that might be able to build that AI, that doesn't mean that humans have fully comprehended that thing, or could fully comprehend that thing, any more than slime mould could be said to comprehend the building of a nuclear power station because they could potentially produce offspring which produce offspring which produce offspring.....[repeat many times] who could potentially design and build a nuclear power station.

His arguments are full of gaping holes. How does this not jump out at other readers?

Comment author: curi 08 December 2017 09:46:34PM 1 point [-]

Here are some examples of domains other than game playing: architecture, chemistry, cancer research, website design, cryonics research, astrophysics, poetry, painting, political campaign running, dog toy design, knitting.

The fact that the self-play method works well for chess but not poetry is domain knowledge the programmers had, not something alphazero figured out for itself.

Comment author: Fallibilist 08 December 2017 08:53:52PM *  0 points [-]

The author brings up the idea of things we may genuinely simply not be able to understand and just dismisses it with literally nothing except the objection that it's claiming things could be inexplicable and hence should be dismissed. (on a related note the president of the tautology club is the president of the tautology club)

Deutsch gives arguments that people are universal explainers/constructors (this requires that they be computationally universal as well). What is your argument that there are some things that a universal explainer could never be able to understand? Alternatively, what is your argument that people are not universal explainers? Deutsch talks about the “reach” of knowledge. Knowledge created to solve a problem in one domain can solve problems in other domains too. What is your argument that the knowledge we create could never reach into this inexplicable realm you posit?

Comment author: Lumifer 08 December 2017 06:11:55PM *  0 points [-]

the AI risks starting these triggers when it starts to think first thoughts about existing of the triggers

So basically you have a trap which kills you the moment you become aware of it. The first-order effect will be a lot of random deaths from just blundering into such a trap while walking around.

I suspect that the second-order effect will be the rise of, basically, superstitions and some forms of magical thinking which will be able to provide incentives to not go "there" without actually naming "there". I am not sure this is a desirable outcome.

Comment author: Luke_A_Somers 08 December 2017 05:56:54PM 0 points [-]

I suspect that an AI will have a bullshit detector. We want to avoid setting it off.

Comment author: Lumifer 08 December 2017 05:32:12PM *  0 points [-]

It's also rank nonsense -- this bit in particular:

dog genes contain behavioural algorithms pre-programmed by evolution

Some orcas hunt seal pups by temporarily stranding themselves on the beaches in order to reach their prey. Is that behaviour programmed in their genes? The genes of all orcas?

Comment author: Lumifer 08 December 2017 03:56:12PM *  0 points [-]

Show results in 3 separate domains.

  • Chess
  • Go
  • Shogi
Comment author: Lumifer 08 December 2017 03:55:23PM *  0 points [-]

Unreason is accepting the claims of a paper at face value, appealing to its authority

Which particular claim that the paper makes did I accept at face value, and which do you think is false? Be specific.

I was aware of AlphaGo Zero before I posted -- check out my link

AlphaGo Zero and AlphaZero are different things -- check out my link.

In any case, are you making the claim that if a neural net were able to figure out the rules of the game by examining a few million games, you would accept that it's a universal knowledge creator?

Comment author: HungryHobo 08 December 2017 03:15:54PM 0 points [-]

This argument seems chosen to make it utterly unfalsifiable.

If someone provides examples of animal X solving novel problems in creative ways you can just say "that's just the 'some flexibility' bit"

Comment author: HungryHobo 08 December 2017 03:07:06PM *  0 points [-]

You're describing what's known as General game playing.

You program an AI which will play a set of games, but you don't know in advance what the rules of the games will be: build an AI which can accept a set of rules for a game, then teach itself to play.

This is in fact a field in AI.

Also note recent news that AlphaGoZero has been converted to AlphaZero, which can handle other games and rapidly taught itself how to play Chess, Shogi, and Go (beating its ancestor AlphaGoZero), hinting that they're generalising it very successfully.
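
Sketched minimally (an illustration of the idea, with made-up names rather than any particular GGP framework's API), the interface is "rules in, play out":

    import random
    from dataclasses import dataclass
    from typing import Any, Callable, List

    @dataclass
    class GameRules:
        # The game is handed to the player as data, not baked into its code.
        initial_state: Any
        legal_moves: Callable[[Any], List[Any]]
        play: Callable[[Any, Any], Any]   # (state, move) -> next state
        is_terminal: Callable[[Any], bool]
        score: Callable[[Any], float]     # result of a finished game

    def estimate_move(rules, state, move, playouts=100):
        # Crudest self-teaching baseline: value a move by finishing the game
        # at random many times. Serious GGP agents (and AlphaZero) replace
        # this with search plus a learned evaluation.
        total = 0.0
        for _ in range(playouts):
            s = rules.play(state, move)
            while not rules.is_terminal(s):
                s = rules.play(s, random.choice(rules.legal_moves(s)))
            total += rules.score(s)
        return total / playouts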

Comment author: HungryHobo 08 December 2017 02:44:12PM *  2 points [-]

...ok so I don't get to find the arguments out unless I buy a copy of the book?

right... looking at a pirated copy of the book, the phrase "universal knowledge creator" appears nowhere in it, nor does "knowledge creator"

But let's have a read of the chapter "Artificial Creativity"

Big long spiel about ELIZA being crap. Same generic qualia arguments as ever.

One minor gem in there for which the author deserves to be commended:

"I have settled on a simple test for judging claims, including Dennett’s, to have explained the nature of consciousness (or any other computational task): if you can’t program it, you haven’t understood it"

...

Claim that genetic algorithms and similar learning systems aren't really inventing or discovering anything because they reach local maxima and thus the design is really just coming from the programmer. (Presumably then the developers of AlphaGo must be the world's best grandmaster Go players.)

I see the phrase "universal constructors", where the author claims that human bodies are able to turn anything into anything. This argument appears to rest squarely on the idea that while there may be some things we actually can't do, or ideas we actually can't handle, we should, one day, be able to either alter ourselves or build machines (AIs?) that can handle them. Thus we are universal constructors and can do anything.

On a related note, I am in fact an office block, because while I may not actually be 12 stories tall and covered in glass, I could in theory build machines which build machines which could be used to build an office block; thus, by this book's logic, that makes me an office block, and from this point forward in the comments we can make arguments based on the assumption that I can contain at least 75 office workers along with their desks and equipment.

The fact that we haven't actually managed to create machines that can turn anything into anything yet strangely doesn't get a look in on the argument about why we're currently universal constructors but dolphins are not.

The author brings up the idea of things we may genuinely simply not be able to understand and just dismisses it with literally nothing except the objection that it's claiming things could be inexplicable and hence should be dismissed. (on a related note the president of the tautology club is the president of the tautology club)

Summary: I'd give it a C- but upgrade it to C for being better than the geocities website selling it.

Also, the book doesn't actually address my objections.

Comment author: jmh 08 December 2017 02:31:08PM 0 points [-]

That conclusion -- "dogs are not UKCs" -- doesn't follow from the binary statement about UKCs. You're being circular here, and not even in a really good way.

While you don't provide any argument for your conclusion about the status of dogs as UKCs, one might make guesses. However, all the guesses I can make are 1) just that, and have nothing to do with what you might be thinking, and 2) all result in me coming to the conclusion that there are NO UKCs. That would hardly be a conclusion you would want to aim at.

Comment author: Subsumed 08 December 2017 12:03:43PM 0 points [-]

I feel the term "domain" is doing a lot of work in these replies. Define "domain": what is the size limit of a domain? Might all of reality be a domain, and thus a domain-specific algorithm be sufficient for anything of interest?

Comment author: curi 08 December 2017 11:02:59AM *  0 points [-]

If they wanna convince anyone it isn't using domain-specific knowledge created by the programmers, why don't they demonstrate it in the straightforward way? Show results in 3 separate domains. But they can't.

If it really has nothing domain specific, why can't it work with ANY domain?

Comment author: Fallibilist 08 December 2017 09:54:23AM 0 points [-]

Unreason is accepting the claims of a paper at face value, appealing to its authority, and, then, when this is pointed out to you, claiming the other party is unreasonable.

I was aware of AlphaGo Zero before I posted -- check out my link. Note that it can't even learn the rules of the game. Humans can. They can learn the rules of all kinds of games. They have a game-rule learning universality. That AlphaGo Zero can't learn the rules of one game is indicative of how much domain knowledge the developers actually put into it. They are fooling themselves if they think AlphaGo Zero has superhuman learning ability and is progress towards AI.

Comment author: Lumifer 08 December 2017 06:02:35AM 0 points [-]

<shrug> You sound less and less reasonable with every comment.

It doesn't look like your conversion attempts are working well. Why do you think this is so?

Comment author: curi 08 December 2017 05:17:19AM *  0 points [-]

They chose a limited domain and then designed and used an algorithm that works in that domain – which constitutes domain knowledge. The paper's claim is blatantly false; you are gullible and appealing to authority.

Comment author: gwern 08 December 2017 03:31:56AM 0 points [-]

On an intermediate class of anesthetics: "Surgical Patients May Be Feeling Pain—and (Mostly) Forgetting It: Amnesic anesthetics are convenient and help patients make a faster recovery, but they don't necessarily prevent suffering during surgery", Kate Cole-Adams:

In 1993, as a little-known anesthesiologist from Hull, England, Russell published a startling study. Using a technique almost primitive in its simplicity, he monitored 32 women undergoing major gynecological surgery at the Hull Royal Infirmary to assess their levels of consciousness. The results convinced him to stop the trial halfway through.

The women were put to sleep with a low-dose anesthetic cocktail that had been recently lauded as providing protection against awareness. The main ingredients were the (then) relatively new drug midazolam, along with a painkiller and muscle relaxant to effectively paralyze each woman throughout the surgery. Before the women were anesthetized, however, Russell attached what was essentially a blood-pressure cuff around each woman’s forearm. The cuff was then tightened to act as a tourniquet that prevented the flow of blood, and therefore muscle relaxant, to the right hand. Russell hoped to leave open a simple but ingenious channel of communication—like a priority phone line—on the off chance that anyone was there to answer him. Once the women were unconscious, Russell put headphones over their ears through which, throughout all but the final minutes of the operation, he played a prerecorded one-minute continuous-loop cassette. Each message would begin with Russell’s voice repeating the patient’s name twice. Then each woman would hear an identical message. “This is Dr. Russell speaking. If you can hear me, I would like you to open and close the fingers of your right hand, open and close the fingers of your right hand.”

Under the study design, if a patient appeared to move her hand in response to the taped command, Russell was to hold her hand, raise one of the earpieces and say her name, then deliver this instruction: “If you can hear me, squeeze my fingers.” If the woman responded, Russell would ask her to let him know, by squeezing again, if she was feeling any pain. In either of these scenarios, he would then administer a hypnotic drug to put her back to sleep. By the time he had tested 32 women, 23 had squeezed his hand when asked if they could hear. Twenty of them indicated they were in pain. At this point he stopped the study. When interviewed in the recovery room, none of the women claimed to remember anything, though three days later several showed some signs of recall. Two agreed after prompting that they had been asked to do something with their right hand. Neither of them could remember what it was, but while they were thinking about it, said Russell, both involuntarily opened and closed that hand. Fourteen of the patients in the study (including one who was later excluded) showed some signs of light anesthesia (increased heart rate, blood-pressure changes, sweating, tears), but this was true of fewer than half of the hand-squeezers.* Overall, said Russell, such physical signs “seemed of little value” in predicting intraoperative consciousness.

He concluded thus:

If the aim of general anesthesia is to ensure that a patient has no recognizable conscious recall of surgery, and views the perioperative period [during the surgery] as a “positive” experience, then ... [this regimen] may fulfill that requirement. However, the definition of general anesthesia would normally include unconsciousness and freedom from pain during surgery—factors not guaranteed by this technique.

For most of the women in his study, he continued, the state of mind produced by the anesthetic could not be viewed as general anesthesia. Rather, he said, “it should be regarded as general amnesia.”...Twenty years after that discontinued study, Russell staged similar experiments using the isolated-forearm technique alongside a bispectral-index monitor (BIS), which tracks depth of anesthesia. While the number of women who responded dropped to one-third when staff used an inhalation anesthetic, another study using the intravenous drug propofol showed that during BIS-guided surgery, nearly three-quarters of patients still responded to command—half those responses within the manufacturer’s recommended surgical range.

...(This post is adapted from Cole-Adams’s new 2017 book, Anesthesia: The Gift of Oblivion and the Mystery of Consciousness.)

Comment author: Mitchell_Porter 07 December 2017 09:09:59PM 0 points [-]

Four hours of self-play and it's the strongest in the world. Soon the machines will be parenting us.

Comment author: Lumifer 07 December 2017 08:49:49PM *  1 point [-]

AlphaGo is a remarkable algorithm, but it cannot create knowledge

Funny you should mention that. AlphaGo has a successor, AlphaZero. Let me quote:

The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

Note: "given no domain knowledge except the game rules"

Comment author: curi 07 December 2017 07:23:49PM *  0 points [-]

"This means we can create any knowledge which it is possible to create."

Is there any proof that this is true?

are you asking for infallible proof, or merely argument?

anything rigorous?

see this book http://beginningofinfinity.com (it also addresses most of your subsequent questions)

Comment author: Hafurelus 07 December 2017 07:06:35PM 0 points [-]

I've mailed CFAR (contact@rationality.org) -- or should I have mailed people directly?

Comment author: Fallibilist 07 December 2017 06:47:58PM 0 points [-]

As I explained in the post, dog genes contain behavioural algorithms pre-programmed by evolution. The algorithms have some flexibility -- akin to parameter tuning -- and the knowledge contained in the algorithms is general purpose enough that it can be tuned for dogs to do things like open boxes. So it might look like the dog is learning something, but the knowledge was created by biological evolution, not the individual dog. The knowledge in the dog's genes is an example of what Popper calls knowledge without a knowing subject. Note that all dogs have approximately the same behavioural repertoire. They are kind of like characters in a video game. Some boxes a dog will never open, though a human will learn to do it.

A child is a UKC so when a child learns to open a box, the child creates new knowledge afresh in their own mind. It was not put there by biological evolution. A child's knowledge of box-opening will grow, unlike a dog's, and they will learn to open boxes in ways a dog never can. And different children can be very different in terms of what they know how to do.

Comment author: Manfred 07 December 2017 05:12:01PM 0 points [-]

This site isn't too active - maybe email someone from CFAR directly?

Comment author: HungryHobo 07 December 2017 02:28:26PM *  1 point [-]

I started this post off trying to be charitable but gradually became less so.

"This means we can create any knowledge which it is possible to create."

Is there any proof that this is true? Anything rigorous? The human mind could have some notable blind spots. For all we know, there could be concepts that happen to cause normal human minds to suffer lethal epileptic fits, similar to what certain patterns of flashing light do to some people. Or simple concepts that could be incredibly inefficient to encode in a normal human mind but could be easily encoded in a mind of a similar scale with a different architecture.

"There is no such thing as a partially universal knowledge creator."

What is this based upon? Some animals can create novel tools to solve problems. Some humans can solve very simple problems but are quickly utterly stumped beyond a certain point. Dolphins can be demonstrated to be able to form hypotheses and test them, but stop at simple hypotheses.

Is a human a couple of standard deviations below average who refuses to entertain hypotheticals a "universal knowledge creator"? Can the author point to any individuals on the border or below it, either due to brain damage or developmental problems?

Just because a Turing machine can in theory run all computable computations, that doesn't mean that a given mind can solve all problems that that Turing machine could just because it can understand the basics of how a Turing machine works. The programmer is not just a super-set of their programs.

"These ideas imply that AI is an all-or-none proposition."

You've not really established that very well at all. You've simply claimed it with basically no support.

Your arguments seem to be poorly grounded and poorly supported; simply stating things as if they were fact does not make them so.

"Humans do not use the computational resources of their brains to the maximum."

Interesting claim. So these ruthlessly evolved brains aren't being used even when our lives and the lives of our progeny are in jeopardy? Odd to evolve all that expensive excess capacity then not use it.

"Critical Rationalism, then, says AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have. It will be able to become smarter through learning but only in the same way that humans are able to become smarter"

Ok, here's a challenge. We both set up a chess AI but I get to use the hardware that was recently used to run AlphaZero while you only get to use a 486. We both get to use the same source code. Standard tournament chess rules with time limits.

You seem to be mentally modeling all potential AI as basically just a baby based on literally... nothing whatsoever.

Your TCS link seems to be fluff and buzzwords irrelevant to AI.

"Some reading this will object because CR and TCS are not formal enough — there is not enough maths"

That's an overly charitable way of putting it. Backing up none of your claims then building a gigantic edifice of argument on thin air is not great for formal support of something.

"Not yet being able to formalize this knowledge does not reflect on its truth or rigor."

"We have no problem with ideas about the probabilities of events but it is a mistake to assign probabilities to ideas. The reason is that you have no way to know how or if an idea will be refuted in the future. Assigning a probability is to falsely claim some knowledge about that. Furthermore, an idea that is in fact false can have no objective prior probability of being true. The extent to which Bayesian systems work at all is dependent on the extent to which they deal with the objective probability of events (e.g., AlphaGo). In CR, the status of ideas is either "currently not problematic" or "currently problematic", there are no probabilities of ideas. CR is a digital epistemology. "

The space of potentially true things that are actually completely false is infinite. If you just pick ideas out of the air and don't bother with testing them and showing them to be correct, you provide about as much useful insight to those around you as the average screaming madman on the street corner preaching that the Robot Lizardmen are working with the CIA to put radio transmitters in his teeth to hide the truth about 9/11.

Proving your claims to actually be true or to have some meaningful chance of being true matters.

Comment author: Subsumed 07 December 2017 09:03:19AM 0 points [-]

Has a dog that learns to open a box to get access to a food item not created knowledge according to this definition? What about a human child that has learned the same?

Comment author: Fallibilist 07 December 2017 12:17:43AM 0 points [-]

and btw., it's nice to postulate that "AI cannot recursively improve itself" while many research and applied narrow AIs are actually doing it right at this moment (though probably not "consciously")

Please quote me accurately. What I wrote was:

AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have

I am not against the idea that an AI can become smarter by learning how to become smarter and recursing on that. But that cannot lead to more knowledge creation potential than humans already have.

Comment author: Fallibilist 06 December 2017 11:41:49PM *  0 points [-]

In CR, knowledge is information which solves a problem. CR criticizes the justified-true-belief idea of knowledge. Knowledge cannot be justified, or shown to be certain, but this doesn't matter: if it solves a problem, it is useful. Justification is problematic because it is ultimately authoritarian. It requires that you have some base which itself cannot be justified except by an appeal to authority, such as the authority of the senses or the authority of self-evidence, or suchlike. We cannot be certain of knowledge because we cannot say whether an error will be exposed in the future. This view is contrary to most people's intuition, and for this reason they can easily misunderstand the CR view, which commonly happens.

CR accepts something as knowledge which solves a problem if it has no known criticisms. Such knowledge is currently unproblematic but may become so in the future if an error is found.

Critical rationalists are fallibilists: they don't look for justification, they try to find error, and they accept anything they cannot find an error in. Fallibilists, then, expose their knowledge to tough criticism. Contrary to popular opinion, they are not wishy-washy, hedging, or uncertain. They often have strong opinions.

Comment author: turchin 06 December 2017 11:02:17PM 0 points [-]

It will kill humanity not because it will be annoyed, but for two main goals: its own safety, or to use human atoms. Other variants are also possible; I explored them here: http://lesswrong.com/lw/mgf/a_map_agi_failures_modes_and_levels/

Comment author: Subsumed 06 December 2017 10:25:40PM 0 points [-]

How is knowledge defined in CR?

Comment author: Fallibilist 06 December 2017 09:55:12PM 0 points [-]

Note the "There is no such thing as a partially universal knowledge creator.". That means an entity either is a UKC or it has no ability, or approximately zero ability, to create knowledge. Dogs are in the latter bucket.

Comment author: tukabel 06 December 2017 06:56:06PM 1 point [-]

after the first few lines I wanted to comment that seeing almost religious fervor in combination with self-named CRITICAL anything reminds me of all sorts of "critical theorists", also quite "religiously" inflamed... but I waited till the end, and got a nice confirmation by that "AI rights" line... looking forward to seeing happy paperclip maximizers pursuing their happiness, which is their holy right (and subsequent #medeletedtoo)

otherwise, no objections to Popper and induction, nor to the suggestion that AGIs will most probably think like we do (and yes, "friendly" AI is not really a rigorous scientific term, rather a journalistic or even "propagandistic" one)

also, it's quite likely that at least in the short-term horizon, humANIMALs are a more serious threat than AIs (a deadly combination of "natural stupidity" and DeepAnimal brain parts - having all those powers given to them by the Memetic Supercivilization of Intelligence, living currently on humanimal substrate, though <1%)

but this "impossibility of uploading" is a tricky thing - who knows what can or cannot be "transferred" and to what extent will this new entity resemble the original one, not talking about subsequent diverging evolution(in any case, this may spell the end of CR if the disciples forbid uploading for themselves... and others will happily upload to this megacheap and gigaperformant universal substrate)

and btw., it's nice to postulate that "AI cannot recursively improve itself" while many research and applied narrow AIs are actually doing it right at this moment (though probably not "consciously")

sorry for my heavily nonrigorous, irrational and nonscientific answers, see you in the uploaded self-improving Brave New World

Comment author: Fallibilist 06 December 2017 06:45:46PM *  0 points [-]

My intent was to summarise the CR view on AI. I've provided links so you can read more.

EDIT: BTW I disagree that I have made "a bunch of assertions". I have provided arguments, for example, about induction. I suspect, also, that you think observation - or evidence - comes first and I have argued against that.

Comment author: Fallibilist 06 December 2017 06:43:28PM 0 points [-]

I am summarizing a view shared by other Critical Rationalists, including Deutsch. Do you think they are confused too?

Comment author: korin43 06 December 2017 06:37:45PM 0 points [-]

The purpose of this post is not to argue these claims in depth but to summarize the Critical Rationalist view on AI and also how this speaks to things like the Friendly AI Problem.

Unfortunately that makes this post not very useful. It's definitely interesting, but you're just making a bunch of assertions with very little evidence (mostly just that smart people like Ayn Rand and a quantum physicist agree with you).

Comment author: jmh 06 December 2017 06:34:16PM *  1 point [-]

"Critical Rationalism says that an entity is either a universal knowledge creator or it is not. There is no such thing as a partially universal knowledge creator. So animals such as dogs are not universal knowledge creators — they have no ability whatsoever to create knowledge"

From what you say prior to the quoted bit, I don't even know why one needs to say anything about dogs. The "either a universal knowledge creator (UKC) or not" claim is largely (or should this be binary as well?) a tautological statement. It's not clear that you could prove dogs are or are not in either of the two buckets. The status of dogs with regard to UKC certainly doesn't follow from the binary claim statement.

Perhaps this is a false premise embedded in your thinking that helps you get to (I didn't read to the end) some conclusion about how an AI must also be a universal knowledge creator, so on par with humans (in your/the CR assessment), so humans must respect the AI as enjoying the same rights as a human.

Comment author: Fallibilist 06 December 2017 06:30:57PM 1 point [-]

Have added in some sub-headings - if that helps.

Comment author: jmh 06 December 2017 06:18:26PM 0 points [-]

Not sure I have anything to add to the question, but I do find myself having to ask why the general presumption so often seems to be that an AI gets annoyed at stupid people and kills humanity.

It's true that we can think of situations where that might be possible, and maybe even a predictable AI response, but I just wonder if such settings are all that probable.

Has anyone ever sat down and tried to list out the situations where an AI would have some incentive to kill off humanity, and then assess how reasonable it is to think such situations likely?

Comment author: IlyaShpitser 06 December 2017 06:05:05PM *  4 points [-]

You are really confused about statistics and learning, and possibly also about formal languages in theoretical CS. I neither want nor have time to get into this with you, just wanted to point this out for your potential benefit.

Comment author: Dustin 06 December 2017 05:47:15PM 0 points [-]

Quick feedback:

Something about the text formatting, paragraph density, and paragraph size uniformity makes this difficult to read.

Comment author: IlyaShpitser 06 December 2017 05:06:54PM *  0 points [-]

http://callingbullshit.org/syllabus.html

(This is not "Yudkowskian Rationality"<tm> though.)
