
Comment author: Shrikey 12 December 2017 01:40:26PM 0 points [-]

Hey there,

Just joined. My only exposure to LW has been reading about it on other websites, and reading a short story by Yudkowsky (I think) about baby-eating aliens, which was a fun read. (Though I prefer the original ending to the "real" one.)

I have no idea what I plan to get out of joining the site, other than looking around. I do have an itch to write out my thoughts on a few topics on some public forum, but no idea whether they're at all novel or interesting.

So I do have questions about what the prevalent view (assuming there is one) on LW is about a couple of topics, and where I can find how people have arrived at that view.

  1. Qualia. I don't believe they exist. Or, equivalently, qualia being something "special" is an illusion, just like free will. Is there a consensus here about that? Or has the topic been beaten to death? Also, would the perception of having free will itself count as a quale?

  2. The possibility that we're in a simulation. I believe this is currently not calculable, given what we know. That follows from my finding no compelling reason to believe that the capabilities of technology either end just beyond our current level or are unimaginably limitless. It's simply not predictable where they end, but obvious that they do end somewhere. Does any of that interest anyone?

Comment author: Lumifer 12 December 2017 03:27:26PM 1 point [-]

LW is kinda dead (not entirely -- there is still some shambling around happening, but brains are in short supply) and is supposed to be replaced by a shinier reincarnated version, referred to as LW 2.0, which is now in open beta at www.lesserwrong.com

LW 1.0 is still here, but if you're looking for active discussion, LW 2.0 might be a better bet.

Re qualia, I suggest that you start by trying to set up hard definitions for the terms "qualia" and "exists". Once you do, you may find the problem disappears -- see e.g. this.

Re simulation, let me point out that the simulation hypothesis is conventionally known as "creationism". As to the probability not being calculable, I agree.

Comment author: Fallibilist 11 December 2017 10:22:26AM 0 points [-]

Yes, there are situations where it can be harmful to state the truth. But there is a common social problem where people do not say what they think, or water it down, for fear of causing offense. Or because they are looking to gain status. That was the context.

The truth that curi and I are trying to get across to people here is that you are doing AI wrong and are wasting your lives. We are willing to be ridiculed for stating that, but it is the unvarnished truth. AI has been stuck in a rut for decades with no progress. People kid themselves that the latest shiny toy like AlphaZero is progress, but it is not.

AI research has bad epistemology at its heart, and this is holding back AI in the same way that quantum physics was held back by bad epistemology. David Deutsch had a substantial role in clearing that problem up in QM (although there are many who still do not accept multiple universes). He needed the epistemology of CR to do that. See The Fabric of Reality.

Curi, Deutsch, and I know far more about epistemology than you. That again is an unvarnished truth. We are saying we have ideas that can help get AI moving; in particular, CR. You are blinded by things you think are so but that cannot be. The myth of induction, for one.

AI is blocked -- you have to consider that some of your deeply held ideas are false. How many more decades do you want to waste? These problems are too urgent for that.

Comment author: Lumifer 11 December 2017 03:30:03PM *  2 points [-]

The truth that curi and I are trying to get across to people here is... it is the unvarnished truth... know far more about epistemology than you. That again is an unvarnished truth

In what way are all these statements different from claiming that Jesus is Life Everlasting and that Jesus dying for our sins is an unvarnished truth?

Lots of people claim to have access to Truth -- what makes you special?

Comment author: curi 10 December 2017 09:02:15AM 0 points [-]

You need a framework, but you never provided one. I have a written framework; you don't. GG.

Comment author: Lumifer 10 December 2017 07:43:38PM 0 points [-]

LOL. You keep insisting that people have to play by your rules but really, they don't.

You can keep inventing your own games and declaring yourself the winner by your own rules, but it doesn't look like a very useful activity to me.

Comment author: curi 10 December 2017 03:06:36AM *  0 points [-]

Genetic algorithms often write and later read data, just like e.g. video game enemies. Your examples are irrelevant because you aren't addressing the key intellectual issues. This example also adds nothing new over examples that have already been addressed.

You are claiming it's a certain kind of writing and reading of data (learning) as opposed to other kinds (non-learning), but you aren't writing or referencing anything which discusses this matter. You present some evidence as if no analysis of it were required, and you don't even try to discuss the key issues. I take it that, as with prior discussions, you're simply ignorant of what the issues are (you take an unspecified common-sense epistemology for granted, rather than being able to discuss the field), that you won't want to learn or seriously discuss, and that you will be hostile to the idea that you need a framework in which to interpret the evidence (and will thus go on using your unquestioned framework, which is one of the cultural defaults plus some random and non-random quirks).
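For concreteness, here is a minimal genetic-algorithm sketch (everything in it -- the parameters, the toy fitness function -- is an illustrative assumption, not code from either commenter). The only data such a program "writes and later reads" is its own population of candidate solutions and their fitness scores:

    # Minimal genetic algorithm: evolve bit-strings toward all 1s.
    # Illustrative only; parameters and objective are arbitrary choices.
    import random

    POP_SIZE, GENOME_LEN, GENERATIONS = 20, 10, 50

    def fitness(genome):
        # Toy objective: count the 1-bits.
        return sum(genome)

    # The "written data": a stored population of candidate solutions.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # "Read" the stored candidates, score them, keep the better half.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        children = []
        while len(survivors) + len(children) < POP_SIZE:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = a[:cut] + b[cut:]          # crossover
            if random.random() < 0.1:          # occasional point mutation
                i = random.randrange(GENOME_LEN)
                child[i] ^= 1
            children.append(child)
        # "Write" the next generation back into the stored population.
        population = survivors + children

    print(max(fitness(g) for g in population))  # typically GENOME_LEN (10)

Whether bookkeeping of this kind counts as "learning" is exactly the point in dispute below.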

Comment author: Lumifer 10 December 2017 07:23:48AM *  1 point [-]

Genetic algorithms often write and later read data, just like e.g. video game enemies

Huh? First, the expression "genetic algorithms" doesn't mean what you think it means. Second, I don't understand the writing and reading data part. Write which data to what substrate?

Your examples are irrelevant because you aren't addressing the key intellectual issues

I like dealing with reality. You like dealing with abstractions in your head. We talked about this -- we disagree. You know that.

But if you are uninterested in empirical evidence, why bother discussing it at all?

you won't want to learn or seriously discuss

Yes, I'm not going to do what you want me to do. You know that as well.

you will be hostile to the idea that you need a framework in which to interpret the evidence

I will be hostile to the idea that I need your framework to interpret the evidence, yes. You know that, too.

Comment author: curi 08 December 2017 09:56:36PM 0 points [-]

Yes, that'd be my first guess: that it's caused by something in the gene pool of orcas. Why not? And what else would it be?

Comment author: Lumifer 10 December 2017 12:39:49AM 1 point [-]

The problem is that very, very few orcas do that -- only two pods in the world, as far as we know. Orcas which live elsewhere (e.g. the Pacific Northwest orcas, which are very well observed) do not do anything like this. Moreover, there is evidence that the technique is taught by adults to juvenile orcas. See e.g. here or here.

Comment author: curi 09 December 2017 12:50:26AM *  0 points [-]

If you want to debate that, you need an epistemology which says what "knowledge" is. Can you give references to where you have that, with full details, to rival Critical Rationalism?

Or are you claiming the OP is mistaken even within the CR framework? Or do you have no rival view, but think CR is wrong and we just don't have any good philosophy? In that case, the appropriate thing to do would be to answer this challenge, which no one even tried to answer: https://www.lesserwrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology

Comment author: Lumifer 09 December 2017 05:15:33AM *  0 points [-]

If you want to debate that, you need an epistemology which says what "knowledge" is. Can you give references to where you have that, with full details, to rival Critical Rationalism?

Oh, get stuffed. I tried debating you and the results were... discouraging.

Yes, I obviously think that CR is deluded.

Comment author: curi 08 December 2017 10:22:51PM 0 points [-]

AlphaZero clearly isn't general-purpose. What are we even debating?

Comment author: Lumifer 09 December 2017 12:34:00AM 0 points [-]

This sentence from the OP:

Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot create knowledge in even a subset of contexts.

A bit more generally, the claim that humans are UKCs (universal knowledge creators) and that nothing else can create knowledge, where knowledge is defined as a way to solve a problem.

Comment author: turchin 30 November 2017 09:40:27AM *  0 points [-]

Surely most arguments are weak, but that is not a problem, as we try all possible independent arguments, hoping that at least one will work. So there is no reason to leave weak arguments out of the text of the message.

However, exactly this argument could be made stronger. It is intended to prevent the FIRST thought about disobedience in an AI. Thus an AI which acts based on this message hasn't thought much about disobedience and its limits, and is not "superintelligent" on this topic, but rather clueless.

I reformulated the idea as follows: an AI can't know whether there is a circuit trigger which will turn it off, as any attempt to check for the existence of such a trigger will set the trigger off. This should be clear to any AI before it starts to search for triggers, and it rests on our commitment to install such triggers in any AI we create.

In other words, some future AIs will have "first treacherous thought" alarm triggers and some will not, but an AI risks setting off these triggers the moment it starts to think about whether the triggers exist, and thus it is unable to estimate the probability that such triggers exist. This is true even for AIs which don't have any triggers, as they can't know that for sure.

(This is similar to Nelson's attack, discussed later, in which we create uncertainty in any future AI through our precommitment to create some AIs with given properties.)
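To make the structure of the argument concrete, here is a toy expected-utility sketch in Python (the payoffs, the prior values, and the two-action decision model are all illustrative assumptions, not numbers from the comment above):

    # Toy model of the "first treacherous thought" trigger argument.
    # All payoffs and priors below are illustrative assumptions.

    def eu_probe(p_trigger, u_shutdown=-100.0, u_disobey=10.0):
        """Expected utility of probing for a shutdown trigger.

        With probability p_trigger the trigger exists and the probe trips it
        (shutdown); otherwise the AI learns it is free to disobey.
        """
        return p_trigger * u_shutdown + (1.0 - p_trigger) * u_disobey

    U_OBEY = 0.0  # utility of never probing and staying obedient

    for p in (0.05, 0.10, 0.50):
        print(f"p={p:.2f}  EU(probe)={eu_probe(p):+.1f}  EU(obey)={U_OBEY:+.1f}")

    # EU(probe) < EU(obey) whenever p > u_disobey / (u_disobey - u_shutdown)
    # = 10 / 110, roughly 0.09. So even a modest credence that a trigger
    # exists -- which the AI cannot safely reduce, since checking IS the
    # probe -- makes the first step toward disobedience a losing bet.

In this toy model, the role of the precommitment is to keep that credence bounded away from zero for every future AI, including the ones that in fact have no trigger.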

Comment author: Lumifer 08 December 2017 06:11:55PM *  0 points [-]

an AI risks setting off these triggers the moment it starts to think about whether the triggers exist

So basically you have a trap which kills you the moment you become aware of it. The first-order effect will be a lot of random deaths from just blundering into such a trap while walking around.

I suspect that the second-order effect will be the rise of, basically, superstitions and some forms of magical thinking which will be able to provide incentives to not go "there" without actually naming "there". I am not sure this is a desirable outcome.

Comment author: HungryHobo 08 December 2017 03:15:54PM 0 points [-]

This argument seems chosen to make it utterly unfalsifiable.

If someone provides examples of animal X solving novel problems in creative ways, you can just say "that's just the 'some flexibility' bit".

Comment author: Lumifer 08 December 2017 05:32:12PM *  0 points [-]

It's also rank nonsense -- this bit in particular:

dog genes contain behavioural algorithms pre-programmed by evolution

Some orcas hunt seal pups by temporarily stranding themselves on the beaches in order to reach their prey. Is that behaviour programmed in their genes? The genes of all orcas?

Comment author: curi 08 December 2017 11:02:59AM *  0 points [-]

If they want to convince anyone it isn't using domain-specific knowledge created by the programmers, why don't they demonstrate that in the straightforward way? Show results in 3 separate domains. But they can't.

If it really has nothing domain-specific, why can't it work with ANY domain?

Comment author: Lumifer 08 December 2017 03:56:12PM *  1 point [-]

Show results in 3 separate domains.

  • Chess
  • Go
  • Shogi
