Comment author: ahbwramc 01 May 2015 03:37:31PM 3 points [-]

I think we might be working with different definitions of the term "causal structure"? The way I see it, what matters for whether or not two things have the same causal structure is counterfactual dependency - if neuron A hadn't fired, then neuron B would have fired. And we all agree that in a perfect simulation this kind of dependency is preserved. So yes, neurons and transistors have different lower-level causal behaviour, but I wouldn't call that a different causal structure as long as they both implement a system that behaves the same under different counterfactuals. That's what I think is wrong with your GIF example, btw - there's no counterfactual dependency whatsoever. If I deleted a particular pixel from one frame of the animation, the next frame wouldn't change at all. Of course there was the proper dependency when the GIF was originally computed, and I would certainly say that that computation, however it was implemented, was conscious. But not the GIF itself, no.
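
To make that concrete, here's a toy sketch in Python (everything here is made up purely for illustration) contrasting a live simulation, where deleting a unit's firing changes everything downstream, with a replayed recording, where it changes nothing:

    # Toy contrast between a live simulation (counterfactual dependency)
    # and a replayed recording (none). Hypothetical illustration only.
    def step(state, weights):
        # Unit j fires next tick if its weighted input exceeds 0.5.
        n = len(state)
        return [int(sum(weights[i][j] * state[i] for i in range(n)) > 0.5)
                for j in range(n)]

    weights = [[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]]        # a simple ring: 0 -> 1 -> 2 -> 0

    # Live simulation: delete unit 0's initial firing and the future changes.
    run_a, run_b = [[1, 0, 0]], [[0, 0, 0]]
    for _ in range(3):
        run_a.append(step(run_a[-1], weights))
        run_b.append(step(run_b[-1], weights))
    print(run_a[-1] != run_b[-1])    # True: later frames depend on the edit

    # Replayed recording ("GIF"): frames are fixed in advance, so editing
    # one frame leaves every later frame exactly as it was.
    recording = list(run_a)
    recording[1] = [0, 0, 0]         # delete a 'pixel' from one frame
    print(recording[2] == run_a[2])  # True: the next frame didn't change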

Anyway, beyond that, we're obviously working from very different intuitions, because I don't see the China Brain or Turing machine examples as reductios at all - I'm perfectly willing to accept that those entities would be conscious.

Comment author: Kyre 02 May 2015 02:53:28PM 0 points [-]

Thank you, you saved me a lot of typing. No amount of straight copying of that GIF will generate a conscious experience; but if you print out the first frame and give it to a person with a set of rules for simulating neural behaviour and tell them to calculate the subsequent frames into a gigantic paper notebook, that might generate consciousness.

Comment author: brainmaps 30 April 2015 04:42:21PM *  6 points [-]

Shawn Mikula here. Allow me to clear up the confusion that appears to have been caused by my being quoted out of context. In the part of my answer preceding the quoted text, I clearly state the following:

"2) assuming you can run accurate simulations of the mind based on these structural maps, are they conscious?".

So this is not a question of misunderstanding universal computation, or of whether a computer simulation can mimic, for practical purposes, the computations of the brain. I am already assuming the computer simulation is mimicking the brain's activity and computations. My point is that a computer works very differently from a brain, which is evident in the differences in their underlying causal structure. In other words, the coordinated activity of the binary logic gates underlying the computer running the simulation has a vastly different causal structure than the coordinated activity and massive parallelism of neurons in a brain.

The confusion appears to result from the fact that I'm not talking about the pseudo-causal structure of the modeling units comprising the simulation, but rather the causal structure of the underlying physical basis of the computer running the simulation.

Anyway, I hope this helps.

Comment author: Kyre 01 May 2015 09:21:36AM 2 points [-]

Thanks for replying! Sorry if the bit I quoted was too short and over-simplified.

That does clarify things, although I'm having difficulty understanding what you mean by the phrase "causal structure". I take it you do not mean the physical shape or substance, because you say that a different computer architecture could potentially have the right causal structure.

And I take it you don't mean the cause and effect relationship between parts of the computer that are representing parts of the brain, because I think that can be put into one-to-one correspondence with the cause and effect relationship of the things being represented.

For example, if neuron N1 causes changes to neurons N2, N3 and N4, and I have a simulated S1 causing changes to simulated S2, S3 and S4, then that simulated cause and effect happens by honest-to-god physical cause and effect: voltage levels in the memory gates representing S1 propagate through the architecture to the gates representing S2, S3, S4, causing them to change.
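
In code, the correspondence I have in mind is nothing deeper than the same causal graph under a renaming (a toy sketch, all names made up):

    # Toy sketch: the cause-effect graph among neurons and the one among
    # the simulator's state variables match under a one-to-one renaming.
    neural_edges = {"N1": {"N2", "N3", "N4"}}      # N1 drives N2..N4
    simulated_edges = {"S1": {"S2", "S3", "S4"}}   # S1's gates drive S2..S4's

    rename = {"N1": "S1", "N2": "S2", "N3": "S3", "N4": "S4"}
    mapped = {rename[src]: {rename[dst] for dst in dsts}
              for src, dsts in neural_edges.items()}
    print(mapped == simulated_edges)  # True: identical causal structure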

Using a different computer architecture may avert this problem ...

So consciousness would then have to be something that flesh brains and "correctly causally structured" computer hardware have in common, but which is not shared by a simulation of either of those things running on a conventional computer?

Comment author: Kyre 29 April 2015 05:32:47AM 3 points [-]

That is very interesting; there does seem to be quite rapid progress in this area.

From the blog entry:

... the reason for this is because simulating the neural activity on a Von Neumann (or related computer) architecture does not reproduce the causal structure of neural interactions in wetware. Using a different computer architecture may avert this problem ...

Can anyone explain what that means? I can't see how it can be correct.

In response to Self-verification
Comment author: Kyre 21 April 2015 04:35:40AM 1 point [-]

Tattoo private key on inside of thigh.

Comment author: Houshalter 15 April 2015 04:33:35AM 1 point [-]

I totally agree with you that AIs should be able to learn what humans mean by different concepts. I never really understood that objection. I think the problem is a bit deeper. This sentence right here:

the AI being told whether or not some behavior is good or bad and then constructing a corresponding world-model based on that.

What's to stop the AI from instead learning that "good" and "bad" are just subjective mental states or words from the programmer, rather than some deep natural category of the universe? So instead of doing things it thinks the human programmer would call "good", it just tortures the programmer and forces them to say "good" repeatedly.

The AI understands what you mean, it just doesn't care.

Comment author: Kyre 15 April 2015 05:09:18AM 4 points [-]

What's to stop the AI from instead learning that "good" and "bad" are just subjective mental states or words from the programmer, rather than some deep natural category of the universe? So instead of doing things it thinks the human programmer would call "good", it just tortures the programmer and forces them to say "good" repeatedly.

The pictures and videos of torture in the training set that are labelled "bad".

It is not perfect, but I think the idea is that, with a large and diverse training set, alternative models of "good/bad" become extremely contrived, and the human one you are aiming for becomes the simplest model.
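
A toy version of what I mean, with made-up features, labels and size penalties (just a sketch of the idea, not a serious proposal):

    # With a diverse enough training set, only contrived models of
    # "good"/"bad" survive alongside the intended simple one, and a
    # simplicity prior then picks the intended one. Hypothetical example.
    data = [
        ({"helping", "smiling"}, 1),       # labelled "good"
        ({"torture", "screaming"}, 0),     # labelled "bad"
        ({"torture", "saying_good"}, 0),   # torture stays "bad" even if
                                           # the victim is made to say "good"
        ({"helping"}, 1),
    ]

    # Candidate models with a crude description-length penalty.
    hypotheses = {
        "no torture involved":  (lambda f: "torture" not in f, 1),
        "someone says 'good'":  (lambda f: "saying_good" in f or
                                           "helping" in f,      2),
    }

    consistent = {name: size for name, (h, size) in hypotheses.items()
                  if all(h(f) == label for f, label in data)}
    print(min(consistent, key=consistent.get))  # -> "no torture involved"

The third training example is what rules out the "forces them to say good" model: it fits the first two examples but contradicts the label on the third.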

I found the material in the post very interesting. It holds out hope that, after training, your world model might not be as opaque as people fear.

Comment author: Kyre 25 March 2015 05:52:53AM 1 point [-]

I'm not sure succeeding at number 4 helps you with the unattractiveness and discomfort of number 3.

Say you do find some alternative steel-manned position on truth that is comfortable and intellectually satisfying. What are the odds that this position will be the same as the one held by "most humans", or that understanding it will help you get along with them?

Regardless of the concept of truth you arrive at, you're still faced with the challenge of having to interact with people who have not-well-thought-out concepts of truth in a way that is polite, ethical, and (ideally) subtly helpful.

Comment author: [deleted] 26 January 2015 03:17:33AM *  3 points [-]

Was going to try and provide an answer but lukeprog's link is more informative than what I would have written :P

One factor, though, that seems to me worth considering:

Imagine that your goal of people being able to back themselves up never actually happens - either because of technical infeasibility, or because we go extinct before figuring out how, or because static structural information is simply insufficient to derive psychological phenomena. (That last option might be a minority position on LW, but I'd venture to say it is not a minority position among neuroscientists and psychologists.)

If that WERE the case, then money and time donated to medical imaging are likely to STILL have enormous positive benefits, in terms of advancing basic science and the diagnosis/treatment of mental illness. By contrast, if your end goal turns out to be infeasible, money and time spent on cryonics and plastination will turn out to have been largely wasted.

Or at the very least, it is much easier to imagine the former having a large societal benefit for reasons other than life extension, while the "side-effect" discoveries of brain plastination would not have the same obvious public benefit (even if those benefits were non-zero).

So it seems like if you're not sure which is the best approach and want to be fairly certain you're not wasting your time/money, it's better to dump resources into medical imaging, which would be justified by the spillover effects alone.

Comment author: Kyre 26 January 2015 08:32:06AM 2 points [-]

I thought CLARITY was an interesting development - a brain preservation technique that renders tissue transparent. I imagine in the near future there are likely to be benefits going both ways between preservation and imaging research.

Comment author: Vika 09 January 2015 02:50:52AM 3 points [-]

Do you know of any examples, fictional or real, of a male sidekick to a female hero?

Comment author: Kyre 09 January 2015 05:07:42AM 5 points [-]

Buffy / Xander, Motoko / Batu, Deunan / Briareos

(although I'm not sure "Sidekick" is exactly right here)

Comment author: Eniac 09 December 2014 01:10:30AM 11 points [-]

I think you have something there. You could design a complex, but at least metastable orbit for an asteroid sized object that, in each period, would fly by both Earth and, say, Jupiter. Because it is metastable, only very small course corrections would be necessary to keep it going, and it could be arranged such that at every pass Earth gets pushed out just a little bit, and Jupiter pulled in. With the right sized asteroid, it seems feasible that this process could yield the desired results after billions of years.
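
Back-of-envelope, with every parameter below being my own rough assumption rather than a worked-out design:

    # Rough arithmetic for repeated asteroid flybys nudging Earth outward.
    # All parameter values are assumptions for illustration.
    M_EARTH = 5.97e24    # kg
    V_EARTH = 29.8e3     # m/s, circular orbital speed at 1 AU

    m_ast = 1e19         # kg, roughly a 100-km-class asteroid (assumed)
    v_rel = 10e3         # m/s, flyby speed relative to Earth (assumed)
    yrs_per_pass = 6000  # one Earth encounter per orbit period (assumed)

    # A close flyby can roughly reverse the asteroid's relative velocity,
    # handing Earth a momentum kick of up to ~2 * m * v_rel per pass.
    dv_per_pass = 2 * m_ast * v_rel / M_EARTH       # ~3e-2 m/s

    # A slow spiral from 1 AU out to ~1.5 AU costs about the difference
    # in circular orbital speeds: v * (1 - 1/sqrt(1.5)).
    dv_total = V_EARTH * (1 - 1.5 ** -0.5)          # ~5.5e3 m/s

    passes = dv_total / dv_per_pass
    print(f"{passes:.1e} passes, ~{passes * yrs_per_pass / 1e9:.1f} Gyr")
    # -> about 1.6e+05 passes over ~1.0 Gyr, so billions of years is
    #    indeed the right timescale under these assumptions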

Comment author: Kyre 09 December 2014 05:13:05AM 10 points [-]

Comment author: Document 23 November 2014 09:47:54PM 1 point [-]

The last quote isn't from Yudkowsky.

Comment author: Kyre 23 November 2014 11:13:03PM 1 point [-]

Ah, my mistake, thanks again.
