Comment author: RichardKennaway 18 November 2014 02:03:58PM *  6 points [-]

The published version can be found here. Link rot protection: the link is to the Journal of Cognitive Science, vol.12, issue 4, 2011. That issue and some subsequent ones contain responses to Chalmers and Chalmers' response to the responses.

I'm having difficulty seeing how the paper says more than simply that the mind is a physical process. According to his definitions, all physical processes implement computations, and it is not clear why the mind specifically should be described in those terms any more than the rest of the world. But perhaps mental physicalism still needed to be expounded, especially in 1993 when the paper was written. The last 20 years of neuropsychology, though, take that as a given, just as molecular biology takes for granted that living things can be explained in terms of being built from atoms.

Comment author: john_ku 19 November 2014 05:35:53PM 1 point [-]

I'm looking forward to checking out the responses you linked to.

One implication of the paper that I found interesting is that not every physical process implements every computation or even every computation of a comparable finite size. Thus, I find Chalmers' paper to be the most satisfactory response I've come across to Greg Egan's Dust Theory, previously discussed on lw here. (As others have anticipated though, you do need to grant a coherent and not-too-liberal notion of reliable causation, but we seem to have ample evidence for that.)

For many scientific interests, I agree that it may not be necessary to describe or conceive of the mind in these computational terms. But if one is engaged in a grand reductionist project comparable to reducing neuropsychology to molecular biology to atomic theory, then, well, it helps to have the equivalent of a precise atomic theory to reduce to. For the purposes of my philosophical research, I'm reducing metaethics to facts about the cognitive architecture of our decision algorithms, which in turn are reduced to certain kinds of instantiated computations, which are reduced a la Chalmers to physical processes, which I take to be modelled by Pearl-style causal models, allowing us to be otherwise agnostic about the level of explanation.
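As a rough illustration of the Pearl-style causal models mentioned above: a structural causal model specifies each variable as a function of its parents, and an intervention (Pearl's do operator) replaces a variable's mechanism with a fixed value. The variables and mechanisms below are invented purely for illustration, not taken from anyone's research.

```python
# A minimal sketch of a structural causal model, assuming the standard
# Pearl picture: each variable is computed from its parents, and do()
# overrides a mechanism with a constant.

def evaluate(mechanisms, interventions=None):
    """Evaluate an SCM given as {var: (parents, function)}, listed in
    topological order; interventions override mechanisms (Pearl's do())."""
    interventions = interventions or {}
    values = {}
    for var, (parents, fn) in mechanisms.items():
        if var in interventions:
            values[var] = interventions[var]
        else:
            values[var] = fn(*(values[p] for p in parents))
    return values

# Tiny illustrative chain: U -> X -> Y
scm = {
    "U": ((), lambda: 1),
    "X": (("U",), lambda u: u + 1),
    "Y": (("X",), lambda x: 2 * x),
}
print(evaluate(scm))             # {'U': 1, 'X': 2, 'Y': 4}
print(evaluate(scm, {"X": 10}))  # do(X=10): {'U': 1, 'X': 10, 'Y': 20}
```

The point of the intervention operation is that it breaks the dependence of X on its parents while leaving every other mechanism intact, which is what lets such models stay agnostic about lower levels of explanation.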

Comment author: Gunnar_Zarncke 18 November 2014 11:01:37AM 1 point [-]

You are welcome! And Don't Be Afraid of Asking Personally Important Questions of Less Wrong.

I am especially hoping to receive any information that may help out with some confusing memories I have.

I understand that you might not want to give details but I'm unclear what information I might provide. Maybe you could drop a few hints. You might also look at the Baseline of my opinion on LW topics.

Comment author: john_ku 18 November 2014 01:09:31PM 0 points [-]

You're right that I was being intentionally vague. For what it's worth, I was trying to drop some hints targeted at some who might be particularly helpful. If you didn't notice them, I wouldn't worry about it. This is especially true if we haven't met in person and you don't know much about me or my situation.

[Link] Chalmers on Computation: A first step From Physics to Metaethics?

0 john_ku 18 November 2014 10:39AM

A Computational Foundation for the Study of Cognition by David Chalmers

Abstract from the paper:

Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions.

Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science.
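The mirroring condition in the abstract can be sketched concretely for the simplest case, a finite-state automaton: a physical system implements the FSA if there is some mapping from physical states to formal states under which physical transitions track formal transitions. The code below is a toy rendering of that idea, not Chalmers' own formalism (which handles combinatorial-state automata and inputs/outputs); all names are illustrative.

```python
# Hedged sketch: a brute-force check of a toy implementation condition.
# A physical system (a deterministic transition function on physical
# states) implements an FSA if some mapping f from physical states to
# formal states makes the diagram commute:
#     f(phys_step(p)) == formal_step(f(p))   for every physical state p.

from itertools import product

def implements(phys_states, phys_step, formal_states, formal_step):
    """Search for a state mapping under which the physical transition
    structure mirrors the formal one; return it, or None if none exists."""
    for assignment in product(formal_states, repeat=len(phys_states)):
        f = dict(zip(phys_states, assignment))
        if all(f[phys_step[p]] == formal_step[f[p]] for p in phys_states):
            return f
    return None

# A 4-state physical cycle implements a 2-state flip-flop...
phys = ["p0", "p1", "p2", "p3"]
phys_step = {"p0": "p1", "p1": "p2", "p2": "p3", "p3": "p0"}
flip = {"a": "b", "b": "a"}
print(implements(phys, phys_step, ["a", "b"], flip) is not None)  # True

# ...but a 1-state system cannot, illustrating the claim discussed in
# the comments that not every physical process implements every
# computation of comparable size.
print(implements(["q"], {"q": "q"}, ["a", "b"], flip) is not None)  # False
```

The negative case is what does the anti-Dust-Theory work: requiring the mapping to respect causal transition structure, rather than merely relabeling states after the fact, rules out trivial implementations.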

See my welcome thread submission for a brief description of how I conceive of this as the first step towards formalizing friendliness.

Comment author: john_ku 17 November 2014 09:19:59PM *  4 points [-]

Hi everyone!

I'm John Ku. I've been lurking on lesswrong since its beginning. I've also been following MIRI since around 2006 and attended the first CFAR mini-camp.

I became very interested in traditional rationality when I used analytic philosophy to think my way out of a very religious upbringing in what many would consider to be a cult. After I became an atheist, I set about rebuilding my worldview and focusing especially on metaethics to figure out what remains of ethics without God.

This process landed me in University of Michigan's Philosophy PhD program, during which time I read Kurzweil's The Singularity is Near. This struck me as very important and I quickly followed a chain of references and searches to discover what was to become MIRI and the lesswrong community. Partly due to lesswrong's influence, I dropped out of my PhD program to become a programmer and entrepreneur and I now live in Berkeley and work as CTO of an organic growth startup.

I have, however, continued my philosophical research in my spare time, focusing largely on metaethics, psychosemantics and metaphilosophy. I believe I have worked out a decent initial overview of how to formalize a friendly utility function. The major pieces include:

  • adapting David Chalmers' theory of when a physical system instantiates a computation,
  • formalizing a version of Daniel Dennett's intentional stance to determine when and which decision algorithm is implemented by a computation, and
  • modelling how we decide how to value by positing (possibly rather thin and homuncular) higher order decision algorithms, which according to my metaethics is what ethical facts get reduced to.

Since I think much of philosophy boils down to conceptual analysis, and I've also largely worked out how to assign an intensional semantics to a decision algorithm, I think my research also has the resources to meta-philosophically validate that the various philosophical propositions involved are correct. I hope to fill in many remaining details in my research and find a way to communicate them better in the not too distant future.

Compared to others, I think of myself as having been focused more on object-level concerns than more meta-level instrumental rationality improvements. But I would like to thank everyone for their help which I'm sure I've absorbed over time through lesswrong and the community. And if any attempts to help have backfired, I would assume it was due to my own mistakes.

I would also like to ask for any anonymous feedback, which you can submit here. Of course, I would greatly appreciate any non-anonymous feedback as well; an email to ku@johnsku.com would be the preferred method.

Comment author: Jasen 28 October 2010 10:55:48PM 5 points [-]

On a related note, a friend of ours named John Ku has negotiated a donation of 20% stock to SIAI from his company MetaSpring. MetaSpring is a digital marketing consultancy that mostly sells a service of rating the effectiveness of advertising campaigns and they are currently hiring. They are looking for experience with:

  • Ruby on Rails
  • MySQL / SQL
  • web design / user interface
  • JavaScript
  • WordPress
  • PHP
  • web programming in general
  • sales
  • client communication
  • Unix system administration
  • Photoshop / slicing
  • HTML & CSS
  • Drupal

If you're interested, contact John Ku at ku@johnsku.com

Comment author: john_ku 17 November 2014 09:18:33PM 1 point [-]

I apologize for the embarrassing amount of time it has taken to respond to this. This was posted before the negotiations were actually finalized, which took some number of weeks. Then, in a matter of months, I ended up returning all of the equity in exchange for a computer and a waived referral fee. At this point, I assume any further details are a moot point.

Comment author: bryjnar 20 May 2012 11:18:41AM 3 points [-]

I have to say, I think Chalmers' Two-Dimensional Semantics thing is pretty awesome! Possibly presented in an overly complicated fashion, but hey.

As for Putnam, I think his point is stronger than that! He's not just saying that the extension of a term can vary given the state of the world: no shit, there might have been fewer cats in the world, and then the extension of "cat" would be different. He's saying that the very function that picks out the extension might have been different (if the objects we originally ostended as "cats" had been different) in an externalist way. So he's actually being an externalist about intensions too!

Comment author: john_ku 21 May 2012 06:31:20AM 0 points [-]

You're right that Putnam's point is stronger than what I initially made it out to be, but I think my broader point still holds.

I was trying to avoid this complication but with two-dimensional semantics, we can disambiguate further and distinguish between the C-intension and the A-intension (again see the Stanford Encyclopedia of Philosophy article for explanation). What I should have said is that while it makes sense to be externalist about extensions and C-intensions, we can still be internalist about A-intensions.

Comment author: john_ku 20 May 2012 01:30:10AM 1 point [-]

I think many of the other commenters have done an admirable job defending Putnam's usage of thought experiments, so I don't feel a need to address that.

However, there also seems to be some confusion about Putnam's conclusion that "meaning ain't in the head." It seems to me that this confusion can be resolved by disambiguating the meaning of 'meaning'. 'Meaning' can refer to either the extension (i.e. referent) of a concept or its intension (a function from the context and circumstance of a concept's usage to its extension). The extension clearly "ain't in the head" but the intension is.
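The intension/extension distinction described above (and the A- vs. C-intension refinement from the two-dimensional semantics discussion) can be modelled in miniature. The worlds and the "water" example below are the standard textbook illustration, coded up loosely; none of this is meant as a serious semantic theory.

```python
# Hedged toy model: an intension is a function from context/circumstance
# to extension. In the two-dimensional refinement, the A-intension varies
# with which world is considered as actual, while the C-intension holds
# the actual world fixed and is rigid across counterfactual worlds.

# Each world specifies which chemical kind plays the "watery stuff" role.
worlds = {"actual": "H2O", "twin_earth": "XYZ"}

def a_intension(world_as_actual):
    """A-intension of 'water': considering this world as actual, refer to
    whatever fills the watery-stuff role there. This is the component one
    can be internalist about."""
    return worlds[world_as_actual]

def c_intension(actual_world):
    """C-intension of 'water': with the actual world fixed, rigidly refer
    to the actual watery stuff in every counterfactual world."""
    referent = worlds[actual_world]
    return lambda counterfactual_world: referent

print(a_intension("actual"))      # 'H2O'
print(a_intension("twin_earth"))  # 'XYZ' -- varies with context
water_cf = c_intension("actual")
print(water_cf("twin_earth"))     # 'H2O' -- rigid across worlds
```

On this toy picture, Putnam's "meaning ain't in the head" is true of extensions and C-intensions (they depend on which world is actual), while the A-intension, being a function the speaker grasps, stays "in the head."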

The Stanford Encyclopedia of Philosophy article on Two-Dimensional Semantics has a good explanation of my usage of the terms 'intension' and 'extension'. Incidentally, as someone with a lot of background in academic philosophy, I think making two-dimensional semantics a part of LessWrong's common background knowledge would greatly improve the level of philosophical discussion here as well as reduce the inferential distance between LessWrong and academic philosophers.

Comment author: john_ku 05 May 2012 12:48:46PM *  18 points [-]

If the difficulty of a physiological problem is mathematical in essence, ten physiologists ignorant of mathematics will get precisely as far as one physiologist ignorant of mathematics and no further.

Norbert Wiener