Here is my summary of Scott Aaronson's post and some related thoughts.
Scott instrumentalizes Chalmers' vague Hard Problem of Consciousness:
the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colours and tastes
into something concrete and measurable, which he dubs the Pretty-Hard Problem of Consciousness:
a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict
and shows that Tononi's Integrated Information Theory (IIT) fails to solve the latter. He does this by constructing a counterexample with arbitrarily high integrated information (more than a human brain) that does nothing anyone would call conscious. He also notes that building a theory of consciousness around information integration is not a promising approach in general (see the sketch after the quote below):
As humans, we seem to have the intuition that global integration of information is such a powerful property that no “simple” or “mundane” computational process could possibly achieve it. But our intuition is wrong. If it were right, then we wouldn’t have linear-size superconcentrators or LDPC codes.
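To make the superconcentrator/LDPC point concrete, here is a toy Python sketch. This is my own illustration, not Aaronson's actual construction and nothing like a real phi computation; the parameters and wiring are arbitrary choices. It builds a sparse parity-check transformation in which each output bit XORs a few input bits scattered across the whole input, then checks that every balanced bipartition of the inputs severs many dependencies:

```python
import random

# A "mundane" computation in the spirit of LDPC codes: m sparse parity
# checks over n input bits, each check reading k bits chosen at random.
n, m, k = 64, 48, 3   # illustrative sizes, nothing canonical

random.seed(0)
checks = [random.sample(range(n), k) for _ in range(m)]

def encode(x):
    """Compute the m parity bits of input bitstring x."""
    return [sum(x[i] for i in idx) % 2 for idx in checks]

def checks_cut(left_half):
    """Count checks that read inputs from both sides of a bipartition."""
    left = set(left_half)
    return sum(1 for idx in checks
               if any(i in left for i in idx)
               and any(i not in left for i in idx))

x = [random.randint(0, 1) for _ in range(n)]
y = encode(x)  # a perfectly ordinary transformation...

# ...yet every balanced cut we try severs a large fraction of checks,
# i.e. no bipartition leaves the two halves informationally independent.
worst = min(checks_cut(random.sample(range(n), n // 2))
            for _ in range(200))
print(f"fewest checks cut over 200 random balanced bipartitions: {worst}/{m}")
```

Nothing here is remotely brain-like; the "integration" is just a generic property of sparse, spread-out wiring, which is the intuition behind the quote.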
Scott is very good at instrumentalizing vague ideas (what lukeprog calls "hacking away at the edges"). He did the same for the notion of "free will" in his paper The Ghost in the Quantum Turing Machine. His previous blog entry, "The NEW Ten Most Annoying Questions in Quantum Computing", lists some of the "edges" to hack at when thinking about the "deep" and "hard" problems of quantum computing. This approach has been very successful in the past:
of the nine questions, six have by now been completely settled
after 8 years of work.
I hope that there are people at MIRI who are similarly good at instrumentalizing big ideas into interesting yet solvable questions.
Tononi gives a very interesting (weird?) reply, "Why Scott should stare at a blank wall and reconsider (or, the conscious grid)", in which he accepts the very unintuitive conclusion that an empty square grid is conscious according to his theory. (Scott's phrasing: "[Tononi] doesn’t “bite the bullet” so much as devour a bullet hoagie with mustard.") Here is Scott's reply to the reply:
Here's one particularly weird consequence of IIT: a zeroed-out system has the same degree of consciousness as a dynamic one, because integrated information is a structural measure of the system. For example, a physical, memristor-based neural net has the same degree of integrated information when it's unplugged. Or, to chase a more absurd-seeming conclusion: human consciousness is not reduced immediately upon death (assuming no brain damage), but instead decreases slowly as the cellular arrangement decays.
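A minimal sketch of that state-independence point (again my own toy score, not IIT's actual phi, and the 3-node wiring is an arbitrary choice): an integration measure computed from the network's transition behavior alone. The current state, whether active or zeroed out, never enters the computation, so an unplugged copy of the same wiring scores identically:

```python
import itertools

def step(state):
    """One update of an arbitrary illustrative 3-node Boolean network."""
    a, b, c = state
    return (b ^ c, a & c, a | b)

NODES = range(3)
STATES = list(itertools.product((0, 1), repeat=3))

def prediction_loss(part):
    """Fraction of `part` substates whose successor is ambiguous once
    the rest of the network is ignored."""
    outcomes = {}
    for s in STATES:
        key = tuple(s[i] for i in part)
        nxt = tuple(step(s)[i] for i in part)
        outcomes.setdefault(key, set()).add(nxt)
    return sum(len(v) > 1 for v in outcomes.values()) / len(outcomes)

def toy_phi():
    """Worst-case loss over all bipartitions (for 3 nodes, every
    bipartition is one node vs. the other two). A high score means no
    cut leaves both parts self-predicting, i.e. the wiring is
    'integrated'. Only the wiring (via `step`) appears here -- never
    the network's current state."""
    score = 1.0
    for part in itertools.combinations(NODES, 1):
        rest = tuple(i for i in NODES if i not in part)
        score = min(score, max(prediction_loss(part),
                               prediction_loss(rest)))
    return score

print(toy_phi())  # identical whether the net is running or unplugged
```

This is of course a caricature of phi, but it shows why any wiring-only measure assigns the same score to the unplugged memristor net.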
Given that, I agree with Scott: while interesting, IIT doesn't track particularly well with 'consciousness' in the conceptual sense.
Tononi's use of philosophical zombies does not dissuade me.
The idea that devices transforming input data with low-density parity checks could have more phi than humans concerns me slightly more. If it is a valid complaint, then I believe it's probably an issue with the formalism, not with the concept.
I need to read further.
I didn't downvote, but I'm guessing it's because you stated your opinions about the post without giving reasons for believing in those opinions.
I didn't think I had to cite my sources on philosophical zombies; we're on LessWrong.
And the downvoting continues. Would the individual in question say something?
There was also this:
If it is a valid complaint, then I believe it's probably an issue with the formalism, not with the concept.
What's wrong with that? I'd say it's a prevalent problem when trying to formalize complicated concepts.
Like I said in my original comment, it's stating your opinion without giving any reason to believe in that opinion. If you don't say why you believe that it's an issue with the formalism rather than the concept, you're adding more noise than information. Facts are better than opinions.
Take that, Aumann!
His answer: "Au, Mann!" ("Au" means "ouch" in German, his mother tongue; the pun works phonetically as "aw, man"). Aw man, bad puns are my personal demon. Amen to that being a bad case of nomen est omen.
Aumann must be rolling in his grave, disagreeing with all the misuses of his agreement theorem as though it applied in everyday social contexts. (A figure of speech, since he's still alive.)
ETA: Au-mann puns, the poor man's gold!
Scott Aaronson, complexity theory researcher, disputes Tononi's theory of consciousness, which holds that a physical system is conscious if and only if it has a high value of "integrated information". Link:
http://www.scottaaronson.com/blog/?p=1799