Scott_Aaronson comments on How An Algorithm Feels From Inside - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
While "reifying the internal nodes" must indeed be counted as one of the great design flaws of the human brain, I think the recognition of this flaw and the attempt to fight it are as old as history. How many jokes, folk sayings, literary quotations, etc. are based around this one flaw? "in name only," "looks like a duck, quacks like a duck," "by their fruits shall ye know them," "a rose by any other name"... Of course, there wouldn't be all these sayings if people didn't keep confusing labels with observable attributes in the first place -- but don't the sayings suggest that recognizing this bug in oneself or others doesn't require any neural-level understanding of cognition?
Exactly. People merely need to keep in mind that words are not the concepts they represent. This is certainly not impossible, but, like all aspects of being rational, it's harder than it sounds.
I think it goes beyond words.
Reality does not consist of concepts, reality is simply reality. Concepts are how we describe reality. They are like words squared, and have all the same problems as words.
Looking back from a year later, I should have said, "Words are not the experiences they represent."
As for "reality," well, that's just a name I give to a certain set of sensations I experience. I don't even know what "concepts" are anymore - probably just a general name for a bunch of different things, so not that useful at this level of analysis.