Peterdjones comments on Cult impressions of Less Wrong/Singularity Institute - Less Wrong

29 Post author: John_Maxwell_IV 15 March 2012 12:41AM




Comment author: Peterdjones 10 January 2013 01:09:45PM *  0 points

As for the is-ought problem, if we accept that "ought" is just a matter of calculations in our brain returning an output

But we shouldn't accept that, because we can miscalculate an "ought" or anything else. The is-ought problem is the problem of correctly inferring an ought from a tractable amount of "is's".

(and reject that it's a matter of e.g. our brain receiving supernatural instruction from some non-physical soul), then the "ought" is describable in terms of the world-that-is, because every algorithm in our brain is describable in terms of the world-that-is.

Perhaps it might be one day, given sufficiently advanced brain scanning, but we don't have that now, so we still have an is-ought gap.

It's not a matter of "cramming" an entire world-state into your brain -- any approximation that your brain is making, including any self-identified deficiency in the ability to make a moral evaluation in any particular situation, is also encoded in your brain -- your current brain, not some hypothetical superbrain.

The is-ought problem is epistemic. Being told that I have an epistemically inaccessible black box in my head that calculates oughts still doesn't lead to a situation where oughts can be consciously understood as correct entailments of is's.

Comment author: ArisKatsaris 10 January 2013 01:19:46PM 0 points

because we can miscalculate an "ought" or anything else.

One way to miscalculate an "ought" is the same way that we can miscalculate an "is" -- e.g. lack of information, erroneous knowledge, false understanding of how to weigh data, etc.

And also, because people aren't perfectly self-aware, we can mistake mere habits or strongly-held preferences for the outputs of our moral algorithm -- the same way that e.g. a synaesthete might perceive the number 8 to be colored blue, even though there's no "blue" light frequency striking the optic nerve. But that sort of thing doesn't seem like a very deep philosophical problem to me.

Comment author: Peterdjones 10 January 2013 01:30:05PM -1 points

We can correct miscalculations where we have a conscious epistemic grasp of how the calculation should work. If morality is a neural black box, we have no such grasp. Such a neural black box cannot be used to plug the is-ought gap, because it does not distinguish correct calculations from miscalculations.