We are surprisingly high in forebrain neuron count:
https://en.m.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
Agree with this criticism for the difference between humans and pigs, but there are too many orders of magnitude of difference between shrimp and humans to make detailed measures of computing power very necessary.
Quantifying empathy is intrinsically hard, because everything begins by postulating (not observing) consciousness in a group of beings, and that is only well grounded for humans. So, in the end, even if you are totally successful in developing a theory of human sentience, for other beings you are extrapolating. Anything beyond solipsism is a leap of faith (unless you find St. Anselm's ontological proof credible).
Illusionism is not a competitor, because consciousness is obviously an illusion. That has been immediate since Descartes. That is why you cannot distinguish between "the true reality" and "the matrix": both produce a legitimate stream of illusory experience ("you").
Epiphenomenalism is physicalist in the sense that it respects the autonomy and causal closure of the physical world. Given that we are not p-zombies (because there is an "illusory" but immediate difference between real humans and p-zombies), that difference is precisely what we call “consciousness”.
Descartes+Laplace=Chalmers.
In fact, there is only one escape: consciousness could play an active role in the fundamental laws of physics. That would break the Descartes/Laplace orthogonality, making philosophy interesting again.
This is the kind of criticism I kindly welcome. I used the cockroach (forebrain) data here as a proxy:
Thank you very much for the reference, because I am searching for co-authors for further developments on SV-PAYW.
Also posted in EA Forum: https://forum.effectivealtruism.org/posts/uW77FSphM6yiMZTGg/why-not-parliamentarianism-book-by-tiago-ribeiro-dos-santos
That is the whole point of ethical systems, isn't it? To derive all (ethical) values from a few postulates. Of course, most valuations are not ethical (they are preferences or tastes), but this is an excellent argument for rational (systematic) ethics.
Well, “one feels one could have done otherwise” is the part of the qualia of free will my definition does not legitimize.
When you choose among several options, the options are real (another person could have done otherwise), but once it is “you” who chooses, mechanism implies that “all degrees of freedom have been used”.
I say: "you are free when you do as you want, no matter how determined your desires are". This is how I define freedom in "Freedom under naturalistic dualism" (and I think that this position is original, so if that is not the case, I would be glad to be corrected).
I miss some discussion of evolutionary game theory, where some of the discrepancies can be rationalized.
I wrote this tour from game theory to cultural evolution:
https://www.lesswrong.com/posts/xajeTjMtkGGEAwfbw/the-evolution-towards-the-blank-slate