I had a white-blue upbringing (military family) and a blue-green career (see below); my hobbies are black-green-white (compost makes my garden grow to feed our community); my vices are green-red; and my politics are five-color (at least to me).
Almost all of my professional career has been in sysadmin and SRE roles, which are tech (blue) but care about keeping things reliable and sustainable (green) rather than pursuing novelty (red). Within tech's blue, it seems to me that developer roles run blue-red (build the exciting new feature!); management roles run blue-white (what orderly social rules will enable intelligence?); and venture capital runs blue-black (how do we get cleverness to make money for us?); while SRE and similar roles run blue-green.
My garden runs on blood, bones, rot, and worm poop: green-black Golgari territory, digesting the unwanted to feed the wanted. I stick my hands in the dirt to feel the mycorrhizae. But the point of the garden is to share food with others (green-white) because it makes me feel good (black-red). I'm actually kinda terrible at applying blue to my hobbies, and should really collect some soil samples for lab testing one of these days.
Here are some propositions I think I believe about consciousness:
If the text says that it is not holy, then who are we to disagree?
I think I agree: perceptions are fallible representations of reality, but infallible representations of themselves. If I think I see a cat, I may be wrong about reality (it's actually a raccoon) but I'm not wrong about having had the perception of a cat.
A standardized test measures some combination of ① the intended subject-matter skills and knowledge, and ② skills and knowledge that are specific to manipulating the test, also known as "test-taking skills". We know this because we can teach test-taking skills: that is, we can improve a student's performance on a standardized subject test by teaching them something that is not that subject.
A perfect standardized test would measure only ① with no influence from ②. If the subject is biology, then a perfect test would be one where the only way to get a better score would be to actually get better at biology, not at test-taking.
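The ①/② decomposition can be sketched as a toy linear model. This is purely illustrative (the function, its weight, and the numbers are my own assumptions, not anything from the testing literature): a test's observed score mixes subject skill with test-taking skill, and a "perfect" test is one where the mixing weight on test-taking is zero.

```python
def observed_score(subject_skill, test_taking_skill, test_sensitivity=0.3):
    """Hypothetical model: the observed score is a weighted mix of
    (1) subject skill and (2) test-taking skill; test_sensitivity is
    the fraction of the score driven by test-manipulation skill."""
    return (1 - test_sensitivity) * subject_skill + test_sensitivity * test_taking_skill

# Same biology knowledge before and after coaching on test-taking only:
before = observed_score(subject_skill=70, test_taking_skill=50)
after = observed_score(subject_skill=70, test_taking_skill=90)
assert after > before  # score rose with zero change in biology knowledge

# A "perfect" test has test_sensitivity == 0, so coaching on
# test-taking moves nothing:
assert observed_score(70, 50, test_sensitivity=0) == observed_score(70, 90, test_sensitivity=0)
```

On this toy model, teaching to the test is exactly an intervention on the second argument, and test design is the project of driving `test_sensitivity` toward zero.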
Some elements of moral thought seem to reflect an underlying reality akin to mathematical truth. "Fairness" for instance naturally relates to equal divisions, reciprocity, and symmetries akin to Rawls' veil of ignorance. Eliezer discusses this here: if Y thinks "fair" is splitting the pie evenly and Z thinks "fair" is "Z gets the whole pie", Y is just right and Z is just wrong.
Even though human moral judgment is evolved, what it is evolved to do includes mapping out symmetries like "it's wrong for A to murder B, or for B to murder A" → "it's wrong for anyone to murder anyone" → "murder is generically wrong".
Perceptions are infallible
Can you expand on this? Optical and auditory illusions exist, which seem to me to be repeatably demonstrable fallible perceptions: people reliably say that line A looks longer than line B in the Müller-Lyer illusion (the one with the arrowheads), even after measuring.
I'll just also say that commitment to a belief being the whole point is a very Abrahamic view and less common in other religions.
It seems to me that Anglo-American atheists often have a Protestant (or even specifically Lutheran) ontology of religion; they implicitly expect that "religion" must mean something creedal, evangelical, often sola scriptura, and various other things that aren't even universal among denominations of Christianity.
There are ways to address the problem of cheating in school without nuking the datacenters.
For instance: If a student gets an LLM to write their homework essay or term paper, that's not very different from having another student or an essay-writing service write it — and those are problems that schools and colleges faced before LLMs came along. In these cases, the student will not be able to discuss their work very effectively in class. So, structure the class as a seminar or workshop, in which students are expected to discuss their work. In math classes, have students discuss proofs and constructions, do work on whiteboards, etc.
If the class is structured as a homework-based password-guessing exercise, then the LLM cheaters win. But if the class is structured as an in-person discussion, the LLM cheaters lose.
The whole AI safety concern implies that the intersection of "thinkers" and "disasters" is non-empty, yes?