All of andzuck's Comments + Replies

andzuck

I think many of the different takes you listed with "consciousness as... X" can actually be held together and are not mutually exclusive :)

Also, you may enjoy David Chalmers's paper on The Meta-Problem of Consciousness... "the problem of explaining why we think consciousness is hard to explain" in the first place. https://philarchive.org/archive/CHATMO-32

That’s a good distinction between hoping something will exist and believing that it exists! Thanks.


What empirical evidence would someone need to observe to believe that such an AGI, maximal in any of those traits, exists?

tivelen
The most likely scenario for human-AGI contact is some group of humans creating an AGI themselves, in which case all we need to do is confirm its general intelligence to verify its existence as an AGI. If we have no information about a general intelligence's origins, or its implementation details, I doubt we could ever empirically determine that it is artificial (and therefore an AGI).

We could empirically determine that a general intelligence knows the correct answer to every question we ask (great knowledge), can do anything we ask it to (great power), and does do everything we want it to do (great benevolence), but it could easily have constraints on its knowledge and abilities that we as humans cannot test.

I will grant you this: just as sufficiently advanced technology would be indistinguishable from magic, a sufficiently advanced AGI would be indistinguishable from a god. "There exists some entity that is omnipotent, omniscient, and omnibenevolent" is not well-defined enough to be truth-apt, however, with no empirical consequences for it being true vs. it being false.
JBlack
Maximality of those traits? I don't think that's empirically determinable at all, and certainly not practically measurable by humans. One can certainly have beliefs about comparative levels of power, knowledge, and benevolence. The types of evidence for and against them should be pretty obvious under most circumstances. Evidence against those traits being greater than some particular standard is also evidence against maximality of those traits. However, evidence for reaching some particular standard is only evidence for maximality if you already believe that the standard in question is the highest that can possibly exist. I don't see any reason why we should believe that any standard that we can empirically determine is maximal, so I don't think that one can rationally believe some entity to be maximal in any such trait. At best, we can have evidence that they are far beyond human capability.
andzuck
Today or someday in the future.
andzuck

Hey Rob, on the question of God, you wrote: “This question is 'philosophy in easy mode', so seems like a decent proxy for field health / competence”

Saying that this is philosophy in easy mode implies that the answer is obvious, and the way you phrased it above makes it seem like atheism is obviously the correct answer.

How would you answer a question I asked about a year ago: Besides implementation details, what differences are there between rationalists' conception of a benevolent AGI and the monotheistic conception of an omnipotent, omniscient, and benevolent God? (source tweet)

Rob Bensinger
I don't know what you meant to set aside by saying "Besides implementation details", but it seems worth noting that the most important difference is that AGI (if it existed today) would be a naturalistic posit, not a supernatural or magical hypothesis. To my eye, your question sounds like 'What's the difference between believing sorcerers exist who can conjure arbitrarily large fireballs, and believing engineers exist who can build flamethrowers?' One is magical (seems strongly contrary to the general character of physical law, treats human-psychology-ish concepts as fundamental rather than physics-ish concepts, etc.), the other isn't.
Matthew Barnett
We could distinguish belief in something from hope that it will exist. For example, one could hope that they won't get a disease without committing to the belief that they won't get that disease. If by "rationalist conception of a benevolent AGI" you are referring to a belief that such an entity will come into existence, then I think one of the primary differences between this and the monotheistic conception of God is that rationalists don't necessarily claim that such a benevolent entity will come into existence. At most, they claim it would simply be good if one (or many) were developed. But it does not seem inevitable, hence the efforts to ensure that AI is developed safely.
tivelen
Rationalists may conceive of an AGI with great power, knowledge, and benevolence, and even believe that such a thing could exist in the future, but they do not currently believe it exists, nor that it would be maximal in any of those traits. If it has those traits to some degree, such a fact would need to be determined empirically based on the apparent actions of this AGI, and only then believed. Such a being might come to be worshipped by rationalists, as they convert to AGI-theism. However, AGI-atheism is the obviously correct answer for the time being, for the same reason monotheistic-atheism is.

I'm commenting on this post probably two years too late, but I wanted to express my enthusiasm for this series you started on Category Theory! CT seems really cool and I recently started browsing Category Theory in Context by Emily Riehl, but paused because I felt like I haven't explored enough different branches of math deeply to see the beauty in what Riehl was sharing. The few posts you wrote here on LW sparked my interest again. I'm writing mostly for myself now, but also as a clue for others that come next. My plan from here is to explore:

johnswentwor... (read more)

Appreciate the crepe joke! My preference is sweet over savory.

On the topic of language, I strongly support Mike's reply which pushes in the direction of finding the 'deep structure' of consciousness. Johannes Kleiner also has written about ways to approach this problem in his paper "Mathematical Models of Consciousness" (https://arxiv.org/pdf/1907.03223.pdf).

To respond to your ask for us to rethink our philosophical commitments... if you were alive before the periodic table of elements was discovered, would you similarly urge Mendeleev to rethink his commit...

Signer
The way I see it, the crux is not in a deep structure being definable - functionalism is perfectly compatible with definitions of experience at the same level of precision and reality as elements. And research into the physical structures that people associate with consciousness can certainly be worthwhile, and it can be used to resolve ethical disagreements in the sense that actual humans would express agreement afterwards. But the stance of QRI seems to be that the resulting precise definition would be literally objective, as in "new fundamental physics" - I think it should be explicitly clarified whether that is the case.
Charlie Steiner
Neuroscience and philosophy are not physics and chemistry. I don't expect there to be an "atomic theory of color qualia" or anything like it because of a combination of factors like:

- Cultural and general interpersonal differences in color perception.
- The tendency of evolution to produce complicated, interlinked mechanisms, including in the brain, rather than modular ones.
- Examples of brain damage and people with unusual psychology or physiology that have dramatically different color qualia than me.
- Animals and artificial systems that use color perception to navigate the world but don't seem to converge to similar ways of perceiving color.
- The evidence of absence of a soul or other homuncular center of perception, which necessitates understanding perception as an emergent phenomenon made of lots of little pieces.
- The causal efficacy of color perception (i.e. I don't just see things, I actually do different things depending on what I see) tying colors into all the other complications of the human mind.
- Complications that we know about from neuroscience, such as asymmetric local centers of function, and certain individual clusters of neurons being causally related to individual memories, motions, and sensations.
- Our experience with artificial neural networks, and how challenging interpreting their weights is.

If we compare this with atoms: atoms do indeed have some local variation in mass, but only within a suspiciously small range. Rules like conservation of mass appear to hold among elements, rather than there being common exceptions. We didn't already know that atoms were emergent phenomena from the interactions of bajillions of pieces. We did not already have a scientific field studying how many of those bajillions of pieces played idiosyncratic and evolutionarily contingent roles. Etc.