asparisi

Comments
One could judge the strength of these with a few empirical tests. For (2), compare industries where the skills learned in college (or in a particular major) are clearly relevant against industries where they are not, and compare hiring rates among college grads with the relevant skill-signals, college grads without them, and non-college grads. For (3), look to industries where signals of pre-existing ability do not depend on having been to college, and compare their rates of hiring grads vs. non-grads. (These would presumably be jobs in sectors where some loosely defined intellectual ability is less important. Such jobs are becoming scarcer due to automation, particularly in First World countries, but the tests should still be possible.) (1) is harder to test, as it is agnostic, but seeing how these intuitions match those of people in hiring positions could be informative. Other signals, as mentioned in the comments, probably have their own tests that can be run on them.
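(For concreteness, here's a minimal sketch of what the comparison for (2) might look like as a statistics problem; the counts and group labels are entirely invented for illustration:)

```python
# Illustrative only: hypothetical hiring counts for the three groups in (2).
from scipy.stats import chi2_contingency

# Rows: the three groups. Columns: hired, not hired. All numbers are made up.
counts = [
    [120, 80],   # college grads with relevant skill-signals
    [90, 110],   # college grads without relevant skill-signals
    [40, 160],   # non-college grads
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# A small p-value suggests hiring rates differ across the groups; running
# this separately in skill-relevant and skill-irrelevant industries and
# contrasting the patterns would be the actual test.
```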

I don't get paid on the basis of Omega's prediction given my action. I get paid on the basis of my action given Omega's prediction. I at least need to know the base rate at which I actually one-box (or two-box), although with only two minutes, I would probably need to know the base rate at which Omega predicts that I will one-box. Actually, just getting P(Ix|Ox) and P(Ix|O~x) would be great.

I also don't have a mechanism for determining whether 1033 is prime that is readily available to me without getting hit by a trolley (with what probability do I get hit by the trolley, incidentally?), nor do I know off-hand the ratio of odd-numbered primes to odd-numbered composites.
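(A mechanism is easy enough to come by when there isn't a trolley involved; plain trial division settles it, and 1033 turns out to be prime:)

```python
import math

def is_prime(n: int) -> bool:
    """Trial division: check every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(1033))  # True -- 1033 is prime
```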

I don't quite have enough information to solve the problem in any sort of respectable fashion. So what the heck: I two-box and hope that Omega is right and that the number is composite. But if it isn't, then I cry into my million dollars. (With P(.1): I don't expect to actually be sad about winning $1M, especially after having played several thousand times and presumably having won at least some money in that period.)
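(To make the dependence on those probabilities concrete, here's a toy expected-value calculation. It assumes the standard Newcomb payoffs, $1M in the opaque box iff Omega predicted one-boxing plus $1,000 in the transparent box, and the accuracy figures are invented:)

```python
# Toy model: expected payoff of each action as a function of the
# probability that Omega's prediction matches the action taken.
# Payoffs assume the standard Newcomb setup; accuracies are made up.

def expected_payoffs(p_match: float) -> tuple[float, float]:
    ev_one_box = p_match * 1_000_000                  # box filled iff one-boxing predicted
    ev_two_box = (1 - p_match) * 1_000_000 + 1_000    # box filled iff mispredicted
    return ev_one_box, ev_two_box

for p in (0.5, 0.9, 0.999):
    one, two = expected_payoffs(p)
    print(f"accuracy {p}: one-box EV = ${one:,.0f}, two-box EV = ${two:,.0f}")
```

(One-boxing pulls ahead as soon as the predictor is better than about 50.05% accurate, which is why the base rates matter so much.)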

Accounting for qualia and starting from qualia are two entirely different things. Saying "X must have qualia" is unhelpful if we cannot determine whether or not a given thing has qualia.

Qualia can perhaps best be described, briefly, as "subjective experience." So what do we mean by 'subjective' and 'experience'?

If by 'subjective' we mean 'unique to the individual position' and by 'experience' we mean 'alters its internal state on the basis of some perception,' then qualia aren't that mysterious: a video camera can be described as having qualia, if that's what we are talking about. Of course, many philosophers won't be happy with that sort of breakdown. But it isn't clear that they will be happy with any definition of qualia that allows it to be distinguished at all.

If you want it to be something mysterious, then you aren't even defining it. You are just being unhelpful: like if I tell you that you owe me X dollars, without giving you any way of determining X. If you want to break it down into non-mysterious components or conditions, great. What are they? Let me know what you are talking about, and why it should be considered important.

At this point, it's not a matter of ruling anything out as incoherent. It's a matter of trying to figure out what sort of thing we are talking about when we talk about consciousness and seeing how far that label applies. There doesn't appear to be anything inherently biological about what we are talking about when we are talking about consciousness. This could be a mistake, of course: but if so, you have to show it is a mistake and why.

  1. You've chosen one of the easier aspects of consciousness: self-awareness rather than, e.g., qualia.

I cover this a bit when I talk about awareness, but I find qualia to often be used in such a way as to obscure what consciousness is rather than explicate it. (If I tell you that consciousness requires qualia, but can't tell you how to distinguish things that have qualia from things that don't, along with a good reason to believe that this way of distinguishing is legitimate, then rocks could have qualia.)

  2. The "necessarily biological" could be a posteriori nomic necessity, not a priori conceptual necessity, which is the only kind you knock down in your comment.

If the defenders of a biological theory of consciousness want to introduce an empirically testable law to show that consciousness requires biology then I am more than happy to let them test it and get back to us. I don't feel the need to knock it down, since when it comes to a posteriori nomic necessity, we use science to tell whether it is legitimate or not.

I find it helps to break down the category of 'consciousness.' What is it that one is saying when one says that "Consciousness is essentially biological"? Here it's important to be careful: there are philosophers who gerrymander categories. We can start by pointing to human beings, since we take human beings to be conscious, but obviously we aren't pointing at every human attribute. (For instance, having 23 pairs of chromosomes isn't a characteristic we are pointing at.)

We have to be careful that when we point at an attribute, we are actually trying to solve the problem and not just obscure it: if I tell you that consciousness is only explainable by Woogles, that's just unhelpful. The term we use needs to break down into something that allows us, at least in principle, to tell whether or not a given thing is conscious. If it can't do THAT, we are better off using our own biased heuristics and forgoing definitions: at least with heuristics, I can tell you that my neighbor is conscious and a rock isn't. Without some way of actually telling what is conscious and what is not, we have no basis to say when we've found a conscious thing.

It seems like with consciousness, we are primarily interested in something like "has the capacity to be aware of its own existence." Now, this probably needs to be further explicated. "Awareness" here is probably a trouble word. What do I mean when I say that something is "aware"? Well, it seems like I mean some combination of being able to perceive a given phenomenon, being able to distinguish degrees of the phenomenon when present, and being able to distinguish the phenomenon from other phenomena. When I say that my sight makes me aware of light, I mean that it allows me both to distinguish different sorts of light and to distinguish light from non-light: I don't mistake my sight for hearing, after all. So if I am "aware of my own existence," then I have the capacity to distinguish my existence from things that are not my existence, and the ability to think about degrees to which I exist. (In this case, my intuition says that this cashes out in questions like "how much can I change and still be me?")

Now, there isn't anything about this that looks inherently biological. I suppose if we came at it another way and said that to be conscious is to "have neural activity" or something, it would be inherently biological, since that's a biological system. But while having neural activity may be necessary for consciousness in humans, it doesn't quite feel like that's what we are pointing to when we say "conscious." If I somehow met a human being and was shown a brain scan indicating no neural activity, but it was apparently aware of itself and able to talk about how it had changed over time, and I was convinced I wasn't being fooled, I would call it conscious. Similarly, if I were shown a human being with neural activity who didn't seem capable of distinguishing itself from other objects or of considering how it might change, I would say that human being was not conscious.

On those criteria, I would say Plato. Because Plato came up with a whole mess of ideas that were... well, compelling but obviously mistaken. Much of Western Philosophy can be put in terms of people wrestling with Plato and trying to show just why he is wrong. (Much of the rest is wrestling with Aristotle and trying to show why HE is wrong... but then, one can put Aristotle into the camp of "people trying to show why Plato is wrong.")

There's a certain sort of person who is most easily aroused from inertia when someone else says something so blatantly, utterly false that they want to pull their hair out. Plato helped motivate these people a lot.

The New Organon, particularly Aphorisms 31-46, shows not only an early attempt to diagnose human biases (what Bacon referred to as "The Idols of the Mind") but also some of the reasons why he rejected Aristotelian thought, common at the time, in favor of experimental practice.

Maybe there are better ways to expand than through spacetime, better ways to make yourself into this sort of maximizing agent, and we are just completely unaware of them because we are comparatively dull next to the sort of AGI that has a brain the size of a planet? Some way to beat entropy, perhaps. That would make the sky we see consistent with either UFAI or FAI being out there.

I can somewhat imagine what these sorts of ways might be, but I have no idea whether they are likely or even feasible, since I am not a world-devouring AGI and can only speculate wildly about what lies beyond our current understanding of physics.

A simpler explanation could be that AGIs use stealth in pursuing their goals: the ability to camouflage oneself has always been of evolutionary import, and AGIs may find it useful to create a sky that looks like "nothing to see here" to other AGIs (as they will likely be unfriendly toward each other). Camouflage, if good enough, would allow one to hide from predators (bigger AGIs) and sneak up on prey (smaller AGIs). Since we would likely be orders of magnitude worse at detecting an AGI's camouflage, we see a sky that looks like nothing is wrong. This doesn't explain why we haven't been devoured, of course, which is the weakness of the argument.

Or maybe something like acausal trade limits the expansion of AGI. If AGIs realize that fighting over resources is likely to hinder their goals more than help them in the long run, they might limit their expansion on the theory that there are other AGIs out there. If I think I am one out of a population of a billion, and I don't want to be a target for a billion enemies at once, I might decide that taking over the entire galaxy/universe/beyond isn't worth it. In fact, if these sorts of stand-offs become more common as the scale becomes grander, that might be a motivation not to pursue such scales. The problem with this is that you would expect earlier AGIs to simply take advantage before future ones get close to being near-equals, and defect on this particular dilemma. (A billion planet-eating AGIs are probably no match for one galaxy-eating AGI. So if you see a way to become the galaxy-eater before enough planet-eaters can come to the party, you go for it.)
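(A toy version of that trade-off, with every number invented for the sake of illustration: expansion yields a fixed gain but risks retaliation from each assumed peer, so its expected value collapses as the assumed population grows:)

```python
# Toy model, purely illustrative: expected utility of expanding vs. staying
# quiet for an agent that assumes it is one of n_peers peer AGIs.
# All payoffs and probabilities are invented for the example.

def ev_expand(n_peers: int, gain: float = 100.0,
              p_detect: float = 1e-6, loss: float = 1e6) -> float:
    # Probability that at least one peer detects the expansion and retaliates.
    p_conflict = 1 - (1 - p_detect) ** n_peers
    return gain - p_conflict * loss

for n in (1, 1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} peers: EV of expanding = {ev_expand(n):>12,.1f}")
# With few assumed peers, expansion looks good; with a billion, near-certain
# conflict swamps the gain -- unless you can become the galaxy-eater first.
```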

I don't find any of these satisfying, as one seems to require a particular subset of possibilities for unknown physics and the others seem to lean pretty heavily on the anthropic principle to explain why we, personally, are not dead yet. I see possibilities here, but none of them jump out at me as exceptionally likely.

I get that feeling whenever I hit a milestone in something: if I run a couple miles further than I had previously, if I understand something that was opaque before, if I am able to do something that I couldn't before, I get this "woo hoo!" feeling that I associate with levelling up.

Even if they are sapient, it might not have the same psychological effect.

The effect of killing a large, snarling, distinctly-not-human-thing on one's mental faculties and the effect of killing a human being are going to be very different, even if one recognizes that thing to be sapient.

If they are, Harry would assign moral weight to the act after the fact; but the natural sympathy described as eroding in the quote above doesn't seem as likely to be affected, given a human being's psychology.
