Sorry this is so late, but I honestly completely forgot about this after I wrote it, so I never came back to see what transpired.
Anyway, I'm aware of how the marginal propensity to consume affects tax incidence, but in this case, where payroll taxes apply to every employee at every business, the only choices involved are whether to work and whether to hire, and companies have far more leeway in that decision. You can avoid the fizzlesprot tax by consuming an untaxed equivalent or finding a different, fizzlesprotless sexual fetish. You can only avoid a payroll tax by not working at all.
I need to quibble with the "compulsory retirement savings" point. Realistically, any amount that the government forces the employer to contribute as a condition of hiring you is money that would otherwise have been given to you as wages. There is no way to increase someone's value by fiat, so it's misleading to suggest that you somehow gain from the tax (apart from the social value of the retirement scheme). Also, the US Social Security withholding is 12.4% of income; half of it is paid by the employer before the employee sees the funds but, as discussed, that half comes out of the worker's wages all the same.
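To make the arithmetic concrete, here's a minimal sketch in Python of the incidence claim above. The 6.2%/6.2% split is the Social Security figure cited; the $50,000 compensation budget, and the assumption that the employer fixes total compensation (rather than the stated wage), are mine, purely for illustration:

```python
# Hypothetical incidence model: the employer budgets a fixed total
# compensation, so the "employer-paid" half of the tax is subtracted
# from wages before the employee ever sees it.

EMPLOYEE_RATE = 0.062   # the half visible on the pay stub
EMPLOYER_RATE = 0.062   # the half paid "by the employer"

def takehome_and_true_rate(total_compensation: float):
    """Split a fixed compensation budget into stated wage, both tax
    shares, and the effective combined rate the worker actually bears."""
    # The stated wage must satisfy: wage * (1 + EMPLOYER_RATE) == budget.
    wage = total_compensation / (1 + EMPLOYER_RATE)
    employer_tax = wage * EMPLOYER_RATE
    employee_tax = wage * EMPLOYEE_RATE
    takehome = wage - employee_tax
    true_rate = (employer_tax + employee_tax) / total_compensation
    return takehome, true_rate

takehome, rate = takehome_and_true_rate(50_000)
print(f"take-home pay: ${takehome:,.2f}, effective payroll tax: {rate:.1%}")
```

Under those assumptions the worker bears the full 12.4% of the stated wage (about 11.7% when measured against the total compensation budget), regardless of which half appears on the pay stub.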
I was unfamiliar with the case. After checking out both links for quite some time, but prior to reading the comments, I estimated:
After reading the comments, I was a little surprised that the consensus seems to be decidedly against Knox's guilt. The simplest explanation is that I'm just not a very good rationalist, but I don't find that very satisfying. The four parts of the story that I felt were inconsistent with Knox being innocent were:
I'm looking for a particular fallacy or bias that I can't find on any list.
Specifically, this is when people say "one more can't hurt": a person throwing an extra piece of garbage on an already littered sidewalk, a gambler who has lost nearly everything deciding to bet away the rest, a person in bad health continuing the behavior that caused the problem, and so on. I can think of dozens of examples, but I can't find a name. I would expect it to be called the "Lost Cause Fallacy" or the "Fallacy of Futility" or something, but neither seems to be recognized anywhere. Does this have a standard name that I don't know, or is it so obvious that no one ever bothered to name it?
I have a theory about alcohol consumption; I call people who like (or don't mind) the taste "tongue blind." My theory is that these people have such poor taste receptors that they need an overly strong stimulus to register anything other than bland. Under this theory, I would expect people who like alcohol to also like very spicy food, to put extra salt on most things they eat, and to think that vanilla is a synonym for plain.
Oh, I don't know that. What would remain of you if you could download your mind into a computer? Who would you be if you were no longer affected by the levels of serotonin or adrenaline you produce, or if pheromones didn't affect you? Once you subtract the biological from the human, I imagine that what remains is the pure person. There should be no difference between that person and one who was created intentionally or one that evolved in a different species, beyond their personal experiences (controlling for the effects of their physiology).
I don't have...
I'm just trying to figure out under what circumstances we could consider a completely artificial entity a continuation of our existence. As you pointed out, merely containing our knowledge isn't enough. Human knowledge is a constantly growing edifice, where each generation adds to and builds upon the successes of the past. I wouldn't expect an AI to find value in everything we have produced, just as we don't. But if our species were wiped out, I would feel comfortable calling an AI which traveled the universe occasionally writing McCartney- or Lennon-inspired songs "us." That would be survival. (I could even deal with a Ringo Starr AI, in a pinch.)
I don't consider our innate biological tendencies the core of our being. We are an intelligence superimposed on a particular biological creature. It may be difficult to separate the aspects of one from the other (and I don't pretend to be fully able to do so), but I think it's important that we learn which is which so that we can slowly deemphasize and discard the biological in favor of the solely rational.
I'm not interested in what it means to be human, I want to know what it means to be a person. Humanity is just an accident as far as I'm concerned. It might as well have been anything else.
Preferred, absolutely. I just think that the survival of our knowledge is more important than the survival of the species sans knowledge. If we are looking to save the world, I think an AI living on the moon pondering its existence should be a higher priority than a hunter-gatherer tribe stalking wildebeest. The former is our heritage, the latter just looks like us.
The squirrel civilization would be a pretty impressive achievement, granted. The destruction of this particular species (humans) would seemingly be a tremendous loss universally, if intelligence is a rare thing. Nonetheless, I see it as only a certain vessel in which intelligence happened to arise. I see no particular reason why intelligence should be specific to it, or why we should prefer it over other containers should the opportunity present itself. We would share more in common with an intelligent squirrel civilization than with a band of gorillas, even though the gorillas are the closer relatives.
If I implied that, it was unintentional. All I mean is that I see no reason why we should feel a kinship toward humans as humans, as opposed to any species of people as people. If our civilization were to collapse entirely and had to be rebuilt from scratch, I don't see why the species doing the rebuilding is all that important -- they aren't "us" in any real sense. We can die even if humanity survives. By the same token, if the paperclip AI contains none of our accumulated knowledge, we go extinct along with the species. If the AI contains some of our knowledge and a good degree of sentience, I would argue that part of us survives despite the loss of this particular species.
I agree generally, but I think when we talk about wiping out humanity we should include the idea that losing a significant portion of our accumulated information would be essentially the same as extinction. I don't see a difference between a group of humans with stone-age technology surviving the apocalypse and slowly repopulating the world and a different species (whether dogs, squirrels, or porpoises) doing the same thing.
I don't think it is necessarily true that merely by joining the faction most likely to win you will share in the spoils of victory. Leaders distribute rewards based on seniority more than support. In a close contest, you would likely be courted heavily by both sides, providing a temporary boost in status, but that would disappear once the conflict is over. You will not have earned the trust of the winner, since your allegiance was in doubt. I don't think there is much to gain by joining the larger side late; you'll be on the bottom of society once the dust settles.
True, but I think that would be a problem with any test. I'm just trying to find a way around it since I think that as you add ways to avoid gaming, you both complicate and weaken the test. Perhaps a solution would be to test people without their knowledge, and reveal whether they succeeded or not at a later date.
I get the feeling that the real problem here is repeatability. It's one thing to design a test for rationality, it's another to design a test that could not be gamed once the particulars are known. Since it probably isn't possible to control the flow of information in that way, the next-best option might be to design a test so that the testing criteria would not be understood except by those who pass.
I'm thinking of a test I heard about years ago. The teacher passes out the test, stressing to the students to read the instructions before beginning. The first instruction is to read the entire test before answering anything; the final instruction says to ignore the questions, write your name at the top, and hand the test in. Only the students who actually followed the instructions pass, and until the results come back, nobody else understands why.
I think, then, that the harm associated with this man's suicide would have to take into account the rise in premiums he would be forcing on people in similar situations. His death may increase the amount a similar man would have to pay, decreasing the likelihood that he could afford insurance and increasing the harm that man's death would cause his dependents. Over time, those effects could swamp any short-term benefit to the charity.
Or, if the behavior became common, insurance companies could simply decline to cover suicide. The problems would arise if, say, a fatal car accident were alleged to be a covert suicide (but wouldn't we have this same problem before the 2-year limit?) Perhaps that's why insurance companies cover suicides: for peace of mind, so that you know they won't accuse your corpse of having done it on purpose.
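For what it's worth, the premium effect is easy to make concrete. A minimal sketch, with entirely made-up rates, payouts, and pool size, of how covered suicides get priced into everyone else's premiums:

```python
# Toy model: an actuarially fair premium is just the expected payout per
# policyholder, so any extra covered claims raise everyone's premium.

pool_size = 10_000          # policyholders in the risk pool (assumed)
payout = 500_000.0          # face value of each policy (assumed)
base_death_rate = 0.002     # annual non-suicide claim rate (assumed)
suicide_rate = 0.0002       # extra claim rate if suicide is covered (assumed)

def fair_premium(claim_rate: float) -> float:
    """Actuarially fair annual premium: expected payout per policyholder."""
    return claim_rate * payout

before = fair_premium(base_death_rate)
after = fair_premium(base_death_rate + suicide_rate)
print(f"premium without suicide claims: ${before:,.2f}")
print(f"premium with suicide claims:    ${after:,.2f}")
print(f"extra cost spread across pool:  ${(after - before) * pool_size:,.2f}")
```

With these made-up numbers, each covered suicide adds its full payout to the pool's costs, and the pool recoups it by charging every similar man a little more, which is the mechanism the harm estimate above would have to account for.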
I don't think this qualifies as a belief; it's just something I have noticed.
My dreams are always a collection of images (assembled into a narrative, naturally) of things I thought about precisely once the prior day. Anything I did not think about, or thought about more than a single time, is not included. I like to use this to my advantage to avoid nightmares, but I have also never had a sex dream. The fact that other people seem to have sex dreams is good evidence that my experience is rare or unique, but I have no explanation for it.
I stopped lying, to the best of my ability, years ago. I've found, though, that as my lying skills have degraded, I have also partially lost the ability to consider my words before I speak and I have lost the knack for social pablum (although I may never have had that to begin with; tough to say).
When someone asks me how I am, I always answer "same as always." I would like to say that I do it so that I don't need to commit to a position with which I disagree, but the truth is that the words come out before I can figure out the normal, polite response.
I can't believe that this is something people talk about. I've had a group of people in my head for years, complete with the mindscape the reddit FAQ talks about. I just thought I was a little bit crazy; it's nice to see that there's a name for it.
I can't imagine having to deal with just one though. I started with four, which seemed like a good idea when I was eleven, and I found that distracting enough. Having only one sounds like being locked in a small room with only one companion -- I'd rather be in solitary. I kept creating more regardless, and I...