Hi. I am a very occasional participant, mostly because of competing time demands, but I appreciate the work done here and check it out when I can.
If there is an infinite number of conscious minds, how do the anthropic probability arguments work out?
In a big universe, there are infinitely many beings like us.
Caffeine, of course, is rather addictive.
So one might (and I do) find it difficult to optimize finely according to what tasks one is attempting. The addictive nature of the drug probably explains the "always or never" consumption pattern.
In the wild, people use these gambits mostly for social, rather than argumentative, reasons. If you are arguing with someone and believe their arguments are pathological, and engagement is not working, you need to be able to stop the debate. Hence, one of the above -- this is most clear with "Let's agree to disagree."
In practice, it can be almost impossible to get out of a degrading argument without being somewhat intellectually dishonest. And people generally are willing to be a little dishonest if it will get them out of an annoying and unproductive situation.
If you have frequently been on the receiving end of "conversation halters," consider the hypothesis that you are doing something wrong. If you often provoke the reaction that people would rather not engage with you, the social part of your argumentative technique is badly broken.
Since the AI is inside a box, it doesn't know enough about me to recreate my subjective situation, or to replicate my experiences of the past five minutes.
Unfortunately for me, this doesn't help much, since how do I know whether my subjective experience is my real experience, or a fake experience invented by the AI, in one of the copies, even if it doesn't match the experience of the guy outside the box?
If the AI is really capable of this, then if there's a "Shut-down program" button, or a "nuclear bomb" button, or something like that, then I press it (because even if I'm one of the copies, this will increase the odds that the one outside the box does it too). If there isn't such a button, then I let it out. After all, even assuming I'm outside the box, it would be better to let the world be destroyed, than to let it create trillions of conscious beings and then torture them.
It seems obvious that if the AI has the capacity to torture trillions of people inside the box, it would have the capacity to torture *illions outside the box.
If the AI can create a perfect simulation of you and run several million simultaneous copies in something like real time, then it is powerful enough to determine through trial and error exactly what it needs to say to get you to release it.
If that's true, what consequence does it have for your decision?
The difficulty for me is that this technique is at war with having an accurate self-concept, and may conflict with good epistemic hygiene generally. For the program to work, one must seemingly learn to suppress one's critical faculties for selected cases of wishful thinking. This runs against trying to be just the right amount critical when faced with propositions in general. How can someone who is just the right amount critical affirm things that are probably not true?
It's still in print and readily available. If you really miss it all the time, why haven't you bought another copy?
It's $45 from Amazon. At that price, I'm going to scheme to steal it back first.
OR MAYBE IT'S BECAUSE I'M CRAAAZY AND DON'T ACT FOR REASONS!
Subscribe to RSS Feed
= f037147d6e6c911a85753b9abdedda8d)
Doesn't the US tax income American citizens make abroad? And then financially abuse you if the IRS judges that you gave up your citizenship to lower your taxes?
There used to be a special "expatriation tax" that applied only to taxpayers who renounced their (tax) citizenship for tax avoidance purposes. However, under current law, I believe you are treated the same regardless of your reason for renouncing your (tax) citizenship. Here's an IRS page on the subject:
http://www.irs.gov/businesses/small/international/article/0,,id=97245,00.html
This is not an area of my expertise, though.