Thoughts on the frame problem and moral symbol grounding
(some thoughts on frames, grounding symbols, and Cyc)
The frame problem is a problem in AI concerning all the variables not explicitly tracked within the logical formalism - what happens to them? To illustrate, consider the Yale Shooting Problem: a person is going to be shot with a gun at time 2. If that gun is loaded, the person dies. The gun will get loaded at time 1. Formally, the system is:
- alive(0) (the person is alive to start with)
- ¬loaded(0) (the gun begins unloaded)
- true → loaded(1) (the gun will get loaded at time 1)
- loaded(2) → ¬alive(3) (the person will get killed if shot with a loaded gun)
So the question is, does the person actually die? It would seem blindingly obvious that they do, but that isn't formally clear - we know the gun was loaded at time 1, but was it still loaded at time 2? Again, this seems blindingly obvious - but that's because of the words, not the formalism. Ignore the descriptions in parentheses, and the suggestive names of the LISP tokens.
Since that's hard to do, consider the following example. Alicorn hates surprises - they make her feel unhappy. Let's say that we decompose time into days, and that a surprise one day will ruin her next day. Then we have a system:
- happy(0) (Alicorn starts out happy)
- ¬surprise(0) (nobody is going to surprise her on day 0)
- true → surprise(1) (somebody is going to surprise her on day 1)
- surprise(2) → ¬happy(3) (if someone surprises her on day 2, she'll be unhappy the next day)
So here, is Alicorn unhappy on day 3? Well, it seems unlikely - unless someone coincidentally surprised her on day 2. And there's no reason to think that would happen! So, "obviously", she's not unhappy on day 3.
Except... the two problems are formally identical. Replace "alive" with "happy" and "loaded" with "surprise". And though our semantic understanding tells us that loaded(1) → loaded(2) (guns don't just unload themselves) while ¬(surprise(1) → surprise(2)) (being surprised one day doesn't mean you'll be surprised the next), we can't tell this from the symbols.
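This formal identity can be checked mechanically. Below is a minimal sketch (the predicate encoding, the `derive` helper, and the `not_*` convention for negations are all my own illustration, not part of the original formalism):

```python
def derive(initial_facts, rules):
    """Naive forward chaining: apply rules until no new facts appear."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if (premise is None or premise in facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The shooting system, exactly as formalized above.
facts = {("alive", 0), ("not_loaded", 0)}
rules = [
    (None, ("loaded", 1)),               # true → loaded(1)
    (("loaded", 2), ("not_alive", 3)),   # loaded(2) → ¬alive(3)
]

# Without a frame axiom, loaded(2) is never derived, so the death never follows:
assert ("not_alive", 3) not in derive(facts, rules)

# Add the "guns stay loaded" frame axiom, loaded(t) → loaded(t+1), and it does:
frame = [(("loaded", t), ("loaded", t + 1)) for t in range(3)]
assert ("not_alive", 3) in derive(facts, rules + frame)
```

Renaming the tokens ("alive" to "happy", "loaded" to "surprise") changes nothing about what is derivable. The frame axiom is justified for guns but not for surprises, and nothing in the symbols tells you which is which.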
And we haven't touched on all the other problems with the symbolic setup. For instance, what happens to "alive" at any time other than 0 and 3? Does it change from moment to moment? If we want the symbols to do what we want, we need to put in a lot of logical conditions, so that all of our intuitions are captured.
This shows that there's a connection between the frame problem and symbol grounding. If we and the AI both understand what the symbols mean, then we don't need to specify all the conditionals - we can simply deduce them, if asked ("yes, if the person is dead at 3, they're also dead at 4"). But conversely, if we have a huge amount of logical conditioning, then there is less and less that the symbols could actually mean. The more structure we put in our logic, the less structures there are in the real world that fit it ("X(i) → X(i+1)" is something that can apply to being dead, not to being happy, for instance).
This suggests a possible use for the Cyc project - the quixotic attempt to build an AI by formalising all of common sense ("Bill Clinton belongs to the collection of U.S. presidents" and "all trees are plants"). You're very unlikely to get an AI through that approach - but it might be possible to train an already existent AI with it. Especially if the AI had some symbol grounding, then there might not be all that many structures in the real world that could correspond to that mass of logical relations. Some symbol grounding + Cyc + the internet - and suddenly there's not that many possible interpretations for "Bill Clinton was stuck up a tree". The main question, of course, is whether there is a similar restricted meaning for "this human is enjoying a worthwhile life".
Do I think that's likely to work? No. But it's maybe worth investigating. And it might be a way of getting across ontological crises: you reconstruct a model as close as you can to your old one, in the new formalism.
The Virtue of Compartmentalization
Cross posted from my blog, Selfish Meme.
I’d like to humbly propose a new virtue to add to Eliezer’s virtues of rationality — the virtue of Compartmentalization. Like the Aristotelian virtues, the virtue of Compartmentalization is a golden mean. Learning the appropriate amount of Compartmentalization, like learning the appropriate amount of bravery, is a life-long challenge.
Learning how to program is both learning the words to which computers listen and training yourself to think about complex problems. Learning to comfortably move between levels of abstraction is an important part of the second challenge.
Large programs are composed of multiple modules. Each module is composed of lines of code. Each line of code is composed of functions manipulating objects. Each function is yet a deeper set of instructions.
For a programmer to truly focus on one element of a program, he or she has to operate at the right level of abstraction and temporarily forget the elements above, below or alongside the current problem.
Programming is not the only discipline that requires this focus. Economists and mathematicians rely on tools such as regressions and Bayes’ rule without continually re-deriving the math that makes them truths. Engineers do not consider wave-particle duality when solving Newtonian-scale problems. When a mechanic is fixing a radiator, the only relevant fact about spark plugs is that they produce heat.
If curiosity killed the cat, it’s only because it distracted her from more urgent matters.
As I became a better programmer I didn’t notice my Compartmentalization-skills improving – I was too lost in the problem at hand, but I noticed the skill when I noticed its absence in other people. Take, for example, the confused philosophical debate about free will. A typical spiel from an actual philosopher can be found in the movie Waking Life.
Discussions about free will often veer into unproductive digressions about physical facts at the wrong level of abstraction. Perhaps, at its deepest level, reality is a collection of billiard balls. Perhaps reality is, deep down, a pantheon of gods rolling dice. Maybe all matter is composed of cellists balancing on vibrating tightropes. Maybe we’re living in a simulated matrix of 1’s and 0s, or maybe it really is just turtles all the way down.
These are interesting questions that should be pursued by all blessed with sufficient curiosity, but these are questions at a level of abstraction absolutely irrelevant to the questions at hand.
A philosopher with a programmer’s discipline thinking about “free will” will not start by debating the above questions. Instead, he will notice that “free will” is itself a philosophical abstraction that can be broken down into several oft-conflated components. Debating the concept as a whole is working at too high a level of abstraction. When one says “do I have free will?” one could actually be asking:
- Are the actions of humans predictable?
- Are humans perfectly predictable with complete knowledge and infinite computational time?
- Will we ever have complete knowledge and infinite computational time necessary to perfectly predict a human?
- Can you reliably manipulate humans with advertising/priming?
- Are humans capable of thinking about and changing their habits through conscious thought?
- Do humans have a non-physical soul that directs our actions and is above physical influences?
I’m sure there are other questions lurking beneath the conceptual quagmire of “free will,” but that’s a good start. These six are not only significantly narrower in scope than “Do humans have free will?” but are also answerable and actionable. Off the cuff:
- Of course.
- Probably.
- Probably not.
- Less than marketers/psychologists would want you to believe but more than the rest of us would like to admit.
- More so than most animals, but less so than we might desire.
- Brain damage and mind-altering drugs would suggest our “spirits” are not above physical influences.
So, in sum, what would a programmer have to say about the question of free will? Nothing. The problem must be broken into manageable pieces, and each element must be examined in turn. The original question is not clear enough for a single answer. Furthermore, he will ignore all claims about the fundamental nature of the universe. You don’t go digging around machine code when you’re making a spreadsheet.
If you want your brain to think about problems larger, older and deeper than your brain, then you should be capable of zooming in and out of the problem – sometimes poring over the minutest details and sometimes blurring your vision to see the larger picture. Sometimes you need to alternate between multiple maps of varying detail for the same territory. Far from being a vice, this is the virtue of Compartmentalization.
Your homework assignment: Does the expression “love is just a chemical” change anything about Valentine’s Day?
Cryptographic Boxes for Unfriendly AI
Related to: Shut up and do the impossible!; Everything about an AI in a box.
One solution to the problem of friendliness is to develop a self-improving, unfriendly AI, put it in a box, and ask it to make a friendly AI for us. This gets around the incredible difficulty of developing a friendly AI, but it creates a new, apparently equally impossible problem. How do you design a box strong enough to hold a superintelligence? Let's suppose, optimistically, that researchers on friendly AI have developed some notion of a certifiably friendly AI: a class of optimization processes whose behavior we can automatically verify will be friendly. Now the problem is designing a box strong enough to hold an unfriendly AI until it modifies itself to be certifiably friendly (of course, it may have to make itself smarter first, and it may need to learn a lot about the world to succeed).
Edit: Many people have correctly pointed out that certifying friendliness is probably incredibly difficult. I personally believe it is likely to be significantly easier than actually finding an FAI, even if current approaches are more likely to find FAI first. But this isn't really the core of the article. I am describing a general technique for quarantining potentially dangerous and extraordinarily sophisticated code, at great expense. In particular, if we developed uFAI before having any notion of certifiable friendliness, then we could still use this technique to try and use the uFAI in a very limited way. It allows us to quarantine an AI and force everything it tries to say to us through an arbitrarily complicated filter. So, for example, if we are interested in solving some problem with a unique and verifiably correct answer, then we can ask the AI for this answer and only see the output if it passes the verification. We can also use a quarantined AI in more useful ways, at the expense of introducing some limited channels of influence (for example, if we ask for the solution to a problem which may have several solutions, we allow the AI to influence us by choosing which solution to provide).
Minimally, I hope this scheme at least suggests that putting an AI in a box may not be catastrophic. What to do with a boxed AI is another question. /Edit.
Concretely, suppose someone comes to you with the design of a recursively self-improving AI---call it Hal. Unfortunately, Hal is not certifiably friendly. Maybe it is friendly but not provably so, but more likely it is ambivalent to the human condition and is likely to destroy the universe if you just tell it to make a friendly AI. Nevertheless, you think Hal is smart enough to solve the friendly AI problem for you. All you need to do is give it access to all of the information about the world it wants and a couple of hours to run; after that, it should output a modified version of itself which is certifiably friendly.
You almost run Hal and ask it to design a friendly AI for you. After all, if you just run Hal and check to see if it outputs a certifiably friendly AI, what can it do other than refuse? But after some more thought you realize this is overly optimistic. Maybe you can avoid giving it an output channel that a human will have to read, thereby eliminating the most obvious failure mode of a boxed AI, but you still have to give it access to a computer to run on. Computation produces electromagnetic radiation, heat, etc., and it is honestly hard to say what a superintelligence could do with such a channel.
We seem to be faced with an impossible problem. We need to ask Hal to design us a certifiably friendly AI, but we also need to give him absolutely no ability to interact with the world destructively.
There is one field in particular which has a good history of solving obviously impossible problems: computational cryptography. We want to implement the following obviously impossible device. A cage into which we can place Hal so that he can think and examine all of the data we give him, but from which he cannot interact with his environment. In the cage with him is a tamper-proof arbiter implementing our friendliness test. Hal may give the arbiter the code to a single AI; if the arbiter confirms that this AI is certifiably friendly it will kill Hal, replace him with the friendly AI, and then allow the friendly AI to send a single message out of the cage.
To implement this impossible device we co-opt an important cryptographic primitive called public key homomorphic encryption, typically used for the impossible problem of private delegated computation. To get a basic idea you can see the Wikipedia article on homomorphic encryption. I'll give the brief description here (you can skip the next two paragraphs if you are familiar with homomorphic encryption).
A public key encryption scheme has the following properties. Anyone can encrypt a message using a publicly published key, but you cannot efficiently distinguish the encryptions of two different messages unless you know something about the secret key. A fully homomorphic encryption scheme has the additional counterintuitive property that, although you cannot learn anything about the encrypted data without the secret key, you can still perform arbitrary operations on it. In particular, if you have a homomorphic encryption of the source code of a program you can run that program without learning anything about it (except how long the source is), but the output of the program will be homomorphically encrypted. All of this can be done using only the public key.
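As a concrete illustration of the homomorphic property, textbook RSA is multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts. The tiny parameters below are chosen purely for this sketch and are utterly insecure; a fully homomorphic scheme additionally supports addition, and hence arbitrary circuits.

```python
# Toy textbook RSA with tiny, insecure parameters, purely to show the
# homomorphic property: Enc(a) * Enc(b) mod n is an encryption of a * b.
p, q = 61, 53
n = p * q                  # public modulus
e = 17                     # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent (Python 3.8+ modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

c1, c2 = enc(6), enc(7)
c_prod = (c1 * c2) % n     # computed on ciphertexts alone, no secret key
assert dec(c_prod) == 42   # yet it decrypts to 6 * 7
```

Anyone holding only the public key `(n, e)` can perform this multiplication; only the key holder learns the result.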
The existence of a fully homomorphic encryption scheme was completely open until recently, when it was shown by Craig Gentry that such a scheme exists under the assumption that a certain problem on lattices is computationally intractable. Known homomorphic encryption schemes are very inefficient and based on new assumptions, but we can realistically expect improvements in the coming years and decades both in speed and the strength of the necessary assumptions.
If you run a homomorphically encrypted AI, sufficiently isolated from the private key, you get the very strong guarantee that the AI's execution has no computationally detectable effect on the rest of the world. If the AI were able to affect the world in a computationally detectable way, then we would have successfully broken the homomorphic encryption scheme by distinguishing the encrypted execution of the AI from the encrypted execution of some benign program.
Now we need to give the arbiter some means to allow a genuinely friendly AI access to the rest of the world, without giving Hal any illegitimate access. We allow Hal access to an output channel whose outputs are homomorphically encrypted. After some time, we destroy Hal and then homomorphically check to see if he has output the code of a certifiably friendly AI over the channel (we have to decode a single homomorphically encrypted bit to perform this check). If he hasn't, then we do nothing---apparently Hal would rather not give us a friendly AI. If he has output the code of a certifiably friendly AI, we homomorphically run it, obtain a homomorphic encryption of its output, and then decrypt only this output.
That's the entire scheme. Whenever you have some code whose execution itself might do incredible damage, this precaution allows you to execute it in perfect safety (the main caveat is that you need an automatic way to screen the output which doesn't give the algorithm enough room to possibly do harm).
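The structure of the scheme, with the cryptography mocked out, looks roughly like this. Everything here is a hypothetical sketch: `MockHE` merely wraps values to mark them "encrypted", standing in for a real homomorphic scheme, and the AI and friendliness verifier are placeholder lambdas.

```python
class MockHE:
    """Stand-in for a homomorphic scheme: tags values as 'encrypted' and
    only lets the key holder open them. No actual cryptography here."""

    class Ciphertext:
        def __init__(self, value):
            self._value = value  # hidden from anyone without the key

    def encrypt(self, value):
        return MockHE.Ciphertext(value)

    def evaluate(self, fn, ct):
        # Homomorphic evaluation: run fn "under encryption".
        return MockHE.Ciphertext(fn(ct._value))

    def decrypt(self, ct):
        return ct._value

def run_boxed(he, ai_program, is_certifiably_friendly):
    """Run the encrypted AI, decrypt only a single pass/fail bit, and
    reveal the output itself only if verification passed."""
    ct_in = he.encrypt(None)
    ct_out = he.evaluate(lambda _: ai_program(), ct_in)
    ct_verdict = he.evaluate(is_certifiably_friendly, ct_out)
    if he.decrypt(ct_verdict):   # the one bit we ever decode
        return he.decrypt(ct_out)
    return None                  # otherwise, we learn nothing more

he = MockHE()
# Hypothetical stand-ins for the AI's output and the arbiter's check:
assert run_boxed(he, lambda: "friendly-AI-source", lambda s: True) == "friendly-AI-source"
assert run_boxed(he, lambda: "paperclipper", lambda s: False) is None
```

The point of the mock is only to show where the single decrypted bit sits in the control flow; all of the security, of course, comes from the real scheme it stands in for.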
Note: If you are really exceptionally paranoid you may be able to obtain even better guarantees by developing more powerful cryptographic techniques tailored for this application. In particular, the secret key used by the encryption procedure may be leaked to a sufficiently powerful AI through a side channel (the computations involving the key typically leak some data in the form of radiation, which might be recovered from sufficiently exhaustive sensor data by a sufficiently powerful AI). By doing these computations on a quantum computer you can probably guarantee that no data was leaked. If you want to be even more fantastically paranoid (i.e., if you want a formal proof of security, which is a rare treat in real problems) you will actually have to shred the key, guaranteeing that an AI will have to interact with the shredded key to recover it. If you do this, you have to use another new primitive to implement the final revelation of the friendly AI's message.
If you have any concerns about the security of this protocol, I would be happy to try and defend it and would be surprised and interested if I failed. Whether or not such a cryptographic box is really an interesting or important object is another question. (It is interesting to me as an unexpected application of cryptography).
Isolated AI with no chat whatsoever
Suppose you make a super-intelligent AI and run it on a computer. The computer has NO conventional means of output (no connections to other computers, no screen, etc). Might it still be able to get out / cause harm? I'll post my ideas, and you post yours in the comments.
(This may have been discussed before, but I could not find a dedicated topic)
My ideas:
-manipulate current through its hardware, or better yet, through the power cable (a ready-made antenna) to create electromagnetic waves to access some wireless-equipped device. (I'm no physicist, so I don't know whether certain frequencies would be hard to produce)
-manipulate usage of its hardware (which likely makes small amounts of noise naturally) to approximate human speech, allowing it to communicate with its captors. (This seems even harder than the 1-line AI box scenario)
-manipulate usage of its hardware to create sound or noise to mess with human emotion. (To my understanding tones may affect emotion, but not in any way easily predictable)
-also, manipulating its power use will cause changes in the power company's database. There doesn't seem to be an obvious exploit there, but it IS external communication, for what it's worth.
Let's hear your thoughts! Lastly, as in similar discussions, you probably shouldn't come out of this thinking, "Well, if we can just avoid X, Y, and Z, we're golden!" There are plenty of unknown unknowns here.
What does the world look like, the day before FAI efforts succeed?
TL;DR: let's visualize what the world looks like if we successfully prepare for the Singularity.
I remember reading once, though I can't remember where, about a technique called 'contrasting'. The idea is to visualize a world where you've accomplished your goals, and visualize the current world, and hold the two worlds in contrast to each other. Apparently there was a study about this; the experimental 'contrasting' group was more successful than the control in accomplishing its goals.
It occurred to me that we need some of this. Strategic insights about the path to FAI are neither robust nor likely to be highly reliable. And in order to find a path forward, you need to know where you're trying to go. Thus, some contrasting:
It's the year 20XX. The time is 10 AM, on the day that will thereafter be remembered as the beginning of the post-Singularity world. Since the dawn of the century, a movement rose in defense of humanity's future. What began with mailing lists and blog posts became a slew of businesses, political interventions, infrastructure improvements, social influences, and technological innovations designed to ensure the safety of the world.
Despite all odds, we exerted a truly extraordinary effort, and we did it. The AI research is done; we've laboriously tested and re-tested our code, and everyone agrees that the AI is safe. It's time to hit 'Run'.
And so I ask you, before we hit the button: what does this world look like? In the scenario where we nail it, which achievements enabled our success? Socially? Politically? Technologically? What resources did we acquire? Did we have superior technology, or a high degree of secrecy? Was FAI research highly prestigious, attractive, and well-funded? Did we acquire the ability to move quickly, or did we slow unFriendly AI research efforts? What else?
I had a few ideas, which I divided between scenarios where we did a 'fantastic', 'good', or 'sufficient' job at preparing for the Singularity. But I need more ideas! I'd like to fill this out in detail, with the help of Less Wrong. So if you have ideas, write them in the comments, and I'll update the list.
Some meta points:
- This speculation is going to be, well, pretty speculative. That's fine - I'm just trying to put some points on the map.
- However, I'd like to get a list of reasonable possibilities, not detailed sci-fi stories. Do your best.
- In most cases, I'd like to consolidate categories of possibilities. For example, we could consolidate "the FAI team has exclusive access to smart drugs" and "the FAI team has exclusive access to brain-computer interfaces" into "the FAI team has exclusive access to intelligence-amplification technology."
- However, I don't want too much consolidation. For example, I wouldn't want to consolidate "the FAI team gets an incredible amount of government funding" and "the FAI team has exclusive access to intelligence-amplification technology" into "the FAI team has a lot of power".
- Lots of these possibilities are going to be mutually exclusive; don't see them as aspects of the same scenario, but rather different scenarios.
Anyway - I'll start.
Visualizing the pre-FAI world
- Fantastic scenarios
- The FAI team has exclusive access to intelligence amplification technology, and use it to ensure Friendliness & strategically reduce X-risk.
- The government supports Friendliness research, and contributes significant resources to the problem.
- The government actively implements legislation which FAI experts and strategists believe has a high probability of making AI research safer.
- FAI research becomes a highly prestigious and well-funded field, relative to AGI research.
- Powerful social memes exist regarding AI safety; any new proposal for AI research is met with a strong reaction (among the populace and among academics alike) asking about safety precautions. It is low status to research AI without concern for Friendliness.
- The FAI team discovers important strategic insights through a growing ecosystem of prediction technology; using stables of experts, prediction markets, and opinion aggregation.
- The FAI team implements deliberate X-risk reduction efforts to stave off non-AI X-risks. Those might include a global nanotech immune system, cheap and rigorous biotech tests and safeguards, nuclear safeguards, etc.
- The FAI team implements the infrastructure for a high-security research effort, perhaps offshore, implementing the best available security measures designed to reduce harmful information leaks.
- Giles writes: Large amounts of funding are available, via government or through business. The FAI team and its support network may have used superior rationality to acquire very large amounts of money.
- Giles writes: The technical problem of establishing Friendliness is easier than expected; we are able to construct a 'utility function' (or a procedure for determining such a function) in order to implement human values that people (including people with a broad range of expertise) are happy with.
- Crude_Dolorium writes: FAI research proceeds much faster than AI research, so by the time we can make a superhuman AI, we already know how to make it Friendly (and we know what we really want that to mean).
- Pretty good scenarios
- Intelligence amplification technology access isn't exclusive to the FAI team, but it is differentially adopted by the FAI team and their supporting network, resulting in a net increase in FAI team intelligence relative to baseline. The FAI team uses it to ensure Friendliness and implement strategy surrounding FAI research.
- The government has extended some kind of support for Friendliness research, such as limited funding. No protective legislation is forthcoming.
- FAI research becomes slightly higher-status than today, and additional researchers are attracted to answer important open questions about FAI.
- Friendliness and rationality memes grow at a reasonable rate, and by the time the Friendliness program occurs, society is more sane.
- We get slightly better at making predictions, mostly by refining our current research and discussion strategies. This allows us a few key insights that are instrumental in reducing X-risk.
- Some X-risk reduction efforts have been implemented, but with varying levels of success. Insights about which X-risk efforts matter are of dubious quality, and the success of each effort doesn't correlate well to the seriousness of the X-risk. Nevertheless, some X-risk reduction is achieved, and humanity survives long enough to implement FAI.
- Some security efforts are implemented, making it difficult but not impossible for pre-Friendly AI tech to be leaked. Nevertheless, no leaks happen.
- Giles writes: Funding is harder to come by, but small donations, limited government funding, or moderately successful business efforts suffice to fund the FAI team.
- Giles writes: The technical problem of aggregating values through a Friendliness function is difficult; people have contradictory and differing values. However, there is broad agreement as to how to aggregate preferences. Most people accept that FAI needs to respect values of humanity as a whole, not just their own.
- Crude_Dolorium writes: Superhuman AI arrives before we learn how to make it Friendly, but we do learn how to make an 'Anchorite' AI that definitely won't take over the world. The first superhuman AIs use this architecture, and we use them to solve the harder problems of FAI before anyone sets off an exploding UFAI.
- Sufficiently good scenarios
- Intelligence amplification technology is widespread, preventing any differential adoption by the FAI team. However, FAI researchers are able to keep up with competing efforts to use that technology for AI research.
- The government doesn't support Friendliness research, but the research group stays out of trouble and avoids government interference.
- FAI research never becomes prestigious or high-status, but the FAI team is able to answer the important questions anyway.
- Memes regarding Friendliness aren't significantly more widespread than today, but the movement has grown enough to attract the talent necessary to implement a Friendliness program.
- Predictive ability is no better than it is today, but the few insights we've gathered suffice to build the FAI team and make the project happen.
- There are no significant and successful X-risk reduction efforts, but humanity survives long enough to implement FAI anyway.
- No significant security measures are implemented for the FAI project. Still, via cooperation and because the team is relatively unknown, no dangerous leaks occur.
- Giles writes: The team is forced to operate on a shoestring budget, but succeeds anyway because the problem turns out to not be incredibly sensitive to funding constraints.
- Giles writes: The technical problem of aggregating values is incredibly difficult. Many important human values contradict each other, and we have discovered no "best" solution to those conflicts. Most people agree on the need for a compromise but quibble over how that compromise should be reached. Nevertheless, we come up with a satisfactory compromise.
- Crude_Dolorium writes: The problems of Friendliness aren't solved in time, or the solutions don't apply to practical architectures, or the creators of the first superhuman AIs don't use them, so the AIs have only unreliable safeguards. They're given cheap, attainable goals; the creators have tools to read the AIs' minds to ensure they're not trying anything naughty, and killswitches to stop them; they have an aversion to increasing their intelligence beyond a certain point, and to whatever other failure modes the creators anticipate; they're given little or no network connectivity; they're kept ignorant of facts more relevant to exploding than to their assigned tasks; they require special hardware, so it's harder for them to explode; and they're otherwise designed to be safer if not actually safe. Fortunately they don't encounter any really dangerous failure modes before they're replaced with descendants that really are safe.
Empirical claims, preference claims, and attitude claims
What do the following statements have in common?
- "Atlas Shrugged is the best book ever written."
- "You break it, you buy it."
- "Earth is the most interesting planet in the solar system."
My answer: None of them are falsifiable claims about the nature of reality. They're all closer to what one might call "opinions". But what is an "opinion", exactly?
There's already been some discussion on Less Wrong about what exactly it means for a claim to be meaningful. This post focuses on the negative definition of meaning: what sort of statements do people make where the primary content of the statement is non-empirical? The idea here is similar to the idea behind anti-virus software: Even if you can't rigorously describe what programs are safe to run on your computer, there still may be utility in keeping a database of programs that are known to be unsafe.
Why is it useful to be able to flag non-empirical claims? Well, for one thing, you can believe whatever you want about them! And it seems likely that this pattern-matching approach works better for flagging them than a more constructive definition.
Struck with a belief in Alien presence
Recently I've been struck with a belief in Aliens being present on this Earth. It happened after I watched this documentary (and subsequently several others). My feeling of belief is not particularly interesting in itself - I could be a lunatic or otherwise psychologically dysfunctional. What I'm interested in knowing is to what extent other people, who consider themselves rationalists, feel belief in the existence of aliens on this earth after watching this documentary. Is anyone willing to watch it and then report back?
Another question arising in this matter is how to treat evidence of extraordinary things. Should one require 'extraordinary evidence for extraordinary claims'? I somehow feel that this notion is misguided - it discriminates against evidence prior to observation, and that is not the right time to start discriminating. At most we should assign a very small prior probability and then do some Bayesian updating to get a posterior. Hmm, if no one has seen a black swan, and some Bayesian-thinking person then sees a black swan a) in the distance or b) up close, what will his posterior probability of the existence of black swans be?
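The black-swan question can be made concrete with a small Bayesian update. The numbers below are invented purely for illustration: a low prior on "black swans exist", a distant sighting that is fairly easy to mistake, and a close-up sighting that is much harder to mistake.

```python
def posterior(prior, p_obs_given_true, p_obs_given_false):
    """Bayes' rule: P(hypothesis | observation)."""
    num = p_obs_given_true * prior
    return num / (num + p_obs_given_false * (1 - prior))

prior = 0.01  # invented low prior that black swans exist

# (a) a sighting in the distance: a dark bird is easy to misidentify
far = posterior(prior, p_obs_given_true=0.9, p_obs_given_false=0.1)

# (b) a close-up sighting: misidentification is far less likely
near = posterior(prior, p_obs_given_true=0.99, p_obs_given_false=0.001)

assert prior < far < near < 1.0  # better evidence moves the posterior further
# Note: had the prior been exactly 0, no observation could ever raise it -
# which is why a literal prior of zero is the one thing to avoid.
```

With these made-up numbers, the distant sighting lifts the posterior to roughly 8%, while the close-up sighting lifts it to roughly 91%.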
Voting is like donating thousands of dollars to charity
Summary: People often say that voting is irrational, because the probability of affecting the outcome is so small. But the outcome itself is extremely large when you consider its impact on other people. I estimate that for most people, voting is worth a charitable donation of somewhere between $100 and $1.5 million. For me, the value came out to around $56,000. So I figure something on the order of $1000 is a reasonable evaluation (after all, I'm writing this post because the number turned out to be large according to this method, so regression to the mean suggests I err on the conservative side), and that'd be enough to make me do it.
Moreover, in swing states the value is much higher, so taking a 10% chance at convincing a friend in a swing state to vote similarly to you is probably worth thousands of expected donation dollars, too.
I find this much more compelling than the typical attempts to justify voting purely in terms of signal value or the resulting sense of pride in fulfilling a civic duty. And voting for selfish reasons is still almost completely worthless, in terms of direct effect. If you're on the way to the polls only to vote for the party that will benefit you the most, you're better off using that time to earn $5 mowing someone's lawn. But if you're even a little altruistic... vote away!
Time for a Fermi estimate
Below is an example Fermi calculation for the value of voting in the USA. Of course, the estimates are all rough and fuzzy, so I'll be conservative, and we can adjust upward based on your opinion.
I'll be estimating the value of voting in marginal expected altruistic dollars, the expected number of dollars being spent in a way that is in line with your altruistic preferences.1 If you don't like measuring the altruistic value of the outcome in dollars, please consider making up your own measure, and keep reading. Perhaps use the number of smiles per year, or number of lives saved. Your measure doesn't have to be total or average utilitarian, either; as long as it's roughly commensurate with the size of the country, it will lead you to a similar conclusion in terms of orders of magnitude.
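The shape of the calculation is simple enough to write down. Every number below is a placeholder of my own, not the post's actual estimate; the point is only the multiplication of a tiny probability by a very large stake.

```python
# A hedged sketch of the expected-value estimate. All three inputs are
# assumptions for illustration only.
p_decisive = 1e-7          # assumed chance your single vote flips the outcome
budget_influenced = 1e12   # assumed dollars of spending the outcome steers
fraction_better = 0.001    # assumed fraction spent better under your candidate

expected_value = p_decisive * budget_influenced * fraction_better
# At these placeholder numbers, a vote is "worth" on the order of $100
# in marginal expected altruistic dollars.
```

Swapping in swing-state decisiveness probabilities (orders of magnitude higher) is what drives the much larger figures mentioned above.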
Please don't vote because democracy is a local optimum
Related to: Voting is like donating thousands of dollars to charity, Does My Vote Matter?
And voting adds legitimacy to it.
Thank you.
#annoyedbymotivatedcognition
Request for sympathy; frustrated with Dark Side
It is really quite frustrating to discuss the intersection of physics and free will with a man who is capable of posting this as his opinion:
I regard atomic motions as determined, that is, as exactly defined to an infinite degree of precision, by the laws of physics, with nothing left over or left out of the explanation, and nothing else to explain.
and then, a day later, posts this as summing up mine:
[His] axiom that the motions of atoms are determined by and only by that description of non-deliberate physical reactions to outside forces called physics. [...] This axiom is not just false, it is obviously, outrageously false.
AAAGH! *Make up your damned mind, man!*