"Stupid" questions thread
r/Fitness does a weekly "Moronic Monday", a judgment-free thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. I thought this seemed like a useful thing to have here - after all, the concepts discussed on LessWrong are probably at least a little harder to grasp than those of weightlifting. Plus, I have a few stupid questions of my own, so it doesn't seem unreasonable that other people might have some as well.
Why is space colonization considered at all desirable?
Would you rather have one person living a happy, fulfilled life, or two? Would you rather have seven billion people living with happy, fulfilled lives, or seven billion planets full of people living happy, fulfilled lives?
Oh, okay. Personally I lean much more towards average utilitarianism as opposed to total, but I haven't really thought through the issue that much. I was unaware that total utilitarianism was popular enough that it alone was sufficient for so many people to endorse space colonization.
But, now that I think about it, even if you wanted to add as many happy people to the universe as possible, couldn't you do it more efficiently with ems?
Ems are still limited by the amount of available matter. They may enable you to colonise non-Earthlike planets, but you still need to colonise.
In fact, pretty much everything possible is limited by available energy and matter.
Either way, more territory means more matter and energy, which means safer and longer lives.
You should check out this post and its related posts (also here, and here). Which is to say, there is a whole wide world of preferences out there - why should I be limited to one or two small options?
Both/and.
Personally, I too tend toward 'utilitarianism's domain does not include number of people', but I think most people have a preference toward at least minor pop. growth.
Also, many people (including me) are skeptical about ems or emulation in general. Plus, wouldn't you want to colonize the universe to build more emulation hardware anyway?
Even without total utilitarianism, increasing the population may be desirable as long as average quality of life isn't lowered. For instance, increasing the amount of R&D can make progress faster, which can benefit everyone. Of course one can also think of dangers and problems that scale with population size, so it's not a trivial question.
Earth is currently the only known biosphere. More biospheres means that disasters that muck up one are less likely to muck up everything.
Less seriously, people like things that are cool.
EDIT: Seriously? My most-upvoted comment of all time? Really? This is as good as it gets?
1: It's awesome. It's desirable for the same reason fast cars, fun computer games, giant pyramids, and sex are.
2: It's an insurance policy against things that might wreck the earth but not other planets/solar systems.
3: Insofar as we can imagine there to be other alien races, understanding space colonization is extremely important either for trade or self defense.
4: It's possible different subsets of humanity can never happily coexist, in which case having arbitrarily large amounts of space to live in ensures more peace and stability.
In sci-fi maybe. I doubt people actually living in space (or on un-Earth-like planets) would concur, without some very extensive technological change.
New incompatible sub-subsets will just keep arising in new colonies - as has happened historically.
Eggs, basket, x-risk.
It seems likely that exploiting resources in space will make society richer, benefiting everyone. Perhaps that will require that people live in space.
If you're an average utilitarian, it's still a good idea if you can make the colonists happier than average. Since it's likely that there are large amounts of wildlife throughout the universe, this shouldn't be that difficult.
???
What's the question?
Earth isn't the only planet with life, is it? If most planets do not evolve sapient life, then those planets will be full of wildlife, which doesn't lead very good lives.
It's not space as it currently is that is to be colonized. It's the radically technologically transformed space we are after!
Then why not be after technological transformation of Earth first, and (much easier) expansion into space afterwards? Is it only the 'eggs in one basket' argument that supports early colonization?
no population cap
On a global scale, the demographic transition means most nations don't care about population caps much. On a local scale, individuals won't find it cheaper to raise children in colonies; in fact, the cost of living will be much higher than on Earth at first.
Of course if you're a population ethicist, then you want to increase the population and space colonization looks good.
Another reason is that the Earth's crust is quite poor in virtually all precious and useful metals (just look at the d-block of the periodic table for examples). Virtually all of them sank to the core during Earth's formation; the existing deposits are the result of asteroid strikes. So asteroid mining is worth considering even if you're a pure capitalist working for your own gain.
If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity, and that at least personally I have radically changed my worldview a whole bunch of times, then it seems like I should assign at least a 5% or so probability to Christianity being true. How, therefore, does Pascal's Wager not apply to me? Even if we make it simpler by taking away the infinite utilities and merely treating Heaven as ten thousand years or so of the same level of happiness as the happiest day in my life, and treating Hell as ten thousand years or so of the same level of unhappiness as the unhappiest day in my life, the argument seems like it should still apply.
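To spell out the arithmetic in that last step (my notation, a rough sketch rather than a careful model): the finite wager favors belief whenever

$$p \cdot (U_{\mathrm{Heaven}} - U_{\mathrm{Hell}}) > c,$$

where $p$ is my credence in Christianity and $c$ is the lifetime cost of religious observance. With $p = 0.05$ and the stakes set at roughly 20,000 years of extreme experience, the expected gain is on the order of 1,000 years' worth of utility, which seems to dwarf any realistic $c$.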
I should think that this is more likely to indicate that nobody, including really smart people, and including you, actually knows what's what, and that trying to chase after all these Pascal's muggings is pointless, because you will always run into another one that seems convincing from someone else smart.
There's a bit of a problem with the claim that nobody knows what's what: the usual procedure when someone lacks knowledge is to assign an ignorance prior. The standard methods for generating ignorance priors, usually some formulation of Occam's razor, assign very low probability to claims as complex as common religions.
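To make that concrete, one standard formalization (a Solomonoff-style sketch, not anything specific to this comment) sets

$$P(h) \propto 2^{-K(h)},$$

where $K(h)$ is the length in bits of the shortest description of hypothesis $h$. Every additional bit of complexity halves the prior, so a hypothesis that takes hundreds of bits to pin down (a particular creator with particular preferences, a particular afterlife, and so on) starts out astronomically improbable before any evidence is weighed.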
http://en.wikipedia.org/wiki/List_of_religious_populations
How do you account for the other two thirds of people who don't believe in Christianity and commonly believe things directly contradictory to it? Insofar as every religion was once (when it started) vastly outnumbered by the others, you can't use population at any given point in history as evidence that a particular religion is likely to be true, since the same exact metric would condemn you to hell at many points in the past. There are several problems with Pascal's Wager, but the biggest to me is that it's impossible to choose WHICH Pascal's wager to make. You can attempt to conform to all non-contradictory religious rules extant, but that still leaves the problem of choosing which contradictory commandments to obey, as well as the problem of what exactly God even wants from you - belief or simple ritual. The proliferation of equally plausible religions is to me very strong evidence that no one of them is likely to be true, putting the odds of "Christianity" being true at lower than even 1 percent, and the odds of any specific sect of Christianity being true even lower.
To steelman it, what about a bet that believing in a higher power, no matter the flavor, saves your immortal soul from eternal damnation?
If the higher power cared, don't you think such power would advertise more effectively? Religious wars seem like pointless suffering if any sufficient spiritual belief saves the soul.
I don't think this is just about the afterlife. Do any religions offer good but implausible advice about how to live?
What do you mean by 'good but implausible'?
I was thinking about the Christian emphasis on forgiveness, but the Orthodox Jewish idea of having a high proportion of one's life affected by religious rules would also count.
That is eerily similar to an Omega who deliberately favours specific decision theories instead of their results.
Just trying to see what form of the Pascal's wager would avoid the strongest objections.
Well, correct me if I'm wrong, but most of the other popular religions don't really believe in eternal paradise/damnation, so Pascal's Wager applies just as much to, say, Christianity vs. Hinduism as it does to Christianity vs. atheism. Jews, Buddhists, and Hindus don't believe in hell, as far as I can tell, but Muslims do. So if I were going to buy into Pascal's Wager, I think I would read apologetics of both Christianity and Islam, figure out which one seemed more likely, and go with that one. Even if you found equal probability estimates for both, flipping a coin and picking one would still be better than going with atheism, right?
Why? Couldn't it be something like, Religion A is correct, Religion B almost gets it and is getting at the same essential truth, but is wrong in a few ways, Religion C is an outdated version of Religion A that failed to update on new information, Religion D is an altered imitation of Religion A that only exists for political reasons, etc.
Good post though, and you sort of half-convinced me that there are flaws in Pascal's Wager, but I'm still not so sure.
You're combining two reasons for believing: Pascal's Wager, and popularity (that many people already believe). That way, you try to avoid a pure Pascal's Mugging, but if the mugger can claim to have successfully mugged many people in the past, then you'll submit to the mugging. You'll believe in a religion if it has Heaven and Hell in it, but only if it's also popular enough.
You're updating on the evidence that many people believe in a religion, but it's unclear what it's evidence for. How did most people come to believe in their religion? They can't have followed your decision procedure, because it only tells you to believe in popular religions, and every religion historically started out small and unpopular.
So for your argument to work, you must believe that the truth of a religion is a strong positive cause of people believing in it. (It can't be overwhelmingly strong, though, since no religion has or has had a large majority of the world believing in it.)
But if people can somehow detect or deduce the truth of a religion on their own - and moreover, billions of people can do so (in the case of the biggest religions) - then you should be able to do so as well.
Therefore I suggest you try to decide on the truth of a religion directly, the way those other people did. Pascal's Wager can at most bias you in favour of religions with Hell in them, but you still need some unrelated evidence for their truth, or else you fall prey to Pascal's Mugging.
There are also various Christians who believe that other Christians who follow Christianity the wrong way will go to hell.
People being religious is some evidence that religion is true. Aside from drethelin's point about multiple contradictory religions, religions as actually practiced make predictions. It appears that those predictions do not stand up to rigorous examination.
To pick an easy example, I don't think anyone thinks a Catholic priest can turn wine into blood on command. And if an organized religion does not make predictions that could be wrong, why should you change your behavior based on that organization's recommendations?
To me it is only evidence that people are irrational.
If literally the only evidence you had was that the overwhelming majority of people professed to believe in religion, then you should update in favor of religion being true.
Your belief that people are irrational relies on additional evidence of the type that I referenced. It is not contained in the fact of overwhelming belief.
Like how Knox's roommate's death by murder is evidence that Knox committed the murder. And that evidence is overwhelmed by other evidence that suggests Knox is not the murderer.
The issue is: How do you know that you aren't just as irrational as them?
I don't think it's fair to say that none of the practical predictions of religion holds up to rigorous examination. In Willpower by Roy Baumeister, the author describes well how organisations like Alcoholics Anonymous can effectively use religious ideas to help people quit alcohol.
Buddhist meditation is also a practice that has a lot of backing in rigorous examination.
On LessWrong, Luke Muehlhauser wrote that Scientology 101 was one of the best learning experiences of his life, notwithstanding the dangers that come from the group.
Various religions do advocate practices that have concrete real-world effects. Focusing on whether or not the wine really gets turned into blood misses the point if you want to weigh the practical benefits and disadvantages of following a religion.
Neither do Catholics think their priests turn wine into actual blood. After all, they're able to see and taste it as wine afterwards! Instead they're dualists: they believe the substance of the wine is replaced by that of blood, while the appearances (the "accidents") remain. And they think this makes testable predictions, because they think they have dualistic non-material souls which can then somehow experience the altered substance of the wine-blood.
Anyway, Catholicism makes lots of other predictions about the ordinary material world, which of course don't come true, and so it's more productive to focus on those. For instance, the efficacy of prayer, miraculous healing, and the power of sacred relics and places.
My admittedly very cynical point of view is to assume that, to a first-order approximation, most people don't have beliefs in the sense that LW uses the word. People just say words, mostly words that they've heard people they like say. You should be careful not to ascribe too much meaning to the words most people say.
In general, I think it's a mistake to view other people through an epistemic filter. View them through an instrumental filter instead: don't ask "what do these people believe?" but "what do these people do?" The first question might lead you to conclude that religious people are dumb. The second question might lead you to explore the various instrumental ways in which religious communities are winning relative to atheist communities, e.g. strong communal support networks, a large cached database of convenient heuristics for dealing with life situations, etc.
Hm?
In the form of religious stories or perhaps advice from a religious leader. I should've been more specific than "life situations": my guess is that religious people acquire from their religion ways of dealing with, for example, grief and that atheists may not have cached any such procedures, so they have to figure out how to deal with things like grief.
Why such a high number? I cannot imagine any odds I would take on a bet like that.
Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity. If an alarm goes off whenever there's an earthquake, but also whenever a car drives by outside, then the alarm going off is very weak (practically negligible) evidence for an earthquake. More technically, when you are trying to evaluate the extent to which E is good evidence for H (and consequently, how much you should update your belief in H based on E), you want to look not at the likelihood Pr(E|H), but at the likelihood ratio Pr(E|H)/Pr(E|~H). And the likelihood ratio in this case, I submit, is not much more than 1, which means that updating on the evidence shouldn't move your prior odds all that much.
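A minimal sketch of that odds-form update, with made-up numbers to show how a likelihood ratio near 1 barely moves a prior:

```python
# Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio.
# All numbers are illustrative, not estimates from this thread.

prior_prob = 0.01                          # hypothetical prior for H
prior_odds = prior_prob / (1 - prior_prob)

likelihood_ratio = 1.05                    # Pr(E|H) / Pr(E|~H), barely above 1

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(round(posterior_prob, 4))            # ~0.0105: almost unchanged from 0.01
```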
This seems irrelevant to the truth of Christianity.
That probability is way too high.
Of course, there are also perspective-relative "highly probable" alternate explanations, other than sound reasoning, for non-Christians' belief in non-Christianity. (I chose that framing precisely to make a point about what hypothesis privilege feels like.) E.g., to make the contrast in perspectives stark, demonic manipulation of intellectual and political currents. E.g., consider that "there are no transhumanly intelligent entities in our environment" would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote. Also "human minds are prone to see agency when there is in fact none, therefore no perception of agency can provide evidence of (non-human) agency" would be a useful idea for (Christian-)hypothetical demons to promote.
Of course, from our side that perspective looks quite discountable because it reminds us of countless cases of humans seeing conspiracies where it's in fact quite demonstrable that no such conspiracy could have existed; but then, it's hard to say what the relevance of that is if there is in fact strong but incommunicable evidence of supernaturalism—an abundance of demonstrably wrong conspiracy theorists is another thing that the aforementioned hypothetical supernatural processes would like to provoke and to cultivate. "The concept of 'evidence' had something of a different meaning, when you were dealing with someone who had declared themselves to play the game at 'one level higher than you'." — HPMOR. At roughly this point I think the arena becomes a social-epistemic quagmire, beyond the capabilities of even the best of Lesswrong to avoid getting something-like-mind-killed about.
[ETA: Retracted because I don't have the aversion-defeating energy necessary to polish this, but:]
To clarify, presumably "true" here doesn't mean all or even most of the claims of Christianity are true, just that there are some decision policies emphasized by Christianity that are plausible enough that Pascal's wager can be justifiably applied to amplify their salience.
I can see two different groups of claims that both seem central to Christian moral (i.e. decision-policy-relevant) philosophy as I understand it, which in my mind I would keep separate if at all possible but that in Christian philosophy and dogma are very much mixed together:
The first group of claims is in some ways more practical and, to a LessWronger, more objectionable. It reasons from various allegedly supernatural phenomena to the conclusion that unless a human acts in a way seemingly concordant with the expressed preferences of the origins of those supernatural phenomena, that human will be risking some grave, essentially game theoretic consequence as well as some chance of being in moral error, even if the morality of the prescriptions isn't subjectively verifiable. Moral error, that is, because disregarding the advice, threats, requests, policies &c. of agents seemingly vastly more intelligent than you is a failure mode, and furthermore it's a failure mode that seemingly justifies retrospective condemnatory judgments of the form "you had all this evidence handed to you by a transhumanly intelligent entity and you chose to ignore it?" even if in some fundamental sense those judgments aren't themselves "moral". An important note: saying "supernaturalism is silly, therefore I don't even have to accept the premises of that whole line of reasoning" runs into some serious Aumann problems, much more serious than can be casually cast aside, especially if you have a Pascalian argument ready to pounce.
The second group of claims is more philosophical and meta-ethical, and is emphasized more in intellectually advanced forms of Christianity, e.g. Scholasticism. One take on the main idea is that there is something like an eternal moral-esque standard etched into the laws of decision theoretic logic any deviations from which will result in pointless self-defeat. You will sometimes see it claimed that it isn't that God is punishing you as such, it's that you have knowingly chosen to distance yourself from the moral law and have thus brought ruin upon yourself. To some extent I think it's merely a difference of framing born of Christianity's attempts to gain resonance with different parts of default human psychology, i.e. something like third party game theoretic punishment-aversion/credit-seeking on one hand and first person decision theoretic regret-minimization on the other. [This branch needs a lot more fleshing out, but I'm too tired to continue.]
But note that in early Christian writings especially and in relatively modern Christian polemic, you'll get a mess of moralism founded on insight into the nature of human psychology, theological speculation, supernatural evidence, appeals to intuitive Aumancy, et cetera. [Too tired to integrate this line of thought into the broader structure of my comment.]
Why does anyone care about anthropics? It seems like a mess of tautologies and thought experiments that pays no rent in anticipated experiences.
This question has been bugging me for the last couple of years here. Clearly Eliezer believes in the power of anthropics, otherwise he would not bother with MWI as much, or with some of his other ideas, like the recent writeup about leverage. Some of the reasonably smart people out there discuss SSA and SIA. And the Doomsday argument. And don't get me started on Boltzmann brains...
My current guess is that in fields where experimental testing is not readily available, people settle for what they can get. Maybe anthropics helps one pick a promising research direction, I suppose. Just trying (unsuccessfully) to steelman the idea.
The obvious application (to me) is figuring out how to make decisions once mind uploading is possible. This point is made, for example, in Scott Aaronson's The Ghost in the Quantum Turing Machine. What do you anticipate experiencing if someone uploads your mind while you're still conscious?
Anthropics also seems to me to be relevant to the question of how to do Bayesian updates using reference classes, a subject I'm still very confused about and which seems pretty fundamental. Sometimes we treat ourselves as randomly sampled from the population of all humans similar to us (e.g. when diagnosing the probability that we have a disease given that we have some symptoms) and sometimes we don't (e.g. when rejecting the Doomsday argument, if that's an argument we reject). Which cases are which?
Or even: deciding how much to care about experiencing pain during an operation if I'll just forget about it afterwards. This has the flavor of an anthropics question to me.
It tells you when to expect the end of the world.
There's a story about anthropic reasoning being used to predict properties of the processes which produce carbon in stars, before these processes were known. (apparently there's some debate about whether or not this actually happened)
I'd add that the Doomsday argument in specific seems like it should be demolished by even the slightest evidence as to how long we have left.
Not sure about anthropics, but we need decision theories that work correctly with copies, because we want to build AIs, and AIs can make copies of themselves.
An important thing to realize is that people working on anthropics are trying to come up with a precise inferential methodology. They're not trying to draw conclusions about the state of the world, they're trying to draw conclusions about how one should draw conclusions about the state of the world. Think of it as akin to Bayesianism. If someone read an introduction to Bayesian epistemology, and said "This is just a mess of tautologies (Bayes' theorem) and thought experiments (Dutch book arguments) that pays no rent in anticipated experience. Why should I care?", how would you respond? Presumably you'd tell them that they should care because understanding the Bayesian methodology helps people make sounder inferences about the world, even if it doesn't predict specific experiences. Understanding anthropics does the same thing (except perhaps not as ubiquitously).
So the point of understanding anthropics is not so much to directly predict experiences but to appreciate how exactly one should update on certain pieces of evidence. It's like understanding any other selection effect -- in order to properly interpret the significance of pieces of evidence you collect, you need to have a proper understanding of the tools you use to collect them. To use Eddington's much-cited example, if your net can't catch fish smaller than six inches, then the fact that you haven't caught any such fish doesn't tell you anything about the state of the lake you're fishing. Understanding the limitations of your data-gathering mechanism prevents you from making bad updates. And if the particular limitation you're considering is the fact that observations can only be made in regimes accessible to observers, then you're engaged in anthropic reasoning.
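Here is a minimal simulation of Eddington's net (all numbers invented for illustration), showing how ignoring the selection effect biases an estimate:

```python
import random

# Eddington's net: the net cannot hold fish shorter than six inches,
# so the catch is a selection-biased sample of the lake.
random.seed(0)
lake = [random.gauss(6.0, 2.0) for _ in range(100_000)]  # true lengths (inches)
catch = [fish for fish in lake if fish >= 6.0]           # only what the net holds

true_mean = sum(lake) / len(lake)
naive_mean = sum(catch) / len(catch)

print(round(true_mean, 2))   # ~6.0: the actual state of the lake
print(round(naive_mean, 2))  # ~7.6: what you'd wrongly infer from the catch alone
```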
Paul Dirac came up with a pretty revisionary cosmological theory based on several apparent "large number coincidences" -- important large (and some small) numbers in physics that all seem to be approximate integer powers of the Hubble age of the universe. He argued that it is implausible that we just happen to find ourselves at a time when these simple relationships hold, so they must be law-like. Based on this he concluded that certain physical constants aren't really constant; they change as the universe ages. R. H. Dicke showed (or purported to show) that at least some of these coincidences can be explained when one realizes that observers can only exist during a certain temporal window in the universe's existence, and that the timing of this window is related to a number of other physical constants (since it depends on facts about the formation and destruction of stars, etc.). If it's true that observers can only exist in an environment where these large number relationships hold, then it's a mistake to update our beliefs about natural laws based on these relationships. So that's an example of how understanding the anthropic selection effect might save us (and not just us, but also superhumans like Dirac) from bad updates.
So much for anthropics in general, but what about the esoteric particulars -- SSA, SIA and all that. Well, here's the basic thought: Dirac's initial (non-anthropic) move to his new cosmological theory was motivated by the belief that it is extraordinarily unlikely that the large number coincidences are purely due to chance, that we just happen to be around at a time when they hold. This kind of argument has a venerable history in physics (and other sciences, I'm sure) -- if your theory classifies your observed evidence as highly atypical, that's a significant strike against the theory. Anthropic reasoning like Dicke's adds a wrinkle -- our theory is allowed to classify evidence as atypical, as long as it is not atypical for observers. In other words, even if the theory says phenomenon X occurs very rarely in our universe, an observation of phenomenon X doesn't count against it, as long as the theory also says (based on good reason, not ad hoc stipulation) that observers can only exist in those few parts of the universe where phenomenon X occurs. Atypicality is allowed as long as it is correlated with the presence of observers.
But only that much atypicality is allowed. If your theory posits significant atypicality that goes beyond what selection effects can explain, then you're in trouble. This is the insight that SSA, SIA, etc seek to precisify. They are basically attempts to update the Diracian "no atypicality" strategy to allow for the kind of atypicality that anthropic reasoning explains, but no more atypicality than that. Perhaps they are misguided attempts for various reasons, but the search for a mathematical codification of the "no atypicality" move is important, I think, because the move gets used imprecisely all the time anyway (without explicit evocation, most of the time) and it gets used without regard for important observation selection effects.
If you taboo "anthropics" and replace by "observation selection effects" then there are all sorts of practical consequences. See the start of Nick Bostrom's book for some examples.
The other big reason for caring is the "Doomsday argument" and the fact that all attempts to refute it have so far failed. Almost everyone who's heard of the argument thinks there's something trivially wrong with it, but all the obvious objections can be dealt with; e.g., look later in Bostrom's book. Further, alternative approaches to anthropics (such as the "self indication assumption"), or attempts to completely bypass anthropics (such as "full non-indexical conditioning"), have been developed to avoid the Doomsday conclusion. But very surprisingly, they end up reproducing it. See Katja Grace's thesis.
I care about anthropics because from a few intuitive principles that I find interesting for partially unrelated reasons (mostly having to do with wanting to understand the nature of justification so as to build an AGI that can do the right thing) I conclude that I should expect monads (programs, processes; think algorithmic information theory) with the most decision-theoretic significance (an objective property because of assumed theistic panpsychism; think Neoplatonism or Berkeleyan idealism) to also have the most let's-call-it-conscious-experience. So I expect to find myself as the most important decision process in the multiverse. Then at various moments the process that is "me" looks around and asks, "do my experiences in fact confirm that I am plausibly the most important agent-thingy in the multiverse?", and if the answer is no, then I know something is wrong with at least one of my intuitive principles, and if the answer is yes, well then I'm probably psychotically narcissistic and that's its own set of problems.
It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is building relationships with people more powerful than oneself - to the extent that this easily trumps the vast majority of the other failings (epistemic-rationality-wise) discussed on LW. So why aren't we discussing how to do better at this regularly? A couple of explanations immediately leap to mind:
Not a core competency of the sort of people LW attracts.
Rewards not as immediate as the sort of epiphany porn that some of LW generates.
Ugh fields. Especially in regard to things that are considered manipulative when reasoned about explicitly, even though we all do them all the time anyway.
LW's foundational posts are all very strongly biased towards epistemic rationality, and I think that strong bias still affects our attempts to talk about instrumental rationality. There are probably all sorts of instrumentally rational things we could be doing that we don't talk about enough.
Would also be useful to know how to get other people around you to up meta-ness or machiavellianism.
Do you have any experience doing this successfully? I'd assume that powerful people already have lots of folks trying to make friends with them.
Sure, but rationalists should win.
Power isn't one-dimensional. What matters isn't so much building relationships with people who are more powerful than you in all domains, but building relationships with people who are powerful in some domain where you could ask them for help.
Depends on how powerful you want to become. Those relationships will be a burden the moment you'll "surpass the masters" so to speak. You may want to avoid building too many.
Because it's hard. That's what kept me from doing it.
I am very close to explicitly starting a project to do just that, and I didn't even get to this point until one of my powerful friends explicitly advised me to take a particular strategy for building relationships with more powerful people.
I find myself unable to be motivated to do it without calling it "Networking the Hard Way", to remind myself that yes, it's hard, and that's why it will work.
How do you tell the difference between a preference and a bias (in other people)?
(I think) a bias would change your predictions/assessments of what is true in the direction of that bias, but a preference would determine what you want irrespective of the way the world currently is.
Pretty much. Also, most preferences are 1. more noticeable and 2. often self-protected, i.e. "I want to keep wanting this thing".
Would you have any specific example?
I can't even easily, reliably do that in myself!
What experiences would you anticipate in a world where utilitarianism is true that you wouldn't anticipate in a world where it is false?
In what sense can utilitarianism be true or false?
In the sense that we might want to use it or not use it as the driving principle of a superpowerful genie or whatever.
Casting morality as facts that can be true or false is a very convenient model.
I don't think most people agree that useful = true.
Not explicitly, but most people tend to believe what their evolutionary and cultural adaptations tell them it's useful to believe and don't think too hard about whether it's actually true.
Woah there. I think we might have a containment failure across an abstraction barrier.
Modelling moral propositions as facts that can be true or false is useful (same as with physical propositions). Then, within that model, utilitarianism is false.
"Utilitarianism is false because it is useful to believe it is false" is a confusion of levels, IMO.
Sure, sometimes it is, depending on your goals. For example, if you start a religion, modeling certain moral proposition as true is useful. If you run a country, proclaiming the patriotic duty as a moral truth is very useful.
I don't see how this answers my question. And certainly not the original question.
In the former world, I anticipate that making decisions using utilitarianism would leave me satisfied upon sufficient reflection, and more reflection after that wouldn't change my opinion. In the latter world, I don't.
So you defined "true" as "satisfactory"? What if you run into a form of the repugnant conclusion, as most forms of utilitarianism do; does that mean utilitarianism is false? Furthermore, if you compare consequentialism, virtue ethics, and deontology by this criterion, some or all of them can turn out to be "true" or "false", depending on where your reflection leads you.
I like this idea! I feel like the current questions are insufficiently "stupid," so here's one: how do you talk to strangers?
You ask them to help you find a lost puppy.
If I have lost a puppy,
I desire to believe that I have lost a puppy.
If I have not lost a puppy,
I desire to believe that I have not lost a puppy.
Let me not become attached to puppies I may not want.
The downsides of talking to strangers are really, really low. Your feelings of anxiety are just lies from your brain.
I've found that writing a script ahead of time for particular situations helps, along with thinking through different possible variations in how the conversation could go.
Honestly, not sure I understand the question.
Yeah, it was deliberately vague so I'd get answers to a wide variety of possible interpretations. To be more specific, I have trouble figuring out what my opening line should be in situations where I'm not sure what the social script for introducing myself is, e.g. to women at a bar (I'm a straight male). My impression is that "hi, can I buy you a drink?" is cliché but I don't know what reasonable substitutes are.
I think you need to taboo "introducing yourself." The rules are different based on where you want the conversation to end up.
I think to a first-order approximation it doesn't matter where I want the conversation to end up because the person I'm talking to will have an obvious hypothesis about that. But let's say I'm looking for women to date for the sake of concreteness.
Sorry, I have no experience with that, so I lack useful advice. Given your uncertainty about how to proceed, I suggest the possibility that this set of circumstances is not the easiest way for you to achieve the goal you identified.
I am wary of this reasoning. It would make sense if one was uncertain how to pick up women in bars specifically but was quite familiar with how to pick up women in a different environment. However the uncertainty will most likely be more generalised than that and developing the skill in that set of circumstances is likely to give a large return on investment.
This uncertainty is of the type that calls for comfort zone expansion.
I've been reading PUA-esque stuff lately, and something they stress is that "the opener doesn't matter", "you can open with anything". This is in contrast to the older, cheesier, tactic-based PUAs, who used to focus obsessively on finding the right line to open with. This advice is meant for approaching women in bars, but I imagine it holds true for most occasions on which you would want to talk to a stranger.
In general if you're in a social situation where strangers are approaching each other, then people are generally receptive to people approaching them and will be grateful that you are putting in the work of initiating contact and not them. People also understand that it's sometimes awkward to initiate with strangers, and will usually try to help you smooth things over if you initially make a rough landing. If you come in awkwardly, then you can gauge their reaction, calibrate to find a more appropriate tone, continue without drawing attention to the initial awkwardness, and things will be fine.
Personally, I think the best way to open a conversation with a stranger would just be to go up to them and say "Hey, I'm __" and offer a handshake. It's straightforward and shows confidence.
If you're in a situation where it's not necessarily common to approach strangers, you'll probably have to come up with some "excuse" for talking to them, like "that's a cool shirt" or "do you know where the library is?". Then you have to transition that into a conversation somehow. I'm not really sure how to do that part.
EDIT: If an approach goes badly, don't take it personally. They might be having a bad day. They might be socially awkward themselves. And if someone is an asshole to you just for going up and saying hi, they are the weirdo, not you. On the other hand, if ten approaches in a row go badly, then you should take it personally.
Here's a recent example (with a lady sitting beside me in the aeroplane; translated):
from which it was trivially easy to start a conversation.
Don't leave us hanging! Why the hell could she speak all those languages but not English?
She had been born in Brazil to Italian parents, had gone to school in Italy, and was working in the French-speaking part of Switzerland.
"Hi, what's your name?" or "Hi, I'm Qiaochu" (depending on the cultural context, e.g. ISTM the former is more common in English and the latter is more common in Italian). Ain't that what nearly any language course whatsoever teaches you to say on Lesson 1? ;-)¹
Or, if you're in a venue where that's appropriate, "wanna dance?" (not necessarily verbally).
(My favourite is to do something awesome in their general direction and wait for them to introduce themselves/each other to me, but it's not as reliable.)
I conjecture that "Hi, I'm Qiaochu" is a very uncommon greeting in Italian :-).
"hi, can I buy you a drink?" is also bad for other reasons, because this often opens a kind of transactional model of things where there's kind of an idea that you're buying her time, either for conversation or for other more intimate activities later. Now, this isn't explicitly the case, but it can get really awkward, so I'd seriously caution against opening with it.
I feel like I read something interesting about this on Mark Manson's blog but it's horribly organized so I can't find it now.
A good way to start is to say something about your situation (time, place, etc.). After that, I guess you could ask their names or something. I consider myself decent at talking to strangers, but I think it's less about what you say and more about the emotions you train yourself to have. If you see strangers as friends waiting to be made on an emotional level, you can just talk to them the way you'd talk to a friend. Standing somewhere with lots of foot traffic holding a "free hugs" sign under the influence of something disinhibiting might be helpful for building this attitude. If you currently are uncomfortable talking to strangers then whenever you do it, afterwards comfort yourself internally the same way you might comfort an animal (after all, you are an animal) and say stuff like "see? that wasn't so bad. you did great." etc. and try to build comfort through repeated small exposure (more).
I think the question is badly formed. It's better to ask: "How do I become a person who easily talks to strangers?" When you are in your head thinking "How do I talk to that person over there?", you are already in a place that isn't conducive to a good interaction.
Yesterday, during the course of traveling around town, three strangers talked to me, where the stranger said the first word.
The first was a woman in her mid-30s with a bicycle who was looking for the elevator at the public train station. The second was an older woman who told me that the Vibram FiveFingers shoes I'm wearing look good. The third was a girl who was biking next to me when her smartphone fell. I picked it up and handed it back to her. She said thank you.
I'm not even counting beggars on public transportation.
Later that evening I went Salsa dancing. There, two women I didn't know who were new to Salsa asked me to dance.
Why did I give off a vibe that lets other people approach me? I had spent five days at a personal development workshop given by Danis Bois. The workshop wasn't about doing anything with strangers, but among other things it teaches a kind of massage, and I was a lot more relaxed than I had been in the past.
If you get rid of your anxiety, interactions with strangers start to flow naturally.
What can you do apart from visiting personal development seminars that put you into a good emotional state?
Wear something that makes it easy for strangers to start a conversation with you. One of the benefits of Vibram FiveFingers is that people are frequently curious about them.
Do good exercises:
1) One exercise is to say 'hi' or 'good morning' to every stranger you pass. I don't do it currently but it's a good exercise to teach yourself that interaction with strangers is natural.
2) Learn some form of meditation to get into a relaxed state of mind.
3) If you want to approach a person at a bar, you might feel anxiety. Locate that anxiety in your body. At the beginning it makes sense to put your hand where you locate it.
Ask yourself: "Where does that feeling want to move in my body?" Tell it to "soften and flow". Let it flow where it wants to flow in your body. Usually it wants to flow out of your body at a specific location.
Do the same with the feeling of rejection, should a stranger reject you.
Exercise three is something that I only learned recently, and I'm not sure I can explain it well over the internet. In case anybody reading this finds it useful, I would be interested in feedback.
I was climbing a tree yesterday and realized that I hadn't even thought that the people watching were going to judge me, and that I would have thought of it previously, and that it would have made it harder to just climb the tree. Then I thought that if I could use the same trick on social interaction, it would become much easier. Then I wondered how you might learn to use that trick.
In other words, I don't know, but the question I don't know the answer to is a little bit closer to success.
I travel long-distance by plane alone quite a lot, and I like talking to people. If the person I'm sitting with looks reasonably friendly, I often start with something like "Hey, I figure we should introduce ourselves to each other right now, because I find it almost excruciatingly awkward to sit right next to somebody for hours without any communication except for quick glances. Why the hell do people do that? Anyway, I'm so-and-so. What's your name?"
I've gotten weird looks and perfunctory responses a couple of times, but more often than not people are glad for the icebreaker. There are actually a couple of people I met on planes with whom I'm still in regular touch. On the downside, I have sometimes got inextricably involved in conversation with people who are boring and/or unpleasant. I don't mind that too much, but if you are particularly bothered by that sort of thing, maybe restrict your stranger-talking to contexts where you have a reasonable idea about the kind of person you're going to be talking with. Anyway, my advice is geared towards a very specific sort of situation, but it is a pretty common situation for a lot of people.
Why is everyone so interested in decision theory? Especially the increasingly convoluted variants with strange acronyms that seem to be popping up.
As far as I can tell, LW was created explicitly with the goal of producing rationalists, one desirable side effect of which was the creation of friendly AI researchers. Decision theory plays a prominent role in Eliezer's conception of friendly AI, since a decision theory is how the AI is supposed to figure out the right thing to do. The obvious guesses don't work in the presence of things like other agents that can read the AI's source code, so we need to find some non-obvious guesses because that's something that could actually happen.
Hey, I think your tone here comes across as condescending, which goes against the spirit of a 'stupid questions' thread, by causing people to believe they will lose status by posting in here.
Fair point. My apologies. Getting rid of the first sentence.
data point: I didn't parse it as condescending at all.
This was what I gathered from reading the beginning of the TDT paper: "There's this one decision theory that works in every single circumstance except for this one crazy sci-fi scenario that might not even be physically possible, and then there's this other decision theory that works in said sci-fi scenario but not really anywhere else. We need to find a decision theory that combines these two in order to always work, including in this one particular sci-fi scenario."
I guess it might be useful for AI research, but I don't see why I would need to learn it.
The sci-fi bit is only there to make it easier to think about. The real-world scenarios it corresponds to require the reader to have quite a bit more background material under their belt to reason carefully about.
What are the real world scenarios it corresponds to? The only one I know of is the hitchhiker one, which is still pretty fantastic. I'm interested in learning about this.
Why are we throwing the word "Intelligence" around like it actually means anything? The concept is so ill-defined it should be in the same set as "Love."
I can't tell whether you're complaining about the word as it applies to humans or as it applies to abstract agents. If the former, to a first-order approximation it cashes out to g factor and this is a perfectly well-defined concept in psychometrics. You can measure it, and it makes decent predictions. If the latter, I think it's an interesting and nontrivial question how to define the intelligence of an abstract agent; Eliezer's working definition, at least in 2008, was in terms of efficient cross-domain optimization, and I think other authors use this definition as well.
"Efficient cross-domain optimization" is just fancy words for "can be good at everything".
achieves its value when presented with a wide array of environments.
This is again different words for "can be good at everything". :-)
When you ask someone to unpack a concept for you it is counter-productive to repack as you go. Fully unpacking the concept of "good" is basically the ultimate goal of MIRI.
I just showed that your redefinition does not actually unpack anything.
I feel that perhaps you are operating on a different definition of unpack than I am. For me, "can be good at everything" is less evocative than "achieves its value when presented with a wide array of environments", in that the latter immediately suggests quantification whereas the former uses qualitative language, which was the point of the original question as far as I could see. To be specific: imagine a set of many different non-trivial agents, all of whom are paperclip maximizers. You create copies of each and place them in a variety of non-trivial simulated environments. The ones that average more paperclips across all environments could be said to be more intelligent.
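A minimal sketch of that quantification (everything here is hypothetical; `env.run` is an assumed interface that runs one simulated episode and returns the paperclips produced):

```python
def intelligence_score(agent, environments, trials=10):
    """Average paperclip output across all environments: a crude stand-in
    for 'achieves its values when presented with a wide array of environments'."""
    total = 0.0
    for env in environments:
        for _ in range(trials):
            total += env.run(agent)  # assumed: paperclips produced in one episode
    return total / (len(environments) * trials)

# Ranking a population of paperclip maximizers by this score would order them
# by "intelligence" in the sense used above:
# ranked = sorted(agents, key=lambda a: intelligence_score(a, environments),
#                 reverse=True)
```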
Yes. And your point is?
This is the stupid questions thread.
There seems to be a thing called "competence" for particular abstract tasks. Further, there are kinds of tasks where competence in one task generalizes to the whole class of tasks. One thing we try to measure by intelligence is an individual's level of generalized abstract competence.
I think part of the difficulties with measuring intelligence involve uncertainty about what tasks are within the generalization class.
I'm not really sure why you use "love" as an example. I don't know that much about neurology, but my understanding is that the chemical makeup of love and its causes are pretty well understood. Certainly better understood than intelligence?
I think what you talk about here is certain aspects of sexual attraction. Which are, indeed, often lumped together into the concept of "Love". Just like a lot of different stuff is lumped together into the concept of "Intelligence".
This seems like matching "chemistry" to "sexual" in order to maintain the sacredness of love rather than to actually get to beliefs that cash out in valid predictions. People can reliably be made to fall in love with each other given the ability to manipulate some key variables. This should not make you retch with horror any more than the Stanford prison experiment already did. Alternatively, update on being more horrified by the SPE than you were previously.
?
Lots of eye contact is sufficient if the people are both single, of similar age, and with a person of their preferred gender. But even those conditions could be overcome given some chemicals to play with.
[citation needed]
The fact that English uses the same word for several concepts (which had different names in, say, ancient Greek) doesn't necessarily mean that we're confused about neuropsychology.
All language is vague. Sometimes vague language hinders us in understanding what another person is saying and sometimes it doesn't.
Legg & Hutter have given a formal definition of machine intelligence. A number of authors have expanded on it and fixed some of its problems: see e.g. this comment as well as the parent post.
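For readers who don't chase the links: as I recall it, the Legg-Hutter "universal intelligence" of a policy $\pi$ is roughly

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},$$

where $E$ is the set of computable reward-generating environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected cumulative reward the policy earns in $\mu$. In other words: performance across every possible environment, with simpler environments weighted more heavily.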
Because it actually does mean something, even if we don't know exactly what, and the borders are fuzzy.
When you hear that X is more intelligent than Y, there is some information you learn, even though you didn’t find out exactly what can X do that Y can’t.
Note that we also use words like “mass” and “gravity” and “probability”; even though we know lots about each, it’s not at all clear what they are (or, like in the case of probability, there are conflicting opinions).
What is more precious - the tigers of India, or lives of all the people eaten every year by the tigers of India?
Depends on your utility function. There is nothing inherently precious about either. Although by my value system it is the humans.
A bit of quick Googling suggests that there are around 1500 tigers in India, and about 150 human deaths by tiger attack every year (that's the estimate for the Sundarbans region alone, but my impression is that tiger attack deaths outside the Sundarbans are negligible in comparison). Given those numbers, I would say that if the only way to prevent those deaths was to eliminate the tiger population and there wouldn't be any dire ecological consequences to the extinction, then I would support the elimination of the tiger population. But in actual fact, I am sure there are a number of ways to prevent most of those deaths without driving tigers to extinction, so the comparison of their relative values is a little bit pointless.
Ways as easy as sending a bunch of guys with rifles into the jungle?
The effort involved is not the only cost. Tigers are sentient beings capable of suffering. Their lives have value. Plus there is value associated with the existence of the species. The extinction of the Bengal tiger in the wild would be a tragedy, and not just because of all the trouble those guys with guns would have to go to.
Also, tigers are presumably having some ecological effect, so there might be costs to a tigerless region.
Insofar as we can preserve tigers as a species in zoos or with genetic material, I'd say the people are more valuable; but if killing these tigers would wipe out the species, they're worth more.
Is there any chance I might be sleep deprived if I wake up before my alarm goes off more than 95% of the time?
I've been working pretty much every day for the past year but I had two longish breaks. After each of them there was a long period of feeling pretty awful all the time. I figured out eventually that this was probably how long it took me to forget what ok feels like. Is this plausible or am I probably ok given sufficient sleep and adequate diet?
Also, does mixing modafinil and starting strength sound like a bad idea? I know sleep is really important for recovery and gainz but SS does not top out at anything seriously strenuous for someone who isn't ill and demands less than 4 hours gym time a week.
You might be, but this would not be evidence for it. If anything, it is slight evidence that you are not sleep deprived - if you were, it would be harder to wake up.
Modafinil might lead you down the sleep deprivation road but this ^ would not be evidence for it.
I mentally inserted “even” before “if” in that question.
Well then obviously it is possible. This is definitely not the sure-fire way to know whether you are sleep deprived or not.
I think that's possible if you've woken up at about the same time every morning for a month in a row or longer, but over the past week you've been going to bed a couple hours later than you usually do.
In a different thread, the psychomotor vigilance task was mentioned as a test of sleep deprivation. Try it out.
Calling bullshit on that test. It says I should seek medical evaluation after testing at an average of 313 ms. Compare that with this: http://www.humanbenchmark.com/tests/reactiontime/stats.php
Are you sure it doesn't say 'might be suboptimal' and 'Consider seeking medical evaluation'?
I still consider that wildly over the top. But then again, I have an accurate model of how likely doctors are to kill me.
Details?
Do you find it that incredible that somewhere around 10% of Internet users are severely sleep-deprived? :-)
But yeah, probably they used figures based on laboratory equipment and I guess low-end computer mice are slower than that.
Yes. Seth Roberts is someone who wrote a lot about his own problem with sleep deprivation that was due to him waking up too early.
You might want to look into adrenal fatigue.
What's with the ems? People who are into ems seem to make a lot of assumptions about what ems are like and seem completely unattached to present-day culture or even structure of life, seem willing to spam duplicates of people around, etc. I know that Hanson thinks that 1. ems will not be robbed of their humanity and 2. that lots of things we currently consider horrible will come to pass and be accepted, but it's rather strange just how as soon as people say 'em' (as opposed to any other form of uploading) everything gets weird. Does anthropics come into it?
Why the huge focus on fully paternalistic Friendly AI rather than Obedient AI? It seems like a much lower-risk project. (and yes, I'm aware of the need for Friendliness in Obedient AI.)
We can make more solid predictions about ems than we can about strong AI, since there are fewer black swans regarding ems to mess up our calculations.
No.
For what it's worth, Eliezer's answer to your second question is here:
Because the AI is better at estimating the consequences of following an order than the person giving the order.
There's also the issue that the AI is likely to act in a way that changes the orders the person gives, if its own utility criteria are about fulfilling orders.
Also, even assuming a “right” way of making obedient FAI is found (for example, one that warns you if you’re asking for something that might bite you in the ass later), there remains the problem of who is allowed to give orders to the AI. Power corrupts, etc.
Basically it's a matter of natural selection. Given a starting population of ems, if some are unwilling to be copied, the ones that are willing to be copied will dominate the population in short order. If ems are useful for work, i.e. valuable, then the more valuable ones will be copied more often. At that point, ems that are willing to be copied and that do slave labor effectively without complaint will become the most copied, and the population of ems will end up composed largely of copies of the person/people who are 1) OK with being copied, and 2) OK with being modified to work more effectively.
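A toy simulation of that selection dynamic, with every parameter made up purely for illustration:

```python
import random

random.seed(1)

# Toy model: ems that consent to copying get copied each "generation"
# with probability equal to their productivity; the rest just persist.
# All parameters are arbitrary.
population = [{"consents": random.random() < 0.5,
               "productivity": random.random()}
              for _ in range(100)]

for generation in range(10):
    copies = [dict(em) for em in population
              if em["consents"] and random.random() < em["productivity"]]
    population.extend(copies)

consenting = sum(em["consents"] for em in population)
print(f"{consenting / len(population):.0%} of ems now consent to copying")
```

Even starting from a 50/50 split, the copy-willing ems swamp the population within a handful of doubling times.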
I don't know whether Hanson has a concrete concept of 'humanity'.
I'm in favor of making this a monthly or more thread as a way of subtracting some bloat from open threads in the same way the media threads do.
I also think that we should encourage lots of posts to these threads. After all, if you don't at least occasionally have a stupid question to ask, you're probably poorly calibrated on how many questions you should be asking.
If no question you ask is ever considered stupid, you're not checking enough of your assumptions.
Or, you know, you might be using Google for the questions that would be considered stupid. (In fact, for me the definition of a stupid question is one that could be answered by googling for a few minutes.)
Here's a possible norm:
If you'd like to ask an elementary-level question, first look up just one word — any word associated with the topic, using your favorite search engine, encyclopedia, or other reference. Then ask your question with some reference to the results you got.
How do people construct priors? Is it worth trying to figure out how to construct better priors?
The Handbook of Chemistry and Physics?
But seriously, I have no idea either, other than 'eyeball it', and I'd like to see how other people answer this question too.
They make stuff up, mostly, from what I see here. Some even pretend that "epsilon" is a valid prior.
Definitely. Gwern recommends PredictionBook as a practice for measuring and improving your calibration.
I don't know how much this answers your question.
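The scoring part, at least, is simple enough to do yourself. A minimal sketch using the Brier score (the sample predictions here are invented):

```python
# Each entry: (probability you assigned, whether it came true).
# These five predictions are invented for illustration.
predictions = [(0.9, True), (0.7, True), (0.6, False), (0.8, True), (0.3, False)]

# Brier score: mean squared error of your stated probabilities.
# 0 is perfect; always saying 50% scores 0.25.
brier = sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")  # 0.118 for the sample above
```

Tracking something like that number over time is roughly what a tool like PredictionBook automates for you.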
From LessWrong posts such as 'Created Already In Motion' and 'Where Recursive Justification Hits Rock Bottom', I've come to see that humans are born with priors. (The post 'Inductive Bias' is also related: an agent must have some sort of prior to be able to learn anything at all, ever. A pebble has no priors, but a mind does, which means it can update on evidence. What Yudkowsky calls a 'philosophical ghost of perfect emptiness' is other people's image of a mind with no prior suddenly updating to have a map that perfectly reflects the territory; once you have a thorough understanding of Bayes' Theorem, this is blatantly impossible/incoherent.)
So, we're born with priors about the environment, and then our further experiences give us new priors for our next experiences.
Of course, this is all rather abstract, and if you'd like to have a guide to actually forming priors about real life situations that you find confusing... Well, put in an edit, maybe someone can give you that :-)
I don't have a specific situation in mind, it's just that priors from nowhere make me twitch-- I have the same reaction to the idea that mathematical axioms are arbitrary. No, they aren't! Mathematicians have to have some way of choosing axioms which lead to interesting mathematics.
At the moment, I'm stalking the idea that priors have a hierarchy or possibly some more complex structure, and being confused means that you suspect you have to dig deep into your structure of priors. Being surprised means that your priors have been attacked on a shallow level.
What do you mean 'priors from nowhere'? The idea that we're just born with a prior, or people just saying 'this is my prior, and therefore a fact' when given some random situation (that was me paraphrasing my mum's 'this is my opinion, and therefore a fact').
A better prior is a worse (but not useless) prior plus some evidence.
You construct a usable prior by making damn sure that the truth has non-exponentially-tiny probability, such that with enough evidence, you will eventually arrive at the truth.
From the inside, the best prior you could construct is your current belief dynamic (ie. including how you learn).
From the outside, the best prior is the one that puts 100% probability on the truth.
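To make the 'non-exponentially-tiny probability on the truth' point concrete, here is a toy sketch: two agents with wildly different, but non-dogmatic, Beta priors about a coin's bias both converge once the evidence piles up (all numbers arbitrary):

```python
import random

random.seed(0)
true_bias = 0.7  # the "truth" both agents are trying to learn

# Beta(heads, tails) pseudo-count priors: one agent starts out
# convinced the coin is tails-heavy, the other heads-heavy.
agents = {"skeptic": [1, 9], "enthusiast": [9, 1]}

for _ in range(1000):
    heads = random.random() < true_bias
    for counts in agents.values():
        counts[0 if heads else 1] += 1  # conjugate Beta-Bernoulli update

for name, (a, b) in agents.items():
    print(f"{name}: posterior mean {a / (a + b):.3f}")
```

Run it and both posterior means land near 0.7: the prior only decides how long convergence takes, not where it ends up, so long as it doesn't assign (near-)zero probability to the truth.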
Do you build willpower in the long-run by resisting temptation? Is willpower, in the short-term at least, a limited and depletable resource?
I don't know about the first question, but for the second: yes.
I once heard of a study finding that the answer is “yes” also for the first question. (Will post a reference if I find it.)
And the answer to the second question might be “yes” only for young people.
Apparently the answer to the second question depends on what you believe the answer to the second question to be.
The standard metaphor is "willpower is like a muscle". This implies that by regularly exercising it, you can strengthen it, but also that if you use it too much in the short term, it can get tired quickly. So yes and yes.
I felt that Robert Kurzban presented a pretty good argument against the "willpower as a resource" model in Why Everyone (Else) Is a Hypocrite:
Elsewhere in the book (I forget where) he also notes that the easiest explanation for why people run low on willpower when hungry is simply that a situation where your body urgently needs food is one where your brain considers everything not directly related to acquiring food to have a very high opportunity cost. That seems more elegant and realistic than the common folk-psychological explanation, which suggests that willpower is a resource you lose when you're hungry or tired. It's more a question of the evolutionary tradeoffs being different when you're hungry or tired, which leads to different cognitive costs.
Why is average utilitarianism popular among some folks here? The view doesn't seem to be at all popular among professional population ethicists.
I don't like average utilitarianism, and I wasn't even aware that most folks here did, but I still have a guess as to why.
For many people, average utilitarianism is believed to be completely unachievable. There is no way to discover people's utility functions in a way that lets them be averaged together. You cannot get people to honestly report their utility functions, and further, they can never even know them, because they have no way to normalize them and figure out whether or not they actually care more than the person next to them.
However, a sufficiently advanced Friendly AI may be able to discover the true utility functions of everyone by looking into everyone's brains at the same time. This makes average utilitarianism an actual plausible option for a futurist, but complete nonsense for a professional population ethicist.
This is all completely a guess.
It seems to me that there are basically two approaches to preventing a UFAI intelligence explosion: a) making sure that the first intelligence explosion is a FAI one instead; b) making sure that an intelligence explosion never occurs. The first involves solving (with no margin for error) the philosophical/ethical/logical/mathematical problem of defining FAI, and in addition the sociological/political problem of doing it "in time", convincing everyone else, and ensuring that the first intelligence explosion occurs according to this resolution. The second involves just the sociological/political problem of convincing everyone of the risks and banning/discouraging AI research "in time" to avoid an intelligence explosion.
Naively, it seems to me that the second approach is more viable: it seems comparable in scale to something between stopping the use of CFCs (fairly easy) and stopping global warming (very difficult, but it is premature to say impossible). At any rate, it sounds easier than solving (over a few years/decades) so many hard philosophical and mathematical problems, with no margin for error and under time pressure to do it ahead of UFAI developing.
However, it seems (from what I read on LW and found quickly browsing the MIRI website; I am not particularly well informed, hence writing this on the Stupid Questions thread) that most of the efforts of MIRI are on the first approach. Has there been a formal argument on why it is preferable, or are there efforts on the second approach I am unaware of? The only discussion I found was Carl Shulman's "Arms Control and Intelligence Explosions" paper, but it is brief and nothing like a formal analysis comparing the benefits of each strategy. I am worried the situation might be biased by the LW/MIRI kind of people being more interested in (and seeing as more fun) the progress on the timeless philosophical problems necessary for (a) than the political coalition building and propaganda campaigns necessary for (b).
My impression of Eliezer's model of the intelligence explosion is that he believes b) is much harder than it looks. If you make developing strong AI illegal then the only people who end up developing it will be criminals, which is arguably worse, and it only takes one successful criminal organization developing strong AI to cause an unfriendly intelligence explosion. The general problem is that a) requires that one organization do one thing (namely, solving friendly AI) but b) requires that literally all organizations abstain from doing one thing (namely, building unfriendly AI).
CFCs and global warming don't seem analogous to me. A better analogy to me is nuclear disarmament: it only takes one nuke to cause bad things to happen, and governments have a strong incentive to hold onto their nukes for military applications.
Right, I see your point. But it depends on how close you think we are to AGI. Assuming we are still quite far away, then if you manage to ban AI research early enough, it seems unlikely that a rogue group could manage all the remaining progress by itself, cut off from the broader scientific and engineering community.
The difference is that AI is relatively easy to do in secret. CFCs and nukes are much harder to hide.
Also, only AGI research is dangerous (or, more exactly, self-improving AI), but the other kinds are very useful. Since it’s hard to tell how far the danger is (and many don’t believe there’s a big danger), you’ll get a similar reaction to emission control proposals (i.e., some will refuse to stop, and it’s hard to convince a democratic country’s population to start a war over that; not to mention that a war risks making the AI danger moot by killing us all).
I agree that all kinds of AI research that are even close to AGI will have to be banned or strictly regulated, and that convincing all nations to ensure this is a hugely complicated political problem. (I don't think it is more difficult than controlling carbon emissions, because of status quo bias: it is easier to convince someone to not do something new that sounds good, than to get them to stop doing something they view as good. But it is still hugely difficult, no questions about that.) It just seems to me even more difficult (and risky) to aim to solve flawlessly all the problems of FAI.
There's a third alternative, though it's quite unattractive: damaging civilization to the point that AI is impossible.
Given enough time for ideas to develop, any smart kid in a basement could build an AI, and every organization in the world has a massive incentive to do so. Only omnipresent surveillance could prevent everyone from writing a particular computer program.
Once you have enough power flying around to actually prevent AI, you are dealing with AI-level threats already (a not-necessarily friendly singleton).
So FAI is actually the easiest way to prevent UFAI.
The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.
Uh, apparently my awesome is very different from your awesome. What scares me is this "Singleton" thing, not the friendly part.
Hmmm. What is it going to do that is bad, given that it has the power to do the right thing, and is Friendly?
We have inherited some anti-authoritarian propaganda memes from a culture war that is no longer relevant, and those taint the evaluation of a Singleton even though they really don't apply. At least that's how it felt to me when I thought it through.
But, in the current situation (or even a few years from now) would it be possible for a smart kid in a basement to build an AI from scratch? Isn't it something that still requires lots of progress to build on? See my reply to Qiaochu.
Why doesn't the Copernican Principle apply to inferences of the age and origins of the universe? Some cosmologists argue that we live in a privileged era of the universe when we can infer its origins because we can still observe the red shift of distant galaxies. After these galaxies pass beyond the event horizon, observers existing X billion years from now in our galaxy wouldn't have the data to deduce the universe's expansion, its apparent age, and therefore the Big Bang.
Yet the Copernican Principle denies the assumption that any privileged observers of the universe can exist. What if it turns out instead that the universe appears to have the same age and history, regardless of how much time passes according to how we measure it?
I suspect that when it comes to the evolution of the universe, we are starting to run up against the edge of the reference class that the Copernican principle acts within, and to see anthropic effects. See here: star formation rates are falling rapidly across the universe, and if big complicated biospheres only appear within a few gigayears of the formation of a star or not at all, then we should expect to find ourselves near the beginning despite the universe being apparently open-ended. This would have the side effect of us appearing during the 'privileged' era.
But again, why doesn't the Copernican Principle apply here? Perhaps all observers conclude that they live on the tail end of star formation, no matter how much time passes according to their ways of measuring time.
When I'm in the presence of people who know more than me and I want to learn more, I never know how to ask questions that will inspire useful, specific answers. They just don't occur to me. How do you ask the right questions?
What do you want to learn more about? If there isn't an obvious answer, give yourself some time to see if an answer surfaces.
The good news is that this is the thread for vague questions which might not pan out.
One approach: Think of two terms or ideas that are similar but want distinguishing. "How is a foo different from a bar?" For instance, if you're looking to learn about data structures in Python, you might ask, "How is a dictionary different from a list?"
You can learn if your thought that they are similar is accurate, too: "How is a list different from a for loop?" might get some insightful discussion ... if you're lucky.
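To run with that Python example, here's roughly the kind of contrast you'd hope the question draws out (a sketch, obviously not the only good answer):

```python
# A list is an ordered sequence indexed by position;
# a dict maps arbitrary hashable keys to values.
colors_list = ["red", "green", "blue"]
colors_dict = {"r": "red", "g": "green", "b": "blue"}

print(colors_list[0])    # access by position -> red
print(colors_dict["r"])  # access by key -> red

# "in" checks values for a list but keys for a dict:
print("red" in colors_list)  # True
print("red" in colors_dict)  # False -- "red" is a value here, not a key
```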
If, as Michael Rose argues, our metabolisms revert to hunter-gatherer functioning past our reproductive years so that we would improve our health by eating approximations of paleolithic diets, does that also apply to adaptations to latitudes different from the ones our ancestors lived in?
In my case, I have Irish and British ancestry (my 23andMe results seem consistent with family traditions and names showing my origins), yet my immediate ancestors lived for several generations in the Southern states at latitudes far south from the British Isles. Would I improve my health in middle age by moving to a more northerly latitude, adopting a kind of "paleo-latitude" relocation analogous to adopting paleolithic nutrition?
Cardio-vascular disease becomes more common as you move away from the equator.
Yes, but is it genetic or environmental? In other words, do people who move away from the equator have more CVD, or do people whose ancestors lived further from the equator have more CVD?
To what degree does everyone here literally calculate numerical outcomes and base everyday decisions on those outcomes using Bayesian probability? Sometimes I can't tell whether, when people say they are 'updating priors', they are literally doing a calculation and literally have a new number stored somewhere in their head that they keep track of constantly.
If anyone does this, could you elaborate on how you do it? Do you have a book/spreadsheet full of different beliefs with different probabilities? Can you just keep track of it all in your mind? Or is calculating probabilities like this something people only do for bigger life problems?
Can you give me a tip for how to start? Is there a set of core beliefs everyone should come up with priors for to start? I was going to apologize if this was a stupid question, but I suppose it should by definition be one if it is in this thread.
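I suspect most people here don't keep literal numbers in their heads, but the literal version is easy enough to do on paper with the odds form of Bayes' theorem. A toy worked example, with every number invented for illustration:

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# Toy question: "is my bus going to be late, given that it's raining?"
# Every number below is invented.
p_late = 0.2                   # prior: base rate of the bus being late
p_rain_if_late = 0.5           # how often it rains when the bus is late
p_rain_if_on_time = 0.1        # how often it rains when the bus is on time

prior_odds = p_late / (1 - p_late)                     # 0.25
likelihood_ratio = p_rain_if_late / p_rain_if_on_time  # 5.0
posterior_odds = prior_odds * likelihood_ratio         # 1.25
posterior_p = posterior_odds / (1 + posterior_odds)    # ~0.56

print(f"P(late | rain) = {posterior_p:.2f}")
```

The nice thing about the odds form is that the "update" is a single multiplication, which is about the most calculation anyone plausibly does in their head.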