[META] ajax.googleapis.com
Apparently, by not unblocking scripts for "ajax.googleapis.com", I am unable to vote on LW. I generally dislike enabling scripting for domains that are used in many places -- unblocking Google APIs would unblock it everywhere, not just here -- so the result is that I am no longer voting. I suspect that I am not alone in this.
(Apparently I can't post without enabling it either. Looks like I'll have to make an exception and do the script-on-script-off dance after all. Whee.)
Open Thread: September 2011
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
If continuing the discussion becomes impractical, that means you win at open threads; a celebratory top-level post on the topic is traditional.
Anthropics Does Not Work Like That
People around here seem to think that a recent series of near-misses, such as not destroying the world in the Cold War, is evidence in favor of quantum immortality.
This fails to appreciate that anthropic selection bias has no limit on how far back it can make things retroactively seem to have happened. If, as has been suggested, a majority of the Everett branches from our 1950 destroyed the world, then it is equally true that, among the Everett branches from our 1750 in which someone is still alive in 2010, a majority never contained probably-world-destroying technology at all.
The existence of x-risk near-miss events should be taken as evidence against quantum immortality.
I just donated to the SIAI.
My purpose in writing this is twofold.
First (chronologically: I thought of this earlier than the other), I want to discuss some of the pragmatic points in how I got myself to do it.
The most important thing is that I didn't try to force the decision through with willpower. Instead, I slipped it through with doublethink. I knew perfectly well - and have known for months - that giving money to SIAI was the right thing to do. But I didn't do it. I spent money on things like Minecraft instead.
But somehow I found myself at the donation page, and I didn't think about it. Or, rather, I didn't let myself think about the fact that I was thinking about it. I made a series of expected-value guesstimates aimed at working around my own cognitive limitations.
I chose monthly donation over one-time because $20 monthly sounds like about the same amount of money as $20; past experience with recurring donations suggests that I tend to leave automatic recurring donations in place for about a year or two, so that probably gained me about a factor of 20. Similarly, I chose $20 as the largest amount that wouldn't put me in serious risk of chickening out and not donating anything.
In order to pull this off, I had to avoid thinking certain true thoughts. Numbers like "$240 per year" only drifted through my consciousness just long enough to make the expected-value judgment, and were then discarded quickly so as to avoid setting off my rotten-meat hypervisor.
This was not the first time I decided that I should give money to SIAI. It was the first time I actually did give them money. (Except for that one time with the $1 charity-a-day thing, which actually might have helped with dissolving psychological barriers to the general idea.)
I think this is important.
The second fold of my purpose is to reinforce the behavior using the glowy feeling that comes from having other people know what an awesome person I am.1 Anyone else who's done anything worthwhile should feel free to post in this thread too.
1. It's true. Statistically speaking, I probably saved like a jillion people's lives per dollar. And more-than-doubled quality of life for a zillion more. Let me also note that you can get in on this action.
I know that sounds advertisementy, but... well, that's kind of the point. Practice Dark Arts on yourself for fun and profit.
Trying to better understand futarchy
I'm trying to work out exactly what instruments should be traded for the purposes of a futarchy.
Let the decision be whether to adopt some proposal C; our options are C or ~C. In particular, we wish to know which of EV|C or EV|~C is larger, where EV denotes expected utility according to a utility function agreed upon somewhere offscreen.
For convenience, let our utility U take values in [0,1].
We can create the following primitive instruments (each pays out the indicated quantity if its condition holds, and nothing otherwise):
a. U|C, worth (EV|C) * p(C)
b. U|~C, worth (EV|~C) * p(~C)
c. (1-U)|C, worth (1-(EV|C)) * p(C)
d. (1-U)|~C, worth (1-(EV|~C)) * p(~C)
It's worth pointing out a few compounds we can make by combining these:
a+b is worth EV. c+d is worth 1-EV.
a+c is worth p(C). b+d is worth p(~C).
a+b+c+d is worth 1.
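(A quick numeric sanity check of these compound values, using made-up numbers for EV|C, EV|~C, and p(C) -- the specific values below are arbitrary assumptions for illustration, not anything from a real market:)

    # Arbitrary illustrative numbers (assumptions for this sketch, not real market data).
    ev_c, ev_not_c, p_c = 0.7, 0.4, 0.3   # EV|C, EV|~C, p(C)

    a = ev_c * p_c                   # value of U|C
    b = ev_not_c * (1 - p_c)         # value of U|~C
    c = (1 - ev_c) * p_c             # value of (1-U)|C
    d = (1 - ev_not_c) * (1 - p_c)   # value of (1-U)|~C

    ev = ev_c * p_c + ev_not_c * (1 - p_c)      # unconditional EV
    assert abs((a + b) - ev) < 1e-12            # a+b is worth EV
    assert abs((c + d) - (1 - ev)) < 1e-12      # c+d is worth 1-EV
    assert abs((a + c) - p_c) < 1e-12           # a+c is worth p(C)
    assert abs((b + d) - (1 - p_c)) < 1e-12     # b+d is worth p(~C)
    assert abs((a + b + c + d) - 1.0) < 1e-12   # everything together is worth 1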
I know that I can achieve what I want by establishing two separate markets, one trading a versus a+c and the other trading b versus b+d; the price ratio within each market recovers the corresponding conditional EV, and I can compare the two.
The question is: is it possible in a single market?
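(For the two-market version, here is a minimal sketch of the comparison, assuming we can read off the spot prices of a, a+c, b, and b+d -- the price numbers below are hypothetical placeholders, chosen to be consistent with the check above:)

    # Hypothetical spot prices; in practice these would come from the two markets.
    price_a, price_a_plus_c = 0.21, 0.30    # market 1 trades a against a+c
    price_b, price_b_plus_d = 0.28, 0.70    # market 2 trades b against b+d

    # a is worth (EV|C) * p(C) and a+c is worth p(C), so their ratio recovers EV|C.
    ev_given_c = price_a / price_a_plus_c
    ev_given_not_c = price_b / price_b_plus_d

    # The futarchy decision: adopt C iff its conditional expected utility is higher.
    adopt_c = ev_given_c > ev_given_not_c
    print(f"EV|C = {ev_given_c:.2f}, EV|~C = {ev_given_not_c:.2f}, adopt C: {adopt_c}")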
Raw silicon ore of perfect emptiness
Does building a computer count as explaining something to a rock?
(If we still had open threads, I would have posted this there. As it is, I figure this is better than not saying anything.)
Should I be afraid of GMOs?
I was raised to believe that genetically modified foods are unhealthy to eat and bad for the environment, and I was given a variety of reasons for this, some of which I now recognize as blatantly false (e.g., that the human genetic code is isomorphic to fundamental physical law), and a few of which still seem sort of plausible.
Because of this history, I need to adjust my credence heavily downward from my naive sense of plausibility.
The major reasons I see to believe that GMOs are safe are:
- I would probably think they were dangerous even if they were safe, due to my upbringing.
- In general, when someone opposes a particular field of engineering on the grounds that it's unnatural and dangerous, they're usually wrong.
- It's not quite obvious to me that introducing genetically-engineered organisms to a system is significantly more dangerous than introducing non-native naturally-evolved organisms.
The major reasons I see to believe that GMOs are dangerous are:
- I might believe they were safe even if they were dangerous, due to "yay science" (which was also part of my upbringing).
- We are designing self-replicating things and using them without reliable containment, thereby effectively releasing them into the wild.
So: green goo, yes or no?
Free Will as Unsolvability by Rivals
Nadia wanted to solve Alonzo. To reduce him to a canonical, analytic representation, sufficient to reconfigure him at will. If there was a potential Alonzo within potential-Alonzo-space, say, who was utterly devoted to Nadia, who would dote on her and die for her, an Alonzo-solution would make its generation trivial.
from True Names, by Cory Doctorow and Benjamin Rosenbaum
Warning: this post tends toward the character of mainstream philosophy, in that it relies on the author's intuitions to draw inferences about the nature of reality.
If you are dealing with an intelligence vastly more or less intelligent than yourself, there is no contest. One of you can play the other like tic-tac-toe. The stupid party's values are simply irrelevant to the final outcome.
If you are dealing with an intelligence extremely close to your own -- say, two humans within about five IQ points of each other -- then both parties' values will significantly affect the outcome.
If you are dealing with an intelligence moderately more or less intelligent than yourself, such as a world-class politician or an average eight-year-old child respectively, then the weaker intelligence might be able to slightly affect the outcome.
If we formalize free will as the fact that what we want to do has a causal effect on what we actually do, then perhaps we can characterize the sensation of free will -- the desire to loudly assert in political arguments that we have free will -- as a belief that our values will have a causal effect on the eventual outcome of reality.
This matches the sense that facing a terrifyingly powerful intelligence, one that can solve us completely, strips away our free will. That in turn probably explains the common misconception that free will is incompatible with reductionism: knowing that an explanation of us exists feels like the explanation actually being known by someone, and we don't want to be understood.
It matches the sense that a person's free will can be denied by forcing them into a straitjacket and tossing them in a padded cell. It matches the assumption that not having free will would feel like sitting at the wheel of a vehicle that was running on autopilot and refusing manual commands.
In general, we can distinguish three successive stages at which free will can be cut off:
- The creature can be constructed non-heuristically to begin with; that is, it lacks a utility function.
- The creature can control insufficient resources to be in a winnable state; that is, it is physically helpless.
- The creature can be outsmarted; that is, it has a vastly superior opponent.
Probably the last two, and possibly all three, cannot remain cleanly separated under close scrutiny. But the model has such a deep psychological appeal that I think it must be useful somehow, if only as an intermediate step in easing lay folk into compatibilism, or in predicting and manipulating the vast majority of humans that believe or alieve it.
Lifeism, Anti-Deathism, and Some Other Terminal-Values Rambling
(Apologies to RSS users: apparently there's no draft button, but only "publish" and "publish-and-go-back-to-the-edit-screen", misleadingly labeled.)
You have a button. If you press it, a happy, fulfilled person will be created in a sealed box, and then be painlessly garbage-collected fifteen minutes later. If asked, they would say that they're glad to have existed in spite of their mortality. Because they're sealed in a box, they will leave behind no bereaved friends or family. In short, this takes place in Magic Thought Experiment Land where externalities don't exist. Your choice is between creating a fifteen-minute-long happy life or not.
Do you push the button?
I suspect Eliezer would not, because it would increase the death-count of the universe by one. I would, because it would increase the life-count of the universe by fifteen minutes.
Actually, that's an oversimplification of my position. I actually believe that the important part of any algorithm is its output, additional copies matter not at all, the net utility of the existence of a group of entities-whose-existence-constitutes-utility is equal to the maximum of the individual utilities, and the (terminal) utility of the existence of a particular computation is bounded below at zero. I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits to my primary copy.
(What happens to the last copy of me, of course, does affect the question of "what computation occurs or not". I would subject N out of N+1 copies of myself to torture, but not N out of N. Also, I would hesitate to torture copies of other people, on the grounds that there's a conflict of interest and I can't trust myself to reason honestly. I might feel differently after I'd been using my own fork-slaves for a while.)
So the real value of pushing the button would be my warm fuzzies, which breaks the no-externalities assumption; within the thought experiment as stated, I'm indifferent.
But nevertheless, even knowing about the heat death of the universe, knowing that anyone born must inevitably die, I do not consider it immoral to create a person, even if we assume all else equal.
META: Tiered Discussions
(Edit: It seems lots of people thought this was a terrible idea. I'm keeping the post as it was, though, mostly because I still think it's an interesting experiment and it ought to have been tried at least once somewhere on this site. Also, blah blah something about preserving the historical record so that earlier comments still make sense, whatever.)
You aren't allowed to know what this post says unless you can figure out what LW post this sentence is a clever reference to. The URL of that post is the CAST5 symmetric key for this one. Please help downvote spoilers into oblivion.
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.10 (GNU/Linux)
jA0EAwMCtYf1bHFxvmRgyekK9+VOnIJEKESX8yr4CXk5IGX3rsoyS50Nsc+uCy85
413pFT1XlfX7UpRNjUPIlG5IjcFhMKhk4NUv32KEgBk7rbfCnPqIid6ry4Sb/QNC
RvgOhbTw1YY95+K9KMuZi67D0+Fak14jnL4ZrTTwgzl6dWaJmWnONpCK2hku8n9E
IZNFR5sGdxGdvmHRLvsqiVjZk0NP4ZyqN9bEAMIFOO4HcBISm24UyU46+leopqpE
K0dkirqKSL/7ZOXvk3s5cW9h7SOStUw9bo8mapHrkoPpPLmQmWB7FnJYY4omb5k+
5pGAS2qdXLQYvu1z7e8fyfMPiSqXFmGycM09tq5Un7y7ek63UHKkyIy29VuRa4uT
E88Yop/z0zodHoHruiDJLEN/JiWtitMouvpO/WzN4dJE1zOmQSTAiIWGUnvWUhc6
16m1dAPXR5+N5lkYRvPhi/tpTN96mZbesGBR0qOjheRssBMzRJhDAsZWt/Um/Qu3
au3Uxokq4UojOzJZSXLLYOhuBOa0nxNebp+Hcl/kbLkBe2WLgmVQY4EP8CVsMT0i
5PAYtwLpTmaakO/kwSe/ctd/Dr5KYOC+H68ciCXyRERQQrWczdI1gROPv+bmOp0R
TsQqGsWJhvuCMp4ZAsnj3HV/DhJKihb8F9TXwxjuC3tUghVrzzHUKk1ramGlWlK/
iMB9D2DWovBCbK00jfIQeZxu/kXBHns2DlcFwueGPShdarmtHCaWd/8wqChJ75sS
FupwvLKpVYcwa+hukNJi2BUgkfb/yrn4Y6vwhF+xF9D4MJrJG3mng4u7OnllifrZ
6OdUNYeQEG5P2Qkj1uu9hvJC8PP4vO648JjVEsaR7gtRouH3H1v5cKqElFpRlyED
qgkYdKzCfY96MOTj0b9BgEOw5a728F+rtSyDc+dcLWtFlSeuUc793YUvF6lxng2/
KlUyJ9dCDfSiTq+HsQH/kJHR8bmudomJC+ftnBoGxC5BLuQhC4gCPcaYM8evqWzP
kkR1OhjhWE9H+O2o53t75IIz/P2LUhwrfqGhBgD3PTAmxw94gbOz0Ckj3UjhKnTY
ID22NmrHRRZyJammi6TViHTWRNSLHKifMIGp0/hOC1o=
=sPUV
-----END PGP MESSAGE-----
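(For anyone who has figured out the key and just wants the mechanics: the message is a passphrase-encrypted symmetric PGP block, so GnuPG can decrypt it once you feed it the passphrase. A minimal sketch, assuming the block above is saved as tiered.asc; the file name and placeholder passphrase are mine, not part of the post, and GnuPG 2.x may additionally need --pinentry-mode loopback:)

    import subprocess

    # Save the ASCII-armored block above as "tiered.asc" (arbitrary name).
    # The real passphrase is the URL of the referenced LW post; this placeholder is not it.
    passphrase = "<url-of-the-referenced-post>"

    result = subprocess.run(
        ["gpg", "--batch", "--passphrase-fd", "0", "--decrypt", "tiered.asc"],
        input=(passphrase + "\n").encode(),
        capture_output=True,
    )
    print(result.stdout.decode(errors="replace"))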