Gun Control: How would we know?
I don't know how to keep this topic away from http://lesswrong.com/lw/gw/politics_is_the_mindkiller/ , so I'm just going to exhort everyone to try to keep this about rationality and not about politics as usual. I myself have strong opinions here, which I'm deliberately squelching.
So I got to thinking about the issue of gun control in the wake of a recent school shooting in the US, specifically from the POV of minimizing presumed-innocents getting randomly shot. Please limit discussion to that *specific* issue, or we'll be here all year.
My question is not so much "Is strict gun control or lots of guns better for us [in the sole context of minimizing presumed-innocents getting randomly shot]?", although I'm certainly interested in knowing the answer to that; I suspect that if that question were answerable, we as a culture wouldn't still be arguing about it.
Let's try a different question, though: how would we know?
That is, what non-magical statistical evidence could someone give that would actually settle the question reasonably well (let's say, at about the same level as "smoking causes cancer", or so)?
As a first pass I looked at http://en.wikipedia.org/wiki/List_of_countries_by_intentional_homicide_rate and http://en.wikipedia.org/wiki/List_of_countries_by_firearm-related_death_rate and noted that the US, which is famously kind of all about the guns, has significantly higher rates than other first-world countries. I had gone into this with a deliberate desire to win, in the Less Wrong sense, so I accepted that this speaks strongly against my personal beliefs (my default stance is that all teachers should have concealed carry permits and mandatory shooting-range time requirements), and was about to update (well, utterly obliterate) those beliefs, when I went: "Now, hold on. In the context of first-world countries, the US has relatively lax gun control, and we seem to rather enjoy killing each other. How do I know those are causally related, though? Isn't it just as likely that, for example, we have all the homicidally crazy people, and that that leads to both of those things? It doesn't seem to be the case that, say, the UK has large-scale secret hoarding of guns; if that were the case, they'd be closer to us in gun-related homicides, I would think. But just because it didn't happen in the UK doesn't mean it wouldn't happen here."
At that point I realized that I don't know, even in theory, how to tell what the answer to my question is, or what evidence would be strong evidence for one position or the other. I am not strong enough as a rationalist or a statistician.
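The "common cause" worry above can at least be made concrete with a toy simulation. This is not real data; every number here is made up, and the hidden "violence-proneness" variable is purely hypothetical. The point is only to show that a confounder can produce a strong correlation between two variables that have zero direct causal effect on each other:

```python
import random

random.seed(0)

# Toy model: a hidden common cause ("violence-proneness" of a country)
# drives BOTH gun prevalence AND homicide rate. Guns have zero direct
# effect on homicides here by construction.
countries = []
for _ in range(1000):
    violence = random.gauss(0, 1)              # hidden confounder
    guns = violence + random.gauss(0, 1)       # gun prevalence tracks it
    homicides = violence + random.gauss(0, 1)  # homicide rate tracks it too
    countries.append((guns, homicides))

# Pearson correlation, computed by hand to stay dependency-free.
n = len(countries)
mg = sum(g for g, _ in countries) / n
mh = sum(h for _, h in countries) / n
cov = sum((g - mg) * (h - mh) for g, h in countries) / n
sg = (sum((g - mg) ** 2 for g, _ in countries) / n) ** 0.5
sh = (sum((h - mh) ** 2 for _, h in countries) / n) ** 0.5
r = cov / (sg * sh)
print(round(r, 2))  # strongly positive, despite zero causal effect
```

So a raw cross-country correlation, on its own, can't distinguish "guns cause homicides" from "something else causes both", which is exactly why the Wikipedia tables above don't settle the question by themselves.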
So, I thought I'd ask LW, which is full of people better at those things than I am. :)
Have at.
-Robin
Empirical Sleep Time
I'm thinking that it should be possible to decide when to sleep based on reduced performance.
Can anyone suggest a tool for that purpose? Perhaps some reaction time testing software?
I guess I would have to track myself during the day to make a baseline, which is fine.
But without some sort of test I end up staying up way past the point of effectiveness, which is a waste of my time.
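In case nobody has a polished tool to suggest: a bare-bones terminal reaction-time tester is only a few lines of Python. This is a sketch, not a recommendation; the function names are mine, and keyboard-plus-terminal timing is crude (tens of milliseconds of noise), so it's only useful for comparing against your own baseline, not for absolute numbers:

```python
import random
import time

def reaction_trial() -> float:
    """One trial: wait a random delay, then time how fast Enter is pressed."""
    input("Press Enter to arm the trial, then wait for GO...")
    time.sleep(random.uniform(1.0, 3.0))  # random delay so you can't anticipate
    start = time.monotonic()
    input("GO! (press Enter)")
    return time.monotonic() - start

def run_session(trials: int = 5) -> float:
    """Average several trials; log this against your daytime baseline."""
    times = [reaction_trial() for _ in range(trials)]
    return sum(times) / len(times)

# Interactive usage (run in a terminal):
#   avg = run_session(5)
#   print(f"mean reaction time: {avg * 1000:.0f} ms")
```

Run a session every hour or two while alert to build the baseline, then go to bed once evening sessions come in consistently slower.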
-Robin
What deserves cryocide?
So being signed up for cryonics shifts my views on life and death, as might be expected.
In particular, it focuses my views of success on the preservation of my brain (everything else too, just in case, but especially the brain). This means, obviously, not just the lump of meat but also the information within it.
If I'm suffering a degenerative disease affecting that meat or its information, I'm going to want to cryocide while the information is still preservable (and the idea of living through slow brain death doesn't thrill me regardless).
What I don't know is: given the current state of science, what sorts of things do I need to be worried about?
In particular, I'm wondering about Alzheimer's; does it appear to be damage to the information, or to the retrieval mechanism?
But any other such diseases interest me in this context.
Thanks!
-Robin
Looking for some pieces of transhumanist fiction
The first one: [EDIT: Found it! Thanks to RolfAndreassen]
This is turning out to be *really* hard to find; I would have made a point of saving it if I'd expected no-one else to have heard of it. I need to make a page of all the weird singularity/transhuman fiction I've read. -_-
Anyways, what I can remember:
I think I read this on the web. I *think* it was a short story; at most novelette length. This was within the last 5 years or so.
Basically, it's the future; humans have done lots and lots of intelligence enhancement, and each generation is smarter than the one before. Then we find a planet with alien ruins, and a ship is sent there. For reasons I can no longer remember, one of the people on the ship (female?) tries to destroy the ruins, and another (pretty sure male) tries to stop her. The destroyer is younger, and hence smarter, than the protector, so he ends up taking lots of heavy-side-effect nootropics to keep up with her. The war is fought almost entirely with 3-D printed robots from the ship's machine shops.
The emphasis is very much on intelligence: that a standard deviation of IQ is going to determine the results of any strategy game (probably mostly true, given equal experience) and that war is basically that (also mostly true in this case, since the robots won't freak out and run).
I particularly remember a scene in which the main character takes a drug that will raise his IQ by 20 points or so for a while, at the expense of 12+ hours of something very bad (insanity? unconsciousness? I can't remember). Also waves of (remote-controlled?) robots fighting on the surface of the planet below.
The second one: [EDIT: Found! Thanks to nazgulnarsil]
Humans develop AIs, which are fully benevolent and try to help/protect humanity. Problems develop with the sun; the AIs try to fix it but create a horrible ice age, and eventually they just upload everybody and go looking for something better. They decide that stars are too problematic, and park humanity around an interstellar brown dwarf.
One particular AI ship is somewhat eccentric and thinks that protecting humans isn't everything. A group of humans convinces him to take them (or rather, their descendants) to Earth. To prove they are capable of the (extremely long) journey, the ship requires that they live aboard him, without going anywhere, in a functional society for a thousand years. Then he takes them to Earth.
FWIW, I'm trying to make a page of all the singularity/transhuman stuff I've read; it's at http://teddyb.org/robin/tiki-index.php?page=Post-Singularity+And+Transhumanist+Fiction+I%27ve+Enjoyed&no_bl=y (just started).
-Robin