Open Thread, Apr. 27 - May 3, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Still More to the Prisoner's Dilemma
After reading http://www.pnas.org/content/early/2012/05/16/1206569109.full.pdf+html , the detail that's caught my attention: "The player with the shortest memory sets the terms of the game." If a strategy remembers 0 turns, and simply Always Cooperates, or Always Defects, or randomly chooses between them, then no matter how clever its opponent might be, the opponent can't do any better than by acting as if it were also a Memory-0 strategy. Tit-for-Tat is a Memory-1 strategy - and despite all the analysis that I've read on it before, I now see it from a new perspective, in that it's one of the few Memory-1 strategies that gracefully falls back to the appropriate Memory-0 strategy when faced with All-C or All-D... and any strategy which tries to implement a more complicated scheme based on longer strings is faced with the fact that Tit-for-Tat simply doesn't remember anything beyond a single turn.
I would like to see if this perspective can be extended to a Memory-2 strategy that falls gracefully back to appropriate Memory-1 strategies such as Tit-for-Tat when faced with Memory-1 strategies, and like Tit-for-Tat, to a suitable Memory-0 strategy when faced with Memory-0 ones.
Does anyone have a link to a suitable set of programs to run some experimental tourneys, and instructions on how to apply them? (If it matters, the OSes I have available are WinXP and Fedora 21.)
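Not a pointer to an existing package, but for quick experiments a tournament harness is small enough to write from scratch on either OS. Here's a minimal Python sketch (the payoff values T=5, R=3, P=1, S=0 are the standard ones; the function names and match length are my own illustration):

```python
# Row player's (score, opponent's score) for each joint move.
# Standard PD payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def all_c(my_hist, opp_hist):
    return 'C'

def all_d(my_hist, opp_hist):
    return 'D'

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then copy the opponent's previous move.
    return opp_hist[-1] if opp_hist else 'C'

def play_match(s1, s2, rounds=200):
    """Play an iterated match; each strategy sees (its own, opponent's) history."""
    h1, h2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1)
        h2.append(m2)
        score1 += p1
        score2 += p2
    return score1, score2

strategies = {'All-C': all_c, 'All-D': all_d, 'TFT': tit_for_tat}
for n1, f1 in strategies.items():
    for n2, f2 in strategies.items():
        print(n1, 'vs', n2, play_match(f1, f2))
```

Adding a Memory-2 candidate is just another function of the two histories, so round-robin tourneys of the kind you describe only need the strategy functions swapped in.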
This is a neat question, but I think programs being successful is not really about gracefully going down a hierarchy. For example, Tit-for-Tat does not take the correct strategy against always-cooperate (If your opponent is always cooperating, you say thank you and always defect). Tit-for-Tat succeeds for much more ecological reasons. I'd say bigger-memory versions of Tit-for-Tat are going to be something like the class of "peaceful, non-exploitable" strategies. Such strategies are not going to be the first to defect, which means they actually don't get that much information about their opponent. I think the lesson of iterated prisoners dilemmas is that you don't need that information anyhow, as long as your strategy occupies a good ecological niche.
There's still some subtlety here. A Memory-0 strategy picks C with probability p and D with probability q, independent of any past results. If you know p and q, you can devise a strategy to optimize your score. The result in the paper is that this new strategy is Memory-0 and that you can't do better by increasing your memory.
The advantage of a longer memory is that, given enough iterations, you can get a good approximation for p and q and so deduce the appropriate Memory-0 strategy. Something like Tit-for-Tat is devised to basically get the same score as its opponent (the opponent can get an advantage of one defection). It's not going to do worse than any individual opponent, but neither is it going to do better. A strategy that remembers the entire game can recognize, say, All-C and exploit it by defecting, which Tit-for-Tat can't do.
A Memory-1 strategy is one where p and q are functions of the previous round. In general, they'll depend both on what it did last round and what the opponent did last round. There are four possible results (C-C, C-D, D-C, D-D), which means that the strategy will have up to four distinct probabilities for cooperation next round. If you can learn those, you can come up with the optimal strategy for playing against it. This strategy can be modeled as a Memory-1 strategy.
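As a sketch of what "learning those four probabilities" looks like in practice, here's a hypothetical estimator (the function name and 'C'/'D' move encoding are my own illustration) that tallies the opponent's cooperation frequency after each of the four joint outcomes:

```python
from collections import defaultdict

def estimate_memory1(opp_moves, my_moves):
    """Estimate P(opponent cooperates | previous joint outcome).

    opp_moves and my_moves are parallel lists of 'C'/'D'; the
    conditioning outcome is (opponent's previous move, my previous move).
    """
    counts = defaultdict(lambda: [0, 0])  # outcome -> [cooperations, total]
    for t in range(1, len(opp_moves)):
        outcome = (opp_moves[t - 1], my_moves[t - 1])
        counts[outcome][1] += 1
        if opp_moves[t] == 'C':
            counts[outcome][0] += 1
    return {o: c / n for o, (c, n) in counts.items() if n > 0}

# Example: a Tit-for-Tat opponent copies our last move, so its estimated
# cooperation probability should be 1 after we play C and 0 after we play D.
my = ['C', 'D', 'C', 'C', 'D', 'D', 'C']
opp = ['C', 'C', 'D', 'C', 'C', 'D', 'D']  # TFT: starts C, then mirrors us
probs = estimate_memory1(opp, my)
print(probs)
```

With enough rounds, the recovered table pins down the opponent's Memory-1 behaviour, which is exactly the information the optimal counter-strategy needs.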
The big difference, I think, is that having a longer memory is helpful if you're in a diverse environment. In any individual game, there's always a strategy with a shorter memory that will do as well as yours. However, the same short-memory strategy will not be optimal against every opponent, while you can use your longer memory to devise the best short-memory strategy for a given match.
I am not clear on how this is the case. It seems to me that the appropriate strategy when faced with any Memory-0 strategy is to go All-D, since your defections would optimize your own score while having no influence on the future behavior of your opponent. Tit for Tat does not default to All-D unless the opponent is All-D.
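The arithmetic behind this is simple enough to spell out. With the standard payoffs (T=5, R=3, P=1, S=0), against an opponent who cooperates with fixed probability p, defecting earns 4p + 1 per round versus 3p for cooperating, so All-D strictly dominates for every p:

```python
# Per-round expected payoff against a Memory-0 opponent that cooperates
# with probability p (standard payoffs T=5, R=3, P=1, S=0).
def expected_payoff(my_move, p):
    if my_move == 'C':
        return 3 * p + 0 * (1 - p)   # R if they cooperate, S if they defect
    return 5 * p + 1 * (1 - p)       # T if they cooperate, P if they defect

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p, expected_payoff('C', p), expected_payoff('D', p))
# Defecting gives 4p + 1 versus 3p for cooperating, so against any
# Memory-0 opponent All-D is the optimal response.
```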
I managed to get my Bayes RPG into such a state that, although it still isn't that interesting as a game, it's moderately entertaining for a brief while until you master it, and seems like it should produce some actual learning.
I had this game as my MSc thesis topic as a way to force myself to work on the game, but I'm now finally starting to get to the point where a) working on it is fun enough that I don't need an external motivator, and b) I'd like to actually graduate. So I'll take what I have so far, run it with a bunch of test subjects, see if they learn anything, and write up the results in my thesis. Then I'll continue working on the game in my spare time.
But I'd like to do the empirical part of the thesis properly. Since LW has a bunch of people who know a lot about statistics, I'd like to ask LW: what kinds of statistical tests would be most appropriate for measuring the results?
To elaborate more on the test setup. I expect to go with the standard approach: have some task that measures understanding of something that we want the game to teach, and split people into an intervention group and control group. Have them complete the task first, dropping anyone who does too well in this pre-test, and then carry out the intervention (i.e. either have them play the game or do some "placebo" task, depending on their group). Then have them re-do a new version of the original task and see whether the intervention group has improved more than the controls have.
I don't want to elaborate too much on what tasks we'll give to the subjects, in case I'll recruit someone reading this to be one of my test subjects. But you can expect the standard mammography/cancer thing to be there, since it's such a classic in the literature, though it's not the thing that I'd expect the game's current state to be the most successful at teaching. There will also be a task on a subject I do expect the game to currently be good at teaching. Then there will be one task that I'd expect to have a bimodal distribution in whether or not the game improves it, since the game doesn't force you to pay attention to it. I'd expect some types of players to pay attention to it with others ignoring it.
Additionally I'd like to test things like:
So, what statistical tests to use here? I don't actually have much experience with statistics. I guess that the naive approach would be to use some (which?) form of ANOVA to test whether the means of pre-test, control intervention, and game intervention populations are the same. And then just do Spearman's correlation between every numerical item that I've collected and see whether any statistically significant items pop up. Is that fine? Neither of those tests is going to pick up on the hypothesized bimodal distribution in the improvement in one of the tasks, but I might not bother with digging too deeply into that.
Also, how do I set the threshold for how good of a performance in the pre-test indicates that the subject already knows this too well to learn anything, and should thus be ignored in the analysis? Or should I even do that in the first place?
Typical analysis of the basic design you described is often something like a mixed 2×2 factorial design: which test (pre- / post-test, within subjects) × intervention (yes/no, between subjects) - the interaction term being evidence for effects of intervention (greater increase between pre- and post- test in intervention condition). Often analysed using ANOVA (participants as random effect), nonparametric equivalents may be more appropriate.
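For what it's worth, the interaction in that 2×2 mixed design reduces to comparing pre-to-post gain scores between the two groups, which can be tested nonparametrically with a permutation test. A stdlib-only Python sketch, with made-up illustrative scores:

```python
import random

random.seed(0)

# Hypothetical pre/post scores (0-10) for two groups -- illustrative only.
game_pre  = [3, 4, 2, 5, 3, 4, 3, 2]
game_post = [6, 7, 5, 8, 6, 6, 7, 5]
ctrl_pre  = [3, 4, 3, 5, 2, 4, 3, 3]
ctrl_post = [4, 4, 3, 6, 3, 5, 3, 4]

# The test x intervention interaction reduces to comparing
# pre-to-post gain scores between groups.
game_gain = [b - a for a, b in zip(game_pre, game_post)]
ctrl_gain = [b - a for a, b in zip(ctrl_pre, ctrl_post)]
observed = sum(game_gain) / len(game_gain) - sum(ctrl_gain) / len(ctrl_gain)

# Permutation test: shuffle group labels and count how often a gap this
# large arises by chance (a nonparametric alternative to the ANOVA F-test).
pooled = game_gain + ctrl_gain
n = len(game_gain)
hits = 0
trials = 10000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
    if diff >= observed:
        hits += 1
p_value = hits / trials
print(observed, p_value)
```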
More complex models are also very appropriate, e.g., adding question type as a factor/predictor rather than treating the different questions as separate dependent variables: this would provide indications of whether improvement after intervention differs for the question types, as you've predicted. This doesn't give you clues about bimodality but at least allows you to more directly test your predictions about relative degree of improvement (if the intervention works).
Correlations between your different dependent measures: feel free by all means - but make sure you examine the characteristics of the distributions rather than just zooming ahead with a matrix of correlation coefficients. And be aware of the multiple comparisons problem, Type I error is very likely.
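As a concrete illustration of how quickly the multiple comparisons problem bites: with k dependent measures there are k(k-1)/2 pairwise correlations, and a (conservative) Bonferroni correction shrinks the per-test threshold accordingly:

```python
# With k dependent measures there are k*(k-1)//2 pairwise correlations.
# Bonferroni divides the per-test significance threshold by that count --
# conservative, but it guards against Type I error inflation.
def bonferroni_alpha(k_measures, alpha=0.05):
    m = k_measures * (k_measures - 1) // 2
    return alpha / m

# E.g. six dependent measures -> 15 correlations -> per-test threshold ~0.0033
print(bonferroni_alpha(6))
```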
Excluding participants on the basis of overly high performance in pretest is appropriate. If possible I suggest setting this criterion before formal testing (even an educated guess is appropriate as this doesn't harm the conclusions you can draw: it can be justified as leaving room for improvement if the intervention works) - or at the very least do this before analysing anything else of the participant's performance to avoid biasing your decision about setting the threshold.
I'm afraid you've said too much already - and if you're looking for people who are naive about the principles involved, LW is probably not a great place for recruiting anyway.
Please feel free to private message me if you'd like clarification of what I've posted - this sort of thing is very much part of my day job.
Thanks a lot!
Could you elaborate on that? Something like "so we're going to test the impact of traditional instruction versus this prototype educational game on your ability to do these tasks" is what I'd have expected to say to the test subjects anyway, and that's mostly the content of what I said here. (Though I do admit that the bit about expecting a bimodal distribution depending on whether or not the subjects pay attention to something was a bit of an unnecessary tipoff here.)
In particular, I expect to have a tradeoff - I can tell people even less than that, and get a much smaller group of testers. Or I can tell people that I've gotten the game I've been working on to a very early prototype stage and am now looking for testers, and advertise that on e.g. LW, and get a much bigger group of test subjects.
It's true that LW-people are much more likely to be able to e.g. solve the mammography example already, but I'd still expect most users to be relatively unfamiliar with the technicalities of causal networks - I was too, until embarking on this project.
I was thinking more about your previous posts on the subject (your development of the game and some of the ideas behind it). The same general reason I'd avoid testing people from my extended lab network, who may not know any details of a current study but have a sufficiently clear impression of what I'm interested in to potentially influence the outcomes (whether intentionally, "helping me out", or implicitly).
When rolling it out for testing, you could always include a post-test which probes people's previous experience (e.g. what they knew in advance about your work & the ideas behind it) & exclude people who report that they know "too much" about the motivations of the study. Could even prompt for some info about LW participation, could also be used to mitigate this issue (especially if you end up with decent samples both in and outside LW).
Ah, that's a good point. And a good suggestion, too.
What question about your game and learning math/probability are you trying to answer?
If you want "an effect" you want a comparison of two arms. But you can only have one arm have an intervention, and the other just be the baseline arm with no treatment at all (or just the 'background treatment' of being a college undergraduate). For example, you can take a set of undergrads, and advertise that you are testing probability aptitude or something, and then the control arm just gets the test, while the test arm gets your game and the test afterwards.
I don't know about your advisor, but I would accept a study like that.
I always found it slightly puzzling that LW folks who get into practical data analysis start with frequentist methods, and not Bayesian ones. Isn't Bayes kind of a LW "thing"?
Starting to think about measuring results via ANOVA et al is, to me, starting at the wrong level of abstraction (I realize I may differ on this from a lot of statisticians). For example, ANOVA can test for the null. What does that null mean? Well, you are interested in some causal effect. Maybe this: E[test result | assigned to game] - E[test result | baseline undergrad].
Or maybe you give them a questionnaire first, and learn how much math they have had (or even which particular classes). Maybe you want to actually look at an effect conditional on math preparation level. Does your game possibly have an 'interaction' with background math sophistication level? Then you need to model that. Then, once you decide on the model, you decide how to test for the null. Or maybe you don't want the null, but the size of the effect itself. Etc., etc.
You think about what you want first, the stats technique afterwards.
Additionally, I'm a little worried about the control group part. I expect it's relatively easy to recruit people to play a game and have them be motivated to play it, but if I tell people that "oh, but you may be randomly assigned to the control condition where you're given more traditional math instruction instead", I expect that that will drop participation. And even the people who do show up regardless may not be particularly motivated to actually work on the problems if they do get assigned to the control condition, especially given that I'm hoping to also educate people who'd usually avoid maths. How insane would it be to just not have a control group?
Pretty insane in my opinion. I can't imagine anything I would grade more harshly than not having a control except ethics violations.
Besides, don't most university psychology experiments with volunteers keep the protocol secret throughout the whole experiment and then debrief at the end? (Or sometimes even lie about the protocol to avoid skewing the results?)
Alternatively, have you thought about doing a crossover-style design?
Take group A and group B. Group A plays your game, and then takes the test. Group B either just takes the test or goes through some traditional education lesson (or whatever else you want for your control) and then takes the test. Next, group A does the traditional education, group B does the game, and both take part 2 of the test.
That way, everyone gets to play the game at least, though it means they're there for twice as long. Do you think you could pitch this in a way that is better than the "Maybe you play a game, maybe you don't" option?
You could potentially derive additional research value from this as well. If group A does better on Test Part 2, then your game would be shown to be a better way of acclimating people to traditional education on the subject or something like that (I'm sure you can draw a better conclusion or phrase this better).
Just some thoughts. Also, make sure you write up a grading rubric ahead of time (or ideally, have someone else do it) and then have someone who knows nothing (or as little as possible) about the experiment (and especially the subjects) grade the answers to avoid researcher bias.
I think there might be reasonable theoretical grounds for it in this case, though? If I was testing say a medical treatment or self-help technique, then yes, there should absolutely be a control group since some people might get better on their own or just do better for a while because the self-help technique gave them extra confidence.
But suppose I give people a pre-test, have them play for some minimum time, and then fill out the post-test when they're done. I don't see much room for random chance to confound things here: either they know the things needed for solving the tasks, or they don't. If they didn't know enough to solve the problems on the first try, they're not going to suddenly acquire that knowledge in between.
To some extent, but usually they still give some brief description of it beforehand, to attract people.
That's a good idea, thanks.
If I get a problem I can't solve, I can Google afterwards and read about how to solve it. Even if you lock me in a dark room, there's the possibility that I recover forgotten knowledge if you give my brain a few hours.
The pretest itself also provides practice. You need a control group, but it would be possible to give the control group nothing to do.
"Traditional math instruction" isn't the only possible control. I don't even think that you need to prove that your game is better than "Traditional math instruction". You could simply take any other game that includes a bit of math as control.
Maybe the Credence game.
Nice idea, thanks.
If I were designing the experiment, I would have the control group be to play a different game instead of having it be maths instructions.
You generally don't want test subjects to know whether they are in the control condition or not. So if you're going to make it be maths instructions, you probably shouldn't tell them what the experiment is designed to test at all, until you're debriefing at the end. If you tell people you are recruiting that you are testing the effects of playing computer games on statistical reasoning, then the people in the control condition won't need to realize that what you're really testing is whether your RPG in particular helps people think about statistics. They can just play HalfLife 2 or whatever you pick for them to play for a few minutes, and then take your tests afterwards.
Do you have access to units of caring?
Are you trying to gain knowledge, get a piece of paper, both, one as a side effect of another?
"actually graduate" versus "see if they learn anything" might hugely inform your process. Off-the-cuff I'm guessing you want to actually graduate first with hopes of nice learning side effects, then see if they learn anything via something that takes longer.
Also a consideration: 3+ arms. Instruction game, instruction non-game, and non-instruction game. Also possibly non-instruction non-game.
To some limited extent.
Correct.
If you didn't have any control group, you wouldn't be able to interpret any improvement between pretest and posttest, if you observed such a pattern: repetition or practice effects could explain any improvement. If you observed no improvement, you wouldn't need a control group because there's no effect to be explained.
Sometimes exploratory methods start out with no-control group pilots just to see if a method is potentially promising (if no hints of effects, don't invest a lot of resources in trying to set up a proper study).
Sometimes studies like this are set up with multiple control groups to address specific concerns that may apply to individual control conditions. Here it seems like two would be the minimum: one in which participants play a different game that is expected to confer no benefit for learning; and another with some kind of more traditional instruction.
In cases like this, recruitment is usually very vague - giving participants a realistic impression of the kinds of tasks they will be asked to do, and definitely no indications about who is assigned to a "control" group.
So, there is this blog/forum which tries to teach people rationality! and science! and proper ways to solve problems! It even hopes to raise the sanity waterline.
And then "oh, but it's inconvenient..." X-/
There's the extent to which I'm willing to go to raise the sanity waterline, and then there's the extent to which I'm willing to go for the sake of possibly improving my grade on a work whose final grade nobody will really ever care about.
That might not be the most productive mindset. If you show that your game works at teaching Bayes, I would expect people to refer to your thesis from time to time.
In this case I don't quite understand what are you asking.
LW is unlikely to know whether your adviser / committee will consider the absence of a control group acceptable enough for this project.
You're right, I wasn't very clear on my objectives. Also, my previous comment was needlessly snarky, for which I apologize.
To be honest, I'm not very sure of what I want, myself. I have reason to believe that they'll consider it acceptable regardless of whether there's a control group or not (this being the CS department and not the psych one), so that's not actually an issue. And I've got some desire to do things "properly", for its own sake, and also because it might be fun to do this well enough to turn it into a real publication. But I'm also swamped with a bunch of other stuff and don't have a chance to spend too much effort on this.
So, I guess I dunno what I'm asking, myself.
How about going to the office hours of a professor in the psychology department and ask them for advice on how to run your study?
Your question made me go d'oh, in that I suddenly remembered that there's an obvious place right nearby to ask help from, both for designing the study and recruiting test subjects. I'll talk with them, thanks.
Speaking very practically - who will be marking/grading your project?
If psychologists aren't going to be looking at it, it's surely going to be fine to do the intervention as best you can and then discuss implications and limitations (including need for control group) in whatever you have to write up. It's not going to be publishable but then you can deal with that later, depending on your circumstances this would probably mean re-doing the study with random assignment to conditions, starting with your project study as a pilot/proof of concept.
It's going to be graded by computer scientists, so yeah, I can get away with a less rigorous protocol than what psychologists would insist on. (And then collaborate with actual psychologists with more resources later on.)
I sometimes come across an interesting scientific paper where the study being done seems easy and/or low-budget enough to make me think "hey, I could do that". (On this occasion, this paper on theanine levels in tea, which I skimmed too quickly the first time to notice that they used big, proper, and presumably expensive lab equipment to measure it, because I was reading it for practical reasons: reading about modafinil amplifying the side-effects of caffeine, while beginning an all-nighter powered by those chemicals.) To me there's a strong "coolness factor" to being someone who's published real research, especially if that also means a finite Erdos number. How easy/difficult is it to become author or co-author of a scientific paper as an amateur, given that you're trying to actually accomplish something and not munchkin for "get my name published as easily as possible"?
Unrelatedly, I'm pretty sure posting under the influence of caffeine and modafinil is a terrible idea for me. I just spent two hours writing and re-writing that question, and I'm only stopping now because I'm giving up on trying to get it right. That's only exacerbating a tendency I already have, but damn.
Sounds like you should take more l-theanine to mitigate that effect
I would have, but I take tea with milk and the cup I had after reading that paper (to check decaf tea still had theanine at all) used up the last of it.
As someone who is published, I can tell you that it depends entirely on the field. One possibility is obtaining data from other people and analyzing it in new ways. There are many free public sources of data, and a lot of researchers will share old data sitting on their hard drives if they think it could result in publications. Off the top of my head, genomics, bioinformatics, microscopy, medical imaging/radiology, and biometrics are all fields where there is a glut of data and people would gladly welcome new, more powerful analysis tools and procedures.
What kind of research do you want to do?
The feelings that are motivating it aren't really specific to any field, so I suppose it's "any research I could plausibly do as an amateur without spending too much resources on it". I'm not specifically planning now to set out and do an easy-for-an-amateur research paper, the main thought driving it is that at some point I might find a question interesting enough to research it on my own, gwern-style, and then if it's plausible to do so I would want to get whatever work I do up to publishable standards for extra nerd cred.
I only also mentioned Erdos numbers because of a tangent thought of "hey, I'm in the rationalsphere, if I got other rationalsphere people involved in such a project and at least one of them was someone with a finite Erdos number, I'd get one too". And then by the time I was typing up that comment I had a tab open on this, although I also can't play an instrument (yet) and have never acted (yet).
What are the health risks of one time MDMA use?
The risks of one-time MDMA use can be roughly sorted into two categories: "Normal Risks" which apply to everyone and "Edge-Case Risks" which only apply to certain people (though it may not always be clear, as we will see, if you are at risk for one of the edge-cases). I will give a very brief and oversimplified description of how MDMA is processed by the body and the effects it has, and then I will describe some of these risks. I didn't have time to put together sources and citations (especially as this was written from memory + fact checking), but my hope is that this will help people understand what the risks are and some of the mechanisms of action so that they can do more informed research into the topic.
Basic Neuroscience Background Information
In the human brain, the place where two neurons meet is called the synapse. There is actually a small gap between the neurons at the synapse, called the synaptic cleft. When a signal traveling down a neuron reaches the end, it causes a release of neurotransmitters into the synaptic cleft. These neurotransmitters fit like keys into keyholes called receptors on the second neuron. Depending on which keyholes/receptors are activated/filled, the second neuron takes some action, like firing or not.
You can visualize this by holding your right fist up and then making a "C" shape around it with your left hand (they should be close, but not touching). While maintaining this, hold your elbows out to the sides. Each of your arms is a neuron. Your forearms are the "axon", which is the path the signals travel down, and the space between your hands is the synaptic cleft. A signal travels from your left elbow to your left fingers, which causes them to release neurotransmitters into the space between your left hand and your right fist. All over your right fist are small keyholes called receptors that are shaped to specifically fit certain neurotransmitters. The neurotransmitters float around in the gap for a while, some of them fitting into the keyholes, and some of them being reabsorbed by your left hand to be used later. (This reabsorption process is called "reuptake", by the way.) If enough of the keyholes on your right fist are filled, then a signal travels from your right fist down to your right elbow, at which there is another synapse and the same thing happens.
Now, on your fist (or the "receiving" end of a synapse) you have a lot of receptors, but not all of these receptors are "on" or "active" at any given time. Your body maintains homeostasis by regulating the amount of active receptors for a certain neurotransmitter in response to the amount of that neurotransmitter that is typically produced. For example, I might produce (or rather, release) slightly less serotonin than you on average, but my body will deal with that by activating more serotonin receptors. In that case, our expected number of serotonin receptors that are filled with serotonin molecules at any given time would be roughly the same, so there would be no difference between us in that respect.
The last piece of background information we'll need is to explain "free radicals". You learned in high-school chemistry how atoms have electron shells/orbitals that "want" to have a specific number of electrons in them. This applies to molecules as well. When a molecule is missing an electron, it goes crazy and tries to steal one from its neighbors. A molecule in this state is called a "free radical". If its pull is stronger than a neighbor's, then it takes one of the neighbor's electrons and calms down, while the neighbor then goes crazy and becomes a free radical itself, looking to steal an electron from another molecule. This causes a chain reaction and can be very damaging to sensitive structures in your body like DNA or receptors in the brain.
The definition of an "antioxidant" is a molecule that can give up at least one of its electrons to a free radical without becoming a free radical itself, thereby ending the chain reaction. Free radicals are produced by pretty much all metabolic functions, so they are unavoidable to a certain extent. Your body uses antioxidants from your diet and endogenous antioxidants to counter this process every second of every day. Typically, it does a pretty good job and your body maintains homeostasis.
MDMA Mechanism of Action and Pharmacokinetics
MDMA causes excess release of Serotonin, Dopamine, and Norepinephrine into the synaptic cleft, though its action is primarily on Serotonin, so that's what I'll mostly focus on here.
So when you take MDMA, your neurons release excess serotonin into the synaptic cleft. This causes more binding to serotonin receptors, which causes more firing of those neurons. This leads to the euphoria associated with MDMA use. However, your body wants to maintain homeostasis, so it starts turning off serotonin receptors. That way, even though there's more serotonin, there are fewer places for it to bind, which reduces activity. Then your MDMA wears off and your neurons are actually releasing less serotonin than before. This, coupled with fewer active serotonin receptors, leads to considerably lower serotonin activity. This is associated with the anhedonia, anxiety, and depression that sometimes follow MDMA usage for a few days. However, nothing I've just described is permanent, so your body will eventually up-regulate your serotonin receptors again, your serotonin stores will replenish, and you'll be back to normal.
However, this excess activity at the serotonin receptors also creates excess free radicals. These free radicals can actually damage serotonin receptors (break them permanently) so that they can never be reactivated. In a single, modest dose, this is likely negligible, though with a single super-dose, it is significant. This highlights the importance of having antioxidants available to your brain throughout the MDMA trip. A study was done on monkeys where they gave them super-doses of MDMA, some with a Vitamin C injection and some without, and found significant brain-damage reduction (as in, blocked the vast majority of brain damage) in the Vitamin C group. (Sorry, I can't find the study right now.)
So, with chronic use or higher doses, this kind of damage becomes more and more of a problem and leads to brain lesions in the serotonergic neural pathways. This is Normal Risk #1.
When your neurons absorb MDMA, it has effects on the Monoamine Oxidase system. Therefore, taking MDMA with Monoamine Oxidase Inhibitors (MAOIs) is extremely dangerous and can be life-threatening. Do not take MDMA if you are on MAOIs (and make damn sure you check whether any meds you are on or recreational drugs you take are MAOIs). This is Edge-Case Risk #1.
Your body breaks down MDMA largely through the CYP450 family of liver enzymes—primarily CYP2D6, but also CYP3A4 and possibly some others to minor extents. Therefore, if these enzymes are inhibited or otherwise not fully functional, your body will not be able to eliminate MDMA (or will do so much more slowly), which can lead to overdose and amplification of the detrimental effects on your brain, cardiovascular system, and more. Inhibition of CYP450 enzymes can be caused by certain medications (like Tagamet/Cimetidine or Ritonavir) or foods like grapefruit and grapefruit juice. This is Edge-Case Risk #2a. Certain people may also have genetically impaired CYP2D6 activity, which can lead to similar complications. This is Edge-Case Risk #2b.
Part 2
Macro-Level Physiological Effects
The increase in dopamine, norepinephrine, and serotonin caused by MDMA causes central nervous system (CNS) stimulation that can raise body temperature, heart rate, and blood pressure. It can also cause increased sweating, insomnia, nausea, and diarrhea, all of which contribute to dehydration. These comprise the Normal Risk #2 family. The fact that MDMA use is often associated with excessive dancing, hot environments, and limited access to water and electrolytes (such as at raves, music festivals, concerts, etc.) compounds these risks. So, if a person has no cardiovascular issues, is mindful of these risks, stays hydrated, makes sure not to drink too much water without electrolytes, and keeps their heart rate and body temperature in check, most of these risks can be avoided. However, for anyone with cardiovascular issues (even ones they don't know about), this becomes Edge-Case Risk #3.
Other/Unknown-Mechanism Psychological Effects
MDMA is a psychotropic drug. As such, it has the possibility of triggering latent psychological disorders such as Bipolar Disorder, Depression, epilepsy, and Schizophrenia, just the same way that LSD, emotional stress, and head trauma do. The mechanism behind this phenomenon is still unknown. This is Edge-Case Risk #4. (I am also not as familiar with this area as most of the others, so I encourage you to do some independent research here.)
MDMA has also been documented to cause acute psychosis. The authors of the case studies I have read dismissed the idea of there being a latent psychological disorder that was simply triggered, because none of the typical milder early symptoms were present before ingestion of MDMA. A clinical psychiatrist that I know also confirmed to me that sometimes these psychotic episodes just happen in conjunction with psychotropic drugs. This is Edge-Case Risk #5. It should be noted that this is considered a rare event.
MDMA has also been observed to cause seizures, though this is rare. It is unknown whether this can be fully attributed to dehydration, drug interactions, drug adulterants, or undiagnosed epilepsy. However, I have personally seen two people have seizures while on MDMA, so take from that what you will. This is Edge-Case Risk #6.
And finally, long-term psychological side-effects such as insomnia and sleep disturbances, anxiety, depression, anhedonia, irritability, and memory-impairment have been found in epidemiological studies (and reviews of such studies) of MDMA users. Unfortunately, epidemiological studies only show correlation, not causation, and many of the results could be attributed to self-medication. Human prospective studies and clinical trials are extremely limited with MDMA due to its legal status and ethics constraints, meaning that the majority of the published information on effects of MDMA is either animal studies, or is epidemiological and typically skewed toward chronic MDMA users. However, that correlation does exist which is at least weak evidence that MDMA use can cause these long-term effects. This is Normal Risk #3
Disclaimer: I am not a doctor. This is not medical advice. This is not intended to encourage or enable any illegal activity. I have posted this information in the interest of harm reduction and scholarly interest alone. If you do anything stupid or if any harm comes to you based on this advice, it's not my fault.
If you found this valuable, leave me a comment. I'd appreciate it. And if you have any followup questions, feel free to ask. Cheers all.
How high is the risk of adulterants with unexpected effects?
There's no way to give a broad estimate on that. It's going to vary widely based on source, geographic location, and form (pressed pills vs powder/crystals/rocks).
Pressed pills or "Ecstasy" pills are more likely to have Amphetamine and/or other stimulants like caffeine and piperazines in addition to the MDMA, as they are intended as "rave drugs" for clubbing and dance parties. (Many users actually prefer amphetamine/caffeine in their pills because MDMA alone is more of a psychedelic than an "upper" and can make people want to sit down, look at the pretty colors, and rub each other instead of dance. Piperazines are typically considered "bad" adulterants, even by the crowd who likes amphetamine, and can be very dangerous, especially when combined with other drugs.)
Sometimes, product sold as MDMA (or Ecstasy) will not contain MDMA at all. Common drugs sold as MDMA are MDA (a metabolite of MDMA with similar effects), Methylone, and BZP (a piperazine), though there are many others depending on your geographic location and source.
Regarding geographic location, you can often find reports on government websites. For example, in the USA, I believe the DEA publishes the percentage of seized drugs that are adulterated (or are another drug altogether) by area (I have seen published numbers on their site for certain areas before, but I don't know if it's done regularly and I wouldn't expect out-of-date reports to still be accurate).
However, your estimate of the likelihood of having dangerous adulterants in your MDMA will likely be dominated by your ability to get trustworthy reports from other people who have taken the same "batch". (Note there are multiple areas of uncertainty to account for here: the honesty/motives of the people reporting, the number of reports, their ability to tell the difference, whether it's actually the same batch, and the heterogeneity of the batch, to name a few.)
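As a toy illustration of how one might combine such reports under those uncertainties, here is a simple Beta-binomial sketch. Every constant in it is invented for illustration (the discount factor `p_relevant` is my own stand-in for "honesty times same-batch probability"); this is not a validated model of anything.

```python
# Toy Beta-binomial sketch of batch-quality confidence from user reports.
# All constants are invented placeholders; this is NOT a validated model.

def batch_quality_posterior(positive, negative,
                            p_relevant=0.7,   # assumed chance a report is honest AND about your batch
                            prior_a=1.0, prior_b=1.0):
    """Posterior mean probability that a dose from this batch is 'good'."""
    # Discount each report by the probability it actually bears on your batch.
    a = prior_a + positive * p_relevant
    b = prior_b + negative * p_relevant
    return a / (a + b)

print(batch_quality_posterior(20, 1))   # many positive reports
print(batch_quality_posterior(2, 1))    # sparse evidence, stays closer to the 50/50 prior
```

The main point the sketch makes is that a handful of reports shouldn't move you far from your prior, whatever relevance factor you pick.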
A few sources of this kind of information are:
If you are buying online, here is a harm reduction strategy: Select a seller with a perfect track record with regard to quality and a large number of reviews. Once you've received it, wait until you've seen a significant number of reviews from orders placed around the same time as and after you placed yours that are all still positive with respect to quality. This will help protect you from bait-and-switch tactics and should increase your confidence that the reviews you've read are of the same batch/product as you've received.
A chemical reagent test is a good risk- and harm-reduction measure that can be used in conjunction with any of the other measures. These tests change color based on the presence of MDMA and common adulterants (obviously sacrificing some of the stuff you're testing in the process). The most popular is the Marquis test; other common ones include the Mecke test and the Simon's test. (Make sure to check the legality of these "test kits" in your area before ordering/purchasing one.)
Disclaimer (again): Do not do any of these things if it is illegal for you to do so. This is not intended to encourage or enable any illegal activity. I have posted this information in the interest of harm reduction and scholarly interest alone. If you do anything stupid or if any harm comes to you based on this information, it's not my fault. If you do anything illegal based on this information, I am not in any way responsible and I told you not to do it.
I don't quite understand gratitude journaling. First of all, gratitude is the same thing as gratefulness or thankfulness, right? If yes, it means you are glad because you got stuff you did not really earn, stuff that was not yours by right and was not owed to you, right? Because when a debt is paid or you get paid for your work, you don't feel grateful; that is yours by right.
So to me gratitude journaling seems to drive your focus on the things you got without earning them. Is that supposed to help people who have self-esteem problems? SSC wrote how most depressed people feel like a burden, how the heck does feeling grateful for things one does not really earn or deserve make one feel less of a burden?
What am I missing here?
If anything, I would experiment with achievement journaling.
That sounds like a deeply unsatisfying way to live; it seems like you will mostly be disappointed by the things that are "yours by right" that you don't get.
The point of gratitude journaling is to focus on how your life has many good things in it. "I got my paycheck for the hours I worked this week; I'm thankful that my employer is honest and prompt, I'm thankful that I have a job, I'm thankful that past-me put in the effort to develop skills relevant to this job, I'm thankful that I live within my means..." and so on. This might involve lowering your expectations so that actually being paid is remarkable enough to write down.
In general that's done by setting a target of at least 3 things to write down every day, so you just pick the best ones.
If you read a bit of the happiness literature you find that people feel more happy when buying experiences than when buying "stuff". When doing gratitude journaling, don't focus on stuff but on experience.
Thinking about rights isn't very fun.
Let's say that on your way to work a beautiful woman smiles at you. An appropriate reaction is to simply feel good and be grateful. Thinking about whether or not you deserve her smile, on the other hand, is stressful and not fun.
Focusing on gratitude shifts attention away from the question whether or not you deserve something.
On LW, Elo wrote that they are much happier than most other smart people they know. If you look through their posts, a good portion of them express gratitude, like http://lesswrong.com/lw/m3o/lesswrong_experience_of_flavours/canb. That's the kind of post most people on LW wouldn't write. It's reflective of a happier mindset.
I understood it as focusing on everything good that happened... whether it was your work, luck, or a mix of both.
The goal is to cultivate the feeling "my life is good". Which will help reduce anxiety, or something like that.
This is (I think) an extension of mindfulness practice. So the ultimate point of the exercise is to help you conscientiously notice and assign weight to a certain class of experience. Your feeling of entitlement is opposed to that in the sense that humans tend not to notice a well-functioning machine. So if we put a dollar in a vending machine and candy comes out, we might enjoy the candy, or be sad about not having a dollar any more, but we rarely take any time to be excited about how great it is to have a machine that performs the swap. Same with getting a paycheck.
Ideally, gratitude journaling expands the class of things you have to be happy about. It adds the vending machine as an object of joy, rather than an 'inert' object that catches our attention only when it fails.
I’m a fourth year PhD student in the life sciences, and I need mentorship, preferably from a Slytherin, or at least someone with a Slytherin hat. My advisor doesn’t want me doing “mercenary collaborations”, or quick experiments with researchers outside my field in exchange for secondary authorships. He says I need to focus on my thesis research in the next year so as to publish and graduate. Are there any academics in the LW readership who have the insight to tell me whether this is good advice or whether he just wants me pumping out papers with his name on them so he can get tenure?
I've intentionally been getting 45-90 minutes of daily sun and it feels good. Where can I find a good cost-benefit calculation for natural sun exposure vs. dietary vitamin D supplementation without sun? (Presumably mostly weighing cancer risk against vitamin d / nitric oxide / other benefits of natural sun?).
Bonus points if darker skin tones are taken into consideration.
"An algorithm is developed and used to relate vitamin D production to the widely used UV index, to help the public to optimize their exposure to UV radiation."
Excellent, thank you! Looking over the figures I think this has the necessary info for calculating optimal sun exposure length, inputting skin tone, latitude, and month. Sadly, it doesn't weigh dietary vitamin D against sun exposure or factor in the non-vitamin D stuff (I suspect the nitric oxide stuff and circadian regulation is pretty important), but still!
If I sufficiently understand this, then once I have more time I will try to give back by making an infographic which is more accessible to the public.
Judging from what I'm seeing here, I think there might be benefit to timing when one's skin personally begins to "redden". I wonder if "darkening" is the same as "reddening". (I'm north-Indian dark and start getting tan lines with only 10 minutes of sun, which disappear within 1-2 hours of shade. I'm not sure if that's analogous to the "skin reddening" they describe, or if skin reddening is a separate process indicating damage rather than melanin production. I've never actually gotten sunburn, so I'm not sure when darkening ends and reddening begins, if it is indeed separate.)
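To make the eventual calculation concrete, here's the rough shape I have in mind. Every constant below is a placeholder I invented (the skin-type multipliers, the baseline minutes, the linear UV scaling); this is emphatically NOT the algorithm from the linked paper, just a sketch of how the inputs would combine.

```python
# Toy sketch of sun-exposure time for a fixed vitamin-D dose.
# All constants are invented placeholders, NOT the paper's algorithm.

# Rough relative multipliers by Fitzpatrick skin type (I burns fastest,
# VI slowest) -- illustrative only; real values vary widely per person.
MED_FACTOR = {1: 1.0, 2: 1.3, 3: 1.6, 4: 2.2, 5: 3.5, 6: 6.0}

def minutes_for_dose(uv_index, skin_type, base_minutes=10.0):
    """Minutes of sun for a fixed vitamin-D dose, scaled by UV index and skin type."""
    if uv_index <= 0:
        raise ValueError("no UV, no synthesis")
    # Synthesis rate assumed roughly proportional to UV index,
    # and inversely proportional to how slowly the skin responds.
    return base_minutes * MED_FACTOR[skin_type] * (3.0 / uv_index)

print(minutes_for_dose(uv_index=6, skin_type=5))  # darker skin, strong midday sun
```

Even as a placeholder, it captures the two qualitative claims from the thread: darker skin needs proportionally longer, and weaker sun (lower UV index) needs proportionally longer.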
As far as I know, sunburn is associated with skin cancer, while sun exposure without sunburn is not, or at least starts to depend on other factors.
See e.g. this abstract which says
Luckily I'm dark enough to never have burnt, unluckily that means I need more exposure.
Interesting. So 20 minute cycles over 2 hours is probably better than continuous 1 hour exposure. Not surprising, but unfortunately inconvenient from a scheduling standpoint, given that the peak time for D synthesis is supposed to be noon which is during most people's workday. I kind of thought this might be the case and try to mimic cycling by flipping around frequently.
(That said, the noon people might be wrong; longer exposure in less intense evening sun might be better than intense noon exposure.)
I've been doing the same thing for ~40 minutes of daily peak sunlight, because of heuristics ("make your environment more like the EEA") and because there's evidence it improves mood and cognitive functioning (e.g.). The effect isn't large enough to be noticeable. Sunlight increases risk of skin cancer, but decreases risks of other, less-survivable cancers more; I'm not sure how much of the cancer reduction you could get from taking D3 and not getting sunlight. I guess none of that actually answers your question.
My vague and untrustworthy impression is that D3 supplementation is better than nothing but has risks related to calcium going to the wrong places, which may be mitigated by nitric oxide (which is also sun-linked), and might also be mitigated by not being deficient in K2 and magnesium, as most people are. I should probably start being better about archiving what I read so that I can stop being vague and untrustworthy.
I do notice a muscle and general relaxation effect which is deeper and lasts longer than, say, an equally warm shower. A blood panel I got back when I was not supplemented said I was pretty severely D deficient, so it might be that I feel the effects more. (Though from what I know of the biology of this the NO is more likely to be responsible for the relaxation effect than the D3.)
If you're white, you're no longer adapted to the ancestral environment where humans evolved.
Agreed, considering "EEA" to mean the African savannah. So for instance if your ancestry is European and you're currently living in California you don't need to spend very much time outside, and if you're dark-skinned and living at a high latitude you should try to get lots of sunlight.
Evolutionary selection pressures are strong enough that skin color of natives over the world corresponds to the level of sun exposure of various places.
Of course being indoors means that you get less sun than the environment for which evolution prepared you.
Disclaimer: this thought is "foxy", in the sense that I don't assert it's definitively true, but I still think it could be a useful lens for viewing the world.
Startups Don't Create New Technology
Contra gurus like Paul Graham and Peter Thiel, successful tech startup companies do not actually create new technology. Good tech startups do one of two things: 1) invent a new technology-dependent business model, or 2) repackage and polish existing technology in such a way as to bring it above the threshold for widespread use.
Consider a couple of recent successful tech startups: Facebook, Twitter, Uber, AirBNB, and Dropbox. None of these can be said to have innovated deeply new technology. Instead, they realized that they could create a new business model based entirely on available technology.
Uber is a particularly illustrative example. The company depends enormously on several powerful new recent technologies: smart phones, GPS, and mapping software. However, Uber itself did not innovate any of those. If one of those technologies hadn't been available, Uber probably would not have been successful. Uber certainly could not have created any of those technologies as part of its business plan.
I'm not suggesting here, of course, that tech companies in general do not create new technology. The point is that startups don't create technology. Instead, deeply new technology is primarily developed by large, established companies. The basic pattern for technology creation is: a startup builds a profitable business on existing technology, grows into a large mature company, and only then develops deeply new technology that the next generation of startups builds on.
The history of Amazon illustrates this pattern very well. Amazon started by creating a new business model using currently available web technology. It depended on a huge array of technology that was developed by others - web browsers, web servers, databases, the internet, personal computers - but it did not develop any of that technology itself and would not have been successful if it had tried to do so (imagine trying to innovate the web browser so you could sell books online).
While Amazon did not create new technology in its startup phase, it certainly has created deeply new technology now that it is in its mature phase. The clearest example of deeply new technology created by Amazon is cloud computing (some people might also point to eBooks). Cloud computing could never have been innovated by a startup company - the resources required in terms of finance, talent, and corporate resilience are far too great. While cloud computing could never have been innovated by a startup, it is now becoming a foundational technology for the new generation of startups.
So the lifecycle of entrepreneurial technology development suggests a kind of virtuous circle. A company becomes profitable by building a new technology-dependent business model or repackaging technology developed by others. Then it grows, and when it reaches a certain point, it becomes able to create new technology that feeds the next generation of startups.
To add a bit of empirical analysis to this comment, I analyzed the YCombinator Winter 2015 batch. I categorized the startups into one of three buckets: Tech-Dependent Business Model (TDBM), RePackaging and Polishing existing tech (RPP), and Novel Tech (NT). The list can be found here.
The following pattern emerged from this exercise: YC is not funding startups that are developing new computer science technology, with the possible exception of MashGin and AtomWise. The YC startups that are attempting to develop new technology are in the biotech/medtech space - Transcriptic, Standard Cyborg, Industrial Microbes, Zenflow, Lully, and 20N.
Edit: I noticed after writing that the list is from Demo Day 2, representing the second half of the Winter 2015 batch. However, it doesn't appear to me that analyzing only half the batch causes a serious bias in the conclusion. The Demo Day 1 batch is available here.
"New technology" is ill-defined. Is a more practical version of something which already exists considered new technology or old technology?
I agree with your first premise (that startups don't create new technology), but not the second premise (that large, established companies do).
Read 'The Sources of Innovation' by Eric von Hippel. It reaffirms your first point, and shows that real technological progress usually comes from users of technology rather than producers (or, to put it better, from cooperation between users and producers). More precisely, innovation happens when there is a feedback loop where users use technology in creative ways (according to needs not foreseen by the original producers) and producers incorporate those ideas back into their products. The contribution of the producer is to identify creative uses of their products and formulate business models around them. Amazon's cloud computing initiative is definitely consistent with this point of view.
Another major source of innovation is academic institutions, where risk-taking is encouraged when it comes to new ideas. Of course, it's also true that established companies also fund research.
I am not sure I'm willing to agree with that.
First, absolutely everyone depends on technologies invented by others and it's turtles all the way down -- a start-up depends on personal computers, which depend on microprocessors, which depend on transistors... etc.
Second, Google and Apple would probably be the canonical examples of startups which actually created new technology. Not coincidentally, they are now among the biggest and richest companies in the world. I think Facebook also created new technology, albeit intangible, and also joined that club.
Third, look beyond bits. Biotech startups, for example, attempt to create technology much more often than the code-driven ventures.
I see Google and Apple as marginal examples - they don't exactly fit into my schema, but they don't exactly break my schema either. Apple's success depended on two key insights contributed by the two founders. Jobs saw that a market for personal computers could exist, and Wozniak saw a way to repackage existing computer technology cheaply and usably enough for the customers in that market. Google did build a better search engine, but they also saw a new way to make money with search, and it's not clear which insight was more important.
You are now arguing that a start-up must have business sense to succeed -- which is entirely true, but not related to your original claim that start-ups don't create new technology.
If Google's business model were more important than its technology, that wouldn't cause its technology to cease to exist. Your original claim was that startups don't create technology, which is a very, very different claim than people who want to become rich should pursue business models, rather than technology.
But, actually, I don't think that Google's business model was more important to its earning power than its technology. Many people have copied its business model, but they don't have the scale of being the most popular website, so they don't make as much money. Part of that is that other companies have copied its basic search technology, but the first-mover advantage has turned Google's early technology into an enduring brand advantage.
Also, my guess is that Google had better technology 10 years ago for running scalable infrastructure than Microsoft has today. While that may have contributed to their bottom line, I'm not sure it contributed much to their popularity.
One of the most sensible books I've read about how technology works, from an economic perspective, is The Nature of Technology, by Brian Arthur. It talks about how different technologies interact with each other, and with the economy, and how what he calls standard engineering, which mostly involves assembling off-the-shelf parts, contributes to the advancement of technology as a whole.
A lot of the concepts he talks about can be experienced by using an open source operating system with package management, such as Ubuntu. At least, as I was reading the book, a lot of open source software examples came to mind.
Brian Arthur was involved in the founding of the Santa Fe institute that studies complexity.
I have the exact opposite feeling about Uber. I think that their main business model is: a taxi dispatch service that actually comes when you call it. There is no technology in that at all. The problem is that it is very difficult to enter a business where everyone else is a fraud. You can't just advertise that you aren't a fraud, because who would believe you? Uber differentiated itself by being techy, to get people to try it. Maybe the technology was necessary to allow people to monitor cabs and come to trust the service, but if the industry hadn't dug itself into a hole, a similar business could have been built 50 years ago.
I don't think that it's fair to say that software isn't technology. Facebook didn't create new hardware but the idea of the timeline was new.
But even if we look at hardware I don't think it's true. Bre Pettis's MakerBot Industries did manage to sell MakerBots while it was a startup.
Arduino was created by a startup.
Pebble is a YCombinator startup. Technology like Arduino allowed Pebble to do their prototyping easier than was possible before. There are also a bunch of other Kickstarter projects that produce technology.
I do consider the Hackerspace ecosystem capable of creating new hardware.
A lot of new technology gets developed by repurposing existing technology. Arduino couldn't have been developed without the ability to buy cheap chips, but Arduino is still new technology.
I think this is coming from both the way you're defining technology (which looks like it's excluding various forms of cultural or social technology) and the set of startups you're considering. I think both Graham and Thiel would agree with you that entrepreneurs create businesses, which seems like the short version of your claim. Yes, both of them think that new technology is a fruitful place to look for new businesses, but it isn't the only one.
Consider biotech startups, specifically Genentech. The company wasn't founded until a few years after the underlying tech had been invented in a university lab, and while now it has extensive research labs that do basic as well as applied research, most of the startups I'm familiar with (and early Genentech) are very much in the 'applied research' category.
TDBM, I would argue, is the most important step. A single discovery could have hundreds of different ways to coordinate with existing technologies. As for RPP, often the people who are best at creating and the ones who are best at distributing are very different. It's a shame the distributor gets the lion's share, but such is life. There are also levels below technology creation. Before a technology can be applied, its principles must be experimentally tested. Before an experimental test can be conducted, a theory must be developed to explain what you are testing for (although some technologies skip this step). The experimenter and the theorist often receive even less than the applier, who receives less than the distributor.
I want to invest $10,000 in a stock index fund. (The money is currently in a checking account.) How do I actually go about doing this?
Lumifer's answered this already in a sibling comment, but note that this general class of question is answered in the Procedural Knowledge Gaps thread (or its repeat). (I wrote a longer answer to this particular question there.)
You decide on which fund you want, open an account with the appropriate mutual fund company (e.g. Vanguard) or a brokerage, transfer to them the money from your checking account, and put that money into the fund.
I'm looking for a book, or a combination of up to 5 books, that fulfills the following requirements:
Textbooks are fine, as long as they meet all those requirements.
I'm assuming you already have some absolutely basic knowledge of the major physical theories, at the level of Brian Greene's The Fabric of the Cosmos (which was recommended in another comment). The books I'll recommend take you deeper into the theories (emphasizing philosophical implications) without excessive mathematics. If you don't have knowledge at this level, read Greene's book first. Some of the books I'm suggesting aren't entirely up to date, but none of them are obsolete. I'm not aware of any more recent books that cover the same material with the same quality. I teach philosophy of physics to non-physics majors, and these are usually among the books I assign (supplemented with recent papers, lecture notes, etc.).
Space-Time: Geroch, General Relativity from A to B
Quantum Mechanics: Albert, Quantum Mechanics and Experience
Statistical Mechanics: Ben-Naim, Entropy and the Second Law: Interpretation and Misss-Interpretations (Supplement with Albert's Time and Chance if you want to go deeper into the "Arrow of Time" issue)
Quantum Field Theory and the Standard Model: Oerter, The Theory of Almost Everything (A pretty superficial book compared to the others on this list, I admit, but I'm not aware of any philosophically deep treatment of QFT that doesn't presume considerable math knowledge. You could also try Feynman's QED, which is excellent but very out-dated.)
Cosmology: Tegmark, Our Mathematical Universe (Good basic overview of cosmology, but the philosophical speculation doesn't meet your third requirement. Try Unger and Smolin's The Singular Universe and the Reality of Time for a counterpoint.)
How much mathematics is excessive for this? Physics is made of mathematics.
"Excessive" was probably a poorly chosen word. I meant that the books I listed are the ones that provide the deepest insight into the theories (out of all the books I have seen) within the constraints specified by iarwain (presuming nothing more than high school mathematics). Some of the books teach some slightly more advanced math along the way, because yeah, it's hard to really comprehend much of GR without at least a basic conception of differential geometry, or understand QM without some idea of linear algebra, but none of the books inundates you with math like The Road to Reality does.
I was questioning whether to keep reading lesswrong; thanks to the questioner and the answerer for reminding me why I should. Books are cheap so I'm buying them all, even if not for all immediate reads. Don't suppose you teach near upstate New York?
I teach about 8000 miles away from upstate New York, I'm afraid.
Are your requirements sorted by order of importance?
Quantum Computing Since Democritus might be a good choice. If I think of the first item as the goal and the others qualifications, it is a poor choice, but if I rearrange them, maybe a good choice.
I didn't originally intend them to be in order, but they actually are. The only exception is that the very low math part is very important and should go at the top.
How about The Fabric of the Cosmos by Brian Greene? It is a clearly written account of cosmology, though with more emphasis on string theory than on other topics.
As for comparing and explaining the different interpretations of quantum mechanics, I am not aware of any book that does what you ask for. The clearest explanation of some of the interpretations of quantum mechanics that I've read so far is actually right here on Less Wrong, in the Sequences. However, that focuses on a few of the interpretations, without context of the others, and I had to read a bunch of scientific papers to start to get some of the missing context, though I still feel like there are gaps in my knowledge. I too would be interested in reading a book that properly explains and compares the different interpretations of Quantum Mechanics, so I'll be checking back at this thread to see if someone recommends one.
Penrose
The first time I read the Sequences, I definitely didn't understand everything. And of the things I did "understand", I didn't remember them all. Even after rereading different posts, it doesn't always stick.
I have just come across the brilliant idea (sarcasm) to take notes. In particular, to try to boil each post down to its essence, and write a summary. I've done it for about 20 posts so far, and it seems to be really helping me understand stuff.
Furthermore, the act of having "conquered" a post (having boiled it down to its essence and summarized it in a way that I'm confident I'd understand quickly after referencing my summary) feels really good, and since the posts are rather bite-sized, I've gotten into a nice flow in writing my summaries.
All of this probably sounds obvious, and I doubt that any of you are surprised to hear that summarizing things is an effective way of learning. And I'm not sure whether other people just understand everything perfectly their first time through the Sequences.
But...
1) I doubt that people understand everything perfectly their first time.
2) Despite knowing that summarizing things is a good way to learn, I suspect that the trivial inconvenience of taking the initiative to write notes is powerful enough that most people don't do it.
If 1 and 2 are true (for you), then perhaps it'd be a good idea to buy Rationality: From AI to Zombies (can we call this RAZ please :) ) and use some sort of e-reader to highlight and take notes.
No, but I save the most 'relevant' of them on my phone, to reread when commuting. I seldom reread notes, and indeed I worry more about losing them than about having them within easy reach (a naturalist's nightmare, I guess.)
Seeking Moore's Law extrapolations
I once found some charts showing a few close variants of Moore's Law, such as MIPS per dollar per year; but I seem to have lost them. Does anyone have some references handy, which I can mine for some SFnal worldbuilding? (Eg, how big and costly a device storing 100 petabytes would be in a given year.)
I've done some rather extensive investigations into the physical limits of computation and the future of Moore's Law style progress. Here's the general lowdown/predictions:
Moore's law for conventional computers is just running into some key new asymptotic limits. The big constraint is energy, which is entirely dominated now by interconnect (and to a lesser degree, passive leakage). For example, on a modern GPU it costs only about 10 pJ for a flop, but it costs 30 pJ just to read a float from a register, and it goes up orders of magnitude to read a float from local cache, remote cache, off-chip RAM, etc. The second constraint is the economics of shrinkage. We may already be hitting a wall around 20nm to 28nm. We can continue to make transistors smaller, but the cost per transistor is not going down so much (this affects logic transistors more than memory).
3D is the next big thing that can reduce interconnect distances, and using that plus optics for longer distances we can probably squeeze out another 10x to 30x improvement in ops/J. Nvidia and Intel are both going to use 3D RAM and optics in their next HPC parts. At that point we are getting close to the brain in terms of a limit of around 10^12 flops/J, which is a sort of natural limit for conventional computing. Low precision ops don't actually help much unless we are willing to run at much lower clockrates, because the energy cost comes from moving data (lower clock rates reduce latency pressure, which reduces register/interconnect pressure). Alternate materials (graphene etc) are a red herring and not anywhere near as important as the interconnect issue, which is completely dominant at this point.
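As a rough arithmetic check on the figures in the comment above (the 10 pJ/flop cost, the ~10x interconnect improvement, and the ~10^12 flops/J ceiling are taken as given, not independently verified):

```python
# Sanity-check of the energy figures quoted above (assumed values).
PJ = 1e-12  # joules per picojoule

flop_energy = 10 * PJ               # ~10 pJ per flop on a modern GPU
flops_per_joule = 1 / flop_energy   # -> 1e11 flops/J today

# A further ~10x gain from 3D stacking plus optics lands near the
# ~1e12 flops/J limit quoted for conventional computing.
projected = flops_per_joule * 10
print(f"{flops_per_joule:.0e} flops/J today, {projected:.0e} projected")
```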
The next big improvement would be transitioning to a superconducting circuit basis which in theory allows for moving bits across the interconnect fabric for zero energy cost. That appears to be decades away, and it would probably only make sense for cloud/supercomputer deployment where large scale cryocooling is feasible. That could get us up to 10^14 flops/J, and up to 10^18 ops/J for low precision analog ops. This tech could beat the brain in terms of energy efficiency by a factor of about 100x to 1000x or so. At that point you are at the Landauer limit.
The next steps past that will probably involve reversible computing and quantum computing. Reversible computing can reduce the energy of some types of operations arbitrarily close to zero. Quantum computing can allow for huge speedups for some specific algorithms and computations. Both of these techs appear to also require cryocooling (as reversible computing without a superconducting interconnect just doesn't make much sense, and QC coherence works best near absolute zero). It is difficult to translate those concepts into a hard speedup figure, but it could eventually be very large - on the order of 10^6 or more.
For information storage density, DNA is close to the molecular packing limit of around ~1 bit / nm^3. A typical hard drive has a volume of around 30 cm^3, so using DNA level tech would result in roughly 10^21 bytes for an ultimate hard drive - so say 10^20 bytes to give room for the non-storage elements.
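The arithmetic behind that estimate can be sketched in a few lines (density and volume figures taken from the comment above):

```python
# ~1 bit per nm^3 packing density, ~30 cm^3 hard-drive volume.
NM_PER_CM = 1e7                       # 1 cm = 1e7 nm
volume_nm3 = 30 * NM_PER_CM ** 3      # 3e22 nm^3
bits = volume_nm3 * 1.0               # at 1 bit/nm^3
bytes_total = bits / 8                # ~3.75e21 bytes, i.e. order 10^21
print(f"{bytes_total:.2e} bytes")
```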
You might be interested in Kryder's Law.
That's a good start. Let's see; if we start with platters holding 0.6 terabytes in 2014, and assume an annual 15% increase, then platters start hitting the petabyte range in... 2070ish? Does that look about right?
(Yes, I know any particular percentage can be argued against. This is for fiction - I'm going for reasonable plausibility, not for betting on prediction-market futures.)
1.15^50 = 1084, so given the 15% rate of growth you'll have an increase of about three orders of magnitude in fifty years.
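Both figures are easy to verify; a quick sketch, assuming the 0.6 TB platter in 2014 and the 15% annual growth rate from the comments above:

```python
import math

growth = 1.15
# Three orders of magnitude in fifty years:
print(round(growth ** 50))                      # 1084

# Years until a platter crosses 1 PB (= 1000 TB) from 0.6 TB:
years = math.ceil(math.log(1000 / 0.6, growth))
print(2014 + years)                             # 2068, i.e. "2070ish"
```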
In this specific case, though, the issue is whether rotating-platter technology will survive. In a way it's a relic -- this is a mechanical device with physical objects moving inside, at pretty high speed and with pretty tiny tolerances, too. Solid-state "hard drives" are smaller, faster, less power-hungry, and more robust already. Their only problem is that SSDs are more expensive per GB, but that's a fixable problem.
True - but for my purposes, having /some/ number, even if it's known to use poor assumptions, is better than none. I'm looking for things like "in which decade does a program requiring X MIPS become cheaper than minimum wage?" and "when can 100 petabytes be stuffed into ~1500 cm^3 or less, and how much will it cost?". Which crossovers happen in which order is more interesting than nailing down an exact year.
Well... a 200 GB microSD card already exists. So you need five of them per 1 TB, 5,000 per 1 PB and 500,000 per 100 PB.
A microSD card is 11 x 15 x 1 mm = 165 mm^3 = 0.165 cm^3, and some of that is packaging and connectors.
500,000 x 0.165 = 82,500 cm^3. You wanted 1,500? That's only about a 55-times difference, and getting rid of all that packaging and connectors should get you to about 30 times difference, more or less.
So the current flash memory density has to improve only by a factor of 30 or so to get you to your goal. That doesn't seem to be too far off.
The fun task of calculating the bandwidth of one of those stuffed to the gills with contemporary microSD cards is left as an exercise for the reader :-)
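The volume arithmetic itself can be sketched as follows (all figures from the comment above; the 200 GB card capacity and 1,500 cm^3 target are the assumptions):

```python
# How far is current microSD density from 100 PB in 1,500 cm^3?
card_gb = 200
cards = 100e6 / card_gb            # 100 PB = 1e8 GB -> 500,000 cards
card_cm3 = 1.1 * 1.5 * 0.1         # 11 x 15 x 1 mm = 0.165 cm^3 per card
total_cm3 = cards * card_cm3       # 82,500 cm^3
factor = total_cm3 / 1500          # ~55x over the target volume
print(int(cards), total_cm3, round(factor))
```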
Don't forget about the sheer amount of waste heat generated by such an array were it actually on.
Depends on the use case, I guess. The memory is non-volatile and the start-up time is negligible. If you only access one petabyte of memory within some time period, the other 99 can stay switched off and emit no heat.
In about a decade we will have machines that cost less than $10,000 and can run roughly brain-sized ANNs. However, this prediction relies more on software simulation improvement rather than hardware.
Storage is much less of an issue for brain sims because synaptic connections are extremely compressible using a variety of techniques. Indeed current ANNs already take advantage of this to a degree. Also, using typical combinations of model and data parallelism a population of AIs can share most of their synaptic connections.
My mom has multiple sclerosis. Recently, researchers found that two currently available drugs reverse demyelination in mice. The drugs are only approved for being applied to the skin, though - it hasn't been proven to regulators' standards that they're safe for humans to swallow or inject.
Can anything be done to take advantage of this other than "sit and wait for years while Medical Science does more research"?
This mostly depends on your attitude toward risk and responsibility when things go wrong. No doctor is going to tell you "yeah, sure, try it out now" because that would open them up to significant risk and responsibility; if you tell your mom to try this and it doesn't work out, then you're taking on some risk and responsibility. She may not be interested in doing anything riskier than what's been verified by medical science, and talking it over with her is the first step.
The next step is to ask her doctor about trials for this. It may be possible to be involved in human trials, though there is probably waiting involved.
Self-medication is possible. It seems unlikely that a doctor will help you figure out a correct dose, but it's worth asking. In either event, you only have to do it once, and so it may be worth doing the paper-dive and finding the relevant textbooks to borrow (you'll probably only need to read a few sections). If internal application is necessary, you'll probably need to purchase the active ingredient directly. If you do decide to self-medicate, talk to your doctor about it. That'll help prevent anything dangerous, and catch any foreseeable interactions between medications.
I think this is the kind of question MetaMed was created to answer. MetaMed's website seems to be offline. Has the company shut down?
Yes. It would be helpful if they did a public postmortem, but I'm not sure there's a way to do that that's not ugly.
In the Sleeping Beauty problem, SIA and SSA disagree on the probability that it's Monday or Tuesday. But if we have to bet, then the optimal bet depends on what Ms Beauty is maximizing - the number of bet-instances that are correct, or whether the bet is correct, counting the two bets on different days as the same bet. Once the betting rules are clarified, there's always only one optimal way to bet, regardless of whether you believe SIA or SSA.
Moreover, one of those bet scenarios leads to bets that give "implied beliefs" that follow SIA, and the other gives "implied beliefs" that follow SSA. This suggests that we should taboo the notion of "beliefs", and instead talk only about optimal behavior. This is the "phenomenalist position" on Sleeping Beauty, if I understand correctly.
Question 1: Is this correct? Is this roughly the conclusion all those LW discussions a couple years ago came to?
Question 2: Does this completely resolve the issue, or must we still decide between SIA and SSA? Are there scenarios where optimal behavior depends on whether we believe SIA or SSA even after the exact betting rules have been specified?
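The dependence on the betting rule can be seen in a small Monte Carlo sketch (my own illustration, not from the linked discussions; the "per-awakening" tally corresponds to counting each bet-instance separately):

```python
import random

# Sleeping Beauty: fair coin; heads -> woken once, tails -> woken twice.
random.seed(0)
trials = 100_000
heads_awakenings = total_awakenings = 0
for _ in range(trials):
    heads = random.random() < 0.5
    wakes = 1 if heads else 2
    total_awakenings += wakes
    heads_awakenings += wakes if heads else 0

# If every bet-instance counts, the fair price of "heads" is ~1/3
# (SIA-flavored); if duplicate bets count once, it is 1/2 per
# experiment (SSA-flavored), since the coin is fair.
per_awakening = heads_awakenings / total_awakenings
print(round(per_awakening, 3))
```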
I think the consensus was not so much that phrasing anthropic problems in terms of decision problems is necessary, or that there is a "dissolution" taking place, but merely that it works, which is a very important property to have.
One has to be careful when identifying implied beliefs as SSA or SIA, because the comparison is usually made by plugging SSA and SIA probabilities into a naive causal decision theory that assumes 'the' bet is what counts (or reverse-engineering such a decision theory). Anything outside that domain and the labels start to lose usefulness.
In the course of answering Stuart Armstrong I put up two posts on this general subject, except that in both cases the main bodies of the posts were incomplete and there's important content in comments I made replying to my own posts. Which is to say, they're absolutely not reader-friendly, sorry. But if you do work out their content, I think you should find the probabilities in the case of Sleeping Beauty somewhat less mysterious. First post on how we assign probabilities given causal information. Second post on what this looks like when applied.
Probably gotten most of the responses it was going to get, so here's a scatter plot:
People seem to think it's worse the more they know about it (except those who know nothing seem slightly more pessimistic than those who know only a little).
Made by running this in IPython (after "import pandas as pd" and "from numpy.random import randn" in .pythonstartup):
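The original snippet is not shown; a minimal pandas sketch of that kind of knowledge-vs-opinion plot (with hypothetical data and column names) might look like:

```python
import pandas as pd

# Hypothetical responses: self-rated familiarity (0 = none, 4 = expert)
# and an opinion score (higher = more favorable); all values made up.
df = pd.DataFrame({
    "familiarity": [0, 0, 1, 1, 2, 2, 3, 3, 4, 4],
    "opinion":     [2, 3, 4, 3, 3, 2, 2, 1, 1, 0],
})

# Mean opinion per familiarity level; the scatter itself would be
# df.plot.scatter(x="familiarity", y="opinion").
means = df.groupby("familiarity")["opinion"].mean()
print(means.to_dict())
```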
It is interesting to watch how different things I observe on the internet interact with each other. Two recent discoveries:
1) Arthur Chu, known to readers of SSC as a person not exactly in favor of niceness, created a Kickstarter project called "Who is Arthur Chu?". It failed, collecting only 20% of the planned $50,000. (Which, if I understand the rules of Kickstarter correctly, means he will get nothing.)
Not sure if the proper reaction here is to laugh (something like: "you had a choice between niceness and winning, you rejected niceness, and now you have neither"), or to congratulate him for empirically testing "how much money could I get from random people on the internet if I just openly asked them to contribute to my glory". I mean, next time, if he asked people to donate a mere $10,000, he could actually get it. Of course only if they do not forget him in the meantime.
2) Gamergate was recently the 3rd-largest article on RationalWiki. Then it was split into multiple articles, so at this moment it is merely at positions 8 ("Gamergate") and 13 ("Timeline of Gamergate") on the longest-article list. Unsurprisingly, the whole article is one person's playground. Specifically, a person recently kicked out of Wikipedia for similar behavior. It will be interesting to see how other people on RationalWiki deal with this.
Can you recommend a good summary of RationalWiki as such, from an external and fairly unbiased point of view? To me it looks like a place that very easily hands out insulting, degrading evaluations of other people's work/thoughts, and the part I find kind of weird is that while they clearly have a kind of agenda or ideology, it is not really clear what that is. Who are the main people behind it, and what are their convictions, etc.?
They appear to be a bunch of reasonably smart people who got really good at guessing the teacher's password. Now they've heard that the passwords are "skepticism" and "science", unfortunately they don't appear to understand what either of those words mean.
I think you summarised it pretty well. RationalWiki is exactly what it looks like.
I heard it described somewhere as "providing arguments for left-wing atheists to win internet debates". Seems accurate.
The ideology is called "Atheism Plus".
They welcome him because he writes at the usual RationalWiki quality standard?
More importantly, he is compatible with the party line. Articles about wrong targets would not be tolerated, regardless of quality. Try to use a "snarky point of view" on something politically correct and see what happens.
I don't see any indication that lack of niceness deprived him of any of his Jeopardy! winnings.
You say that as if he was simply planning to collect people's money, put it in a big pile, and sit on it while cackling evilly, but the ostensible plan was actually to make a documentary. The most obvious explanation for the Kickstarter failure seems to me to be "no one was very interested in a documentary about some guy who won a bit of money on Jeopardy!" rather than "no one wanted to contribute to making this documentary because its subject wasn't a nice enough person".
Similar to what? Writing a long article? Splitting an article into two shorter ones? (Neither of those seems like something anyone should or would be kicked off Wikipedia for. A bit of googling suggests it was for edit-warring and behaving generally obsessively and disagreeably, but I don't know how accurate that is, because the people saying so are apparently on the other side of the "Gamergate" culture war from Ryulong, and that seems to be a thing that brings out the worst in people.)
Judging by the project description ("The Documentary Film about Arthur Chu: a spokesperson for social justice, the new king of the nerds, and 11-time Jeopardy Champion."), it was not really about Jeopardy. Most people do not care strongly about Jeopardy, but many people care a lot about their political faction winning -- you just have to convince them that giving their money to you is their best move. Mentioning Jeopardy success is just a way to separate yourself from the crowd.
Some people can play this game well enough to get 10× more money for their videoblogs. My hypothesis (which I have no way to verify, and I admit that I am completely partial here) is that Chu tried to play the same game... and failed. Although I still give him credit for trying.
I didn't say it was. I said it was
and I chose my words carefully :-). But the description on the Kickstarter page does suggest that a lot of it was in fact going to be about Jeopardy! rather than just about Chu, or for that matter about social justice. Do you think that was all just lies, and if so why do you think that?
(My feeling is that you may be being at least one notch too cynical. My political faction is not the same as yours, though, and it's possible that I'm being one notch too un-cynical instead.)
Here is an article written by Arthur Chu that seems to suggest otherwise:
So, it's not about Chu being a smart person, or a successful person, but about Chu being a good person. (Where "good" probably happens to be more or less the same thing as "belonging to political faction X".)
Am I? For the record, I consider it likely that Arthur Chu sincerely believes his own story, where he is the good guy on the right side of history. He probably also overestimates his own smartness, and believes that the ethical injunctions made for lesser mortals do not apply to him. (And I believe he is obviously wrong at this point.) I would also guess that he has a good heart and that he hates himself more than he should, but that's just unfounded speculation.
(And I am not saying this about everyone. For example, I also believe that those people who have raised 10× more money for their videoblogs, they do not truly believe their cause. Which is why they made a successful plan and got the money, but Arthur didn't. He made a few mistakes that would be obvious to a cynical person. For example, he didn't put a high-status-behaving white girl into his movie. But that's where the real money and power are in his faction. Arthur, by being a true believer, does not recognize the rules of the game, and fails.)
Okay, I'll try to convert you to the dark side (and also give you a chance to convert me, by the law of conservation of expected evidence). If Arthur Chu is such a defender of oppressed people, give me an example of a black woman or a lower-class woman that he has defended publicly (calling her by her name, not merely as a part of a large anonymous group). Because I can give you an example of a rich white woman.
Again, for the record, I consider it likely that Arthur Chu is blind about what I am suggesting here; that he is clueless instead of hypocritical.
Except that the article says that Chu doesn't want the focus just to be Jeopardy!, not that Scott Drucker (the person who was actually proposing to make the movie, and the person whose Kickstarter it was) doesn't want it to be. And my reading of both Chu's article and the Kickstarter page is that Drucker's goals were not necessarily the same as Chu's, even though obviously both were hoping that cooperating with the other guy would do something for both people's goals.
The comparison here is with Anita Sarkeesian, to whom you linked before, right? Now, it seems to me that the reason why Anita Sarkeesian put a high-status-behaving white girl into her videoblogging is because she is a high-status-behaving white girl (in so far as videoblogging about video games can count as high-status behaviour), and it doesn't seem either obviously insincere for her to act as such, or obviously incompetent for Chu not to have done likewise. And I'm not sure what you think Scott Drucker should have done with a high-status-behaving white girl, or how it would have made the Kickstarter more successful.
What, by the way, makes you think that Anita Sarkeesian doesn't truly believe in her cause? I've only seen a small quantity of her stuff, but what I've seen looks sincere (and fairly plausible) to me.
You may be right (perhaps it depends what counts as "his faction") but your link from the word "are" doesn't seem to me to say what I think you're implying it does. It's arguing that "solidarity is for white women", but the stress is on "white", not "women"; I'd summarize the message as something like "contemporary feminism portrays itself as being for women, but really it's only interested in white women and black women get ignored or thrown overboard whenever it's convenient".
Wait, what? When did I say or imply or suggest that he is? I certainly didn't intend to. (Not because I particularly think he isn't, but because I have no idea whether he is and had no idea that that was the question that was meant to be at issue.)
I had a look through some of his writing, and he doesn't spend much of his time defending anyone by name. He spends much more attacking large-scale phenomena. I don't see any obvious reason why this indicates either cluelessness or hypocrisy. But that's kinda irrelevant; I never claimed that Chu is a great defender of oppressed people, and I have no idea how "you're being one notch too cynical" turned into "Arthur Chu is a great defender of the oppressed".
I have not verified it personally, but it is believed among Gamergate fans that Feminist Frequency is a project of Jonathan McIntosh. If that is true, then it was a strategic move to use Anita Sarkeesian as a public face of the project, because McIntosh himself could not use the "damsel in distress" effect to generate as much money.
Analogously, the correct way to make money using Arthur Chu would be to somehow make him a part of a project focused on white women. He would officially be a mere sidekick of a female protagonist. Then he could write many articles attacking everyone who gets in the way of his project.
(Oh damn, now I am in a full political mode. Well, I tried to explain what I meant.)
The fact that he didn't do this, I process as an evidence for (a) sincerity of his beliefs, and (b) obliviousness about the rules of the game.
You didn't. The Kickstarter project called him "a spokesperson for social justice".
It appears to me that all kinds of things are believed among people highly invested in one side or other of the "Gamergate" flap, and that being so believed is not very strong evidence for the truth of anything.
(The people producing those videos say he's "producer and co-writer". Cynical-me suspects that "Gamergate fans" think he must be the real driving force because Anita Sarkeesian is a girl and therefore not to be taken seriously. I do hope cynical-me is wrong. Not-so-cynical me thinks Sarkeesian is more likely to be the real driving force because, other things being equal, a woman is more likely to feel strongly about this stuff than a man.)
No, the correct way to make money using Arthur Chu is to have him play Jeopardy!. That's been done and it seems to have worked pretty well.
I'm having trouble figuring out what you think is actually going on here. It seems to be something like this: some unscrupulous person decides that their goal is "to make money using Arthur Chu" (why?) and then decides that the best way to do that is via a focus on social justice (why??) but then fails to include a high-status-looking white girl as Viliam's Guide To Exploiting Social Justice People would have told him to and therefore fails, whereas if they had had a high-status-looking white girl as central character the Kickstarter would have made a load of money.
But that doesn't make a bit of sense to me, so probably my different political/social/psychological assumptions are stopping me working out what scenario you have in mind.
(The more likely scenario seems to me to be this, obtained by taking things more or less at face value. Scott Drucker sees that Arthur Chu has raised a bit of a ruckus, and been somewhat successful, by playing Jeopardy! in an unorthodox way; maybe he also thinks Chu is an interesting guy. So he decides to make a little documentary about Chu and his Jeopardy! playing. He contacts Chu. Chu is prepared to play along, but he has got very much into social justice and wants that front and centre in the documentary. Drucker is willing to go along with this because "Chu gets angry about stuff" fits his narrative pretty well, and also because he can't make the documentary without Chu's cooperation. They put up their Kickstarter page, and it turns out that actually the internet has mostly forgotten about Chu and people who are interested in unorthodox Jeopardy! tactics mostly aren't very interested in social justice. To first order, no one wants to back the project. The Kickstarter fails. The end. In my version of the scenario, making a cute rich white girl the central character would have made it no longer a documentary about Chu, hence uninteresting to Scott Drucker; would have been unacceptable to Chu for all kinds of reasons; and would have made little difference to the success of the Kickstarter unless it happened to get noticed by a lot of people who enjoy looking at cute white girls so much they'll fund anything with one in it. That audience might overlap somewhat with the Jeopardy! fans; maybe not so much with the social justice warriors.)
OK. So what conclusion am I supposed to draw from that plus the fact (assuming it is one) that he never happens to have defended a poor black woman by name in his writings online? I'd have thought it might be "Chu is insincere and isn't really interested in social justice", except that you have said several times that you think he is sincere.
True for many political debates in general. Both sides start with different sets of "facts". In the worse case, some of those "facts" are factually wrong. In the better case, those facts are true, but were selected from the set of all possible facts to support a specific conclusion.
Thus a rational debate would have to start by establishing a base of mutually accepted facts. If you skip this step and go ahead, it will catch you later at some moment.
(For example, we might agree that Jonathan McIntosh is involved in Feminist Frequency, and that his name is usually not mentioned; someone who does not do the background research might easily come to the conclusion that Anita Sarkeesian is doing this alone. -- Of course whether this is a trivial technical detail or damning evidence depends on many other assumptions.)
I think (p = 0.9) that McIntosh and Sarkeesian are following the "Guide To Exploiting Social Justice People". I think (p = 0.6) that Chu is not aware of this, and that he believes they are simply doing the right thing. And, being a good person, he wants to do the right thing, too. (But he fails precisely because he is not following the Guide.) I do not have an opinion on Drucker yet, as I have almost no data about him.
It's not about cuteness and enjoying, but about saving the damsel in distress (but of course if the damsel is white and high-status, saving her is a higher priority). The cute central character would be described as struggling with barbaric hordes of low-status men in STEM fields. Arthur Chu would pose as an expert on STEM fields and on nerds, using Jeopardy as credentials. He would also profess that the damsel is at least 10× smarter than him; she just didn't have a chance to prove it (because we all know how the society oppresses women). Chu would be the knight defending the damsel. But the real hero who can fix the world by the power of her awesomeness, that would be the damsel. Next step is to generate a controversy, and use the backlash as a proof that the forces of evil have united against this awesome damsel, but you can still send your money to make the good side win. Also, it will serve as a convenient excuse if the project fails.
This is what my Guide To Exploiting Social Justice People would recommend. It also requires having allies in media, who will cover the story from the correct angle, and will refuse to give a platform to opponents or competing projects.
My guess (p = 0.6) is that Arthur Chu is trying to do the right thing, but by being mindkilled he sacrificed his ability to notice that he may be doing it wrong.
What is the base probability that if one tries to become "a spokesperson for social justice", their best cause will be publicly defending an abusive rich white American woman with powerful friends? So if you happen to find yourself in such situation, you may want to slow down and reflect on what happened. You probably didn't plan it this way, but your brain had an evolutionary adaptation to do it for you.
Since it's been brought up...
As far as I can tell the best evidence they have for this is a widely circulated video (from before FemFreq) in which she says she's "not a fan of videogames".
And Mcintosh clearly "feels strongly about this", as much as any woman I've seen. The Gamergate people created a whole hashtag to display his tweets (#FullMcintosh), which also became, incidentally, what they use to indicate that they think someone has gone particularly far down the SJ rabbit hole.
Personally, I think the conclusion Viliam mentions doesn't rest on very solid evidence, but it's not far-fetched either. (meanwhile, the "because she's a girl" hypothesis looks very unlikely to me)
I'm not sure how familiar you are with videogames, or which of her videos you've seen. But I can't imagine how some of the ones I've seen could possibly have been made without outright dishonesty.
And some Feminist Frequency tweets repeating what McIntosh posted before: 1, 2, I think there are more but I cannot find them now. (Memetic hazard: here is the "argument" in a form of a youtube video.)
By the way, Feminist Frequency is a project account, not Sarkeesian's private account (although it uses her photo), so it wouldn't be damning evidence even if McIntosh really did sometimes use it. Also, when two people cooperate and have similar opinions, it would not be so unlikely for them to use the same words. So this is just weak evidence.
how is making a documentary about yourself not just contributing to your own glory?
He wasn't proposing to make a documentary about himself. Someone else was proposing to make a documentary about him. And (as DanielLC quite rightly says) the perfectly obvious purpose of this is that some people might find it interesting and want to watch it. Indeed, presumably about $10k worth of people did anticipate finding it interesting and wanting to watch it, since the project did get some backers.
Torture vs. dust specks: I go for dust specks, because it is a reverse lottery. People derive a lot of utility from fantasizing about winning the lottery. Conversely, the disutility the average person derives from fearing the next time they may be the person tortured is larger than the dust speck. That and sympathetic pain.
Of course it was not in the original definition that people actually know about it. But from my angle every even remotely plausible real life scenario involves that people generally know about it.
Also, social contract theory and slippery slopes. If the social contract allows one person to be tortured, next time it could be a million. Slippery slopes are not fallacious as long as a mechanism for the slipping can be demonstrated, and the mechanism here is the lack of a categorical - that is, not even one person - ban on torture. Putting it differently, people doing bad things to each other is part of human nature, so human societies naturally slip towards occasional atrocities; categorical bans are themselves a braking mechanism on that kind of slippery slope, and it is not wise to mess with them. Thus, we are all better off with a social contract that categorically forbids torture: the disutility of worrying about a future where we are not protected by a categorical ban on torture is larger than the disutility of the dust speck.
That really sounds like just fighting the hypothetical. I mean, in practice, if something approximating the experiment was attempted in the real world, your reasoning is right, but that's not at all what the thought experiment is about. Do you at least acknowledge that, given that the people involved don't know about it (and also won't find out about the torture later), torture is the correct option?
This is pretty hard to answer. For moral/ethical questions, I don't want to rely on "pure math" alone but also on intuitions, and I cannot really rely on my intuitions here because they are very much social. As in: immoral is what horrifies a lot of people. I don't really know how to approach the question without relying on such intuitions. Sure, I can calculate the total sum of utils, but how does that quantitative, descriptive approach turn into a qualitative, prescriptive worse/better? I am not at all sure "worse" entirely equals the result of a utility calculation. It is not unrelated to it either, of course; my basic intuition - that wrong is whatever horrifies a lot of people - does correlate with utility as well.
I mean, what else is morality if not some sort of a social condemnation or approval?
Stuart Russell interviewed by Quanta Magazine on the topic of AI safety.
They touch on the phrase "provably aligned" (with human values), which has been singled out before.
I have been thinking about politics again, this time from a meta level and considering motivations for positions.
Among my peer group and much of the media, the dominant model seems to be 'anyone who has center-right views is consumed by hate and/or a useful idiot for the evil ones, and anyone who has further right views is a jackbooted fascist'.
Now, given that the views they cannot tolerate are nothing compared to the NRxers', in a way this strikes me as absurd hysteria. But in another way it makes sense (except for the overreaction). I don't think most people really grasp that, for instance, P(women are better at maths than men on average) should be independent of whether one wants it to be true, or whether one hates women. And while LWers probably grasp this in theory, I doubt that these beliefs and values are actually uncorrelated among LWers, since we are not perfect Bayesian reasoners (or, to put it another way, there is a difference between knowing the path and walking the path).
So far, this is probably fairly obvious. It's also fairly clear that, unless everyone believes you to be a perfect Bayesian reasoner, it is certainly possible that by holding certain beliefs you are signalling moral stances, even though these should be independent.
When I worried that the correlation between testosterone and politics means that political opinions are hopelessly biased by emotions, it was pointed out to me that it could be valid for emotions to affect values if not probability estimates. At the time I accepted this, but now I have largely changed my mind, at least WRT politics on LW.
The reason is that whatever we value, we should hold that the survival of civilisation is a subgoal. (Voluntary human extinction movement excepted).
As an example, there are NRxers who believe that there is a substantial probability that tolerance of homosexuality will destroy civilisation. I don't believe this, but to leave a line of retreat... well, IIRC, future civilisation could be between 30 orders of magnitude and infinitely bigger than current civilisation, depending on the laws of physics. I put it to you that if
P(tolerance of homosexuality will destroy civilisation) - P(tolerance of homosexuality will save civilisation) > 10^-30
then a utilitarian has to be against tolerance of homosexuality, and it doesn't matter whether you hate gays or not; it doesn't matter if you have gay friends or indeed if you are gay. It's a simple (edit: actually, it's quite complicated) cost-benefit calculation. (Although, of course, this does not mean that campaigning on this point would be a productive use of your time.)
If you have a different utility function than "value all human-equivalent life-years equally", then I think this argument should still hold with only slight changes. 10^30 is a very big number, after all.
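To make the arithmetic above concrete, here is a minimal sketch of why a 10^30 multiplier lets even a tiny probability difference dominate an expected-utility comparison. All numbers are placeholders, and the names (FUTURE_MULTIPLIER, CURRENT_UTILITY, expected_cost_of_policy) are invented for illustration, not part of anyone's actual argument:

```python
# Hypothetical upper bound on how much bigger future civilisation could be
# than the present one (the "30 orders of magnitude" figure from above).
FUTURE_MULTIPLIER = 1e30

# Value of present-day civilisation, in arbitrary utility units.
CURRENT_UTILITY = 1.0

def expected_cost_of_policy(net_p_destroy):
    """Expected loss in future utility from a policy, given the net
    probability (P(destroys) - P(saves)) that it ends civilisation."""
    return net_p_destroy * FUTURE_MULTIPLIER * CURRENT_UTILITY

# A net probability of just 1e-29 already produces an expected cost
# larger than the entire present-day stake:
assert expected_cost_of_policy(1e-29) > CURRENT_UTILITY
```

The point is purely about magnitudes: any net probability difference above 10^-30 produces an expected cost exceeding the whole present-day stake, which is why the argument turns on probability estimates rather than on anyone's feelings.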
I should emphasise that I'm not saying that this does justify homophobia. For one thing, I think that a general principle of not defecting against people who do not defect against you could arguably help save civilisation. What I am saying is that the issue of whether we should tolerate homosexuality (for instance) should be a matter of probability estimates and values almost all of us hold in common. Whether one actually loves or hates gays is irrelevant.
That different rationalists hold wildly differing opinions on this matter (as with various other political matters), and moreover polarised positions, is bad news given Aumann's agreement theorem: it suggests motivated cognition and so forth.
Or perhaps it is a sign that deontological or virtue ethics have advantages? I am aware that what I have written probably sounds shockingly cold and calculating to many people.
EDIT: I am not trying to say that tolerance of homosexuality fails the cost-benefit calculation. Nor am I trying to pick on left-wing people for saying that their opponents are evil; I used to think that anyone who was against homosexuality was evil, but then I changed my mind. I realise the right wing also uses "my political opponents are evil" rhetoric, but the left tries to frame everything as heroic rebels vs. the evil empire, with an almost complete refusal to discuss or consider actual policies, whereas I think the right discusses actual policies more.
And whoever just downvoted every single comment in the thread, you are not helping.
Given the attitude of nearly every previous civilization towards homosexuality (including our own until ~30 years ago), I don't see how you can justify assigning a value anywhere close to zero to P(tolerance of homosexuality will destroy civilisation).
So does this count as defecting? What about this?
A large part of my argument is based on my understanding that the Roman empire and Greece and so forth did tolerate homosexuality. AFAIK intolerance of homosexuality in the west started with Christianity.
If you are right that every past civilization was intolerant of homosexuality, then P(tolerance of homosexuality will destroy civilisation) would obviously have to increase a lot.
Yes and yes.
Did the Romans and Greeks "tolerate homosexuality" in the sense we understand that phrase today? They certainly didn't have gay weddings. And allowing people to have homosexual affairs as long as you marry a woman would not nowadays be thought of as toleration, but as an anti-gay double standard.
I think the Romans and the Greeks did not "tolerate", but rather "accepted and celebrated as a morally and socially fine practice". Not to mention that from a contemporary perspective they were all pedophiles and corrupters of youth, anyways X-D
Not when the "passive" partner was a mature adult man, IIRC.
Sort of; the passive partner had to have lower social status than the active partner. For example, at least in Rome, using slaves as the passive partner was common.
Thinking about this conversation again, a few things struck me:
1) When I am thinking about the value of P(tolerance of homosexuality will destroy civilisation), I can recognise a state of mind where I have logical reasons to believe something, but also strong motivated cognition. And this is a state of mind which often, but not always, leads to making mistakes.
2) My defection argument is dubious, given the other various examples of behaviour, such as the links you provided, which also count as defection.
3) By tolerance I generally mean not physically threatening or harassing people. I don't mean, for instance, ranting about "heteronormativity".
Well, one problem is that these days SJWs are trying to get away with calling all kinds of things "physically threatening" and "harassing".
So what should I conclude about your attitude towards men from your use of "testosterone" in that sentence?
Well, ideally you would conclude that I was thinking about the digit ratios measured in the LW survey, which correlate with testosterone but not estrogen.
Estrogen does affect politics too, and when an experiment proved this and was reported in popular science magazines (Scientific American, I think), the feminists lost their minds and demanded that the reporter be fired, despite the fact that both the reporter and the scientists were female.
EDIT: and the article was, in fact, censored.
Are you referring to this article "The Fluctuating Female Vote: Politics, Religion, and the Ovulatory Cycle"? As discussed here?
Yes, I am.
What do you think of Gelman's criticism of the paper as, on scientific grounds, complete tosh? Or as he puts it, after a paragraph of criticisms that amount to that verdict, "the evidence from their paper isn’t as strong as they make it out to be"?
Well, the statistical criticisms they mention seem less damning than the statistical problems of the average psych paper.
This does seem rather large, unless they specifically targeted undecided swing voters. But it's far from the only psych paper with an unreasonably large effect size.
Basically, this paper probably only constitutes weak evidence, like most of psychology. But it sounds good enough to be published.
Incidentally, I have a thesis in mathematical psychology due in a few days, in which I (among other things) fail to replicate a paper published in Nature, no matter how hard I massage the data.
Talk about faint praise!
It's far from the only psych paper Gelman has slammed either.
Such volumes of faint praise!
The work of Ioannidis and others is well-known, and it's clear that the problems he identifies in medical research apply as much or more to psychology. Statisticians such as Gelman pound on junk papers. And yet people still consider stuff like the present paper (which I haven't read, I'm just going by what Gelman says about it) to be good enough to be published. Why?
Gelman says, and I quote, "...let me emphasize that I’m not saying that their claims (regarding the effects of ovulation) are false. I’m just saying that the evidence from their paper isn’t as strong as they make it out to be." I think he would say this about 90%+ of papers in psych.
Yes. I think he would too. So much the worse for psychology.
Now consider what kind of publication biases incidents like that introduce.
Well, one would hope that journals would continue to publish, but the public understanding of science is inevitably going to suffer.
How about what's actually likely to happen, as opposed to what one would hope would happen.
What is likely to happen is that publication bias increases against non-PC results.
Correct.
You may have heard accusations that conservatives are "anti-science". Most of said "anti-science" behavior is conservatives applying a filter to scientific results attempting to correct for the above bias.
Of course this doesn't give one a licence to simply ignore science that disagrees with one's politics. Perhaps two PC papers are as reliable as one non-PC paper? Very difficult to calibrate properly, I would think, and of course the reliability varies from field to field.
The problem is that the experiment likely didn't prove it. A single experiment doesn't prove anything. Then the reporter overstated the results, which is quite typical for science reporters, and people complained.
Yes, it is true that there are massive problems in failure to replicate in psychology, not to mention bad statistics etc. However, a single experiment is still evidence in favour.
Actually, the reporter understated the results, for instance by including this quote from an academic who disagrees:
Thing is, Prof. Carroll is not a neuroscientist. So what gives her the right to tell neuroscientists that they are wrong about neuroscience?
Whether the reporter should be fired is not only about the quality of the experiment.
The journalist in this case.
What criteria would you advocate then?
Yes, obviously she has the legal right to argue about things she has no understanding of, and equally obviously I was not talking about legal rights.
Reporters do this all the time. And yet they only get punished for it if the result is politically incorrect.
Yes, reporters get away with a lot. That doesn't make it better.
I think you should distinguish between
Not to mention that you sound Pascal-mugged.
Why shouldn't one want the statement "women are better at maths than men on average" to be true? Note: don't confuse the above statement with the statement "men are worse than [this fixed level] at maths on average".
Well, it's certainly widely considered that wanting there to be differences between the sexes is wrong, or at least it is if men are better at something. Personally, I don't care whether men or women are better at maths, but if most people do, then I suppose they are entitled to their own values.
I'm not sure about that. Near as I can tell their values here are either poorly thought out or insane. Consider the following thought experiment:
Suppose men are on average better at math than women. Suppose you could reduce the male average to the female average by pressing a button. Should you?
Well, that both decreases inequality and lowers the average. A better thought experiment would be to ask whether, if you had a button which would affect the next generation of children (so you do not infringe on the rights of people who already exist) to increase math ability in women but decrease it in men, should you use it to bring the averages in line?
Far stranger actually is that some people seem to be strongly attached to the idea that men and women are equally strong on average, even though this is obviously not true.
You can take this further. Would the world be better if everyone was equally good at everything? Seems kinda dull to me.
Well, that would make the universe less organized, and in particular make it harder to find the people who are best at math, so it would likely retard scientific progress somewhat.
If you want to find the best people in maths, you are far better off testing them, rather than reasoning based on the base rate, unless the inter-group difference is very large.
The sentence you quote doesn't make a statement about whether one should want it to be true. It makes a statement about "wanting it to be true" being independent from "being true".
I just want to point out that this is more or less what was called conservatism for a long time, before it got more radical. If you look up e.g. Edmund Burke's works, you find precisely the attitude that civilization is worth preserving, yet it is something so fragile, so brittle, that radical changes could easily break it. So the basic idea was to argue against the progressivist idea that history has a built-in course, going from less civilized to more civilized, such that we will never become less civilized than today and the only choice is how fast we progress further. Burke and other early conservatives proposed a more open-ended view of history in which civilization can easily be broken - or a cyclical view, in which empires rise and fall. Part of the reason they considered civilization so brittle was that they believed in original sin making it difficult for human minds to resist temptations towards destructive actions, like destructive competition. An atheist version of the same belief would be that human minds did not evolve for the modern environment: the same destructive competitive instincts that worked fine back then could ruin things today. To quote Burke: "Society cannot exist, unless a controlling power upon will and appetite be placed somewhere; and the less of it there is within, the more there must be without. It is ordained in the eternal constitution of things, that men of intemperate minds cannot be free. Their passions forge their fetters."
This moderate view characterized conservatism for a long time, for example, National Review's 1957 takedown of Ayn Rand was in this Burkean spirit.
However, throughout the 20th century conservatism all but disappeared from Europe, and it turned into something quite radical in America. Far from a civilization-preserving school of ideas, it became something more radical - just look at National Review now and compare it with that 1957 article. I don't really know the details of what happened (I guess the religious right awakened, among other things), but it seems conservatism in its original form has pretty much disappeared from both continents.
Today, this view would be characterized more as moderate; e.g. David Brooks seems to be one of the folks who still stick to this civilization-preserving philosophy.
My point is, you probably need to find people who self-identify as moderates and test it on them, e.g. moderatepolitics.reddit.com.
What on earth are you talking about? Take a typical left-wing position from ~50 years ago (or heck, ~10 years ago). Transport it to today, and it would be considered unacceptably, radically right-wing. Hence the reason left-wing politicians constantly have their positions "evolve".
For example, the parent said:
That is, nearly the whole political spectrum from as recently as ~15 years ago is now considered "evil" by 'mainstream' leftists.
Look, policies are the least important part of political identities. Personality, tone, mood, attitude, and so on - people's general disposition - are the defining features, and in this sense, yes, the Michelle Malkin types today are far, far more radical than the Whittaker Chambers types back then.
It is a huge mistake to focus on policies when trying to understand political identities. Something entirely personal, such as parenting style, is far, far more predictive. A policy is something that can be debated to pieces; it is far too pragmatic, and people can come up with all kinds of clever justifications. But if a person tells me their gut reaction when they see a parent discipline a child with a light slap, I know pretty much everything I need to know about their political disposition and attitude, philosophy, approach to society and life in general, views of human nature, and so on - everything that really drives these things. Or, another example: the gut reaction they have to a hunter posing with a trophy. This pretty much tells everything.
And in both your examples, what today comes across as the "conservative" reaction was the standard reaction of everybody except parts of the far left ~50 years ago.
Random policy thought I just had: Hire retired whores to teach sex ed classes. There are no better experts, and they'll (hopefully) be more inclined to teach what people actually want and need to know, rather than transparently disguising scare-em-straight tactics as education.
[Edit: I'm not entirely sure why this got downvoted as heavily as it did; it's the sort of pulling-policy-ropes-sideways thing that I would have expected to go over better here than most places. I'll retract it, but I'll wait a few days first in case someone cares to enlighten me.]
Since you seem to be sincere in asking for reasons:
"Whore" is considered an unpleasant word by many people. That, combined with the overall tone, may have made people think your intention was trollish.
You seem to deeply misunderstand the dynamics that led to sex education being the way it is. There is no plausible transition from the way the world exists at present to one where retired sex workers are employed in the school system to teach sex education.
a) Because the majority still have moral objections to sex work and it is illegal in many places.
b) Because there is no common agreement that children should be taught about sex, full stop, much less about sexual techniques aimed at pleasure. The only reason the very minimal sex education that does exist has been allowed is that it is framed in terms of health.
Thanks for paying the karma toll to answer me.
I picked up the usage from a couple of sex workers' blogs. Now that it's brought to my attention, though, I think they were explicitly trying to reclaim the word, which implies there was a problem with it to begin with. I should have caught that before using it in other venues.
Guilty on tone if not trollishness. I'll admit I'm seethingly hostile to grade school in general and sex ed/drug ed/anything with the same general characteristics in particular; I consider the latter fundamentally dishonest and an insult to the students.
Agreed. I presented the idea because it seemed both good and original; I know it's not politically tenable. The issues you mention are real ones; I just file them both under "people are crazy, the world is mad."
In general almost no school classes are taught by domain experts.
But are they even the best experts? Prostitutes are in interactions that are focused on giving their client pleasure in the least amount of time instead of focused on the enjoyment of both parties.
One of the most important lessons that a school could teach on the subject might be: "Talk with your partner about what they enjoy and communicate your own desires." That's much different in a non-money-based interaction.
Perhaps I'm just parsing your words wrong, but it looks as if you're suggesting that most non-commercial sexual interactions have "in the least amount of time" as a major goal. I'm fairly sure that's far from the case.
(I agree with your other point, and would add that many -- I suspect most, and perhaps a large majority -- of non-commercial sexual interactions are not purely sexual; they occur in a context of some kind of ongoing relationship. That can make a substantial difference too.)
(In case anyone else is confused by gjm's confusion, the words "in the least amount of time" in ChristianKl's comment used to come after "instead of focused on the enjoyment of both parties" rather than before.)
Thank you.
There are no better experts at impersonal sex carefully walled off from the real "you". They are probably pretty good at separating johns from their money, too...
I see so much wrong with this that I don't know where to start:
The Future of Sex: It Gets Better
http://www.wsj.com/articles/the-future-of-sex-it-gets-better-1430104231
For one thing, the author, Laura Berman, Ph.D., (an alleged "sex educator") in an essay about "the future of sex" doesn't address the trend towards the increasing eviction of more and more of young men from the dating pool because young women find them "boring." This trend has advanced far along in Japan, the country that seems to exist about 20 years ahead of the rest of the developed world, where reportedly a quarter of the men in their early 30's have had no sexual experience. (And Japanese men live relatively close to other Asian countries popular with Western sex tourists, like the Philippines and Thailand; so you wonder why they don't seek out prostitutes in those countries just a short flight away.)
And two, consider her nonsense about:
And,
This shows that Berman doesn't see the importance to a young man's development of getting into sexual relationships through dating real, live women. In her absurd futurist vision, men would do just fine staying emotionally and socially stunted, masturbating with sexbots and plugging into orgasmatrons.
On second thought, perhaps Berman does see the sexual eviction crisis, and she proposes these speculative technologies as a way to keep the sexually yucky boys away from trying to get into dates and relationships with the cool girls like her.
Then why bring it up on LW?
GiveWell partners with co-founder of Instagram and his fiancée.
http://blog.givewell.org/2015/04/23/co-funding-partnership-with-kaitlyn-trigger-and-mike-krieger/
My scorecard:
GiveWell Loses
Small amount of productivity (Teaching Kaitlyn)
Large amount of credibility (Donors influenced priorities)
GiveWell Gains
$750,000
Higher expected donations from Krieger and his network of wealthy friends
What do you think of the partnership? I am disappointed, but open to changing my mind.
GiveWell already has donors that might influence priorities. I don't think having donors means losing credibility.
My attempt to delve into Chinese philosophy has brought me to Xunzi. Only the last sentence is short enough to be a quote on its own, but I feel it is strengthened by the paragraph leading to it so much that I have to quote the whole paragraph (which I've separated into multiple paragraphs for readability):
Story-like Object: FAQ on LoadBear's Instrument of Precommitment
My shoulder's doing better, so I'm getting back into 'write /something/ every day' by experimenting with a potential story-like object at https://docs.google.com/document/d/1nRSRWbAqtC48rPv5NG6kzggL3HXSJ1O93jFn3fgu0Rs/edit . It's extremely bare-bones so far, since I'm making up the worldbuilding as I go, and I just started writing an hour ago.
I welcome all questions that I can add to it, either here or there.
It seems to me that a lot of "smart" people are capable of applying their intelligence in some spheres, but not others.
It also seems to me that this view is shared by other people. Can anyone point me to an article that does a good job arguing for it?
Tangential point: in deciding how smart I think someone is, for me, a lot of it has to do with how low they're capable of stooping. (I know this is just me saying "this is how I define a word", which is a pretty useless thing to say... but at the same time maybe there's something deeper that I'm trying to articulate that I haven't been able to spell out precisely with the above statement, but that maybe people could understand via some sort of empathetic inference?)
Well, since I'm on LW the first article to come to mind was Outside the Laboratory, although that's not really arguing for the proposition per se.
As for the stooping thing, I'm not entirely sure what you mean, but the first thing that came to mind was that maybe you have a rule-out rather than a rule-in criterion for judging intelligence. As in: someone can say a bunch of smart things, but at best that just earns them provisional smart status. On the other hand, if they say one sufficiently dumb thing, that's enough to rule them out as being truly intelligent.
I thought Outside the Laboratory was a good discussion of "smart" people not applying their intelligence outside their sphere, thanks!
What is your exact claim? That people don't have the ability to apply their intelligence if they choose to do so, or that they simply don't choose to apply it?
What do you mean by intelligence? If it's something like rational thinking, many people use different standards in different domains. A person who on the one hand believes that blinded, placebo-controlled trials are necessary to establish causation can still believe that it's possible to analyse the causation of single events in history and learn from that history.
Hanson's "Don't be a rationalist" might be interesting.
Can we finance cryonics by revival awards?
Create a market for frozen humans. The reward is for the agent who performs the revival. Investors can either search for revival technology and patent it, or they can invest in frozen humans, which they can sell to agents who wish to attempt revival.
That sounds like an excellent plot for a dystopian horror movie.
What about revival attempts that fail in ways that kill the patient? E.g. a destructive scan for an upload that turns out not to be accurate enough to run? How can we discourage people from taking unacceptable risks with our frozen bodies just to exploit us for a quick buck, without also discouraging them from trying to revive us at all?
Or which is accurate enough to run but not accurate enough to be on a meaningful level "the same person".
Oh, that could be even worse incentives-wise. As far as the patient's subjective experience goes, it's a fatal accident. As far as the people reviving them care? If the patient is alive-looking enough to collect the prize, they've succeeded and any efforts to get more accurate scanning tools involved would be a pointless waste of money.
Can I add an image to the file database, and/or add an image to a post I plan to make in Discussion? The sandbox doesn't explain how to do it, although I did manage to add (well, preview) an already-existing image called Example.jpg.
If you click on the insert/edit image button you get a window with image options. Within that window, to the right of the "Image URL" textbox, there is a button with mouseover text "Browse." Clicking that will open up a new window that lets you upload files (Choose File) and use files you've already uploaded.
Alcor 2015 Conference: The Alcor 2015 Conference will be held on October 9-11, 2015 at the Scottsdale Resort and Conference Center at McCormick Ranch, located at 7700 East McCormick Parkway, Scottsdale, AZ 85258.
STAY TUNED FOR MORE DETAILS
http://www.alcor.org/AboutAlcor/conference.html