Comment author: gwern · 01 November 2012 09:17:32PM · 1 point
I just finished the CMU OLI Probability & Statistics course, which I started... somewhere back in March or June. I think, overall, it's a pretty good statistics course. What I like best about it is that it is heavy on quizzes and exercises with real-world datasets, so I learned a bit more about R as well as learning the basics.
It covers, from a fairly practical standpoint: data graphing; summary statistics like means, medians, and distributions; the rules of probability; conditional probability; probability trees; Bayes's theorem; the binomial and normal distributions in particular; confidence intervals; z-tests; t-tests; ANOVA F-tests; the chi-squared test; and linear models.
It has some drawbacks, of course: it's largely NHST-based, as one would expect; the Java applets make copy-and-paste impossible on my Linux system, which made answering questions a bit annoying; the R code is not really explained, so you have to figure things out yourself; and there's a jump in difficulty between the units, while the one on the basic laws of probability seems weirdly long and interminable. In general, parts of it can be very repetitious (if I never have to specify the null hypothesis and H_1 again, it will be too soon) and trivial, leading to occasional '-_- yeah, whatever' reactions where I get sick of a pedantic question and just click through the possibilities.
But overall I'm pretty glad I did it. I understand much better the tools I was using to analyze my self-experiments and hopefully it'll be a good base for tackling a Bayesian textbook like Kruschke's 2010 Doing Bayesian Data Analysis.
I was recently reading an outraged discussion of the warnings New York City had gotten about the risk of flooding, and I asked what less currently obvious infrastructure threats were being ignored. I didn't get much discussion there, so I'm asking here.
Comment author: AnotherIdiot · 29 October 2012 01:51:39PM · 0 points
My [uninformed] interpretation of mathematics is that it is an abstraction which does exist in this world, one which we have observed much as we might observe gravity. We then go on to infer things about these abstract concepts using proofs.
So we observe numbers in many places in nature, and from those observations we make a model of numbers (an abstract model of all the things we have observed following the rules of numbers). From our model of numbers we can then infer properties of numbers, much as we can infer things about a falling ball from our model of gravity, and these inferences are "proofs". (Thankfully, because numbers are so much simpler than most things, we can list all our assumptions and have perfect information about them, so our inferences are indeed proofs in the sense that we can be certain of them.)
But it seems like a common view that mathematics has some sort of special place in the universe, above the laws of physics, and I don't really know what arguments people have for believing this. What are the arguments for this belief?
Edit: Reformulated my question to make it more specific.
Comment author: Epiphany · 28 October 2012 08:27:36AM · 5 points
I'm having a pretty intense reaction to reading certain articles and could use some support or a solution:
Here's what I read and my reactions:
Feynman's Cargo Cult Science (Which is about how a lot of scientific studies are done badly, often due to researchers not being allowed to do the research correctly.)
"During many of my 20 years at Stanford University, Albert Bandura and I tried to hold on to a science-based clinical training program. The bizarre situation we faced there is of more than personal and historical interest: I suspect that many of the same conflicts still exist and motivate the efforts described by Baker and colleagues. Bandura and I, and our students and other colleagues, were discovering the remarkable discrepancies between what the scientific work was revealing and the requirements imposed by the pressures for maintaining accreditation. The professional accreditation requirements insisted on continuing practices whose value was contradicted by the empirical findings. Those requirements not only flew in the face of the data but also made enormous demands on faculty and student time in the clinical program."
What meaning is there in doing anything (being a doctor or a psychologist for instance... or any number of other professions) if we can't even trust the research or the schooling? How can I make a difference in the world or do anything useful with no real knowledge? How do you find meaning, LessWrong?
Thank goodness I found this place. I am in love with the glimmers of sanity I see here. Before I found LessWrong I was just kind of... "WTF humanity is a mess." Now it's more like "WTF humanity is a mess but at least there's a group of people trying not to be." If anyone is up to describing this wonderful and horrible feeling in their own words, I could really use to feel related to about this.
Do you know of a website where one can look up a piece of research to see what flaws it has? Is one planned? I need this because it would take a very long time for me to read enough on each relevant topic to discover whether a piece of research I want to use is flawed or not. For instance, Feynman explained about how lots of studies have been done with mazes and rats, but people didn't seem to realize that the rats were using methods to find the food that were unexpected, and all sorts of stuff has to be controlled for, ranging from the scent of food to the type of flooring in the maze. If you don't know that all of these things need to be controlled for, you won't know that the vast majority of studies done on putting rats into mazes are useless. It's simply not realistic to expect ourselves to be able to single-handedly give every single study we read a thorough enough review to detect all the flaws. I love research, but I now feel that it's futile. Does anyone know a solution? I know that peer reviewed journals are supposed to address this type of problem, but I don't see the online studies that I find being rated or marked as flawed in an obvious way.
Comment author: satt · 04 November 2012 03:00:31PM · 2 points
What meaning is there in doing anything (being a doctor or a psychologist for instance... or any number of other professions) if we can't even trust the research or the schooling? How can I make a difference in the world or do anything useful with no real knowledge?
That makes things sound worse than they are. I disagree that we have no real knowledge, and I'm also not sure about lumping doctors or psychologists together in this context. In medicine there are effects so huge that explaining them away as publication bias or spurious correlations is implausible (maybe because the relative risk is so huge, as with smoking causing lung cancer, or because the base rate is so low, as with asbestos causing malignant mesothelioma), so I count them as real knowledge. But I don't know of similarly huge effects in psychology, so psychology might differ in that key respect.
(Here's a speculative tangent that belongs in brackets. The foregoing might partly explain bad epistemic habits in research. Historically, lots of research went into things we basically fixed with magic bullets. So it didn't much matter when people suppressed negative results or leaned heavily on observational studies; the true effect of the magic bullets was so huge that it held up despite the biases. This might've gotten researchers into the habit of not worrying about, or not finding out about, methodological biases. But now that we're searching for smaller effects, those biases matter.)
Better still, most of the problems you refer to above are solvable. We could, for instance,
publish negative results
learn about warning signs that can indicate flaky study results
force researchers to publicly announce trials and their endpoint measures before their trials begin
force researchers to disclose funding sources and possible conflicts of interest
put more effort into searching the grey literature and foreign literature when reviewing studies
focus on randomized experiments (and use placebo controls where applicable in medical or psychological trials) over observational studies
impress the importance of evidence-based methods & treatments on practitioners and professional organizations
use statistical tests for detecting publication biases in reviews and meta-analyses
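The last item can even be sketched concretely. Egger's regression test is one standard publication-bias check: regress each study's standardized effect on its precision, and a fitted intercept far from zero signals funnel-plot asymmetry. (The function name and the five studies below are invented for illustration; this is a minimal sketch, not a full meta-analytic workup.)

```python
import numpy as np

def egger_test(effects, standard_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses each study's standardized effect (effect / se) on its
    precision (1 / se); an intercept far from zero suggests small-study
    effects such as publication bias.  Returns (intercept, slope).
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(standard_errors, dtype=float)
    z = effects / ses            # standardized effects
    precision = 1.0 / ses
    slope, intercept = np.polyfit(precision, z, 1)
    return intercept, slope

# Five hypothetical studies: effect sizes and their standard errors.
intercept, slope = egger_test([0.42, 0.38, 0.55, 0.30, 0.47],
                              [0.05, 0.08, 0.20, 0.10, 0.15])
```

In practice the intercept would be tested against its standard error for significance; the point is just that the check is mechanical once you have the effects and their precisions.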
So, supposing I did accept the premise that the research base is so bad as to make doctors and psychologists useless, there'd still be an obvious alternative to giving up and walking away: I could become an epidemiologist or a medical statistician or a policy pundit, and encourage people to do the things I listed above.
Comment author: Epiphany · 04 November 2012 07:05:12PM · 0 points
Thank you for responding to this, Satt. I really did need some input here, and it's very good to see another perspective and to have been shown a whole list of things that could be done.
I am in an unusually bad situation because the subject I'm most interested in is psychology. I noticed something was wrong with the psychology industry while I was still young enough to avoid getting into it. The three main problems are:
That you have to diagnose people immediately to collect insurance payments when in reality it takes a long time to know whether there's even anything wrong with them at all, and being deemed "messed up" by a professional could be very hurtful to the patient.
I could tell that a lot of what was passing for therapy was BS and decided there must be something drastically wrong with the schooling. I didn't know that it was this bad, but I am glad I noticed something was drastically wrong early on.
I am primarily interested in gifted adults. Neither an abnormal psychology degree nor a developmental psychology degree would give me a solid understanding of gifted adults - those focus on the average Joe and on children with learning disabilities, respectively. Gifted adults are not well served by the typical therapist (imagine taking a space ship to a car mechanic), nor by schooling methods intended for children with learning disabilities. I didn't realize that my main interest was in gifted adults until later, but I could tell that the psychology I had been exposed to wasn't what I was looking for. I have a space ship myself, and wanted psychology that taught me about space ships like mine.
So I went to college for web design instead. I studied psychology on my own. I love being a web developer, a lot, but I want to really make a difference in the world and I don't feel that adding little buttons to websites is making that happen. Of course, web development can be used for making a difference, too, but if most of what I know about psychology is wrong (it quite possibly could be?) then how am I supposed to pursue my main interest? I was hoping to do self-improvement writing, and I can still do that at any time, and possibly gain an audience that way, but if the foundation of knowledge I am working from is bad, then it's not useful to do so. What I want to get from writing about self-improvement is meaning, not money, so that would be unacceptable to me.
Something occurred to me: I've learned enough about the psychology of gifted adults now that I'd probably have a strong advantage when it comes to writing review articles or meta-analyses on gifted adults. I'm not credentialed, so could not give the articles any traditional "credibility" (that's in quotes for a reason, now that I know all of this...). However, considering the circumstance (that getting an accredited psychology degree requires you to learn a bunch of mumbo-jumbo and that they don't teach about gifted adults anyway), I'm thinking that getting a degree would not increase the quality of my articles substantially enough to justify spending tens of thousands of dollars and so many hours on it. Reading the key books on research practices would probably be the best action, though I do not know what they are.
If you (or other LWers) have thoughts on how to approach this sticky problem, I'm interested in hearing them.
Comment author: MixedNuts · 04 November 2012 07:43:50PM · 0 points
What do you mean by "gifted adults"? Just "adults with very high IQ"? I think there's a standard trick for that: you pen them all together, and then you have a regular human society where the social effects of giftedness disappear. Or do gifted people have abnormal psychology in absolute terms, not just relative terms like alienation and boredom and so on?
Comment author: Epiphany · 04 November 2012 08:22:45PM · 1 point
There are lots and lots of definitions for "gifted". States' legal definitions range from vague things like "people with a talent" to numerical specifications. The gist: I've seen definitions that range from a rarity of 1 in 4 to 1 in 50. Truth be told, my real interest is highly gifted adults and geniuses, not just "gifted adults" in general.
From what I've read, "highly gifted" tends to be associated with IQs > 145.
The people in each IQ range have their own characteristics. People with IQs near 130 tend to be more popular. People with IQs around 160 or greater have difficulty fitting in and tend to limit social contact because they are too different. (These are relative tendencies, obviously.) It has been observed that people with IQs over 145 frequently have enough intensity that they come across in an energetic way that gets called everything from electric to charismatic. This appears to be genetic. There are other things, like how exceptionally gifted children have trouble answering "simple" questions and doing "simple" tasks like "draw a bird": too many options come to mind, and they then have to choose between 100 kinds of birds.
This is just the tip of the iceberg when it comes to the differences that have been talked about. I am not sure that any one piece of research I've read is true, but there are probably over a hundred differences that have been either researched or observed by psychologists who work with gifted individuals. I have observed a lot of these differences for myself, and have seen patterns. I can also use what I know to make guesses about who is gifted and how gifted they are, and I am usually close. I feel certain that there are a huge number of differences of both types, though what, specifically, they are and how common they are in each IQ range would be hard to say.
Also, I don't think it's called "abnormal psychology" when there's nothing wrong with them.
Comment author: Epiphany · 03 November 2012 12:39:51AM · 0 points
suddenly thinks of a coping strategy
Wikipedia addresses this... I was just reading the Wikipedia article on the Paleo diet and saw a bunch of stuff about repeatability and study relevance, like:
Loren Cordain, a proponent of a low-carbohydrate Paleolithic diet, responded to the U.S. News ranking, stating that their "conclusions are erroneous and misleading" and pointing out that "five studies, four since 2007, have experimentally tested contemporary versions of ancestral human diets and have found them to be superior to Mediterranean diets, diabetic diets and typical western diets in regards to weight loss, cardiovascular disease risk factors and risk factors for type 2 diabetes."[27] The editors of U.S. News replied that their ranking included a review of all five studies which found that all of them were small and/or of short duration.
I realize Wikipedia isn't credible for citing or anything but I feel heartened because:
I bet they often link to a credible meta-analysis, making it easier to find them (I've been told by Gwern that one way of coping with this is to read a meta-analysis because it gives you a number of advantages over reading individual pieces of research).
It serves as a method for finding out about some of the flaws you need to look for when reading studies on the topic.
It often lists a collection of relevant research, which can save time.
It might be a good starting point for creating your own thorough reviews of studies because a lot of things will already have been hashed out, so it's just a matter of verifying that what's there is correct, which should save time if you build on it.
Hm...
Wikipedia is not a perfect solution but I think this will help me cope.
.oO I wonder if there are features that could be added to Wikipedia that would encourage the entries to transform into credible meta-analyses...
Comment author: gwern · 03 November 2012 01:29:06AM · 1 point
A very good Wikipedia article will be equivalent to a review article, but such an article isn't a meta-analysis: it doesn't include only studies which can be boiled down to a few summary statistics like d. There's also little way of being sure that the article is comprehensive and unbiased - one reason meta-analyses usually make a point of how they did a big search on Pubmed and looked through hundreds of results etc.
I don't know what features could be added to deal with either problem. Any meta-analyses tucked into WP articles would be rightly considered Original Research.
Comment author: [deleted] · 27 October 2012 08:56:44AM · 5 points
Probabilistic Voting
4chan apparently faked Bieber having cancer and got some fans to cut their hair off.
On 4chan, I just saw someone say "We rolled to see which celebrity's fans we'd troll into thinking said celebrity had cancer. One thing lead to another."
That got me thinking about the whole "rolling" thing. If you're not familiar, on 4chan every post has a sequence number. The /b/ board is fast enough that you can't really predict the numbers. Having an authoritative common-knowledge source of randomness available for literally zero effort has led to some interesting coordination strategies and community norms.
There's lots of interesting ways that gets used, but right now, the coordination thing is what interests me. Some interesting observations:
People second ideas, quote them, edit them, etc., such that there is an evolving pool of ideas with probability of winning proportional to popularity (this bypasses a lot of the crap in voting systems).
The cost of creating new ideas or minor variations is zero.
Absence of the normal incentives to vote strategically; you put forth your best idea. (There is the consideration of optimizing your idea for getting seconded.)
No complex counting algorithm. As soon as the winning idea is posted, everyone knows it and starts acting on it.
Anyway, I thought that might be interesting. I'd like to see some more work on this.
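For what it's worth, the mechanism sketches easily. Here's a toy simulation (the idea names, counts, and board parameters are all invented) of why reposting an idea makes it win with probability roughly proportional to its support: every post has an equal chance of "rolling", so an idea with three times as many reposts rolls first about three times as often.

```python
import random

def ends_in_dubs(n):
    """A post 'rolls' when its sequence number ends in repeated digits."""
    s = str(n)
    return s[-1] == s[-2]

def run_thread(ideas, rng):
    """ideas maps an idea to how many supporters repost it.

    The winner is the first repost whose post number rolls; since every
    post is equally likely to roll, an idea's chance of winning is
    proportional to its number of reposts.
    """
    post_number = rng.randrange(10**8, 10**9)  # fast board, unpredictable
    posts = [idea for idea, count in ideas.items() for _ in range(count)]
    while True:
        rng.shuffle(posts)
        for idea in posts:
            post_number += rng.randrange(1, 50)  # unrelated traffic between reposts
            if ends_in_dubs(post_number):
                return idea  # everyone sees the roll; coordination is instant

rng = random.Random(0)
wins = {"idea A": 0, "idea B": 0}
for _ in range(2000):
    wins[run_thread({"idea A": 3, "idea B": 1}, rng)] += 1
# With 3x the reposts, "idea A" should win roughly 3/4 of the threads.
```

The "no complex counting algorithm" point falls out of the return statement: the first roll is common knowledge, so no tallying step is needed.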
Comment author: DanArmak · 26 October 2012 07:44:41PM · 0 points
How to comment when I arrive late at a post with many comments?
I usually only read LW every day or two. I'm also in the GMT+2 timezone, so US people mostly comment while I'm asleep. So when I reach an interesting post, like this one just now, it already has many comments. I really want to reply to some of them and to the post itself, but first I need to read all of the comments and internalize everything that has been said already, or I risk repeating what others have already pointed out. For a post with hundreds of comments, this is a lot of work.
I would welcome any tips for being better, or more efficient, at this.
Comment author: Alicorn · 26 October 2012 03:27:48AM · 8 points
It recently occurred to me that there is a near-example of (hostile) acausal interaction in popular culture. In the second Robert Downey Jr. Sherlock Holmes movie, he and Professor Moriarty have an entire "conversation" without speaking aloud, each simulating the other so they can decide what to do in their fight. It's rendered in a very comprehensible way, too, considering how weird a concept acausal interaction is. (It's not a perfect example since they do interact, but the conversation itself happens entirely in simulation.)
There are lots of examples in the movies of two geniuses facing off and one asserting that the other can simulate the first so well as to understand and counter a particular plan; that is, of A simulating B simulating A. This example has the advantage of showing the hypothetical, rather than asserting it.
This is an example of a mainstream depiction of exploring the game tree. It's worth promoting, but I prefer not to call it acausal interaction.
Comment author: [deleted] · 25 October 2012 10:34:35AM · 0 points
Has someone been karmassinating me? I'm pretty sure the karma scores of almost all comments of mine from 22 October 2012 09:15:59AM to 24 October 2012 04:54:24PM are lower than they used to be. (What is the proper thing to do when one notices something like this, BTW? I'm not sure it's posting in the open thread, but I can't think of anything else.)
Comment author: Gabriel · 29 October 2012 03:07:20PM · 0 points
I agree to an extent with the fictional Abe Lincoln quote. Quoting famous people serves mostly as a means of signaling so that whatever you're saying sounds more convincing. The actual epistemic value of quotes is so low (if it's even positive at all) that it's justifiable to burden yourself with the task of checking the exact origin of every single quote you encounter before you start repeating it. (But it won't be "all the time", as you have to check only once per quote.)
Comment author: Neph · 24 October 2012 08:56:00AM · 1 point
hello, all. first post around here =^.^= I've been working my way through the core sequences, slowly but surely, and I ran into a question I couldn't solve on my own. please note that this question is probably the stupidest in the universe.
what is the difference between the Bayesian and Frequentist points of view?
let me clarify: in Eliezer Yudkowsky's explanation of Bayes' theorem, he presented an iconic problem:
"1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?"
to my understanding of the Bayesian perspective, the answer would be 7.8% and would represent the degree of uncertainty that the subject has breast cancer.
to my understanding of the Frequentist perspective, the answer would be 7.8% and would represent the long-run frequency of cancer among subjects who tested positive.
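Either way the arithmetic is the same; a quick sketch of the 7.8% using the quoted numbers:

```python
# P(cancer | positive test) via Bayes' theorem, using the quoted numbers.
p_cancer = 0.01              # prior: 1% of screened women have breast cancer
p_pos_given_cancer = 0.80    # true-positive rate
p_pos_given_healthy = 0.096  # false-positive rate

# Total probability of a positive mammography.
p_pos = (p_cancer * p_pos_given_cancer
         + (1 - p_cancer) * p_pos_given_healthy)

# Posterior probability of cancer given a positive result.
p_cancer_given_pos = p_cancer * p_pos_given_cancer / p_pos
print(round(p_cancer_given_pos, 3))  # → 0.078
```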
a keen observer will understand where my confusion comes from- on my way through the core sequences, I have heard much from the Bayesian side, but nothing from the Frequentist side, making it seem artificially non-existent.
Comment author: Alejandro1 · 09 November 2012 07:06:36AM · 0 points
The classical way of explaining the difference is through the example of a coin that you know is biased, but you don't know whether heads or tails is favored and by how much. What is the probability that the next toss will be heads?
Supposedly, a frequentist would say that there is an objective answer, given by the bias of the coin which also equals the proportion of heads in a long run. You just don't know what it is, the only thing you know is that it is not 1/2. A Bayesian would say by contrast that since you have no information to favor one side over the other, the probability (degree of belief) you have to assign at this point is 1/2.
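The Bayesian half of this can be put numerically. (The grid and the "uniform over every bias except 1/2" prior below are my own stand-ins for "biased, but direction and size unknown"; the point is only that any symmetric prior gives a predictive probability of 1/2.)

```python
import numpy as np

# Discretize the unknown bias and put a symmetric prior on it that
# excludes the fair value 1/2 ("the coin is biased, but we don't know
# which way or by how much").
biases = np.linspace(0.0, 1.0, 1001)
prior = np.where(np.isclose(biases, 0.5), 0.0, 1.0)
prior /= prior.sum()

# Predictive probability of heads: average the bias over the prior.
p_heads = float(np.sum(prior * biases))
print(round(p_heads, 3))  # → 0.5
```

The frequentist's "objective answer" is one of the grid points; the Bayesian's 1/2 is the average over all of them, weighted by belief.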
This only explains the question of Frequentism vs Bayesianism as philosophical interpretations of "what probability is". The practical issue of Frequentism vs Bayesianism as concrete statistical methods is often tangled with this one in discussions, but it is really a separate matter.
Comment author: [deleted] · 09 November 2012 04:28:16AM · 0 points
I had the same issue, and I'm personally not convinced there's an actual "Bayesian vs frequentist" conflict as framed in the sequences. Both are useful ways of thinking in different scenarios.
To use Emile's example, there's a distinction between the probability that you think the millionth digit of pi is even or odd, and whether it really is even or odd. Even though you don't know the millionth digit offhand, it can be computed and has a definite value, so it really doesn't matter what you think it is. Saying 50:50, or more generally an equal probability distribution, is in my mind basically the same as saying "I don't know" (i.e. "I have zero evidence for deciding one way or the other.")
There's also a difference between the parity of the millionth digit of pi, and, for example, the wind speed at an arbitrary place and future time. It's impossible to calculate, so instead you can apply Bayesian methods and estimate a range of values based on prior knowledge, and any historical data you might have access to.
Comment author: Emile · 24 October 2012 10:16:12AM · 7 points
The bayesian/frequentist distinction can cover three different things that may occasionally be mixed up:
The core philosophical disagreement (the "proper" one) about whether probabilities represent an agent's knowledge / uncertainty about the world, or whether they represent frequencies of some event. For example, a frequentist in this sense might say that it's meaningless to talk about the probability that the millionth binary digit of pi is even or odd. I think frequentist epistemology is mostly discredited, but that it used to be dominant.
The methodological split: a bunch of hodge-podge statistical methods and tests (like p-values), versus later attempts to unify everything in terms of Bayesian methods. People used to the "old" methods may not particularly call themselves "frequentists" or care that much about such labels; those pushing for the new (better) methods are the ones stressing the distinction (hunting down the sin of frequentism), sometimes to the annoyance of the rest.
Thinking in probabilities versus thinking in frequencies (80 women out of a hundred); the human brain works better when a problem is presented in terms of frequencies.
Comment author: Emile · 24 October 2012 10:01:50AM · 0 points
I don't think Bayesians and Frequentists would answer that question differently; frequentists also use Bayes' Theorem, they just don't base all their philosophy on it.
If I would be better off taking both boxes,
I desire to choose to take both boxes;
If I would be better off taking only box B,
I desire to choose to take only box B;
Let me not become attached to decisions I may not want.
Comment author: wedrifid · 24 October 2012 02:54:16PM · 0 points
If I would be better off taking both boxes,
I desire to choose to take both boxes;
If I would be better off taking only box B,
I desire to choose to take only box B;
Let me not become attached to decisions I may not want.
This doesn't help in the general case. See, for example...
If I would be better off giving Parfit $100,
I desire to choose to give Parfit $100;
If I would be better off keeping my $100,
I desire to choose to keep my $100;
Let me not become attached to decisions I may not want.
A step closer to accurate (albeit in need of elegant wording) would be:
If I would be better off having precommitted to taking both boxes,
I desire to choose to take both boxes...
Big hypothetical question. Context: I'm in an Internet argument with someone who won't take my word for the physics; he challenged me to find someone else who would say the same thing.
Assume the universe runs on Newtonian mechanics. (Ignore the question of how human biochemistry works.)
Measure the position and velocity of every particle at some given time t.
Run the universe forward until a later time t1. Assume that at that time I am sitting on my porch, drinking coffee. Assume further that this condition conserves momentum and energy.
Restart the universe at time t, and again run forward until t1. Assume that this time, you see me jumping in the lake instead of drinking my coffee.
Question: Do these observations violate the axioms of Newtonian physics? If so, which ones?
No mention of chaos or of quantum mechanics, please: We're assuming perfect control of all variables to avoid the first one, and just handwaving away the second one.
It shouldn't be necessary, but please state your credentials.
Newton’s equations of motion tell us that a mass at rest at the apex of a dome with the shape specified here can spontaneously move. It has been suggested that this indeterminism should be discounted since it draws on an incomplete rendering of Newtonian physics, or it is “unphysical,” or it employs illicit idealizations. I analyze and reject each of these reasons.
This article surveys the difficulties in establishing determinism for classical physics within the context of several distinct foundational approaches to the discipline. It explains that such problems commonly emerge due to a deeper problem of ‘missing physics'.
Personalized medicine is back again. I can't tell whether the number of incarnations is a bad sign or if Jaan Tallinn being in on it is a powerful good sign.
Comment author: drethelin · 23 October 2012 08:52:05PM · 0 points
I would guess something to do with the founder/funding troubles, based on the current incarnation not including the one from the apparent first incarnation. I don't have actual information on the topic though.
Comment author: PECOS-9 · 22 October 2012 06:34:46PM · 1 point
Does anybody have ideas for potential applications of lucid dreaming? It's been discussed a bit here and here before.
Aside from seemingly being a very good source of fun, I'm trying to think of other ways to use lucid dreaming.
For instance, mental visualization/rehearsal has been shown to be effective at improving ability in various skills, so it seems likely that rehearsal during lucid dreams should have similar (and possibly greater) benefits, though I don't know of any studies looking into this.
Even if you've never lucid dreamed yourself, I'd appreciate it if some of you brainstormed some ideas for novel uses for lucid dreams.
Comment author: VincentYu · 23 October 2012 07:52:04PM · 2 points
I had a brief look at the literature about a month ago and didn't find much. There is some evidence of performance enhancement from practicing motor tasks in lucid dreams (Erlacher, 2010), but the mechanism is unknown. Stumbrys et al. had two very speculative studies on asking dream characters within lucid dreams for help with problem solving (2010, 2011); they concluded that dream characters are terrible at arithmetic, but may be able to help with 'creative' tasks (I don't see good evidence for that from their data).
Nocturnal dreams can be considered as a kind of simulation of the real world on a higher cognitive level. Within lucid dreams, the dreamer is able to control the ongoing dream content and is free to do what he or she wants. In this pilot study, the possibility of practicing a simple motor task in a lucid dream was studied. Forty participants were assigned to a lucid dream practice group, a physical practice group and a control group. The motor task was to toss 10-cent coins into a cup and hit as many as possible out of 20 tosses. Waking performance was measured in the evening and on the next morning by the participants at home. The 20 volunteers in the lucid dream practice group attempted to carry out the motor task in a lucid dream on a single night. Seven participants succeeded in having a lucid dream and practiced the experimental task. This group of seven showed a significant improvement in performance (from 3.7 to 5.3); the other 13 subjects showed no improvement (from 3.4 to 2.9). Comparing all four groups, the physical practice group demonstrated the highest enhancement in performance followed by the successful lucid dream practice group. Both groups had statistically significant higher improvements in contrast to the nondreaming group and the control group. Even though the experimental design is not able to explain if specific effects (motor learning) or unspecific effects (motivation) caused the improvement, the results of this study showed that rehearsing in a lucid dream enhances subsequent performance in wakefulness. To clarify the factors which increased performance after lucid dream practice and to control for confounding factors, it is suggested that sleep laboratory studies should be conducted in the future. The possibilities of lucid dream practice for professional sports will be discussed.
I've never heard of a study of whether improving skills via lucid dreaming works.
Two things that make me really want to learn how to do it, are free sex and improving my social skills by getting into unusual social situations that I couldn't try in waking life. I have heard anecdotal accounts of people using lucid dreaming for these purposes.
Is your government infected with viruses, worms, malware and spyware? Do you keep calling tech support but end up playing phone tag? Did your brother-in-law who's supposedly this big expert come over last year to fix it, but only make it worse? Do you feel frustrated, confused, apathetic and annoyed? Does your stomach cramp up every time you hear the word "change"?
Neighbor, we have just the red pill for you. Don't ask what's in it. You don't want to know. Here's a glass of water - don't think, just swallow.
The fact that government isn't as good as it says it is, or that progressive ideas aren't fully consistent doesn't mean that either are fully dispensable, nor is it particularly clear that people who want to eliminate government have to stop any minor involvement they have (like voting) in order to achieve that goal.
He's reminding me of Michael Vassar's observation that geeks want explicit language in a way that most people don't. The fact that what government is and does isn't a good match for the way government is usually described isn't a good reason for eliminating government.
His point that people generally don't know anything about governing is salient, but does he have any experience running something more challenging than a solo blog?
To my mind, democracy still has the advantage that it makes it clear to politicians that there's a limit to how badly they can get away with treating the public.
He cheats a little on the communists vs. Nazis numbers -- 6 million is just the Jews murdered by Nazis. Another five or six million Roma, homosexuals, criminals, etc. were killed in the death camps, and some 25 million (very rough estimate) were killed as a result of the Nazi side of WWII. I have no idea whether Japan would have started its war if Germany hadn't been its ally.
This being said, I agree that communism has a worse record than Nazism, but a better reputation. However, in the US and Europe, there are violent neo-Nazis but (unless I've missed something) little or nothing in the way of violent communists, so it makes sense to be more concerned about Nazis.
My problem with him is the general problem with radicals-- he needs to offer better arguments that what he's suggesting will be reliably better than the current set-up. Speaking of Nazis and Communists, it's possible to make things a lot worse because your theory sounds so attractive.
It was amusing to see that Mencius Moldbug, Dark Lord of the Convoluted Sentence, is a pretty average speaker.
Comment author:thomblake
23 October 2012 07:23:57PM
5 points
[-]
I have no idea whether Japan would have started its war if Germany hadn't been its ally.
Probably. They didn't have anything like a formal military alliance until the Anti-Comintern Act of 1936, but the war in East Asia arguably started in 1931 when Japan invaded Manchuria.
Comment author:drethelin
23 October 2012 06:26:20PM
*
5 points
[-]
Yeah, I view Moldbug as someone who looks at your house and is right when he says maybe the toilet shouldn't drain into the shower, but then suggests you can use fusion to run all your appliances and power your helicopter.
Comment author:taelor
24 October 2012 01:22:36AM
2 points
[-]
I think the problem with Moldbug is that he's so firmly wedded himself to fighting against the whiggish narratives that are so deeply embedded in our historical accounts that he falls into the very trap that Herbert Butterfield, the original critic of whiggish narratives, warned of:
Further, it cannot be said that all faults of bias may be balanced by work that is deliberately written with the opposite bias; for we do not gain true history by merely adding the speech of the prosecution to the speech for the defence; and though there have been Tory – as there have been many Catholic – partisan histories, it is still true that there is no corresponding tendency for the subject itself to lean in this direction; the dice cannot be secretly loaded by virtue of the same kind of original unconscious fallacy.
(On an unrelated note, I occasionally find myself falling into a different, more subtle trap that Butterfield also warned of:
The watershed is broken down if we place the Reformation in its historical context and if we adopt the point of view which regards Protestantism itself as the product of history. But here greater dangers lurk and we are bordering on heresy more blasphemous than that of the whigs, for we may fall into the opposite fallacy and say that the Reformation did nothing at all. If there is a deeper tide that rolls below the very growth of Protestantism nothing could be more shallow than the history which is mere philosophising upon such a movement, or even the history which discovers it too soon. And nothing could be more hasty than to regard it as a self-standing, self-determined agency behind history, working to its purpose irrespective of the actual drama of events. It might be used to show that the Reformation made no difference in the world, that Martin Luther did not matter, and that the course of the ages is unaffected by anything that may happen; but even if this were true the historian would not be competent to say so, and in any case such a doctrine would be the very negation of history. It would be the doctrine that the whole realm of historical events is of no significance whatever. It would be the converse of the whig over-dramatization. The deep movement that is in question does not explain everything, or anything at all. It does not exist apart from historical events and cannot be disentangled from them. Perhaps there is nothing the historian can do about it, except to know that it is there. One fallacy is to be avoided, and once again it is the converse of that of the whigs.
Comment author:[deleted]
23 October 2012 05:49:08PM
*
6 points
[-]
My problem with him is the general problem with radicals-- he needs to offer better arguments that what he's suggesting will be reliably better than the current set-up. Speaking of Nazis and Communists, it's possible to make things a lot worse because your theory sounds so attractive.
I agree. A strong argument in favour of our current order (social democracy) is the Burkean conservative one. I've said in the past that Moldbug is good at diagnosing but bad at providing treatments, and I think his plan as it stands is more likely to go terribly wrong than terribly right. But hey, we tried socialism so many times in so many different places and we still haven't given up on it; can't we try Neocameralism in a charter city somewhere?
However, in the US and Europe, there are violent neo-Nazis but (unless I've missed something) little or nothing in the way of violent communists, so it makes sense to be more concerned about Nazis.
There are plenty of violent left anarchists / anti-fa (Communists in the sense Moldbug is using) in Europe. To cite an example from Greece:
Protesters set fire to a Marfin Bank branch on Stadiou Street with Molotov cocktails; witnesses said that protestors marching past the bank ignored the employees' cries for help, while others chanted anti-capitalist slogans.[33][34][45] Most of the bank's employees managed to escape the burning building, but two employees who jumped from the second-story balcony were injured and two women and a man were found dead after the fire was extinguished.
Social Justice in action; I'm sure the protesters had "legitimate grievances" which foreign media were sympathetic to. Question time: if Neo-Nazis had burned down a building, do you think it more or less likely that you would have heard of an incident like this? Can Neo-Nazis ever have "legitimate grievances"?
Indeed, we have a ready-made test case for this: check out foreign reports on Golden Dawn, then compare them to Golden Dawn's actual relevance. The double standard regarding this is ridiculous.
As is the amount of resources spent on "fighting" the far right in the EU compared to the amount dedicated to fighting the far left. Even if, ceteris paribus, Nazis (in the wider sense of the word) are more competent at takeovers and causing damage than Commies (in the wider sense of the word), diminishing returns have almost certainly kicked in for fighting Nazis but not for fighting Commies.
It was amusing to see that Mencius Moldbug, Dark Lord of the Convoluted Sentence, is a pretty average speaker.
Yeah, good writer =/= good speaker. Unfortunately, Eliezer seems to be another example of this.
Comment author:ArisKatsaris
26 October 2012 06:04:14AM
*
0 points
[-]
The murderousness of certain Greek left-wingers is true, but I would wish that you didn't downplay the murderousness of Golden Dawn. They contributed to the slaughter of Srebrenica in Bosnia -- they are currently killing immigrants. They are officially in the parliament and yet they've not ceased with their numerous death threats against everyone who stands in their way.
Sorry, but though the murderousness of certain off-parliament Greek left-wingers is certainly a fact, and the sympathy they receive from inside the parliament likewise, the actual bloody neonazi murderers are in the Greek parliament. With 7% of the vote they're already killing people and nobody here really gives a damn; are you sure they won't commit acts of genocide when they reach 30%?
Comment author:[deleted]
26 October 2012 08:52:22AM
*
2 points
[-]
They contributed to the slaughter of Srebenica in Bosnia
You are referring to the Greek Volunteer Guard? Some allegedly had links to Greek Neo-Nazi groups, including Golden Dawn, though you have to admit that terming that as "Golden Dawn contributed to the slaughter at Srebrenica" is importing stronger connotations.
but I would wish that you didn't downplay the murderousness of Golden Dawn.
I didn't intend to downplay their murderousness; I wished to downplay the relevance of media reports on them, which I think are disproportionate to their importance for non-Greeks. Also, I hoped people would note the soft-handed treatment anarchist/communist/anti-fa violence is given compared to the uniform condemnation of far-right violence.
Comment author:Alejandro1
23 October 2012 06:28:18PM
0 points
[-]
I upvoted for the first paragraph. Then I wanted to cancel the upvote when I read the paragraphs after the quote about Greece (which I deemed too adversarial for a friendly discussion). In the process I discovered the nonobvious fact that one must click again in the upvote button to cancel it: clicking downvote brings it to -1 instead of just canceling the upvote.
Comment author:[deleted]
23 October 2012 07:02:35PM
*
3 points
[-]
Didn't mean to be adversarial towards Nancy; I hope she doesn't take it that way. I was taking a strong stance, which is of course political, on what interests and biases Western media generally have. I edited the style; is it better now?
It didn't feel adversarial to me-- I'd forgotten about far left violence in Europe.
I did hear about it-- you can more or less assume that if it's on the BBC radio news programs, I've heard about it. This doesn't mean it will come to mind when I'm making sweeping generalizations.
Comment author:Alejandro1
23 October 2012 08:26:14PM
1 point
[-]
To be honest, I don't think you should revise your writing based on what just one random LWer (me) thinks. I just wanted to share the discovery I made about canceling upvotes, which was new and unintuitive to me. If I had read your last paragraphs before upvoting, I would have just refrained from voting in either direction and I would not have written any critical comment.
If you really want to know, though, the part that bugged me most was the paragraph immediately after the quote. ("Social justice in action…") It is snarky; maybe not towards Nancy as such, but certainly against the general opposed political position. I think the "no mindkilling" general code should preclude using snark in a political discussion, since its purpose, roughly, is to lower the status of the opposed viewpoint without adding substance (relative to a non-snarky rewrite).
But as I said, I doubt you should care too much about this opinion and rewrite your post.
Comment author:[deleted]
23 October 2012 09:40:58PM
*
2 points
[-]
Don't be silly, you are a member of the LessWrong community in good standing; I appreciate such feedback. I now see your point about snark, but I was also trying to refer to a particular post by Moldbug; to make this more explicit I've added a link there.
How do a lot of you guys read so many things so quickly and retain all the knowledge? This seems like perhaps THE MOST VALUABLE skill I could learn, and I can't find ANY good resources on it!
Comment author:[deleted]
22 October 2012 09:58:38PM
3 points
[-]
This stuff takes practice in general. Note-taking and spaced repetition help. Maybe don't worry about best practices or "the right way" to do it at first -- anything's probably better than nothing.
One thing that can help is to always read with a goal in mind. Reflect on what you really want to get out of whatever it is you're reading. Maybe don't just "take notes" but try to build a concise summary, map out the main argument, or write a review. Look for something to bring up in conversation with a friend, or come up with three questions to ask the author. Always be noticing your confusion. Read the end-of-chapter problems before reading the chapter. (Of course it could be bad to read with the specific goal of answering a single narrow question, if you end up just scanning for the answer and missing out on other value.)
Comment author:DaFranker
22 October 2012 06:11:23PM
*
1 point
[-]
I've once been told the keys were an arcane ritual called "taking good notes" combined with the Level 5 Bayesjutsu called "Condense your probability mass" and "Test your predictions".
Attempts at piercing the veil of secrecy and/or locating a tutor or manual on these rituals and techniques have proven fruitless to date. Reports of such findings have all turned out to be hoaxes or were never confirmed, potentially as the finders became part of the group which maintains the secrecy.
Comment author:blashimov
21 October 2012 09:53:53PM
0 points
[-]
Does anyone know of an online resource (or book) that goes through typical mental illnesses or neurological patterns that lead people to believe they've been possessed by demons? Google is swamped with religious blogs, and my google-fu is failing to cut through. Context: Somebody said (paraphrase) here's a youtube video of a guy acting kinda demonic, then everybody prays and he gets better. What is the "atheist" explanation? So I went around looking and didn't have much luck, and now I am really curious. I'm assuming that even if some are "fake," some people actually believe they are possessed. Also, yes, I know trying to convince a believer is probably a lost cause, but I'm curious for my own sake now.
Comment author:lsparrish
20 October 2012 07:15:22PM
0 points
[-]
In the new sequence Highly Advanced Epistemology 101 for Beginners EY has made use of exercise questions / statements intended to be pondered prior to continuing. He has labeled these "koans" but is open to suggestions for a better word, as a koan means something a bit more specific than that to Zen people. Any ideas? Here are the "koans" from this sequence in order of appearance:
If the above is true, aren't the postmodernists right? Isn't all this talk of 'truth' just an attempt to assert the privilege of your own beliefs over others, when there's nothing that can actually compare a belief to reality itself, outside of anyone's head?
If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for 'truth'?
What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?
"You say that a universe is a connected fabric of causes and effects. Well, that's a very Western viewpoint - that it's all about mechanistic, deterministic stuff. I agree that anything else is outside the realm of science, but it can still be real, you know. My cousin is psychic - if you draw a card from his deck of cards, he can tell you the name of your card before he looks at it. There's no mechanism for it - it's not a causal thing that scientists could study - he just does it. Same thing when I commune on a deep level with the entire universe in order to realize that my partner truly loves me. I agree that purely spiritual phenomena are outside the realm of causal processes, which can be scientifically understood, but I don't agree that they can't be real."
"Does your rule there forbid epiphenomenalist theories of consciousness - that consciousness is caused by neurons, but doesn't affect those neurons in turn? The classic argument for epiphenomenal consciousness has always been that we can imagine a universe in which all the atoms are in the same place and people behave exactly the same way, but there's nobody home - no awareness, no consciousness, inside the brain. The usual effect of the brain generating consciousness is missing, but consciousness doesn't cause anything else in turn - it's just a passive awareness - and so from the outside the universe looks the same. Now, I'm not so much interested in whether you think epiphenomenal theories of consciousness are true or false - rather, I want to know if you think they're impossible or meaningless a priori based on your rules."
Does the idea that everything is made of causes and effects meaningfully constrain experience? Can you coherently say how reality might look, if our universe did not have the kind of structure that appears in a causal model?
Comment author:Dolores1984
20 October 2012 07:35:25PM
3 points
[-]
I propose that we continue to call them koans, on the grounds that changing involves a number of small costs, and it really, fundamentally, does not matter in any meaningful sense.
Comment author:lsparrish
20 October 2012 08:28:52PM
2 points
[-]
There is a cost to doing nothing as well. Calling them koans potentially has the following effects:
Makes people think that rationality is Zen.
Makes people think Zen is rational.
Irritates people who know/care more about Zen than average.
Signals disrespect of specialized knowledge.
Encourages a norm of misusing/inflating terms beyond their technical use.
The question is whether it is more costly to make the change or not. How costly is the change? Are the costs long-term or short-term? (The costs of not making the change are mostly long-term.)
Also relevant: Apart from avoiding the above costs, are there benefits to changing it to something else? (For example, a better term could make the articles more interesting and intuitive to beginners than "koan" does.)
Comment author:[deleted]
21 October 2012 10:28:00AM
*
2 points
[-]
Knowing the kind of people who read LW, I guess that on reading “koan” more people will think about hacker koans than Zen kōans (also given no macron on the O).
Comment author:gwern
21 October 2012 01:30:42AM
1 point
[-]
Is it useful to learn to REM-nap even though I don't plan to sleep polyphasically? Is it worth going through adaptation?
I don't entirely follow... Is there such a thing as 'learning to REM-nap' without the proposed mechanism of the pressure of sleep rebound forcing a REM rebound during the space of a nap?
Comment author:carey
21 October 2012 02:58:00AM
*
0 points
[-]
I mean I am interested in undergoing adaptation through sleep deprivation, then something like uberman then everyman.
It would not be viable for me to stay on a polyphasic schedule next year. Ultimately, I will have to return to something largely along the lines of segmented or monophasic. Still, I have heard that undergoing polyphasic-style adaptation can help you become acclimatised to getting REM sleep in a 20-30 minute period -- something I currently can't do, but which might be useful if I have a sleep debt or know I'm going to pull an all-nighter, etc.
So the idea is adapting to polyphasic then switching back to segmented or monophasic. Would I expect to nap better afterwards? Is this likely to be useful or worthwhile?
Comment author:gwern
21 October 2012 03:06:07AM
0 points
[-]
O. I dunno. I have some more doubts about polyphasic sleep these days; last time I checked in the Zeo forums, no one had posted a complete writeup demonstrating a polyphasic lifestyle much less accompanying metrics that the lifestyle hadn't hurt them (I'd particularly like spaced repetition statistics). And since Zeos provide real data, much more so than blog posts claiming successful adaptation...
I attribute [god becoming BFF rather than The Law] to the material comfort of modern existence; it encourages metaphysical optimism that wasn't tenable when everyone was regularly confronted with extreme suffering
Seems plausible. We still do have extreme suffering, though; we just don't see it in our day-to-day lives. Arguably we are worse people from a virtue ethics perspective.
I don't think we have good reasons for metaphysical optimism regardless of that issue, however. My argument against it is anthropic. Assuming there are many possible metaphysics (a position that might be trivially false; I don't know enough to comment on that), we can infer that, human values being complex, only a tiny fraction of them are favourable.
Our physical surroundings can't help but be at least somewhat favourable. We can't help but be on a planet in the goldilocks zone in a universe with its particular value for the gravitational constant, because if we weren't there wouldn't be anyone around to make the observation.
When it comes to metaphysics we most certainly can make observations in a universe (metauniverse?) where the metaphysics have horrible things in store for us.
This argument works for the laws of our universe too. They are provably minimally friendly to the development of intelligence, but are very likely not friendly to its long-term survival or flourishing. And all this assumes an uncaring universe; a caring one may be much worse, in the uniquely horrible way an almost-friendly AI would be.
The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age.
Comment author:[deleted]
19 October 2012 03:23:03PM
0 points
[-]
Does the LessWrong community have a consensus on the subject of moral accountability, to the same extent that it has a consensus on things like free will and reductionism? If so, what is that consensus?
My opinion on the subject is, essentially: it's irrational to think people are morally culpable for their actions because their behavior is completely contingent upon their neurochemistry, which they have no control over. You can't blame a psychopath for having the specific cognitive makeup that made him a psychopath. Also, things outside of his control such as environment, parenting, etc. went into making him a psychopath. So trying to put "blame" on him for doing something bad, or wanting to see him suffer "because he deserves it", is irrational. Standard determinism, really. Not a unique or original perspective, but one that's quite at odds with the view of the general population.
I've never really seen this mentioned very much on this website. Do LessWrongers generally take this view? Are there some good articles, both on and off LessWrong, that talk about this in much detail (whether they're arguing for or against my position)? I'd appreciate it if someone recommended some to me, as I find this subject fascinating.
Comment author:Larks
22 October 2012 04:31:28PM
1 point
[-]
their behavior is completely contingent upon their neurochemistry, which they have no control over.
People do have control over their neurochemistry. Invoking the classic compatibilist conception of free will: if they wanted to have different neurochemistry, they would.
Comment author:Kaj_Sotala
22 October 2012 07:30:44AM
2 points
[-]
What you say is true to some extent, but there's also the fact that holding people morally responsible actually changes their behavior, and if we didn't hold anyone morally responsible for anything, people would behave worse.
Comment author:drethelin
19 October 2012 05:02:50PM
0 points
[-]
Moral accountability is a lot like justice: it has a lot of psychological hooks in the human mind that make it very useful for enforcing how you want your society to be, and in the ancestral environment it was probably linked far more closely to utility than it is today. The marginal effects of either cultural edifice might be good or bad, but we should be careful about trying to dismantle either one.
Comment author:jimrandomh
19 October 2012 04:29:55PM
6 points
[-]
Does the LessWrong community have a consensus on the subject of moral accountability, to the same extent that it has a consensus on things like free will and reductionism?
I don't know if this is a matter of consensus, but I generally see it as a matter of translating from third-person deontology to consequentialism by way of third-person virtue ethics and game theory: rather than work with concepts like "culpability" directly, I ask first whether an act is evidence that someone's likely to do other bad acts, and how well that risk can be mitigated, and second whether punishing that sort of act would make it rarer by enough to outweigh the cost and damage of punishing.
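This two-part test (is the act evidence of future bad acts, and does punishing pay for itself?) can be made concrete as a toy expected-value check. Everything below -- the function name, the inputs, the numbers -- is a hypothetical illustration of the reasoning, not anyone's actual method:

```python
def punishment_worthwhile(base_rate, deterred_fraction, harm_per_act,
                          population, punishment_cost):
    """Toy consequentialist check: punish a class of acts only if the harm
    prevented by deterrence outweighs the cost and damage of punishing.
    All inputs are hypothetical estimates, not measured quantities."""
    acts_prevented = base_rate * deterred_fraction * population
    return acts_prevented * harm_per_act > punishment_cost

# e.g. a 1% base rate, half deterred by punishment, harm 100 per act,
# population 1000: 5 acts prevented, 500 units of harm averted,
# against 200 units of punishment cost.
verdict = punishment_worthwhile(0.01, 0.5, 100.0, 1000, 200.0)
```

The interesting work, of course, is in estimating the inputs, which the toy model simply assumes.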
I launched a project this week to replace [physical books] with digital versions, which is moving at a decent rate of ten shelves a day.
Some questions, for anyone who uses digital books a lot: what readers -- both hardware and software -- do you recommend, and why? What determines whether you obtain a book on paper or as bits? Do you find the usability problems I list below?
I don't have an e-reader, although I do have computers and the Mac Kindle application. But I've never bought an e-book, because the convenience of a book that takes up no space has not yet outweighed the problems I see with them, even though the space that paper books take up is a major problem for me.
An e-book can vanish into thin air if the publisher decides to un-publish it. This has actually happened.
All the other obvious DRM issues.
I can't easily consult half a dozen books at once.
I can't flip through an e-book with anything like the convenience of a physical book.
Until we get A3-sized Retina screens the visual bandwidth will be nothing like as great as with paper.
The paper format has a record of compatibility of many times the entire history of computing. I have books that were manufactured more than a century ago, and they're as readable as when they came off the press.
I've seen enough people's accounts of dreadful usability problems in the reading software to conclude that most of it is written by dolts.
I used to print out scientific papers for reading, but I stopped that some years ago and only print them now when there's something I need to study intensively, at which point most of those usability considerations kick in. At this point, I can't see myself buying e-books except for the sort of mid-list SF where I would drop the physical book in a charity bin after reading.
Comment author:VincentYu
19 October 2012 03:41:42PM
*
1 point
[-]
I keep all of my books as PDFs on Mendeley. If a PDF is not available, I buy a hard copy through Amazon and send it to 1DollarScan to be converted to a scanned PDF.
I can't easily consult half a dozen [e]books at once.
In terms of screen real estate, I agree, but in terms of looking for something in textbooks, I find it much easier to consult multiple ebooks at once, since I can easily search through tens of them in a second.
Visual bandwidth: I cannot relate; I read just as fast on any screen as on paper, with no noticeable eye strain.
Pictures?
Compatibility: how many old books you own are not available in digital form?
I don't know. I shall check.
ETA: I have checked. Of the last 30 books I bought (a number decided by "ok, that's enough"), 13 are available as e-books (determined by looking them up on Amazon). Every book in the sample published since 2010 was available on Kindle; only two books published before then were (2002 and 2006).
VincentYu mentioned 1DollarScan, a service for (destructively) scanning books to PDF, but transatlantic shipping costs for a thousand books, plus scanning at $3 per book make it rather expensive for me to make a serious dent in my book stacks.
Presumably, as formats change, the books get converted.
That's a large presumption. Electronic documents easily die of obsolescing formats. "If it doesn't survive, it wasn't important" is not a good rule -- ask any historian.
Comment author:drethelin
19 October 2012 08:09:44PM
0 points
[-]
Pictures and graphs generally work fine on newer works but I find that charts can be pretty badly optimized on older works that have been adapted cheaply. I read comics on my iPhone but the comics app is much more optimized for this than ereaders are.
Comment author:latanius
20 October 2012 05:41:11AM
1 point
[-]
Try k2pdfopt! I use it all the time with scientific papers with lots of formulas, and it works quite well. It practically converts the PDF to images and slices them up, outputting another PDF, but the size increase is not too significant (still usable file sizes with multiple-hundred-page-long books).
Comment author:ciphergoth
19 October 2012 11:27:31AM
1 point
[-]
Decision theory and selfish donating
Suppose an author I like says she'll write a new work if she gets enough donations. Under CDT, it's clear to me that it can't make sense for me to donate - my donation can't increase the probability of me reading the book enough to pay for the cost, and there are much more efficient ways for me to give altruistically. What do other decision theories have to say about this?
Comment author:wedrifid
19 October 2012 09:34:30PM
2 points
[-]
Suppose an author I like says she'll write a new work if she gets enough donations. Under CDT, it's clear to me that it can't make sense for me to donate - my donation can't increase the probability of me reading the book enough to pay for the cost, and there are much more efficient ways for me to give altruistically. What do other decision theories have to say about this?
Short answer: CDT doesn't donate. EDT, TDT and UDT all donate (assuming enough others are mutually known to be like you).
TDT was literally made for this kind of situation (because it's just a Newcomblike problem). UDT differs from TDT only in areas a bit more obscure than this. EDT is also designed to handle this perfectly (i.e. to get you the book for minimal price): if you donate, the evidence suggests that enough people will donate to get you the book, but if you don't donate, the evidence suggests that you will not.
Comment author:DaFranker
19 October 2012 07:37:37PM
*
2 points
[-]
What TDT has to say is that if you win in the scenario where everyone donates, and you know that everyone else is using TDT (or that the distribution of decision algorithms is likely to give enough "donate" outputs to make donating better in expected utility), then you should donate. Of course, if you have reliable data on others' decision algorithms, I'm pretty sure CDT, EDT, and any other decision theory I've read about will boil down to an expected utility calculation or something pretty close.
Basically, as Vaniver says, all good DTs pretty much agree on this. TDT, CDT and EDT all agree that if you have common knowledge of a sufficient number of other people using the same decision theory (or, with more complicated calculations, various possible theories including those three) are interested in the book, you should all donate. This common knowledge, however, is usually the extremely costly, high-information-value part - the part about figuring out whether to donate or not seems trivial by comparison.
Comment author:wedrifid
19 October 2012 09:06:25PM
*
3 points
[-]
Basically, as Vaniver says, all good DTs pretty much agree on this. TDT, CDT and EDT all agree that if you have common knowledge of a sufficient number of other people using the same decision theory (or, with more complicated calculations, various possible theories including those three) are interested in the book, you should all donate. This common knowledge, however, is usually the extremely costly, high-information-value part - the part about figuring out whether to donate or not seems trivial by comparison.
I don't think this is correct. The CDT agents would all agree that they should donate, and would support the implementation of a simple mutual commitment protocol. If they couldn't arrange a way to compel each other not to defect on the commons problem, they would be sad but defect themselves. Fortunately, existing online donation systems are already sufficient: you just need one of the ones that returns pledged funds if the target goal isn't met, and a carefully calculated target goal.
At the extremes of perfect CDT agents you'd have to fiddle with the details a little more and, for example, make it forbidden for one agent to donate twice in order to allow that any will even donate once. But we can assume either all those details are handled or the CDT agents aren't quite that ridiculous and consider the precommitment mechanism adequate. Another thing they would do is arrange a taxation system enforced by people with guns with the relevant commons problems to be solved specified by (necessarily compulsory) voting.
Of course, the other thing groups of CDT agents would do is arrange a free-market capitalist system wherein products are paid for and people who don't pay don't get the stuff. A more efficient system would also allow the author easy access to a loan based on the awareness of the loan giver of the desire for the books. Then she would actually get most of the money from the sales of said books.
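The refund-if-goal-missed pledge system mentioned above is what economists call an assurance contract; a minimal sketch of the settlement rule, with hypothetical names and amounts:

```python
# Assurance contract: pledges are collected only if the total meets the goal;
# otherwise everyone is refunded. Removing the risk of paying for nothing
# is what lets even causal decision theorists safely commit.
def settle_pledges(pledges, goal):
    """Return (amount charged per pledger, whether the project is funded)."""
    total = sum(pledges.values())
    if total >= goal:
        return dict(pledges), True               # charge everyone their pledge
    return {name: 0 for name in pledges}, False  # goal missed: refund all

charges, funded = settle_pledges({"a": 40, "b": 35, "c": 30}, goal=100)
print(funded)   # True: 105 >= 100, everyone is charged
charges, funded = settle_pledges({"a": 40, "b": 35}, goal=100)
print(funded)   # False: 75 < 100, everyone is refunded
```

This is essentially the Kickstarter model: no pledger can lose money without the good being produced.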
Comment author:Vaniver
20 October 2012 05:36:58AM
0 points
[-]
A more efficient system would also allow the author easy access to a loan based on the awareness of the loan giver of the desire for the books. Then she would actually get most of the money from the sales of said books.
Right- where again the primary block is the mutual information required.
Comment author:Vaniver
19 October 2012 02:41:32PM
*
0 points
[-]
As far as I can tell, any decision theory that disagrees with CDT in this case is mistaken. The author (or you) needs to sweeten the deal; either the benefits need to be better, or the cost needs to be lower. Typical ways to improve the benefit are to attach status or other goods to the donation: whenever I talk about the Kickstarter projects I back, I make sure to mention that, you know, I backed them.
Comment author:Vaniver
22 October 2012 06:59:51PM
0 points
[-]
Yeah, but the conversation is about collective patronage in general, not about specific projects, and it seemed like it would detract from my point to also brag with my comment.
In gist, if your ingroup does things that harm others, you are likely to subsequently shift your moral attitudes away from principles that tell you that harming others is wrong, and towards principles that value loyalty and obedience.
A quote from near the end:
Although we conceive of morality shifting as motivated by the need to protect one’s identity, and thus as a beneficial mechanism to the individual, we expect it to have much more negative consequences for intergroup relations and for society at large. It can give more leeway in the mistreatment of outgroup members, or lead to their exclusion from the scope of justice (Opotow, 1990), reducing the chance of seeing such mistreatment as violating principles of harm and fairness. Morality shifting can thus be seen as a mechanism that allows people to make a virtue of evil (see Reicher, Haslam, & Rath, 2008). Once the shift occurs, further actions are even more likely to be interpreted from a loyalty/authority perspective rather than from a harm/fairness perspective.
This seems like it may be part of the cult attractor; and is also a good reason to keep your identity small; it effectively means that your ingroup doing harmful things can act as a murder pill for you.
Comment author:taelor
21 October 2012 02:12:53AM
*
0 points
[-]
In gist, if your ingroup does things that harm others, you are likely to subsequently shift your moral attitudes away from principles that tell you that harming others is wrong, and towards principles that value loyalty and obedience.
A more generalized version of this would read: "if your ingroup does [x], you are likely to subsequently shift your moral attitudes away from principles that tell you that [x is bad], and towards principles that [tell you that x is good as long as it's your ingroup doing it]." The chapters from Cialdini's Influence on social proof and on identity self-modification seem relevant.
Comment author:Viliam_Bur
19 October 2012 10:28:01AM
4 points
[-]
Maybe this is how "being a member of a group which slowly shifts towards evil" feels from the inside: increasingly realizing the importance of loyalty, and that fairness is not as important as it once seemed.
So when you notice yourself thinking: "well, technically this is not completely fair, but our group is good and we do many good things, so in the long term I can do more good by sticking with my group than by needlessly opposing it on a minor issue", you have evidence of your group becoming just a little bit more evil.
(To be precise, "a little bit more evil" can still be predominantly good, and can still be your best available choice. It's just good to notice this feeling, especially if it starts happening rather frequently.)
Comment author:[deleted]
18 October 2012 08:48:07PM
*
3 points
[-]
A question came up in response to EY's recent sequence posts that I'd like someone to take a shot at: EY seems to me at least to be saying that the universe is a 'fabric of causal relations' or is 'made of cause and effect' or something like that.
He's also said that probability (and so causal relations, given how he understands them) are 'subjectively objective'.
The first claim implies that causal relations are fundamental to the universe; the second implies that they're ways in which limited observers and agents deal with what is fundamental. As such, the two claims seem to be inconsistent. What's going on here?
The solution is that causal relations are a map, reality is the territory. You and I could very possibly have different causal structures in mind when we're talking about, e.g., moving billiard balls, and we can both be correct if we have different sets of information. There is only one reality, but there are many correct maps of reality, each one corresponding to a different set of previous information.
Comment author:[deleted]
29 October 2012 07:43:16PM
0 points
[-]
If I understand you, you're saying that causal relations are a (perhaps necessary) feature of the map but are not features of the territory. Is that correct? If so, it seems like the claim "the universe is a fabric of causal relations' is strictly speaking false, or at least it's only true if by 'the universe' we mean the map rather than the territory, which would be weird.
I made a mistake, but I think fixing the first sentence is all that I need to do. (Maybe I merely misspoke, but I'm not sure what I was thinking, even only a couple hours ago).
The first sentence should read something like: Reality is a particular causal web, but the correct model of that causal web depends on your state of information. In other words, the subjectively objective component only comes in when we try to infer something about the causal web that is reality.
Comment author:TimS
18 October 2012 09:02:11PM
0 points
[-]
I've wondered the same thing.
Some of it might be that no two agents will have the same experiences, and so they will not have the same probabilities assigned to particular propositions, even if they started with the same priors, had identical sense-receptors, and were both perfect Bayesians.
But it seems misleading to use the label "subjectively objective" for that phenomenon. And I might be totally off track, in which case I am totally confused about what "subjectively objective" is supposed to be about.
Comment author:Matt_Simpson
29 October 2012 05:51:44PM
*
0 points
[-]
But it seems misleading to use the label "subjectively objective" for that phenomenon. And I might be totally off track, in which case I am totally confused about what "subjectively objective" is supposed to be about.
Probability is subjective in one sense and objective in another sense. It's subjective in that the correct answer to "What's the probability of A?" depends on who is asking the question. It's objective in that the answer depends on who is asking the question only through the information she has and not, e.g., who she is. Part of the reason to call it subjectively objective is to acknowledge that critics of Bayesian epistemology/probability/statistics are correct, in part, when they complain that it's subjective. The objective part answers the criticism by pointing out that probability is subjective in a very benign sense and in precisely the sense we intuitively expect it to be. E.g. "Mary didn't know Jack had pocket aces, so in her situation thinking that she was highly likely to have the winning hand was correct."
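The poker example comes down to conditional probability: the same fixed event correctly gets different probabilities under different states of information. A toy illustration with made-up numbers:

```python
from fractions import Fraction

# The same fixed event gets different (correct) probabilities under
# different states of information. Toy deck: 10 cards, 4 of them aces.
deck_size, aces = 10, 4

# Mary knows nothing about which card Jack drew: P(ace) = 4/10.
p_mary = Fraction(aces, deck_size)

# An observer who saw an ace removed from the deck conditions on that:
# 3 aces left among 9 cards.
p_observer = Fraction(aces - 1, deck_size - 1)

print(p_mary, p_observer)  # 2/5 vs 1/3: both are correct given their information
```

Neither agent is "wrong"; each answer is the unique correct one given that agent's information, which is the sense in which probability is subjectively objective.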
Comment author:chaosmosis
18 October 2012 08:17:21PM
*
4 points
[-]
Three Words: Little Mermaid Fanfiction.
Featuring Rationalist!Feminist!Determinator!Ariel, fighting against both the machinations of an Ursula with a massively increased power level (think Cthulhu's little sister) and her violent and patriarchal father, and the society that he defends.
I would like to write this, but I'm not confident that I've got the skills or knowledge to do so (specifically, I need to read a lot more on feminism; also, I've never written fanfiction before). Please PM me any ideas about anything that you think might improve the story, whether that's general writing advice or a specific scene or a character development arc or stuff about feminism I should read or anything else.
This could be absolutely fantastic; the source material allows for a lot of maneuverability, and I think canon Ariel's personality would only require some minor tweaking (mostly with the feminism) in order to fit the mold of what I've got in mind. View this and her curiosity just oozes off the film. There is so much potential here that it is just ridiculous.
Comment author:CronoDAS
22 October 2012 07:13:54AM
1 point
[-]
Incidentally, there was a TV cartoon series based on the movie, which takes place when Ariel was younger and hadn't yet developed her humanity obsession.
Comment author:chaosmosis
20 October 2012 06:51:53AM
*
0 points
[-]
I don't know what to do with Ursula. It's tempting to make her into the overzealous feminist strawman, but that seems like a weak fight, ideologically, and that's not a message that I really want to send out. Ursula needs to stand in clear contrast to both Ariel and the patriarchal society which rejects her. It would also be nice if Ursula was relatable.
The best idea I've had so far is to make Ursula an extremely jaded and manipulative and pragmatic woman, who neglects what's good in relationships and focuses on what she can get out of them -- but this conflicts with the Eldritch horror awesomeness that I had planned. I've got vague ideas of how to reconcile the two, but input on this would help a lot.
Having Ursula's default state be an even more powerful version of her boss form was one of the main inspirations for this fiction. Ursula has the potential to be a really cool character, and she's shaping the way that I approach my ideas about the mermaid culture and Ariel's character. I love villains, so I would really appreciate it if people helped me to not screw this one up.
Your link just led to an Aladdin icon, so I assume you had something else in mind.
When I was rereading the thread, it also occurred to me that Ursula was the hard part. My take is that she's what she is for much the same reason crime bosses are what they are-- power, safety, and excitement-- with the last two having to be balanced. It might be interesting if there was a family tradition of being outlaw magic users.
However, I'm not a feminist, though I agree with a lot of feminist ideas. I think men and women are fairly similar, and that means some women are going to be very bad news. I'm inclined to think that the status differences between men and women have a lot to do with men being (for reasons that aren't clear to me) better at group violence. It's not about the upper body strength.
Ursula could be an outcast from her own society because she's mean and irresponsible. You could spin a story about her which goes either way-- the octos are actually dominant (or at least secure/isolated), and they exile their criminals who then predate various cultures the octos don't care about.
Alternatively, the mers dominate the octos, and Ursula has ambition and no place inside respectable mer society to use it.
Real world octopi are short-lived. How would that affect their approach to prisoner's dilemmas? A claim that they're unreliable because of their short lives could also be used to justify prejudice against them.
However, I'm just noodling here-- I've only seen the movie once.
Comment author:chaosmosis
18 October 2012 11:09:54PM
*
1 point
[-]
This is going to be really difficult to execute. If anyone else wants this basic premise, please take it. I'd love to read someone else's take on these ideas or ideas like this.
Also, don't take this outline as a promise. I reserve the right to completely change the story's meaning and plot as I wish.
Comment author:chaosmosis
18 October 2012 10:52:48PM
*
1 point
[-]
These ideas are courtesy of MixedNuts, please give him the (+) karma and not me. I'll take all the (-) karma.
Limyaael's rants (and everything else on that site).
Think about where she got her ideas from. In the movie everything stems from her humanity hobby and her teenage rebellion against her helicopter dad (who is overprotective but not evil; upping the bigotry sounds worthwhile but making him not genuinely concerned for her well-being sounds like a loss of complexity). If she's going to have any explicit feminist ideals, why? Did she come up with them herself, or is there a feminist movement, or is she part of a different movement whose ideas she ran away with? Is that movement old, with several waves, or just finding its voice? How divided is it? Is she concerned about straightforward rights, first-wave style, or is she getting into the philosophical significance of gender roles? Is she selfish, as in the movie, or concerned about helping other mermaids/human wannabes/females of all species? How does she relate to her sisters, and other people who tell her "I don't want to be liberated, thank you very much"?
Intersectionality: is she all about feminism/humanity/whatever she stands for, and if so does she try to make that work for all mermaids, or does she conveniently forget that not everyone is a sheltered princess with no responsibilities other than an occasional concert? Or does she fight for other causes - because she needed to make accommodations for non-princesses, or because she has a general philosophical system she noticed applies elsewhere? If so, what does she do about, say, the absolute monarchy? (Mad props if you dissect the class/race implications of Sebastian's Under the sea attitude and include the blackfish.)
How is society for people who are neither royalty, working for royalty, or evil mages, anyway?
In the movie, she's hopelessly naive about humans. In particular, she thinks human women are much freer than mermaids, and it doesn't really hit her that they don't know fish are people. Does she start out that way, and if so what happens when she learns it? Once she knows it, what does she do about it? Does she genuinely care about understanding and helping humans or does she just go "ooh, shiny"?
How alone is she? In the movie, her sidekicks don't share any of her ideas and are pretty much incompetent. Does she look for better allies? How does she value cooperation relative to personal friendship?
Where does she stand, on the diplomat - firebrand - murderous fanatic spectrum? Does that change over time, and why?
Does she have to be straight? (Limyaael's campaign for asexual female characters has gotten into me.)
What's the relationship between Triton and Ursula, in terms of power imbalance, current truce (if so) conditions, ability to destroy each other if they sacrifice everything for it and conditions for being willing to do so?
What do her sisters want? How do they feel about Triton, about society, about Ariel being the favorite, about Ariel's weird ideas, about Triton's reactions?
What's Ursula's backstory? How and why did she learn magic? Why does she use her powers for this specific job? Does she really believe her "but all the while I've been a saint" claims (there's some possible commentary on "People should be able to sign any contract including leonine ones"-brand libertarianism here), or does she use that to fly under some legal radar, or to ensnare her victims? How eager is she to drop the act (and if it's not entirely an act, how does she justify it) when convenient? Do her victims know that, and how does it affect their willingness to do business with her?
Why do people defend a bigoted society? Some major villains can just be evil, but a whole society can't; they must have reasons that sound good from their perspective, possibly with complex justifications.
Comment author:blashimov
19 October 2012 05:56:26PM
3 points
[-]
Are you going to have fish be sentient? Are all animals sentient, Disney-style? If you are trying to make an at all coherent world, I'd just ditch the sentient-fish part. Otherwise, I will honestly never read this, because I won't be able to get over the horror of billions of sentient deaths happening constantly. MoR!Harry's panic about snakes, right there. And that's a really, really weird world where humans haven't noticed. Fish are really, really stupid. Hence we were eating them en masse before we even started farming.
Comment author:chaosmosis
18 October 2012 11:01:07PM
*
0 points
[-]
My first thought is that this will be even more work than I planned on. These are great questions.
I need to put a lot of time into this, no one should expect the story to get started for at least a few months.
I need actual women or actual feminists to talk to me; I live in a red state and don't ever see these people speaking up about patriarchy. I'm only familiar with feminism through books, and a couple discussions every now and then. What are the biggest pitfalls that I risk? Whose books should I read?
Tentative advice: Read books by women with female viewpoint characters. Make note of anything that seems odd, especially if you see it from more than one author.
Comment author:drethelin
19 October 2012 04:52:18PM
2 points
[-]
Sunshine, by Robin Mckinley.
Paladin of Souls and Cordelia's Honor (I liked this one way more, and the series it's at the start of is fantastic, though the main character of that one is male) by Lois Mcmaster Bujold
In the Garden of Iden, by Kage Baker, the start of one my other favorite scifi series.
Comment author:drethelin
19 October 2012 04:43:15PM
1 point
[-]
So if you're like me, you start reading that book and almost immediately need to read a bunch of other books, because the main character has read them, and how can I understand without reading them too? I think I can resist a lot of them, and there's already a good amount of overlap, but when she starts actually mentioning plot points from other books in ways that seem emotionally relevant, that's when I need to read them. So I can recommend the start of this book, but am now reading Triton before I can get back to it.
If I were a very cruel person, I'd recommend Greer Gilman's Moonwise -- it surpasses the formal specifications (female author, main characters are two middle-aged women and two goddesses), but it's so extremely referential we'd probably never see you again, and honestly, it's probably not particularly relevant to chaosmosis' quest.
Comment author:chaosmosis
18 October 2012 10:51:25PM
*
3 points
[-]
I was worried about it being a huge mess at first, but putting them out in the open will allow for more criticism and dialogue, so that worry was a mistake. I was a bit tired when I posted that comment. I'll post your comments here then.
Comment author:evand
19 October 2012 04:27:29AM
0 points
[-]
Interesting, but has problems with helium supply, even at smaller scales.
Breathing tanks are problematic. If you carry a breathing tank, the simplest approach involves venting a lot of helium. Scrubbers, recycling, and O2 replenishment carry a nontrivial risk of death without warning. Capturing the exhaled helium requires heavy, power-hungry compressors.
Buildings present nontrivial air-quality engineering problems, and need more than retrofitted airlocks to make them airtight.
Also, it's far from obvious that 20% is the optimum O2 content, though I think it's quite well supported to say that 100% is too much.
Comment author:gwern
19 October 2012 04:34:04AM
0 points
[-]
My own belief is that before we start trying to replace nitrogen with helium or hydrogen, we ought to check first that increasing oxygen doesn't deliver some or all of the benefits.
Comment author:evand
19 October 2012 01:10:21PM
0 points
[-]
Well, you can't increase it too far. The fire hazard gets insane pretty quick. 30% O2 is probably OK, but does have a substantial fire risk increase; 40% probably isn't. Also, increased oxygen has long-term health impacts (I don't remember details; I could look them up if you're curious), but I don't think we know what level those start at to any precision.
I suppose fire risk isn't a huge deal if you're using portable breathing tanks, but you do still need to investigate health impacts. They're long-term enough that you could ignore them for initial study of the cognitive effects, though.
Comment author:gwern
19 October 2012 02:45:38PM
*
1 point
[-]
I suppose fire risk isn't a huge deal if you're using portable breathing tanks, but you do still need to investigate health impacts.
Yeah, I'm well aware of the dangers of oxygen fires (from learning about the Apollo program); oxygen tanks are probably how this would be implemented. Of course, I'm not sure that the benefit could possibly justify the expense of oxygen tanks, but the result alone would be interesting. (Perhaps one could justify some sort of oxygen alarm.)
Comment author:wgd
21 October 2012 02:14:39AM
*
0 points
[-]
Actually, I don't think oxygen tanks are that expensive relative to the potential gain. Assuming that the first result I found for a refillable oxygen tank system is a reasonable price, and conservatively assuming that it completely breaks down after 5 years, that's only $550 a year, which puts it within the range of "probably worthwhile for any office worker in the US" (assuming an average salary of $43k) if it confers a performance benefit greater than around 1.2% on average.
These tanks supposedly hold 90% pure oxygen, and are designed to be used with a little breathing lasso thing that ends up with you breathing around 30% oxygen (depending on the flow rate of course).
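The break-even arithmetic above can be checked directly (taking the comment's own figures: roughly $550/year amortized cost and a $43k average salary; the exact tank price is not given, so $2750 over 5 years is inferred from the $550/year figure):

```python
# Amortized cost of the oxygen system vs. the salary-equivalent performance
# gain needed to pay for it. Figures are the ones assumed in the comment.
price = 2750            # implied by ~$550/year over a 5-year lifetime
lifetime_years = 5
salary = 43_000         # assumed average US office-worker salary

annual_cost = price / lifetime_years          # $550/year
break_even_gain = annual_cost / salary        # fraction of salary to recoup

print(f"{break_even_gain:.1%}")  # ~1.3%, close to the ~1.2% quoted
```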
Comment author:gwern
21 October 2012 02:43:05AM
*
0 points
[-]
Oh, that is interesting. I was sort of assuming that you would have to pay for each refill and that a recharger wouldn't be just <$3k.
Also, interesting links. Connecting psychometric tasks to actual monetary value is always tricky, but those studies certainly suggest there might be a meaningful benefit (though the benefit will be weaker at 30% oxygen: the links all seem to be at 40%).
One big problem there is that $3k is a lot to pay up front. But on the upside, if you can change the flow rate, I suspect it wouldn't be too hard to blind the oxygen content...
"It's a guy, pretty much as intelligent as, and at least twice as effective as a dozen Ph.D's in philosophy examining and discussing how to think better. Look, just... Here, The Simple Truth. Read this. It's basically this kind of thinking applied to everything."
Not that brief, but it's gotten at least a few interested in LW.
Does anyone have any good brief ways of describing LW to outsiders that have been effective? This comes up quite a bit for me with friends and family.
I think the most effective rhetorical technique would be very sensitive to the kind of person you are describing it to. I don't know if it is good, but I once said something like "it is about how you can avoid certain kinds of errors in your thinking, so that you can make better decisions".
I just read it, and while I enjoyed the book, I'm rather sceptical about the book's main point -- that consciousness (in the way the book describes) only arrived ~ 1000 BCE. The evidence provided by the Jaynes Society doesn't really convince me either.
Jaynes is not a crackpot in the Von Däniken/Hancock school, but I found his evidence lacking for his extraordinary claim. What do you think?
Comment author:ciphergoth
17 October 2012 09:56:18PM
3 points
[-]
Is there a word for a person, or an agent, that self-modifies to find something more painful, in order to change someone else's incentives, as described here? Obviously there are some choice phrases we might like to use about such a person, but most of them - eg "moral blackmail" - seem insufficiently precise. Is there a term that captures specifically this, and not other behaviour we don't like? If not, what might be a good, specific term?
Comment author:J_Taylor
25 October 2012 03:41:12AM
1 point
[-]
Have you read Schelling? He discusses a wide variety of maneuvers that are much like this. However, I can think of no standard names for this technique.
I suppose you could call such agents voluntary human shields.
It seems like the sort of thing that one would accuse another of, in order to score political points by making others feel ashamed to have sympathized with the person so accused. IOW, making the accusation is a much cheaper form of manipulation than actually doing the self-modification — and can be used to undermine many claims that one person is harming another. Thus, we should expect to hear the accusation from people who would like to go on harming others and getting away with it.
Comment author:ciphergoth
19 October 2012 08:32:09AM
2 points
[-]
I wouldn't be surprised to see examples of people saying "you don't really feel bad, you're faking it" which is a very different thing, and there's an example of people saying "we mustn't incentivize these hypothetical Muslims to self-modify in this way". But can you point me to an example of what you describe happening - of someone saying "you, the actual real person I am replying to, have self-modified to find something more painful in order to change other people's incentives"?
Comment author:ciphergoth
17 October 2012 10:22:43PM
0 points
[-]
I have used that term for this, but it's not very precise: the Wikipedia entry has the monster absorbing positive utility rather than threatening negative, and there's no mention of self-modification.
Comment author:wgd
19 October 2012 06:41:32PM
*
0 points
[-]
The self-modification isn't in itself the issue, though, is it? It seems to me that just about any sort of agent would be willing to self-modify into a utility monster if it expected that strategy to be more likely to achieve its goals, and the pleasure/pain distinction simply adds a constant (negative) offset to all utilities (which is meaningless, since utility functions are generally assumed to be invariant under affine transformations).
I don't even think it's a subset of utility monster, it's just a straight up "agent deciding to become a utility monster because that furthers its goals".
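The affine-invariance point is easy to demonstrate: the utility-maximizing option is unchanged by any transform u -> a*u + b with a > 0. A minimal sketch with hypothetical utilities:

```python
# Decisions depend only on the ordering of (expected) utilities, and any
# positive-scale affine transform preserves that ordering, so shifting all
# utilities by a constant negative offset changes nothing decision-relevant.
outcomes = {"donate": 35, "abstain": 10, "defect": -5}  # hypothetical utilities

def best(utilities):
    return max(utilities, key=utilities.get)

a, b = 3.0, -100.0   # arbitrary affine transform with a > 0
shifted = {k: a * v + b for k, v in outcomes.items()}

print(best(outcomes) == best(shifted))  # True: same choice either way
```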
Comment author:[deleted]
17 October 2012 04:54:28PM
*
2 points
[-]
One of the snarky comments on Edward Feser's blog put words to my general feeling about him:
There was never any pretense that it was about "nothingness" in the sense Feser would like it to have been. So why does he pretend that it was? He's on a divine mission to uncover straw men.
(Naturally the locals call this commenter an atheist troll and ask him to "do the reading" -- no doubt "read the Sequences" in the local dialect -- but he retorts, "It makes no difference to you whether I do the reading or not. You complain either way.")
EDIT: If you haven't read Feser before, his standard blog-writing process is:
Take some atheist claim.
Reduce it to Aristotelian jargon. (optionally admitting that this changes the meaning of the original claim entirely)
Show that some medieval philosopher said the jargon claim was nonsense.
Accuse the original atheist of spewing nonsense and being unaware of the true philosophical underpinnings of Christianity.
I can't speak for his blog writings (since I have only read a few articles), but I have read his book on Nozick and am almost done with his book on Aquinas.
Show that some medieval philosopher said the jargon claim was nonsense.
Accuse the original atheist of spewing nonsense and being unaware of the true philosophical underpinnings of Christianity.
I have no reason to doubt your claim, but it seems plausible that he is right in this case (if, in fact, he does accuse atheists in this way). Why? Because I had 4 years of Bible class in high school and studied philosophy of religion at university, and yet still only understood the straw man versions (most likely unintentional straw men, mind you) of the arguments made by "some medieval philosopher", and had no idea about the philosophical "underpinnings of Christianity".
It wasn't until I got interested enough in the history of science to actually bother to read primary texts (in astronomy, alchemy, and "physics") that I was able to get my mind situated in such a way that I could look around at the world from within these alien Medieval paradigms and see that some of these claims weren't just silly bullshit.
Anyway, if it takes such a roundabout sequence of obscure studies to even begin to make sense of this stuff, it is no wonder that modern atheists (or virtually all Christians, for that matter) have trouble getting it right.
Much of my undergraduate degree in philosophy was spent reading medieval (and later) texts from Christian philosophers, and I agree: most atheists I've encountered just don't understand the philosophical underpinnings. I *think* Dawkins circa The God Delusion is one of these, but I haven't read the book and my impressions are likely colored by my teachers and friends in undergrad, who were largely sophisticated Christians.
That being said, most Christians don't seem to understand these underpinnings either.
Comment author:DanArmak
26 October 2012 07:29:31PM
0 points
[-]
Did a higher percentage of educated medieval or classical Christians understand this paradigm? Or was it reserved, as now, to extremely well educated, smart, specialized theologians?
Did a higher percentage of educated medieval or classical Christians understand this paradigm? Or was it reserved, as now, to extremely well educated, smart, specialized theologians?
I'm not in a very good position to answer this question with an acceptable degree of accuracy. The periods I am referring to had very low literacy rates, so I don't have much access to the thoughts of uneducated, non-smart, non-specialized medieval persons.
Comment author:Kaj_Sotala
22 October 2012 07:26:07AM
*
5 points
[-]
I've noticed that reading old texts with alien mindsets is an instant idea generator for fantasy settings. (Seriously, I don't get why I've never heard anyone suggest the notion of "fantasy writers should read old texts" before. They're just filled with peculiar ideas about the world that one can import directly to a fantasy setting.) Would you happen to have any recommendations on texts that would be particularly suitable for this?
In undergrad, one of my friends and I came up with the idea of a medieval-philosophy based magic system for a fantasy setting -- essentially Platonic realism as magic. Wizards spent all their time collecting metaphysical materials, contemplating forms and such. Directly inspired by reading Plato, Plotinus and, I think, Augustine.
Christian theology offers rich pickings. Did you know it has a closed timelike loop? Satan was cast out from heaven because he refused God's command to bow to Man, which the angels must do (despite man being created a little lower than the angels) because God incarnated as a man, which he did to redeem Man from his fall, who fell because Satan tempted him, because Satan sought revenge upon God, because God cast him out from heaven. There is also the concept of a "type of Christ", where "type" has an old sense of "prototype". King David, Abraham, and Adam are examples. In science-fictional terms, the eruption of God into Time in the person of Jesus was such a momentous event that it sent back echoes of itself into the past, calling into being the history that called the Incarnation into being.
Parallelisms between techno-singularitarian ideas and Christian notions of salvation have often been made, usually with the implication that singularitarianism is just disguised religion and the technological arguments are mere rationalisation ("rapture of the nerds"). But suppose it's the other way round? Religion results from mankind's dim groping towards the techno-singularitarian truth, assisted by the occasional superpowerful alien or entity from outside the Matrix, inciting the major prophets and Messiah figures of history. The enlightenment that most religious traditions have sought consists of access to the real truth of things, but limited human minds are unable to comprehend it. Religion is, to use Vernor Vinge's term, "godshatter".
And the Jews are clearly an alien genetic/memetic breeding project.
Comment author:MixedNuts
28 October 2012 11:13:51AM
2 points
[-]
My local evangelist insists that angels are lower than humans. We're children, not servants. So they're supposed to obey us (they work for the family), and we only have to obey them insofar as they convey messages from Dad. If we mess up we can be forgiven, whereas angels get booted to hell with no second chances. (The Lord is kind of a shitty employer.)
Also according to my local evangelist, Satan's fall happened in two parts. First, he was the hottest piece of ass in Heaven, which caused him to pull a Narcissus and demand worship from other angels. But the Lord can't share worship, it's part of the class restrictions. So he grew mightily pissed and cast Satan down to Earth, whose inhabitants Satan turned into demons. Second, the Lord made Adam and gave him Earth to rule over, since the current Earthlings weren't anyone he cared for. Squatting Satan wasn't happy with his new landlord, so he tempted him and got cast down to Hell for that.
Many stories can be spun from the materials, as would be expected of godshatter. "A little lower than the angels" is actually from Psalms 8:5 (and quoted in Hebrews 2:7). Interestingly, the next verse says "Thou madest him to have dominion over the works of thy hands", which together implies that the angels are not the work of God's hands. This is not the only place where a hint of polytheism breaks through.
Comment author:MixedNuts
28 October 2012 01:23:43PM
2 points
[-]
It's ambiguous whether the translation should be "lower than the angels" or "lower than yourself". (That's what you get for not classifying angels as deities.) Oddly, Hebrews 2:7 is always translated using angels, though the text is the same in Hebrew versions, probably intentionally backtranslating from Greek. (ותחסרהו מעט מאלוהים, וכבוד והדר תעטרהו)
Even weirder, translations of Hebrews 2:7 in other languages tend to say "You have lowered him under the angels for a short time", not created so permanently. But translations of Psalm 8:5 are all about "created lower than", with the same disagreement about the relevant celestial being.
I can't find any Greek translations of Psalm 8:5 so I can't tell if they match Hebrews 2:7, and anyway it'd be Modern Greek.
I've noticed that reading old texts with alien mindsets is an instant idea generator for fantasy settings. (Seriously, I don't get why I've never heard anyone suggest the notion of "fantasy writers should read old texts" before. They're just filled with peculiar ideas about the world that one can import directly to a fantasy setting.) Would you happen to have any recommendations on texts that would be particularly suitable for this?
That is a great question. The first thing to come to mind actually isn't all that old, but from the Early Modern Period. To a large extent, the Renaissance was much more mystical than the Late Middle Ages (seriously, compare Galileo's Platonism to Swineshead's Scholasticism). The paradigm I'm referring to is usually referred to as the "natural magic tradition" and is exemplified by thinkers like Paracelsus (The Hermetic and Alchemical Writings of Paracelsus) and Heinrich Cornelius Agrippa von Nettesheim (Three Books of Occult Philosophy).
The SEP entry for Agrippa sounds downright HPMOR-esque:
De occulta philosophia in its early form showed Agrippa's determination to transform magic into a useful science that would draw together all branches of magical learning, set those materials into a single philosophical framework, purge magic of the evil and demonic practices that had caused it to be regarded as a wicked science, and turn it into knowledge that would be beneficial to humanity. His goal was a total regeneration of magic, transforming it into a science that would enable the magus, or learned practitioner of magic, to perform marvelous works that would contribute to the welfare of humanity (Kavey 2010)....
Such learning is esoteric. Because of the power it confers, it would be potentially dangerous to religion, society, and individuals if it fell into the hands of the crude and ignorant masses. It must be communicated only to individuals whom the magician (the magus) knew to be worthy, both intellectually and morally, people who would use this power for the benefit of humanity (OP 3:2).
Comment author:[deleted]
18 October 2012 09:43:05AM
4 points
[-]
Someone tells me, "1 + 1 = 2."
I tell them, "Ah, but if you take one cloud and another cloud, and add them together, you still get one cloud, so 1 + 1 = 1."
Neither claim is "silly bullshit", but the conclusion of the second sentence is clearly broken. I have no reason to doubt Feser is a domain expert in theology. It's what he does with his expertise that bothers me.
Anyway, if it takes such a roundabout sequence of obscure studies to even begin to make sense of this stuff, it is no wonder that modern atheists (or virtually all Christians, for that matter) have trouble getting it right.
That's exactly the point. Christianity is already a sociological fact that bears almost no resemblance to whatever kind of Christianity it is that would "get it right."
Comment author:wedrifid
18 October 2012 10:52:28AM
5 points
[-]
I tell them, "Ah, but if you take one cloud and another cloud, and add them together, you still get one cloud, so 1 + 1 = 1."
Neither claim is "silly bullshit"
I'm comfortable calling that claim silly bullshit. In fact, I can't think of a better word for it. It is exactly the kind of thing the phrase "silly bullshit" is there to describe.
Yeah, I think I see what you mean. Feser seems to want to take apart arguments put forward by the atheist in the street in a no-holds-barred style, but then berates atheists that do the same to the Christian in the street, rather than only grappling with the arguments advanced by the masters of theology.
I was leafing through a copy of Marc Hauser's Moral Minds off a friend's bookshelf at the weekend, and it made me realise why I'd gone off reading books lately: the original content is too hard to find amongst the material I'm already familiar with.
I don't want to read another introduction to Chomsky's theory of universal grammar. I don't need another primer on ev-psych. I'm not interested in having the Trolley Problem explained to me again. What I would like is a concise breakdown of the core arguments, linking to other sources to explain things I might not already be familiar with.
This would end up looking a little like a Wikipedia article, or more to the point, a Less Wrong post. We have our fair share of book reviews, but they tend to select for books in which there's value in reading the whole thing, rather than those which have some novel content amongst mostly familiar territory (what I took away from the recent chapter-by-chapter review of Causality was that I should totally read the book).
Is anyone else in this boat? Could it be worth organising some sort of book review/summarisation group?
Comment author:DanArmak
26 October 2012 07:33:46PM
1 point
[-]
That's the benefit of online linkable texts as opposed to books.
On the net, if you want to mention a Sequence post or a Wikipedia article, you just link to it and the reader either knows or can quickly check whether they've read it before.
In a book, if you just name-drop something like "evo-psych", the reader might have a very different, limited, or wrong conception of the subject. If you refer to another book or article that explains the subject, the reader isn't likely to have read it unless it's a very famous textbook or popular exposition (like The Selfish Gene), because there are many equally good books on any subject. So for the reader to make sure they're on the same page as the author, the book must include a long explanation of the subject referred to - even if it's not the actual topic and author would rather leave it out.
Comment author:Curiouskid
21 October 2012 05:18:26AM
*
2 points
[-]
I have the exact same problem. You forgot about Phineas Gage and getting a pole stuck through your head.
I think one way of solving this would be to use something like Workflowy to make the entire book a zoomable/compressible bullet list. That way, if the book had a section heading like "explanation of Chomsky's theory of universal grammar", you could literally just skip that entire branch of the book (and if any part of it were referenced, you could jump back to it, because it's digital).
Also, a lot of LWers (myself included) are looking to build better argument mapping software for a wikipedia of arguments type resource (though that's a bit simplified).
EDIT:
Also, you could compile all the different 1-sentence, 1-page, 5-page, and 1-chapter explanations from several different authors for any particular bullet point.
The only reason that I still read most books is that it is very low cost for me.
I think I get some benefit from reading through things I already know, though. It's going to help me remember it and the explanations are going to be somewhat different and so I'm going to get a better understanding of it overall.
I would join the group. We could do it through goodreads or a similar, better designed, site if you know of one.
Comment author:Morendil
18 October 2012 06:12:33AM
8 points
[-]
Aye. I'd be keen to join some sort of book club for smart people, where you could see others' bookshelves a la LibraryThing, but on top of that also have very short reviews letting you know what to expect from each book.
Most books tend to fall into two broad categories: things you already mostly know, and things you care little about. The rare high-value book is one that has just enough connection to what you already know, and makes you care about a whole new domain. (An exceptional book, like GEB, will make you care about many new domains at once.)
One recently read book that was very high value because it covered ground that was totally new to me: Abbott's System of Professions. Typically, books in the sociology of professions had focused on the "trappings": professional societies, regulation and so on. Abbott pointed out that professions were the emergent result of a complex system of jurisdictional disputes, and the only way you can understand a profession is by looking at the others that compete with it for dominion over its topics. Abbott's analysis is so wide-ranging that it connects in several places with topics I care about; for instance when he analyzes "the construction of the 'personal problems' jurisdiction", a tug-of-war between the clergy, the (early) "neurologists", and psychiatry; or when he sketches the early history of the information professions - I hadn't realized that librarians were among the first such.
Comment author:Lightwave
17 October 2012 12:51:37PM
*
1 point
[-]
I'm planning on doing a presentation on cognitive biases and/or behavioral economics (Kahneman et al) in front of a group of university students (20-30 people). I want to start with a short experiment / demonstration (or two) that will demonstrate to the students that they are, in fact, subject to some bias or failure in decision making. I'm looking for suggestions on what experiment I can perform within 30 minutes (can be longer if it's an interesting and engaging task, e.g. a game), the important thing is that the thing being demonstrated has to be relevant to most people's everyday lives. Any ideas?
I also want to mention that I can get assistants for the experiment if needed.
Edit: Has anyone at CFAR or at rationality minicamps done something similar? Who can I contact to inquire about this?
Comment author:maia
18 October 2012 01:27:08AM
1 point
[-]
For something very brief, anchoring bias is easy to demonstrate and fairly dramatic. I tried this on a friend a couple weeks ago, anchoring her on 1 million people as the population of Ghana; she guessed 900,000. Turned out to be 25 million.
Comment author:Kindly
18 October 2012 12:00:37AM
0 points
[-]
90% might not be the best number for demonstrating the idea of a confidence interval. It's too close to 100%. There's not much room to be underconfident. What about 50% confidence intervals?
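The calibration point above can be made concrete with a small scorer for a classroom demo. This is a hypothetical sketch (the function name, the guesses, and the "true" values are all invented for illustration): each participant states a (low, high) interval at some target confidence, and we check how often the truth actually lands inside.

```python
# Hypothetical calibration scorer for a classroom demo: each participant
# states a (low, high) interval at a target confidence level, and we
# check how often the true value actually falls inside.

def calibration_rate(intervals, truths):
    """Fraction of true values falling inside the stated intervals."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

# Made-up example: five 50%-confidence guesses about some quantities.
guesses = [(5, 20), (50, 150), (1, 3), (30, 60), (100, 300)]
actuals = [25, 67, 10, 47, 1400]
rate = calibration_rate(guesses, actuals)
# A well-calibrated guesser should land near 0.5 for 50% intervals;
# typical audiences score well below their stated confidence.
```

With 50% intervals there is room to err in both directions, which is the point of Kindly's suggestion: overconfidence and underconfidence both show up as a miss from 0.5.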
Comment author:Lightwave
17 October 2012 01:48:23PM
2 points
[-]
Well the thing is that people actually get this right in real life (e.g. with the rule 'to drink you must be over 18'). I need something that occurs in real life and people fail at it.
Comment author:Vaniver
17 October 2012 05:42:23PM
0 points
[-]
Well the thing is that people actually get this right in real life (e.g. with the rule 'to drink you must be over 18'). I need something that occurs in real life and people fail at it.
No, people are more likely to get it right in real life. Some fraction of your audience will get it wrong, even with ages and drinks.
They get it correct when it's in an appropriate social context, not simply because it's happening in real life. If it didn't happen in real life, confirmation bias wouldn't be a real thing.
Comment author:Lightwave
17 October 2012 02:27:23PM
2 points
[-]
Right, but I want to use a situation closer to real life that reduces to the Wason selection task (and that people fail at), and use that as the demonstration, so that people can see themselves fail in a real-life situation rather than in a logical puzzle. People already realize they might not be very good at generalized logic/math; I'm trying to demonstrate that the general logic applies to real life as well.
Comment author:Barry_Cotter
17 October 2012 01:13:08PM
*
1 point
[-]
Confirmation bias: the triplet number test where the rule is "Any triplet where the second number is greater than the first and the third greater than the second". Original credit (edit: for my exposure) to Eliezer in HPMoR, but I thought of it because that was what Yvain did at a meetup I was at.
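The hidden rule in this task (Wason's 2-4-6 problem) is simple enough to sketch as a one-line checker. The function name is my own; the point is that confirming probes all pass without distinguishing the real rule from the narrower hypothesis people anchor on.

```python
# Sketch of the 2-4-6 task's hidden rule: any strictly ascending triplet.
def fits_rule(a, b, c):
    return a < b < c

# People anchored on "even numbers increasing by 2" tend to test only
# confirming cases, all of which pass without revealing the real rule:
assert fits_rule(2, 4, 6)
assert fits_rule(8, 10, 12)
# A disconfirming probe is what actually distinguishes hypotheses:
assert fits_rule(1, 2, 3)       # passes, though not "evens by 2"
assert not fits_rule(6, 4, 2)   # descending triplets fail
```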
Comment author:wedrifid
17 October 2012 01:34:29PM
3 points
[-]
Confirmation bias, the triplet number test where the rule is "Any triplet where the second number is greater than the first and the third greater than the second". Original credit to Eliezer in HPMoR but I thought of it because that was what Yvain did at a meetup I was at.
To be clear, since reading this made me double-take, I think by "original credit" you mean "original credit for your personal exposure to the concept".
Comment author:Metus
17 October 2012 10:11:41AM
0 points
[-]
In a more general effort to improve my health, or at least slowing its deterioration, I am thinking about constructing a hybrid standing desk. Now I do not have enough money to afford an actual convertible standing desk and I would very much like the convertible part. So I am thinking about a wall mount for my monitor or maybe even better some similar kind of adjustable mount that allows the necessary range of height to switch between sitting and standing. The problem then is still the keyboard. I already have a wireless keyboard, so switching it would not be a problem, but on what would I put it?
Comment author:DanArmak
26 October 2012 07:39:22PM
0 points
[-]
Another possibility is to have two (smaller) desks side by side, one at sitting height and one at standing height. Use a wireless keyboard, or one with a long enough cable, that you can easily move it between tables. Mount the screen on an arm that is mounted between the two desks, swivels left-right and is long enough to reach the center point of each desk.
Comment author:DanArmak
26 October 2012 07:35:52PM
0 points
[-]
Depending on your monitor mount, you could attach a keyboard tray to it. Some higher-end monitor arm manufacturers will sell you a compatible tray, or you could take an existing tray and hard-mount it yourself.
Comments (271)
I just finished the CMU OLI Probability & Statistics course, which I started... somewhere back in March or June. I think, overall, it's a pretty good statistics course. What I like best about it is that it is heavy on quizzes and exercises with real-world datasets, so I learned a bit more about R as well as learning the basics.
It covers, from a fairly practical standpoint: data graphing, stuff like means or medians or distributions, the rules of probability, conditional probability, probability trees, Bayes's theorem, binomials and the normal distribution in particular, confidence intervals, z-tests, t-tests, ANOVA f-tests, the chi-squared test, linear models.
It has some drawbacks, of course: it's largely NHST-based, as one would expect; the Java applets make copy-and-paste impossible on my Linux system, which made answering questions a bit annoying; the R code is not really explained, so you have to figure things out yourself; there's a jump in difficulty between the units, and the one on basic laws of probability seems weirdly long and interminable; and in general, parts of it can be very repetitious (if I never have to specify what the null hypothesis is and what H_1 is again, it will be too soon) and trivial, leading to occasional '-_- yeah whatever' reactions where I get sick of a pedantic question and just click through the possibilities.
But overall I'm pretty glad I did it. I understand much better the tools I was using to analyze my self-experiments and hopefully it'll be a good base for tackling a Bayesian textbook like Kruschke's 2010 Doing Bayesian Data Analysis.
(Google+ mirror)
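The z-test mechanics the course drills can be reproduced with nothing beyond a standard library. Below is a minimal sketch of a one-sample two-sided z-test (the course uses R, but this is Python for portability); the sample values, `mu0`, and `sigma` are made-up numbers for illustration, and the function name is my own.

```python
# Minimal one-sample z-test sketch (population sd assumed known),
# using only the standard library; the data are invented.
import math

def z_test(sample, mu0, sigma):
    """Return (z statistic, two-sided p-value) for H0: mean == mu0."""
    n = len(sample)
    xbar = sum(sample) / n
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    # Standard normal CDF via the error function.
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    p = 2 * (1 - phi)
    return z, p

z, p = z_test([5.1, 4.9, 5.3, 5.2, 4.8], mu0=5.0, sigma=0.2)
# A large p here means we fail to reject H0 at conventional thresholds.
```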
I was recently reading an outraged discussion of the warnings New York City had gotten about the risk of flooding, and I asked what less currently obvious infrastructure threats were being ignored. I didn't get much discussion there, so I'm asking here.
New journal using video as a medium hopes to reduce failure to replicate.
If you've got some spare time to blow, there's always yet another interpretation of quantum mechanics up on the arXiv.
Or perhaps you'd rather something more classical? How about a correspondence theorem between QM and thermodynamics?
Cute description of magnetic fields &c. I did not previously know what the hell a field was, and now I might.
My [uninformed] interpretation of mathematics is that it is an abstraction which does exist in this world, which we have observed like we might observe gravity. We then go on to infer things about these abstract concepts using proofs.
So we would observe numbers in many places in nature, from which we would make a model of numbers (which would be an abstract model of all the things which we have observed following the rules of numbers), and from our model of numbers we could infer properties of numbers (much like we can infer things about a falling ball from our model of gravity), and these inferences would be "proofs" (and thankfully, because numbers are so much simpler than most things, we can list all our assumptions and have perfect information about them, so our inferences are indeed proofs in the sense that we can be certain of them).
But it seems like a common view that mathematics has some sort of special place in the universe, above the laws of physics, and I don't really know what arguments people have for believing this. What are the arguments for this belief?
Edit: Reformulated my question to make it more specific.
It's more fun to think of the reverse relationship!
I'm having a pretty intense reaction to reading certain articles and could use some support or a solution:
Here's what I read and my reactions:
Feynman's Cargo Cult Science (Which is about how a lot of scientific studies are done badly, often due to researchers not being allowed to do the research correctly.)
The PLOS Medicine article "Why Most Published Research Findings Are False"
An article about how psychologists aren't usually using the treatments most supported by science which links to a document that contains a horrifying account:
http://www.psychologicalscience.org/journals/pspi/inpress/baker.pdf
I'm having a variety of reactions:
What meaning is there in doing anything (being a doctor or a psychologist for instance... or any number of other professions) if we can't even trust the research or the schooling? How can I make a difference in the world or do anything useful with no real knowledge? How do you find meaning, LessWrong?
Thank goodness I found this place. I am in love with the glimmers of sanity I see here. Before I found LessWrong I was just kind of... "WTF humanity is a mess." Now it's more like "WTF humanity is a mess but at least there's a group of people trying not to be." If anyone is up to describing this wonderful and horrible feeling in their own words, I could really use to feel related to about this.
Do you know of a website where one can look up a piece of research to see what flaws it has? Is one planned? I need this because it would take a very long time for me to read enough on each relevant topic to discover whether a piece of research I want to use is flawed or not. For instance, Feynman explained how lots of studies have been done with mazes and rats, but people didn't seem to realize that the rats were using unexpected methods to find the food, and all sorts of things have to be controlled for, ranging from the scent of food to the type of flooring in the maze. If you don't know that all of these things need to be controlled for, you won't know that the vast majority of studies done on putting rats into mazes are useless. It's simply not realistic to expect ourselves to be able to single-handedly give every single study we read a thorough enough review to detect all the flaws. I love research, but I now feel that it's futile. Does anyone know a solution? I know that peer-reviewed journals are supposed to address this type of problem, but I don't see the online studies that I find being rated or marked as flawed in an obvious way.
http://www.bmj.com/content/331/7514/433
("Most published research findings are false... including this one.") ("I heard you like publication bias")
Whoa neat. Yes, this brings to mind a certain internet meme... (:
That makes things sound worse than they are. I disagree that we have no real knowledge, and I'm also not sure about lumping doctors or psychologists together in this context. In medicine there are effects so huge that explaining them away as publication bias or spurious correlations is implausible (maybe because the relative risk is so huge, as with smoking causing lung cancer, or because the base rate is so low, as with asbestos causing malignant mesothelioma), so I count them as real knowledge. But I don't know of similarly huge effects in psychology, so psychology might differ in that key respect.
(Here's a speculative tangent that belongs in brackets. The foregoing might partly explain bad epistemic habits in research. Historically, lots of research went into things we basically fixed with magic bullets. So it didn't much matter when people suppressed negative results or leaned heavily on observational studies; the true effect of the magic bullets was so huge that it held up despite the biases. This might've gotten researchers into the habit of not worrying about, or not finding out about, methodological biases. But now we're searching for smaller effects where those biases matter.)
Better still, most of the problems you refer to above are solvable. We could, for instance,
So, supposing I did accept the premise that the research base is so bad as to make doctors and psychologists useless, there'd still be an obvious alternative to giving up and walking away: I could become an epidemiologist or a medical statistician or a policy pundit, and encourage people to do the things I listed above.
Thank you for responding to this, Satt. I really did need some input here, and it's very good to see another perspective and to have been shown a whole list of things that could be done.
I am in an unusually bad situation because the subject I'm most interested in is psychology. I noticed something was wrong with the psychology industry while I was still young enough to avoid getting into it. The three main problems are:
That you have to diagnose people immediately to collect insurance payments when in reality it takes a long time to know whether there's even anything wrong with them at all, and being deemed "messed up" by a professional could be very hurtful to the patient.
I could tell that a lot of what was passing for therapy was BS and decided there must be something drastically wrong with the schooling. I didn't know that it was this bad, but I am glad I noticed something was drastically wrong early on.
I am primarily interested in gifted adults. Neither an abnormal psychology degree nor a developmental psychology degree would give me a solid understanding of gifted adults - those are focused on the average Joe and children with learning disabilities respectively. Gifted adults are neither very well served by the typical therapist (imagine taking a space ship to a car mechanic) nor by schooling methods intended for children with learning disabilities. I didn't realize that my main interest was in gifted adults until later, but I could tell that the psychology that I had been exposed to wasn't what I was looking for. I have a space ship myself, and wanted psychology that taught me about space ships like mine.
So I went to college for web design instead. I studied psychology on my own. I love being a web developer, a lot, but I want to really make a difference in the world and I don't feel that adding little buttons to websites is making that happen. Of course, web development can be used for making a difference, too, but if most of what I know about psychology is wrong (it quite possibly could be?) then how am I supposed to pursue my main interest? I was hoping to do self-improvement writing, and I can still do that at any time, and possibly gain an audience that way, but if the foundation of knowledge I am working from is bad, then it's not useful to do so. What I want to get from writing about self-improvement is meaning, not money, so that would be unacceptable to me.
Something occurred to me: I've learned enough about the psychology of gifted adults now that I'd probably have a strong advantage when it comes to writing review articles or meta-analyses on gifted adults. I'm not credentialed, so I could not give the articles any traditional "credibility" (that's in quotes for a reason, now that I know all of this...). However, considering the circumstance (that getting an accredited psychology degree requires you to learn a bunch of mumbo-jumbo and that they don't teach about gifted adults anyway), I'm thinking that getting a degree would not increase the quality of my articles substantially enough to justify spending tens of thousands of dollars and so many hours on it. Reading the key books on research practices would probably be the best action, though I do not know what they are.
If you (or other LWers) have thoughts on how to approach this sticky problem, I'm interested in hearing them.
What do you mean by "gifted adults"? Just "adults with very high IQ"? I think there's a standard trick for that, where you pen them all together and then you have a regular human society where the social effects of giftedness disappear. Or do gifted people have abnormal psychology in absolute terms, not just relative with alienation and boredom and so on?
There are lots and lots of definitions for "gifted". States' legal definitions range from vague things like "people with a talent" to numerical specifications. The gist: I've seen definitions that range from a rarity of 1 in 4 to 1 in 50. Truth be told, my real interest is highly gifted adults and geniuses, not just "gifted adults" in general.
From what I've read, "highly gifted" tends to be associated with IQs > 145.
The people in each IQ range have their own characteristics. People with IQs near 130 tend to be more popular. People with IQs around 160 or greater have difficulty fitting in and tend to limit social contact because they are too different. These are relative, obviously. It has been observed that people with IQs over 145 frequently have enough intensity that it results in them coming across in an energetic way that is called a variety of things, from electric to charismatic. This appears to be genetic. There are other things, like how exceptionally gifted children have trouble answering "simple" questions and doing "simple" tasks like "draw a bird" - too many options come to mind, and they have to choose, then, among 100 kinds of birds.
This is just the tip of the iceberg when it comes to the differences that have been talked about. I am not sure that any one piece of research I've read is true, but there are probably over a hundred differences that have been either researched or observed by psychologists who work with gifted individuals. I have observed a lot of these differences for myself, and have seen patterns. I can also use what I know to make guesses about who is gifted and how gifted they are, and I am usually close. I feel certain that there are a huge number of differences of both types, though what, specifically, they are and how common they are to each IQ range would be hard to say.
Also, I don't think it's called "abnormal psychology" when there's nothing wrong with them.
*suddenly thinks of a coping strategy*
Wikipedia addresses this... I was just reading the wiki on the Paleo diet and saw a bunch of stuff about repeatability and study relevance like:
I realize Wikipedia isn't credible for citing or anything but I feel heartened because:
I bet they often link to a credible meta-analysis, making it easier to find them (I've been told by Gwern that one way of coping with this is to read a meta-analysis because it gives you a number of advantages over reading individual pieces of research).
It serves as a method for finding out about some of the flaws you need to look for when reading studies on the topic.
It often lists a collection of relevant research, which can save time.
It might be a good starting point for creating your own thorough reviews of studies because a lot of things will already have been hashed out, so it's just a matter of verifying that what's there is correct, which should save time if you build on it.
Hm...
Wikipedia is not a perfect solution but I think this will help me cope.
.oO I wonder if there are features that could be added to Wikipedia that would encourage the entries to transform into credible meta-analyses...
A very good Wikipedia article will be equivalent to a review article, but such an article isn't a meta-analysis: it doesn't include only studies which can be boiled down to a few summary statistics like d. There's also little way of being sure that the article is comprehensive and unbiased - one reason meta-analyses usually make a point of how they did a big search on Pubmed and looked through hundreds of results etc.
I don't know what features could be added to deal with either problem. Any meta-analyses tucked into WP articles would be rightly considered Original Research.
Probabilistic Voting
4chan apparently faked Bieber having cancer and got some fans to cut their hair off.
On 4chan, I just saw someone say "We rolled to see which celebrity's fans we'd troll into thinking said celebrity had cancer. One thing led to another."
That got me thinking about the whole "rolling" thing. If you're not familiar, on 4chan every post has a sequence number. The /b/ board is fast enough that you can't really predict the numbers. Having an authoritative common-knowledge source of randomness available for literally zero effort has led to some interesting coordination strategies and community norms.
There's lots of interesting ways that gets used, but right now, the coordination thing is what interests me. Some interesting observations:
People second ideas, quote them, edit them, etc, such that there is an evolving pool of ideas with probability of winning proportional to popularity (bypasses a lot of the crap in voting systems).
The cost of creating new ideas or minor variations is zero.
Absence of the normal incentives to vote strategically; you put forth your best idea. (There is the consideration of optimizing your idea for getting seconded.)
No complex counting algorithm. As soon as the winning idea is posted, everyone knows it and starts acting on it.
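The mechanism those observations describe - a shared, unpredictable number picking a winner with probability proportional to how often an idea has been seconded or reposted - can be sketched as weighted random selection. This is a hypothetical illustration; the idea names and endorsement counts are made up:

```python
# Each second/repost of an idea is one "lottery ticket"; the shared,
# unpredictable post number serves as the common source of randomness,
# so every participant can verify the winner independently.
ideas = {"idea A": 5, "idea B": 3, "idea C": 1}  # hypothetical endorsement counts

def pick_winner(ideas, post_number):
    # Expand counts into a flat list of tickets, then index into it
    # with the shared random number.
    tickets = [name for name, count in ideas.items() for _ in range(count)]
    return tickets[post_number % len(tickets)]

print(pick_winner(ideas, post_number=423097124))
```

Because the post number is common knowledge and the tally of endorsements is public, no trusted counter is needed: everyone runs the same computation and arrives at the same winner.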
Anyway, I thought that might be interesting. I'd like to see some more work on this.
How to comment when I arrive late at a post with many comments?
I usually only read LW every day or two. I'm also in the GMT+2 timezone, so US people mostly comment while I'm asleep. So when I reach an interesting post, like this one just now, it already has many comments. I really want to reply to some of them and to the post itself, but first I need to read all of the comments and internalize everything that has been said already, or I risk repeating what others have already pointed out. For a post with hundreds of comments, this is a lot of work.
I would welcome any tips for being better, or more efficient, at this.
It recently occurred to me that there is a near-example of (hostile) acausal interaction in popular culture. In the second Robert Downey Jr. Sherlock Holmes movie, he and Professor Moriarty have an entire "conversation" without speaking aloud, each simulating the other so they can decide what to do in their fight. It's rendered in a very comprehensible way, too, considering how weird a concept acausal interaction is. (It's not a perfect example since they do interact, but the conversation itself happens entirely in simulation.)
this clip
There are lots of examples in the movies of two geniuses facing off and one asserting that the other can simulate the first so well as to understand and counter a particular plan; that is, of A simulating B simulating A. This example has the advantage of showing the hypothetical, rather than asserting it.
This is an example of a mainstream concept: exploring the game tree. It's worth promoting, but I prefer not to call it acausal interaction.
Has someone been karmassinating me? I'm pretty sure the karma scores of almost all comments of mine from 22 October 2012 09:15:59AM to 24 October 2012 04:54:24PM are lower than they used to be. (What is the proper thing to do when one notices something like this, BTW? I'm not sure it's posting in the open thread, but I can't think of anything else.)
A Lesson in Skepticism
"Not checking the exact origin of every single quote all the time makes you a shitty skeptic." -Abraham Lincoln
I agree to an extent with the fictional Abe Lincoln quote. Quoting famous people serves mostly as a means of signaling so that whatever you're saying sounds more convincing. The actual epistemic value of quotes is so low (if it's even positive at all) that it's justifiable to burden yourself with the task of checking the exact origin of every single quote you encounter before you start repeating it. (But it won't be "all the time", as you have to check only once per quote.)
hello, all. first post around here =^.^= I've been working my way through the core sequences, slowly but surely, and I ran into a question I couldn't solve on my own. please note that this question is probably the stupidest in the universe.
what is the difference between the Bayesian and Frequentist points of view?
let me clarify: in Eli Yudkowsky's explanation of Bayes' theorem, he presented an iconic problem:
to my understanding of the Bayesian perspective, the answer would be 7.8% and would represent the degree of uncertainty that the subject has breast cancer
to my understanding of the Frequentist perspective, the answer would be 7.8% and would represent the frequency of subjects that both have cancer and tested positive.
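Either way, the arithmetic is the same; the 7.8% comes straight out of Bayes' theorem. A quick sketch using the standard numbers from that essay (1% prevalence, 80% sensitivity, 9.6% false-positive rate):

```python
# Eliezer's iconic problem: 1% of women screened have breast cancer,
# 80% of those with cancer test positive, and 9.6% of those without
# cancer also test positive. What fraction of positives have cancer?
p_cancer = 0.01
p_pos_given_cancer = 0.80
p_pos_given_healthy = 0.096

# Total probability of a positive test, then Bayes' theorem.
p_pos = p_cancer * p_pos_given_cancer + (1 - p_cancer) * p_pos_given_healthy
p_cancer_given_pos = p_cancer * p_pos_given_cancer / p_pos

print(round(p_cancer_given_pos, 3))  # prints 0.078, i.e. 7.8%
```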
a keen observer will understand where my confusion comes from- on my way through the core sequences, I have heard much from the Bayesian side, but nothing from the Frequentist side, making it seem artificially non-existent.
The classical way of explaining the difference is through the example of a coin that you know is biased, but you don't know whether heads or tails is favored and by how much. What is the probability that the next toss will be heads?
Supposedly, a frequentist would say that there is an objective answer, given by the bias of the coin which also equals the proportion of heads in a long run. You just don't know what it is, the only thing you know is that it is not 1/2. A Bayesian would say by contrast that since you have no information to favor one side over the other, the probability (degree of belief) you have to assign at this point is 1/2.
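The Bayesian answer can be illustrated numerically: average the unknown bias over any prior that is symmetric about 1/2, and you get 1/2. A toy sketch (the specific prior values are made up for illustration):

```python
# The coin is known to be biased, so 1/2 itself gets no weight,
# but with no information favoring either side, the prior over the
# bias is symmetric about 1/2.
biases = [0.1, 0.3, 0.7, 0.9]      # hypothetical possible values of P(heads)
prior = [0.25, 0.25, 0.25, 0.25]   # symmetric weights

# Degree of belief in heads on the next toss = prior-weighted average.
p_heads = sum(b * w for b, w in zip(biases, prior))
print(p_heads)  # 0.5, forced by the symmetry of the prior
```

Any other symmetric prior gives the same 1/2, which is the Bayesian's point: the probability assignment reflects your state of knowledge, not a physical property of the coin.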
This only explains the question of Frequentism vs Bayesianism as philosophical interpretations of "what probability is". The practical issue of Frequentism vs Bayesianism as concrete statistical methods is often tangled with this one in discussions, but it is really a separate matter.
I had the same issue, and I'm personally not convinced there's an actual "Bayesian vs frequentist" conflict as framed in the sequences. Both are useful ways of thinking in different scenarios.
To use Emile's example, there's a distinction between the probability that you think the millionth digit of pi is even or odd, and whether it really is even or odd. Even though you don't know the millionth digit offhand, it can be computed and has a definite value, so it really doesn't matter what you think it is. Saying 50:50, or more generally an equal probability distribution, is in my mind basically the same as saying "I don't know" (i.e. "I have zero evidence for deciding one way or the other.")
There's also a difference between the parity of the millionth digit of pi, and, for example, the wind speed at an arbitrary place and future time. It's impossible to calculate, so instead you can apply Bayesian methods and estimate a range of values based on prior knowledge, and any historical data you might have access to.
The bayesian/frequentist distinction can cover three different things that may occasionally be mixed up:
The core philosophical disagreement (the "proper" one) about whether probabilities represent an agent's knowledge / uncertainty about the world, or whether they represent frequencies of some event. For example, a frequentist in this sense might say that it's meaningless to talk about the probability that the millionth binary digit of pi is even or odd. I think frequentist epistemology is mostly discredited, but that it used to be dominant.
There are a bunch of hodge-podge statistical methods and tests (like p-values); and later on attempts to unify everything in terms of bayesian methods. People used to the "old" methods may not particularly call themselves "frequentists" or care that much about such labels; those pushing for the new (better) methods are the ones stressing the distinction (hunting down the sin of frequentism), sometimes to the annoyance of the rest.
Thinking in probabilities versus thinking in frequencies (80 women out of a hundred); the human brain works better when a problem is presented in terms of frequencies.
I don't think Bayesians and Frequentists would answer that question differently; frequentists also use Bayes' Theorem, they just don't base all their philosophy on it.
I've come up with a litany that would be to instrumental rationality what the Litany of Tarski is to epistemic rationality, expressing the sentiment in "Newcomb's Problem and Regret of Rationality":
If I would be better off taking both boxes,
I desire to choose to take both boxes;
If I would be better off taking only box B,
I desire to choose to take only box B;
Let me not become attached to decisions I may not want.
This doesn't help in the general case. See, for example...
If I would be better off giving Parfit $100,
I desire to choose to give Parfit $100;
If I would be better off keeping my $100,
I desire to choose to keep my $100;
Let me not become attached to decisions I may not want.
A step closer to accurate (albeit in need of elegant wording) would be:
If I would be better off having precommitted to taking both boxes,
I desire to choose to take both boxes...
“Would” has to be interpreted à la Gary Drescher in Good and Real.
A wording slightly less inelegant than yours would be "If I am better off being the kind of person that gives Parfit $100..."
Big hypothetical question. Context: I'm in an Internet argument with someone who won't take my word for the physics; he challenged me to find someone else who would say the same thing.
Question: Do these observations violate the axioms of Newtonian physics? If so, which ones?
No mention of chaos or of quantum mechanics, please: We're assuming perfect control of all variables to avoid the first one, and just handwaving away the second one.
It shouldn't be necessary, but please state your credentials.
There have been recent discussions on determinism in Newtonian/classical mechanics within the philosophy of science literature. See, e.g.:
Norton, John D. 2008. “The Dome: An Unexpectedly Simple Failure of Determinism.” Philosophy of Science 75:786–98. doi:10.1086/594524
Wilson, Mark. 2009. “Determinism and the Mystery of the Missing Physics.” British Journal for the Philosophy of Science 60:173–93. doi:10.1093/bjps/axn052
http://www.panacearesearch.com/about/
Personalized medicine is back again. I can't tell whether the number of incarnations is a bad sign or if Jaan Tallinn being in on it is a powerful good sign.
https://sites.google.com/site/medicineispersonal2/our-company/our-staff
Any idea why it went poof before?
I would guess something to do with the founder/funding troubles, based on the current incarnation not including the one from the apparent first incarnation. I don't have actual information on the topic though.
Can anyone recommend a short explanation of the idea of Hegelian dialectic that doesn't make me want to self-immolate?
More generally, what, if anything is worth studying/salvaging from Hegel?
Does anybody have ideas for potential applications of lucid dreaming? It's been discussed a bit here and here before.
Aside from seemingly being a very good source of fun, I'm trying to think of other ways to use lucid dreaming.
For instance, mental visualization/rehearsal has been shown to be effective at improving ability in various skills, so it seems likely that rehearsal during lucid dreams should have similar (and possibly greater) benefits, though I don't know of any studies looking into this.
Even if you've never lucid dreamed yourself, I'd appreciate it if some of you brainstormed some ideas for novel uses for lucid dreams.
I had a brief look at the literature about a month ago and didn't find much. There is some evidence of performance enhancement from practicing motor tasks in lucid dreams (Erlacher, 2010), but the mechanism is unknown. Stumbrys et al. had two very speculative studies on asking dream characters within lucid dreams for help with problem solving (2010, 2011); they concluded that dream characters are terrible at arithmetic, but may be able to help with 'creative' tasks (I don't see good evidence for that from their data).
On lucid dream induction, Stumbrys et al. (2012) is a useful review.
Erlacher's abstract (emphasis mine):
I've never heard of a study of whether improving skills via lucid dreaming works.
Two things that make me really want to learn how to do it, are free sex and improving my social skills by getting into unusual social situations that I couldn't try in waking life. I have heard anecdotal accounts of people using lucid dreaming for these purposes.
By the way, this is a really good book on lucid dreaming on the SIAI library thing account: http://www.amazon.ca/Exploring-World-Dreaming-Stephen-Laberge/dp/034537410X
Mencius Moldbug: How to Reboot the US Government
New short talk by Moldbug! :D
I didn't know he was so young.
Neither did I.
The fact that government isn't as good as it says it is, or that progressive ideas aren't fully consistent doesn't mean that either are fully dispensable, nor is it particularly clear that people who want to eliminate government have to stop any minor involvement they have (like voting) in order to achieve that goal.
He's reminding me of Michael Vassar's observation that geeks want explicit language in a way that most people don't. The fact that what government is and does isn't a good match for the way government is usually described isn't a good reason for eliminating government.
His point that people generally don't know anything about governing is salient, but does he have any experience running something more challenging than a solo blog?
To my mind, democracy still has the advantage that it makes it clear to politicians that there's a limit to how badly they can get away with treating the public.
He cheats a little on the communists vs. Nazis numbers-- 6 million is just the Jews murdered by the Nazis. Another five or six million Roma, homosexuals, criminals, etc. were killed in the death camps, and some 25 million (very rough estimate) were killed as a result of the Nazi side of WWII. I have no idea whether Japan would have started its war if Germany hadn't been its ally.
This being said, I agree that communism has a worse record than Nazism, but a better reputation. However, in the US and Europe, there are violent neo-Nazis but (unless I've missed something) little or nothing in the way of violent communists, so it makes sense to be more concerned about Nazis.
My problem with him is the general problem with radicals-- he needs to offer better arguments that what he's suggesting will be reliably better than the current set-up. Speaking of Nazis and Communists, it's possible to make things a lot worse because your theory sounds so attractive.
It was amusing to see that Mencius Moldbug, Dark Lord of the Convoluted Sentence, is a pretty average speaker.
Probably. They didn't have anything like a formal military alliance until the Anti-Comintern Pact of 1936, but the war in East Asia arguably started in 1931 when Japan invaded Manchuria.
Yeah, I view Moldbug as someone who looks at your house and is right when he says maybe the toilet shouldn't drain into the shower, but then suggests you can use fusion to run all your appliances and power your helicopter
I think the problem with Moldbug is that he's so firmly wedded himself to fighting against the whiggish narratives that are so deeply embedded in our historical accounts that he falls into the very trap that Herbert Butterfield, the original critic of whiggish narratives, warned of:
(On an unrelated note, I occasionally find myself falling into a different, more subtle trap that Butterfield also warned of:
I agree. A strong argument in favour of our current order (social democracy) is the Burkean conservative one. I've said in the past that Moldbug is good at diagnosing but bad at providing treatments, and I think his plan as it stands is more likely to go terribly wrong than terribly right. But hey, we've tried socialism so many times in so many different places and we still haven't given up on it; can't we try Neocameralism in a charter city somewhere?
There are plenty of violent left anarchists / anti-fa (Communists in the sense Moldbug is using) in Europe. To cite an example from Greece:
Social Justice in action; I'm sure the protesters had "legitimate grievances" which foreign media were sympathetic to. Question time: if Neo-Nazis had burned down a building, do you think it more or less likely that you would have heard of an incident like this? Can Neo-Nazis ever have "legitimate grievances"?
Indeed we have a ready-made test case for this: check out foreign reports on Golden Dawn, then compare them to their actual relevance. The double standard regarding this is ridiculous.
As is the amount of resources spent on "fighting" the far right in the EU compared to the amount dedicated to fighting the far left. Even if ceteris paribus Nazis (in the wider sense of the word) are more competent at takeovers and causing damage than Commies (in the wider sense of the word), diminishing returns have almost certainly kicked in for fighting Nazis but not for fighting Commies.
Yeah, good writer =/= good speaker. Unfortunately Eliezer seems to be another example of this.
The murderousness of certain Greek left-wingers is real, but I wish you wouldn't downplay the murderousness of Golden Dawn. They contributed to the slaughter of Srebrenica in Bosnia -- they are currently killing immigrants. They are officially in the parliament and yet they have not ceased their numerous death threats against everyone who stands in their way.
Sorry, but though the murderousness of certain off-parliament Greek left-wingers is certainly a fact, and the sympathy they receive from inside the parliament likewise, the actual bloody neonazi murderers are in the Greek parliament. With 7% of the vote they're already killing people and nobody here really gives a damn; are you sure they won't commit acts of genocide when they reach 30%?
You are referring to the Greek Volunteer Guard? Some allegedly had links to Greek Neo-Nazi groups including Golden Dawn, though you have to admit that terming that "Golden Dawn contributed to the slaughter at Srebrenica" is importing stronger connotations.
I didn't intend to downplay their murderousness, I wished to downplay the relevance of media reports on them. Which I think are disproportionate to their importance for non-Greeks. Also I hoped people would note the soft handed treatment anarchist/communist/anti-fa violence is given compared to the uniform condemnation of far right violence.
I upvoted for the first paragraph. Then I wanted to cancel the upvote when I read the paragraphs after the quote about Greece (which I deemed too adversarial for a friendly discussion). In the process I discovered the nonobvious fact that one must click again in the upvote button to cancel it: clicking downvote brings it to -1 instead of just canceling the upvote.
Didn't mean to be adversarial towards Nancy; I hope she doesn't take it that way. I was taking a strong stance, which is of course political, on what interests and biases Western media generally have. I edited the style; is it better now?
It didn't feel adversarial to me-- I'd forgotten about far left violence in Europe.
I did hear about it-- you can more or less assume that if it's on the BBC radio news programs, I've heard about it. This doesn't mean it will come to mind when I'm making sweeping generalizations.
To be honest, I don't think you should revise your writing based on what just one random LWer (me) thinks. I just wanted to share the discovery I made about canceling upvotes, which was new and unintuitive to me. If I had read your last paragraphs before upvoting, I would have just refrained from voting in either direction and I would not have written any critical comment.
If you really want to know, though, the part that bugged me most was the paragraph immediately after the quote. ("Social justice in action…") It is snarky; maybe not towards Nancy as such, but certainly against the general opposed political position. I think the "no mindkilling" general code should preclude using snark in a political discussion, since its purpose, roughly, is to lower the status of the opposed viewpoint without adding substance (relative to a non-snarky rewrite).
But as I said, I doubt you should care too much about this opinion and rewrite your post.
Don't be silly you are a member of the LessWrong community in good standing, I appreciate such feedback. I now see your point about snark, but I was also trying to refer to a particular post by Moldbug, to make this more explicit I've added a link there.
An amusing thought occurred to me while reading HPMOR. Harry Potter may already be able to rule the world in chapter 6 by doing the following:
How do a lot of you guys read so many things so quickly and retain all the knowledge? This seems like perhaps THE MOST VALUABLE skill I could learn, and I can't find ANY good resources on it!
Use an imaginary friend to whom you try to explain the gist of what you've just read while, say, brushing your teeth. :)
(Actually writing down an explanation would certainly be more effective, but not as fast.)
This stuff takes practice in general. Note-taking and spaced repetition help. Maybe don't worry about best practices or "the right way" to do it at first -- anything's probably better than nothing.
One thing that can help is to always read with a goal in mind. Reflect on what you really want to get out of whatever it is you're reading. Maybe don't just "take notes" but try to build a concise summary, map out the main argument, or write a review. Look for something to bring up in conversation with a friend, or come up with three questions to ask the author. Always be noticing your confusion. Read the end-of-chapter problems before reading the chapter. (Of course it could be bad to read with the specific goal of answering a single narrow question, if you end up just scanning for the answer and missing out on other value.)
I'm reminded of an OB post from a couple years ago: http://www.overcomingbias.com/2010/05/chase-your-reading.html
Making good cards for spaced repetition may help.
I've once been told the keys were an arcane ritual called "taking good notes" combined with the Level 5 Bayesjutsu called "Condense your probability mass" and "Test your predictions".
Attempts at piercing the veil of secrecy and/or locating a tutor or manual on these rituals and techniques have proven fruitless to date. Reports of such findings have all turned out to be hoaxes or were never confirmed, potentially as the finders became part of the group which maintains the secrecy.
Does anyone know of an online resource (or book) that goes through typical mental illnesses or neurological patterns that lead people to believe they've been possessed by demons? Google is swamped with religious blogs, and my google-fu is failing to cut through. Context: Somebody said (paraphrase) here's a youtube video of a guy acting kinda demonic, then everybody prays and he gets better. What is the "atheist" explanation? So I went around looking and didn't have much luck, and now I am really curious. I'm assuming that even if some are "fake," some people actually believe they are possessed. Also, yes, I know trying to convince a believer is probably a lost cause, but I'm curious for my own sake now.
In the new sequence Highly Advanced Epistemology 101 for Beginners EY has made use of exercise questions / statements intended to be pondered prior to continuing. He has labeled these "koans" but is open to suggestions for a better word, as a koan means something a bit more specific than that to Zen people. Any ideas? Here are the "koans" from this sequence in order of appearance:
The Useful Idea of Truth
The Fabric of Real Things
I propose that we continue to call them koans, on the grounds that changing involves a number of small costs, and it really, fundamentally, does not matter in any meaningful sense.
There is a cost to doing nothing as well. Calling them koans potentially has the following effects:
The question is whether it is more costly to make the change or not. How costly is the change? Are the costs long-term or short-term? (The costs of not making the change are mostly long-term.)
Also relevant: Apart from avoiding the above costs, are there benefits to changing it to something else? (For example, a better term could make the articles more interesting and intuitive to beginners than "koan" does.)
Knowing the kind of people who read LW, I guess that on reading “koan” more people will think about hacker koans than Zen kōans (also given no macron on the O).
I made an outline of polyphasic sleep recently. Feel free to read it or contribute stuff that hasn't been added yet.
https://workflowy.com/shared/5c919540-f8e7-a677-bbf9-e4ebe18b2948/
I don't entirely follow... Is there such a thing as 'learning to REM-nap' without the proposed mechanism of the pressure of sleep rebound forcing a REM rebound during the space of a nap?
I mean I am interested in undergoing adaptation through sleep deprivation, then something like uberman then everyman.
It would not be viable for me to stay in a polyphasic schedule next year. Ultimately, I will have to return to something largely along the lines of segmented or monophasic. Still, I have heard that undergoing polyphasic-style adaptation can help you to become acclimatised to getting REM sleep in a 20-30 minute period, something I currently can't do, but might be useful if I have a sleep debt or if I know I'm going to do an all-nighter etc.
So the idea is adapting to polyphasic then switching back to segmented or monophasic. Would I expect to nap better afterwards? Is this likely to be useful or worthwhile?
O. I dunno. I have some more doubts about polyphasic sleep these days; last time I checked in the Zeo forums, no one had posted a complete writeup demonstrating a polyphasic lifestyle much less accompanying metrics that the lifestyle hadn't hurt them (I'd particularly like spaced repetition statistics). And since Zeos provide real data, much more so than blog posts claiming successful adaptation...
Related to: Denying the Cat: A Wonderful Chesterton Quote
Seems plausible. We still do have extreme suffering, though; we just don't see it in our day-to-day lives. Arguably we are worse people from a virtue ethics perspective.
I don't think we have good reasons for metaphysical optimism regardless of that issue, however. My argument against it is anthropic. Assuming there are many possible metaphysics (a position that might be trivially false; I don't know enough to comment on that), we can infer that, human values being complex, only a tiny fraction of them are favourable.
Our physical surroundings can't help but be at least somewhat favourable. We can't help but be on a planet in the goldilocks zone in a universe with its particular value for the gravitational constant, because if we weren't there wouldn't be anyone around to make the observation.
When it comes to metaphysics we most certainly can make observations in a universe (metauniverse?) where the metaphysics have horrible things in store for us.
This argument works for the laws of our universe too. They are provably minimally friendly to the development of intelligence, but are very likely not friendly to its long-term survival or flourishing. And all this is assuming an uncaring universe; a caring one may be much worse, in the uniquely horrible way an almost-friendly AI would be.
99 clever tips to make your life easier
I'm hoping that these will not just be useful in themselves, but also inspire a more ingenious attitude.
Disappearing link?
I think you left something out of this comment. Probably most of it.
added: In the likely event that Arundelo is correct, the broken code will appear when you click the edit button and you can clean it up.
Thanks-- yes, I'd put a return after one of the parentheses, and more importantly, didn't check on the comment before I left for the evening.
Does the LessWrong community have a consensus on the subject of moral accountability, to the same extent that it has a consensus on things like free will and reductionism? If so, what is that consensus?
My opinion on the subject is, essentially: it's irrational to think people are morally culpable for their actions because their behavior is completely contingent upon their neurochemistry, which they have no control over. You can't blame a psychopath for having the specific cognitive makeup that made him a psychopath. Also, things outside of his control such as environment, parenting, etc. went into making him a psychopath. So trying to put "blame" on him for doing something bad, or wanting to see him suffer "because he deserves it", is irrational. Standard determinism, really. Not a very unique or original perspective, but one that's quite at odds with the view of the general population.
I've never really seen this mentioned very much on this website. Do LessWrongers generally take this view? Are there some good articles, both on and off LessWrong, that talk about this in much detail (whether they're arguing for or against my position)? I'd appreciate it if someone recommended some to me, as I find this subject fascinating.
People do have control over their neurochemistry. Invoking the classic compatibilist conception of free will, if they wanted to have different neurochemistry, they would.
What you say is true to some extent, but there's also the fact that holding people morally responsible actually changes their behavior, and if we didn't hold anyone morally responsible for anything, people would behave worse.
There is actually an article here titled Causality and Moral Responsibility. You may want to read the linked prerequisites first.
Moral accountability is a lot like justice: it has a lot of psychological hooks in the human mind that make it very useful for enforcing how you want your society to be, and in the ancestral environment it was probably linked far more closely to utility than it is today. The marginal effects of either cultural edifice might be good or bad, but we should be careful about trying to dismantle either one.
I don't know if this is a matter of consensus, but I generally see it as a matter of translating from third-person deontology to consequentialism by way of third-person virtue ethics and game theory: rather than work with concepts like "culpability" directly, I ask first whether an act is evidence that someone's likely to do other bad acts, and how well that risk can be mitigated, and second whether punishing that sort of act would make it rarer by enough to outweigh the cost and damage of punishing.
How did software get so reliable without proof?
Steven_Bukal writes:
Some questions, for anyone who uses digital books a lot: what readers -- both hardware and software -- do you recommend, and why? What determines whether you obtain a book on paper or as bits? Do you find the usability problems I list below?
I don't have an e-reader, although I do have computers and the Mac Kindle application. But I've never bought an e-book, because the convenience of a book that takes up no space has not yet outweighed the problems I see with them, even though the space that paper books take up is a major problem for me.
I used to print out scientific papers for reading, but I stopped that some years ago and only print them now when there's something I need to study intensively, at which point most of those usability considerations kick in. At this point, I can't see myself buying e-books except for the sort of mid-list SF where I would drop the physical book in a charity bin after reading.
I keep all of my books as PDFs on Mendeley. If a PDF is not available, I buy a hard copy through Amazon and send it to 1DollarScan to be converted to a scanned PDF.
In terms of screen estate, I agree, but in terms of looking for something in textbooks, I find it much easier to consult multiple ebooks at once, since I can easily search through tens of them in a second.
Personally, I don't buy physical books anymore, though I do have a (small) library containing some old books that would be hard to find online.
Pictures?
I don't know. I shall check.
ETA: I have checked. Of the last 30 books I bought (a number decided by "ok, that's enough"), 13 are available as e-books (determined by looking them up on Amazon). Every book in the sample published since 2010 was available on Kindle; only two books published before then were (2002 and 2006).
VincentYu mentioned 1DollarScan, a service for (destructively) scanning books to PDF, but transatlantic shipping costs for a thousand books, plus scanning at $3 per book make it rather expensive for me to make a serious dent in my book stacks.
That's a large presumption. Electronic documents easily die of obsolescing formats. "If it doesn't survive, it wasn't important" is not a good rule -- ask any historian.
Pictures and graphs generally work fine on newer works but I find that charts can be pretty badly optimized on older works that have been adapted cheaply. I read comics on my iPhone but the comics app is much more optimized for this than ereaders are.
Try k2pdfopt! I use it all the time with scientific papers full of formulas, and it works quite well. It essentially converts the PDF to images and slices them up, outputting another PDF, but the size increase is not too significant (file sizes stay usable even for books several hundred pages long).
Thanks! This isn't actually useful to me since I read almost nothing really hardcore on my phone but it's good to know about.
What about them?
I'm wondering if this has been studied.
Decision theory and selfish donating
Suppose an author I like says she'll write a new work if she gets enough donations. Under CDT, it's clear to me that it can't make sense for me to donate - my donation can't increase the probability of me reading the book enough to pay for the cost, and there are much more efficient ways for me to give altruistically. What do other decision theories have to say about this?
Short answer: CDT doesn't donate. EDT, TDT and UDT all donate (assuming enough others are mutually known to be like you).
TDT was literally made for this kind of situation (because it's just a Newcomblike problem). UDT differs from TDT only in areas a bit more obscure than this. EDT is also designed to handle this perfectly (i.e. to get you the book for minimal price): if you donate, the evidence suggests that enough people will donate to get you the book; if you don't, the evidence suggests they won't.
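The EDT comparison can be made concrete with a toy expected-utility calculation. All numbers below are illustrative assumptions, not from the thread:

```python
# Toy EDT calculation for the donation problem. Your own donation is
# treated as evidence about how many similar agents donate, which shifts
# the probability that the funding goal is met.
V = 20.0   # assumed value to you of the book existing
c = 5.0    # assumed cost of your donation
q_if_donate = 0.9    # assumed P(goal met | you donate)
q_if_abstain = 0.3   # assumed P(goal met | you abstain)

eu_donate = q_if_donate * V - c    # expected utility of donating
eu_abstain = q_if_abstain * V      # expected utility of abstaining

# EDT donates iff eu_donate > eu_abstain
print(eu_donate, eu_abstain)
```

With these (made-up) numbers the correlation between your choice and others' is strong enough that donating wins; weaken `q_if_donate` toward `q_if_abstain` and the CDT answer (don't donate) comes back.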
where this assumption is so restrictive that the real answer is probably "don't donate."
Thus we see that assurance contracts can be useful even for a population of EDT/TDT/UDT agents.
TDT says that if you win in the scenario where everyone donates, and you know that everyone else is using TDT (or that the distribution of decision algorithms is likely to produce enough "donate" outputs to make donating better in expectation), then you should donate. Of course, if you have reliable data on others' decision algorithms, I'm pretty sure CDT and EDT and any other decision theory I've read about will boil down to an expected utility calculation or something pretty close.
Basically, as Vaniver says, all good DTs pretty much agree on this. TDT, CDT and EDT all agree that if you have common knowledge of a sufficient number of other people using the same decision theory (or, with more complicated calculations, various possible theories including those three) are interested in the book, you should all donate. This common knowledge, however, is usually the extremely costly, high-information-value part - the part about figuring out whether to donate or not seems trivial by comparison.
I don't think this is correct. The CDT agents would all agree that they should donate and would support the implementation of a simple mutual commitment protocol. If they couldn't arrange a way to compel each other not to defect on the commons problem, they would be sad but defect themselves. Fortunately, existing online donation systems are already sufficient: you just need one of the ones that returns pledged funds if the target goal isn't met, plus a carefully calculated target goal.
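A minimal sketch of such a refund-if-goal-unmet mechanism, with a hypothetical `settle` helper (not any real platform's API): pledging is safe for a CDT agent who values the book above her pledge, because she only pays in the world where the goal is actually reached.

```python
# Sketch of an assurance contract: pledges are collected only if their
# total reaches the goal; otherwise every pledge is refunded in full.
def settle(pledges, goal):
    """Return (collected_pledges, total_refunded)."""
    total = sum(pledges.values())
    if total >= goal:
        return pledges, 0.0    # goal met: funds go to the author
    return {}, total           # goal missed: everyone gets their money back

# Goal met: both pledges are collected, nothing refunded.
collected, refunded = settle({"alice": 10, "bob": 15}, goal=20)
```

Kickstarter-style platforms implement essentially this, which is why (as noted downthread) the hard part is the common knowledge about demand, not the commitment mechanism.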
At the extremes of perfect CDT agents you'd have to fiddle with the details a little more and, for example, make it forbidden for one agent to donate twice in order to allow that any will even donate once. But we can assume either all those details are handled or the CDT agents aren't quite that ridiculous and consider the precommitment mechanism adequate. Another thing they would do is arrange a taxation system enforced by people with guns with the relevant commons problems to be solved specified by (necessarily compulsory) voting.
Of course, the other thing groups of CDT agents would do is arrange a free-market capitalist system wherein products are paid for and people who don't pay don't get the stuff. A more efficient system would also give the author easy access to a loan, based on the lender's awareness of the demand for the books. Then she would actually get most of the money from the sales of said books.
Right- where again the primary block is the mutual information required.
Apologies if sidetracking a hypothetical into the real world: kickstarter attempts to solve this problem.
Fuzzies?
As far as I can tell, any decision theory that disagrees with CDT in this case is mistaken. The author (or you) need to sweeten the deal; either the benefits need to be better, or the cost needs to be lower. Typical ways to improve the benefit are to attach status or other goods to the donation- whenever I talk about the Kickstarter projects I back, I make sure to mention that, you know, I backed them.
You missed an opportunity here. ;)
Yeah, but the conversation is about collective patronage in general, not about specific projects, and it seemed like it would detract from my point to also brag with my comment.
Via Reddit: Morality shifting in the context of intergroup violence.
In gist, if your ingroup does things that harm others, you are likely to subsequently shift your moral attitudes away from principles that tell you that harming others is wrong, and towards principles that value loyalty and obedience.
A quote from near the end:
This seems like it may be part of the cult attractor; and is also a good reason to keep your identity small; it effectively means that your ingroup doing harmful things can act as a murder pill for you.
Gah... just when I wasn't terrified of politics anymore...
A more generalized version of this would read: "if your ingroup does [x], you are likely to subsequently shift your moral attitudes away from principles that tell you that [x is bad], and towards principles that [tell you that x is good as long as it's your ingroup doing it]." The chapters from Cialdini's Influence on social proof and on identity self-modification seem relevant.
Maybe this is how "being a member of a group which slowly shifts towards evil" feels from the inside: increasingly realizing the importance of loyalty, and that fairness is not as important as it once seemed.
So when you notice yourself thinking, "well, technically this is not completely fair, but our group is good and we do many good things, so in the long term I can do more good by sticking to my group than by needlessly opposing it on a minor issue," you have evidence of your group becoming just a little bit more evil.
(To be precise, "a little bit more evil" can still be predominantly good, and can still be your best available choice. It's just good to notice this feeling, especially if it starts happening rather frequently.)
A question came up in response to EY's recent sequence posts that I'd like someone to take a shot at: EY seems to me at least to be saying that the universe is a 'fabric of causal relations' or is 'made of cause and effect' or something like that.
He's also said that probability (and so causal relations, given how he understands them) are 'subjectively objective'.
The first claim implies that causal relations are fundamental to the universe; the second implies that they're ways in which limited observers and agents deal with what is fundamental. As such, the two claims seem to be inconsistent. What's going on here?
The solution is that causal relations are a map, reality is the territory. You and I could very possibly have different causal structures in mind when we're talking about, e.g., moving billiard balls, and we can both be correct if we have different sets of information. There is only one reality, but there are many correct maps of reality, each one corresponding to a different set of previous information.
If I understand you, you're saying that causal relations are a (perhaps necessary) feature of the map but are not features of the territory. Is that correct? If so, it seems like the claim "the universe is a fabric of causal relations' is strictly speaking false, or at least it's only true if by 'the universe' we mean the map rather than the territory, which would be weird.
I made a mistake, but I think fixing the first sentence is all that I need to do. (Maybe I merely misspoke, but I'm not sure what I was thinking, even only a couple hours ago).
The first sentence should read something like: Reality is a particular causal web, but the correct model of that causal web depends on your state of information. In other words, the subjectively objective component only comes in when we try to infer something about the causal web that is reality.
I've wondered the same thing.
Some of it might be that no two agents will have the same experiences, and so they will not have the same probabilities assigned to particular propositions, even if they started with the same priors, have identical sense-receptors, and are both perfect Bayesians.
But it seems misleading to use the label "subjectively objective" for that phenomenon. And I might be totally off track, in which case I am totally confused about what "subjectively objective" is supposed to be about.
Probability is subjective in one sense and objective in another sense. It's subjective in that the correct answer to "What's the probability of A?" depends on who is asking the question. It's objective in that the answer depends on who is asking the question only through the information she has and not, e.g., who she is. Part of the reason to call it subjectively objective is to acknowledge that critics of Bayesian epistemology/probability/statistics are correct, in part, when they complain that it's subjective. The objective part answers the criticism by pointing out that probability is subjective in a very benign sense and in precisely the sense we intuitively expect it to be. E.g. "Mary didn't know Jack had pocket aces, so in her situation thinking that she was highly likely to have the winning hand was correct."
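One way to see the "information, not identity" point is a toy Bayes calculation: two different people who share the same prior and the same evidence necessarily compute the same posterior, so the probability depends on the asker only through what she knows. The numbers here are illustrative:

```python
# P(H | E) via Bayes' theorem, as a function purely of the prior and
# the likelihoods -- nothing about *who* is computing it appears here.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

mary = posterior(0.5, 0.8, 0.2)      # Mary, with her information
observer = posterior(0.5, 0.8, 0.2)  # anyone else with the same information
assert mary == observer              # same information, same probability
```

Give the observer extra information (say, knowledge of Jack's pocket aces, i.e. different likelihoods) and the two posteriors diverge, which is the benign sense of "subjective" described above.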
Edit: clarified example
Three Words: Little Mermaid Fanfiction.
Featuring Rationalist!Feminist!Determinator!Ariel, fighting against both the machinations of an Ursula with a massively increased power level (think Cthulhu's little sister) and her violent and patriarchal father, and the society that he defends.
I would like to write this, but I'm not confident that I've got the skills or knowledge to do so (specifically, I need to read a lot more on feminism; also, I've never written fanfiction before). Please PM me any ideas about anything that you think might improve the story, whether that's general writing advice, a specific scene, a character development arc, stuff about feminism I should read, or anything else.
This could be absolutely fantastic; the source material allows for a lot of maneuverability, and I think canon Ariel's personality would only require some minor tweaking (mostly with the feminism) in order to fit the mold of what I've got in mind. Watch the film and her curiosity just oozes off the screen. There is so much potential here that it is just ridiculous.
Is this going to be based on the Disney film or the Hans Christian Andersen story?
Mostly Disney, but I might borrow things from HCA.
You might want to look at the various tales and plays named Undine, too.
Incidentally, there was a TV cartoon series based on the movie, which takes place when Ariel was younger and hadn't yet developed her humanity obsession.
I don't know what to do with Ursula. It's tempting to make her into the overzealous feminist strawman, but that seems like a weak fight, ideologically, and that's not a message that I really want to send out. Ursula needs to stand in clear contrast to both Ariel and the patriarchal society which rejects her. It would also be nice if Ursula was relatable.
The best idea I've had so far is to make Ursula an extremely jaded, manipulative, and pragmatic woman who neglects what's good in relationships, but this conflicts with the Eldritch-horror awesomeness that I had planned. I've got vague ideas of how to reconcile the two, but input on this would help a lot.
Having Ursula's default state be an even more powerful version of her boss form was one of the main inspirations for this fiction. Ursula has the potential to be a really cool character, and she's shaping the way that I approach my ideas about the mermaid culture and Ariel's character. I love villains, so I would really appreciate it if people helped me to not screw this one up.
EDIT: Removed faulty link.
Your link just led to an Aladdin icon, so I assume you had something else in mind.
When I was rereading the thread, it also occurred to me that Ursula was the hard part. My take is that she's what she is for much the same reason crime bosses are what they are-- power, safety, and excitement-- with the last two having to be balanced. It might be interesting if there was a family tradition of being outlaw magic users.
However, I'm not a feminist, though I agree with a lot of feminist ideas. I think men and women are fairly similar, and that means some women are going to be very bad news. I'm inclined to think that the status differences between men and women have a lot to do with men being (for reasons that aren't clear to me) better at group violence. It's not about the upper body strength.
Ursula could be an outcast from her own society because she's mean and irresponsible. You could spin a story about her which goes either way-- the octos are actually dominant (or at least secure/isolated), and they exile their criminals who then predate various cultures the octos don't care about.
Alternatively, the mers dominate the octos, and Ursula has ambition and no place inside respectable mer society to use it.
Real world octopi are short-lived. How would that affect their approach to prisoner's dilemmas? A claim that they're unreliable because of their short lives could also be used to justify prejudice against them.
However, I'm just noodling here-- I've only seen the movie once.
This is going to be really difficult to execute. If anyone else wants this basic premise, please take it. I'd love to read someone else's take on these ideas or ideas like this.
Also, don't take this outline as a promise. I reserve the right to completely change the story's meaning and plot as I wish.
PM'd a bunch of ideas, but I dunno why you don't want them public.
These ideas are courtesy of MixedNuts, please give him the (+) karma and not me. I'll take all the (-) karma.
Are you going to have fish be sentient? Are all animals sentient, Disney-style? If you are trying to make an at all coherent world, I'd just ditch the sentient-fish part. Otherwise, I will honestly never read this, because I won't be able to get over the horror of billions of sentient deaths happening constantly -- MoR!Harry's panic about snakes, right there. That would also be a really, really weird world for humans not to have noticed. Fish are really, really stupid; hence we ate them en masse before we even started farming.
My first thought is that this will be even more work than I planned on. These are great questions.
I need to put a lot of time into this, no one should expect the story to get started for at least a few months.
I need actual women or actual feminists to talk to me; I live in a red state and don't ever see these people speaking up about patriarchy. I'm only familiar with feminism through books, and a couple discussions every now and then. What are the biggest pitfalls that I risk? Whose books should I read?
"The Omniscient Breasts" might be a somewhat useful post when writing female characters.
Tentative advice: Read books by women with female viewpoint characters. Make note of anything that seems odd, especially if you see it from more than one author.
Recommendations, anyone?
Sunshine, by Robin Mckinley.
Paladin of Souls and Cordelia's Honor (I liked this one way more, and the series it's at the start of is fantastic, though the main character of that one is male) by Lois Mcmaster Bujold
In the Garden of Iden, by Kage Baker, the start of one my other favorite scifi series.
Among Others by Jo Walton.
So if you're like me, you start reading that book and almost immediately need to read a bunch of other books, because the main character has read them, and how can I understand without reading them too? I think I can resist a lot of them, and there's already a good amount of overlap, but when she starts actually mentioning plot points from other books in ways that seem emotionally relevant, that's when I need to read them. So I can recommend the start of this book, but am now reading Triton before I can get back to it.
If I were a very cruel person, I'd recommend Greer Gilman's Moonwise -- it surpasses the formal specifications (female author, main characters are two middle-aged women and two goddesses), but it's so extremely referential that we'd probably never see you again, and honestly, it's probably not particularly relevant to chaosmosis' quest.
However, I've started a reading group about it.
My book queue is already functionally infinite so adding another infinite to it doesn't really harm me :)
I was worried about it being a huge mess, at first, but putting them out in the open will allow for more criticism and dialogue, so that was a mistake. I was a bit tired when I posted that comment. I'll post your comments here then.
Direct Effects of Low-to-Moderate CO2 Concentrations on Human Decision-Making Performance
Gotta get working on those pressurized oxygen filled buildings
The effect size of the higher level of CO2 on some of the tests is ridiculous. Reminds me of Cochran's speculation on nitrogen content.
Interesting, but has problems with helium supply, even at smaller scales.
Breathing tanks are problematic. If you carry a breathing tank, the simplest approach involves venting a lot of helium. Scrubbers, recycling, and O2 replenishment carries nontrivial risk of death without warning. Capturing the exhaled helium requires heavy, power-hungry compressors.
Buildings present nontrivial air-quality engineering problems, and need more than airlocks retrofitted on to make them airtight.
Also, it's far from obvious that 20% is the optimum O2 content, though I think it's quite well supported to say that 100% is too much.
My own belief is that before we start trying to get rid of nitrogen with helium or hydrogen, we ought to check first that increasing oxygen doesn't deliver some or all the benefits.
Well, you can't increase it too far. The fire hazard gets insane pretty quick. 30% O2 is probably OK, but does have a substantial fire risk increase; 40% probably isn't. Also, increased oxygen has long-term health impacts (I don't remember details; I could look them up if you're curious), but I don't think we know what level those start at to any precision.
I suppose fire risk isn't a huge deal if you're using portable breathing tanks, but you do still need to investigate health impacts. They're long-term enough that you could ignore them for initial study of the cognitive effects, though.
Doesn't breathing too much oxygen make you age faster?
Yeah, I'm well-aware of the dangers of oxygen fire (from learning about the Apollo program); oxygen tanks are probably how this would be implemented. Of course, I'm not sure that the benefit could possibly justify the expense of oxygen tanks but just the result would be interesting. (Perhaps one could justify some sort of oxygen alarm.)
Actually, I don't think oxygen tanks are that expensive relative to the potential gain. Assuming that the first result I found for a refillable oxygen tank system is a reasonable price, and conservatively assuming that it completely breaks down after 5 years, that's only $550 a year, which puts it within the range of "probably worthwhile for any office worker in the US" (assuming an average salary of $43k) if it confers a performance benefit greater than around 1.2% on average.
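As a sanity check on the arithmetic above, assuming a system price of about $2750 (my inference from "$550 a year over 5 years", consistent with the "<$3k" figure mentioned downthread):

```python
# Break-even calculation for an oxygen system as a workplace intervention.
system_cost = 2750.0       # assumed up-front price of the refillable system
lifetime_years = 5         # conservative: total breakdown after 5 years
salary = 43_000.0          # assumed average US office-worker salary

annual_cost = system_cost / lifetime_years    # ~$550/year, as stated
breakeven = annual_cost / salary              # ~1.28% performance benefit
print(f"annual cost ${annual_cost:.0f}, break-even benefit {breakeven:.2%}")
```

So the intervention pays for itself if it improves output by a bit over one percent, which is why even modest cognitive effects would matter here.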
These tanks supposedly hold 90% pure oxygen, and are designed to be used with a little breathing lasso thing that ends up with you breathing around 30% oxygen (depending on the flow rate of course).
Since 30-40% oxygen concentrations seem to increase word recall by 30-50%, reduce reaction time by ~30%, improve 2-back performance by ~15%, and improve mental arithmetic accuracy by ~20% for 3-digit numbers, it seems pretty likely that the overall benefit of oxygen supplementation while working could be greater than breakeven.
Oh, that is interesting. I was sort of assuming that you would have to pay for each refill and that a recharger wouldn't be just <$3k.
Also, interesting links. Connecting psychometric tasks to actual monetary value is always tricky, but those studies certainly suggest there might be meaningful benefit (but the benefit will be weaker at 30% oxygen - the links seem to all be at 40%).
One big problem there is that $3k is a lot to pay up front. But on the upside, if you can change the flow rate, I suspect it wouldn't be too hard to blind the oxygen content...
Does anyone have any good brief ways of describing LW to outsiders that have been effective? This comes up quite a bit for me with friends and family.
HPMOR for everything
Just without the magic. Which makes all the miracles so painfully slow.
"It's a guy, pretty much as intelligent as, and at least twice as effective as a dozen Ph.D's in philosophy examining and discussing how to think better. Look, just... Here, The Simple Truth. Read this. It's basically this kind of thinking applied to everything."
Not that brief, but it's gotten at least a few interested in LW.
"Computer programmers trying to do philosophy" is how I describe LW to myself, but I don't know how effective that'd be for outsiders.
I just call it a cult
At least call it a rationality cult.
I think the most effective rhetorical technique would be very sensitive to the kind of person you are describing it to. I don't know if it is good, but I once said something like "it is about how you can avoid certain kinds of errors in your thinking, so that you can make better decisions".
What do people think about Jaynes' (the other one) The Origin of Consciousness in the Breakdown of the Bicameral Mind ?
I just read it, and while I enjoyed the book, I'm rather sceptical about the book's main point -- that consciousness (in the way the book describes) only arrived ~ 1000 BCE. The evidence provided by the Jaynes Society doesn't really convince me either.
Jaynes is not a crackpot in the Von Däniken/Hancock school, but I found his evidence lacking for his extraordinary claim. What do you think?
Is there a word for a person, or an agent, that self-modifies to find something more painful, in order to change someone else's incentives, as described here? Obviously there are some choice phrases we might like to use about such a person, but most of them - eg "moral blackmail" - seem insufficiently precise. Is there a term that captures specifically this, and not other behaviour we don't like? If not, what might be a good, specific term?
Have you read Schelling? He discusses a wide variety of maneuvers that are much like this. However, I can think of no standard names for this technique.
I suppose you could call such agents voluntary human shields.
"Utility martyr"?
It seems like the sort of thing that one would accuse another of, in order to score political points by making others feel ashamed to have sympathized with the person so accused. IOW, making the accusation is a much cheaper form of manipulation than actually doing the self-modification — and can be used to undermine many claims that one person is harming another. Thus, we should expect to hear the accusation from people who would like to go on harming others and getting away with it.
I wouldn't be surprised to see examples of people saying "you don't really feel bad, you're faking it" which is a very different thing, and there's an example of people saying "we mustn't incentivize these hypothetical Muslims to self-modify in this way". But can you point me to an example of what you describe happening - of someone saying "you, the actual real person I am replying to, have self-modified to find something more painful in order to change other people's incentives"?
Subset of utility monster, I think.
I have used that term for this, but it's not very precise: the Wikipedia entry has the monster absorbing positive utility rather than threatening negative, and there's no mention of self-modification.
The self-modification isn't in itself the issue, though, is it? It seems to me that just about any sort of agent would be willing to self-modify into a utility monster if it had an expectation of that strategy being more likely to achieve its goals, and the pleasure/pain distinction simply adds a constant (negative) offset to all utilities (which is meaningless, since utility functions are generally assumed to be invariant under affine transformations).
I don't even think it's a subset of utility monster, it's just a straight up "agent deciding to become a utility monster because that furthers its goals".
One of the snarky comments on Edward Feser's blog put words to my general feeling about him:
(Naturally the locals call this commenter an atheist troll and ask him to "do the reading" -- no doubt "read the Sequences" in the local dialect -- but he retorts, "It makes no difference to you whether I do the reading or not. You complain either way.")
EDIT: If you haven't read Feser before, his standard blog-writing process is:
I can't speak for his blog writings (since I have only read a few articles), but I have read his book on Nozick and am almost done with his book on Aquinas.
I have no reason to doubt your claim, but it seems plausible that he is right in this case (if, in fact, he does so accuse atheists in this way). Why? Because I had 4 years of Bible class in high school and studied philosophy of religion at university and yet still only understood the straw man versions (most likely unintentional straw men, mind you) of the arguments made by "some medieval philosopher", or had any idea about the philosophical "underpinnings of Christianity".
It wasn't until I got interested enough in the history of science to actually bother to read primary texts (in astronomy, alchemy, and "physics") that I was able to get my mind situated in such a way that I could look around at the world from within these alien Medieval paradigms and see that some of these claims weren't just silly bullshit.
Anyway, if it takes such a roundabout sequence of obscure studies to even begin to make sense of this stuff, it is no wonder that modern atheists (or virtually all Christians, for that matter) have trouble getting it right.
Much of my undergraduate degree in philosophy was spent reading medieval (and later) texts from Christian philosophers, and I agree -- most atheists I've encountered just don't understand the philosophical underpinnings. I *think* Dawkins circa The God Delusion is one of these, but I haven't read the book, and my impressions are likely colored by my teachers and friends in undergrad, who were largely sophisticated Christians.
That being said, most Christians don't seem to understand these underpinnings either.
Did a higher percentage of educated medieval or classical Christians understand this paradigm? Or was it reserved, as now, to extremely well educated, smart, specialized theologians?
I'm not in a very good position to answer this question with an acceptable degree of accuracy. The periods I am referring to had very low literacy rates, so I don't have much access to the thoughts of uneducated, non-smart, non-specialized medieval persons.
I've noticed that reading old texts with alien mindsets is an instant idea generator for fantasy settings. (Seriously, I don't get why I've never heard anyone suggest that fantasy writers should read old texts. They're just filled with peculiar ideas about the world that one can import directly into a fantasy setting.) Would you happen to have any recommendations on texts that would be particularly suitable for this?
In undergrad, one of my friends and I came up with the idea of a medieval-philosophy based magic system for a fantasy setting -- essentially Platonic realism as magic. Wizards spent all their time collecting metaphysical materials, contemplating forms and such. Directly inspired by reading Plato, Plotinus and, I think, Augustine.
Ars Magica does something somewhat similar.
Christian theology offers rich pickings. Did you know it has a closed timelike loop? Satan was cast out from heaven because he refused God's command to bow to Man, which the angels must do (despite man being created a little lower than the angels) because God incarnated as a man, which he did to redeem Man from his fall, who fell because Satan tempted him, because Satan sought revenge upon God, because God cast him out from heaven. There is also the concept of a "type of Christ", where "type" has an old sense of "prototype". King David, Abraham, and Adam are examples. In science-fictional terms, the eruption of God into Time in the person of Jesus was such a momentous event that it sent back echoes of itself into the past, calling into being the history that called the Incarnation into being.
Parallelisms between techno-singularitarian ideas and Christian notions of salvation have often been made, usually with the implication that singularitarianism is just disguised religion and the technological arguments are mere rationalisation ("rapture of the nerds"). But suppose it's the other way round? Religion results from mankind's dim groping towards the techno-singularitarian truth, assisted by the occasional superpowerful alien or entity from outside the Matrix, inciting the major prophets and Messiah figures of history. The enlightenment that most religious traditions have sought consists of access to the real truth of things, but limited human minds are unable to comprehend it. Religion is, to use Vernor Vinge's term, "godshatter".
And the Jews are clearly an alien genetic/memetic breeding project.
My local evangelist insists that angels are lower than humans. We're children, not servants. So they're supposed to obey us (they work for the family), and we only have to obey them insofar as they convey messages from Dad. If we mess up we can be forgiven, whereas angels get booted to hell with no second chances. (The Lord is kind of a shitty employer.)
Also according to my local evangelist, Satan's fall happened in two parts. First, he was the hottest piece of ass in Heaven, which caused him to pull a Narcissus and demand worship from other angels. But the Lord can't share worship, it's part of the class restrictions. So he grew mightily pissed and cast Satan down to Earth, whose inhabitants Satan turned into demons. Second, the Lord made Adam and gave him Earth to rule over, since the current Earthlings weren't anyone he cared for. Squatting Satan wasn't happy with his new landlord, so he tempted him and got cast down to Hell for that.
Many stories can be spun from the materials, as would be expected of godshatter. "A little lower than the angels" is actually from Psalms 8:5 (and quoted in Hebrews 2:7). Interestingly, the next verse says "Thou madest him to have dominion over the works of thy hands", which together implies that the angels are not the work of God's hands. This is not the only place where a hint of polytheism breaks through.
It's ambiguous whether the translation should be "lower than the angels" or "lower than yourself". (That's what you get for not classifying angels as deities.) Oddly, Hebrews 2:7 is always translated using angels, though the text is the same in Hebrew versions, probably intentionally backtranslating from Greek. (ותחסרהו מעט מאלוהים, וכבוד והדר תעטרהו)
Even weirder, translations of Hebrews 2:7 in other languages tend to say "You have lowered him below the angels for a short time", rather than describing him as created that way permanently. But translations of Psalm 8:5 are all about "created lower than", with the same disagreement about which celestial being is meant.
I can't find any Greek translations of Psalm 8:5 so I can't tell if they match Hebrews 2:7, and anyway it'd be Modern Greek.
That is a great question. The first thing to come to mind actually isn't all that old, but from the Early Modern Period. To a large extent, the Renaissance was much more mystical than the Late Middle Ages (seriously, compare Galileo's Platonism to Swineshead's Scholasticism). The paradigm I'm referring to is usually referred to as the "natural magic tradition" and is exemplified by thinkers like Paracelsus (The Hermetic and Alchemical Writings of Paracelsus) and Heinrich Cornelius Agrippa von Nettesheim (Three Books of Occult Philosophy).
The SEP entry for Agrippa sounds downright HPMOR-esque:
Thanks!
Someone tells me, "1 + 1 = 2."
I tell them, "Ah, but if you take one cloud and another cloud, and add them together, you still get one cloud, so 1 + 1 = 1."
Neither claim is "silly bullshit", but the conclusion of the second sentence is clearly broken. I have no reason to doubt Feser is a domain expert in theology. It's what he does with his expertise that bothers me.
That's exactly the point. Christianity is already a sociological fact that bears almost no resemblance to whatever kind of Christianity it is that would "get it right."
I'm comfortable calling that claim silly bullshit. In fact, I can't think of a better word for it. It is exactly the kind of thing the phrase "silly bullshit" is there to describe.
Yeah, I think I see what you mean. Feser seems to want to take apart arguments put forward by the atheist in the street in a no-holds-barred style, but then berates atheists that do the same to the Christian in the street, rather than only grappling with the arguments advanced by the masters of theology.
I was leafing through a copy of Marc Hauser's Moral Minds off a friend's bookshelf at the weekend, and it made me realise why I'd gone off reading books lately: the original content is too hard to find amongst the material I'm already familiar with.
I don't want to read another introduction to Chomsky's theory of universal grammar. I don't need another primer on ev-psych. I'm not interested in having the Trolley Problem explained to me again. What I would like is a concise breakdown of the core arguments, linking to other sources to explain things I might not already be familiar with.
This would end up looking a little like a Wikipedia article, or more to the point, a Less Wrong post. We have our fair share of book reviews, but they tend to select for books in which there's value in reading the whole thing, rather than those which have some novel content amongst mostly familiar territory (what I took away from the recent chapter-by-chapter review of Causality was that I should totally read the book).
Is anyone else in this boat? Could it be worth organising some sort of book review/summarisation group?
That's the benefit of online linkable texts as opposed to books.
On the net, if you want to mention a Sequence post or a Wikipedia article, you just link to it and the reader either knows or can quickly check whether they've read it before.
In a book, if you just name-drop something like "evo-psych", the reader might have a very different, limited, or wrong conception of the subject. If you refer to another book or article that explains the subject, the reader isn't likely to have read it unless it's a very famous textbook or popular exposition (like The Selfish Gene), because there are many equally good books on any subject. So for the reader to make sure they're on the same page as the author, the book must include a long explanation of the subject referred to - even if it's not the actual topic and the author would rather leave it out.
I have the exact same problem. You forgot about Phineas Gage and getting a pole stuck through your head.
I think one way of solving this would be to use something like Workflowy to make the entire book a zoomable/compressible bullet list. That way, if the book had a section heading like "explanation of Chomsky's theory of universal grammar", you could literally just skip that entire branch of the book (and if any part of it were referenced later, you could jump back to it, because it's digital).
Also, a lot of LWers (myself included) are looking to build better argument mapping software for a wikipedia of arguments type resource (though that's a bit simplified).
EDIT:
Also, you could compile all the different one-sentence, one-page, five-page, and one-chapter explanations from several different authors for any particular bullet point.
The only reason that I still read most books is that it is very low cost for me.
I think I get some benefit from reading through things I already know, though. It's going to help me remember it and the explanations are going to be somewhat different and so I'm going to get a better understanding of it overall.
I would join the group. We could do it through goodreads or a similar, better designed, site if you know of one.
Aye. I'd be keen to join some sort of book club for smart people, where you could see others' bookshelves a la LibraryThing, but on top of that also have very short reviews letting you know what to expect from each book.
Most books tend to fall into two broad categories: things you already mostly know, and things you care little about. The rare high-value book is one that has just enough connection to what you already know, and makes you care about a whole new domain. (An exceptional book, like GEB, will make you care about many new domains at once.)
One recently read book that was very high value because it covered ground that was totally new to me: Abbott's System of Professions. Typically books in the sociology of professions had focused on the "trappings", professional societies, regulation and so on. Abbott pointed out that professions were the emergent result of a complex system of jurisdictional disputes, and the only way you can understand a profession is by looking at the others that compete with it for dominion over its topics. Abbott's analysis is so wide-ranging that it connects in several places with topics I care about; for instance when he analyzes "the construction of the 'personal problems' jurisdiction", a tug-of-war between the clergy, the (early) "neurologists", and psychiatry; or when he sketches the early history of the information professions - I hadn't realized that librarians were among the first such.
I'm planning on doing a presentation on cognitive biases and/or behavioral economics (Kahneman et al) in front of a group of university students (20-30 people). I want to start with a short experiment / demonstration (or two) that will demonstrate to the students that they are, in fact, subject to some bias or failure in decision making. I'm looking for suggestion on what experiment I can perform within 30 minutes (can be longer if it's an interesting and engaging task, e.g. a game), the important thing is that the thing being demonstrated has to be relevant to most people's everyday lives. Any ideas?
I also want to mention that I can get assistants for the experiment if needed.
Edit: Has anyone at CFAR or at rationality minicamps done something similar? Who can I contact to inquire about this?
I've done this in a few small groups, using:
For something very brief, anchoring bias is easy to demonstrate and fairly dramatic. I tried this on a friend a couple weeks ago, anchoring her on 1 million people as the population of Ghana; she guessed 900,000. Turned out to be 25 million.
Get people to give 90% confidence intervals on 10 questions, and then at the end ask
"Ok, so who got all 10 within their intervals. 9? That's what you should have got... ok, 8? Still no-one? Ok, how about 7?"
90% might not be the best number for demonstrating the idea of a confidence interval. It's too close to 100%. There's not much room to be underconfident. What about 50% confidence intervals?
Have you tried it? I have, and I can tell you most people I tried it on are over-confident when asked for 90% intervals.
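The arithmetic behind this demonstration is easy to check. A quick sketch of the binomial tail probabilities (the helper name is my own, not from any particular source): if people's intervals really had 90% coverage, getting 9 or 10 out of 10 right would be the typical outcome; if their actual coverage is closer to 50%, as overconfidence research suggests, 9-plus is a roughly one-in-a-hundred event.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With genuine 90% intervals, most people should land 9 or 10 in range:
print(binom_tail(10, 9, 0.9))   # ~0.736
# With 50% actual coverage (typical overconfidence), 9+ out of 10 is rare:
print(binom_tail(10, 9, 0.5))   # ~0.011
```

So an audience where nobody reaches even 7/10 is strong evidence of overconfidence, not bad luck.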
The Wason selection task is a good go-to example of confirmation bias.
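For reference, the classic version of the task can be sketched in a few lines (the card faces and function name here are my own illustration): four cards show A, K, 4, 7, the rule is "if a card has a vowel on one side, it has an even number on the other", and you must flip exactly the cards whose visible face could falsify the rule. Most people wrongly pick A and 4; the correct answer is A and 7.

```python
# Rule under test: "if vowel on one side, then even number on the other".
# A card must be flipped iff its visible face could falsify the rule.
def must_flip(face):
    if face.isalpha():
        return face.lower() in "aeiou"   # a vowel could hide an odd number
    return int(face) % 2 == 1            # an odd number could hide a vowel

cards = ["A", "K", "4", "7"]
print([c for c in cards if must_flip(c)])   # → ['A', '7']
```

Flipping the 4 is the confirmation-seeking move: whatever is on its other side, the rule survives.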
Well, the thing is that people actually get this right in real life (e.g. with the rule "to drink you must be over 18"). I need something that occurs in real life and that people fail at.
No, people are more likely to get it right in real life. Some fraction of your audience will get it wrong, even with ages and drinks.
To a first approximation, people get it right in real life.
They get it correct when it's in an appropriate social context, not simply because it's happening in real life. If it didn't happen in real life, confirmation bias wouldn't be a real thing.
Right, but I want to use a closer to real life situation or example that reduces to the wason selection task (and people fail at it) and use that as the demonstration, so that people can see themselves fail in a real life situation, rather than in a logical puzzle. People already realize they might not be very good at generalized logic/math, I'm trying to demonstrate that the general logic applies to real life as well.
Confirmation bias: the triplet number test where the rule is "any triplet where the second number is greater than the first and the third greater than the second". Original credit (edit: for my exposure) to Eliezer in HPMoR, but I thought of it because that was what Yvain did at a meetup I was at.
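The hidden rule is trivial to state in code; a minimal sketch (function name is my own):

```python
# The hidden rule in the 2-4-6 task: any strictly ascending triplet.
def fits_rule(a, b, c):
    return a < b < c

# Subjects tend to propose only triplets that would confirm their hypothesis
# (e.g. "even numbers increasing by 2") rather than ones that could falsify it:
print(fits_rule(2, 4, 6))    # → True
print(fits_rule(1, 2, 3))    # → True (also fits, though it breaks "+2 evens")
print(fits_rule(6, 4, 2))    # → False
```

The demonstration works because subjects almost never test a triplet like (1, 2, 3) or (6, 4, 2), which is exactly what would distinguish their narrow hypothesis from the real rule.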
To be clear, since reading this made me double-take, I think by "original credit" you mean "original credit for your personal exposure to the concept".
In a more general effort to improve my health, or at least slow its deterioration, I am thinking about constructing a hybrid standing desk. Now, I do not have enough money to afford an actual convertible standing desk, and I would very much like the convertible part. So I am thinking about a wall mount for my monitor, or maybe even better some similar kind of adjustable mount that allows the necessary range of height to switch between sitting and standing. The problem then is still the keyboard. I already have a wireless keyboard, so switching it would not be a problem, but on what would I put it?
Any ideas and opinions?
Another possibility is to have two (smaller) desks side by side, one at sitting height and one at standing height. Use a wireless keyboard, or one with a long enough cable, that you can easily move it between tables. Mount the screen on an arm that is mounted between the two desks, swivels left-right and is long enough to reach the center point of each desk.
Depending on your monitor mount, you could attach a keyboard tray to it. Some higher-end monitor arm manufacturers will sell you a compatible tray, or you could take an existing tray and hard-mount it yourself.