If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
An interesting comment by Gregory Cochran arguing that torture is not useless, as is often claimed.
...Torture can be used to effectively extract information. I can give you lots of examples from WWII. People often say that it’s ineffective, but they’re lying or deluded. Mind you, you have to use it carefully, but that’s true of satellite photography. Note: my saying that it works does not mean that I approve of it.
... At the Battle of Midway, two American fliers, whose planes had been shot down near the Japanese carriers, were pulled out of the water and threatened with death unless they revealed the position of the American carriers. They did so, and were then promptly executed. Later, at Guadalcanal, the Japanese captured an American soldier who told them about a planned offensive – with that knowledge the Japanese withdrew from the area about to be attacked. I don’t know why he talked [the guy didn't survive] – maybe a Japanese interrogator spent a long time building a bond of trust with that Marine. But probably not. For one thing, time was short. I see people saying that building such a bond is in the long run more effective, but of course in war, time is often short.
You could consider the va
We seem to like "protecting" ought by making false claims about what is.
Possibly related to the halo or overjustification effects; arguments-as-soldiers seems especially applicable: admitting that torture may actually work is stabbing one's other anti-torture arguments in the back.
I read somewhere that lying takes more cognitive effort than telling the truth. So it might follow that if someone is already under a lot of stress -- being tortured -- then they are more likely to tell the truth.
I decided to publish http://www.gwern.net/LSD%20microdosing ; summary:
Some early experimental studies with LSD suggested that doses of LSD too small to cause any noticeable effects may improve mood and creativity. Prompted by recent discussion of this claim and the purely anecdotal subsequent evidence for it, I decided to run a well-powered randomized blind trial of 3-day LSD microdoses from September 2012 to March 2013. No beneficial effects reached statistical significance and there were worrisome negative trends. LSD microdosing did not help me.
Discussion elsewhere:
AI Box Experiment Update
I recently played and won an additional game of AI Box with DEA7TH. Obviously, I played as the AI. This game was conducted over Skype.
I'm posting this in the open thread because, unlike my last few AI Box Experiments, I won’t be providing a proper writeup (and I didn't think that just posting "I won!" was enough to justify starting a new thread). I've been told (and convinced) by many that I was far too leaky with strategy and seriously compromised the future winning chances of both myself and future AIs. The fact that one of my gatekeepers guessed my tactic(s) was the final straw. I think that I’ve already provided enough hints for aspiring AIs to win, so I’ll stop giving out information.
Sorry, folks.
This puts my current AI Box Experiment record at 2 wins and 3 losses.
Other people have expressed similar sentiments, and then played the AI Box experiment. Even the ones who didn't lose still updated to "I definitely could have lost in a similar scenario."
Unless you have reason to believe your skepticism comes from a different place than theirs, you should update towards gatekeeping being harder than you think.
The Anti-Reactionary FAQ by Yvain. Konkvistador notes in the comments he'll have to think about a refutation, in due course.
I continue blogging on the topic of educational games: Teaching Bayesian networks by means of social scheming, or, why edugames don’t have to suck
As a part of my Master’s thesis in Computer Science, I am designing a game which seeks to teach its players a subfield of math known as Bayesian networks, hopefully in a fun and enjoyable way. This post explains some of the basic design and educational philosophy behind the game, and will hopefully also convince you that educational games don’t have to suck.
I will start by discussing a simple-but-rather-abstract math problem and look at some ways by which people have tried to make math problems more interesting. Then I will consider some of the reasons why the most-commonly used ways of making them interesting are failures, look at the things that make the problems in entertainment games interesting and the problems in most edutainment games uninteresting, and finally talk about how to actually make a good educational game. I’ll also talk a bit about how I’ll try to make the math concerning Bayesian networks relevant and interesting in my game, while a later post will elaborate more on the design of the game.
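For readers who haven't encountered the term: a Bayesian network is just a directed graph of conditional probability tables, and inference on it reduces to chained applications of Bayes' theorem. As a toy illustration (mine, not from the post, with made-up numbers), here is the smallest possible network in plain Python:

```python
# Toy two-node Bayesian network: Rain -> WetGrass.
# All probabilities are made up for illustration.

p_rain = 0.2              # P(Rain)
p_wet_given_rain = 0.9    # P(WetGrass | Rain)
p_wet_given_dry = 0.1     # P(WetGrass | not Rain)

# Marginalize to get P(WetGrass)
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)

# Invert with Bayes' theorem to get P(Rain | WetGrass)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet

print(f"P(WetGrass)        = {p_wet:.3f}")            # 0.260
print(f"P(Rain | WetGrass) = {p_rain_given_wet:.3f}") # 0.692
```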
I was thinking recently that if Soylent kicks something off and 'food replacement'-type things become a big deal, it could have a massive side effect of putting a lot of people onto diets with heavily reduced animal and animal-product content. Its possible success could inadvertently be a huge boon for animals and animal activists.
Personally, I'm somewhat sympathetic towards veganism for ethical reasons, but the combination of trivial inconvenience and the lack of effect I can have as an individual has prevented me from pursuing such a diet. Soylent would allow me to do so easily, should I want to. Similarly, there are people who have no interest in animal welfare at all. If 'food replacements' become big, it could mean the incidental conversion of people who might otherwise never have considered veganism or vegetarianism to a lifestyle that fits within those bounds, purely for reasons of personal cost or convenience.
I know someone who has a young child who is very likely to die in the near future. This person has (most likely) never heard of cryonics. My model of this person is very unlikely to decide to preserve their child even if they knew about it.
I don't know if I should say something. At first I was thinking that I should because the social ramifications are negligible. After thinking about it for a while, I changed my mind and decided that possibly I was just trying to absolve myself of guilt at the cost of offending a grieving parent. I am not sure if this is just rationalization.
Advice?
What expert advice is worth buying? Please be fairly specific and include some conditions on when someone should consider getting such advice and focus on individuals and families versus, say, corporations.
I ask because I recently brainstormed ways that I could be spending my money to make my life better and this was one thing that I came up with and realized I essentially never bought except for visiting my doctor and dentist. Yet there are tons of other experts out there willing to give me advice for a fee: financial advisers, personal trainers, nutritionists, lawyers, auto-mechanics, home inspectors, and many more.
How many people here use Anki, or other Spaced Repetition Software (SRS)?
[pollid:565]
I'm finding it pretty useful and wondering why I didn't use it more intensively before. Some stuff I've been adding into Anki:
I have much more stuff I'd like to Ankify (my notes on Machine Learning, databases, on the psychology of learning; various inspirational quotes, design patterns and high-level software architecture concepts ...).
Some ways I got better at using Anki:
People who want to eat fewer animal products usually have a set of foods that are always okay (which sometimes still includes some animal products, such as dairy or fish) and a set of foods that are always off-limits, rather than trying to eat animal products less often without completely prohibiting anything. I've heard that this is because people who just try to eat fewer animal products, without prohibiting anything, usually end up with about the same diet they had when they were not trying.
I wonder whether trying to eat more of something that tends to fill the same role as animal products would be an effective way to eat fewer animal products.
I currently have a fridge full of soaking dried beans that I have to use up, and the only way I know how to serve beans is the same as the way I usually eat fish, so I predict I'll be eating much less fish this week than I usually do (because if I get tired of rice and beans, rice and fish won't be much of a change). I'm not sure whether my result would generalize to people who use more than five different dinner recipes, though. I should also add that my main goal is learning how to make cheap food taste good by getting more practice cooking beans - eating fewer animal products would just be a side effect.
Now that I write this, I'm wishing I'd thought to record what food I ate before filling my fridge with beans. (I did write down what I could remember.)
I would like recommendations for a small, low-intensity course of study to improve my understanding of pure mathematics. I'm looking for something fairly easygoing, with low time-commitment, that can fit into my existing fairly heavy study schedule. My primary areas of interest are proofs, set theory and analysis, but I don't want to solve the whole problem right now. I want a small, marginal push in the right direction.
My existing maths background is around undergrad-level, but heavily slanted towards applied methods (calculus, linear algebra), statist...
...I sometimes use the term ‘accessible’ in the Microsoft sense.
The mouthful version of ‘accessible’ is something like this: it abstractly describes the character of a human interactive or processed experience when that experience is tailored not to exceed the limitations of the particular human being to whom it is being presented.
So, if you are blind or paralyzed, your disability prevents you from using a computer terminal in the normal way without some assistive technology. If you are confined to a wheelchair, you cannot easily enter a bu
Is disgust "conservative"? Not in a Liberal society (or likely anywhere else) by Dan Kahan
His argument against Haidt's moral-foundations-theory claim that liberals and conservatives differ psychologically is similar to the ones Vladimir_M and Bryan Caplan made, but he upgrades it with a plausible explanation for why it might seem otherwise. The references are well worth checking out.
I recently found out a surprising fact from this paper by Scott Aaronson: P=NP does not (given current results) imply that P=BQP. That is, even if P=NP there may still be substantial speedups from quantum computing. This was surprising to me, since most computational classes we normally think about that are a little larger than P end up equaling P if P=NP, due to the collapse of the polynomial hierarchy. Since it is not known whether BQP lives inside the polynomial hierarchy, we can't make that sort of argument for it.
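My rough summary of the standard collapse argument (not a quotation from the paper), in symbols:

```latex
% If P = NP, the polynomial hierarchy collapses to P, so any class known to
% sit between P and PH collapses with it. BQP is not known to lie inside PH,
% which is why it escapes this argument.
\begin{align*}
  \mathrm{P} = \mathrm{NP} \;&\Longrightarrow\; \mathrm{PH} = \mathrm{P} \\
  \mathrm{P} \subseteq \mathcal{C} \subseteq \mathrm{PH} \;&\Longrightarrow\; \mathcal{C} = \mathrm{P}
    \quad\text{(assuming } \mathrm{P} = \mathrm{NP}\text{)} \\
  \mathrm{BQP} \stackrel{?}{\subseteq} \mathrm{PH} \;&\text{ is open, so no such conclusion follows for } \mathrm{BQP}.
\end{align*}
```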
Apparently recent work shows that direct giving of grants in developing countries has high rates of return. This more or less confirms what Givewell has said before about microfinance.
LW tells people to upvote good comments and downvote bad comments. Where do I set the threshold of good/bad? Is it best for the community if I upvote only exceptionally good comments, or downvote only very bad comments, or downvote all comments that aren't exceptionally good, or something else? Has this been studied? Is it possible to make a karma system where this question doesn't arise?
Why are AMD and Intel so closely matched in terms of processor power?
If you separated two groups and incentivized them to develop the best processors and came back in 20 years, I wouldn't expect both groups to have performed comparably, particularly if the one doing better is given more access to resources. I can think of a number of potential explanations, though none of them are entirely satisfactory to me. Some possibilities:
I find it very hard in general to predict what kind of reception my posts will get, judging by the karma points of each.
As a policy I try not to post strategically (that is, rationality quotes, pandering to the Big Karmers, etc.), but only those things I find relevant or interesting for this site; even so, I have found no way to reliably gauge the outcome.
It is particularly bewildering to me that comments that (I hope) are insightful get downvoted to the edge of oblivion or simply ignored, while offhand comments or requests for clarification are the most upvoted.
Has anyone constructed a model of how the consensus works here on LW? Just curious...
I don't know about other people, but when I upvote a simple question I'm saying "yeah, I was wondering this too."
I am interested in reading further on objective vs subjective Bayesianism, and possibly other models of probability. I am particularly interested in something similar to option 4 in What Are Probabilities, Anyway. Any recommendations on what I should read?
I recently memorized an 8-word passphrase generated by Diceware.
Given recent advances in password cracking, it may be a good time to start updating your accounts around the net with strong, randomly generated passphrases.
Added: 8-word passphrases are overkill for most applications. 4-word passphrases are fairly secure under most circumstances, and the circumstances in which they are not may not be helped by longer passphrases. The important thing is avoiding password reuse and predictable generation mechanisms.
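For the curious, a minimal sketch of Diceware-style generation in Python (mine, not from the comment). It assumes a local wordlist file `wordlist.txt`, one entry per line, in either bare-word or `dice-number word` format; the filename is just a placeholder.

```python
import secrets  # standard-library cryptographically secure randomness

# Load a Diceware/EFF-style wordlist (the filename is a placeholder).
# Lines may be either "word" or "11111 word"; we keep the last field.
with open("wordlist.txt") as f:
    words = [line.split()[-1] for line in f if line.strip()]

def passphrase(n_words=4):
    """Pick n_words uniformly at random and join them with spaces."""
    return " ".join(secrets.choice(words) for _ in range(n_words))

print(passphrase(4))

# Rough entropy estimate: log2(len(words)) bits per word. A standard
# 7776-word list gives ~12.9 bits/word, so 4 words ~ 52 bits and
# 8 words ~ 103 bits.
```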
I find myself over-sensitive to negative feedback and under-responsive to positive feedback.* Does anyone have any advice/experience on training myself to overcome that?
*This seems to be a general issue in people with depression/anxiety; I think it's something to do with how dopamine and serotonin mediate the reward system, but I'm not an expert on the subject. Curiously, sociopaths have the opposite issue, under-responding to negative feedback.
I'd like to highly recommend Computational Complexity by Christos H. Papadimitriou. It is slightly dated in a fast-changing field, but the explanations are really high quality. It takes a bit more of a logic-oriented approach than Hopcroft and Ullman in Introduction to Automata Theory, Languages, and Computation. I think this topic is extremely relevant to decision theory for bounded agents.
Those who have been reading LessWrong in the last couple of weeks will have little difficulty recognizing the poster of the following. I'm posting this here, shorn of identities and content, as there is a broader point to make about Dark Arts.
These are, at the time of writing, his two most recent comments. I will focus on the evidential markers, and have omitted everything else. I had to skip entirely over only a single sentence of the original, and that sentence was the hypothetical answer to a rhetorical question.
...That's very interesting. At what point
Here is a problem that I regularly face:
I have a hard time terminating certain subroutines in my brain. This most regularly happens when I am thinking about a strategy game or math that I am really interested in. I will continue thinking about whatever it is that is distracting me even when I try not to.
The most visible consequence of this is that it sometimes interferes with my sleep. I usually get to bed at a regular time, but if I get distracted it could take hours for me to get to sleep, even if I cut myself off from outside stimulus. It can also be a ...
Has anyone had any experience with http://sundayassembly.com ?
I'd love to hear some first-hand accounts. It sounds like all the things I enjoyed about going to church when I was a Christian, without the Christianity part.
Overview of systemic errors in science-- wishful thinking, lack of replication, inept use of statistics, sloppy peer review. Probably not much new to most readers here, but it's nice to have it all in one place. The article doesn't address fraud very much because it may have a small effect compared to unintentionally getting things wrong.
Account of a retraction by an experiment's author: doing the decent thing when Murphy attacks. Most painful sentence: "First, we found that one of the bacterial strains we had relied on for key experiments was mislabel...
Stock market investment would seem like a good way to test predictive skills. Have there been any attempts to apply LW-style rationality techniques to it?
A while back I posted a comment on the open thread about the feasibility of permanent weight loss. (Basically: is it a realistic goal?) I didn't get a response, so I'm linking it here to try again. Please respond here instead of there. Note: most likely some of my links to studies in that comment are no longer valid, but at least the citations are there if you want to look them up.
I believe there is a named cognitive bias for this concept, but a preliminary search hasn't turned anything up: the tendency to use measures or proxies that are easily available rather than the ones that most accurately measure the outcome one cares about.
Anyone know what it might be called?
Calling all history buffs:
I have this fragment of a memory of reading about some arcane set of laws or customs to do with property and land inheritance. It prevented landowners from selling their land, or splitting it up, for some reason. This had the effect of inhibiting agricultural development sometime in the feudal era or perhaps slightly after. Anyone know what I'm talking about?
(I'm aware of the opposite problem, that of estates being split up among all children (instead of primogeniture) which caused agricultural balkanization and prevented economies of scale.)
This sounds like the system that France had before the first French Revolution. That is, up until 1789; I'm not sure when it started. I wouldn't be surprised if a similar system existed in other European countries at around the same time, but I'm not sure which. (I've only been reading history for a couple years, and most of it has been research for fiction I wanted to write, so my knowledge is pretty specifically focused.)
Under this system, the way property is inherited depends on the type of property. Noble propres is dealt with in the way you describe - it can't be sold or given away, and when the owner dies, it has to be given to heirs, and it can't be split among them very much. My notes say the amount that goes to the main heir is the piece of land that includes the main family residence plus 1/2 - 4/5 of everything else, which I think means there's a legal minimum within that range that varies by province, but I'm not completely sure. Propres* includes lands and rights over land (land ownership is kind of weird at this time - you can own the tithe on a piece of land but not the land itself, for example) that one has inherited. Noble propres is propres that belongs to a noblepers...
As I want to fix my sleep (cycle), I am looking for a proper full-spectrum bulb to screw into my desk lamp. But when I shop for "full spectrum" lights, it turns out that they only have three peaks and do not come anywhere near a black-body spectrum. Is there something like what I'm looking for that costs less than a small fortune, for a student? E27 socket, available in the EU.
I can ask more generally: what is the lighting situation at your desk and at your home? I aim for lighting very low in blue in the evening and as close to full daylight as possible during work. For th...
I have a question about Effective Altruism:
The essence of EA is that people are equal, regardless of location. In other words, you'd rather give money to poor people in far-away countries than to people in your own country if it's more effective, even though the latter feel intuitively closer to you. People care more about their own country's citizens even though they may not even know them. Often your own country's citizens are more similar to you, culturally and in other ways, than people in far-away countries, and you might feel a certain bond with your ...
Maybe it is a problem of purchasing fuzzies and utilons together, and also being hypocritical about it.
Essentially, I could do things that help other people and me, or I could do things that only help other people but I don't get anything (except for a good feeling) from it. The latter set contains much more options, and also more diverse options, so it is pretty likely that the efficient solution for maximizing global utility is there.
I am not saying this to argue that one should choose the latter. Rather my point is that people sometimes choose the former and pretend they chose the latter, to maximize signalling of their altruism.
"I donate money to ill people, and this is completely selfless because I am healthy and expect to remain healthy." So, why don't you donate to ill people in poor countries instead of your neighborhood? Those people could buy greater increase in health for the same cost. "Because I care about my neighbors more. They are... uhm... my tribe." So you also support your tribe. That's not completely selfless. "That's a very extreme judgement. Supporting people in my tribe is still more altruistic than many other people do, so what's your...
So, what's all this about a Positivist debacle I keep hearing about? Who were the positivists, what did we have in common with them, what was different, and how and why did they fail?
...Positivism states that the only authentic knowledge is that which allows verification and assumes that the only valid knowledge is scientific.[2] Enlightenment thinkers such as Henri de Saint-Simon, Pierre-Simon Laplace and Auguste Comte believed the scientific method, the circular dependence of theory and observation, must replace metaphysics in the history of thought. Sociologica
How can I learn to sleep in a noisy environment?
For several years now I've lived in loud apartments, where I can often hear conversations or music late into the night.
I often solve this problem by wearing earplugs. However, I don't want to sleep with earplugs every night, and so I've made a number of attempts to adjust to the noise without earplugs, either going "cold-turkey" for as long as I can stand, or by progressively increasing my exposure to night-time noise.
Despite several years of attempts, I don't think I've habituated at all. What giv...
Let's assume society decides that eating meat from animals lacking self-awareness is ethical, that eating anything with self-awareness is not, and that we have a reliable test to tell the difference. Is it ethical to deliberately breed tasty animals to lack self-awareness, either before or after their species has developed self-awareness?
My initial reaction to the latter is 'no, it's not ethical, because you would necessarily be using force on self-aware entities as part of the breeding process'. The first part of the question seems to lean towards 'yes', but t...
I had a random-ish thought about programming languages, which I'd like comments on: It seems to me that every successful programming language has a data structure that it specialises in and does better than other languages. Exaggerating somewhat, every language "is" a data structure. My suggestions:
Now this list is missing some languages, for lack of my familiarity with them, and also some structures. For example, is there a language which "is" strings? And on this model, what is Java?
I've got a few questions about Newcomb's Paradox. I don't know if this has already been discussed somewhere on LW or beyond (granted, I haven't looked as intensely as I probably should have) but here goes:
If I were approached by Omega and he offered me this deal and then flew away, I would be skeptical of his ability to predict my actions. Is the reason that these other five people two-boxed and got $1,000 due to Omega accurately predicting their actions? Or is there some other explanation… like Omega not being a supersmart being and he never puts $1 million in the second box? If I had some evidence that people actually have one-boxed and gotten the $1 million then I would put more weight on the idea that he actually has $1 million to spare, and more weight on the possibility that Omega is a good/perfect predictor.
If I attempt some sort of Bayesian update on this information (the five previous people two-boxed and got $1,000), these two explanations seem to explain the fact equally well. The probability of Omega putting the $1,000 in the previous five people's boxes given that he's a perfect predictor seems to be observationally equivalent to the probability that Omega never puts $1 million in the second box.
Then again, if Omega actually knew my reasoning process, he might actually provide me with the evidence that would make me choose to one-box over two-box.
It also seems to me that if my subjective confidence in Omega's ability to predict is over 51%, then it makes more sense to one-box than two-box... if my math/intuition about this is correct. Let's say my confidence in Omega's ability to predict is at 50%. If I two-box, there are two possible outcomes: I either get only $1,000 or I get $1,001,000. Both outcomes have a 50% chance of happening due to my subjective prior, so my expected payoff is 50% × $1,000 + 50% × $1,001,000. This sums to a total utility/cash of $501,000.
If I one-box, there are also two possible outcomes: I either get $1,000,000 or I lose $1,000. Both outcomes, again, have a 50% chance of happening due to my subjective probability about Omega’s powers of prediction, so my expected payoff is 50% × $1,000,000 + 50% × -$1,000. This sums to $499,500 in total utility.
Does that seem correct, or is my math/utility off somewhere?
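A quick numerical check of the calculation above (my own sketch, using the same framing as the comment, where one-boxing "loses" $1,000 if Omega guessed wrong, rather than the more usual payoff of $0):

```python
def expected_values(p):
    """Expected payoffs given probability p that Omega predicts correctly."""
    one_box = p * 1_000_000 + (1 - p) * (-1_000)
    two_box = p * 1_000 + (1 - p) * 1_001_000
    return one_box, two_box

print(expected_values(0.5))  # (499500.0, 501000.0): at 50%, two-boxing wins

# Break-even point: set one_box(p) == two_box(p) and solve for p:
#   1_001_000*p - 1_000 = 1_001_000 - 1_000_000*p  =>  p = 1_002_000 / 2_001_000
p_star = 1_002_000 / 2_001_000
print(p_star)  # ~0.5007, so one-boxing pulls ahead just above 50% confidence
```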
Lastly, has something like Newcomb's Paradox been attempted in real life? Say with five actors and one unsuspecting mark?