How can we spread rationality and good decision making to people who aren't inclined to it?
I recently chatted with a friendly doorman whom I normally exchange brief pleasantries with. He told me that he was from a particularly rough part of town and that he works three jobs to support his family. He also told me not to worry because he has a new business that makes a lot of money, although he had to borrow gas money to get to work that day. He said that he was part of a "travel club". I immediately felt bad because I had a gut feeling he was talking about some multi-level marketing scheme. I asked him if it was, and he confirmed it, but disagreed that it was a "scheme". He told me that he is trying to recruit his family, and that the business model encourages recruiting among family and friends. Skipping past the 45 minutes of him selling it to me: I left him with a warning to be cautious, because these things can be fly-by-night operations, most people need to lose for a very few to win, and he can win only if he is among the very best promoters/sellers. I said that on purpose to gauge whether he is a true believer or a wolf in sheep's clothing, but hi...
Would there be interest in me writing a post, or a series of posts, summarizing Richard Feldman's Epistemology textbook? Feldman's textbook is widely used in philosophy classes, and contains some surprisingly reasonable views (given what you may have heard about mainstream philosophy).
I'm partly considering it because it might be a useful way to counteract some common myths about what all philosophers supposedly know about evidence, the problem of induction, and so on. But I seem to have given away my copy, and a replacement would be $40 for a volume that's under 200 pages. So I want to gauge interest first.
Question about EA and CFAR. I think I've heard some people express sentiments that CFAR might be a good place for EAs to donate, due to the whole "raising the sanity waterline" thing.
On its face, this seems silly to me. From the outside view, CFAR just looks like a small self-help organization, though probably better than most such organizations, and it seems unlikely that it'll affect any significant portion of the population.
I think CFAR is great; I went to minicamp, and I think it probably improved my life, although I suspect I'm not as enthusiastic about it as most people who went. But if I were to give CFAR any money, it would be because it helps me and people I know, not because I think it's actually likely to have a large impact on the world.
Are there people around here who believe CFAR is actually likely to have a large impact on the world? If so, could you explain your reasoning?
Longitudinal study of men and happiness
...“At a time when many people around the world are living into their tenth decade, the longest longitudinal study of human development ever undertaken offers some welcome news for the new old age: our lives continue to evolve in our later years, and often become more fulfilling than before. Begun in 1938, the Grant Study of Adult Development charted the physical and emotional health of over 200 men, starting with their undergraduate days. The now-classic ‘Adaptation to Life’ reported on the men’s lives up to age 55 and helped us understand adult maturation. Now George Vaillant follows the men into their nineties, documenting for the first time what it is like to flourish far beyond conventional retirement. Reporting on all aspects of male life, including relationships, politics and religion, coping strategies, and alcohol use (its abuse being by far the greatest disruptor of health and happiness for the study’s subjects), ‘Triumphs of Experience’ shares a number of surprising findings. For example, the people who do well in old age did not necessarily do so well in midlife, and vice versa. While the study confirms that recovery from a lo
Last night I found myself thinking, "Well, suppose there's no Singularity coming any time soon. The FAI project will still have gotten a bunch of nerds working together on a project aimed at the benefit of all humanity — including formalizing a lot of ethics — who might otherwise have been working on weapons, wireheading, or something else awful. That's gotta be a good thing, right?"
Then I realized this sounds like rationalization.
Which got me to thinking about what my concerns are about this stuff.
My biggest AI risk worries right now are more immediate than paperclip optimizers. They're wealth optimizers, profit optimizers; probably extrapolations of current HFT systems. The goal of such a system isn't even to make its owners happy — just to make them rich — and it certainly doesn't care about anyone else. It may not even have beliefs about humans, just about flows of capital and information.
Even assuming that such systems believe that crashing the economy would be bad for their owners, I expect that for the vast majority of living and potential humans, world dominance by such systems would constitute a Bad Ending.
It does not seem to me that bringing about such a Bad Ending would require self-modifying emergent AI, or exotic technologies such as computronium; just the continuation of current trends.
Some physicists have managed to put an 810-atom molecule, weighing over 10,000 amu, into superposition. I was linked there from this blog post, which gives a good summary.
An interesting paper: http://www.econ.ucsb.edu/papers/wp01-12.pdf
tl;dr -- Abstract (emphasis mine)
"We document a lower bound for the control premium: agents’ willingness to pay to control their own payoff. Participants choose between an asset that will pay only if they later answer a particular quiz question correctly and one that pays only if their partner answers a different question correctly. However, they first estimate the likelihood that each asset will pay off. Participants are 20% more likely to choose to control their payoff than a group of payoff-maximizers with accurate beliefs. While some of this deviation is explained by overconfidence, 34% of it can only be explained by the control premium. The average participant expresses a control premium equivalent to 8% to 15% of the expected asset-earnings. Our resu lts show that even agents with accurate beliefs may incur costs to avoid delegating and suggest that to correctly infer beliefs from choices, one should account for the control premium."
How to make it easier to receive constructive criticism?
Typically, finding out about the flaws in something we did feels bad because we realize that our work was worse than we thought, so receiving the criticism feels like ending up in a worse state than we were in before. One way to avoid this feeling would be to reflect on the fact that the work was already flawed before we found out about it, so the criticism was a net improvement, allowing us to fix the flaws and produce better work.
But thinking about this once we've already received the criticism rarely helps that much, at least in my experience. It's better to consciously remind yourself, before receiving the criticism, that your work is always going to have room for improvement, and that it is certain to have plenty of flaws you're ignorant of. That way, your starting mental state will be "damn, this has all of these flaws that I'm ignorant about", and ending up in the post-criticism state, where some of the flaws have been pointed out, will feel like a net improvement.
Another approach would be to take the criticism as evidence of the fact that you're working in a field where success is actually worth bein...
I think most of the difficulty with receiving criticism comes from not knowing with certainty that the intention behind it is constructive. If I'm sure I actually made a serious and relevant mistake, it's much easier to receive criticism.
Some IRC discussion reminded me that LWers might enjoy a SF short story I wrote some time ago: "Men of Iron".
Orthodox statistics beware: Bayesian radicals spotted:
...A group of international Bayesians was arrested today in the Rotterdam harbor. According to Dutch customs, they were attempting to smuggle over 1.5 million priors into the country, hidden among electronic equipment. The arrest represents the largest capture of priors in history.
“This is our biggest catch yet. Uniform priors, Gaussian priors, Dirichlet priors, even informative priors, it’s all here,” says customs officer Benjamin Roosken, who was responsible for the arrest. (…)
Sources suggest that the shipm
This seems like a big deal:
http://www.pnas.org/content/early/2013/10/28/1313476110.full.pdf
Basically, the author demonstrates an equivalence between p-values and Bayes factors and concludes that 17-25% of studies that clear a p-value significance threshold of 0.05 will be wrong. This implies that the lack of reproducibility in science isn't necessarily due to egregious misconduct, etc., but rather to insufficiently strict statistical standards.
So is this new/interesting, or do I just naively think so because it's not my field?
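For intuition, here's a minimal back-of-the-envelope sketch. It is not the paper's Bayes-factor derivation, just the standard false-discovery arithmetic, with all inputs (power, prior fraction of true hypotheses) assumed for illustration:

```python
# Not the paper's Bayes-factor argument -- just the standard
# false-discovery arithmetic, with all inputs assumed for illustration.

def false_discovery_fraction(alpha, power, prior_true):
    """Fraction of 'p < alpha' findings that are false positives."""
    true_positives = power * prior_true
    false_positives = alpha * (1 - prior_true)
    return false_positives / (true_positives + false_positives)

# Assumed: alpha = 0.05, 80% power, varying prior fraction of true hypotheses.
for prior_true in (0.5, 0.25, 0.1):
    frac = false_discovery_fraction(0.05, 0.8, prior_true)
    print(f"prior {prior_true:.2f}: {frac:.0%} of significant findings are false")
```

Under these assumed numbers, a 0.05 threshold alone puts the false-finding rate anywhere from ~6% to ~36%, bracketing the paper's 17-25% figure, with no misconduct required.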
I'm planning to write a series of posts in which I systematically read the Sequences and comment on them. Has anyone done this before?
Trying to find a link I saw about CFAR publishing some preliminary research on rationality techniques, including a finding that a technique they expected to work didn't actually work. Does anyone know what I'm talking about? My Google-fu is failing me, to the point that I'm wondering if I'm imagining it.
There is an atheist argument, "Religious people are only religious because they want to control other people or are controlled by them. Religion is a system of authoritarian control."
There is a religious argument, "Atheists are only atheists because they want to rebel against God. Atheism is an act of rebellion."
Are these extensionally equivalent?
Are there other common arguments from opposed viewpoints that pair up like this?
A scenario which occurred to me and I found strange at first glance: Consider a fair coin, and two people -- Alice who is 99.9% sure the coin is fair and who can update on evidence like a fine Bayesian, and Bob who says he's perfectly sure the coin is biased to show heads and does not update on the evidence at all.
Nonetheless, the perfectly correct Alice (who effectively has to choose randomly and might as well always say 'heads') and the perfectly incorrect Bob (who always says 'heads' because he's always certain that'll be the correct answer) have the same...
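A minimal simulation of this setup (my own sketch; the seed and sample size are arbitrary):

```python
import random

# Alice's calibrated ~50/50 belief gives her no predictive edge over
# Bob's dogmatic "always heads" rule on a fair coin: both score ~50%.
random.seed(0)
N = 100_000
flips = [random.choice("HT") for _ in range(N)]

# Alice is indifferent, so she guesses at random (she could equally well
# always say heads; either way her expected accuracy is 50%).
alice_correct = sum(random.choice("HT") == f for f in flips)

# Bob is certain the coin is biased toward heads, so he always says heads.
bob_correct = sum(f == "H" for f in flips)

print(f"Alice: {alice_correct / N:.3f}, Bob: {bob_correct / N:.3f}")
```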
Doesn't seem very strange to me. For any (realistic) situation, there are any number of irrelevant false beliefs that you could have while still managing to predict the result correctly. Or even relevant false beliefs that nonetheless produced the right prediction: e.g. a tribe that believed in spirits might believe that sexual intercourse attracted a disembodied spirit into a woman's body and caused it to grow a new body for itself, which would be false but still lead to the correct prediction of (intercourse -> pregnancy).
How did you get this number? Is it lower or higher than Laplace's rule of succession would suggest? Have you ever seen such a comment work?
I have once seen such a comment produce an admission, but I don't think it was very productive. In fact, I think the two people disagreed on what happened.
added: maybe you are distinguishing between explicitly asking the person, as you do, and complaining to the general public, as everyone else does, with your method a priori better but untested. From my observations that's one success out of fifty, so Laplace's rule tells me 4%, compatible with your higher 5-10%.
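For reference, a one-line sketch of the rule being applied here:

```python
# Laplace's rule of succession: with s successes in n trials, estimate
# the probability of success on the next trial as (s + 1) / (n + 2).

def laplace(successes, trials):
    return (successes + 1) / (trials + 2)

print(laplace(1, 50))  # one success in fifty observations -> ~0.038, i.e. ~4%
```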
Not sure what to label or call this thinking error I had recently, but it seems as if it might come up more often, although I cannot come up with another example for now.
I know someone who signed up for a marathon, and I was to be part of the coordination of getting her to the official transportation area. After signing up online, she was able to choose from a set of departure times for the bus that takes runners to the start line. Her wave is set to start at 10:15 a.m. She wanted to select the latest possible departure to avoid idle time standing around tryin...
I'm starting practice drills for stenographic typing. The software (Plover) and the theory/typing drills (I'm using http://qwertysteno.com/Home/) are available for free, and the hardware is cheap (and I already have it).
What I'm really curious about, though, is the value I can get out of roughly doubling my typing speed from 80 WPM to 160. There's the time saved, but that's offset by the time spent learning steno. Really, the big benefit is time-shifting the work of typing out English words from "in the middle of having a thought" to the stenot...
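A quick break-even sketch of the time-saved side of that trade-off. Only the two speeds come from the comment above; the typing volume and learning time are numbers I've assumed for illustration:

```python
# Break-even estimate for learning steno. Only the two speeds come from
# the comment above; typing volume and learning time are assumed.
words_per_day = 5_000        # assumed
old_wpm, new_wpm = 80, 160   # from the comment
learning_hours = 100         # assumed

minutes_saved_per_day = words_per_day / old_wpm - words_per_day / new_wpm
days_to_break_even = learning_hours * 60 / minutes_saved_per_day
print(f"~{minutes_saved_per_day:.0f} min/day saved; "
      f"break even after ~{days_to_break_even:.0f} days")
```

Under those assumptions it pays for itself in about half a year of regular typing, before counting the flow benefits described above.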
Personally, I estimate the value of learning to type faster at approximately zero, because I can type faster (about 70 WPM) than I can decide what I want to type. How much time do you spend wishing you were able to type faster, because your fingers aren't keeping up with your brain?
While I heard of AutoHotkey a long time ago, I only just started using it, and it's extremely useful.
One example would be opening Wikipedia with the clipboard content as the search string. It takes just 3 lines to assign that task to 'Windows Key + W'. I can't grasp why they didn't recommend that we computer science students get proficient with it. It's useful for automating common tasks.
It's much easier to get results when learning programming by automating tasks with AutoHotkey than by learning with simple Python programs that serve ...
I have a constant feeling that I had a great idea or an important thought just now, or just a few minutes ago. I know I have recurring thoughts - not of the bad kind, mind you - that I deem quite useful, but I am never sure whether this feeling of forgetting refers to those recurring thoughts or to something new. Does this kind of thing ring a bell?
So the latest Patch Tuesday updates from Microsoft crashed my internet browsers. I'm sure something like this happens to someone every time, so consider this a reminder to make sure you have adequate space for system restore points. For some reason, I didn't.
PSA: Sign up for Medfusion (or your region's equivalent) if your doctor offers it.
Yesterday I asked my doctor's nurse a question electronically. I had a symptom and I was unsure if it required a visit to the practice. The nurse responded the next day saying the symptom was benign and would go away. This saved me a copayment and a trip outside.
Is there a LW consensus on the merits of Bitcoin? Namely, is it the optimal place to invest money, especially with regard to mining equipment?
I think the value is liable to increase fairly dramatically over time, and that buying/mining Bitcoins will prove incredibly profitable, but I'd like the input of this community before I decide whether or not to put money forth for this venture.
My general impression about mining is that right now it's a horrible idea to get involved in: the necessary investment and expertise keep increasing, and there's a big pipeline of already-paid-for ASICs which cannot justify their purchase cost, but for which the least lossy strategy is to run them and recoup as much of the loss as possible. That pushes up the difficulty massively and makes additional capital investments awful ideas. If one wants exposure, buying bitcoins seems like the best approach right now.
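A toy sketch of that sunk-cost logic (all numbers are assumed for illustration, not actual market data):

```python
# Toy model of the sunk-cost logic: an already-paid-for ASIC should keep
# running as long as daily operating profit is positive, even if it will
# never recoup its purchase price. All numbers are assumed, not real data.

def daily_profit(rig_hashrate, network_hashrate, power_cost_per_day,
                 btc_price, blocks_per_day=144, block_reward=25.0):
    """Expected daily operating profit, ignoring the sunk purchase cost."""
    expected_btc = (rig_hashrate / network_hashrate) * blocks_per_day * block_reward
    return expected_btc * btc_price - power_cost_per_day

# Assumed: a 1 TH/s rig on a 50 PH/s network, $5/day power, $500/BTC.
print(f"${daily_profit(1e12, 5e16, 5.0, 500.0):.2f}/day")
# Positive -> run it; but as network hashrate keeps climbing, this number
# shrinks toward zero, which is what pushes up the difficulty and makes
# *new* capital investments such bad ideas.
```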
It's less a question of average composition (deciding what to write) speed, and more a question of how much I'm keeping in memory. With a slower typing speed, I have to keep more in memory about how I want to finish the thoughts I'm having, and have more difficulty and frustration involved in the process.
In other words, composition isn't a marathon, but a series of sprints. Each sprint is a race to get the thoughts you have out of short-term memory and into storage. You'd probably find your composition speed increases with your typing speed, as you can focu...