Comment author:lukeprog
06 March 2013 07:52:53PM
*
22 points
[-]
Why am I not signed up for cryonics?
Here's my model.
In most futures, everyone is simply dead.
There's a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.
What are the relative sizes of those slivers, and how much more likely am I to be revived in the "better" futures than in the "worse" futures? I really can't tell.
I don't seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.
I bet the hassle and cost of cryonics disincentivizes me, too, but when I boot up my internal simulator and simulate a world where cryonics is free, and obtained via a 10-question Google form, I still don't sign up. I ask to be cremated instead.
Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fundamental error in my model.
Comment author:Elithrion
07 March 2013 12:07:10AM
*
7 points
[-]
So are you saying that P(worse-than-death|revived) and P(better-than-death|revived) are of similar magnitude? I'm having trouble imagining that. In my mind, you are most likely to be revived because the reviver feels some sort of moral obligation towards you, so the future in which this happens should, on the whole, be pretty decent. If it's a future of eternal torture, it seems much less likely that something in it will care enough to revive some cryonics patients when it could, for example, design and make a person optimised for experiencing the maximal possible amount of misery. Or, to put it differently, the very fact that something wants to revive you suggests that that something cares about a very narrow set of objectives, and if it cares about that set of objectives, it's likely because they were put there with the aim of achieving a "good" outcome.
(As an aside, I'm not very averse to "worse-than-death" outcomes, so my doubts definitely do arise partially from that, but at the same time I think they are reasonable in their own right.)
Comment author:CarlShulman
14 August 2013 01:27:45AM
*
3 points
[-]
This seems strangely averse to bad outcomes to me. Are you taking into account that the ratio between the goodness of the best possible experiences and the badness of the worst possible experiences (per second, and per year) should be much closer to 1:1 than the ratio of the most intense per second experiences we observe today, for reasons discussed in this post?
Why should we consider possible rather than actual experiences in this context? It seems that cryonics patients who are successfully revived will retain their original reward circuitry, so I don't see why we should expect their best possible experiences to be as good as their worst possible experiences are bad, given that this is not the case for current humans.
Whoa. What? I notice that I am confused. Requesting additional information.
Most of the time, if I read something like that, I'd assume it was merely false—empty posturing from someone who didn't understand the implications of what they were writing. In this case, though... everything else I've seen you write is coherent and precise. I'm inclined to believe your words literally, in which case either A) I'm missing some sort of context or qualifiers or B) you really ought to see a therapist or something.
Do you mean you're not averse to death decades from now? Does that feel different from the possibility of getting hit by a bus next week?
(Only tangentially related, but I'm curious: what's your order of magnitude probability estimate that cryonics would actually work?)
No, I'm sorry, but there are simply many atheists who really aren't that scared of non-existence. We don't seek it out, and we do prefer the continuation of our lives and their many joys, but dying doesn't scare the hell out of us either.
This, in me at least, has nothing to do with depression or anything that requires therapy. I'm not suicidal in the least, even though I'd be scared of being trapped in an SF-style dystopia that didn't allow me to commit suicide.
Comment author:tut
07 March 2013 05:31:27PM
6 points
[-]
“I do not fear death. I had been dead for billions and billions of years before I was born, and had not suffered the slightest inconvenience from it.” ― Mark Twain
Comment author:lukeprog
07 March 2013 12:53:24AM
*
2 points
[-]
Whoa. What?
Sorry, I just meant that I seem to be less averse to death than other people. I'd be very sad to die and not have the chance to achieve my goals, but I'm not as terrified of death as many people seem to be. I've clarified the original comment.
Comment author:James_Miller
24 December 2013 05:51:15PM
*
0 points
[-]
In most futures, everyone is simply dead.
If there is a high probability of these bad futures happening before you would retire, this belief reduces the cost of cryonics to you: the opportunity cost of spending the money on cryonics rather than putting it into retirement accounts shrinks, since you are less likely to live to spend those savings.
In the really bad futures you probably don't experience extra suffering if you sign up for cryonics because all possible types of human minds get simulated.
Comment author:CellBioGuy
01 March 2013 12:43:51PM
*
17 points
[-]
A new comet from the Oort cloud, >10 km wide, has been discovered that will do a flyby of Mars in October of 2014. The current orbit is rather uncertain, but it is probably passing within 100,000 km, and the maximum-likelihood estimate is ~35,000 km. There is a tiny but non-negligible chance this thing could actually hit the red planet, in which case we would get to witness an event on the same order of magnitude as the K-T event that killed off the non-avian dinosaurs! (And lose everything we have on the surface of the planet and in orbit.)
I, for one, hope it hits. That would not be a once in a lifetime opportunity. That would be a ONCE IN THE HISTORY OF HOMINID LIFE opportunity! We would get to observe a large impact on a terrestrial body as it happened and watch the aftermath as it played out for decades!
As is, though, the most likely outcome is that we get to closely sample and observe the comet with everything we have in orbit around Mars. The orbit will be nailed down better in a few months, when the comet comes out from the other side of the sun.
And to quote myself towards the end of the last open thread:
I don't know if this has been brought up around here before, but the B612 Foundation is planning to launch an infrared space telescope into a Venus-like orbit around 2017. It will be able to detect nearly every Earth-crossing rock larger than 150 meters wide, and a significant fraction of those down to around 30 meters. Infrared optics looking outwards make it much easier to see the warm rocks against the black of space without interference from the sun, and would quickly increase the number of known near-Earth objects by two orders of magnitude. This is exactly the mission I've been wishing for, and occasionally agitating for NASA to get off their behinds and do, for five years. They've got a contract with Ball Aerospace to build the spacecraft and plan to launch on a Falcon 9 rocket. And they accept donations.
Comment author:gwern
01 March 2013 05:29:31PM
10 points
[-]
I saw a mention of that elsewhere, but I didn't realize that the core had a lower bound of 10km. Wow. I really hope it impacts too; we saw some chatter about the need for a space guard with a dinky little thing hitting Chelyabinsk, but imagine the effect of watching a dinosaur-killer hit Mars!
Different sources seem to have different orbital calculations; this one indicates a most likely close approach of ~100,000 kilometers, with the uncertainty wide enough to include a close approach of 0 km.
If nothing else, we very well may get pictures from the surface rovers of the head of a comet literally filling the sky.
Comment author:Thomas
02 March 2013 11:53:09AM
*
2 points
[-]
I am flabbergasted; I have no explanation for this situation.
If this comet is really that big and has approximately the stated flyby orbit, how frequent are such events? If one occurs every thousand years, there have been about 60,000 of them since the K-T event. How come we have had only one collision of this magnitude?
Maybe they are less frequent. But then how lucky are we to witness one of them right now? Too lucky, I guess.
On the other hand, if they were quite common, it looks like we have been too lucky in having had no major collision of that kind relatively recently.
Maybe I am missing something odd, like an unexpected gravitational or other effect by which an actual collision is much more difficult than it seems. Something that makes sense, but only after careful consideration.
Maybe a planet like Mars or Earth somehow repels comets, or dodges them? Some weird effect like that?
I recommend Taleb's The Black Swan. The major premise is that people tend to underestimate the likelihood of weird events. It's not that they can predict any particular weird event; it's about the overall likelihood of weird events with large consequences.
Comment author:CellBioGuy
02 March 2013 03:32:20PM
*
2 points
[-]
Another way of stating it in this circumstance: there are so many different things that we would consider ourselves lucky to see, or that we would notice as unusual, that even if the probability of any one of them is low, the probability that we see something isn't that low.
Comment author:CellBioGuy
02 March 2013 03:25:40PM
*
0 points
[-]
If you are randomly shooting a rock through the solar system, "close approach to Mars within 100,000 km" is about 870 times as likely as "hitting Mars", since the probability scales with cross-sectional area. That brings a "once in 100 million years" event (really roughly guessing based on what I know of Earth's geological history) down to the order of "once in a hundred thousand years", and the proper reference class of things we would consider ourselves this lucky to see is probably more like "close approach of a large comet to a terrestrial body" rather than singling out Mars in particular. I don't know enough about the distribution of comet orbital energies (whether near-parabolic orbits are more likely to dip closer to the center of the solar system or stay further out) to compare the odds of a close pass by each of the terrestrial planets, with their different orbits.
The gravity of a planet actually slightly increases the fraction of randomly-shot-past objects that hit it, compared to just sweeping its cross-sectional area through space, but for something with a relative velocity of 55 km/s (!) that effect is tiny.
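Both numbers check out on a quick back-of-the-envelope pass. This is only a sketch using assumed round values (Mars radius ~3390 km, surface escape velocity ~5 km/s):

```python
# Assumed values for a rough check, not precise ephemeris data.
r_mars_km = 3390.0      # mean radius of Mars
miss_km = 100_000.0     # "close approach within 100,000 km"
v_esc = 5.0             # km/s, escape velocity at Mars's surface
v_inf = 55.0            # km/s, the comet's relative velocity

# For a randomly aimed rock, the chance of passing within a given
# distance of the planet's center scales with cross-sectional area,
# so the close-approach-to-hit ratio is (miss distance / radius)^2.
area_ratio = (miss_km / r_mars_km) ** 2
print(f"close-approach vs. hit ratio: {area_ratio:.0f}")   # ~870

# Gravitational focusing multiplies the effective cross-section by
# 1 + (v_esc / v_inf)^2, which is negligible at 55 km/s.
focusing = 1 + (v_esc / v_inf) ** 2
print(f"gravitational focusing factor: {focusing:.3f}")    # ~1.008
```

So the 870× figure is just geometry, and the focusing correction is under one percent at this encounter speed.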
Comment author:Thomas
02 March 2013 06:16:09PM
0 points
[-]
If so, we are indeed very lucky to observe an event which happens only every 100,000 years or so.
OTOH, I've concluded that it is in fact less likely for a planet to be hit by a random comet than it is for a big massless balloon of the same size to be hit by the same comet.
Why is that? Roughly speaking, if the comet is heading toward some future geometric meeting point, the planet accelerates it by its own gravity, so the comet arrives too early and flies by. Only a very narrow set of circumstances produces an actual collision.
A bit counterintuitive, but it would explain why we have so few actual collisions despite the heavy traffic. Collisions do happen, but less often than random chance would suggest. Gravity mostly protects us.
Zeo users should assume the worst and take action accordingly:
Update your sleep data and then export all your sleep data from the Zeo website as a CSV (the bar on the right hand side, in tiny grey text)
Upgrade your Zeo with the new firmware if you have not already done so, so it will store unencrypted data which can be accessed without the Zeo website.
I'm sad that they're closing down. I've run so many experiments with my Zeo, and there don't seem to be any successor devices on the horizon: all the other sleep devices I've read of are lame accelerometer-based gizmos.
Comment author:skjonas
13 March 2013 04:53:28PM
1 point
[-]
I'm sad about this as well. The Zeo has been the only QS thing that I've been able to get my girlfriend to use, and it has increased her understanding of her sleep patterns dramatically.
I now look back with a twinge of anger at all the times that someone told me that they track their stages of sleep too, but with their iPhone app, and "it was only a dollar."
And to be clear, you can only upgrade the firmware on the Zeo bedside unit, right?
Comment author:gwern
13 March 2013 01:09:03AM
2 points
[-]
Depends. If you know that it's shutting down, are willing to handle the data exporting yourself, and also are willing to possibly pay rising costs for a Zeo unit and replacement headbands...
I know I don't intend to stop (already bought another 3 replacement headbands on Amazon), but I've already used my Zeo for a long time and seem to be pretty unusual in how much I use it.
The firmware is no longer available on their site. I tried to email them, but I got an automated response telling me that customer service is no longer responding to emails and to check the help on their site. Can anyone share the 2.6.3R firmware?
Also, Amazon is sold out of the bedside headbands. Bad timing for me - I only have one left.
2 or 3 days after I went around all Paul Revere-style, I was told that Amazon had run out. So I guess they turned out to not have many at all. (I had 3 left over from previously, and bought another 3, so I figure I should be able to get at least 3 more years out of my Zeo.)
Comment author:Emily
14 March 2013 07:02:15AM
0 points
[-]
I finally just started using RSS feeds and it has improved my workflow dramatically. Now they're breaking my system on me?! Thanks for letting me know...
Comment author:shminux
16 March 2013 07:12:35AM
0 points
[-]
I've imported my feeds into Google Currents, since it can also be used to read regular news, not just feeds, which I do anyway. Trying it out now; hopefully Google will keep improving it if they want Reader users to stay with Google.
Comment author:shminux
18 March 2013 11:32:56PM
1 point
[-]
Update: so far, Google Currents sucks for feeds. Totally unintuitive layout and gestures, it does not show new feeds (or I cannot find where it does), and the formatting of several items is so poor that I give up and go to the original site. Switching back to Google Reader until something better comes along.
Comment author:Yvain
02 March 2013 03:01:10AM
10 points
[-]
I posted this in the waning days of the last open thread, but I hope no one will mind the slight repeat here.
The last Dungeons and Discourse campaign was very well-received here on Less Wrong, so I am formally announcing that another one is starting in a little while. Comment on this thread if you want to sign up.
Comment author:ModusPonies
02 March 2013 09:08:24PM
9 points
[-]
A call for advice: I'm looking into cognitive behavioral therapy—specifically, I'm planning to use an online resource or a book to learn CBT methods in hopes of preventing my depression from recurring. It looks like these methods have a good chance of working, although the evidence isn't as strong as for in-person CBT. At this point, I'm trying to decide which resources to learn from. Any recommendations or anecdotes would be appreciated.
Free guided meditations for "The Mindful Way Through Depression" (get some practice before using "working with difficulty" meditation):
streamable or downloadable
Over the past month, I have started taking melatonin supplements, adopted a new productivity system, made significant changes to my diet, and begun a new fitness routine. February is also a month when I anticipate changes in my mood. I find myself moderately depressed and highly irritable with no situational cause, and I have no idea which of these things, if any, is responsible.
This is not ideal.
I'd been considering breaking my calendar down into two-week blocks, and staging interventions in accordance with this. Then the restless spirit of Paul Graham sat on my shoulder and told me to turn it into an amazing web service that would let people assign themselves into self-experimental cohorts, where they're algorithmically assigned to balanced blocks so that effects of overlapping interventions can be teased apart.
I've never really gotten that into the whole Quantified Self thing, but I'd be keen to see if something like this existed already. If not, I'd consider putting such a thing together.
Any discussion/observations on this general subject?
Comment author:gwern
01 March 2013 05:23:43PM
2 points
[-]
Then the restless spirit of Paul Graham sat on my shoulder and told me to turn it into an amazing web service that would let people assign themselves into self-experimental cohorts, where they're algorithmically assigned to balanced blocks so that effects of overlapping interventions can be teased apart.
So it's a web service that would spit out a random Latin square and then run ANOVA on the results for you?
I don't think I've heard of such a thing. (Most people who would follow the balanced design and understand the results are already able to do it for themselves in R/Stata/SPSS, etc.) Statwing.com might have something useful; they seemed to be headed in that direction of 'making statistics easy'.
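The "spit out a random Latin square" half really is only a few lines. A hypothetical sketch (in Python for illustration; the ANOVA half is what R's aov already does, and the intervention names are made up):

```python
import random

def random_latin_square(treatments):
    """Random n x n Latin square over the given treatment labels: each
    treatment appears exactly once in every row (say, a user) and every
    column (say, a two-week block). Built from a cyclic square with
    shuffled rows, shuffled columns, and relabeled symbols; this isn't
    uniform over all Latin squares, but is fine for scheduling."""
    n = len(treatments)
    base = [[(i + j) % n for j in range(n)] for i in range(n)]
    random.shuffle(base)                  # permute the rows
    cols = list(range(n))
    random.shuffle(cols)                  # permute the columns
    perm = list(range(n))
    random.shuffle(perm)                  # relabel the treatments
    return [[treatments[perm[row[c]]] for c in cols] for row in base]

# Hypothetical interventions; each row is one user's schedule.
for row in random_latin_square(["melatonin", "placebo", "exercise"]):
    print(row)
```

The web-service part would just be persistence and scheduling around something like this.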
I was imagining a site that would look at all the different things you're trying at the moment, look at all the things other people are trying, and give you a macro-schedule for starting them that works towards establishing cyclicality across all users.
It could also manage your micro-schedule (prompt you to take a pill, do twenty sit-ups, squirt cold water in your right ear, etc.), ask for metrics, and let users log salient information and observations. Come to think of it, once that infrastructure is in place, there's no reason you couldn't open it up as a platform for more legitimate and formal trials.
Comment author:gwern
01 March 2013 05:55:00PM
4 points
[-]
Mm. So not just scheduling your own interventions but try to balance across users too... No, I don't know of anything like that. CureTogether actually got some research published, but I don't think randomization or balancing was involved. (And trying to get nootropics or self-help geeks to collectively do something is like trying to herd deaf cats into pushing wet spaghetti...)
Comment author:[deleted]
01 March 2013 03:52:07PM
1 point
[-]
When I found myself depressed and irritable on a diet, it seemed to be evidence that I was hungry. Is there any food or drink you can try consuming to stave off that feeling while still following the diet? As an example, my diet allowed me to consume unlimited amounts of unprocessed fruit, so if I felt depressed and irritable, I could eat that until I felt better without hurting my diet at all.
As you seem to recognize in your reply to Gwern, this probably cannot function as a stand-alone feature, but needs to sit atop a Quantified Self platform. The minimal system is one that just keeps track of your data, while making data entry easier than existing systems. The next step is to figure out what things you're tracking correspond to what things I'm tracking. This is difficult to combine with the flexibility of allowing the tracking of anything.
Why haven't you gotten into the Quantified Self thing? At the very least, they probably have better answers to this question.
Quantified Self seems like one of those things you have to be into, and I'm just not that into it.
It seems to me that a lot of the QS-types take an almost recreational pleasure in what they're doing. I understand that. I get a similar sort of pleasure from other things, but not this. I'd like the information, but there's only so much effort I'm prepared to spend on getting it.
It seems plausible to me that traditional financial advice assumes that you have traditional goals (e.g. eventually marrying, eventually owning a house, eventually raising a family, and eventually retiring). Suppose you are an aspiring effective altruist and willing to forgo one or more of these. How does that affect how closely your approach to finances should adhere to traditional financial advice?
Comment author:Viliam_Bur
12 March 2013 09:18:36AM
2 points
[-]
I would say that at the beginning you have to make a choice -- will you contribute financially or personally?
If you want to contribute financially, you simply want to maximize your income, minimize your expenses, and donate the money to effective charities. (You only minimize your expenses to the level where it does not hurt your income. For example if keeping the high income requires you to have a car and expensive clothes, then the car and clothes are necessary expenses. Also you need to protect your health, including your mental health: sometimes you have to relax to avoid burning out.) Focus on your professional skills and networking.
If you want to contribute personally, you need to pay your living expenses, either from donated money, or by retiring early (the latter is probably less effective). Focus on social skills and research.
The house and family seem unnecessary (at least for the model strawman altruist).
So apparently I should be somewhat concerned about dying by poisoning. Any simple tips for avoiding this? It looks like the biggest killers are painkillers and heavy recreational drugs, neither of which I take, so I might be safe.
It's basically "applied statistics/some machine learning in R": we get a quick tour of data clean up and munging, basic stats, material on working with linear & logistic models, use of common visualization and clustering approaches, prediction with linear regression and trees and random forests, then uses of simulation such as bootstrapping.
There's a lot of material to cover, and while there are plenty of worked examples in the lectures, I don't see anyone learning R or statistics just from this course: you should definitely have used R to some degree before (at least running some t-tests or graphs), and you will definitely benefit from already knowing what a p-value is and how you would calculate it by hand (because, e.g., you'll be flummoxed when the lecturer Leek works out a confidence interval 'by hand' while coding: "where does this magic value 1.96 come from?!").
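For anyone hitting the same wall: the "magic" 1.96 is just the 97.5th percentile of the standard normal, since a 95% interval leaves 2.5% in each tail. A quick illustration (in Python rather than R, with made-up data; strictly, at this small n you would use a t quantile instead of z):

```python
from statistics import NormalDist, fmean, stdev

# The 97.5th percentile of the standard normal is where 1.96 comes from.
z = NormalDist().inv_cdf(0.975)
print(round(z, 3))                    # 1.96

# A 95% CI for a mean is then: mean +/- z * standard error.
data = [4.1, 5.3, 4.8, 5.0, 4.6, 5.2, 4.9, 5.1]   # made-up sample
se = stdev(data) / len(data) ** 0.5
m = fmean(data)
print(f"95% CI: ({m - z * se:.2f}, {m + z * se:.2f})")
```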
On the plus side, I liked all the examples, and the curriculum seems useful and well-chosen; it's a reasonable introduction to 'data science'. I think my time wasn't wasted on this Coursera course: I'm more comfortable with some of the more advanced/exotic techniques, and I picked up many R tips, some of which have come in handy already (e.g. some of the data-munging tips were useful in working with Touhou music data, and I've been able to replace all my homebrew Haskell multiple-correction code in various nootropics & Zeo experiments with a standard R library function, p.adjust, which I had no idea existed until the lecture on multiple comparisons introduced it to me), although as yet I have not used bootstraps or random forests or splines in anger. (If anyone is thinking about doing it in the future, see my comment about the prerequisites.)
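For readers outside R: there's no stdlib equivalent of p.adjust in, say, Python (statsmodels' multipletests covers it), but the Benjamini-Hochberg method, one of the adjustments p.adjust offers, is short enough to sketch:

```python
def p_adjust_bh(pvalues):
    """Benjamini-Hochberg adjusted p-values, matching what R's
    p.adjust(p, method="BH") computes."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, taking a running minimum of
    # p * n/rank so that the adjusted values stay monotone in rank.
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

print(p_adjust_bh([0.005, 0.03, 0.04]))   # approximately [0.015, 0.04, 0.04]
```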
On the negative side: like most of the other students, I think this should have been longer than 8 weeks, and the estimate of 5 hrs/wk is misleading. The pace was very unforgiving. I was relatively well-prepared for this course, but I still wound up submitting a paper for the second data-analysis assignment that I think was very substandard. Why? Well, though we had two weeks or so to do it, I deliberately didn't do much work on it in the first week, because in the first assignment you couldn't do a good job without the lectures from the week before the assignment was due, and I didn't want to get bushwhacked again; but in the actual week before, I got completely distracted by my Touhou music project, and so I wound up just not having the time or energy to do it. Similar things happened to a lot of other students: there was no slack or recovery time.
(There were also the usual teething problems of any new course: wrong or misleading quizzes, errors in lectures, that sort of thing. The peer review grading seems particularly poor, with the required grades being based on pretty superficial aspects of the submitted analyses.)
Comment author:Viliam_Bur
12 March 2013 09:50:28AM
1 point
[-]
I like this approach more, because... I would be more likely to try that at home.
Most of the items are easy to buy anywhere. The most inconvenient to get would be the following: Body Fortress whey protein, a Jamba Juice beverage, and Source of Life Liquid Gold. Could they be replaced with something more generic?
Also, eating the raw egg feels like a bad idea.
Without these ingredients, I would be very likely to try it now.
Comment author:[deleted]
12 March 2013 10:44:44AM
0 points
[-]
I had Body Fortress at home, and Jamba Juice was on the website. Just use some kind of wheatgrass and whey protein. It doesn't have to be Source of Life either, as long as it's high quality; I've seen Ortho Core and Orange Triad recommended on bodybuilding forums. The whole recipe is suggestions anyway. I also see no reason not to, say, use kale and raspberries instead of spinach and blueberries; maybe that will help if I get bored with the taste. I hope you keep me posted if you try this.
Comment author:[deleted]
16 March 2013 07:06:52PM
*
1 point
[-]
I was taking a friend's word on how amazingly beneficial wheatgrass juice is, until he claimed I could get everything I needed from wheatgrass indefinitely, which seemed outright crazy. So I researched it myself, and I didn't find compelling evidence that it's any more beneficial than normal vegetables. I have some in my freezer, so I'm going to use it, but unless you have a cheap source, I don't think it's worth it, given that it tends to be expensive and taste like lawn clippings. This is embarrassing.
Comment author:Gabriel
10 March 2013 11:56:34PM
4 points
[-]
I've just noticed that hovering the mouse pointer over a post's or comment's score now displays a pop-up showing what percentage of the votes were positive. New feature, or am I just really bad at noticing black stuff suddenly appearing on my screen?
Anyway, it's pretty nice. You can, for example, upvote a comment from 0 to 1, notice that the positive-vote percentage changes only by a few points, and suddenly realize that there's a vote war going on in there.
Comment author:Dorikka
06 March 2013 06:08:10AM
4 points
[-]
Does anyone know which of the books on the academic side of CFAR's recommended reading list are likely to be instrumentally useful to someone who's been around here a couple years and has read most of the Sequences? It seems likely that there's some useful material in there, but I'd rather avoid reviewing a bunch of stuff.
I haven't signed up yet because I'm still assessing whether the overhead of filling it out will be too much of a trivial inconvenience, but I thought some others might be interested. From poking around, it looks like it has a lot of potential but is still a little raw. It has the core game elements firmly in place, but lacks the public status/accountability elements that good games provide (through achievements/badges) and that Fitocracy provides (through community/public accountability).
I've been using it for something like a week and am finding it moderately useful. Its two big advantages are that it hijacks my pathological desire to watch my numbers go up, and the near-complete lack of customization. (When using a calendar, I have to think of when the task is due. When using beeminder, I have to think about how frequently I'll be doing the task. With this, for any possible task, there are no fiddly bits to get in the way of just shutting up and putting it in the list.) The drawbacks are the weak enforcement and the near-complete lack of customization.
Comment author:FiftyTwo
07 March 2013 01:39:06AM
0 points
[-]
I've been looking for something like this for a while, after success with Fitocracy. (I tried to make one myself, but failed due to lack of relevant skills and interest.)
I have been reading up on religious studies (yes, I ignored that generally sound advice never to study anything with the word 'studies' in the name) in order to better understand Chinese religion.
Unexpectedly, I have found the native concepts are useful (perhaps even more useful) outside the realm of religion. That is to say, distinctions like universalist/particularist, conversion/heritage, and concepts like orthodoxy, orthopraxy, reification, etc... are useful for thinking about apparently "non-religious" ideologies (including, to some extent, my own).
My first instinct when hearing a claim is to try to figure out whether it is true, but I fear I have been missing the point, since much of the time the truth of the claim is irrelevant to the speaker. Instead, I should focus more (at least on the margin) on the function a given stated belief plays in the life, especially the social life, of the person making the assertion.
However, even in countries with high gender equality, sex differences in math and reading scores persisted in the 75 nations examined by a University of Missouri and University of Leeds study. Girls consistently scored higher in reading, while boys got higher scores in math, but these gaps are linked and vary with overall social and economic conditions of the nation.
Comment author:Ritalin
10 March 2013 09:42:16PM
3 points
[-]
Saving the world through ECONOMICS
In a world of magic and fantasy, there exist two worlds: the Human World and the Demon World of fantasy creatures. Fifteen years ago, the "War of the Southern Kingdoms" broke out between both sides, each intending to conquer the other. Both sides were locked in a stalemate, until a young male human decides to do something about it. Known as the Hero, he is a skilled and powerful warrior who has traveled to the Demon World to end their evil by killing their leader, the Demon Queen.
But what surprises the Hero when he storms the Demon Queen's castle is that she doesn't want a fight. She just wants to reveal to him a sordid truth: the war has never really been about good versus evil. It's a far more complicated affair, with each side being equally good and evil all the same.
On one hand, the war helped unite erstwhile feuding kingdoms against a common enemy. On the other hand, it allowed opportunists to take advantage of their own races and get rich off the war: powerful, corrupt humans control the poor and weak, while warmongering demon clans harass pacifistic ones. Then there are the prospects should one side win: the losers get oppressed, while the winners break down into infighting over the spoils. Prematurely ending the war is an even worse idea, because so much money, time, and resources have been spent on the war effort that soldiers could never get any compensation should a ceasefire be signed immediately, causing each side to break down into civil war against their former employers.
Fortunately, the Demon Queen has a better idea, and she wants the Hero's help: forge a peaceful end to the war with the least repercussions by playing behind the scenes and at the same time introduce sweeping reforms on all levels of society. Convinced, the Hero agrees to join her as they try to forge a peaceful way out, gaining allies and companions in the process.
Is anyone else watching Maoyuu Maou Yuusha, or reading the relevant novels? It's about as close to rationalist fiction as I've ever seen a commercial work be. It goes way further than the premise; a strong spirit of secular humanism is embedded into the story and its characters, and it's got some of the finest examples of a Patrick Stewart Speech I've seen this side of fantasy.
Comment author:gwern
10 March 2013 10:24:38PM
0 points
[-]
I found the premise really cool, but I'm still waiting for the season to finish and for the anime bloggers to sum up whether it managed to deliver a good plot arc or not. (It may turn out to be one of those series where you're better off just reading the novels or whatever.)
Comment author:tgb
03 March 2013 04:19:30AM
*
3 points
[-]
Link: This Story Stinks: an article on a study showing that readers' perception of a blog post changes when they read the comments. In particular, any comments involving ad hominems or general rudeness polarize people's views. Full paper link.
I've been trying out the brain-training software from Posit science. I've definitely gotten better at some of their training material (tracking objects in a crowd of identical objects and seeing briefly shown motion), but I'm not sure whether it's improving my life.
Have any of you tried Posit's BrainHQ? If so, how has it worked out for you?
The training exercises look like they're only available as expensive software, but if you do their free exercises, they'll offer a $10/month option.
I found out about Posit from this video. Merzenich clearly has something to sell, but nothing he said seemed like obvious nonsense.
Comment author:[deleted]
01 March 2013 03:19:20PM
12 points
[-]
I wanted to apologize for the post I made on Discussion yesterday. I hope one of the mods deletes it. I should have thought more carefully before posting something controversial like that. I made multiple errors in the process of writing the post. One of the biggest mistakes I made was mentioning the name of a certain organization in particular, in a way that might harm that organization.
In the future, before I post anything, I will ask myself, "Will this post raise or lower the sanity waterline?" The post I made clearly didn't really do much for the former, and could easily have contributed to the latter. For that I am filled with regret.
I have a part-time job, and I will be donating at least $150 of my income to the organization I mentioned and possibly harmed in the previous post I made.
I'm not making this comment for the purpose of gaining back karma; I'm making it because I still want to be taken seriously in this community as a rationalist. I know that this may never happen, now, but if that's the case, I can always just make another account. Less Wrong is amazing, and I like it here.
Comment author:Kaj_Sotala
01 March 2013 04:55:10PM
10 points
[-]
If you're not making mistakes, you're not taking risks, and that means you're not going anywhere. The key is to make mistakes faster than the competition, so you have more chances to learn and win.
Comment author:wedrifid
02 March 2013 01:44:47AM
5 points
[-]
I'm not making this comment for the purpose of gaining back karma; I'm making it because I still want to be taken seriously in this community as a rationalist. I know that this may never happen, now, but if that's the case, I can always just make another account.
Based on your handle I assumed you already had another account. I do suggest making another one now. There is no need to take that baggage with you---leave that kind of shit as anonymous.
Comment author:Viliam_Bur
02 March 2013 07:53:05PM
1 point
[-]
There could be a plugin for this. Imagine that before sending a post, you have to answer a few questions, such as: "Your certainty that this post will move the sanity waterline in a positive direction".
But we are only humans. We would learn very soon to ignore it, and just check the "right" answers automatically.
Maybe it would work better if it displayed only randomly, once in a few comments. And then the given comment could be sent to reviewers, who could inflict huge negative karma if they strongly disagree with the estimate.
Or perhaps there could be an option to click "I am sure this comment is useful and harmless" when sending a comment. A comment without this option gets +1 karma on upvote and -1 on downvote; a comment with this option gets +2 on upvote and -5 on downvote. This could make people think before posting.
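That asymmetric scheme is simple enough to sketch. Here is a minimal, purely illustrative version (the function name and the weights-as-parameters are mine; nothing like this is an actual LW feature):

```python
def comment_score(upvotes, downvotes, confident=False):
    """Score a comment under the proposed asymmetric karma scheme.

    Ordinary comments count +1 per upvote and -1 per downvote.
    Comments flagged "I am sure this comment is useful and harmless"
    count +2 per upvote but -5 per downvote, so claiming confidence
    raises the stakes.
    """
    up_weight, down_weight = (2, 5) if confident else (1, 1)
    return up_weight * upvotes - down_weight * downvotes
```

So a comment at 3 up / 1 down scores 2 normally, but only 1 if the author claimed confidence, and confident comments go negative much faster.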
Comment author:drethelin
03 March 2013 03:34:46AM
4 points
[-]
I like the idea of a questionnaire that pops up randomly when making a comment, at a rate of maybe 1-10 percent. Possible example questions:
Do you think this comment is funny?
Do you think this comment is useful to the person you're responding to?
Do you think this comment is useful to anyone but the person you're responding to?
Do you think this comment will have positive karma? How much?
Would you make this comment to anyone's face?
etc
Displaying one or more of these at a rate that makes you think, but not so often as to be super annoying, would be fun and would provide some neat data.
On the other hand I'm sure programming it would be a bitch.
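The core of it, at least, would not be much of a bitch to program. A toy sketch of the random pop-up (the display rate and question list just restate the suggestion above; this is hypothetical, not an actual LW feature):

```python
import random

# Example questions from the suggestion above.
QUESTIONS = [
    "Do you think this comment is funny?",
    "Do you think this comment is useful to the person you're responding to?",
    "Do you think this comment is useful to anyone but that person?",
    "Do you think this comment will have positive karma? How much?",
    "Would you make this comment to anyone's face?",
]

def maybe_questionnaire(rate=0.05, rng=random):
    """With probability `rate` (1-10% suggested), return one randomly
    chosen question to show before the comment is posted; otherwise
    return None and let the comment through silently."""
    if rng.random() < rate:
        return rng.choice(QUESTIONS)
    return None
```

The hard part would be wiring it into the comment form and logging the answers, not the sampling logic itself.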
Comment author:CoffeeStain
11 March 2013 08:50:45AM
2 points
[-]
I'm having a motivation block that I'm not sure how to get around. Basically whenever I think about performing an intellectual activity, I have a sudden negative reaction that I'm incapable of succeeding. This heavily lowers my expectation that doing these activities will pay off, most destructively so the intellectual activity of figuring out why I have these negative reactions.
In particular, I worry about my memory. I feel like it's slipping from what it used to be, and I'm only 24. It's like, if only I could keep the details of the memory tricks in my head long enough I might be able to improve it. :) Only partially kidding.
In short, it takes a lot of effort for me to feel like I'm succeeding at succeeding. And I don't know why.
Specifically regarding memory, things don't need to be in your head for you to remember them. Start writing stuff down. All the stuff. Doesn't matter where. Anywhere is better than nowhere. I recommend Workflowy.
Comment author:Viliam_Bur
12 March 2013 09:39:36AM
2 points
[-]
You are not being specific enough about the memory problem. If you start forgetting your own name or something like that, you should visit a doctor. But if you only forget some details of what you learned at school, that means you have already learned many things; so many that your day simply is not long enough to review them all (and you also have to focus on many other things). You have to develop the art of note taking. The more you have to know, the more critical this skill becomes. It is an illusion to try to keep everything in your head just because that strategy worked when you knew only a little.
The difficulty of succeeding may mean that you have already picked most of the low-hanging fruit. Just like in a computer game, the higher levels get more difficult. The difficulty does not mean that you are less powerful; it means that you are powerful enough to work on the more difficult tasks. Also, some tasks require time and discipline; you simply cannot master them at your first attempt.
I think you have to apply two kinds of fixes: psychological and organizational. Don't ignore either of them. It is important to make yourself feel better. And it is also important to use better tools. Without better tools your success is limited. But your mind remains the most important of your tools.
Many thanks. My memory issue certainly isn't any sort of disorder, and indeed the sort of success I'd like to have with it are of a high level. There has been a decline in the last few years of my (formerly exceptional) abilities here, and I need to find ways to increase my attention to it as a graspable and controllable challenge/problem.
Generally my ability to deal with attention, focus, and memory issues correlates to my day-to-day mood and self-confidence. I've found a coach through the community here to help me find ways to combat these slightly more fundamental issues. It is good, though, to see the wide variety of talk here about improving focus, overcoming "Ugh fields," and the like.
Fundamentally, my issue is one of keeping a particular skill in practice, and so I appreciate your practical suggestions. University offers an environment that more constantly practices skills such as learning, remembering, and new-paradigm thinking. My work environment offered similar challenges for a year or so, but I've since gained an expertise that is more valuable to use than to grow.
Today I gave a presentation to a group of 50 software developers in my company, and I was pleasantly surprised at my abilities. Apparently all of my on-the-fly speaking skills (which I had presumed dead since school) were just latent, if out of practice, until the adrenaline kicked them back online. This was in no small part due, I suppose, to some mental tricks I've learned here for convincing myself of my future success, based on previous successes.
Just typing for my own benefit now. Thank you very much for your advice!
Comment author:Viliam_Bur
13 March 2013 09:01:28AM
*
2 points
[-]
Glad to be useful. In similar situations I often don't know how much the advice I would have given to myself also applies to other people.
For me, the greatest memory-related shock came about a year after finishing university. I found my old paper with notes for the final exam, and I realized I didn't understand half of the questions. Not only was I unable to answer them, but I had trouble finding any related association. For the whole year at my job I had been doing something completely different, and I forgot many things without even being aware that it happened. (The problem is, despite having studied computer science and working as a programmer, I never use 95-99% of what I learned at school. I know a lot of theory; I should be able to invent a new programming language and write a compiler with some basic optimization; but in real life I mostly do web interfaces for databases, over and over again.) Now I am sorry I didn't make better notes at university. But at the time, I was so proud that I understood everything. I didn't have experience with what happens when you simply never think about a topic for years. If you are 24, this may already be happening to you too, or will soon.
A few years later, my programming career progressed: I wrote code for two years in Java, then seven years in something else. Then I returned to Java and thought: oh, here is the forgetting again! This time I was lucky, because I simply downloaded the official documentation, read it from beginning to end, and most of the forgotten memories returned quickly. (I didn't have the note-making skill yet, but I already had the habit of always looking at the authoritative documentation first.) But then I realized that "learning to forget" is a stupid strategy when it comes to really useful things, so I started to make notes. (First I spent a lot of time trying to find good software for that, and ended up writing my own. Today, I would probably use some existing tool.) Now when I learn something related to programming, I immediately start writing notes. At the beginning they are a bit chaotic, but I can always refactor them later. I tried to apply the same habit in other areas of life, but somehow it didn't work. Recently I started using Anki when learning human languages. The difference is, with human languages, you need to keep it all in your head, all the time, because you never know when you will need a word. With computer languages, remembering is not necessary, only the ability to find things quickly; and it is good to have the knowledge divided by topic. I could use Google for many questions, but some topics are rather difficult to find this way (either because many people ask the question and nobody provides an answer, or because many people provide incorrect information), and I believe I can write the information in the format most legible to me.
For the mood, reminding yourself of your past successes is very good. Sometimes people don't see the forest for the trees. A great success may require a thousand days of work, and when you wake up on day #470 and you don't see any progress compared with days #469 and #468, it is easy to believe that you are not going anywhere. If you have a list of successes, and you see that every other year something great happens, that puts things into better perspective. (But it also goes the other way round. If you procrastinate, it is easy to believe that you are on the way to your next goal, when in fact you are going nowhere.)
Comment author:FiftyTwo
08 March 2013 08:23:23PM
2 points
[-]
Apparently conscientiousness correlates strongly with a lot of positive outcomes. But unfortunately I seem to be very low on it.* Is there anything I can do to train it?
*Standard disclaimers about self assessment apply.
Comment author:beoShaffer
08 March 2013 09:37:04PM
1 point
[-]
You can take actual Big Five tests online (see the latest LW survey for an example). The Big Five traits tend to be pretty stable, but putting yourself in a social group that has the trait you want is relatively effective. Also, there is a whole lot of YMMV on which one(s) to use, but organizational/productivity tools like Getting Things Done can let you act in usefully conscientious ways without changing your personality per se.
Yeah, me too. What worked for me was getting a relatively structured job with standards so low that my (minimal) natural level of professionalism exceeded their requirements. Conscientiousness is really, really difficult to train, but you can move further from your current baseline by changing the people you hang out with or work with. Industriousness, OTOH, is trainable. The last comment I saw about this linked a good paper.
You can do better, but having low conscientiousness still blows.
Comment author:Elithrion
07 March 2013 08:08:50PM
*
2 points
[-]
So, I notice some of the top contributors have the "Overview" page that appears when you click on their name display their LW wiki user page instead of the standard recent comments/posts summary (gwern, for example). Is that only for super-awesome people or is there some way to enable it that I failed to find?
[edit:] Okay, apparently patience is the key. It started working for me somewhere between 24 and 48 hours after I made the wiki page for my username.
Comment author:wedrifid
10 March 2013 09:55:43PM
2 points
[-]
So, I notice some of the top contributors have the "Overview" page that appears when you click on their name display their LW wiki user page instead of the standard recent comments/posts summary (gwern, for example). Is that only for super-awesome people or is there some way to enable it that I failed to find?
I find this feature really damn annoying. I don't want to see people's wiki profile. If I click on the name it is because I want to see the posts and comments. It would be great if this 'feature' could be disabled.
Comment author:Elithrion
10 March 2013 10:13:03PM
1 point
[-]
Aw, but I wrote something relevant on mine! (Although most people don't seem to, admittedly.) I guess it'd be ideal if there were an option to enable/disable it for yourself and also to enable/disable skipping that page and going to comments when viewing.
Comment author:[deleted]
11 March 2013 01:23:57PM
1 point
[-]
Mine used to say that I was the same username on LW-wiki as on LW-Main, but I cleared it because it became redundant with this feature. Unfortunately I don't have rights to delete pages on the wiki, which is also mildly annoying for me if I want to look at my own comments.
Comment author:Kawoomba
07 March 2013 08:39:14AM
2 points
[-]
It occurs to me that there is a roadblock for an AI to go foom; namely that it first has to solve the same "keep my goals constant while rewriting myself" problem that MIRI is trying to solve.
Otherwise the situation would be analogous to Gandhi being offered a pill that has a chance of making him into anti-Gandhi, and declining it.
If the superhuman - but not yet foomed! - AI is not yet orders of magnitude smarter than a hoo-mon, it may be a while before it is willing to power up / go foom, since it would not want to jeopardize its utility function along the way.
Just because it can foom does not imply it'll want to foom (because of the above).
Comment author:drethelin
07 March 2013 08:55:01AM
1 point
[-]
This is interesting, though I think it's less relevant for an entity made out of readable code. In the pill situation, if Gandhi fully understood both his own biochemistry and the pill, all chance would be removed from the equation.
Comment author:Kawoomba
07 March 2013 09:00:03AM
*
0 points
[-]
edit: More relevant reply:
A human researcher would see all of the AI's code and the "pill" (the proposed change), yet even without that element of "chance", determining what the change would end up doing is not yet a solved problem.
If the first human-programmed foom-able AI is not yet orders of magnitude smarter than a human (and it's doubtful it would be, given that it's still human-designed), then the AI would have no advantage in understanding its own code that the human researcher wouldn't have.
If the human researcher cannot yet solve keeping the utility function steady under modifications, why should the AI, of a similar magnitude of intelligence, be able to (both have full access to the code base)?
Just remember that it's the not-yet-foomed AI that has to deal with these issues, before it can go weeeeeeeeeeeeeeeeKILLHUMANS (foom).
Comment author:palladias
06 March 2013 07:26:32AM
2 points
[-]
I've just moved to the Bay Area, and, as I'm unsubscribing from all my DC-area theatre/lecture/fun event listservs, I am sad I don't yet know what to replace them with!
What mailing lists will tell me about theatre, lectures, book clubs, social dance, costuming, etc in Berkeley and environs?
Does anyone know if there any negative effects of drinking red bull or similar energy drinks regularly?
I typically use tea (caffeine) as my stimulant of choice on a day-to-day basis, but the effects aren't that large. During large Magic: the Gathering tournaments, I typically drink a red bull or two (depending on how deep into the tournament I go) in order to stay energetic and focused - usually pretty important/helpful, since working on around 4 hours of sleep is the norm for these things.
Red bull works so well that I'm considering promoting it to semi-daily use, but I'd like to know exactly what I'm buying if I do this.
Edit: After saying it out loud, I just realized that if I use red bull regularly, it might lose its effects due to caffeine/whatever dependency. TANSTAAFL strikes again :-/ Still interested in any evidence though.
Comment author:[deleted]
04 March 2013 05:04:46PM
2 points
[-]
What is the purpose of the monthly quotes thread? (To post quotes, obviously.) But it seems to me that a lot of the time, it's just an excuse for applause lights.
Best case, someone finds a quote that expresses a rationality idea that I agree with but couldn't articulate as eloquently as the quote. This is particularly nice when it comes from an unexpected source; when I see good rationality coming from places I didn't expect, it's evidence that the corresponding ideas are good ideas rather than just, say, ideas popular on LW.
Comment author:Thomas
03 March 2013 09:59:34AM
*
2 points
[-]
Is there a way to instantly know which articles I have already read on LW (or elsewhere)?
Well, if I have a camera on my computer, it could track my eyes and the displayed article and make some time-based guesses about what I have actually read. That text could then be displayed with a yellowish background next time.
Just a suggestion.
P.S.
Or at least, there should be an "I HAVE READ IT!" button somewhere, with a personal mark of how good it was, independent of the up/down vote thumbs.
Comment author:FiftyTwo
03 March 2013 03:27:25PM
0 points
[-]
Presumably if you have browsing history stored on your computer you could have an indicator if a web address had been accessed before? (Presumably using the same function that makes links blue/purple.)
This comment discusses information hazards, but not in much detail.
"Don't link to possible information hazards on Less Wrong without clear warning signs."
— Eliezer, in the previous open thread.
"Information hazard" is a Nick Bostrom coinage. The previous discussion of this seems to have focused on what Bostrom calls "psychological reaction hazard" — information that will make (at least some) people unhappy by thinking about it. Going through Bostrom's paper on the subject, I wonder if these other sorts of information hazards should also be avoided here:
Distraction hazards — addictive products, games, etc.; especially those that have been optimized to be so. Examples: Links to video games; musical earworms; discussions of addictive drug use; porn.
Role model hazards — discussions of people doing harmful things; bad examples that readers might imitate. Examples: Talking about suicide and thoughts leading to it; fatalistic discussion of bad habits.
Biasing hazards — information that amplifies existing biased beliefs. Examples skipped to avoid a distracting political discussion here.
Embarrassment hazard — discussions of embarrassing things happening to people in the community. Examples: Links to scandalous or distorted stories about members of the community; gossip in general.
Comment author:Adele_L
01 March 2013 09:44:22PM
10 points
[-]
Another thing that seems to fit this pattern, which I have seen elsewhere, is the Trigger Warning, used before people discuss something like rape, discrimination, etc., which can remind people who have experienced those things of the event, causing some additional trauma.
Comment author:ModusPonies
03 March 2013 08:20:49AM
*
3 points
[-]
Has anyone here ever decided not to read something because it had a trigger warning? I can't imagine doing so myself, but that may be the typical mind fallacy.
Has anyone here ever decided not to read something because it had a trigger warning? I can't imagine doing so myself, but that may be the typical mind fallacy.
I have chosen not to consume media (including but not limited to text) because of an explicit trigger warning. Not often, though; most trigger warnings relate to topics I don't have trauma about.
More often, I have chosen to defer consuming media because of an explicit trigger warning, to a time and place when/where emotional reactions are more appropriate.
I have consumed media in the absence of such warnings that, had such a warning been present, I would have likely chosen to defer. In some cases this has had consequences I would have preferred to avoid.
Comment author:tut
03 March 2013 09:03:25AM
*
5 points
[-]
I haven't, but I think trigger warnings are appropriate for things that hurt a few people disproportionately. If something hurts everyone who reads it, you shouldn't write it at all; and if it hurts no one more than it is worth, it isn't a case for trigger warnings. But if it is something that needs to be said to many people, and there is a significant group (perhaps those who have had a certain experience) who would suffer a lot from reading it, then you put a trigger warning that would be recognized by that group at the top.
TL;DR: If most people never care about trigger warnings, then they might work as intended.
Comment author:erratio
03 March 2013 06:03:03PM
*
2 points
[-]
I have chosen not to Google something that I was warned would involve seeing particularly horrific images. I imagine that if said topic was put in blog post form with a trigger warning up the top, I would probably choose not to read it.
EDIT: It's probably worth adding that I adopted this policy after discovering the hard way that there are things out there I would really prefer not to see/hear about.
Comment author:[deleted]
04 March 2013 01:03:56PM
0 points
[-]
I haven't, but then I have never experienced a serious trauma that I don't want to be reminded of, so I'm not the kind of person that people who write trigger warnings are thinking about.
Agreed — Bostrom's classification "psychological reaction hazard" seems like it should include "trigger" as a subset — both the original sense of "PTSD trigger" and the more general sense that seems popular today, which might be expanded as "information that will remind you of something that it hurts to be reminded of."
Comment author:Alejandro1
01 March 2013 07:31:22PM
*
8 points
[-]
As for distraction hazards, I have often seen links to TvTropes been posted with a warning sign, both here and in other sites. (Sometimes a plain "Warning: TvTropes link", sometimes a more teasing "Warning: do not click link unless you have hours to spare today".)
Or "Warning: Daily Mail" (or other sites working on the click-troll business model): linking to a site your readers may object to feeding even with a click. It's a knowledge hazard too, in that even when such sites are more accurate than their usual level, they still tend to be wrong.
Comment author:shminux
01 March 2013 08:21:04PM
7 points
[-]
Why stop there? Employment hazard (NSFW), Copyright hazard (link to torrent, sharing site or a paper copied from behind a paywall), Relationship hazard (picture of a gorgeous guy/girl), dieting hazard (discussion of what goes well with bacon)...
Well, the ones I mentioned are drawn from Bostrom's paper (although they aren't all of his categories). Eliezer seemed to be specifically discouraging a class of psychological reaction hazards while using the more general term "information hazard" to do it; I thought to inquire into what folks thought of other classes of information hazard.
Comment author:Elithrion
02 March 2013 05:31:30AM
2 points
[-]
I think it'd be nice to have a (probably monthly) "Ideas Feedback Thread", where one would be able to post even silly and dubious ideas for critique without fear of karma loss. Rules could be that you upvote top level idea comments if they sound interesting (even if wrong), and downvote only if you're really sure that it's very easy to find out they're bad (e.g. covered in core sequences). Could also be used for getting feedback on draft posts and whatnot.
The plan being that questionable ideas are put into their own thread for feedback, instead of being potentially turned into questionable posts. At the same time, it would give people a place to be wrong and get feedback without fear of repercussions and hopefully without forming negative associations with doing stuff on Less Wrong.
(Potential downsides include that it could steal from other, more filtered, content locations if people feel it's a less risky place to post things; that posting in the thread may be seen as low-status; and that someone reading recent comments may vote in an unintended way or feel burdened with reading less filtered content. I feel that these are probably outweighed by the upsides.)
Comment author:Elithrion
02 March 2013 05:19:29PM
0 points
[-]
I think open threads are in practice already this.
Not that I have noticed. Open Threads seem to primarily be "here's a cool thing I'd like to let you know about". If I want to post something like "The 'you are cloned and play prisoner's dilemma against yourself' example against CDT is actually pretty bad. Solving it doesn't require UDT/TDT so much as self-modification, with which even CDT would be able to easily solve it." (with a few more lines of explanation), for example, my model of Open thread predicts that if I'm wrong, I'll be downvoted a few times, and may or may not get good feedback. Also that Open Thread is meant for things that are more of interest to everyone, rather than being fairly specific. Which is why I'm not posting anything like that, even though I'm 80% sure that particular example is correct and may be of interest to at least some people.
Excessively encouraging such things could breed crackpots.
In what way? I doubt existing visitors to Less Wrong would be significantly more likely to generate crackpot ideas because of the existence of a thread, and I doubt even more that more crackpots would come to Less Wrong to participate in one thread. It may, admittedly, reduce conformity if people find unexpected support for non-mainstream ideas; however, I'm not sure that most would consider that a bad thing.
my model of Open thread predicts that if I'm wrong, I'll be downvoted a few times, and may or may not get good feedback.
I think downvotes would depend on how you present your idea. If you present your idea as if you're already convinced you're right, and you're not, I think that would lead to downvotes. But if you preface your idea with "hey, here's something I thought of, dunno if it works, would appreciate feedback," I think that would be fine. What people respond negatively to, I think, is not wrongness so much as arrogant wrongness. (Or at least that appears to be what I respond negatively to.)
I doubt existing visitors to Less Wrong would be significantly more likely to generate crackpot ideas because of the existence of a thread
My model of the median LessWronger is closer to a crackpot than yours, maybe. Not that I think this is uniformly a bad thing; I have a vague suspicion that the brains of crackpots and the brains of curious, successful thinkers are probably pretty similar (e.g. because of stuff like this post). But it's easy to read the Sequences and think "man, I totally understand decision theory and also quantum mechanics now, I'm going to go off and have a bunch of ideas about them" and to be honest I don't want to encourage this.
I like this proposal. In the past, people (including me) have complained that LW doesn't get enough posts on topics where there's likely to be a lot of controversy or high variance in an item's score, 'cause people don't like getting downvoted more than they like getting upvoted.
Comment author:Epiphany
02 March 2013 05:00:51AM
*
1 point
[-]
Can anyone tell me the name of this subject or direct me to information on it:
Basically, I'm wondering if anyone has studied recent human evolution - the influence of our own civilized lifestyle on human traits. For example: For birth control pills to be effective, you have to take one every day. Responsible people succeed at this. Irresponsible people may not. Therefore, if the types of contraceptives that one can forget to use are popular enough methods of birth control, the irresponsible people might outnumber responsible people in a very short period of time. (Currently about half the pregnancies in the USA are unintended, and probably 40% of those pregnancies go full term and result in a child being born. As you can imagine, it really wouldn't take very long for the people with genes that can cause irresponsibility to outnumber the others this way...)
Any search terms? Anyone know the name of this topic or recall book titles or other sources about it?
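For a rough feel of the timescale involved, here is a toy haploid selection model (all parameters are illustrative, not empirical estimates): each generation, a trait at frequency p whose carriers average w times as many surviving children as non-carriers moves to p*w / (p*w + (1-p)).

```python
def trait_frequency(p0, fitness_ratio, generations):
    """Frequency of a heritable trait after repeated rounds of
    discrete selection, in a toy haploid model where carriers have
    `fitness_ratio` times as many surviving children as non-carriers.
    All inputs here are illustrative, not measured values."""
    p = p0
    for _ in range(generations):
        p = p * fitness_ratio / (p * fitness_ratio + (1 - p))
    return p
```

Starting from 20% carriers with a hypothetical 1.5x fertility edge, the trait passes 50% frequency within five generations, which is what "a very short period of time" looks like when the selection pressure is that strong; real fertility differentials are presumably much smaller and the trait only partly heritable.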
Comment author:Kaj_Sotala
02 March 2013 08:34:54AM
7 points
[-]
The 10,000 Year Explosion discusses the effects that civilization has had on human evolution in the last 10,000 years. (There's also this QA with its authors.) Not sure whether you'd count that as "recent".
Gregory Clark's work A Farewell to Alms discusses human micro-evolution taking place within the last few centuries, but is highly controversial (or so I hear).
Yeah, that's like saying you could domesticate foxes in less than a human generation, or have adult lactose tolerance increase from 0% to 99.x% in some populations in under 4,000 years. Does this guy think we're completely credulous?
Comment author:[deleted]
04 March 2013 04:29:10AM
1 point
[-]
The traits that I am aware of that show strong evolution all have had thousands of years to be selected for, like lactose tolerance in people descended from herders, resistance to high altitude with a hemoglobin change in Tibet, apparent sexual selection for blue eyes in Europeans and thick hair in East Asians, smaller stature in basically all long-term agriculturalist populations...
-Cellbioguy, elsewhere in thread.
I suspect you've misidentified his contention here; he clearly isn't claiming that humans haven't evolved within the Holocene.
Comment author:Kaj_Sotala
02 March 2013 06:20:20PM
1 point
[-]
I don't remember it doing so, but it's two years since I read it and I did so practically in one sitting, so I don't remember much that I wouldn't have written down in the post.
Comment author:Costanza
04 March 2013 05:11:22PM
0 points
[-]
The infamous Steve Sailer has written a lot about cousin marriage, which, in practice, seems to be correlated with arranged marriage in many cultures (including the European royals in past centuries). In practice, a lot of arranged marriage may lead to inbreeding, with the genetic dangers that follow.
I'm also wondering about the effects of anonymous sperm banks, where relatively well-off women may pay to choose a biological father on the basis of -- whatever available information they may choose to consider. What factors, in a man they will never meet, do they choose for their offspring?
I'm not a domain expert, but my standing assumption is that even the last few hundred years of human history were just too short to have a noticeable effect on allele frequencies. I would be very interested to hear evidence to the contrary, though.
Although a negative relationship between fertility and education has been described consistently in most countries of the world, less is known about the relationship between intelligence and reproductive outcomes. Also the paths through which intelligence influences reproductive outcomes are uncertain. The present study uses the NLSY79 to analyze the relationship of intelligence measured in 1980 with the number of children reported in 2004, when the respondents were between 39 and 47 years old. Intelligence is negatively related to the number of children, with partial correlations (age controlled) of −.156, −.069, −.235 and −.028 for White females, White males, Black females and Black males, respectively. This effect is related mainly to the g-factor. It is mediated in part by education and income, and to a lesser extent by the more “liberal” gender attitudes of more intelligent people. In the absence of migration and with constant environment, genetic selection would reduce the average IQ of the US population by about .8 points per generation.
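The abstract's last sentence compresses a standard quantitative-genetics calculation: weight each parent's IQ by their number of children, compare the offspring-weighted mean to the population mean (the selection differential), and scale by heritability (the breeder's equation). A toy simulation of that structure, where the fertility model, correlation strength, and heritability are illustrative assumptions of mine, not the NLSY79 figures:

```python
import random
import statistics

random.seed(0)

N = 100_000   # simulated parents (assumption)
H2 = 0.4      # assumed narrow-sense heritability of IQ

iqs, children = [], []
for _ in range(N):
    iq = random.gauss(100, 15)
    # Assumed fertility model: expected number of children falls slightly
    # with IQ, mimicking a weak negative IQ-fertility correlation.
    lam = max(0.1, 2.0 - 0.01 * (iq - 100))
    kids = sum(1 for _ in range(8) if random.random() < lam / 8)  # crude Poisson-ish draw
    iqs.append(iq)
    children.append(kids)

pop_mean = statistics.fmean(iqs)
offspring_mean = sum(i * k for i, k in zip(iqs, children)) / sum(children)
selection_differential = offspring_mean - pop_mean   # S in the breeder's equation
response = H2 * selection_differential               # predicted per-generation IQ shift
print(round(selection_differential, 2), round(response, 2))
```

With these made-up numbers the predicted per-generation shift comes out as a fraction of an IQ point, the same ballpark as the abstract's 0.8-point figure; the point is the structure of the estimate, not the exact value.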
Recent experiences have suggested to me that there is a positive correlation between rationality and prosopagnosia. One hypothesis is that dealing with prosopagnosia requires using Bayes to recognize people, so it naturally provides a training ground for Bayesian reasoning. But I'm curious about other possible hypotheses as well as additional anecdotal evidence for or against this conclusion.
I learned that a surprising number of people involved with CFAR / MIRI have prosopagnosia. (Well, either that or I'm miscalibrated about the prevalence of prosopagnosia.)
I know 4 (I think?) people with prosopagnosia and maybe 800 people total, so my first guess is 0.5%. Wikipedia says 2.5% and the internet says it's difficult to determine the true prevalence because many people don't realize they have it (generalizing from one example, I assume). The observed prevalence in CFAR / MIRI is something like 25%?
So another plausible hypothesis is that rationalists are unusually good at diagnosing their own prosopagnosia and the actual base rate is higher than one would expect based on self-reports.
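How surprising a 25% observed rate would be under either base rate can be checked with a binomial tail probability. The group size and count below are hypothetical stand-ins, since the comment doesn't give exact numbers:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 5 prosopagnosics observed out of 20 people at CFAR/MIRI.
p_wiki = binom_tail(20, 5, 0.025)   # under Wikipedia's 2.5% prevalence
p_self = binom_tail(20, 5, 0.005)   # under the 0.5% personal estimate
print(f"{p_wiki:.2e} {p_self:.2e}")
```

Under either base rate the tail probability is tiny, so if the counts are anywhere near these, something beyond sampling noise (selection, better self-diagnosis, or a real correlation) is going on.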
Comment author:erratio
08 March 2013 03:10:02AM
1 point
[-]
Theory off the top of my head: The causation is in the wrong direction. People who are rational are far more likely to be very systems-oriented, to have had limited social experiences as children (by having different interests and/or being too dang smart), to be highly introverted, and to have other traits that correlate with being around other people a lot less than the typical person. There's nothing wrong with our hardware per se; it's just that we missed out on critical training data during the learning period.
Anecdotal: I have mild prosopagnosia. I have a lot of trouble recognising people outside their expected context, and I make heavy use of non-facial cues. I'm pretty good at putting specific names to specific faces on demand when it feels important enough, although see the previous point about expected context. I don't feel like I use anything resembling Bayesian reasoning; I feel like I have the same sense of recognition that I imagine most people have, it's just less dependent on seeing their face and more on other traits (most typically voice and manner of movement).
Comment author:[deleted]
04 March 2013 09:16:29PM
1 point
[-]
Has anyone indexed the set of Five-Second Skill posts on Less Wrong? E.g. Get Curious, the Algorithm for Beating Procrastination, Value of Information etc.
Comment author:gwern
04 March 2013 08:34:27PM
*
1 point
[-]
I've been working on a little project compiling Touhou music statistics. One major database may be unavailable to me from anywhere but Toranoana, and the total cost of reshipping will be ~$25 and take several weeks to get to me. This would be annoying, expensive, and slow.
In case my other strategies fail, are there any LWers in Japan who either owe me a favor or are willing to do me a favor in buying a CD off Tora and sending me the spreadsheets etc on it? (I'd be happy to cover the purchase cost with Paypal or a donation somewhere or something.)
Comment author:FiftyTwo
04 March 2013 03:33:57PM
1 point
[-]
Does anyone have sources on active steps that can be taken to improve gender diversity in organisations?
There is a lot of writing on the subject, but I'm finding it difficult to find sources that compare the effectiveness of different measures, with figures showing change, controlling for variables etc.
Comment author:Viliam_Bur
05 March 2013 08:30:02AM
*
0 points
[-]
I would like to see the results too, but I doubt they exist (beyond the obvious: if you want to have 50% male and 50% female employees, make an internal rule to hire 50% men and 50% women).
Beyond evidence... my heuristic would be to start the organization with gender diversity. It should be easier to find e.g. 3 men and 3 women to start an organization than to have an organization of 100 men and later think about how to make it more friendly for women.
EDIT: Also, you should not have a bottom line already written that a 50:50 ratio is an improvement. People do have different preferences. A ratio other than 50:50 might reflect the true level of interest in the base population.
Comment author:wedrifid
05 March 2013 10:22:43AM
1 point
[-]
I would like to see the results too, but I doubt they exist (beyond the obvious: if you want to have 50% male and 50% female employees, make an internal rule to hire 50% men and 50% women).
To be precise: Hire in the direction of 50% men and 50% women. Depending on retention rates this may need to be skewed in either direction.
Comment author:Manfred
02 March 2013 11:52:30AM
*
1 point
[-]
Quick clarification of Eliezer's Mixed Reference, intended for me from twelve hours ago:
'External reality' is assumed to mean the stuff that doesn't change when you change your mind about it. This is a pretty good fit to what people mean when they say something like "exists" and didn't preface it with "cogito ergo." It's what can be meaningfully talked about if the minds talking are close enough that "change your mind" is close to "change which mind."
External reality can be logical, because the trillionth digit of pi doesn't change even if you change your mind about it. Or it can be physical, because dogs don't disappear if you decide there are no dogs nearby. ("why do dogs / suddenly disappear / every time / you are near.")
If we look inside people's heads, logical external reality seems to be universal and specific - minds are computation, and so if you can do some fairly general stuff like labeling the output of an algorithm you haven't evaluated yet, you can have logical "external reality," which now appears to be somewhat of a misnomer, but oh well. "Stuff that doesn't change when you change your mind about it" is still too long.
Physical reality, on the other hand, is much more general and contingent - it's just a catch-all term for "hey, I know we're a mind and have logical reality and that good stuff, but there happens to be a world out here!" In fact, it's tempting to just say "if it doesn't change when you change your mind and it's not a logical thing, it's a physical thing." The label external reality might make more sense being applied to this stuff, since "physical" carries some connotation that isn't necessarily accurate.
Comment author:beoShaffer
01 March 2013 10:38:38PM
1 point
[-]
I remember seeing something about Islamic law and the ability to will money to charities meant to exist in perpetuity, and now I can't find it. Does anyone know what I'm talking about?
Comment author:thomblake
28 March 2013 02:07:14AM
0 points
[-]
I am in Berkeley for a few days, primarily Thursday, March 28th. Please text me at 203-710-5337 if you'd like to catch up or have any ideas for a thing I shouldn't miss.
Comment author:MileyCyrus
14 March 2013 08:30:24PM
*
0 points
[-]
If computer hardware improvement slows down, will this hasten or delay AGI?
My naive hypothesis is that if hardware improvement slows, more work will be put into software improvement. Since AGI is a software problem, this will hasten AGI. But this is not an informed opinion.
Comment author:Thomas
09 March 2013 02:41:29PM
0 points
[-]
I've just learned that if it is July or a later month, it is more probable that the current year has begun with Friday, Sunday, Tuesday or Wednesday. If it is June or an earlier month, it is more probable that the current year has begun with Monday, Saturday or Thursday.
Comment author:Thomas
11 March 2013 01:53:48PM
*
1 point
[-]
There are more days in July to December, than in January to June. So it is a little more likely for a random observer to find himself in the later 6 months.
But if he finds himself before July, it is more likely that it is a leap year, with an additional day, than it otherwise would be.
This increased probability for a leap year skews the probability distribution for the first day of the year also.
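The claim is checkable by brute force over one full 400-year Gregorian cycle (after which the calendar repeats exactly), weighting each year's start weekday by how many days a uniformly random observer would spend in each half of that year. A sketch:

```python
from datetime import date
from collections import Counter

first_half = Counter()   # observer-days in Jan-Jun, keyed by the year's start weekday
second_half = Counter()  # observer-days in Jul-Dec

for year in range(2000, 2400):  # one full 400-year Gregorian cycle
    start = date(year, 1, 1).strftime("%A")
    leap = (year % 4 == 0 and year % 100 != 0) or year % 400 == 0
    first_half[start] += 182 if leap else 181   # Jan-Jun day count
    second_half[start] += 184                   # Jul-Dec day count (same either way)

total = sum(first_half.values()) + sum(second_half.values())
for day in sorted(second_half, key=second_half.get, reverse=True):
    p2 = second_half[day] / sum(second_half.values())
    p1 = first_half[day] / sum(first_half.values())
    print(f"{day:9s} P(start|Jul-Dec)={p2:.4f}  P(start|Jan-Jun)={p1:.4f}")
```

Since Jul-Dec has 184 days regardless of leapness, conditioning on the second half just recovers the overall distribution of year-start weekdays, while conditioning on the first half slightly boosts the weekdays over-represented among leap-year starts.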
Comment author:[deleted]
12 March 2013 01:30:29PM
0 points
[-]
Correct me if I'm wrong, but isn't the probability of a year being a leap year approximately 25%, completely independent of what month it is? (This seems like one of those unintuitive-but-correct probability puzzles...)
Comment author:Kawoomba
12 March 2013 02:09:07PM
2 points
[-]
For all intents and purposes, yes. Well, for nearly all intents and purposes, since there is in fact a very slight difference:
Imagine the year only had 2 months, PraiseKawoombaMonth, and KawoombaPraiseMonth, each of those having 30 days. However, every other year the first month gets cut to 1 day to compensate for some unfortunate accident involving shortening the orbital period. Still, for any given year the probability of being a leap year is 50%.
Now you get woken from cryopreservation (high demand for fresh slaves) and, asking what time it is, only get told it's PraiseKawoombaMonth (yay!). This is strong evidence that you are in one of the equal-month years, since otherwise it would be very unlikely for you to find yourself in PraiseKawoombaMonth.
Snap, back to reality: Same thing if you're told it's August, the chance of being in August at any given time is lower in a leap year, since the fraction of August per year is lower. There's just more February to go around!
Sorry for the quality of the explanation. It's the only way I can explain things to my kids.
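The real-calendar version of this effect can be pinned down exactly over the 400-year Gregorian cycle (97 leap years out of 400, 146,097 days in total). A sketch of the conditional probabilities for a uniformly random day:

```python
# Days per month in a common year; February gains one day in a leap year.
COMMON = dict(Jan=31, Feb=28, Mar=31, Apr=30, May=31, Jun=30,
              Jul=31, Aug=31, Sep=30, Oct=31, Nov=30, Dec=31)

LEAP_YEARS, COMMON_YEARS = 97, 303                    # per 400-year Gregorian cycle
TOTAL_DAYS = LEAP_YEARS * 366 + COMMON_YEARS * 365    # 146,097

def p_leap_given_month(month):
    """P(current year is leap | uniformly random day falls in this month)."""
    leap_days = (COMMON[month] + (1 if month == "Feb" else 0)) * LEAP_YEARS
    common_days = COMMON[month] * COMMON_YEARS
    return leap_days / (leap_days + common_days)

p_any = LEAP_YEARS * 366 / TOTAL_DAYS   # random day, no month information
print(f"P(leap)        = {p_any:.4f}")
print(f"P(leap | Feb)  = {p_leap_given_month('Feb'):.4f}")
print(f"P(leap | Aug)  = {p_leap_given_month('Aug'):.4f}")
```

Being told it's August gives P(leap) = 97/400 exactly, very slightly below the day-weighted unconditional probability, while being told it's February nudges it up; the differences sit in the third decimal place, which is why the effect feels negligible "for nearly all intents and purposes."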
Why am I not signed up for cryonics?
Here's my model.
In most futures, everyone is simply dead.
There's a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.
What are the relative sizes of those slivers, and how much more likely am I to be revived in the "better" futures than in the "worse" futures? I really can't tell.
I don't seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.
I bet the hassle and cost of cryonics disincentivizes me, too, but when I boot up my internal simulator and simulate a world where cryonics is free, and obtained via a 10-question Google form, I still don't sign up. I ask to be cremated instead.
Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fundamental error in my model.
This does, however, put me into disagreement with both Robin Hanson ("More likely than not, most folks who die today didn't have to die!") and Eliezer Yudkowsky ("Not signing up for cryonics [says that] you've stopped believing that human life, and your own life, is something of value").
So are you saying the P(worse-than-death|revived) and the P(better-than-death|revived) probabilities are of similar magnitude? I'm having trouble imagining that. In my mind, you are most likely to be revived because the reviver feels some sort of moral obligation towards you, so the future in which this happens should, on the whole, be pretty decent. If it's a future of eternal torture, it seems much less likely that something in it will care enough to revive some cryonics patients when it could, for example, design and make a person optimised for experiencing the maximal possible amount of misery. Or, to put it differently, the very fact that something wants to revive you suggests that that something cares about a very narrow set of objectives, and if it cares about that set of objectives it's likely because they were put there with the aim of achieving a "good" outcome.
(As an aside, I'm not very averse to "worse-than-death" outcomes, so my doubts definitely do arise partially from that, but at the same time I think they are reasonable in their own right.)
Yes. Like, maybe the latter probability is only 10 or 100 times greater than the former probability.
This seems strangely averse to bad outcomes to me. Are you taking into account that the ratio between the goodness of the best possible experiences and the badness of the worst possible experiences (per second, and per year) should be much closer to 1:1 than the ratio of the most intense per second experiences we observe today, for reasons discussed in this post?
Why should we consider possible rather than actual experiences in this context? It seems that cryonics patients who are successfully revived will retain their original reward circuitry, so I don't see why we should expect their best possible experiences to be as good as their worst possible experiences are bad, given that this is not the case for current humans.
For some of the same reasons depressed people take drugs to elevate their mood.
I like that post very much. I'm trying to make such an update, but it's hard to tell how much I should adjust from my intuitive impressions.
I responded to this as a post here: http://lesswrong.com/r/discussion/lw/lrf/can_we_decrease_the_risk_of_worsethandeath/
Whoa. What? I notice that I am confused. Requesting additional information.
Most of the time, if I read something like that, I'd assume it was merely false—empty posturing from someone who didn't understand the implications of what they were writing. In this case, though... everything else I've seen you write is coherent and precise. I'm inclined to believe your words literally, in which case either A) I'm missing some sort of context or qualifiers or B) you really ought to see a therapist or something.
Do you mean you're not averse to death decades from now? Does that feel different from the possibility of getting hit by a bus next week?
(Only tangentially related, but I'm curious: what's your order of magnitude probability estimate that cryonics would actually work?)
No, I'm sorry, but there are simply many atheists who really aren't that scared of non-existence. We don't seek it out, we do prefer continuation of our lives and its many joys, but dying doesn't scare the hell out of us either.
This, in me at least, has nothing to do with depression or anything that requires therapy. I'm not suicidal in the least, even though I'd be scared of being trapped in an SF-style dystopia that didn't allow me to commit suicide.
What's that quote that says something to the effect of "I didn't exist for billions of years before I was born, and it didn't bother me one bit"?
“I do not fear death. I had been dead for billions and billions of years before I was born, and had not suffered the slightest inconvenience from it.” ― Mark Twain
Sorry, I just meant that I seem to be less averse to death than other people. I'd be very sad to die and not have the chance to achieve my goals, but I'm not as terrified of death as many people seem to be. I've clarified the original comment.
If you assign a high probability to these bad futures arriving before you retire, that belief reduces the effective cost of cryonics to you, since the opportunity cost of diverting the money from retirement accounts is lower.
In the really bad futures you probably don't experience extra suffering if you sign up for cryonics because all possible types of human minds get simulated.
A new comet from the Oort cloud, >10 km wide, has been discovered that is doing a flyby of Mars in October of 2014. The current orbit is rather uncertain, but it is probably passing within 100,000 km, and the max likelihood is ~35,000 km. There is a tiny but non-negligible chance this thing could actually hit the red planet, in which case we would get to witness an event on the same order of magnitude as the K-T event that killed off the non-avian dinosaurs! (and lose everything we have on the surface of the planet and in orbit.)
I, for one, hope it hits. That would not be a once in a lifetime opportunity. That would be a ONCE IN THE HISTORY OF HOMINID LIFE opportunity! We would get to observe a large impact on a terrestrial body as it happened and watch the aftermath as it played out for decades!
As it is, the most likely situation is one in which we get to closely sample and observe the comet with everything we have in orbit around Mars. The orbit will be nailed down better in a few months, when the comet comes out from the other side of the sun.
And to quote myself towards the end of the last open thread:
I saw a mention of that elsewhere, but I didn't realize that the core had a lower bound of 10km. Wow. I really hope it impacts too; we saw some chatter about the need for a space guard with a dinky little thing hitting Chelyabinsk, but imagine the effect of watching a dinosaur-killer hit Mars!
For future reference, the JPL small body database entry on the comet:
http://ssd.jpl.nasa.gov/sbdb.cgi?sstr=C%2F2013%20A1;orb=1;cov=0;log=0;cad=1;rad=0#cad
Different sources seem to have different orbital calculations; this one indicates a most likely close approach of ~100,000 kilometers, with the uncertainty wide enough to include a close approach of 0 km.
If nothing else, we very well may get pictures from the surface rovers of the head of a comet literally filling the sky.
I am flabbergasted, I have no explanation for this situation.
If this comet is really that big and has approximately said flyby orbit, how frequent are those? If one every thousand years, there have been about 60,000 of them since the K-T event. How come we have had only one collision of this magnitude?
Maybe they are less frequent. How lucky we are then to witness one of them right now? Too lucky, I guess.
On the other hand, it looks like we are just too lucky to have had no major collision of that kind relatively recently, if they were quite common.
Maybe I am missing something odd, like an unexpected gravitational or other effect by which an actual collision is much more difficult. Something that makes sense, but only after careful consideration.
Maybe a planet like Mars or Earth repels comets somehow? Dodges them somehow? Some weird effect like this?
I recommend Taleb's The Black Swan. The major premise is that people tend to underestimate the likelihood of weird events. It's not that they can predict any particular weird event, it's about overall likelihood of weird events with large consequences.
Another way of stating it in this circumstance: there are so many different things that we would consider ourselves lucky to see or that we would notice as unusual that even if the probability of any one of them is low the probability that we see something isn't that low.
I second the book recommendation by the way.
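The "probability that we see something" point is easy to quantify. With made-up numbers (the event count and per-event probability below are illustrative assumptions, not estimates of anything real):

```python
# Assumption: 1,000 independent kinds of "remarkable" events, each with a
# 1-in-1,000 chance of occurring during an observer's lifetime.
m, p = 1000, 1e-3
p_at_least_one = 1 - (1 - p) ** m   # P(at least one remarkable event)
print(f"{p_at_least_one:.3f}")       # close to 1 - 1/e
```

Each individual event is a thousand-to-one shot, yet the chance of witnessing at least one of them is nearly two in three.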
Flabbergasted no more! There was no collision, of course.
Should have known it, immediately!
If you are randomly shooting a rock through the solar system, "close approach of mars within 100,000 km" is 870 times as likely as "hitting mars". That brings a 'once in 100 million years (really roughly guessing based on what I know of earth's geological history)' event down to the order of 'once in a hundred thousand years', and the proper reference class of things we would be considering ourselves this lucky to see is probably more like 'close approach of a large comet to a terrestrial body' rather than singling out mars in particular. I don't know enough about distributions of comet orbital energies to consider different likelihoods of comets having parabolic orbits that bring them closer to the center of the solar system versus further away to compare the odds of things going near the different terrestrial planets with different orbits.
The gravity of a planet actually slightly increases the fraction of randomly-shot-past-them objects that hit them over just sweeping out their surface area through space, but for something with a relative velocity of 55 km/s (!) that effect is tiny.
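Both numbers in this comment follow from simple geometry: the likelihood ratio is just the ratio of target cross-sections, (miss distance / planet radius)², and gravitational focusing multiplies the effective collision cross-section by 1 + (v_esc/v_∞)². A sketch, using standard values for Mars's radius and escape velocity and the figures from the comment:

```python
R_MARS = 3389.5    # km, mean radius of Mars (standard value)
V_ESC = 5.03       # km/s, escape velocity at Mars's surface (standard value)
MISS = 100_000     # km, close-approach distance from the comment
V_INF = 55.0       # km/s, comet's relative velocity from the comment

# How many times more likely "passes within MISS km" is than "hits Mars":
area_ratio = (MISS / R_MARS) ** 2
# Gravitational focusing: enhancement of the effective collision cross-section
focusing = 1 + (V_ESC / V_INF) ** 2
print(f"area ratio ~{area_ratio:.0f}, focusing factor {focusing:.4f}")
```

The area ratio lands right on the quoted ~870, and the focusing factor is under 1%, confirming that at 55 km/s the gravity correction really is tiny.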
Should we bring Shoemaker-Levy into this discussion?
If so, we are indeed very lucky to observe an event which happens every 100,000 years or so.
OTOH, I've concluded that it is in fact less likely for a planet to be hit by a random comet than it is for a big massless balloon of the same size to be hit by the same comet.
Why is that? Roughly speaking, if the comet is heading toward some future geometric meeting point, the planet accelerates it by its own gravity, so the comet arrives too early and therefore flies by. It's a very narrow set of circumstances in which an actual collision takes place.
A bit counterintuitive, but it explains why we have so few actual collisions despite the heavy traffic. Collisions do happen, but less often than random chance would suggest. Gravity mostly protects us.
Zeo Inc is almost certainly shutting down.
Zeo users should assume the worst and take action accordingly:
I'm sad that they're closing down. I've run so many experiments with my Zeo, and there don't seem to be any successor devices on the horizon: all the other sleep devices I've read of are lame accelerometer-based gizmos.
I'm sad about this as well. The Zeo has been the only QS thing that I've been able to get my girlfriend to use, and it has increased her understanding of her sleep patterns dramatically.
I now look back with a twinge of anger at all the times that someone told me that they track their stages of sleep too, but with their iphone app and "it was only a dollar."
And to be clear, you can only upgrade the firmware on the Zeo bedside unit, right?
I don't know anything about the mobile unit.
What about aspiring Zeo users? Is it too late to get in on this?
Depends. If you know that it's shutting down, are willing to handle the data exporting yourself, and also are willing to possibly pay rising costs for a Zeo unit and replacement headbands...
I know I don't intend to stop (already bought another 3 replacement headbands on Amazon), but I've already used my Zeo for a long time and seem to be pretty unusual in how much I use it.
Thanks for the heads up.
The firmware is no longer available on their site. I tried to email them, but I got an automated response telling me that customer service is no longer responding to emails and to check the help on their site. Can anyone share the 2.6.3R firmware?
Also, Amazon is sold out of the bedside headbands. Bad timing for me - I only have one left.
I am not sure whether there was not some per-user customization or something, but for what it's worth, here's the copy of my firmware: http://dl.dropbox.com/u/85192141/firmware-v2.6.3R-zeo.img
2 or 3 days after I went around all Paul Revere-style, I was told that Amazon had run out. So I guess they turned out to not have many at all. (I had 3 left over from previously, and bought another 3, so I figure I should be able to get at least 3 more years out of my Zeo.)
Google Reader is being killed 1 July 2013. Export your OPML and start searching for a new RSS reader...
I finally just started using RSS feeds and it has improved my workflow dramatically. Now they're breaking my system on me?! Thanks for letting me know...
Do you suggest any particular RSS readers?
I'm already considering moving to email and running the whole thing on my home server.
No. I've seen Newsblur and Netvibes mentioned but I've never used them. Some discussion in
Meh, I guess we have a few months to see people's reports on the alternatives.
Don't waste time on researching today's best feed reader. That data will be obsolete in 3 months. - LifeProTips @ Reddit
At least in the case of NewsBlur we'll have to wait to see people's reports, since they are being hammered by all the Reader refugees.
i'm happy w/ feedly and haven't been asked for money yet (have only used for 2 days)
I've imported my feeds into Google Currents, since it can also be used to read regular news, not just feeds, which I do, anyway. Trying it out now, hopefully Google will be improving it, if they want the reader users to stay with Google.
Update: So far Google Currents sucks for feeds. Totally unintuitive layout and gestures, does not show new feeds (or I cannot find where it does), the formatting of several items is so poor, I give up and go to the original site. Switching back to Google Reader until something better comes along.
I posted this in the waning days of the last open thread, but I hope no one will mind the slight repeat here.
The last Dungeons and Discourse campaign was very well-received here on Less Wrong, so I am formally announcing that another one is starting in a little while. Comment on this thread if you want to sign up.
A call for advice: I'm looking into cognitive behavioral therapy—specifically, I'm planning to use an online resource or a book to learn CBT methods in hopes of preventing my depression from recurring. It looks like these methods have a good chance of working, although the evidence isn't as strong as for in-person CBT. At this point, I'm trying to decide which resources to learn from. Any recommendations or anecdotes would be appreciated.
My wife's a psychologist and depression is one of her specialties. Here are her recommendations:
Self-Therapy for Your Inner Critic book
Free guided meditations for "The Mindful Way Through Depression" (get some practice before using "working with difficulty" meditation): streamable or downloadable
And the associated book
Please let us know how it goes.
I've had success with Introducing Cognitive Behavioural Therapy: A Practical Guide
I recommend Feeling Good by David Burns. It's a very good overview of CBT, covers all types of medication, and was also recommended by Lukeprog IIRC.
I am also interested in learning more about CBT.
For various reasons, I cannot make open threads anymore, ever again.
Message acknowledged. We appreciate your good work. And godspeed, Grognor.
El psy congroo.
Over the past month, I have started taking melatonin supplements, instigated a new productivity system, implemented significant changes in diet and begun a new fitness routine. February is also a month where I anticipate changes in my mood. I find myself moderately depressed and highly irritable with no situational cause, and I have no idea which of these things, if any, are responsible.
This is not ideal.
I'd been considering breaking my calendar down into two-week blocks, and staging interventions in accordance with this. Then the restless spirit of Paul Graham sat on my shoulder and told me to turn it into an amazing web service that would let people assign themselves into self-experimental cohorts, where they're algorithmically assigned to balanced blocks so that effects of overlapping interventions can be teased apart.
I've never really gotten that into the whole Quantified Self thing, but I'd be keen to see if something like this existed already. If not, I'd consider putting such a thing together.
Any discussion/observations on this general subject?
So it's a web service that would spit out a random Latin square and then run ANOVA on the results for you?
I don't think I've heard of such a thing. (Most people who would follow the balanced design and understand the results are already able to do it for themselves in R/Stata/SPSS etc.) Statwing.com might have something useful, they seemed to be headed in that direction of 'making statistics easy'.
I was imagining a site that would look at all the different things you're trying at the moment, look at all the things other people are trying, and give you a macro-schedule for starting them that works towards establishing cyclicality across all users.
It could also manage your micro-schedule, (prompt you to take a pill, do twenty sit-ups, squirt cold water in your right ear, etc.), ask for metrics and let users log salient information and observations. Come to think of it, once that infrastructure is already in place, there's no reason you couldn't open it up as a platform for more legitimate and formal trials.
Mm. So not just scheduling your own interventions but try to balance across users too... No, I don't know of anything like that. CureTogether actually got some research published, but I don't think randomization or balancing was involved. (And trying to get nootropics or self-help geeks to collectively do something is like trying to herd deaf cats into pushing wet spaghetti...)
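A minimal sketch of the balanced-design idea gwern alludes to: a cyclic Latin square assigns each cohort every intervention, each in a different period, so period effects and intervention effects can later be separated by ANOVA. The function name and intervention labels below are my own invention:

```python
def latin_square(treatments):
    """Cyclic Latin square: row i is the treatment list rotated by i positions."""
    n = len(treatments)
    return [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]

interventions = ["melatonin", "new diet", "fitness routine", "control"]
square = latin_square(interventions)
for cohort, schedule in enumerate(square):
    print(f"cohort {cohort}: " + " -> ".join(schedule))
```

Each row is one cohort's schedule over four periods; every intervention appears exactly once in each row and each column, which is what makes the later analysis tractable.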
When I found myself depressed and irritable on a diet, it seemed to be evidence that I was hungry. Is there any food or drink that you can try consuming to stave off that feeling, while still following the diet? As an example my diet allowed me to consume unlimited amounts of unprocessed fruit, so if I felt depressed and irritable, I could eat that until I felt better, and not hurt my diet at all.
As you seem to recognize in your reply to Gwern, this probably cannot function as a stand-alone feature, but needs to sit atop a Quantified Self platform. The minimal system is one that just keeps track of your data, while making data entry easier than existing systems. The next step is to figure out what things you're tracking correspond to what things I'm tracking. This is difficult to combine with the flexibility of allowing the tracking of anything.
Why haven't you gotten into the Quantified Self thing? At the very least, they probably have better answers to this question.
Quantified Self seems like one of those things you have to be into, and I'm just not that into it.
It seems to me that a lot of the QS-types take an almost recreational pleasure in what they're doing. I understand that. I get a similar sort of pleasure from other things, but not this. I'd like the information, but there's only so much effort I'm prepared to spend on getting it.
It seems plausible to me that traditional financial advice assumes that you have traditional goals (e.g. eventually marrying, eventually owning a house, eventually raising a family, and eventually retiring). Suppose you are an aspiring effective altruist and willing to forgo one or more of these. How does that affect how closely your approach to finances should adhere to traditional financial advice?
I would say that at the beginning you have to make a choice -- will you contribute financially or personally?
If you want to contribute financially, you simply want to maximize your income, minimize your expenses, and donate the money to effective charities. (You only minimize your expenses to the level where it does not hurt your income. For example if keeping the high income requires you to have a car and expensive clothes, then the car and clothes are necessary expenses. Also you need to protect your health, including your mental health: sometimes you have to relax to avoid burning out.) Focus on your professional skills and networking.
If you want to contribute personally, you need to pay your living expenses, either from donated money, or by retiring early (the latter is probably less effective). Focus on social skills and research.
The house and family seem unnecessary (at least for the model strawman altruist).
So apparently I should be somewhat concerned about dying by poisoning. Any simple tips for avoiding this? It looks like the biggest killers are painkillers and heavy recreational drugs, neither of which I take, so I might be safe.
Put your poisson control center on speeddial?
They can't do anything but advise you to lower your lambda!
I finished Coursera "Data Analysis" last night. (It started back in January.)
It's basically "applied statistics/some machine learning in R": we get a quick tour of data clean up and munging, basic stats, material on working with linear & logistic models, use of common visualization and clustering approaches, prediction with linear regression and trees and random forests, then uses of simulation such as bootstrapping.
There's a lot of material to cover, and while there are plenty of worked-out examples in the lectures, I don't see anyone learning R or statistics just from this course - you should definitely have used R to some degree before (at least running some t-tests or graphs), and you will definitely benefit from already knowing what a p-value is and how you would calculate it by hand (because eg. you'll be flummoxed when the lecturer Leek works out a confidence interval 'by hand' while coding - "where does this magic value 1.96 come from?!").
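For anyone stuck on that 1.96: it's the 97.5th percentile of the standard normal distribution, so ±1.96 standard errors covers 95% of the mass. A minimal Python sketch of the 'by hand' calculation (the data here is invented purely for illustration):

```python
import math
import statistics

def confidence_interval_95(xs):
    """95% confidence interval for the mean, normal approximation.

    The 'magic' 1.96 is the 97.5th percentile of the standard
    normal: 95% of its mass lies within +/- 1.96 standard errors.
    """
    mean = statistics.mean(xs)
    se = statistics.stdev(xs) / math.sqrt(len(xs))  # standard error
    z = 1.96
    return (mean - z * se, mean + z * se)

data = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1]
lo, hi = confidence_interval_95(data)
```

(This is the simple z-interval; for small samples the course would presumably use a t-quantile instead, which is where the number stops being exactly 1.96.)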
On the plus side, I liked all the examples, and the curriculum seems useful and well-chosen. It's a reasonable introduction to 'data science'. I think my time wasn't wasted doing this Coursera: I'm more comfortable with some of the more advanced/exotic techniques, and picked up many R tips, some of which have come in handy already (eg. some of the data munging tips were useful in working with Touhou music data, and I've been able to replace all my homebrew Haskell multiple-correction code in various nootropics & Zeo experiments with a standard R library function, p.adjust, which I had no idea existed until the lecture on multiple comparisons introduced it to me) - although as of yet I have not used bootstraps or random forests* or splines in anger. (But if anyone is thinking about doing it in the future, see my comment about the prerequisites.)

On the negative side: like most of the other students, I think this should've been a longer course than 8 weeks, and the estimate of 5hrs/wk is misleading. The pace was very unforgiving. I was relatively well-prepared for this course, but I still wound up submitting, for the second data analysis assignment, a paper I think was very substandard. Why? Well, though we had two weeks or so to do it, I deliberately didn't do much work on it in the first week, because in the first assignment you couldn't do a good job without the lectures from the week before the assignment was due, and I didn't want to get bushwhacked again; but in the actual week before, I got completely distracted by my Touhou music project, and so I wound up just not having the time or energy to do it. Similar things happened to a lot of other students: there was no slack or recovery time.
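For readers outside R-land: p.adjust is R's built-in multiple-comparison correction. As a rough illustration of what it does (this is a pure-Python sketch of two of its methods, not gwern's actual code):

```python
def p_adjust(pvals, method="bonferroni"):
    """Rough re-implementation of two methods of R's p.adjust.

    bonferroni: multiply each p-value by the number of tests.
    BH:         step-up false-discovery-rate procedure of
                Benjamini & Hochberg.
    """
    m = len(pvals)
    if method == "bonferroni":
        return [min(1.0, p * m) for p in pvals]
    if method == "BH":
        # sort ascending, then enforce monotonicity from the top down
        order = sorted(range(m), key=lambda i: pvals[i])
        adjusted = [0.0] * m
        running_min = 1.0
        for rank in range(m, 0, -1):
            i = order[rank - 1]
            running_min = min(running_min, pvals[i] * m / rank)
            adjusted[i] = running_min
        return adjusted
    raise ValueError("unknown method: %s" % method)

raw = [0.01, 0.04, 0.03, 0.20]
```

Running four tests at once, `p_adjust(raw)` inflates each p-value fourfold, which is why a result that looks significant in isolation can stop being so after correction.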
(There were also the usual teething problems of any new course: wrong or misleading quizzes, errors in lectures, that sort of thing. The peer review grading seems particularly poor, with the required grades being based on pretty superficial aspects of the submitted analyses.)
* EDIT: I have since employed random forests or bootstrapping in http://www.gwern.net/Weather , http://www.gwern.net/hpmor , & http://www.gwern.net/Google%20shutdowns
So you guys remember soylent? I was thinking I could get similar benefits blending simple foods and adding a good multivitamin to fill in any gaps.
So I've worked on it on and off for a couple of days, and here is a shot at what a whole food soylent might contain:
http://nutritiondata.self.com/facts/recipe/2786310/2
So, um, if anybody wants to confirm or critique this, that would be cool.
I like this approach more, because... I would be more likely to try that at home.
Most of the items are easy to buy anywhere. I would have the most trouble getting the following: Body Fortress whey protein, Jamba Juice beverage, Source of Life Liquid Gold. Could they be replaced with something more generic?
Also, eating the raw egg feels like a bad idea.
Without these ingredients, I would be very likely to try it now.
Phone isn't letting me press edit - I'll probably cook the egg. Don't want the raw whites to bind the biotin.
I had Body Fortress at home, and Jamba Juice was on the website. Just use some kind of wheatgrass and whey protein. Doesn't have to be Source of Life either, as long as it's high quality. I've seen Ortho Core and Orange Triad recommended on bodybuilding forums. The whole recipe is suggestions anyway. I also see no reason not to, say, use kale and raspberries instead of spinach and blueberries; maybe that will help if I get bored with the taste. I hope you keep me posted if you try this.
I was taking a friend's word on how amazingly beneficial wheatgrass juice is, until he claimed I could get everything I needed from wheatgrass indefinitely, which seemed outright crazy. So I researched it myself, and I didn't find compelling evidence that it's any more beneficial than normal vegetables. I have some in my freezer, so I'm going to use it, but unless you have a cheap source I don't think it's worth it, given that it tends to be expensive and taste like lawn clippings. This is embarrassing.
Approximately how much does this cost per day? How does it taste?
I'll let you know in a little bit by editing this to answer your question, because I haven't tried it yet
I've just noticed that hovering the mouse pointer over a post's or comment's score now displays a balloon pop-up with information about what percentage of votes were positive. New feature, or am I just really bad at noticing black stuff suddenly appearing on my screen?
Anyway, it's pretty nice. You can, for example, upvote a comment from 0 to 1, notice that the positive vote ratio changes only by a few percent and suddenly realize that there's a war going on in there.
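To make that inference concrete: from how much the positive-vote ratio moves when you add a single upvote, you can back out roughly how many votes a comment already had. A toy sketch (the percentages here are invented for illustration):

```python
def total_votes(ratio_before, ratio_after):
    """Infer the number of votes on a comment from how much the
    %-positive tooltip moves after you add a single upvote.

    With u upvotes out of n votes, u/n = ratio_before and
    (u + 1)/(n + 1) = ratio_after; solving for n gives the formula.
    """
    return (1 - ratio_after) / (ratio_after - ratio_before)

# A comment sitting at 0 karma showing 50% positive that only moves
# to 52% after your upvote: many votes are hiding behind that zero.
n = total_votes(0.50, 0.52)
```

Here n comes out to 24 votes, i.e. a 12-up/12-down tug of war behind an innocent-looking score of zero.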
New feature.
Does anyone know which of the books on the academic side of CFAR's recommended reading list are likely to be instrumentally useful to someone who's been around here a couple years and has read most of the Sequences? It seems likely that there's some useful material in there, but I'd rather avoid reviewing a bunch of stuff.
Gamification of productivity: https://habitrpg.com/splash.html
I haven't signed up yet because I'm still assessing whether the overhead of filling it out is going to be too much of a trivial inconvenience, but thought some others might be interested. From poking around, it looks like it has a lot of potential but is still a little raw. It has the core game elements firmly in place but lacks the public status/accountability elements of good games (achievements/badges) and of Fitocracy (community/public accountability).
UPDATE: signed up, will report back next month
I've been using it for something like a week and am finding it moderately useful. Its two big advantages are that it hijacks my pathological desire to watch my numbers go up, and the near-complete lack of customization. (When using a calendar, I have to think of when the task is due. When using beeminder, I have to think about how frequently I'll be doing the task. With this, for any possible task, there are no fiddly bits to get in the way of just shutting up and putting it in the list.) The drawbacks are the weak enforcement and the near-complete lack of customization.
I've been looking for something like this for a while after success with fitocracy. (I tried to make one myself, but failed due to lack of relevant skills and interest).
Will try it for a week and report back.
I have been reading up on religious studies (yes, I ignored that generally sound advice never to study anything with the word 'studies' in the name) in order to better understand Chinese religion.
Unexpectedly, I have found the native concepts are useful (perhaps even more useful) outside the realm of religion. That is to say, distinctions like universalist/particularist, conversion/heritage, and concepts like orthodoxy, orthopraxy, reification, etc... are useful for thinking about apparently "non-religious" ideologies (including, to some extent, my own).
My first instinct when hearing a claim is to try to figure out whether it is true, but I fear I have been missing the point (since much of the time, the truth of the claim is irrelevant to the speaker); instead, I should focus more on the function a given (stated) belief plays in the life (especially the social life) of the person making the assertion (at least, on the margin).
Any bibliography you would like to recommend?
Also, would you care to expand on how precisely you find it useful?
Math and reading gaps between boys and girls
Link to original paper
Saving the world though ECONOMICS
Is anyone else watching Maoyuu Maou Yuusha, or reading the relevant novels? It's about as close to rationalist fiction as I've ever seen a commercial work be. It goes way further than the premise; a strong spirit of secular humanism is embedded into the story and its characters, and it's got some of the finest examples of a Patrick Stewart Speech I've seen this side of fantasy.
I found the premise really cool, but I'm still waiting for the season to finish and the anime bloggers sum up whether it managed to deliver a good plot arc or not. (It may turn out to be one of those series where you're better off just reading the novels or whatever.)
Link: This Story Stinks: article on a study showing that readers' perception of a blog post is changed when they read comments. In particular, any comments involving ad hominems or general rudeness polarize people's views. Full paper link.
I've been trying out the brain-training software from Posit science. I've definitely gotten better at some of their training material (tracking objects in a crowd of identical objects and seeing briefly shown motion), but I'm not sure whether it's improving my life.
Have any of you tried Posit's BrainHQ? If so, how has it worked out for you?
The training exercises look like they're only available as expensive software, but if you do their free exercises, they'll offer a $10/month option.
I found out about Posit from this video-- Merzenich clearly has something to sell, but nothing he said seemed like obvious nonsense.
Brain training doesn't usually transfer. The Posit studies haven't been much better than any others.
Even working memory training?
Looks like it.
I wanted to apologize for the post I made on Discussion yesterday. I hope one of the mods deletes it. I should have thought more carefully before posting something controversial like that. I made multiple errors in the process of writing the post. One of the biggest mistakes I made was mentioning the name of a certain organization in particular, in a way that might harm that organization.
In the future, before I post anything, I will ask myself, "Will this post raise or lower the sanity waterline?" The post I made clearly didn't really do much for the former, and could easily have contributed to the latter. For that I am filled with regret.
I have a part-time job, and I will be donating at least $150 of my income to the organization I mentioned and possibly harmed in the previous post I made.
I'm not making this comment for the purpose of gaining back karma; I'm making it because I still want to be taken seriously in this community as a rationalist. I know that this may never happen, now, but if that's the case, I can always just make another account. Less Wrong is amazing, and I like it here.
-- John. W. Holt
Agree with the first part but not (the wording of) the second part. If you know beforehand that something would be a mistake, don't be stupid.
But you shouldn't necessarily trust your brain to accurately predict whether things will be mistakes.
The question is where you cut off. What chance of making a mistake is acceptable?
JOHN HOLT!
*makes touchdown signal*
Based on your handle I assumed you already had another account. I do suggest making another one now. There is no need to take that baggage with you---leave that kind of shit as anonymous.
That account has been used regularly or semi-regularly for months, so despite the name it's not exactly a throwaway.
Can we still send you our ... you know ... merchandise?
Great! I'll explicitly use that heuristic myself from now on (if I remember to).
There could be a plugin for this. Imagine that before sending a post, you have to answer a few questions, such as: "Your certainty that this post will move the sanity waterline in a positive direction".
But we are only humans. We would learn very soon to ignore it, and just check the "right" answers automatically.
Maybe it would work better if it displayed only randomly, once in a few comments. And then the given comment could be sent to reviewers, who could inflict huge negative karma if they strongly disagree with the estimate.
Or perhaps there could be an option to click "I am sure this comment is useful and harmless" when sending a comment. A comment without this option gets +1 karma on upvote and -1 on downvote; a comment with this option gets +2 on upvote and -5 on downvote. This could make people think before posting.
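The +2/-5 proposal has a clean break-even point. A sketch (assuming, purely for illustration, that each voter independently upvotes with some fixed probability and downvotes otherwise):

```python
def expected_karma(p_up, up_weight=1, down_weight=1):
    """Expected karma per vote when each voter upvotes with
    probability p_up and downvotes otherwise."""
    return p_up * up_weight - (1 - p_up) * down_weight

def breakeven(up_weight=2, down_weight=5):
    """Upvote probability above which the 'I am sure' scheme
    (+up_weight / -down_weight) beats the plain +1/-1 scheme.

    Solve p*u - (1-p)*d > p - (1-p) for p.
    """
    return (down_weight - 1) / (up_weight + down_weight - 2)

p_star = breakeven()  # under +2/-5, the schemes tie at p = 0.8
```

So under these numbers the checkbox only pays off for comments you expect at least 80% of voters to like, which is arguably exactly the kind of thinking the proposal is trying to force.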
I like the idea of a questionnaire that pops up randomly when making a comment, at a rate of maybe 1-10 percent. Possible example questions:
Displaying one or more of these at a rate that makes you think, but not so often that it would be super annoying, would be fun and provide some neat databases.
On the other hand I'm sure programming it would be a bitch.
I'm having a motivation block that I'm not sure how to get around. Basically whenever I think about performing an intellectual activity, I have a sudden negative reaction that I'm incapable of succeeding. This heavily lowers my expectation that doing these activities will pay off, most destructively so the intellectual activity of figuring out why I have these negative reactions.
In particular, I worry about my memory. I feel like it's slipping from what it used to be, and I'm only 24. It's like, if only I could keep the details of the memory tricks in my head long enough I might be able to improve it. :) Only partially kidding.
In short, it takes a lot of effort for me to feel like I'm succeeding at succeeding. And I don't know why.
Specifically regarding memory, things don't need to be in your head for you to remember them. Start writing stuff down. All the stuff. Doesn't matter where. Anywhere is better than nowhere. I recommend Workflowy.
You are not specific enough about the memory. If you start forgetting your own name or something like this, you should visit a doctor. But if you only forget some details from what you learned at school, that means that you already have learned many things; so many that your day simply is not long enough to review all of them (and you also have to focus on many other different things). You have to develop the art of note taking. The more you have to know, the more critical this skill becomes. It is an illusion to try keeping everything in your head just because that strategy worked when you only knew a little.
The difficulty of succeeding may mean that you have already picked most of the low-hanging fruit. Just like in a computer game, the higher levels get more difficult. The difficulty does not mean that you are less powerful; it means that you are powerful enough to work on the more difficult tasks. Also, some tasks require time and discipline; you simply cannot master them at your first attempt.
I think you have to apply two kinds of fixes: psychological and organizational. Don't ignore either of them. It is important to make yourself feel better. And it is also important to use better tools. Without better tools your success is limited. But your mind remains the most important of your tools.
Many thanks. My memory issue certainly isn't any sort of disorder, and indeed the sort of success I'd like to have with it is of a high level. There has been a decline in the last few years in my (formerly exceptional) abilities here, and I need to find ways to increase my attention to it as a graspable and controllable challenge/problem.
Generally my ability to deal with attention, focus, and memory issues correlates to my day-to-day mood and self-confidence. I've found a coach through the community here to help me find ways to combat these slightly more fundamental issues. It is good, though, to see the wide variety of talk here about improving focus, overcoming "Ugh fields," and the like.
Fundamentally, my issue is one of keeping a particular skill in practice, and so I appreciate your practical suggestions. University offers an environment that more constantly practices skills such as learning, remembering, and new-paradigm thinking. My work environment offered similar challenges for a year or so, but I've since gained an expertise that is more valuable to use than to grow.
Today I gave a presentation to a group of 50 software developers in my company, and I was pleasantly surprised at my abilities. Apparently all of my on-the-fly speaking skills (which I had presumed dead since school) were just latent, if out of practice, until the adrenaline kicked them back online. This was in no small part due, I suppose, to some mental tricks I've learned here for convincing myself of my future success, based on previous successes.
Just typing for my own benefit now. Thank you very much for your advice!
Glad to be useful. In similar situations I often don't know how much the advice I would have given to myself also applies to other people.
For me, the greatest memory-related shock came about a year after finishing university. I found my old paper with notes for the final exam, and I realized I didn't understand half of the questions. Not only was I unable to answer them, but I had trouble finding any related associations. For the whole year in my job I had been doing something completely different, and I forgot many things without even being aware that it happened. (The problem is, despite having studied computer science and working as a programmer, I never use 95-99% of what I learned at school. I know a lot of theory; I should be able to invent a new programming language and write a compiler with some basic optimization; but in real life I mostly do web interfaces for databases, over and over again.) Now I am sorry I didn't make better notes at university. But at the time, I was so proud that I understood everything. I didn't have experience with what happens when you simply never think about a topic for years. If you are 24, this may already be happening, or about to happen, to you too.
A few years later, my programming career was progressing: I wrote code for two years in Java, then seven years in something else. Then I returned to Java and was like: oh, here is the forgetting again! This time I was lucky, because I simply downloaded the official documentation, read it from beginning to end, and most of the forgotten memories returned quickly. (I didn't have the note-making skill yet, but I already had the habit of always looking at the authoritative documentation first.) But then I realized that "learning to forget" is a stupid strategy when it comes to really useful things, so I started to make notes. (First I spent a lot of time trying to find good software for that, and then ended up writing my own. Today, I would probably use some existing tool.) Now when I learn something related to programming, I immediately start writing notes. At the beginning they are a bit chaotic, but I can always refactor them later. I tried to use the same habit in other areas of life, but somehow it didn't work. Recently I started using Anki when learning human languages. The difference is, with human languages, you need to keep it all in your head, all the time, because you never know when you will need a word. With computer languages, remembering is not necessary, only the ability to find it quickly; and it is good to have the knowledge divided by topics. I could use Google for many questions, but some topics are rather difficult to find this way (either because many people ask the question and nobody provides an answer, or because many people provide incorrect information), and I believe I can write the information in the format most legible to me.
For the mood, reminding yourself of your past successes is very good. Sometimes people don't see the forest for the trees. A great success may require a thousand days of work, and when you wake up on day #470 and don't see any progress compared with days #469 and #468, it is easy to believe that you are not going anywhere. If you have a list of successes, and you see that every other year something great happens, that puts things into better perspective. (But it also goes the other way round. If you procrastinate, it is easy to believe that you are on the way to your next goal, when in fact you are going nowhere.)
Reassure yourself when you flinch and celebrate even the minor successes.
Apparently conscientiousness correlates strongly with a lot of positive outcomes. But unfortunately I seem to be very low on it.* Is there anything I can do to train it?
*Standard disclaimers about self assessment apply.
You can get actual Big Five tests online (see the latest LW survey for an example). The Big Five tend to be pretty stable, but putting yourself in a social group that has the trait you want is relatively effective. Also, there is a whole lot of YMMV on which one(s) to use, but organizational/productivity tools like Getting Things Done can allow you to act in usefully conscientious ways without changing your personality per se.
Yeah, me too. I had success getting a relatively structured job with standards so low that my (minimal) natural level of professionalism exceeded their requirements. Conscientiousness is really, really difficult to train, but you can move further from your current base by changing the people you hang out with or work with. Industriousness, OTOH, is trainable. The last comment I saw about this had a good paper linked.
You can do better but having low conscientiousness still blows.
Link missing.
No longer true. Cheers for the heads up.
So, I notice some of the top contributors have the "Overview" page that appears when you click on their name display their LW wiki user page instead of the standard recent comments/posts summary (gwern, for example). Is that only for super-awesome people or is there some way to enable it that I failed to find?
[edit:] Okay, apparently patience is the key. It started working for me somewhere between 24 and 48 hours after I made the wiki page for my username.
I find this feature really damn annoying. I don't want to see people's wiki profile. If I click on the name it is because I want to see the posts and comments. It would be great if this 'feature' could be disabled.
Aw, but I wrote something relevant on mine! (Although most people don't seem to, admittedly.) I guess it'd be ideal if there were an option to enable/disable it for yourself and also to enable/disable skipping that page and going to comments when viewing.
Mine used to say that I was the same username on LW-wiki as on LW-Main, but I cleared it because it became redundant with this feature. Unfortunately I don't have rights to delete pages on the wiki, which is also mildly annoying for me if I want to look at my own comments.
Make a user page with the same name on the wiki, for example User:Gwern.
I did! That was my first guess. Does it take some time to update or something? (It's been ~20h)
It occurs to me that there is a roadblock for an AI to go foom; namely that it first has to solve the same "keep my goals constant while rewriting myself" problem that MIRI is trying to solve.
Otherwise the situation would be analogous to Gandhi being offered a pill that has a chance of making him into anti-Gandhi, and declining it.
If the superhuman - but not yet foomed! - AI is not yet orders of magnitude smarter than a hoo-mon, it may be a while before it is willing to power-up / go foom, since it would not want to jeopardize its utility function along the way.
Just because it can foom does not imply it'll want to foom (because of the above).
This is interesting, though I think it's less relevant for an entity made out of readable code. In the pill situation, if Gandhi fully understood both his own biochemistry and the pill, all chance would be removed from the equation.
edit: More relevant reply:
A human researcher would see all of the AI's code and the "pill" (the proposed change), yet even without that element of "chance" it is not yet a solved problem what the change would end up doing.
If the first human-programmed foom-able AI is not yet orders of magnitude smarter than a human - and it's doubtful it would be, given that it's still human-designed - then the AI would have no advantage in understanding its own code that the human researcher wouldn't have.
If the human researcher cannot yet solve keeping the utility function steady under modifications, why should the similar-magnitude-of-intelligence AI (both have full access to the code-base)?
Just remember that it's the not-yet-foomed AI that has to deal with these issues, before it can go weeeeeeeeeeeeeeeeKILLHUMANS (foom).
I've just moved to the Bay Area, and, as I'm unsubscribing from all my DC-area theatre/lecture/fun event listservs, I am sad I don't yet know what to replace them with!
What mailing lists will tell me about theatre, lectures, book clubs, social dance, costuming, etc in Berkeley and environs?
Does anyone know if there any negative effects of drinking red bull or similar energy drinks regularly?
I typically use tea (caffeine) as my stimulant of choice on a day-to-day basis, but the effects aren't that large. During large Magic: the Gathering tournaments, I typically drink a red bull or two (depending on how deep into the tournament I go) in order to stay energetic and focused - usually pretty important/helpful, since working on around 4 hours of sleep is the norm for these things.
Red bull works so well that I'm considering promoting it to semi-daily use, but I'd like to know exactly what I'm buying if I do this.
Edit: After saying it out loud, I just realized that if I use red bull regularly, it might lose its effects due to caffeine/whatever dependency. TNSTAAFL strikes again :-/ Still interested in any evidence though.
What is the purpose of the monthly quotes thread? (To post quotes, obviously.) But it seems to me that a lot of the time, it's just an excuse for applause lights.
Best case, someone finds a quote that expresses a rationality idea that I agree with but couldn't articulate as eloquently as the quote. This is particularly nice when it comes from an unexpected source; when I see good rationality coming from places I didn't expect, it's evidence that the corresponding ideas are good ideas rather than just, say, ideas popular on LW.
Previous discussion of this issue
How can I instantly know which articles I have already read on LW (or elsewhere)?
Well, if I have a camera on my computer, it could track my eyes and the displayed article and make some time-based guesses about what I had actually read. Then next time, the article should be displayed with a yellowish background.
Just a suggestion.
P.S.
Or at least, there should be an I HAVE READ IT! button somewhere, with a personal mark for how good it was, independent of the up/down vote thumbs.
Presumably if you have browsing history stored on your computer you could have an indicator if a web address had been accessed before? (Presumably using the same function that makes links blue/purple.)
I have several computers, as most people do. The user should trigger this history, not the computer.
Chrome Sync will sync your history across devices. I am skeptical that most people have several computers.
I have a small site feature question! What are those save buttons and what do they do, if anything? (They seem to not do what I think they should do.)
Looks to me like you can view your saved stuff at http://lesswrong.com/saved/
Ohh! Awesome. Yeah, that's what I was looking for! I never expected to find that link where it turned out to be. =/ Thank you!
This comment discusses information hazards, but not in much detail.
"Don't link to possible information hazards on Less Wrong without clear warning signs."
— Eliezer, in the previous open thread.
"Information hazard" is a Nick Bostrom coinage. The previous discussion of this seems to have focused on what Bostrom calls "psychological reaction hazard" — information that will make (at least some) people unhappy by thinking about it. Going through Bostrom's paper on the subject, I wonder if these other sorts of information hazards should also be avoided here:
Another thing that seems to fit this pattern, which I have seen elsewhere, is a trigger warning, used before people discuss something like rape, discrimination, etc., which can remind people who have experienced those things of the event, causing some additional trauma.
Has anyone here ever decided not to read something because it had a trigger warning? I can't imagine doing so myself, but that may be the typical mind fallacy.
EDIT: People do use the warnings. Good to know.
I have chosen not to consume media (including but not limited to text) because of an explicit trigger warning. Not often, though; most trigger warnings relate to topics I don't have trauma about.
More often, I have chosen to defer consuming media because of an explicit trigger warning, to a time and place when/where emotional reactions are more appropriate.
I have consumed media in the absence of such warnings that, had such a warning been present, I would have likely chosen to defer. In some cases this has had consequences I would have preferred to avoid.
I haven't, but I think where trigger warnings are appropriate is for things that hurt a few people disproportionately. If something hurts everyone who reads it, you shouldn't write it at all; and if it hurts no one more than it is worth, it isn't a case for trigger warnings. But if it is something that needs to be said to many people, and there is a significant group (perhaps those who have had a certain experience) who would suffer a lot from reading it, then you put a trigger warning that would be recognized by that group at the top.
TL;DR: If most people never care about trigger warnings, then they might work as intended.
I have chosen not to Google something that I was warned would involve seeing particularly horrific images. I imagine that if said topic was put in blog post form with a trigger warning up the top, I would probably choose not to read it.
EDIT: It's probably worth adding that I adopted this policy after discovering the hard way that there are things out there I would really prefer not to see/hear about.
I've decided not to listen to some radio segments because of such warnings. Similar principle.
Have you had an experience that might cause you to be triggered by the kind of thing that gets trigger warnings?
I haven't, but I have never experienced a serious trauma that I don't want to be reminded of, so I'm not the kind of person whom people who write trigger warnings have in mind.
I know a person who chose not to read something (MAX Punisher #1) based on my warning of explicit sexual violence.
Anecdotal and incomplete, but most of an example case...
Agreed — Bostrom's classification "psychological reaction hazard" seems like it should include "trigger" as a subset — both the original sense of "PTSD trigger" and the more general sense that seems popular today, which might be expanded as "information that will remind you of something that it hurts to be reminded of."
As for distraction hazards, I have often seen links to TvTropes posted with a warning, both here and on other sites. (Sometimes a plain "Warning: TvTropes link", sometimes a more teasing "Warning: do not click link unless you have hours to spare today".)
Or "Warning: Daily Mail" (or other sites working on the click-troll business model): linking to a site your readers may object to feeding even with a click. Knowledge hazard, in that even when such sites are more right than their usual level they tend to be wrong.
I wish links to Cracked.com also had a similar warning. (Well, now that I have LeechBlock installed that's no longer so much of an issue, but still.)
Why stop there? Employment hazard (NSFW), Copyright hazard (link to torrent, sharing site or a paper copied from behind a paywall), Relationship hazard (picture of a gorgeous guy/girl), dieting hazard (discussion of what goes well with bacon)...
Well, the ones I mentioned are drawn from Bostrom's paper (although they aren't all of his categories). Eliezer seemed to be specifically discouraging a class of psychological reaction hazards while using the more general term "information hazard" to do it; I thought to inquire into what folks thought of other classes of information hazard.
I think it'd be nice to have a (probably monthly) "Ideas Feedback Thread", where one would be able to post even silly and dubious ideas for critique without fear of karma loss. Rules could be that you upvote top level idea comments if they sound interesting (even if wrong), and downvote only if you're really sure that it's very easy to find out they're bad (e.g. covered in core sequences). Could also be used for getting feedback on draft posts and whatnot.
The plan being that questionable ideas are put into their own thread for feedback, instead of being potentially turned into questionable posts. At the same time, it would give people a place to be wrong and get feedback without fear of repercussions and hopefully without forming negative associations with doing stuff on Less Wrong.
(Potential downsides include that it could siphon content away from other, more filtered locations if people feel it's a less risky place to post things, that posting in the thread may potentially be seen as low-status, and that someone reading recent comments may vote in unintended ways or feel burdened by less filtered content. I feel that these are probably outweighed by the upsides.)
I think open threads are in practice already this. Excessively encouraging such things could breed crackpots.
Not that I have noticed. Open Threads seem to be primarily "here's a cool thing I'd like to let you know about". If I wanted to post something like "The 'you are cloned and play prisoner's dilemma against yourself' example against CDT is actually pretty bad. Solving it doesn't require UDT/TDT so much as self-modification, with which even CDT could easily solve it." (with a few more lines of explanation), my model of the Open Thread predicts that if I'm wrong, I'll be downvoted a few times and may or may not get good feedback. It also predicts that the Open Thread is meant for things of interest to everyone, rather than anything this specific. Which is why I'm not posting anything like that, even though I'm 80% sure that particular example is correct and may be of interest to at least some people.
In what way? I doubt existing visitors to Less Wrong would be significantly more likely to generate crackpot ideas because such a thread exists, and I doubt even more that new crackpots would come to Less Wrong to participate in one thread. It may, admittedly, reduce conformity if people find unexpected support for non-mainstream ideas; however, I'm not sure that most would consider that a bad thing.
I think downvotes would depend on how you present your idea. If you present your idea as if you're already convinced you're right, and you're not, I think that would lead to downvotes. But if you preface your idea with "hey, here's something I thought of, dunno if it works, would appreciate feedback," I think that would be fine. What people respond negatively to, I think, is not wrongness so much as arrogant wrongness. (Or at least that appears to be what I respond negatively to.)
My model of the median LessWronger is closer to a crackpot than yours, maybe. Not that I think this is uniformly a bad thing; I have a vague suspicion that the brains of crackpots and the brains of curious, successful thinkers are probably pretty similar (e.g. because of stuff like this post). But it's easy to read the Sequences and think "man, I totally understand decision theory and also quantum mechanics now, I'm going to go off and have a bunch of ideas about them" and to be honest I don't want to encourage this.
I like this proposal. In the past, people (including me) have complained that LW doesn't get enough posts on topics where there's likely to be a lot of controversy or high variance in an item's score, 'cause people don't like getting downvoted more than they like getting upvoted.
Can anyone tell me the name of this subject or direct me to information on it:
Basically, I'm wondering if anyone has studied recent human evolution - the influence of our own civilized lifestyle on human traits. For example: For birth control pills to be effective, you have to take one every day. Responsible people succeed at this. Irresponsible people may not. Therefore, if the types of contraceptives that one can forget to use are popular enough methods of birth control, the irresponsible people might outnumber responsible people in a very short period of time. (Currently about half the pregnancies in the USA are unintended, and probably 40% of those pregnancies go full term and result in a child being born. As you can imagine, it really wouldn't take very long for the people with genes that can cause irresponsibility to outnumber the others this way...)
Any search terms? Anyone know the name of this topic or recall book titles or other sources about it?
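Not from the thread, but the arithmetic behind the worry is easy to sketch. Here is a toy two-phenotype model; every number in it (planned fertility, the extra unintended births, the starting fraction) is an illustrative assumption, not data:

```python
# Toy model: two phenotypes with identical planned fertility, but the
# "forgetful" type has extra unintended births each generation.
# All parameters below are made-up assumptions for illustration.
planned = 1.8   # planned births per person for both types (assumed)
extra = 0.4     # additional unintended births for the forgetful type (assumed)
p = 0.10        # initial fraction of the forgetful type (assumed)

for generation in range(10):
    forgetful_births = p * (planned + extra)
    careful_births = (1 - p) * planned
    p = forgetful_births / (forgetful_births + careful_births)

print(round(p, 3))  # fraction of the forgetful type after 10 generations
```

With these numbers the forgetful type grows from 10% to roughly 45% of the population in ten generations, which is the "wouldn't take very long" intuition above; real population genetics (partial heritability, assortative mating, changing contraceptive technology) is of course far messier.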
The 10,000 Year Explosion discusses the effects that civilization has had on human evolution in the last 10,000 years. (There's also this QA with its authors.) Not sure whether you'd count that as "recent".
Gregory Clark's work A Farewell to Alms discusses human micro-evolution taking place within the last few centuries, but is highly controversial (or so I hear).
To almost anyone who knows much about evolutionary biology, it's not controversial but positively laughable.
Cites?
Yeah, that's like saying you could domesticate foxes in less than a human generation, or have adult lactose tolerance increase from 0% to 99.x% in some populations in under 4,000 years. Does this guy think we're completely credulous?
-Cellbioguy, elsewhere in thread.
I suspect you've misidentified his contention here; he clearly doesn't deny that humans have evolved within the Holocene.
Does it look at possible effects of arranged marriages?
I don't remember it doing so, but it's two years since I read it and I did so practically in one sitting, so I don't remember much that I wouldn't have written down in the post.
The infamous Steve Sailer has written a lot about cousin marriage, which in practice seems to be correlated with arranged marriage in many cultures (including the European royals in past centuries). Many arranged marriages may in practice lead to inbreeding, with the genetic dangers that follow.
I'm also wondering about the effects of anonymous sperm banks, where relatively well-off women may pay to choose a biological father on the basis of -- whatever available information they may choose to consider. What factors, in a man they will never meet, do they choose for their offspring?
Wow. The article was fascinating. I devoured the whole thing. Thanks, Kaj. Do you know of additional information sources on the neurological changes?
Not offhand, but if you get the book, it has a list of references.
I have a notion that driving selects for having prudence and/or fast reflexes.
It's also one of the leading killers of young people, so it probably is one of the strongest selection pressures, though I'm not sure how strong.
Yes, that's why I was thinking about it. I'm not sure what other selective pressures are in play on people before they're finished reproducing.
Wild guess: try "human microevolution"?
I'm not a domain expert, but my standing assumption is that even the last few hundred years of human history were just too short to have a noticeable effect on allele frequencies. I would be very interested to hear evidence to the contrary, though.
http://www.sciencedirect.com/science/article/pii/S016028961000005X
You might be interested in Evolution, Fertility and the Ageing Population, which does some modelling on this.
Recent experiences have suggested to me that there is a positive correlation between rationality and prosopagnosia. One hypothesis is that dealing with prosopagnosia requires using Bayes to recognize people, so it naturally provides a training ground for Bayesian reasoning. But I'm curious about other possible hypotheses as well as additional anecdotal evidence for or against this conclusion.
What were the recent experiences?
I learned that a surprising number of people involved with CFAR / MIRI have prosopagnosia. (Well, either that or I'm miscalibrated about the prevalence of prosopagnosia.)
How prevalent do you think it is?
I know 4 (I think?) people with prosopagnosia and maybe 800 people total, so my first guess is 0.5%. Wikipedia says 2.5% and the internet says it's difficult to determine the true prevalence because many people don't realize they have it (generalizing from one example, I assume). The observed prevalence in CFAR / MIRI is something like 25%?
So another plausible hypothesis is that rationalists are unusually good at diagnosing their own prosopagnosia and the actual base rate is higher than one would expect based on self-reports.
That is a big difference.
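As a rough sanity check (not from the thread; the group size of 20 is a made-up assumption), one can ask how surprising an observed rate of about 25% would be if the true base rate were Wikipedia's 2.5%:

```python
from math import comb

n = 20        # assumed size of the relevant group (hypothetical)
k = 5         # 25% of 20 people
base = 0.025  # Wikipedia's prevalence estimate

# Binomial upper tail: P(at least k cases out of n at the base rate)
tail = sum(comb(n, i) * base**i * (1 - base)**(n - i) for i in range(k, n + 1))
print(f"P(>= {k} of {n} at base rate {base}): {tail:.1e}")
```

The tail probability comes out around one in ten thousand, so if the observed numbers are right, either the sample is very non-random (plausible) or the base rate in this population really is elevated, which fits the self-diagnosis hypothesis mentioned above.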
Theory off the top of my head: the causation is in the wrong direction. People who are rational are far more likely to be very systems-oriented, to have had limited social experiences as children (by having different interests and/or being too dang smart), to be highly introverted, and to have other traits that correlate with being around other people a lot less than the typical person. There's nothing wrong with our hardware per se; we just missed out on critical training data during the learning period.
Anecdotal: I have mild prosopagnosia. I have a lot of trouble recognising people outside their expected context, and I make heavy use of non-facial cues. I'm pretty good at putting specific names to specific faces on demand when it feels important enough (though see the previous point about expected context). I don't feel like I use anything resembling Bayesian reasoning; I feel like I have the same sense of recognition that I imagine most people have, just less dependent on seeing their face and more on other traits (most typically voice and manner of movement).
Has anyone indexed the set of Five-Second Skill posts on Less Wrong? E.g. Get Curious, the Algorithm for Beating Procrastination, Value of Information etc.
I've been working on a little project compiling Touhou music statistics. One major database may be unavailable to me from anywhere but Toranoana, and the total cost of reshipping will be ~$25 and take several weeks to get to me. This would be annoying, expensive, and slow.
In case my other strategies fail, are there any LWers in Japan who either owe me a favor or are willing to do me a favor in buying a CD off Tora and sending me the spreadsheets etc on it? (I'd be happy to cover the purchase cost with Paypal or a donation somewhere or something.)
Does anyone have sources on active steps that can be taken to improve gender diversity in organisations?
There is a lot of writing on the subject, but I'm finding it difficult to find sources that compare the effectiveness of different measures, with figures showing change, controlling for variables etc.
I would like to see the results too, but I doubt they exist (beyond the obvious: if you want to have 50% male and 50% female employees, make an internal rule to hire 50% men and 50% women).
Beyond evidence... my heuristic would be to start the organization with gender diversity. It should be easier to find e.g. 3 men and 3 women to start an organization than to have an organization of 100 men and later think about how to make it friendlier to women.
EDIT: Also, you should not have a bottom line already written saying that a 50:50 ratio is an improvement. People do have different preferences; a ratio other than 50:50 might reflect the true level of interest in the base population.
To be precise: Hire in the direction of 50% men and 50% women. Depending on retention rates this may need to be skewed in either direction.
It's unclear to me that much can be said about this subject across all organizations. Do you have a particular organization in mind?
Quick clarification of Eliezer's Mixed Reference, intended for me from twelve hours ago:
'External reality' is assumed to mean the stuff that doesn't change when you change your mind about it. This is a pretty good fit to what people mean when they say something like "exists" and didn't preface it with "cogito ergo." It's what can be meaningfully talked about if the minds talking are close enough that "change your mind" is close to "change which mind."
External reality can be logical, because the trillionth digit of pi doesn't change even if you change your mind about it. Or it can be physical, because dogs don't disappear if you decide there are no dogs nearby. ("why do dogs / suddenly disappear / every time / you are near.")
If we look inside of peoples' heads, logical external reality seems to be universal and specific - minds are computation, and so if you can do some fairly general stuff like labeling the output of an algorithm you haven't evaluated yet, you can have logical "external reality," which now appears to be somewhat of a misnomer, but oh well. "Stuff that doesn't change when you change your mind about it" is still too long.
Physical reality, on the other hand, is much more general and contingent - it's just a catch-all term for "hey, I know we're a mind and have logical reality and that good stuff, but there happens to be a world out here!" In fact, it's tempting to just say "if it doesn't change when you change your mind and it's not a logical thing, it's a physical thing." The label external reality might make more sense being applied to this stuff, since "physical" carries some connotation that isn't necessarily accurate.
I remember seeing something about Islamic law and the ability to will money to charities meant to exist in perpetuity, and now I can't find it. Does anyone know what I'm talking about?
Yo: http://www.gwern.net/The%20Narrowing%20Circle#islamic-waqfs
Thank you.
I am in Berkeley for a few days, primarily Thursday march 28th. Please text me at 203-710-5337 if you'd like to catch up or have any ideas for a thing I shouldn't miss.
If computer hardware improvement slows down, will this hasten or delay AGI?
My naive hypothesis is that if hardware improvement slows, more work will be put into software improvement. Since AGI is a software problem, this will hasten AGI. But this is not an informed opinion.
Are you familiar with the hardware overhang argument?
No, and Google is failing me. Is there somewhere I can read about it?
I've just learned that if it is July or a later month, it is more probable that the current year has begun with Friday, Sunday, Tuesday or Wednesday. If it is June or an earlier month, it is more probable that the current year has begun with Monday, Saturday or Thursday.
For the Gregorian calendar, of course.
This just in: Anthropics is still useless!
How come?
There are more days in July to December, than in January to June. So it is a little more likely for a random observer to find himself in the later 6 months.
But if he finds himself before July, it is more likely that it is a leap year, with its additional day, than it otherwise would be.
This increased probability for a leap year skews the probability distribution for the first day of the year also.
This is how it comes.
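This can be checked by brute force (a sketch, not from the thread) by enumerating one complete 400-year Gregorian cycle and weighting each year by how many days fall in each half:

```python
from datetime import date
from calendar import isleap

# Day counts for each Jan-1 weekday, split by half of the year,
# over one complete 400-year Gregorian cycle.
first_half = [0] * 7    # Jan-Jun day counts by start weekday (0 = Monday)
second_half = [0] * 7   # Jul-Dec day counts by start weekday
leap_first = common_first = 0

for year in range(2000, 2400):
    start = date(year, 1, 1).weekday()
    jan_jun = 182 if isleap(year) else 181
    first_half[start] += jan_jun
    second_half[start] += 184   # Jul-Dec is always 184 days
    if isleap(year):
        leap_first += jan_jun
    else:
        common_first += jan_jun

p_leap_first = leap_first / (leap_first + common_first)
p_leap_second = 97 / 400   # every year weighs the same 184 days in Jul-Dec
print(f"P(leap | Jan-Jun) = {p_leap_first:.4f}")
print(f"P(leap | Jul-Dec) = {p_leap_second:.4f}")

# Which start weekdays gain probability when conditioning on the first half?
names = "Mon Tue Wed Thu Fri Sat Sun".split()
tf, ts = sum(first_half), sum(second_half)
for wd in range(7):
    diff = first_half[wd] / tf - second_half[wd] / ts
    print(f"{names[wd]}: Jan-Jun minus Jul-Dec start-day probability = {diff:+.7f}")
```

The leap-year skew is tiny (roughly 0.2435 vs exactly 0.2425), so the shift in start-weekday probabilities is correspondingly minute, which is presumably why the "anthropics is still useless" quip above is fair.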
Correct me if I'm wrong, but isn't the probability of a year being a leap year approximately 25%, completely independent of what month it is? (This seems like one of those unintuitive-but-correct probability puzzles...)
For all intents and purposes, yes. Well, for nearly all intents and purposes, since there is in fact a very slight difference:
Imagine the year only had 2 months, PraiseKawoombaMonth, and KawoombaPraiseMonth, each of those having 30 days. However, every other year the first month gets cut to 1 day to compensate for some unfortunate accident involving shortening the orbital period. Still, for any given year the probability of being a leap year is 50%.
Now you get woken from cryopreservation (high demand for fresh slaves) and, asking what time it is, only get told it's PraiseKawoombaMonth (yay!). This strongly suggests you are in one of the equal-month years, since otherwise it would be very unlikely for you to find yourself in PraiseKawoombaMonth.
Snap, back to reality: Same thing if you're told it's August, the chance of being in August at any given time is lower in a leap year, since the fraction of August per year is lower. There's just more February to go around!
Sorry for the quality of the explanation. It's the only way I can explain things to my kids.