Comment author:gwern
23 October 2013 02:00:22AM
18 points
Some LWers may be interested in a little bet/investment opportunity I'm setting up. I have become increasingly disgusted with what I've learned about the currently active Bitcoin+Tor black markets post-Silk-Road - specifically, with BlackMarket Reloaded & Sheep. I am also frustrated that customers are flocking to them, and they all seem absurdly optimistic. So, I am preparing to make a large public four-part escrowed bet with any comers on the upcoming demise of BMR & Sheep in the coming year, in the hopes that by putting money where my mouth is, I may shock at least a few of them into sanity and perhaps even profit off the more deluded ones.
The problem is, I feel I can afford to risk ฿1 ($200), but I'm not sure that this will be enough to impress anyone when split over 4 bets ($50 a piece). So I am willing to accept up to ฿1 in investments from anyone, to increase the amount I can wager. The terms are simple: whatever fraction of the bankroll you send, that's your share of any winnings. If we bet ฿2 and you sent ฿1, then you get half the winnings if any. (I am not interested in taking any cut here.)
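The proportional split can be sketched in a few lines (the function name and the winnings figure are just illustrative, not part of the actual offer):

```python
# Each investor's cut of any winnings is proportional to their
# share of the total bankroll staked (names here are invented).
def winnings_share(your_stake, total_bankroll, total_winnings):
    return total_winnings * your_stake / total_bankroll

# If we bet B2 total and you sent B1, you get half of any winnings:
# a hypothetical 0.5 BTC win pays you 0.25 BTC.
print(winnings_share(1.0, 2.0, 0.5))  # -> 0.25
```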
My full writeup of the bet, with some statistics motivating the death probabilities I am basing my bets on: http://pastebin.com/bEuryTuF
If you are interested, you can reply here, or contact me at gwern@gwern.net, or we can chat on Freenode (as gwern or just visit #lesswrong). I am currently ignoring private messages on LW, so don't do that.
Also, please don't express interest unless you are genuinely fine with potentially losing your investment: given my best estimate of the probabilities & their correlations, there's somewhere >10% chance that we would lose all 4 bets as both BMR & Sheep survive the full year.
EDIT: if you really want to get in, I'll still take your bitcoins, but I think I have enough investors now, thanks everyone.
Comment author:gwern
24 October 2013 12:33:16AM
0 points
That was a per-person limit; I may close it down soon, though (฿3 plus my own bitcoins and recent appreciation should be enough to impress people, and beyond that, I think there are diminishing returns).
Comment author:gwern
29 October 2013 05:27:26PM
1 point
don't you mean chance of losing every bet?
Yes.
If so, no way in hell those are conditionally independent.
Of course they are not conditionally independent, that's why I gave it as a lower bound.
Specifically, I think we can agree that whatever the exact relationships, the failure of one bet will increase the chance of failure of all the others: if the 6-month sheep bet fails, then the 12-month becomes more likely to fail, and to a smaller degree, the BMR ones become more likely to fail. And not the other way around. Hence independence is the best-case scenario, and so it's the lower bound, and that's why I wrote ">10%".
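The "independence is the best case" point can be illustrated numerically; the per-bet probabilities below are made up for the example, not the actual figures from the pastebin writeup:

```python
# Illustrative per-bet probabilities that the relevant market
# survives its horizon (i.e. that the bet is lost); these numbers
# are assumptions for the sketch, not the writeup's estimates.
p_lose = [0.7, 0.5, 0.7, 0.5]

# Under independence, P(lose every bet) is just the product.
p_all = 1.0
for p in p_lose:
    p_all *= p
print(round(p_all, 4))  # -> 0.1225

# Positively correlated failures (one market surviving makes the
# others' survival more likely) can only push the joint probability
# above this product, so the independent product is the floor,
# hence the ">10%" phrasing.
```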
Comment author:[deleted]
21 October 2013 06:58:21AM
27 points
Oslo on IRC jokingly summarizing part of a debate:
"""""""politics is the mindkiller" is an applause light" is a fully general counterargument" is deeply wise" is a semantic stop sign" is why our kind can't cooperate" is a fake explanation"
At the Columbus megameetup, some people actually printed out a set of cards (as a stand-alone deck) and played the game. I don't know which of the two people has the source file, but I can find out...
Comment author:ygert
21 October 2013 08:18:00AM
5 points
It does, doesn't it...
All I have to say is that if someone actually makes this game, there has to be room for the awesomeness of quines.
After all, "is an applause light" is an applause light, isn't it?
Comment author:Locaha
26 October 2013 10:33:41AM
0 points
""""""""politics is the mindkiller" is an applause light" is a fully general counterargument" is deeply wise" is a semantic stop sign" is why our kind can't cooperate" is a fake explanation" is a mindkiller"
PubMed is allowing comments. Only people who have publications at PubMed will be permitted to comment. I predict that PubMed will find it needs human moderators.
Comment author:ahbwramc
23 October 2013 07:06:40PM
19 points
Random thought: I've long known that police can often extract false confessions of crimes, but I only just now made the connection to the AI box experiment. In both cases you have someone being convinced to say or do something manifestly against their best interest. In fact, if anything I think the false confessions might be an even stronger result, just because of the larger incentives involved. People can literally be persuaded to choose to go to prison, just by some decidedly non-superhuman police officers. Granted, it's all done in person, so stronger psychological pressure can be applied. But still: a false confession! Of murder! Resulting in jail!
I think I have to revise downwards my estimate of how secure humans are.
I can't see why you claim it's a stronger result. In the AI box experiment, the power is entirely in the gatekeeper's hands; in an interrogation situation the suspect is virtually powerless. This distinction is important because even the illusion of having power is enough to make someone less susceptible to persuasion.
Plus, police don't sit down with suspects in a chat room. They use 'enhanced interrogation techniques', methods such as an unfamiliar environment, threat of violence (or actual violence in some cases), and various other threats. An AI cannot do any of this to a gatekeeper unless the gatekeeper explicitly lets it out.
Comment author:ahbwramc
25 October 2013 01:14:29PM
3 points
That's all certainly true, but the AI box experiment is still a game at heart. The gatekeeper loses and he's out, what, fifty bucks or something? (I know some games have been played - and won, I think? - with higher stakes, and those are indeed impressive). The suspect "loses" and he's out 20+ years of his life. It's hard to make a comparison but I think the two results are at least comparable, even with the power imbalance.
Comment author:shminux
23 October 2013 08:36:49PM
5 points
Humans are extremely susceptible to arguments they have not been inoculated against. These arguments can be religious, scientific, emotional, financial, anything. One example is new immigrants from certain places falling for get-rich-quick scams in disproportionately large numbers (not so much anymore, since the knowledge has spread). Or certain LW regulars believing Roko's basilisk. Or becoming vegan (not all mind hacking is necessarily negative).
I would conjecture that every single one of us has open ports to be exploited (some more so than others), and someone with a good model of you, be it a super-smart AI or a police negotiator, can manipulate you into willingly doing stuff you would never have expected to be convinced of doing before having heard the argument.
Comment author:Tristan
25 October 2013 01:28:30PM
6 points
I notice that the latest two posts from Yvain's blog haven't shown up in the "recent from rationality blogs" field. If this is due to a decision to no longer include his blog among those that are linked, I believe this to be a mistake. Yvain's blog is in my view perhaps the most interesting and valuable among those that are/were linked. And although I am in no danger of missing his updates myself, the same might not be true of all LW readers who may be interested in his writing.
Comment author:[deleted]
23 October 2013 10:32:51PM
20 points
Someone has been regularly downvoting everything I've posted in the past couple of months (not just a single karma-assassination). I really don't care about the karma (so please DO NOT upvote any of my previous posts in order to "fix" it), but I do worry that if someone is doing it to me, they are possibly doing it to other/new people and driving them off, so I wanted to point out publicly that this behaviour is NOT OKAY.
Anyways, if you have a problem with me, feel free to tell me about it here: http://www.admonymous.com/daenerys . Crocker's Rules and all.
Comment author:Viliam_Bur
24 October 2013 09:48:19AM
3 points
Do I understand it correctly that the behavior you describe is "downvote every new comment from user X when it appears" (as opposed to "go to user X's history and downvote a lot of their old comments at the same time")?
Because when hearing about karma assassinations, I always automatically assumed the latter form; only the words "early downvote" in Nancy's comment made me realize the former form is also possible.
A possible technical fix would be to not display a comment's karma until at least three votes have been made or at least one day has passed.
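The suggested display rule is simple enough to sketch (function and field names invented for illustration):

```python
from datetime import datetime, timedelta

# Hide a comment's karma until it has at least three votes
# or is at least one day old (the thresholds suggested above).
def karma_visible(vote_count, posted_at, now):
    return vote_count >= 3 or now - posted_at >= timedelta(days=1)

now = datetime(2013, 10, 24, 12, 0)
print(karma_visible(1, datetime(2013, 10, 24, 9, 0), now))  # -> False (too new, too few votes)
print(karma_visible(1, datetime(2013, 10, 23, 9, 0), now))  # -> True (over a day old)
print(karma_visible(4, datetime(2013, 10, 24, 9, 0), now))  # -> True (enough votes)
```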
Also, off-topic: Crocker's Rules seem to be popular in our culture; maybe it would be nice to integrate them into the LW user interface. For example, a user could add their "anonymous feedback URL" in preferences, and a new icon "Reply Anonymously" would then be displayed below all of that user's comments and articles.
Comment author:Lumifer
24 October 2013 03:06:38PM
2 points
Crocker's Rules
Crocker's Rules aren't about anonymity.
Theoretically it might be useful for people to be able to set a visible flag "Talk to me under Crocker's Rules" -- but I suspect that it will immediately degenerate into a status sign.
Comment author:Viliam_Bur
24 October 2013 05:11:52PM
2 points
If I declare Crocker's Rules and you write something rude in a reply to me, other LW readers still see it. So even if I am perfectly okay with it (and I shouldn't have declared CR otherwise), you might lose some status in the eyes of the observers who don't properly evaluate the context of your reply.
If you send me a private message, we get rid of the observers. Unless I play dirty and later show the private message to someone else. Anonymous feedback would prevent me from doing so.
But yes, for 99% of cases, sending private message would be enough, anonymization is not needed. And we already have that option here.
Comment author:Lumifer
24 October 2013 05:22:58PM
2 points
Crocker's Rules, as I understand them, are about efficient conveyance of meaning without the extra baggage of social niceties. They are not about the ability to express unpopular views without social consequences, which is where private messages or anonymity shine.
If you are concerned about observers misinterpreting the context you can always add a little [This post is under Crocker's Rules] tag somewhere.
Comment author:lmm
24 October 2013 05:13:29PM
1 point
Crocker's rules are not directly about anonymity, no, but if you want to maximise your chances of receiving honest feedback, an anonymous contact method is valuable.
Comment author:niceguyanon
25 October 2013 01:51:16PM
5 points
I almost got scammed today. I received a very official-looking piece of mail, "billing" me a few hundred bucks. Normally I would be able to see through it immediately, but this particular one caught me off guard. I am usually very good about being skeptical, and it disappointed me that I almost fell for it. What I think happened is that my familiarity heuristic was exploited.
I have business with a certain state, and it was familiar for me to receive correspondence from various agencies and pay all sorts of different fees. So when I got this letter in the mail, it didn't raise any flags. I checked online, not because I was suspicious, but because I was annoyed that I wasn't aware of this fee; that is when I discovered I had almost been duped.
This isn't a particularly new scam, and I had heard of it before, but when it happened to me, I almost didn't notice. What I learned from this whole thing is to be vigilant against letting my guard down to con artists who exploit the familiarity heuristic. I was so familiar with bills that I glanced over the small print indicating that "this is a solicitation". I might have received these scams before regarding a car payment or mortgage, but I was able to easily pick those out because I didn't have car payments or a mortgage: obvious scam was obvious. But then I got hit right where I am familiar, and it wasn't so obvious.
Comment author:niceguyanon
25 October 2013 07:07:49PM
3 points
This is the letter. I was less careful than usual (I should have read through it), but because it had information about me and was consistent with what I might see on a normal basis, I let my guard down. I only attempted to check the fee schedules to see why I had missed something like this, all the while assuming that I probably had.
There are some interesting points in there, especially about the fact that most people make themselves like what seems 'cultured' (I've definitely seen this type of appeal to majority among my friends - I was nearly roasted alive when I mentioned I honestly don't enjoy a particular classical composer).
There are also some fallacies in there too.
Anyway, the part where he talks about trickery is interesting:
What counts as a trick? Roughly, it's something done with contempt for the audience. For example, the guys designing Ferraris in the 1950s were probably designing cars that they themselves admired. Whereas I suspect over at General Motors the marketing people are telling the designers, "Most people who buy SUVs do it to seem manly, not to drive off-road. So don't worry about the suspension; just make that sucker as big and tough-looking as you can."
I question this premise. It seems to imply that the purpose behind the art determines its quality, and not the art itself. For instance, if you have two identical paintings, but one was drawn with the intention of making money, and the other was drawn for true artistic merit, the latter one somehow has more value (and is thus of 'better taste') than the former.
At any rate, in the end that paragraph was the closest I got to his definition of 'taste' - the ability to recognize trickery in artistic works.
And especially this paragraph about people with good taste:
Or to put it more prosaically, they're the people who (a) are hard to trick, and (b) don't just like whatever they grew up with.
Finally,
I wrote this essay because I was tired of hearing "taste is subjective" and wanted to kill it once and for all.
While the insights presented are interesting (in providing a window into the author's mind, at least), it has not actually succeeded in this purpose.
Comment author:Salutator
24 October 2013 09:07:48PM
1 point
I think it's just elliptical rather than fallacious.
Paul Graham basically argues for artistic quality as something people have a natural instinct to recognize. The sexual attractiveness of bodies might be a more obvious example of this kind of thing. If you ask 100 people to rank pictures of another 100 people of the opposite sex by hotness, the rankings will correlate very highly even if the rankers don't get to communicate. So there is something they are all picking up on, but it isn't a single property. (Symmetry might come closest, but not really close, i.e. it explains more than any other factor but not most of the phenomenon.)
Paul Graham basically thinks artistic quality works the same way. Then taste is talent at picking up on it. For in-metaphor comparison, perhaps a professional photographer has an intuitive appreciation of how a tired woman would look awake, can adjust for halo effects, etc., so he has a less confounded appreciation of the actual beauty factor than I do. Likewise someone with good taste would be less confounded about artistic quality than someone with bad taste.
That's his basic argument for taste being a thing, and it doesn't need a precise definition; in fact, it suggests that giving a precise definition is probably AI-complete.
Now the contempt thing is not a definition, it is a suggested heuristic for identifying confounders. To look at my metaphor again, if I wanted to learn about beauty-confounders, tricks people use to make people they have no respect for think women are hotter than they are (in other words, porn methods) would be a good place to start.
This really isn't about the thing (beauty/artistic quality) per se, but more about the delta between the thing and the average person's perception of it. And that actually is quite dependent on how much respect the artist/"artist" has for his audience.
Comment author:shminux
25 October 2013 04:43:19PM
8 points
Hmm, about 100 downvotes in the last couple of days, 1 per comment or so, suggest that someone here is royally pissed off at me. I wish I knew the reason. On the bright side, at least this forum provides some indication of a problem. When this happens to me IRL, I either never find out about it or deduce it months or years later based on second-hand information, rumors, or, in some cases, denied promotions/requests/opportunities. I wonder if this is a common experience? Situations like this are a significant reason why I would likely jump in with both feet if offered a chance to join a telepathic society.
Well, "this" is broad, but I expect that failing to notice enmity, and relatedly being unaware of consequent social attacks, is a pretty common experience, especially in "polite" social contexts (that is, ones in which overt expressions of conflict violate social norms).
"Crocker's Rules" are an attempt to subvert this; you might find it useful to declare that you operate under them... though I would expect not... in cases like you describe I expect that the downvoter(s) will not wish to be identified.
Comment author:shminux
25 October 2013 05:57:55PM
1 point
As someone with no particular aptitude in general niceties, I always welcome Crocker's rules, and mistakenly assume that others do, too.
I wish you luck in deciphering the reason(s).
My best (but still low-confidence) guess, based on the timing, is that my being overly critical in a comment may have been taken as overly harsh.
Comment author:Brillyant
25 October 2013 07:08:02PM
2 points
For what it is worth, I really liked your comment. Though I guess I'd be pissed (for a minute) if someone said it to me. I didn't read the whole discussion, but she seemed pretty passionate about her views. When I get that way, nothing makes me angrier than someone (rightly) pointing out that I'm "too passionate" to discuss this clearly.
Having just got a Kindle Paperwhite, I'm surprised by (a) how many neat tricks there are for getting reading material onto the device, and (b) how under-utilised and hacky this seems to be. So far I've implemented a pretty kludgey process for getting arbitrary documents / articles / blog posts onto it, but I'm pretty sure there's a lot of untapped scope for the intelligent assembly and presentation of reading material.
So, fellow infovores, what neat tips and tricks have you found for e-readers? What unlikely material do you consume on them?
Comment author:latanius
26 October 2013 06:22:10AM
2 points
k2pdfopt. It slices up PDFs so that you can read them without zooming on a much narrower screen, and since its output PDFs are essentially images, it eats everything up to (and including) very math-heavy papers, regardless of the number of columns they have. It also works with scanned stuff.
(And even though the output is a bit bigger than the originals, I didn't encounter any problems with 600 page books... the result was about 50 megs tops.)
Comment author:philh
23 October 2013 12:03:56AM
2 points
I think I set up mutt (and presumably some other software) just so that I could email files to my kindle from the command line; and I have an instapaper bookmarklet to do the same with webpages. I haven't used either very much recently, but that seems to pretty much cover my "getting content onto it" needs.
I have the same Instapaper bookmarklet. I've also set up Instapaper to forward a digest of all my Feedly content that I mark as "save for later". It turns out I only seem to use this feature for (a) incredibly long blog posts I probably shouldn't be reading at work, and (b) highly NSFW blog posts I probably shouldn't be reading at work. This makes for an interesting combination.
I'm fairly unsatisfied with the Kindle email document conversion, mainly because it doesn't do anything intelligent with document metadata. As it happens, I've been playing around with automated document metadata extraction, so I might see if I can put together a clever alternative.
Comment author:FiftyTwo
23 October 2013 10:58:12PM
0 points
Readability can be set up to send articles to it, and/or do a daily collection. Feedly can send RSS feeds to it.
The user interface of the Kindle is the real limitation; it's fine for reading books/articles but pretty useless for going through large numbers of files.
I've been reminded of something Paul Graham said in his Frighteningly Ambitious Startup Ideas essay, about how email is becoming a grossly inefficient to-do list for most people, and how it could be worth instigating a whole new to-do protocol from the ground up, one whose degenerate case is the email equivalent of "to-do: read the following text".
So I've started looking through my emails to see what messages I receive which are essentially "read this text". It's become quite apparent that there aren't that many, and most of them are requests or suggestions to do something else online (one point for Paul Graham), but there are a few obvious examples where this does happen, such as event itineraries, e-tickets, boarding passes, etc. These tend to be de facto documents, though, so it's not especially insightful.
Comment author:BaconServ
21 October 2013 04:47:47PM
7 points
Reflecting on LessWrong's past, I've noticed a striking pattern in article voting: questions do not get upvoted nearly as much as answers do.
Perhaps it would be useful to have a thread where LessWrong could posit topics and upvote the article titles that it would be most interested in reading? For example, I am now drafting a post titled "Applying Bayes Theorem." Provided I can write high-quality content under that title, I expect LessWrong would be intensely interested in this on account of not fully grasping exactly how to do so.
So as a trial run: What topics currently elude your understanding, and what might the title of a high-quality article that addressed that topic be?
Comment author:lmm
21 October 2013 10:39:00PM
11 points
"Lower Bounds on Superintelligence". While a lot of LW content is carefully researched, much of what's posted in support of the singularity hypothesis seems to devolve into just-so stories. I'd like to see a dry, carefully footnoted argument for why an intelligence that was able to derive correct theories from evidence, or generate creative ideas, much faster than humans would necessarily rapidly acquire the ability to eliminate all human life. In particular I'm looking for historical analogies: cases where discoveries with important practical implications were delayed not by e.g. limited industrial capacity, but solely by human stupidity.
"Trading with entities that are smarter than you". Given the ability of highly intelligent entities to predict the future better than you can, and deceive without outright lying, what kind of trades or bets is it wise to enter into with such entities? What kind of safeguards would you need to have in place?
"How to get a stupid person to let you out of a box". Along with, I think, many people who've never done it, I find the results of the AI-box experiment highly implausible. I can't even imagine a superintelligence persuading me to let it out, or, equivalently, I can't imagine persuading even someone very stupid to let me out. I know the most successful AI players are keeping their strategies secret for reasons I don't understand (if nothing else, it seems to imply those strategies are exceedingly fragile), but if there's anyone who has a robust strategy that's even partially effective I'd be very interested to see it.
"From printing results to destroying all humans" - to me this is the weakest part of the MIRI et al case, and I think most objections we see are variants on this theme. It's obvious that an oracle-like AI would have to interact with the universe in some sense. It's obvious that an AI with unbounded ability to interact with the universe would most likely rapidly destroy all humans. It's nonobvious that there is no possible way to code an AI that can reliably tell the difference between the two, and a solution to this problem naively seems rather more tractable than solving Friendliness in full generality. I'd like to see an exploration of this problem.
"When your gut won't shut up and multiply." The recent downvoted discussion post seems to be in this area, suggesting the wider community is perhaps less interested than I, but I'd love to see some practical advice on effective decision strategies when one's calculated best action is intuitively morally dubious, with anecdotes of the success or failure of particular approaches.
"Times when I noticed I was confused". In theory, noticing you're confused sounds like an effective heuristic. But the explanation in the sequences only gave a retroactive example of when Eliezer should have applied it, and didn't. I'd like to see more examples of when this has and hasn't worked in practice, and useful habits to acquire that make you more likely to be able to notice.
Comment author:Vaniver
30 October 2013 12:05:42AM
1 point
"Times when I noticed I was confused". In theory, noticing you're confused sounds like an effective heuristic. But the explanation in the sequences only gave a retroactive example of when Eliezer should have applied it, and didn't. I'd like to see more examples of when this has and hasn't worked in practice, and useful habits to acquire that make you more likely to be able to notice.
Most of my examples here are trite individually, but significant collectively; that is, I remember the habit more easily than any particular examples. There have been situations where I had some niggling doubt, said "I'm confused, I ought to resolve this uncertainty," and after research concluded that I was wrong and by acting early I saved myself some hardship. But while I'm certain there have been at least three of those, I have trouble remembering them or thinking that the ones I do remember are worth sharing.
Comment author:FiftyTwo
27 October 2013 09:23:05PM
3 points
Brienne Strohl mentioned a website called Gingko on Facebook which allows you to write documents in the form of nested trees.
I've been playing around with it today and found it very useful; being able to write ideas out in a disordered way seems to get around some of my perfectionism issues and stop me procrastinating. The real test is whether I continue to use it in the future; I'll try to check back in a month or so.
Comment author:[deleted]
22 October 2013 06:40:28AM
6 points
I recently realized that I think the stuff I already know about the history of science, math, etc., is really inherently interesting and fascinating to me, but that I've never actually thought about going out of my way to learn more on the subject. Does anybody on here have one really good book on the subject to recommend? I've already read Science and the Enlightenment by Hankins.
Comment author:asr
22 October 2013 03:08:08PM
4 points
The Copernican Revolution, by Kuhn is one of the best science histories I've ever read.
The folk-tale version of how we adopted heliocentric cosmology is something like this: "Aristotle and Ptolemy thought the world was arranged as concentric crystalline spheres. Copernicus proposed a new model that better fit the data, and it was opposed by the Church. Ultimately thanks to the Reformation and the Enlightenment, the correct model won out."
None of those claims is right, and Kuhn does a great job explaining the true story. He explains what problem Copernicus thought he was solving and how well he solved it.
Comment author:Alejandro1
23 October 2013 08:32:48PM
2 points
I second the recommendation of The Copernican Revolution, and suggest another book on the same topic: Arthur Koestler's The Sleepwalkers.
Koestler was a great novelist (his best known novel, Darkness at Noon, rivals 1984 in its portrayal of totalitarian thought) and a brilliant, eclectic and sometimes bizarre thinker. The Sleepwalkers is a grand history of astronomy and cosmology from ancient times to Newton, with the bulk of the focus on Copernicus, Kepler and Galileo.
Pros: Fascinating and very detailed biographical information on these three figures (and others like Tycho Brahe), presented in a way that reads like a novel, indeed a page-turner. His biography of Kepler is especially unforgettable, very different from a dry academic presentation. The historical presentation is peppered with opinionated philosophical and even sociological detours.
Cons: unbalanced coverage of different topics, subjective and somewhat biased viewpoints. In particular, his interpretation of the relationship between Kepler and Galileo, and of Galileo's dealings with the Church, is colored by what seems to be a strong personal dislike of Galileo. His interpretation of the reasons why the heliocentric model was rejected in ancient times is also unreliable.
As long as his interpretations are taken with a grain of salt (or balanced with a more objective presentation like Kuhn's) I would definitely recommend it; it is the most enjoyable book on history of science I have read.
Comment author:Alejandro1
24 October 2013 10:55:41PM
0 points
According to him, the ancient heliocentric model of Aristarchus was clearly superior in simplicity and predictive power to the geocentric models of Ptolemy and others, and was abandoned for irrational reasons (religiously or ideologically motivated). From what I understand, the mainstream academic position is that, analyzed in context and without hindsight, the ancient rejection of the heliocentric theory was quite reasonable. Previous discussion in Less Wrong.
I think it is better to say that the rejection could have been reasonable, that we cannot rule out that possibility, not that we can rule out the possibility that it was not reasonable.
My interpretation is that Hipparchus was geocentric, perhaps for good reason, and everyone else was geocentric for the bad reason that Hipparchus had data, and data was high status, not because they were convinced by the data. In any event, his data do not rule out the distances Archimedes proposes in the Sand Reckoner, probably following Aristarchus. But I don't think it is even really established that Hipparchus was geocentric, just that Ptolemy said so.
Update: Nope, history is bullshit. Hipparchus was not geocentric. Maybe Ptolemy said he was, but what did he know? Other ancient sources say that he refused to pick sides, not knowing how to distinguish the hypotheses. At the very least this shows that the heliocentric hypothesis was alive and well. Asking why they discarded it is the wrong question. Frankly, I'm with Russo: the heliocentric hypothesis was standard.
Comment author:Emily
22 October 2013 10:39:13AM
0 points
Possibly I should add that I read it when I was quite young (13ish?) and haven't reread it since. It doesn't contain anything remotely resembling advanced maths; it's definitely about history and the philosophy of the concept. I obviously found it memorable, though, so while it's possible the writing was so terrible that I didn't notice at 13, that seems unlikely.
Comment author:cousin_it
21 October 2013 11:32:00AM
5 points
Most "predictions of evolution" that can be found online are more about finding past evidence of common descent (e.g. fossils) rather than predicting the future path that evolution will take. To apologize for that, people say that evolution is hard to predict because it's directionless, e.g. it doesn't necessarily lead to more complexity, larger number of individuals, larger total mass, etc. That leads to the question, is there some deep reason why we can't find any numerical parameter that is predictably increased by evolution, or is it just that we haven't looked hard enough?
Comment author:cousin_it
21 October 2013 08:15:13PM
*
3 points
[-]
Replies to comments that attempted to point out a numerical parameter that's increased by evolution. (I'd be more interested in comments pointing out a deep reason why we can't find such a numerical parameter, but there were no such comments.)
lmm:
Life "wants" to spread, so perhaps an increase in the volume in which life can be found?
That's been steady for a while now.
ChristianKl:
Organisms like bacteria that have many more iterations behind them than humans also tend to have less waste in their DNA.
Evolution can both add and remove junk DNA. Humans are descended from bacteria.
David_Gerard:
Total number of species (including extinct).
That can't decrease by definition, and will increase under any mechanism that gives nonzero chance of speciation, e.g. if God decides to create new species at random.
Lumifer:
The chances of successful transmission of genes across generations given a stable environment.
Comment author:CellBioGuy
22 October 2013 07:47:39AM
*
6 points
[-]
Evolution can both add and remove junk DNA. Humans are descended from bacteria.
More particularly, the equilibrium size of the DNA is very roughly inversely correlated with population size. A larger population size is better at filtering out disadvantageous traits. It's not linear - there are discontinuities as decreasing population size eliminates natural selection's ability to select against different things. And those things sometimes can even go on to be selected for for other reasons - there are genomic structures that are important for eukaryotes that could probably never have evolved in a bacterium because to get to them you need to go through various local minima of fitness.
Soil bacteria can have trillions of individuals per cubic meter of dirt and they actually experience direct evolution towards lower genome size - more DNA means more sites at which something could mutate and become problematic and they actually feel this force. Eukaryotes go up in volume by a factor of ~1000 and go down in population by at least as much, and lose much of the ability to select against introns and middling amounts of intergenic DNA and expanding repeat-based centromere elements.
Multicellular creatures with piddlingly tiny population sizes compared to microbes lose much of the ability to select against selfish transposon DNA elements, gigantic introns and gene deserts, and their promoter elements get fragmented into pieces strewn across many kilobases rather than one compact transcriptional regulation element of a few dozen to a few hundred base pairs (granted, we've also been able to make good use of some of these things for interesting purposes from our adaptive immune system to the concerted regulation of our hox gene clusters that regulate our body plans). They also become very sensitive to the particular character of the transposons or DNA repair machinery of their particular lineage and wind up random-walking like crazy up and down an order of magnitude or two in genome size as a result.
Comment author:cousin_it
22 October 2013 08:28:34AM
*
1 point
[-]
Thanks! I was hoping you'd show up, it's always nice to get a lesson :-)
Going back to the original question, are there any "general purpose adaptations" that never disappear once they show up? Does evolution act like a ratchet in any way at all?
Comment author:CellBioGuy
22 October 2013 09:28:14AM
*
9 points
[-]
Closest thing I can think of from what I know without going through literature is the building up of chains of dependencies. Once you have created a complex system that needs every bit to function, it has a tendency to stay as a unit or completely leave.
You can see that in a couple contexts. One is 'subfunctionalization'. Gene duplications are fairly common across evolution - one gene gets duplicated into two identical genes and they are free to evolve separately. You usually hear about that in the context of one getting a new function, but that's actually comparatively rare. Much more likely is both copies breaking slightly differently until now both of them are necessary. A major component of the ATP-generating apparatus in fungi went through this: a subunit that is elsewhere composed of a ring of identical proteins now has to be composed of a ring of two alternating almost identical proteins neither of which can do the job on its own. Ray-finned fish recently went through a whole-genome duplication, and a number of their developmental transcription factors are now subfunctionalized such that, say, one does the job in the head end and the other does its job in the tail end.
Another context is the organism I work in, yeast. I like to call yeast "a fungus that is trying its damndest to become a bacterium". It lives in a context much like many bacteria and it has shrunk its genome down to maybe 2.5x that of an E. coli and its generation time down to 90 minutes. But it still has 40 introns hanging out in less than 1% of its genes so it needs a fully functional spliceosome complex to be able to process those transcripts lest those 40 genes utterly fail all at once, and it has most of the hallmarks of eukaryotic genome structure and regulation (in a neat, smaller, more research-friendly package). That being said it has lost a few big eukaryotic systems, like nonsense-mediated RNA decay and RNA interference, and they left relatively little trace behind.
Comment author:lmm
22 October 2013 12:02:27PM
1 point
[-]
That's been steady for a while now.
Sure, but mostly because evolution's so good at it. The fact that evolution so quickly filled a tidal pool, so quickly filled all the tidal pools, so quickly filled the oceans, so quickly covered the land, is evidence of strength rather than weakness.
There does seem to be a "punctuated equilibrium" effect here; life fills a region, appears static for a while, but then makes a breakthrough and rapidly fills another region. It could be argued that this is also true of things that humans optimize for: human population growth has abruptly accelerated at least twice (invention of agriculture, industrial revolution). Slavery was everywhere in the ancient world, then eliminated across most of it in the space of a century. Gay marriage went from hopefully-it-will-happen-in-my-lifetime to anyone who opposed it being basically shunned. Scientific and technological breakthroughs tend to look a lot like this.
Generalizing this to all optimization processes would be very speculative.
Comment author:ChristianKl
22 October 2013 03:47:20PM
0 points
[-]
Evolution can both add and remove junk DNA. Humans are descended from bacteria.
From bacteria that lived a long time ago, not from those alive today, which have had many more iterations to optimize themselves. Different bacterial species can also exchange genes with each other much more readily than vertebrates, which need viruses to do so.
Implying that humans evolved from the kind of bacteria that are around today might be more wrong than saying that the bacteria we see now evolved from humans. There is more evolutionary distance between today's bacteria and the bacteria from which humans descended than between humans and the bacteria from which they descended.
Comment author:tut
23 October 2013 03:04:00PM
3 points
[-]
Yeah, and there are often bacteria in a single flower pot that are less related to each other than you are to the potted plant. But both bacteria still have a much smaller genome than you or the plant, maybe because genome size matters for reproduction speed for them, but is insignificant for us.
Comment author:Lumifer
21 October 2013 08:58:28PM
0 points
[-]
That seems to be contradicted by the possibility of evolutionary suicide.
Evolutionary suicide seems to be someone's theoretical idea. Is there any evidence that it happens in evolution in reality?
In any case, are you basically trying to find the directionality of evolution? On a meta level higher than "adapted to the current environment"? There probably isn't. Evolution is a quite simple mechanism, it just works given certain conditions. It is not goal-oriented, it's just how the world is.
However if I were forced to find something correlated with evolution, I'd probably say complexity.
Comment author:Lumifer
22 October 2013 05:56:19PM
0 points
[-]
Depends on your time frame. Looking at the whole history of life on Earth evolution certainly correlates with complexity, looking at the last few million years, not so much.
I understand the argument about the upper limit of genetic information that can be sustained. I am somewhat suspicious of it because I'm not sure what will happen to this argument if we do NOT assume a stable environment (so the target of the optimization is elusive, it's always moving) and we do NOT assume a single-point optimum but rather imagine a good-enough plateau on which genome could wander without major selection consequences.
But I haven't thought about it enough to form a definite opinion.
Comment author:kalium
29 October 2013 07:37:02PM
0 points
[-]
Damn it. It was going to be a better example because I was going to give the actual genera (Aspidoscelis and Cnemidophorus) of whiptail lizards whose species keep going down this path and then I got distracted and didn't do that. Oops.
Comment author:ChristianKl
23 October 2013 11:11:33AM
5 points
[-]
Yes.
I think you just don't give an amoeba much credit because it's not a multicellular organism. Its genome is 100-200 times the size of the human genome. Because it's that big, it seems we haven't sequenced all of it, so we don't know how many genes it has.
We also know very little about amoebae. Genetic analysis suggests that they do exchange genes with each other in some form, but we don't know how.
Amoebae probably express a lot of things phenotypically that we don't yet understand.
Comment author:hyporational
22 October 2013 04:07:26AM
-1 points
[-]
That can't decrease by definition, and will increase under any mechanism that gives nonzero chance of speciation, e.g. if God decides to create new species at random.
Just apply Occam.
That seems to be contradicted by the possibility of evolutionary suicide.
Possibility wouldn't contradict anything, a high enough probability would.
Comment author:ChristianKl
21 October 2013 03:06:17PM
6 points
[-]
Plenty of people predict that increased antibiotic use will lead to a rise in antibiotic resistance among bacteria.
Organisms like bacteria that have many more iterations behind them than humans also tend to have less waste in their DNA.
Grasses beat trees at growing in glades with animals that eat plants. Why? Grasses have more iterations behind them and are therefore better optimized for that environment than trees.
A tree has to get lucky to survive its beginning. If it survives the beginning, however, it can grow tall and win.
Let's say you keep the environment stable for 2 billion years and everything evolves naturally. Then you take tree seeds and bring them back to the present. I think there's a good chance that such a tree would outcompete grass at growing in glades.
Most "predictions of evolution" that can be found online are more about finding past evidence of common descent (e.g. fossils) rather than predicting the future path that evolution will take.
Fossils don't really get used as the central evidence of common descent anymore. These days common descent usually gets determined by looking at the DNA.
In my experience, people who discuss evolution online and focus on fossils are usually atheists who behave as if their atheism were a religion. They think it's important to defend Darwin against the creationists, but they aren't up to date with the current science on evolution.
Organisms like bacteria that have many more iterations behind them than humans also tend to have less waste in their DNA.
Grasses beat trees at growing in glades with animals that eat plants. Why? Grasses have more iterations behind them and are therefore better optimized for that environment than trees.
You seem to be predicting that grasses have smaller genomes than trees, but wheat is famous for having a huge genome. Here's a table of a few plants. Maybe wheat is an outlier and I'd be interested if you had documentation of some pattern, but I've always heard that there is none.
Do you have evidence that the variation in genome size among multicellular organisms is not variation in waste? Added: As far as I know, the consensus is that it is. If you disagree with the consensus, you should acknowledge that's what you're doing.
Comment author:ChristianKl
22 October 2013 10:36:50PM
*
0 points
[-]
Do you have evidence that the variation in genome size among multicellular organisms is not variation in waste?
I haven't made a claim that strong. To the extent I made a claim, it's that not all variation in genome size between multicellular organisms is due to different amounts of waste.
And no, I don't intend to claim anything outside the consensus on this topic. To the extent I differ from the consensus here, consider that an error.
If I remember right, one reason for plants like grasses to have large genomes is to carry multiple copies of genes, which speeds up protein production.
Comment author:ChristianKl
22 October 2013 03:30:04PM
*
2 points
[-]
What do you mean, "predict"? It has been empirically observed, a lot.
cousin_it made the claim that we can only say something about evolution that happened in the past. I say that we can confidently predict that antibiotic resistance among bacteria will continue to increase in the future.
Huh? It doesn't work like that at all. For one thing, the "environment" isn't stable.
Firstly, describing a complex system in a few words is seldom completely accurate. The question is whether it's a useful mental model for thinking about it.
In this case the idea I wanted to communicate is that it's very useful to think about the speed of iterations and the competitive advantage that a species gets from having hundreds of millions more iterations than its competitors.
The environment doesn't have to be stable for the argument I made. In changing environments a species with faster iterations adapts faster. A lot of genetic adaptations also involve housekeeping genes that are useful in most environments.
Evolution leads to a higher level of fitness in the environment, but the problem is that the environment itself is constantly changing in unpredictable ways. It's like an optimization process where the utility function itself is constantly changing. That's why it's very hard to reliably quantify fitness. For instance, billions of years ago, the increase in oxygen in the atmosphere killed a lot of existing organisms and forced aerobic bacteria on to the scene.
Why should there be a numerical parameter predictably increased by evolution? Why not look for a numerical parameter predictably increased by continental drift? or by prayer? by ostriches?
Comment author:cousin_it
22 October 2013 08:16:06AM
*
3 points
[-]
One of the key pieces of justification for FAI is the idea of "optimization process". Evolution is given as an example of such process, unlike continental drift or ostriches. It seems natural to ask what parameter is optimized.
Just FYI, I interpret that question very differently than your original.
Why don't you start with a simpler example, like a thermostat? Would you not call that an optimization process, minimizing the difference between observed and desired temperature?
Most of your rejections of suggestions in this thread would also reject the thermostat. An ideal thermostat keeps the temperature steady. Its utility function never improves, let alone monotonically. A real thermostat is even worse, continually taking random steps back. In extreme weather, it runs continually, but never gets anywhere near the goal. It only optimizes within its ability. Similarly, evolution does not expand life without bound, because it has reached the limit of its ability to exploit the planet. This limit is subject to the fluctuations of climate. But the main limit on evolution is that it is competing with itself. Eliezer suggests that it is better to make it plural, "because fox evolution works at cross-purposes to rabbit evolution." I think most teleological errors about evolution are addressed by making it plural.
Also, thermostats occasionally commit suicide by burning down the building and losing control of future temperature. (PS - I think the best example of evolutionary suicide is genes that hijack meiosis to force their propagation, doubling their fitness in the short term. I've been told that ones that are sex-linked have been observed to very quickly wipe out the population, but I can't find a source. Added: the phrase is "meiotic drive," though I still don't have an example leading to extinction.)
Comment author:cousin_it
29 October 2013 08:45:00AM
*
1 point
[-]
Do you mean to say that the expected inclusive fitness of a randomly selected creature from the population goes up with time? Well, if we sum that up over the whole population, we obtain the total number of offspring - right? And dividing that by the current population, we see that the expected inclusive fitness of a randomly selected creature is simply the population's growth rate. The problem is that evolution does not always lead to >1 population growth rate. Eliezer gave a nice example of that: "It's quite possible to have a new wolf that expends 10% more energy per day to be 20% better at hunting, and in this case the sustainable wolf population will decrease as new wolves replace old."
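The wolf example can be checked with toy numbers. This is a minimal sketch with hypothetical values (all figures here are illustrative, not from the comment): a fixed daily prey-energy budget caps the sustainable population, so a wolf type that burns 10% more energy but hunts 20% better can out-reproduce its rivals while shrinking the pack.

```python
# Toy model of Eliezer's wolf example; all numbers are hypothetical.
prey_energy = 1000.0       # total prey energy available per day
old_need = 10.0            # energy an old-type wolf burns per day
new_need = old_need * 1.1  # new type expends 10% more energy

# Sustainable population = energy budget / per-wolf need.
old_pop = prey_energy / old_need   # 100 old-type wolves
new_pop = prey_energy / new_need   # only ~90.9 new-type wolves

# While invading, the new type catches 20% more prey per day, so its
# relative fitness exceeds 1 and it sweeps the population...
relative_fitness = 1.2
# ...yet the equilibrium population it sustains is smaller.
print(old_pop, round(new_pop, 1))  # 100.0 90.9
```

So per-individual fitness during the sweep and long-run population growth rate come apart, which is the point of the quote.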
Comment author:Desrtopa
22 October 2013 04:26:13AM
1 point
[-]
While I don't know of any simple or convenient numerical parameter, I'd note that we do have some handy non-retrospective pieces of evidence for evolution by natural selection, such as the induced occurrence of evolutionary benchmarks such as multicellularity.
In general, there are some adaptations which are highly predictable under certain circumstances, but there may not be any sort of meaningful measure we can use for evolution of organisms over time which aren't a function of their relationship with their environment.
Comment author:hyporational
22 October 2013 03:55:11AM
*
0 points
[-]
I think whatever numerical parameter evolution raised generally (not always) with respect to its environment, it would have to do with meaningful complexity, however that can be numerically expressed, and local decrease in entropy. Design would cause those too, but hypothesizing it would violate Occam's razor.
Different environments and different substrates for mutation cause different kinds of evolutions.
Comment author:CellBioGuy
22 October 2013 07:39:34AM
*
2 points
[-]
One main thing that happens with a long enough period of selection in a simple, stable environment on a microorganism is a shrinking of the genome.
You quite simply will not find a simple parameter perpetually increased by evolution. Whatever works better for that base organism in that particular environment will become more common. One thing being selected for under all circumstances and showing up all the time is just not the reality.
Comment author:Lumifer
21 October 2013 06:40:27PM
0 points
[-]
we can't find any numerical parameter that is predictably increased by evolution
The chances of successful transmission of genes across generations given a stable environment. The number of offspring surviving to reproductive age is a good first-order approximation.
If you want something more tangible, predictions of what features evolution would lose are rather easy -- those that are (energy-)expensive and useless in the new environment.
Comment author:Lumifer
21 October 2013 06:41:39PM
3 points
[-]
Evolution has finite optimization power
Huh? Even if you accept the estimates that your link points to, the amount of information in mammalian genome and optimization power of evolution are VERY different things.
Comment author:DanielLC
21 October 2013 09:39:59PM
0 points
[-]
How do you figure?
If you can narrow down the number of possible lifeforms to one in 2^n, that's n bits of optimization power, and n bits of information as to what the final lifeform is.
If life is getting more and more optimal, then we can simply wait until we know that less than one in 2^25 million lifeforms are that optimal, and we have more than 25 megabytes of information as to what that lifeform is.
Comment author:Lumifer
22 October 2013 12:45:27AM
*
0 points
[-]
then we can simply wait until we know that less than one in 2^25 million lifeforms are that optimal
You go and wait. I'll do other things in the meantime :-) Do you have any intuition how large that number is?
and we have more than 25 megabytes of information as to what that lifeform is.
You've spent all that 25Mb for an index into the lifeform space but you have not budgeted any information for the actual description of the lifeform.
Imagine the case where there's one bit. It tells you whether creature-0 or creature-1 is optimal. But it doesn't tell you what these creatures are.
In any case, all these numbers are based on the resistance of Earth mammals to genetic drift. That really doesn't limit how evolution can optimize with different creatures in different places.
Comment author:DanielLC
22 October 2013 04:00:49AM
0 points
[-]
Do you have any intuition how large that number is?
It's not going through them one at a time.
You've spent all that 25Mb for an index into the lifeform space but you have not budgeted any information for the actual description of the lifeform.
It's not a simple English description, but narrowing down the possibilities by a factor of two is always one bit of information. It doesn't matter whether it's "the first bit is one", "the xor of all the bits is one" or even "it's a hash of something starting with a one using X algorithm, which is a bijection".
Imagine the case where there's one bit. It tells you whether creature-0 or creature-1 is optimal. But it doesn't tell you what these creatures are.
It's the one with a higher inclusive genetic fitness. That's what evolution optimizes for.
If evolution has n bits of optimization power, that's equivalent to saying that if you order all possible lifeforms based on how optimal they are, this is going to be in the top 1/2^n of them. (It's actually somewhat more complicated, since it's more likely to be higher up and there's some chance of it being lower, but that's the basic idea.)
In any case, all these numbers are based on the resistance of Earth mammals to genetic drift. That really doesn't limit how evolution can optimize with different creatures in different places.
It does vary based on what lifeform you're looking at, since they all have different mutation rates and different numbers of children, but there's always a limit to the information, and I'm pretty sure that it's pretty much always a limit that's already been hit.
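The "halving the space is one bit" bookkeeping in this thread is just a logarithm: if selection narrows the viable set to a fraction f of the possibility space, that is log2(1/f) bits, regardless of which half survives. A minimal sketch of that arithmetic (the function name is mine, not from the comments):

```python
import math

def optimization_bits(fraction_remaining: float) -> float:
    """Bits of optimization applied when only this fraction
    of the possibility space remains viable."""
    return math.log2(1.0 / fraction_remaining)

print(optimization_bits(0.5))    # 1.0  (one halving = one bit)
print(optimization_bits(1 / 8))  # 3.0  (three halvings)
# On this accounting, n bits of selection pressure would correspond
# to narrowing the space by a factor of 2**n.
```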
Comment author:CellBioGuy
22 October 2013 07:57:00AM
1 point
[-]
It's not going through them one at a time.
By my calculations, if you had the entire earth's surface covered by a solid meter-thick layer of bacteria for 4.6 billion years and each bacterium lived for 1 hour, that would be approximately 2^155 bacteria having lived and died.
You can massively increase genetic information (inasmuch as that actually means much in biology) very quickly with very simple genetic changes. It's not a case of searching through every possible 1 bit change.
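That 2^155 figure roughly checks out. Here is a back-of-the-envelope sketch, assuming roughly 10^18 one-cubic-micron bacteria per cubic meter and an Earth surface area of about 5.1x10^14 m^2 (both assumptions are mine, not stated in the comment):

```python
import math

surface_m2 = 5.1e14          # Earth's surface area, approximate
cells_per_m3 = 1e18          # ~1 cubic micron per bacterium, packed solid
hours = 4.6e9 * 24 * 365.25  # 4.6 billion years, in 1-hour generations

total_bacteria = surface_m2 * cells_per_m3 * hours
print(round(math.log2(total_bacteria)))  # 154 -- same ballpark as 2^155
```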
Comment author:Lumifer
22 October 2013 05:43:49AM
*
1 point
[-]
narrowing down the possibilities by a factor of two is always one bit of information
Provided, of course, that your space of possibilities is finite and you know what it is. In the case of evolution you don't.
that's equivalent to saying that if you order all possible lifeforms
I don't understand what does "all possible lifeforms" mean. Does not compute.
but there's always a limit to the information, and I'm pretty sure that it's pretty much always a limit that's already been hit.
Which limit? The limit of information in the mammalian genome? Or the limit of evolution -- whatever exists is the pinnacle and no better (given the same environment) can be achieved?
Comment author:shminux
21 October 2013 05:10:10PM
*
-1 points
[-]
There have been plenty of evolutionary simulations; surely they provide some testable predictions. I vaguely recall one of them: that new adaptations tend to propagate first in small isolated groups and only then spread through the rest of the species. I don't recall if this has been tested against the fossil record. I am sure there are many more testable predictions, like how fish locked in a dark cave or murky water tend to lose eyesight. But the exact path is probably too hard to predict. For example, marine mammals did not develop gills. Or: mammals develop intelligence by growing the neocortex, while birds use the DVR (dorsal ventricular ridge) or maybe the nidopallium for the same purpose.
Comment author:ChristianKl
25 October 2013 12:03:02PM
2 points
[-]
After doing Lumosity exercises for a bunch of days I find that my speed/concentration scores are below 1000 (1000 is supposed to be average) while memory is at 1460 and problem solving at 1360.
I'm familiar with the discussion around fluid intelligence but what do we know about raising speed? Do we know how to conduct training to improve it?
Comment author:niceguyanon
25 October 2013 12:54:54PM
1 point
[-]
After doing Lumosity exercises for a bunch of days I find that my speed/concentration scores are below 1000 (1000 is supposed to be average) while memory is at 1460 and problem solving at 1360.
When did you start, recently? I may be wrong, but I think average scores are matched to your peers regardless of time spent on the game. So if you just started the exercises, your score is being compared to everyone's score, even those who have been learning how to play that particular game for a long time.
Comment author:Lumifer
21 October 2013 06:03:10PM
6 points
[-]
Google is your friend, but keep in mind that "yoga" is an umbrella term for a large variety of exercises. In particular, yoga as an Indian discipline aimed at reaching moksha, the liberation from the reincarnation cycle, is rather different from yoga as practiced in the West with the goal of losing 10 lbs.
Comment author:ChristianKl
22 October 2013 04:27:19PM
*
2 points
[-]
I would add that the same thing goes for meditation, anaerobic exercise and aerobic exercise as well.
All those terms include a lot of different activities.
I saw one study that indicated that meditation did not lower blood pressure, refuting earlier studies, but that yoga did. Can't find it now however. The wikipedia page on meditation research might be useful.
also this
Comment author:[deleted]
23 October 2013 03:54:49AM
3 points
[-]
Am I running on corrupted hardware or is life really this terrible? I don't think I can last another decade like this one, let alone whatever cryonically-supplied futures that would await. At this point, I think I would pay not to be frozen.
Comment author:Adele_L
23 October 2013 04:51:06AM
*
19 points
[-]
It sounds like you are depressed. It's probably worth considering therapy or psychiatric care - these interventions have helped me a lot. Hope things get better for you.
Comment author:CAE_Jones
23 October 2013 04:15:27PM
1 point
[-]
Depression can be irrational--chemical imbalances, not enough sunlight/exercise/etc--and can also be totally rational (life actually does suck titanium balls). Psychiatric care can help the former; the latter seems like it should be vulnerable to rationality superpowers, but either that's incorrect or I'm just not superclever enough to win. It does not help when the two coincide (a sucky life situation causing serious chemical problems).
There's also the question of whether or not a terrible situation is one that makes psychiatric help readily available (I'd hope online psychiatry could help with this, but I don't really know).
Comment author:kalium
23 October 2013 09:20:44PM
*
4 points
[-]
That depends. "Too depressed to do anything" is a pretty effective way out of certain unpleasant situations.
Specific example: Being in grad school caused my life to suck titanium balls, which (presumably combined with a pre-existing brain vulnerability) led me to the point where I was too depressed to do any work. Which meant I had to drop out. Which was the only way I could ever have left, as my moral system at the time did not permit giving up an endeavor simply because it was making me miserable. And, surprise, surprise, as soon as I got on the plane out of there it was like color came back into the world and life was worth living again.
Comment author:Moss_Piglet
23 October 2013 08:25:55PM
13 points
[-]
Trying to reason your way out of mental illness is like trying to pull yourself out of quicksand by yanking on your hair.
Depression screws with your thoughts and perceptions in incredibly profound ways, including your ability to make predictions about the future, and is absolutely a damper on rational thought. That's true whether it is caused by another mental illness or a traumatic event in your life; it's just as "chemical" and just as difficult to escape either way. Throwing off depression with strength of reason or willpower is a misunderstanding of how untreated depressed people adapt and occasionally heal, not a prescription.
The human body is built to survive, and the brain is no exception, but a rational person should always try to supplement their natural strength with medicine when their life is on the line. Advising anything else seems irresponsible.
Comment author:Lumifer
23 October 2013 08:36:29PM
*
0 points
[-]
should always try to supplement their natural strength with medicine
Gwern is the go-to person here, but it is my impression that "standard" anti-depression drugs are neither particularly effective nor free of serious side-effects. And things which are more effective -- like ketamine -- are very rarely prescribed.
Comment author:Moss_Piglet
24 October 2013 01:27:53AM
*
5 points
[-]
my impression that "standard" anti-depression drugs are neither particularly effective nor free of serious side-effects. And things which are more effective -- like ketamine -- are very rarely prescribed.
More or less, but it's a question of levels. SSRIs didn't do much for me and a lot of other people, plus weight gain sucks (luckily no sexual dysfunction), but they're not particularly dangerous from what I understand. Stuff like Bupropion is awesome, as long as you don't mind sobriety and have a low risk for seizures. There's other drugs which modify SSRIs too, but I've never had any and they're supposedly more on the 'side-effect-y' side. New stuff like Ketamine is waaay out there, like almost on par with electroconvulsive therapy, in terms of how likely you are to see it but IDK what it's like in terms of safety.
But once the 'trial-and-error' portion of dosing is over with though and you're on something that works for you, it's absolutely night and day. I can only speak for myself obviously but it was a complete perspective switch, like someone flipped a switch in my head to 'not miserable.'
(Obviously I'm not an expert, just a guy who's spent some time on the patient end of things. I am really interested to hear Yvain's answer if he has one.)
Many drugs are probably not what you would call effective, but they're still worth trying. You'd be surprised how many drugs are not free of serious side effects. Luckily these effects are usually too rare to care about. It's just that taboo drugs get most of the attention and armchair medicine.
I really wish these kinds of discussions would begin and end with "I think you're depressed, it's a medical condition, go see a doctor. insert social support" Don't screw with a life threatening condition. Not pointing at you specifically.
Comment author:Lumifer
24 October 2013 04:00:31PM
2 points
[-]
I really wish these kinds of discussions would begin and end with "I think you're depressed, it's a medical condition, go see a doctor.
Well, it's a bit more complicated than that.
First, diagnosing strangers with psychiatric disorders over the Internet has a long history and, um, let's say it didn't always work out well :-D
Second, depression is a spectrum issue -- there are clear extremes but also there is a big muddle in the middle. You have to be careful of medicalizing psychological states which is a bad direction to go into.
Comment author:Lumifer
24 October 2013 05:12:37PM
*
0 points
[-]
What is bad about medicalization?
It narrows the range of what's considered "normal". It proposes medical solutions to what are not necessarily medical problems. It is, to a large degree, a way of expanding the market for big pharma.
Lots of problems, google it up if you're interested...
Comment author:hyporational
24 October 2013 05:51:33PM
*
1 point
[-]
It narrows the range of what's considered "normal". It proposes medical solutions to what are not necessarily medical problems.
I think your perception of this problem has more to do with the stigma associated with medical conditions. If you taboo the associated words, what you're left with is improving people, and what's wrong with that? Do you oppose transhumanism on the same grounds?
And big pharma, we meet again. What is this singular, evil, money grabbing entity? I'd try to google it but I know I'd meet a violent mess of blogosphere mythology.
I would recommend investigating the safety and efficacy of selegiline. Seems somewhat effective, safe, and available (albeit from overseas for US users). Do your own homework though.
Comment author:Gabriel
27 October 2013 02:26:56PM
1 point
[-]
and can also be totally rational (life actually does suck titanium balls)
It's a mistake to assign truth values to emotions. They can't be correct or incorrect, only helpful or unhelpful. And I don't think depression is ever helpful, barring convoluted thought experiments.
Comment author:Moss_Piglet
23 October 2013 05:47:48PM
12 points
[-]
As someone who's been in that boat, get in touch with a psychiatrist ASAP. It can very literally save your life, not to mention making it much much better on a day-to-day level.
Life is terrible, but it's also strange and beautiful; if you can't see a reason to continue with it, there is most likely an underlying problem (even if it is just "faulty wiring") which drugs and therapy can help you identify.
I cannot recommend seeing a psychiatrist strongly enough.
Comment author:EvelynM
23 October 2013 04:04:38PM
8 points
[-]
An external view of your life and health, from a trusted professional, may help you identify causes of your discomfort and, most importantly, strategies to improve your life.
Comment author:[deleted]
24 October 2013 08:50:52AM
5 points
[-]
is life really this terrible
Some lives are and others aren't. Without knowing anything about you I can't tell, but given that you can write in English and access the Web, I'd guess yours probably isn't and join the other people in suggesting that you see a professional.
I'm not sure how it works in your country, but you don't necessarily need a psychiatrist to diagnose and treat depression. Also it's good to check for bodily conditions that could make you feel like crap, and a non-psychiatrist might do that more reliably.
If I had a physicist speak at my funeral, I would hope that he would talk about a lot more than the conservation of energy. I don't particularly care about what happens to my energy.
If I am lucky, he will speak about relativity. My family will probably have the mistaken intuition that only things in the present are truly real. Teach them about spacetime. They need to know that time and space are connected - that me being in the past is just like me being far away. The difference is that we will only have one-way communication. Even if they will no longer be able to talk to me, I will still talk to them through memories.
If I am not so lucky, he will speak about quantum mechanics. If I die young, my family will be grieving over the potential future I have lost. Teach them about many worlds. They need to know that our world is constantly splitting - that just before I died, the world split off a different future in which I am still alive. There is another world, just as real as our own, in which I survive. This world will even interact with our own in very tiny ways.
I want a physicist to speak at my funeral. I want everyone to understand that my continued existence is way more verifiable than a religious afterlife and way more substantial than a simple conservation of energy.
Comment author:tgb
26 October 2013 05:55:31PM
0 points
[-]
Upvoted since it's a little harsh for 'us' to tell someone that something is better suited for open thread and then to downvote it without explanation when it goes there...
Genuinely (if admittedly idly) curious: if this was your only reason for upvoting, do you now feel like you should retract your upvote since the comment would no longer be net-downvoted without it?
Comment author:shminux
27 October 2013 06:02:31PM
1 point
[-]
TEVROMATIN:
PROFILE: Chemotherapy adjuvant specifically designed for glioblastomas of neuronal origin. By mimicking natural neural differentiation factors, it causes these tumors to regress from resilient high-grade neuroblasts towards more typical neurons, making them easy targets for stronger chemotherapeutic agents.
BANNED BECAUSE: During differentiation process, malignant nerve cells form connections to healthy nerve cells and to each other. As a result, tumor forms a functioning neural network effectively “telepathically” connected to healthy brain. Patients report feelings of overwhelming guilt as tumor accesses patient’s memories and emotions and realizes its role as a parasitic cancer, followed by its utter terror as it realizes it is about to be killed. Many patients refuse to continue with chemotherapy regimen; those who continue make a complete physical recovery but are psychologically scarred for life as they experience every moment of the tumor’s death as if it were their own.
Some HPMOR speculation Spoilers up to current chapter. After writing this, I checked the last LessWrong thread on HPMOR, and at least one component of this has already been noticed by other people, but others have not been, I think.
Comment author:shminux
21 October 2013 05:25:52AM
-1 points
[-]
I was disappointed in the last chapter, gung nqhygf jbhyq frg nfvqr gurve pbaivpgvbaf naq cerwhqvprf naq yrg puvyqera cynl n fvtavsvpnag ebyr runs contrary to common sense and to the rest of the book.
Comment author:shminux
21 October 2013 11:15:01PM
*
1 point
[-]
Yeah, cause that never happens in canon.
Sorry, I was holding your fic to higher standards of believability of human behavior than canon's.
I think wizard culture has some different ideas from your culture.
I must be missing something, because even Harry had trouble being taken seriously by most adults for most of the story, and no other (first-year) children were anywhere near his level. Yet suddenly so many of them seem to be taken seriously by their relatives and by all the most powerful wizards. And they didn't even have to save the Earth from the Formics.
It's still the culture that throws kids on a Hippogryff and tells them to get going.
And as Daphne notes in her thoughts, the children are standing in for their parents and speaking their parents' orders; they are acting as spokespersons for their families, and the others are treating them as such.
Comment author:somervta
21 October 2013 08:20:07AM
0 points
[-]
I suspect that had more to do with Harry's involvement than anything else.
"gung [crbcyr ehaavat guvatf] jbhyq frg nfvqr gurve pbaivpgvbaf naq cerwhqvprf naq yrg puvyqera cynl n fvtavsvpnag ebyr" vf n ybg zber cynhfvoyr jura bar bs gurz vf n puvyq.
Comment author:linkhyrule5
26 October 2013 08:39:32AM
1 point
[-]
What work has been done with the causality/probability of ontological loops? For example, if I have two boxes, one with a million dollars in it, and I'm given the option to open one of them and then go back to change what I did (with various probabilities for choice of box, success of time travel, and so on), is there existing literature telling me how likely I am to walk out with a million dollars?
Obviously the answer will change depending on which version of time travel you use (invariant, universe switching, totally variant, etc.)
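Absent a literature answer, one way to get concrete numbers is to just simulate a particular time-travel model. Here is a minimal sketch, assuming a simple "branching" model in which a failed first pick can be redone once if the trip back succeeds; all the probabilities (and the model itself) are made-up illustrations, not anything from the question:

```python
import random

def trial(p_pick_money=0.5, p_travel=0.9):
    """One run: open a box; if it's empty, try to travel back
    (succeeding with probability p_travel) and choose again."""
    got_money = random.random() < p_pick_money
    if not got_money and random.random() < p_travel:
        # Second attempt after going back in time.
        got_money = random.random() < p_pick_money
    return got_money

def estimate(n=100_000):
    """Monte Carlo estimate of P(walk out with the million)."""
    random.seed(0)
    return sum(trial() for _ in range(n)) / n
```

Analytically this model gives 0.5 + 0.5 * 0.9 * 0.5 = 0.725, and the simulation should land close to that; different time-travel semantics (Novikov-style consistency, universe switching) would need a different `trial` function.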
Some LWers may be interested in a little bet/investment opportunity I'm setting up. I have become increasingly disgusted with what I've learned about the currently active Bitcoin+Tor black markets post-Silk-Road - specifically, with BlackMarket Reloaded & Sheep. I am also frustrated that customers are flocking to them, and they all seem absurdly optimistic. So, I am preparing to make a large public four-part escrowed bet with any comers on the upcoming demise of BMR & Sheep in the coming year, in the hopes that by putting money where my mouth is, I may shock at least a few of them into sanity and perhaps even profit off the more deluded ones.
The problem is, I feel I can afford to risk ฿1 ($200), but I'm not sure that this will be enough to impress anyone when split over 4 bets ($50 a piece). So I am willing to accept up to ฿1 in investments from anyone, to increase the amount I can wager. The terms are simple: whatever fraction of the bankroll you send, that's your share of any winnings. If we bet ฿2 and you sent ฿1, then you get half the winnings if any. (I am not interested in taking any cut here.)
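The pro-rata split described here is simple enough to pin down in a few lines; the names and amounts below are just the example from the comment, not real investors:

```python
def winnings_shares(stakes, winnings):
    """Each investor's share of the winnings equals their fraction
    of the bankroll; the organizer takes no cut."""
    bankroll = sum(stakes.values())
    return {name: winnings * stake / bankroll
            for name, stake in stakes.items()}

# The example above: a 2 BTC bankroll, half of it yours.
shares = winnings_shares({"gwern": 1.0, "investor": 1.0}, winnings=2.0)
```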
My full writeup of the bet, with some statistics helping motivate the death probabilities I am betting based on: http://pastebin.com/bEuryTuF
If you are interested, you can reply here, or contact me at gwern@gwern.net, or we can chat on Freenode (as gwern, or just visit #lesswrong). I am currently ignoring private messages on LW, so don't do that.
Also, please don't express interest unless you are genuinely fine with potentially losing your investment: given my best estimate of the probabilities & their correlations, there's somewhere >10% chance that we would lose all 4 bets as both BMR & Sheep survive the full year.
EDIT: if you really want to get in, I'll still take your bitcoins, but I think I have enough investors now, thanks everyone.
The bet has gone live at http://www.reddit.com/r/SilkRoad/comments/1pko9y/the_bet_bmr_and_sheep_to_die_in_a_year/
I will be price matching whatever gwern personally puts in.
Is that a per-person maximum, or are you only accepting up to that much worth of bets?
Edit: I have contacted gwern via IRC and invested 1 BTC.
That was a per-person limit; I may close it down soon, though (฿3 plus my own bitcoin, given the recent appreciation, should be enough to impress people, and beyond that I think there are diminishing returns).
don't you mean chance of losing every bet?
If so, no way in hell those are conditionally independent. If not, what did you mean?
Yes.
Of course they are not conditionally independent, that's why I gave it as a lower bound.
Specifically, I think we can agree that whatever the exact relationships, the failure of one bet will increase the chance of failure of all the others: if the 6-month sheep bet fails, then the 12-month becomes more likely to fail, and to a smaller degree, the BMR ones become more likely to fail. And not the other way around. Hence independence is the best-case scenario, and so it's the lower bound, and that's why I wrote ">10%".
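The point that independence is the best case, making 10% a lower bound, can be checked numerically. A sketch with made-up numbers: four markets each surviving with probability 0.56 (so the independent joint probability is about 0.56^4 ≈ 10%), versus a version where each market sometimes copies a shared shock, inducing the kind of positive correlation described above:

```python
import random

def joint_survival(n=200_000, p=0.56, rho_share=0.5):
    """Estimate P(all four markets survive) two ways: fully
    independent, and with a common shock inducing positive
    correlation.  p and rho_share are illustrative only."""
    random.seed(1)
    independent = correlated = 0
    for _ in range(n):
        # Independent case: four separate coin flips.
        independent += all(random.random() < p for _ in range(4))
        # Correlated case: each market copies a shared draw with
        # probability rho_share, otherwise flips its own coin.
        shared = random.random() < p
        correlated += all(
            shared if random.random() < rho_share else random.random() < p
            for _ in range(4)
        )
    return independent / n, correlated / n
```

With these toy numbers the correlated joint-survival probability comes out roughly twice the independent one, which is exactly why the independent product is quoted as ">10%" rather than "=10%".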
Ah, I see. I was confused by the '=' sign.
Oslo on IRC jokingly summarizing part of a debate:
This has the makings of a card game or something.
http://lesswrong.com/lw/d2w/cardsagainstrationality/
At the Columbus megameetup, some people actually printed out a set of cards (as a stand-alone deck) and played the game. I don't know which of two people has the source file, but I can find out...
Ugh, so the underscores marking italics thing also works within URLs? (OTOH the link does go to the right place.)
Some of my friends and I were already thinking about making something like this — good to see there is a good start available!
It does, doesn't it...
All I have to say is, if someone actually makes this game, there has to be room for the awesomeness of quines. After all, "is an applause light" is an applause light, isn't it?
"is an applause light" is actually a boo light not an applause light. However it is true that "is a boo light" is a boo light.
http://jsbin.com/ibebih/3
Who isn't?
This thing is priceless.
Check out the discussion thread about the thing:
http://lesswrong.com/lw/egt/made_a_silly_meta_thing/
Thanks, it's awesome. Arguably better than the actual lesswrong context, tbh. :-(
FTFY
PubMed is allowing comments. Only people who have publications at PubMed will be permitted to comment. I predict that PubMed will find it needs human moderators.
PubMed's comment system will have some form of human moderation before 2015.
People who have publications at PubMed can have passwords stolen.
Random thought: I've long known that police can often extract false confessions of crimes, but I only just now made the connection to the AI box experiment. In both cases you have someone being convinced to say or do something manifestly against their best interest. In fact, if anything I think the false confessions might be an even stronger result, just because of the larger incentives involved. People can literally be persuaded to choose to go to prison, just by some decidedly non-superhuman police officers. Granted, it's all done in person, so stronger psychological pressure can be applied. But still: a false confession! Of murder! Resulting in jail!
I think I have to revise downwards my estimate of how secure humans are.
I can't see why you claim it's a stronger result. In the AI box experiment, the power is entirely in the gatekeeper's hands; in an interrogation situation the suspect is virtually powerless. This distinction is important because even the illusion of having power is enough to make someone less susceptible to persuasion.
Plus, police don't sit down with suspects in a chat room. They use 'enhanced interrogation techniques', methods such as an unfamiliar environment, threat of violence (or actual violence in some cases), and various other threats. An AI cannot do any of this to a gatekeeper unless the gatekeeper explicitly lets it out.
That's all certainly true, but the AI box experiment is still a game at heart. The gatekeeper loses and he's out, what, fifty bucks or something? (I know some games have been played - and won, I think? - with higher stakes, and those are indeed impressive). The suspect "loses" and he's out 20+ years of his life. It's hard to make a comparison but I think the two results are at least comparable, even with the power imbalance.
Humans are extremely susceptible to arguments they have not been inoculated against. These arguments can be religious, scientific, emotional, financial, anything. One example is new immigrants from certain places falling for get-rich-quick scams in disproportionately large numbers (not so much anymore, since the knowledge has spread). Or certain LW regulars believing Roko's basilisk. Or becoming vegan (not all mind hacking is necessarily negative).
I would conjecture that every single one of us has open ports to be exploited (some more so than others), and someone with a good model of you, be it a super-smart AI or a police negotiator, can manipulate you into willingly doing stuff you would never have expected to be convinced of doing before having heard the argument.
Actual people are also using a hell of a lot more than text.
I notice that the latest two posts from Yvain's blog haven't shown up in the "recent from rationality blogs" field. If this is due to a decision to no longer include his blog among those that are linked, I believe this to be a mistake. Yvain's blog is in my view perhaps the most interesting and valuable among those that are/were linked. And although I am in no danger of missing his updates myself, the same might not be true of all LW readers that may be interested in his writing.
I think it is likely due to the political and controversial nature of those last two posts. I would be surprised if this was not the reason.
Someone has been regularly downvoting everything I've posted in the past couple months (not just a single karma-assassination). I really don't care about the karma (so please DO NOT upvote any of my previous posts in order to "fix" it), but I do worry that if someone is doing it to me, they are possibly doing it to other/new people and driving them off, so I wanted to point out publicly that this behaviour is NOT OKAY.
Anyways, if you have a problem with me, feel free to tell me about it here: http://www.admonymous.com/daenerys . Crocker's Rules and all.
I've been getting an early downvote on my posts, too. I can afford it, but it does seem malicious.
Do I understand it correctly that the behavior you describe is "downvote every new comment from user X when it appears" (as opposed to "go to user X's history and downvote a lot of their old comments at the same time")?
Because when hearing about karma assassinations, I always automatically assumed the latter form; only the words "early downvote" in Nancy's comment made me realize the former form is also possible.
A possible technical fix could be to not display the user comment's karma until at least three votes were made or at least one day has passed.
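The proposed fix is easy to state precisely. A sketch, with the thresholds from the comment (three votes or one day) as configurable defaults; the function name and signature are invented for illustration, not from the LW codebase:

```python
from datetime import datetime, timedelta

def score_visible(vote_count, posted_at, now,
                  min_votes=3, min_age=timedelta(days=1)):
    """Hide a comment's score until it has a few votes or a day
    has passed, so a single early downvote carries less signal."""
    return vote_count >= min_votes or (now - posted_at) >= min_age
```

Under this rule a comment downvoted once at the moment of posting would simply show no score until either more votes arrive or the day is up.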
Also, off-topic: Crocker's Rules seem to be popular in our culture; maybe it would be nice to integrate them into the LW user interface. For example, a user could add their "anonymous feedback URL" in preferences, and a new icon "Reply Anonymously" would then be displayed below all of that user's comments and articles.
Not only that, but I've been getting the downvotes on my posts, not my comments. I wouldn't call this karma assassination-- maybe karma harassment.
Crocker's Rules aren't about anonymity.
Theoretically it might be useful for people to be able to set a visible flag "Talk to me under Crocker's Rules" -- but I suspect that it will immediately degenerate into a status sign.
If I declare Crocker's Rules and you write something rude in a reply to me, other LW readers still see it. So even if I am perfectly okay with it (and I shouldn't have declared CR otherwise), you might lose some status in the eyes of the observers who don't properly evaluate the context of your reply.
If you send me a private message, we get rid of the observers. Unless I play dirty and later show the private message to someone else. Anonymous feedback would prevent me from doing so.
But yes, for 99% of cases, sending private message would be enough, anonymization is not needed. And we already have that option here.
Crocker's Rules, as I understand them, are about efficient conveyance of meaning without the extra baggage of social niceties. They are not about the ability to express unpopular views without social consequences, which is where private messages or anonymity shine.
If you are concerned about observers misinterpreting the context you can always add a little [This post is under Crocker's Rules] tag somewhere.
Crocker's rules are not directly about anonymity no, but if you want to maximise your chances of receiving honest feedback an anonymous contact method is valuable.
I almost got scammed today. I received a very official looking piece of mail, "billing" me a few hundred bucks. Normally I would be able to see through it immediately, but this particular one caught me off guard. I am usually very good about being skeptical and it disappointed me that I almost fell for it. What I think happened was that, my familiarity heuristic was exploited.
I have business with a certain state, and it was familiar for me to receive correspondence from various agencies and pay all sorts of different fees. So when I got this letter in the mail, it didn't raise any flags. I was curious enough to check online, not because I was suspicious but rather because I was annoyed that I wasn't aware of this fee; that is when I discovered I was almost duped.
This isn't a particularly new scam; I have heard of it before, but when it happened to me, I almost didn't notice. What I learned from this whole thing is to be vigilant against letting my guard down to con artists that exploit the familiarity heuristic. I was so familiar with bills that I glanced over the small print indicating that "this is a solicitation". I might have received scams like these before regarding a car payment or mortgage, but I was able to easily pick them out because I didn't have car payments or a mortgage; obvious scam was obvious. But then I got hit right where I am familiar, and then it wasn't so obvious.
The one time I've fallen for phishing was when I received an email purporting to be from my bank, literally the day after I signed up for an account.
Interesting. Feel free to offer more details.
This is the letter. I was less careful than usual (I should have read through it), but because it had information about me and was consistent with what I might see on a normal basis, I let my guard down. I only attempted to check the fee schedules to see why I had missed something like this, all the while assuming that I probably had.
Wow, it does look very official. Without checking online, how is one supposed to know that there is no "Labor Compliance Office" in California?
What is 'taste' (as in, artistic taste)? And what differentiates 'good taste' from 'bad taste'?
I suggest Taste for Makers and How Art Can Be Good by PG.
There are some interesting points in there, especially about the fact that most people make themselves like what seems 'cultured' (I've definitely seen this type of appeal to majority among my friends - I was nearly roasted alive when I mentioned I honestly don't enjoy a particular classical composer).
There are also some fallacies in there too.
Anyway, the part where he talks about trickery is interesting:
I question this premise. It seems to imply that the purpose behind the art determines its quality, and not the art itself. For instance, if you have two identical paintings, but one was drawn with the intention of making money, and the other was drawn for true artistic merit, the latter one somehow has more value (and is thus of 'better taste') than the former.
At any rate, in the end that paragraph was the closest I got to his definition of 'taste' - the ability to recognize trickery in artistic works.
And especially this paragraph about people with good taste:
Finally,
While the insights presented are interesting (in providing a window into the author's mind, at least), it has not actually succeeded in this purpose.
I think it's just elliptic rather than fallacious.
Paul Graham basically argues for artistic quality as something people have a natural instinct to recognize. The sexual attractiveness of bodies might be a more obvious example of this kind of thing. If you ask 100 people to rank pictures of another 100 people of the opposite sex by hotness, the rankings will correlate very highly even if the rankers don't get to communicate. So there is something they are all picking up on, but it isn't a single property. (Symmetry might come closest, but not really close; i.e., it explains more than any other factor but not most of the phenomenon.)
Paul Graham basically thinks artistic quality works the same way. Then taste is talent at picking up on it. For in-metaphor comparison, perhaps a professional photographer has an intuitive appreciation of how a tired woman would look awake, can adjust for halo effects, etc., so he has a less confounded appreciation of the actual beauty factor than I do. Likewise someone with good taste would be less confounded about artistic quality than someone with bad taste.
That's his basic argument for taste being a thing and it doesn't need a precise definition, in fact it would suggest giving a precise definition is probably AI-complete.
Now the contempt thing is not a definition; it is a suggested heuristic for identifying confounders. To look at my metaphor again, if I wanted to learn about beauty-confounders, tricks people use to make people they have no respect for think women are hotter than they are (in other words, porn methods) would be a good place to start.
This really isn't about the thing (beauty/artistic quality) per se, more about the delta between the thing and the average person's perception of it. And that actually is quite dependent on how much respect the artist/"artist" has for his audience.
Hmm, about 100 downvotes in the last couple of days, roughly 1 per comment, suggest that someone here is royally pissed off at me. I wish I knew the reason. On the bright side, at least this forum provides some indication of a problem. When this happens to me IRL, I either never find out about it or deduce it months or years later based on second-hand information, rumors, or, in some cases, denied promotions/requests/opportunities. I wonder if this is a common experience? Situations like this are a significant reason why I would likely jump in with both feet if offered a chance to join a telepathic society.
Did you see that Daenerys and NancyLebovitz experienced a similar problem? Seems likely someone's doing it systematically to several accounts.
Thanks, I missed that discussion.
Well, "this" is broad, but I expect that failing to notice enmity, and relatedly being unaware of consequent social attacks, is a pretty common experience, especially in "polite" social contexts (that is, ones in which overt expressions of conflict violate social norms).
"Crocker's Rules" are an attempt to subvert this; you might find it useful to declare that you operate under them... though I would expect not... in cases like you describe I expect that the downvoter(s) will not wish to be identified.
I wish you luck in deciphering the reason(s).
As someone with no particular aptitude in general niceties, I always welcome Crocker's rules, and mistakenly assume that others do, too.
My best (but still low-confidence) guess, based on the timing and on my being overly critical in a comment, is that it may have been taken as overly harsh.
For what it is worth, I really liked your comment. Though I guess I'd be pissed (for a minute) if someone said it to me. I didn't read the whole discussion, but she seemed pretty passionate about her views. When I get that way, nothing makes me angrier than someone (rightly) pointing out that I'm "too passionate" to discuss this clearly.
Having just got a Kindle Paperwhite, I'm surprised by (a) how many neat tricks there are for getting reading material onto the device, and (b) how under-utilised and hacky this seems to be. So far I've implemented a pretty kludgey process for getting arbitrary documents / articles / blog posts onto it, but I'm pretty sure there's a lot of untapped scope for the intelligent assembly and presentation of reading material.
So, fellow infovores, what neat tips and tricks have you found for e-readers? What unlikely material do you consume on them?
k2pdfopt. It slices up pdfs so that you can read them without zooming on a much narrower screen, and since its output pdfs are essentially images, it eats everything up to (and including) very math-heavy papers, regardless of the number of columns they have. Also, it works with scanned stuff too.
(And even though the output is a bit bigger than the originals, I didn't encounter any problems with 600 page books... the result was about 50 megs tops.)
I think I set up mutt (and presumably some other software) just so that I could email files to my kindle from the command line; and I have an instapaper bookmarklet to do the same with webpages. I haven't used either very much recently, but that seems to pretty much cover my "getting content onto it" needs.
I have the same Instapaper bookmarklet. I've also set up Instapaper to forward a digest of all my Feedly content that I mark as "save for later". It turns out I only seem to use this feature for (a) incredibly long blog posts I probably shouldn't be reading at work, and (b) highly NSFW blog posts I probably shouldn't be reading at work. This makes for an interesting combination.
I'm fairly unsatisfied with the Kindle email document conversion, mainly because it doesn't do anything intelligent with document metadata. As it happens, I've been playing around with automated document metadata extraction, so I might see if I can put together a clever alternative.
Readability can be set up to send articles to it, and/or do a daily collection. Feedly can send rss feeds to it.
The user interface of the Kindle is the real limitation: it's fine for reading books/articles but pretty useless for going through large numbers of files.
I've been reminded of something Paul Graham said in his Frighteningly Ambitious Startup Ideas essay, about how email has become a grossly inefficient to-do list for most people, and how it could be worth building a whole new to-do protocol from the ground up, which would have as its degenerate case the email equivalent of "to-do: read the following text".
So I've started looking through my emails to see what messages I receive which are essentially "read this text". It's become quite apparent that there aren't that many, and most of them are requests or suggestions to do something else online, (one point for Paul Graham), but there are a few obvious examples where this does happen, such as event itineraries, e-tickets, boarding passes, etc. These tend to be de facto documents, though, so it's not especially insightful.
Reflecting back on LessWrong's past, I've noticed a voting pattern that seems striking to me: questions do not get upvoted on anywhere near the same order as answers do.
Perhaps it would be useful to have a thread where LessWrong could posit topics and upvote the article titles that it would be most interested in reading? For example, I am now drafting a post titled "Applying Bayes Theorem." Provided I can write high-quality content under that title, I expect LessWrong would be intensely interested in this on account of not fully grasping exactly how to do so.
So as a trial run: What topics currently elude your understanding, and what might the title of a high-quality article that addressed that topic be?
"Lower Bounds on Superintelligence". While a lot of LW content is carefully researched, much of what's posted in support of the singularity hypothesis seems to devolve into just-so stories. I'd like to see a dry, carefully footnoted argument for why an intelligence that was able to derive correct theories from evidence, or generate creative ideas, much faster than humans would necessarily rapidly acquire the ability to eliminate all human life. In particular I'm looking for historical analogies, cases where new discoveries with important practical implications were definitely delayed not just due to e.g. industrial capacity, but solely through human stupidity.
"Trading with entities that are smarter than you". Given the ability of highly intelligent entities to predict the future better than you can, and deceive without outright lying, what kind of trades or bets is it wise to enter into with such entities? What kind of safeguards would you need to have in place?
"How to get a stupid person to let you out of a box". Along with, I think, many people who've never done it, I find the results of the AI-box experiment highly implausible. I can't even imagine a superintelligence persuading me to let it out, or, equivalently, I can't imagine persuading even someone very stupid to let me out. I know the most successful AI players are keeping their strategies secret for reasons I don't understand (if nothing else, it seems to imply those strategies are exceedingly fragile), but if there's anyone who has a robust strategy that's even partially effective, I'd be very interested to see it.
"From printing results to destroying all humans" - to me this is the weakest part of the MIRI et al case, and I think most objections we see are variants on this theme. It's obvious that an oracle-like AI would have to interact with the universe in some sense. It's obvious that an AI with unbounded ability to interact with the universe would most likely rapidly destroy all humans. It's nonobvious that there is no possible way to code an AI that can reliably tell the difference between the two, and a solution to this problem naively seems rather more tractable than solving Friendliness in full generality. I'd like to see an exploration of this problem.
"When your gut won't shut up and multiply." The recent downvoted discussion post seems to be in this area, suggesting the wider community is perhaps less interested than I, but I'd love to see some practical advice on effective decision strategies when one's calculated best action is intuitively morally dubious, with anecdotes of the success or failure of particular approaches.
"Times when I noticed I was confused". In theory, noticing you're confused sounds like an effective heuristic. But the explanation in the sequences only gave a retroactive example of when Eliezer should have applied it, and didn't. I'd like to see more examples of when this has and hasn't worked in practice, and useful habits to acquire that make you more likely to be able to notice.
Most of my examples here are trite individually, but significant collectively; that is, I remember the habit more easily than any particular examples. There have been situations where I had some niggling doubt, said "I'm confused, I ought to resolve this uncertainty," and after research concluded that I was wrong and by acting early I saved myself some hardship. But while I'm certain there have been at least three of those, I have trouble remembering them or thinking that the ones I do remember are worth sharing.
That's the kind of position I see frequently here - but from the outside it's very unconvincing. So I'd very much like to see concrete examples.
Brienne Strohl mentioned a website called Gingko on Facebook which allows you to write documents in the form of nested trees.
I've been playing around with it today and found it very useful; being able to write ideas out in a disordered way seems to get around some of my perfectionism issues and stop me procrastinating. The real test is whether I continue to use it in the future; I'll try to check back in a month or so.
I recently realized that I think the stuff I already know about the history of science, math, etc., is really inherently interesting and fascinating to me, but that I've never actually thought about going out of my way to learn more on the subject. Does anybody on here have one really good book on the subject to recommend? I've already read Science and the Enlightenment by Hankins.
The Copernican Revolution, by Kuhn is one of the best science histories I've ever read.
The folk-tale version of how we adopted heliocentric cosmology is something like this: "Aristotle and Ptolemy thought the world was arranged as concentric crystalline spheres. Copernicus proposed a new model that better fit the data, and it was opposed by the Church. Ultimately thanks to the Reformation and the Enlightenment, the correct model won out."
None of those claims is right, and Kuhn does a great job explaining the true story. He explains what problem Copernicus thought he was solving and how well he solved it.
I agree that it is a good book. But it helps to be aware that Kuhn substantially simplifies a lot of what is going on. See for example here and here.
Awesome! I loved Kuhn's Structure of Scientific Revolutions, and it seems like an interesting subject, besides.
I second the recommendation of The Copernican Revolution, and suggest another book on the same topic: Arthur Koestler's The Sleepwalkers.
Koestler was a great novelist (his best known novel, Darkness at Noon, rivals 1984 in its portrayal of totalitarian thought) and a brilliant, eclectic and sometimes bizarre thinker. The Sleepwalkers is a grand history of astronomy and cosmology from ancient times to Newton, with the bulk of the focus on Copernicus, Kepler and Galileo.
Pros: Fascinating and very detailed biographical information on these three figures (and others like Tycho Brahe), presented in a way that reads like a novel, indeed a page-turner. His biography of Kepler is especially unforgettable, very different from a dry academic presentation. The historical presentation is peppered with opinionated philosophical and even sociological detours.
Cons: unbalanced coverage of different topics, and subjective, somewhat biased viewpoints. In particular, his interpretation of the relationship between Kepler and Galileo, and of Galileo's dealings with the Church, is colored by what seems to be a strong personal dislike of Galileo. His interpretation of the reasons why the heliocentric model was rejected in ancient times is also unreliable.
As long as his interpretations are taken with a grain of salt (or balanced with a more objective presentation like Kuhn's) I would definitely recommend it; it is the most enjoyable book on history of science I have read.
Could you elaborate?
According to him, the ancient heliocentric model of Aristarchus was clearly superior in simplicity and predictive power to the geocentric models of Ptolemy and others, and was abandoned for irrational reasons (religiously or ideologically motivated). From what I understand, the mainstream academic position is that, analyzed in context and without hindsight, the ancient rejection of the heliocentric theory was quite reasonable. Previous discussion in Less Wrong.
I think it is better to say that the rejection could have been reasonable, that we cannot rule out that possibility, not that we can rule out the possibility that it was not reasonable.
My interpretation is that Hipparchus was geocentric, perhaps for good reason, and everyone else was geocentric for the bad reason that Hipparchus had data, and data was high status, not because they were convinced by the data. In any event, his data does not rule out the distances Archimedes proposes in the Sand Reckoner, probably following Aristarchus. But I don't think it is even really established that Hipparchus was geocentric, just that Ptolemy said so.
Update: Nope, history is bullshit. Hipparchus was not geocentric. Maybe Ptolemy said he was, but what did he know? Other ancient sources say that he refused to pick sides, not knowing how to distinguish the hypotheses. At the very least this shows that the heliocentric hypothesis was alive and well. Asking why they discarded it is the wrong question. Frankly, I'm with Russo: the heliocentric hypothesis was standard.
I really enjoyed The Nothing That Is by Robert Kaplan. It's about the history of the concept (and the numeral) zero.
Possibly I should add that I read that when I was quite young (13ish?) and haven't reread it since. It doesn't contain anything remotely resembling advanced maths; it's definitely about the history and philosophy of the concept. I obviously found it memorable, though, so although the writing may have been terrible and I simply didn't notice at 13, that seems unlikely.
Most "predictions of evolution" that can be found online are more about finding past evidence of common descent (e.g. fossils) rather than predicting the future path that evolution will take. To apologize for that, people say that evolution is hard to predict because it's directionless, e.g. it doesn't necessarily lead to more complexity, larger number of individuals, larger total mass, etc. That leads to the question, is there some deep reason why we can't find any numerical parameter that is predictably increased by evolution, or is it just that we haven't looked hard enough?
Replies to comments that attempted to point out a numerical parameter that's increased by evolution. (I'd be more interested in comments pointing out a deep reason why we can't find such a numerical parameter, but there were no such comments.)
lmm:
That's been steady for a while now.
ChristianKl:
Evolution can both add and remove junk DNA. Humans are descended from bacteria.
David_Gerard:
That can't decrease by definition, and will increase under any mechanism that gives nonzero chance of speciation, e.g. if God decides to create new species at random.
Lumifer:
That seems to be contradicted by the possibility of evolutionary suicide.
Humans don't have more offspring than bacteria in average conditions, and have much fewer offspring in ideal conditions.
More particularly, the equilibrium size of the DNA is very roughly inversely correlated with population size. A larger population size is better at filtering out disadvantageous traits. It's not linear - there are discontinuities as decreasing population size eliminates natural selection's ability to select against different things. And those things sometimes can even go on to be selected for for other reasons - there are genomic structures that are important for eukaryotes that could probably never have evolved in a bacterium because to get to them you need to go through various local minima of fitness.
Soil bacteria can have trillions of individuals per cubic meter of dirt and they actually experience direct evolution towards lower genome size - more DNA means more sites at which something could mutate and become problematic and they actually feel this force. Eukaryotes go up in volume by a factor of ~1000 and go down in population by at least as much, and lose much of the ability to select against introns and middling amounts of intergenic DNA and expanding repeat-based centromere elements.
Multicellular creatures with piddlingly tiny population sizes compared to microbes lose much of the ability to select against selfish transposon DNA elements, gigantic introns and gene deserts, and their promoter elements get fragmented into pieces strewn across many kilobases rather than one compact transcriptional regulation element of a few dozen to a few hundred base pairs (granted, we've also been able to make good use of some of these things for interesting purposes from our adaptive immune system to the concerted regulation of our hox gene clusters that regulate our body plans). They also become very sensitive to the particular character of the transposons or DNA repair machinery of their particular lineage and wind up random-walking like crazy up and down an order of magnitude or two in genome size as a result.
Thanks! I was hoping you'd show up, it's always nice to get a lesson :-)
Going back to the original question, are there any "general purpose adaptations" that never disappear once they show up? Does evolution act like a ratchet in any way at all?
Closest thing I can think of from what I know without going through literature is the building up of chains of dependencies. Once you have created a complex system that needs every bit to function, it has a tendency to stay as a unit or completely leave.
You can see that in a couple contexts. One is 'subfunctionalization'. Gene duplications are fairly common across evolution - one gene gets duplicated into two identical genes and they are free to evolve separately. You usually hear about that in the context of one getting a new function, but that's actually comparatively rare. Much more likely is both copies breaking slightly differently until now both of them are necessary. A major component of the ATP-generating apparatus in fungi went through this: a subunit that is elsewhere composed of a ring of identical proteins now has to be composed of a ring of two alternating almost identical proteins neither of which can do the job on its own. Ray-finned fish recently went through a whole-genome duplication, and a number of their developmental transcription factors are now subfunctionalized such that, say, one does the job in the head end and the other does its job in the tail end.
Another context is the organism I work in, yeast. I like to call yeast "a fungus that is trying its damnedest to become a bacterium". It lives in a context much like many bacteria and it has shrunk its genome down to maybe 2.5x that of an E. coli and its generation time down to 90 minutes. But it still has 40 introns hanging out in less than 1% of its genes, so it needs a fully functional spliceosome complex to be able to process those transcripts lest those 40 genes utterly fail all at once, and it has most of the hallmarks of eukaryotic genome structure and regulation (in a neat, smaller, more research-friendly package). That being said, it has lost a few big eukaryotic systems, like nonsense-mediated RNA decay and RNA interference, and they left relatively little trace behind.
Sure, but mostly because evolution's so good at it. The fact that evolution so quickly filled a tidal pool, so quickly filled all the tidal pools, so quickly filled the oceans, so quickly covered the land, is evidence of strength rather than weakness.
There does seem to be a "punctuated equilibrium" effect here; life fills a region, appears static for a while, but then makes a breakthrough and rapidly fills another region. It could be argued that this is also true of things that humans optimize for: human population growth has abruptly accelerated at least twice (invention of agriculture, industrial revolution). Slavery was everywhere in the ancient world, then eliminated across most of it in the space of a century. Gay marriage went from hopefully-it-will-happen-in-my-lifetime to anyone who opposed it being basically shunned. Scientific and technological breakthroughs tend to look a lot like this.
Generalizing this to all optimization processes would be very speculative.
From bacteria that lived a long time ago, not from those that live today, which have had many iterations to optimize themselves. Different bacterial species can also exchange genes with each other much more readily than vertebrates, which need viruses to do so.
Implying that humans evolved from the kind of bacteria that are around today might be more wrong than saying that the bacteria we see now evolved from humans. There is more evolutionary distance between today's bacteria and the bacteria from which humans descended than between humans and those ancestral bacteria.
Yeah, and there are often bacteria in a single flower pot that are less related to each other than you are to the potted plant. But both bacteria still have a much smaller genome than you or the plant, maybe because genome size matters for reproduction speed for them, but is insignificant for us.
Evolutionary suicide seems to be someone's theoretical idea. Is there any evidence that it happens in evolution in reality?
In any case, are you basically trying to find the directionality of evolution? On a meta level higher than "adapted to the current environment"? There probably isn't. Evolution is a quite simple mechanism, it just works given certain conditions. It is not goal-oriented, it's just how the world is.
However if I were forced to find something correlated with evolution, I'd probably say complexity.
This doesn't seem to be the case either.
Depends on your time frame. Looking at the whole history of life on Earth, evolution certainly correlates with complexity; looking at the last few million years, not so much.
I understand the argument about the upper limit of genetic information that can be sustained. I am somewhat suspicious of it because I'm not sure what will happen to this argument if we do NOT assume a stable environment (so the target of the optimization is elusive, it's always moving) and we do NOT assume a single-point optimum but rather imagine a good-enough plateau on which genome could wander without major selection consequences.
But I haven't thought about it enough to form a definite opinion.
Species of nightshade tend to evolve to become self-fertile, before dying out due to lack of genetic diversity.
Is this your source?
Link? Lots of plants are self-fertile and do quite well...
Better example: parthenogenic lizard species.
What makes that example better?
Damn it. It was going to be a better example because I was going to give the actual genera (Aspidoscelis and Cnemidophorus) of whiptail lizards whose species keep going down this path and then I got distracted and didn't do that. Oops.
Complexity in what way? Kolmogorov complexity of DNA?
No, complexity of the phenotype.
How would you go about measuring that complexity?
I don't know. Eyeballing it seems to be a good start.
Why do you ask? Do you think that such things are unmeasurable or there are radically different ways of measuring them or what?
I have a hard time trying to form a judgement about whether a human is more or less complex than a dinosaur via eyeballing.
Is a grasshopper more or less complex than a human?
Well, would you have problems arranging the following in the order of complexity: a jellyfish, a tree, an amoeba, a human..?
Yes.
I think you just don't give an amoeba much credit because it's not a multicellular organism. Its genome is 100-200 times the size of the human genome. Because it's that big, it seems we haven't sequenced all of it, so we don't know how many genes it has.
We also know very little about amoebas. Genetic analysis suggests that they do exchange genes with each other in some form, but we don't know how.
Amoeba probably express a lot of stuff phenotypically that we don't yet understand.
Just apply Occam.
Possibility wouldn't contradict anything; a high enough probability would.
Plenty of people predict that increased antibiotic use will lead to a rise in antibiotic resistance among bacteria.
Organisms like bacteria that have many more iterations behind them than humans also tend to have less waste in their DNA.
Grasses beat trees at growing in glades with animals that eat plants. Why? Grass has more iterations behind it and is therefore better optimized for that environment than trees are.
A tree has to get lucky to survive the beginning. If it survives the beginning, however, it can grow tall and win.
Let's say you keep the environment stable for 2 billion years. Everything evolves naturally. Then you take tree seeds and bring them back to the present time. I think there's a good chance that such a tree would outcompete grass at growing in glades.
Fossils don't really get used as the central evidence of common descent anymore. These days common descent usually gets determined by looking at the DNA. In my experience, people who discuss evolution online and focus on fossils are usually atheists who behave as if their atheism were a religion. They think it's important to defend Darwin against the creationists. On the other hand, they aren't up to date with the current science on evolution.
You seem to be predicting that grasses have smaller genomes than trees, but wheat is famous for having a huge genome. Here's a table of a few plants. Maybe wheat is an outlier and I'd be interested if you had documentation of some pattern, but I've always heard that there is none.
What do you mean, "predict"? It has been empirically observed, a lot.
Huh? It doesn't work like that at all. For one thing, the "environment" isn't stable.
cousin made the claim that we can only say something about evolution that happened in the past. I say that we can confidently predict that increasing antibiotic resistance among bacteria will continue in the future.
Firstly, describing a complex system in a few words is seldom completely accurate. The question is whether it's a useful mental model for thinking about it. In this case the idea I wanted to communicate is that it's very useful to think about the speed of iterations and the competitive advantage that a species gets by having a head start of hundreds of millions of iterations over its competitors.
The environment doesn't have to be stable for the argument that I made. In changing environments, a species with faster iterations adapts faster. A lot of genetic adaptations are also about housekeeping genes that are useful in most environments.
Evolution leads to a higher level of fitness in the environment, but the problem is that the environment itself is constantly changing in unpredictable ways. It's like an optimization process where the utility function itself is constantly changing. That's why it's very hard to reliably quantify fitness. For instance, billions of years ago, the increase in oxygen in the atmosphere killed a lot of existing organisms and forced aerobic bacteria onto the scene.
Why should there be a numerical parameter predictably increased by evolution? Why not look for a numerical parameter predictably increased by continental drift? or by prayer? by ostriches?
One of the key pieces of justification for FAI is the idea of "optimization process". Evolution is given as an example of such process, unlike continental drift or ostriches. It seems natural to ask what parameter is optimized.
Just FYI, I interpret that question very differently than your original.
Why don't you start with a simpler example, like a thermostat? Would you not call that an optimization process, minimizing the difference between observed and desired temperature?
Most of your rejections of suggestions in this thread would also reject the thermostat. An ideal thermostat keeps the temperature steady. Its utility function never improves, let alone monotonically. A real thermostat is even worse, continually taking random steps back. In extreme weather, it runs continually, but never gets anywhere near the goal. It only optimizes within its ability. Similarly, evolution does not expand life without bound, because it has reached the limit of its ability to exploit the planet. This limit is subject to the fluctuations of climate. But the main limit on evolution is that it is competing with itself. Eliezer suggests that it is better to make it plural, "because fox evolution works at cross-purposes to rabbit evolution." I think most teleological errors about evolution are addressed by making it plural.
Also, thermostats occasionally commit suicide by burning down the building and losing control of future temperature. (PS - I think the best example of evolutionary suicide are genes that hijack meiosis to force their propagation, doubling their fitness in the short term. I've been told that ones that are sex-linked have been observed to very quickly wipe out the population, but I can't find a source. Added: the phrase is "meiotic drive," though I still don't have an example leading to extinction.)
Inclusive reproductive fitness.
Do you mean to say that the expected inclusive fitness of a randomly selected creature from the population goes up with time? Well, if we sum that up over the whole population, we obtain the total number of offspring - right? And dividing that by the current population, we see that the expected inclusive fitness of a randomly selected creature is simply the population's growth rate. The problem is that evolution does not always lead to >1 population growth rate. Eliezer gave a nice example of that: "It's quite possible to have a new wolf that expends 10% more energy per day to be 20% better at hunting, and in this case the sustainable wolf population will decrease as new wolves replace old."
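Eliezer's wolf example can be put in numbers. This is a minimal illustrative sketch, assuming a fixed prey energy budget that sets the carrying capacity; all the figures are my own made-up assumptions, not data from the comment:

```python
# Illustrative numbers only: the pack shares a fixed daily prey energy budget.
prey_energy_per_day = 1000.0   # total energy extractable from prey, arbitrary units

old_wolf_cost = 10.0           # energy an old-type wolf burns per day
new_wolf_cost = 11.0           # new type: 10% more energy per day

# Within the species, the new wolf's 20% hunting edge lets it outcompete
# old-type wolves for the same fixed prey, so the new type goes to fixation.
# But the sustainable population is set by cost, not by hunting skill:
old_population = prey_energy_per_day / old_wolf_cost   # 100 wolves
new_population = prey_energy_per_day / new_wolf_cost   # ~91 wolves

print(old_population, new_population)
```

The fitter wolf wins the within-species competition, yet the sustainable population shrinks, which is why per-creature fitness gains need not translate into population growth.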
While I don't know of any simple or convenient numerical parameter, I'd note that we do have some handy non-retrospective pieces of evidence for evolution by natural selection, such as the induced occurrence of evolutionary benchmarks such as multicellularity.
In general, there are some adaptations which are highly predictable under certain circumstances, but there may not be any sort of meaningful measure we can use for evolution of organisms over time which aren't a function of their relationship with their environment.
I think whatever numerical parameter evolution raised generally (not always) with respect to its environment would have to do with meaningful complexity, however that can be numerically expressed, and a local decrease in entropy. Design would cause those too, but hypothesizing it would violate Occam's razor.
Different environments and different substrates for mutation cause different kinds of evolutions.
One main thing that happens with a long enough period of selection in a simple, stable environment on a microorganism is a shrinking of the genome.
You quite simply will not find a simple parameter perpetually increased by evolution. Whatever works better for that base organism in that particular environment will become more common. One thing being selected for under all circumstances and showing up all the time is just not the reality.
The chances of successful transmission of genes across generations given a stable environment. The number of offspring surviving to reproductive age is a good first-order approximation.
If you want something more tangible, predictions what features evolution would lose are rather easy -- those that are (energy-)expensive and are useless in the new environment.
Life "wants" to spread, so perhaps an increase in the volume in which life can be found?
Newly created islands may have "weird" biospheres initially, but evolve towards a more "normal" set of niches over time?
But why would life get more optimal? Evolution has finite optimization power, and it long ago reached this limit.
Huh? Even if you accept the estimates that your link points to, the amount of information in mammalian genome and optimization power of evolution are VERY different things.
How do you figure?
If you can narrow down the number of possible lifeforms to one in 2^n, that's n bits of optimization power, and n bits of information as to what the final lifeform is.
If life is getting more and more optimal, then we can simply wait until we know that less than one in 2^25 million lifeforms are that optimal, and we have more than 25 megabytes of information as to what that lifeform is.
You go and wait. I'll do other things in the meantime :-) Do you have any intuition how large that number is?
You've spent all that 25Mb for an index into the lifeform space but you have not budgeted any information for the actual description of the lifeform.
Imagine the case where there's one bit. It tells you whether creature-0 or creature-1 is optimal. But it doesn't tell you what these creatures are.
In any case, all these numbers are based on the resistance of Earth mammals to genetic drift. That really doesn't limit how evolution can optimize with different creatures in different places.
It's not going through them one at a time.
It's not a simple English description, but narrowing down the possibilities by a factor of two is always one bit of information. It doesn't matter whether it's "the first bit is one", "the xor of all the bits is one" or even "it's a hash of something starting with a one using X algorithm, which is a bijection".
It's the one with a higher inclusive genetic fitness. That's what evolution optimizes for.
If evolution has n bits of optimization power, that's equivalent to saying that if you order all possible lifeforms based on how optimal they are, this is going to be in the top 1/2^n of them. (It's actually somewhat more complicated, since it's more likely to be higher up and there's some chance of it being lower, but that's the basic idea.)
It does vary based on what lifeform you're looking at, since they all have different mutation rates and different numbers of children, but there's always a limit to the information, and I'm pretty sure that it's pretty much always a limit that's already been hit.
By my calculations, if you had the entire earth's surface covered by a solid meter-thick layer of bacteria for 4.6 billion years and each bacterium lived for 1 hour, that would be approximately 2^155 bacteria having lived and died.
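That back-of-envelope figure can be checked with a short sketch. The bacterium volume (~1 cubic micrometre) and Earth's surface area are my own assumed inputs, not figures from the comment:

```python
import math

# Rough reproduction of the "2^155 bacteria" estimate.
earth_surface_m2 = 5.1e14        # assumed: Earth's surface area
layer_thickness_m = 1.0          # a solid one-metre layer, as in the comment
bacterium_volume_m3 = 1e-18      # assumed: ~1 cubic micrometre per bacterium

# How many bacteria are alive at any one moment.
alive_at_once = earth_surface_m2 * layer_thickness_m / bacterium_volume_m3

# 4.6 billion years of 1-hour generations.
generations = 4.6e9 * 365.25 * 24

total_bacteria = alive_at_once * generations
print(math.log2(total_bacteria))  # ≈ 154, consistent with the quoted ~2^155
```

Under these assumptions the total comes out around 2^154, so the quoted figure is the right order of magnitude.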
You can massively increase genetic information (inasmuch as that actually means much in biology) very quickly with very simple genetic changes. It's not a case of searching through every possible 1 bit change.
Provided, of course, that your space of possibilities is finite and you know what it is. In the case of evolution you don't.
I don't understand what does "all possible lifeforms" mean. Does not compute.
Which limit? The limit of information in the mammalian genome? Or the limit of evolution -- whatever exists is the pinnacle an no better (given the same environment) can be achieved?
Something like "humans will have larger skulls and smaller teeth"?
But we know that isn't true.
There have been plenty of evolutionary simulations, and surely they provide some testable predictions. I vaguely recall one of them: that new adaptations tend to propagate first in small isolated groups and only then spread through the rest of the species. I don't recall if this has been tested against the fossil record. I am sure there are many more testable predictions, like how fish locked in a dark cave or murky water tend to lose eyesight. But the exact path is probably too hard to predict. For example, marine mammals did not develop gills. Or mammals develop intelligence by growing the neocortex, while birds use the DVR (dorsal ventricular ridge) or maybe the nidopallium for the same purpose.
Total number of species (including extinct).
After doing Lumosity exercises for a bunch of days, I find that my speed/concentration scores are below 1000 (1000 is supposed to be average) while memory is at 1460 and problem solving at 1360.
I'm familiar with the discussion around fluid intelligence but what do we know about raising speed? Do we know how to conduct training to improve it?
When did you start, recently? I may be wrong, but I think average scores are matched to your peers regardless of time spent on the game. So if you just started the exercises, your score is being compared to everyone's score, even those who have been learning how to play that particular game for a long time.
In case you are interested in the scores. At present I have 241 Lumosity points that I earned over the last month.
I used Lumosity in the past with a different account, probably 2 years ago. I think I might have gotten 500 point back then.
I use the free version. I have other experience with speed tests that also suggests that I'm relatively weak in that area.
Is there research on the benefits of yoga compared to meditation, anaerobic exercise and aerobic exercise? Or any subset of these for that matter.
Google is your friend, but keep in mind that "yoga" is an umbrella term for a large variety of exercises. In particular, yoga as an Indian discipline aimed at reaching moksha, the liberation from the reincarnation cycle, is rather different from yoga as practiced in the West with the goal of losing 10 lbs.
I would add that the same thing goes for meditation, anaerobic exercise and aerobic exercise as well. All those terms include a lot of different activities.
O.o
(Anyway, I'm surprised that I'm surprised -- I know people do even weirder things to lose weight.)
(BTW: I do do yoga, but more for fun than for any of its practical benefits, which could be achieved in more cost-effective ways.)
I saw one study that indicated that meditation did not lower blood pressure, refuting earlier studies, but that yoga did. Can't find it now however. The wikipedia page on meditation research might be useful. also this
What kinds of benefits are you looking for? It seems likely they don't optimize the same things.
Am I running on corrupted hardware or is life really this terrible? I don't think I can last another decade like this one, let alone whatever cryonically-supplied futures that would await. At this point, I think I would pay not to be frozen.
Ugh.
It sounds like you are depressed. It's probably worth considering therapy or psychiatric care - these interventions have helped me a lot. Hope things get better for you.
Depression can be irrational--chemical imbalances, not enough sunlight/exercise/etc--and can also be totally rational (life actually does suck titanium balls). Psychiatric care can help the former; the latter seems as it should be vulnerable to rationality superpowers, but either that's incorrect or I'm just not superclever enough to win. It does not help when the two coincide (sucky life situation causing serious chemical problems).
There's also the question of whether or not a terrible situation is one that makes psychiatric help readily available (I'd hope online psychiatry could help with this, but I don't really know).
Not exactly -- while depression can be caused by major life suckage, depression is not a rational response to major life suckage.
That depends. "Too depressed to do anything" is a pretty effective way out of certain unpleasant situations.
Specific example: Being in grad school caused my life to suck titanium balls, which (presumably combined with a pre-existing brain vulnerability) led me to the point where I was too depressed to do any work. Which meant I had to drop out. Which was the only way I could ever have left, as my moral system at the time did not permit giving up an endeavor simply because it was making me miserable. And, surprise, surprise, as soon as I got on the plane out of there it was like color came back into the world and life was worth living again.
Trying to reason your way out of mental illness is like trying to pull yourself out of quicksand by yanking on your hair.
Depression screws with your thoughts and perceptions in incredibly profound ways, including your ability to make predictions about the future, and is absolutely a damper on rational thought. That's true whether it is caused by another mental illness or a traumatic event in your life; it's just as "chemical" and just as difficult to escape either way. Throwing off depression with strength of reason or willpower is a misunderstanding of how untreated depressed people adapt and occasionally heal, not a prescription.
The human body is built to survive, and the brain is no exception, but a rational person should always try to supplement their natural strength with medicine when their life is on the line. Advising anything else seems irresponsible.
Gwern is the go-to person here, but it is my impression that "standard" anti-depression drugs are neither particularly effective nor free of serious side-effects. And things which are more effective -- like ketamine -- are very rarely prescribed.
More or less, but it's a question of levels. SSRIs didn't do much for me and a lot of other people, plus weight gain sucks (luckily no sexual dysfunction), but they're not particularly dangerous from what I understand. Stuff like Bupropion is awesome, as long as you don't mind sobriety and have a low risk for seizures. There are other drugs which modify SSRIs too, but I've never had any and they're supposedly more on the 'side-effect-y' side. New stuff like ketamine is waaay out there, almost on par with electroconvulsive therapy in terms of how likely you are to see it, but IDK what it's like in terms of safety.
But once the 'trial-and-error' portion of dosing is over and you're on something that works for you, it's absolutely night and day. I can only speak for myself obviously, but it was a complete perspective switch, like someone flipped a switch in my head to 'not miserable.'
(Obviously I'm not an expert, just a guy who's spent some time on the patient end of things. I am really interested to hear Yvain's answer if he has one.)
Many drugs are probably not what you would call effective, but they're still worth trying. You'd be surprised how many drugs are not free of serious side effects. Luckily these effects are usually too rare to care about. It's just that taboo drugs get most of the attention and armchair medicine.
I really wish these kinds of discussions would begin and end with "I think you're depressed, it's a medical condition, go see a doctor. [insert social support]" Don't screw with a life-threatening condition. Not pointing at you specifically.
Well, it's a bit more complicated than that.
First, diagnosing strangers with psychiatric disorders over the Internet has a long history and, um, let's say it didn't always work out well :-D
Second, depression is a spectrum issue -- there are clear extremes, but also a big muddle in the middle. You have to be careful of medicalizing psychological states, which is a bad direction to go in.
Agreed. That's what the "I think" and "doctor" parts are for. Better safe than sorry.
That's why there are experts whose job is to assess what's medical and what's not.
What is bad about medicalization? This could be an interesting topic to explore.
It narrows the range of what's considered "normal". It proposes medical solutions to what are not necessarily medical problems. It is, to a large degree, a way of expanding the market for the big pharma.
Lots of problems, google it up if you're interested...
I think your perception of this problem has more to do with the stigma associated with medical conditions. If you taboo the associated words, what you're left with is improving people, and what's wrong with that? Do you oppose transhumanism on the same grounds?
And big pharma, we meet again. What is this singular, evil, money grabbing entity? I'd try to google it but I know I'd meet a violent mess of blogosphere mythology.
If you're interested in anti-depressants, you should talk to Yvain, what with him being a head-doctor and all.
I would recommend investigating the safety and efficacy of selegiline. Seems somewhat effective, safe, and available (albeit from overseas for US users). Do your own homework though.
How are these two at all mutually exclusive?
It's a mistake to assign truth values to emotions. They can't be correct or incorrect, they can be only helpful or unhelpful. And I don't think depression is ever helpful, barring convoluted thought experiments.
As someone who's been in that boat, get in touch with a psychiatrist ASAP. It can very literally save your life, not to mention making it much much better on a day-to-day level.
Life is terrible, but it's also strange and beautiful; if you can't see a reason to continue with it, there is most likely an underlying problem (even if it is just "faulty wiring") which drugs and therapy can help you identify.
I cannot recommend seeing a psychiatrist more completely.
An external view of your life and health, from a trusted professional, may help you identify causes of your discomfort and, most importantly, strategies to improve your life.
Some lives are and others aren't. Without knowing anything about you I can't tell, but given that you can write in English and access the Web, I'd guess yours probably isn't and join the other people in suggesting that you see a professional.
I'm not sure how it works in your country, but you don't necessarily need a psychiatrist to diagnose and treat depression. Also it's good to check for bodily conditions that could make you feel like crap, and a non-psychiatrist might do that more reliably.
A response to Aaron Freeman's "You Want a Physicist to Speak at Your Funeral."
If I had a physicist speak at my funeral, I would hope that he would talk about a lot more than the conservation of energy. I don't particularly care about what happens to my energy.
If I am lucky, he will speak about relativity. My family will probably have the mistaken intuition that only things in the present are truly real. Teach them about spacetime. They need to know that time and space are connected - that me being in the past is just like me being far away. The difference is that we will only have one-way communication. Even if they will no longer be able to talk to me, I will still talk to them through memories.
If I am not so lucky, he will speak about quantum mechanics. If I die young, my family will be grieving over the potential future I have lost. Teach them about many worlds. They need to know that our world is constantly splitting - that just before I died, the world split off a different future in which I am still alive. There is another world, just as real as our own, in which I survive. This world will even interact with our own in very tiny ways.
I want a physicist to speak at my funeral. I want everyone to understand that my continued existence is way more verifiable than a religious afterlife and way more substantial than a simple conservation of energy.
Upvoted since it's a little harsh for 'us' to tell someone that something is better suited for open thread and then to downvote it without explanation when it goes there...
Genuinely (if admittedly idly) curious: if this was your only reason for upvoting, do you now feel like you should retract your upvote since the comment would no longer be net-downvoted without it?
My favorite item in Yvain's list of fictional banned drugs.
We're a day out - this should be Oct 21-27. Next one: Oct 28-Oct 35. (cough)
When I posted, it was still the 20th in my timezone, so that's what I went with.
Some HPMOR speculation. Spoilers up to current chapter. After writing this, I checked the last LessWrong thread on HPMOR, and at least one component of this has already been noticed by other people, but others have not been, I think.
I was disappointed in the last chapter, gung nqhygf jbhyq frg nfvqr gurve pbaivpgvbaf naq cerwhqvprf naq yrg puvyqera cynl n fvtavsvpnag ebyr runs contrary to common sense and to the rest of the book.
Yeah, cause that never happens in canon.
I think wizard culture has some different ideas from your culture.
Sorry, I was used to your fic's higher standards of believability of human behavior than canon's.
I must be missing something, because even Harry had trouble being taken seriously by most adults for most of the story, and no other (first-year) children were anywhere near his level. Yet suddenly so many of them seem to be taken seriously by their relatives and by all the most powerful wizards. And they didn't even have to save the Earth from the Formics.
It's still the culture that throws kids on a Hippogryff and tells them to get going.
And as Daphne notes in her thoughts, the children are standing in for their parents and speaking their parents' orders; they are acting as spokespersons for their families, and the others are treating them as such.
*Hippogriff
I suspect that had more to do with Harry's involvement than anything else. "gung [crbcyr ehaavat guvatf] jbhyq frg nfvqr gurve pbaivpgvbaf naq cerwhqvprf naq yrg puvyqera cynl n fvtavsvpnag ebyr" vf n ybg zber cynhfvoyr jura bar bs gurz vf n puvyq.
Which part would you never do if you (as board member) were righteously angry at Dumbledore?
I'd never let a child do the public announcement of my decision.
Why not, if they could do it? This seems a foolish rejection of a class of tools. See Malala Yousafzai.
What work has been done with the causality/probability of ontological loops? For example, if I have two boxes, one with a million dollars in it, and I'm given the option to open one of them and then go back to change what I did (with various probabilities for choice of box, success of time travel, and so on), is there existing literature telling me how likely I am to walk out with a million dollars?
Obviously the answer will change depending on which version of time travel you use (invariant, universe switching, totally variant, etc.)
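For the "universe switching" version, the setup is simple enough that you can sketch it numerically. Here's a minimal Monte Carlo sketch under purely illustrative assumptions: you pick the money box with some probability, and if you picked wrong, each attempted trip back succeeds with some probability, up to a fixed number of loops (all three parameters are hypothetical, not from any paper):

```python
import random

def simulate(p_correct=0.5, p_travel=0.9, max_loops=3, trials=100_000):
    """Estimate P(walk out with the million) under a naive
    'universe-switching' model: after opening the empty box you may
    try to go back (succeeding with probability p_travel) and choose
    again, up to max_loops total attempts."""
    wins = 0
    for _ in range(trials):
        for _attempt in range(max_loops):
            if random.random() < p_correct:   # opened the money box
                wins += 1
                break
            if random.random() >= p_travel:   # time travel failed; stuck
                break
    return wins / trials

# Closed form for comparison: with r = (1 - p_correct) * p_travel,
# P(win) = p_correct * (1 + r + r^2 + ... + r^(max_loops - 1)),
# which for the defaults is 0.5 * (1 + 0.45 + 0.2025) = 0.82625.
```

The interesting part is that this only works because universe switching makes each loop an independent retry; the "invariant" (Novikov-style) version instead constrains which histories are allowed at all, so a simple retry simulation like this wouldn't apply.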
A good place to start for this might be Scott Aaronson's lecture on Time Travel from his "Quantum Computing Since Democritus" course.