If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Oslo on IRC jokingly summarizing part of a debate:
"""""""politics is the mindkiller" is an applause light" is a fully general counterargument" is deeply wise" is a semantic stop sign" is why our kind can't cooperate" is a fake explanation"
This has the makings of a card game or something.
http://lesswrong.com/lw/d2w/cards_against_rationality/
At the Columbus megameetup, some people actually printed out a set of cards (as a stand-alone deck) and played the game. I don't know which of two people has the source file, but I can find out...
Someone has been regularly downvoting everything I've posted in the past couple of months (not just a single karma-assassination). I really don't care about the karma (so please DO NOT upvote any of my previous posts in order to "fix" it), but I do worry that if someone is doing it to me, they are possibly doing it to other/new people and driving them off, so I wanted to point out publicly that this behaviour is NOT OKAY.
Anyways, if you have a problem with me, feel free to tell me about it here: http://www.admonymous.com/daenerys . Crocker's Rules and all.
I've been getting an early downvote on my posts, too. I can afford it, but it does seem malicious.
Random thought: I've long known that police can often extract false confessions of crimes, but I only just now made the connection to the AI box experiment. In both cases you have someone being convinced to say or do something manifestly against their best interest. In fact, if anything I think the false confessions might be an even stronger result, just because of the larger incentives involved. People can literally be persuaded to choose to go to prison, just by some decidedly non-superhuman police officers. Granted, it's all done in person, so stronger psychological pressure can be applied. But still: a false confession! Of murder! Resulting in jail!
I think I have to revise downwards my estimate of how secure humans are.
Some LWers may be interested in a little bet/investment opportunity I'm setting up. I have become increasingly disgusted with what I've learned about the currently active Bitcoin+Tor black markets post-Silk-Road - specifically, with BlackMarket Reloaded & Sheep. I am also frustrated that customers are flocking to them, and they all seem absurdly optimistic. So, I am preparing to make a large public four-part escrowed bet with all comers on the demise of BMR & Sheep within the coming year, in the hopes that by putting money where my mouth is, I may shock at least a few of them into sanity and perhaps even profit off the more deluded ones.
The problem is, I feel I can afford to risk ฿1 ($200), but I'm not sure that this will be enough to impress anyone when split over 4 bets ($50 apiece). So I am willing to accept up to ฿1 in investments from anyone, to increase the amount I can wager. The terms are simple: whatever fraction of the bankroll you send, that's your share of any winnings. If we bet ฿2 and you sent ฿1, then you get half of any winnings. (I am not interested in taking any cut here.)
My full writeup of the bet, with some statistics helping motivate the death...
Hmm, about 100 downvotes in the last couple of days, 1 per comment or so, suggest that someone here is royally pissed off at me. I wish I knew the reason. On the bright side, at least this forum provides some indication of a problem. When this happens to me IRL, I either never find out about it or deduce it months or years later based on second-hand information, rumors, or, in some cases, denied promotions/requests/opportunities. I wonder if this is a common experience? Situations like this are a significant reason why I would likely jump in with both feet if offered a chance to join a telepathic society.
PubMed is allowing comments. Only people who have publications at PubMed will be permitted to comment. I predict that PubMed will find it needs human moderators.
I recently realized that I think the stuff I already know about the history of science, math, etc., is really inherently interesting and fascinating to me, but that I've never actually thought about going out of my way to learn more on the subject. Does anybody on here have one really good book on the subject to recommend? I've already read Science and the Enlightenment by Hankins.
I notice that the latest two posts from Yvain's blog haven't shown up in the "recent from rationality blogs" field. If this is due to a decision to no longer include his blog among those that are linked, I believe this to be a mistake. Yvain's blog is in my view perhaps the most interesting and valuable among those that are/were linked. And although I am in no danger of missing his updates myself, the same might not be true of all LW readers that may be interested in his writing.
Having just got a Kindle Paperwhite, I'm surprised by (a) how many neat tricks there are for getting reading material onto the device, and (b) how under-utilised and hacky this seems to be. So far I've implemented a pretty kludgey process for getting arbitrary documents / articles / blog posts onto it, but I'm pretty sure there's a lot of untapped scope for the intelligent assembly and presentation of reading material.
So, fellow infovores, what neat tips and tricks have you found for e-readers? What unlikely material do you consume on them?
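For concreteness, here is a minimal sketch of the kind of kludge I mean, assuming Calibre is installed (for its ebook-convert command-line tool) and a Send-to-Kindle email address is set up with the sending account on the approved list. Every address, hostname, and credential below is a placeholder, not a working value:

```python
# A rough sketch, not a polished tool: fetch an article, convert it with Calibre's
# ebook-convert, and email it to a Send-to-Kindle address. Assumes Calibre is
# installed and the sending address is whitelisted for the Kindle.
# All addresses, hostnames, and credentials are placeholders.
import smtplib
import subprocess
import urllib.request
from email.message import EmailMessage

ARTICLE_URL = "https://example.com/some-article"   # placeholder
KINDLE_ADDRESS = "your-name@kindle.com"            # placeholder Send-to-Kindle address
FROM_ADDRESS = "you@example.com"                   # must be on your approved senders list
SMTP_HOST = "smtp.example.com"                     # placeholder
SMTP_USER, SMTP_PASS = "you@example.com", "app-password"  # placeholders

# 1. Grab the raw HTML of the article.
with open("article.html", "wb") as f:
    f.write(urllib.request.urlopen(ARTICLE_URL).read())

# 2. Convert it to a Kindle-friendly format with Calibre's command-line converter.
subprocess.run(["ebook-convert", "article.html", "article.mobi"], check=True)

# 3. Email the converted file to the Kindle's personal-documents address.
msg = EmailMessage()
msg["To"], msg["From"], msg["Subject"] = KINDLE_ADDRESS, FROM_ADDRESS, "article"
with open("article.mobi", "rb") as f:
    msg.add_attachment(f.read(), maintype="application",
                       subtype="x-mobipocket-ebook", filename="article.mobi")

with smtplib.SMTP(SMTP_HOST, 587) as smtp:
    smtp.starttls()
    smtp.login(SMTP_USER, SMTP_PASS)
    smtp.send_message(msg)
```

The fragile parts tend to be extracting clean HTML and picking the right output format and whitelisting, not the conversion or email steps themselves.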
Reflecting back on LessWrong's past, I've noticed a voting pattern that strikes me: questions do not get upvoted on nearly the same order as answers do.
Perhaps it would be useful to have a thread where LessWrong could posit topics and upvote the article titles that it would be most interested in reading? For example, I am now drafting a post titled "Applying Bayes Theorem." Provided I can write high-quality content under that title, I expect LessWrong would be intensely interested in this on account of not fully grasping exactly how to do so.
So as a trial run: What topics currently elude your understanding, and what might the title of a high-quality article that addressed that topic be?
"Lower Bounds on Superintelligence". While a lot of LW content is carefully researched, much of what's posted in support of the singularity hypothesis seems to devolve into just-so stories. I'd like to see a dry, carefully footnoted argument for why an intelligence that was able to derive correct theories from evidence, or generate creative ideas, much faster than humans would necessarily rapidly acquire the ability to eliminate all human life. In particular I'm looking for historical analogies, cases where new discoveries with important practical implications were definitely delayed not just due to e.g. industrial capacity, but solely through human stupidity.
"Trading with entities that are smarter than you". Given the ability of highly intelligent entities to predict the future better than you can, and deceive without outright lying, what kind of trades or bets is it wise to enter into with such entities? What kind of safeguards would you need to have in place?
"How to get a stupid person to let you out of a box". Along with, I think, many people who've never done it, I find the results of the AI-box experiment highly implausible. I can't even imagine ...
I almost got scammed today. I received a very official-looking piece of mail, "billing" me a few hundred bucks. Normally I would be able to see through it immediately, but this particular one caught me off guard. I am usually very good about being skeptical, and it disappointed me that I almost fell for it. What I think happened was that my familiarity heuristic was exploited.
I have business with a certain state, and I was used to receiving correspondence from various agencies and paying all sorts of different fees. So when I got th...
What is 'taste' (as in, artistic taste)? And what differentiates 'good taste' from 'bad taste'?
Is there research on the benefits of yoga compared to meditation, anaerobic exercise, and aerobic exercise? Or any subset of these, for that matter?
Most "predictions of evolution" that can be found online are more about finding past evidence of common descent (e.g. fossils) rather than predicting the future path that evolution will take. To excuse this, people say that evolution is hard to predict because it's directionless, e.g. it doesn't necessarily lead to more complexity, larger numbers of individuals, larger total mass, etc. That leads to the question: is there some deep reason why we can't find any numerical parameter that is predictably increased by evolution, or is it just that we haven't looked hard enough?
Plenty of people predict that increased antibiotic use will lead to a rise in antibiotic resistance among bacteria.
Organisms like bacteria, which have many more iterations behind them than humans, also tend to have less waste in their DNA.
Grasses beat trees at growing in glades with animals that eat plants. Why? Grass has more iterations behind it and is therefore better optimized for that environment than trees are.
A tree has to get lucky to survive the beginning. If it survives the beginning, however, it can grow tall and win.
Let's say you keep the environment stable for 2 billion years. Everything evolves naturally. Then you take tree seeds and bring them back to the present time. I think there's a good chance that such a tree would outcompete grass at growing in glades.
Most "predictions of evolution" that can be found online are more about finding past evidence of common descent (e.g. fossils) rather than predicting the future path that evolution will take.
Fossils don't really get used as the central evidence of common descent anymore. These days common descent usually gets determined by looking at the DNA. In my experience, people who discuss evolution online and who do focus on fossils are usually atheists who behave as if their atheism is a religion. They think it's important to defend Darwin against the creationists. On the other hand, they aren't up to date with the current science on evolution.
Evolution can both add and remove junk DNA. Humans are descended from bacteria.
More particularly, the equilibrium size of the DNA is very roughly inversely correlated with population size. A larger population size is better at filtering out disadvantageous traits. It's not linear - there are discontinuities as decreasing population size eliminates natural selection's ability to select against different things. And those things sometimes can even go on to be selected for for other reasons - there are genomic structures that are important for eukaryotes that could probably never have evolved in a bacterium, because to get to them you need to go through various local minima of fitness.
Soil bacteria can have trillions of individuals per cubic meter of dirt, and they actually experience direct selection towards lower genome size - more DNA means more sites at which something could mutate and become problematic, and they really do feel this force. Eukaryotes go up in volume by a factor of ~1000 and go down in population by at least as much, and lose much of the ability to select against introns, middling amounts of intergenic DNA, and expanding repeat-based centromere elements.
Mult...
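To make the population-size point concrete, here is a toy Wright-Fisher sketch (purely an illustration with arbitrary numbers, not anything from the literature): a single weakly deleterious "extra DNA" mutation drifts to fixation at close to the neutral rate of 1/N when N times s is small, but is purged reliably once N times s is large.

```python
# Toy Wright-Fisher simulation of the drift-barrier idea described above.
# A single new mutant allele with fitness 1 - s (think: a mildly costly chunk of
# extra DNA) either fixes or is lost; large populations purge it, small ones
# often fix it by drift. All numbers are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)

def fixation_fraction(N, s, trials=3000):
    """Fraction of runs in which one new deleterious mutant copy fixes
    in a haploid Wright-Fisher population of size N."""
    fixed = 0
    for _ in range(trials):
        count = 1                      # one mutant copy to start
        while 0 < count < N:
            p = count / N
            # chance an offspring inherits the mutant, weighted by fitness 1 - s
            p_sel = p * (1 - s) / (p * (1 - s) + (1 - p))
            count = rng.binomial(N, p_sel)
        fixed += (count == N)
    return fixed / trials

for N in (20, 200, 2000):
    print(N, fixation_fraction(N, s=0.01))
# Expect roughly the neutral rate 1/N when N*s << 1, and far less once N*s >> 1:
# the bigger the population, the better it filters out the slightly bad stuff.
```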
The closest thing I can think of, from what I know without going through the literature, is the building up of chains of dependencies. Once you have created a complex system that needs every bit to function, it has a tendency to stay as a unit or leave completely.
You can see that in a couple of contexts. One is 'subfunctionalization'. Gene duplications are fairly common across evolution - one gene gets duplicated into two identical genes and they are free to evolve separately. You usually hear about that in the context of one getting a new function, but that's actually comparatively rare. Much more likely is that both copies break slightly differently until both of them are necessary. A major component of the ATP-generating apparatus in fungi went through this: a subunit that is elsewhere composed of a ring of identical proteins now has to be composed of a ring of two alternating, almost identical proteins, neither of which can do the job on its own. Ray-finned fish recently went through a whole-genome duplication, and a number of their developmental transcription factors are now subfunctionalized such that, say, one does the job in the head end and the other does its job in the tail end...
Brienne Strohl mentioned a website called Gingko on Facebook which allows you to write documents in the form of nested trees.
I've been playing around with it today and found it very useful; being able to write ideas out in a disordered way seems to get around some of my perfectionism issues and stop me procrastinating. The real test is whether I continue to use it in the future; I'll try to check back in a month or so.
After doing Lumosity exercises for a bunch of days, I find that my speed/concentration scores are below 1000 (1000 is supposed to be average), while memory is at 1460 and problem solving at 1360.
I'm familiar with the discussion around fluid intelligence but what do we know about raising speed? Do we know how to conduct training to improve it?
...TEVROMATIN:
PROFILE: Chemotherapy adjuvant specifically designed for glioblastomas of neuronal origin. By mimicking natural neural differentiation factors, it causes these tumors to regress from resilient high-grade neuroblasts towards more typical neurons, making them easy targets for stronger chemotherapeutic agents.
BANNED BECAUSE: During the differentiation process, malignant nerve cells form connections to healthy nerve cells and to each other. As a result, the tumor forms a functioning neural network effectively “telepathically” connected to the healthy brain. Pa
A response to Aaron Freeman's "You Want a Physicist to Speak at Your Funeral."
If I had a physicist speak at my funeral, I would hope that he would talk about a lot more than the conservation of energy. I don't particularly care about what happens to my energy.
If I am lucky, he will speak about relativity. My family will probably have the mistaken intuition that only things in the present are truly real. Teach them about spacetime. They need to know that time and space are connected - that me being in the past is just like me being far away. The d...
What work has been done with the causality/probability of ontological loops? For example, if I have two boxes, one with a million dollars in it, and I'm given the option to open one of them and then go back to change what I did (with various probabilities for choice of box, success of time travel, and so on), is there existing literature telling me how likely I am to walk out with a million dollars?
Obviously the answer will change depending on which version of time travel you use (invariant, universe switching, totally variant, etc.)
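For concreteness, here is a toy Monte Carlo sketch under just one assumed reading (the simple "universe switching" retry model, with made-up probabilities); a fixed-point, Novikov-style treatment would instead require solving a consistency condition and could give a different answer.

```python
# Toy Monte Carlo for the two-box question under one assumed time-travel model:
# "universe switching" / simple retry. Pick a box at random; if it's empty,
# attempt to go back and switch boxes, succeeding with probability p_travel.
# All probabilities here are placeholders.
import random

def win_probability(p_first_pick=0.5, p_travel=0.9, trials=100_000):
    wins = 0
    for _ in range(trials):
        got_money = random.random() < p_first_pick         # first pick hits the money box
        if not got_money and random.random() < p_travel:   # go back and take the other box
            got_money = True                               # with two boxes, switching wins
        wins += got_money
    return wins / trials

print(win_probability())
# Analytically, under this model: p + (1 - p) * p_travel = 0.5 + 0.5 * 0.9 = 0.95
```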
Am I running on corrupted hardware or is life really this terrible? I don't think I can last another decade like this one, let alone whatever cryonically-supplied futures that would await. At this point, I think I would pay not to be frozen.
Ugh.
It sounds like you are depressed. It's probably worth considering therapy or psychiatric care - these interventions have helped me a lot. Hope things get better for you.
Trying to reason your way out of mental illness is like trying to pull yourself out of quicksand by yanking on your hair.
Depression screws with your thoughts and perceptions in incredibly profound ways, including your ability to make predictions about the future, and is absolutely a damper on rational thought. That's true whether it is caused by another mental illness or a traumatic event in your life; it's just as "chemical" and just as difficult to escape either way. Throwing off depression with strength of reason or willpower is a misunderstanding of how untreated depressed people adapt and occasionally heal, not a prescription.
The human body is built to survive, and the brain is no exception, but a rational person should always try to supplement their natural strength with medicine when their life is on the line. Advising anything else seems irresponsible.
As someone who's been in that boat, get in touch with a psychiatrist ASAP. It can very literally save your life, not to mention making it much much better on a day-to-day level.
Life is terrible, but it's also strange and beautiful; if you can't see a reason to continue with it, there is most likely an underlying problem (even if it is just "faulty wiring") which drugs and therapy can help you identify.
I cannot recommend seeing a psychiatrist highly enough.
An external view of your life and health, from a trusted professional, may help you identify causes of your discomfort and, most importantly, strategies to improve your life.
Some HPMOR speculation. Spoilers up to the current chapter. After writing this, I checked the last LessWrong thread on HPMOR, and at least one component of this has already been noticed by other people, but others have not been, I think.
Someone has been regularly downvoting everything I've posted in the past couple of months (not just a single karma-assassination). I really don't care about the karma (so please DO NOT upvote any of my previous posts in order to "fix" it), but I do worry that if someone is doing it to me, they are possibly doing it to other/new people and driving them off, so I wanted to point out publicly that this behaviour is NOT OKAY.
Anyways, if you have a problem with me, feel free to tell me about it here: http://www.admonymous.com/daenerys . Crocker's Rules and all.
Do I understand it correctly that the behavior you describe is "downvote every new comment from user X when it appears" (as opposed to "go to user X's history and downvote a lot of their old comments at the same time")?
Because when hearing about karma assassinations, I always automatically assumed the latter form; only the words "early downvote" in Nancy's comment made me realize the former form is also possible.
A possible technical fix could be to not display the user comment's karma until at least three votes were made or at...