If it's worth saying, but not worth its own post, even in Discussion, it goes here.
Moldbug on Cancer (and medicine in general)
...I'm going to be a heretic and argue that the problem with cancer research is institutional, not biological. The biological problem is clearly very hard, but the institutional problem is impossible.
You might or might not be familiar with the term "OODA loop," originally developed by fighter pilots:
http://en.wikipedia.org/wiki/OODA_loop
If the war on cancer was a dogfight, you'd need an order from the President every time you wanted to adjust your ailerons. Your OODA loop is 10-20 years long. If you're in an F-16 with Sidewinder missiles, and I'm in a Wright Flyer with a Colt .45, I'm still going to kill you under these conditions. Cancer is not (usually) a Wright Flyer with a Colt .45.
Lots of programmers are reading this. Here's an example of what life as a programmer would be like if you had to work with a 10-year OODA loop. You write an OS, complete with documentation and test suites, on paper. 10 years later, the code is finally typed in and you see if the test suites run. If bug - your OS failed! Restart the loop. I think it's pretty obvious that given these institutional constraints, we'd still be running CP/M. Oncology is s
What if we protect the AI industry from intrusive regulation early on when it's still safe, then suddenly it's an arms race of several UFAI projects, each hoping to be a little less bad than the others?
imagines US congress trying to legislate friendliness or regulate AI safety
ಠ_ಠ
First, he ignores that even under current circumstances a lot of people die from quackery (and in fact, the example he uses of Steve Jobs is arguably an example, since he used various ineffective alternative medicines until it was too late).
Steve Jobs sought out quackery. You seem to be confused by what is meant by quackery here:
The entire thrust of our medical regulatory system, from the Flexner Report to today, is the belief that it's better for 1000 patients to die of neglect, than 1 from quackery. Until this irrational fear of quack medicine is cured, there will be no real progress in the field.
People who die because they rely on alternative medicine aren't going to be helped in the slightest by an additional six or five or four sigmas of certainty within the walled garden of our medical regulatory system. Medical malpractice and incompetence are also not the correct meaning of "death by quackery" in the above text. Death by quackery quite clearly refers to deaths caused by experimental treatments while figuring out what the hell is happening.
You indeed miss a far better reason to criticize Moldbug here. A good reason for Moldbug being wrong is that eve...
I've been getting the feeling lately that LW-main is an academic ghetto where you have to be sophisticated and use citations and stuff. My model of what it should be is more like a blog with content that is interesting and educational, but shorter and easier to understand. These big 50-page "ebooks" that seem to get a lot of upvotes aren't obviously better to me than shorter and more to-the-point posts with the same titles would be.
Are we suffering sophistication inflation?
I feel like the mechanism probably goes something like:
An appropriate umeshism might be "If you've never gotten a post moved to Discussion, you're being too much of a perfectionist."
The problem, of course, is that there are very few things we can do to reverse the trend towards higher and higher post sophistication, since it's not an explicit threshold set by anyone but simply a runaway escalation.
One possible "patch" which comes to mind would be to set it up so that sufficiently high-scoring Discussion posts automatically get moved to Main, although I have no idea how technically complicated that is. I don't even think the bar would have to be that high. Picking an arbitrary "nothing up my sleeve" number of 10, at the moment the posts above 10 points on the first page of Discussion are:
Let me guess it was one of the top posters
Yes.
who thought your recent criticism of the direction of the community got too much karma.
Yes; his criticism was trivially wrong, as could be seen just by looking at posts systematically.
Or maybe someone who didn't like your responses here.
Actually, I laid out exactly what was wrong with the post: it was a good idea which hadn't been developed anywhere to the extent that it would be worth reading or referring back to, and I gave pointers to the literature he could use to develop it.
The reason I told Konk that his contributions were slightly net negative - when he specifically asked for my opinion on the matter - was exactly what Vladimir_Nesov guessed: he was flinging around and contributing all sorts of things, and just generally increasing the noise to signal ratio. I suggested he simply develop his ideas better and post less; Konk was the one who decided that he should leave/take a long break, saying that he had a lot of academic work coming up as well.
I'm not convinced his criticism is wrong. Lukeprog listed lots of substantive recent articles, but I question whether they were progress, given the current state of the community (for example, I'd like more historical analysis a la James Q. Wilson).
Given the karma, it appears that the community is not convinced the criticism is wrong. Even if Konkvistador is wrong, he isn't trivially wrong.
Lukeprog listed lots of substantive recent articles, but I question whether they were progress, given the current state of the community (for example, I'd like more historical analysis a la James Q. Wilson).
I think you're shifting goalposts. 'Progress', whatever that is, is different from being insular, and ironically enough, genuine progress can be taken as insularity. (For example, Rational Wiki mocks LW for being so into TDT/UDT/*DT which don't yet have proper academic credentials and insinuates they represent irrational cult-like markers, even though those are some of the few topics I think LW has made clear-cut progress on!)
Given the karma, it appears that the community is not convinced the criticism is wrong. Even if Konkvistador is wrong, he isn't trivially wrong.
I don't like to appeal to karma. Karma is changeable, does change, and should change as time passes, the karma at any point being only a provisional estimate: I have, here and on Reddit, on occasion flipped a well-upvoted (or downvoted) comment to the other sign by a well-reasoned or researched rebuttal to some comment that is flat-out wrong.
Perhaps people simply hadn't looked at the list of recent posts to notice that the basic claim of insularity was obviously wrong, or perhaps they were being generous and like you, read him as claiming something more interesting or subtle or not so obviously wrong like 'LW is not working on non-LW material enough'.
Apologies for the harsh language gwern. I shouldn't have used it. I will edit and retract to correct that.
Tell me, downvoters, did you even read my comment?
I read your comment, and I downvoted you because it was rude towards gwern, calling him a "damn robot". And I'm one of the guys that urged Konkvistador to stay, in a comment above. That doesn't excuse your rudeness. So you get properly downvoted by me (and gwern got upvoted because I like that he spoke up and declared he was the "top poster" in question and also gave a clear explanation of his reasons).
That konkvistador gave gwern's criticism more weight than he should isn't gwern's fault, it's konkvistador's.
My brain came up with this thought:
All else being equal, a murder is better than an accidental death, because a murder at least satisfies someone's preferences.
I was very tempted to take this as a reductio ad absurdum of consequentialism, to find all the posts where I advocated consequentialism and edit them, saying I'm not a consequentialist anymore, and to rethink my entire object-level ethics from the ground up.
And then my brain came up with other thoughts that defeated the reductio and I'm just as consequentialist as before.
For some reason, this was all very scary to me. This is the third data point now in examples of, "Grognor's opinion being changed by arguments way too easily". I think I'm gullible.
Three things: 1) I'm curious if other consequentialists will find the same knockdown for the reductio that I did; 2) Should I increase my belief in consequentialism since it just passed a strenuous test, decrease it because it just barely survived a bout with a crippling illness, or leave it the same because they cancel out or some other reason? 3) I can't seem to figure out when not to change my mind in response to reasonable-looking arguments. Help
Maybe you need to pay more attention to the ceteris paribus. When you include that, it seems perfectly sensible to me.
Consider a world in which in 1945 Adolf Hitler will either choke to death on a piece of spaghetti or will be poisoned by a survivor of the death camps that bribed his way into Hitler's bunker...
Taken straight from the top of Hacker News: Eulerian Video Magnification for Revealing Subtle Changes in the World.
In short, some people have found an algorithm for amplifying periodic changes in a video. I suggest watching the video, the examples are striking.
The primary example they use is that of being able to measure someone's pulse by detecting the subtle variations of color in their face.
The relevance here, of course, is that it's a very concrete illustration of the fact that there's a hell of a lot of information out there to be extracted (That Alien Message, etc.). Makes a nice companion example to the AI box experiment -- "Suppose you didn't even know this was possible, because the AI had figured it out first?"
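For the curious, here is a rough sense of the core trick in code. This is only a toy sketch of the idea, not the paper's actual method (which uses a spatial Laplacian-pyramid decomposition); the frequency band and gain below are made-up illustrative values, and the input is assumed to be a grayscale video as a numpy array.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def magnify_periodic_changes(frames, fps, low_hz=0.8, high_hz=2.0, gain=30.0):
        # frames: float array of shape (num_frames, height, width).
        # Temporally bandpass-filter each pixel's intensity over time,
        # then add the amplified filtered signal back to the original video.
        nyquist = fps / 2.0
        b, a = butter(2, [low_hz / nyquist, high_hz / nyquist], btype="band")
        filtered = filtfilt(b, a, frames, axis=0)
        return frames + gain * filtered

The 0.8-2 Hz band roughly covers resting heart rates, which is how the tiny per-pixel color fluctuation becomes a visible pulse.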
I've recently started Redditing, and would like to take the opportunity to thank the LW readership for generally being such a high calibre of correspondent.
Thank you.
Convergent instrumental goal: Kill All Humans
Katja Grace lists a few more convergent instrumental goals that an AGI would have absent some special measure to moderate that goal. It seems to me that the usual risk from AI can be phrased as a CIV of "kill all humans". Not just because you are made of atoms that can be used for something else, but because if our goals differ, we humans are likely to act to frustrate the goals of the AGI and even to destroy it, in order to maximize our own values; killing us all mitigates that risk.
Do you consider Stupid Questions Open Thread a useful thing? Do you want new iterations to appear more regularly? How often?
Even though I didn't ask anything in it, I enjoyed reading it and participating in discussions, and I think that it could reduce the "go to Sequences as in go to hell" problem and sophistication inflation.
I would like it to reoccur with approximately the regularity of usual Open Threads; maybe not on calendar basis, but after a week of silence in the old one or something like that.
If there's still somebody who thinks that the word "Singularity" hasn't lost all meaning, look no further than this paper:
We agree with Vinge's suggestion for naming events that are “capable of rupturing the fabric of human history” (or leading to profound societal changes) as a “singularity” [...] In this paper, we consider two past singularities (arguably with important enough social change to qualify) [...]. The globalization occurring under Portuguese leadership of maritime empire building and naval technological progress is characterized by a metric describing diffusion. The revolution in time keeping, on the other hand, is characterized by a technological capability metric.
Relevant to thinking about Moldbug's argument that decline in the quality of governance is masked by advances in technology and Pinker's argument on violence fading.
Murder and Medicine: The Lethality of Criminal Assault 1960-1999
Despite the proliferation of increasingly dangerous weapons and the very large increase in rates of serious criminal assault, since 1960, the lethality of such assault in the United States has dropped dramatically. This paradox has barely been studied and needs to be examined using national time-series data. Starting from the basic view that homicides are aggravated assaults with the outcome of the victim’s death, we assembled evidence from national data sources to show that the principal explanation of the downward trend in lethality involves parallel developments in medical technology and related medical support services that have suppressed the homicide rate compared to what it would be had such progress not been made. We argue that research into the causes and deterrability of homicide would benefit from a “lethality perspective” that focuses on serious assaults, only a small proportion of which end in death.
A blogger commenting on the study, and summar...
Here's a math problem that came up while I was cleaning up some decision theory math. Oh mighty LW, please solve this for me. If you fail me, I'll try MathOverflow :-)
Prove or disprove that for any real number p between 0 and 1, there exist finite or infinite sequences x_m and y_n of positive real numbers, and a finite or infinite matrix \varphi_{mn} of numbers each of which is either 0 or 1, such that:
1) \sum_m x_m = 1
2) \sum_n y_n = 1
3) \forall n: \sum_m x_m \varphi_{mn} = p
4) \forall m: \sum_n y_n \varphi_{mn} = p
Right now I only know it's true for rational p.
ETA Now I al...
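For what it's worth, here is a small sketch of one construction that seems to witness the rational case. The uniform weights and circulant matrix are my own guess at an example, not anything from the original problem statement.

    from fractions import Fraction

    def rational_p_example(a, b):
        # For p = a/b with 0 < a < b: take x_m = y_n = 1/b for m, n = 1..b,
        # and let phi be a circulant 0/1 matrix with exactly `a` ones in
        # every row and every column.
        p = Fraction(a, b)
        x = [Fraction(1, b)] * b
        y = [Fraction(1, b)] * b
        phi = [[1 if (n - m) % b < a else 0 for n in range(b)] for m in range(b)]
        assert sum(x) == 1 and sum(y) == 1
        assert all(sum(x[m] * phi[m][n] for m in range(b)) == p for n in range(b))
        assert all(sum(y[n] * phi[m][n] for n in range(b)) == p for m in range(b))
        return x, y, phi

    rational_p_example(3, 7)  # p = 3/7 satisfies conditions 1-4 exactly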
The risk of supervolcanoes looks higher than previously thought, though none are imminent.
Is there anything conceivable which can be done to ameliorate the risk?
Or maybe someone who didn't like your responses here.
If that's the reason someone asked Konkvistador to leave, then someone deserves less respect than given by Konkvistador. Much less respect.
Many articles don't use tags at all; others often misuse or underuse them. Too bad only article authors and editors can edit tags. I can't count the times I was researching a certain topic on LW and felt a micro-annoyance when I found an article that clearly should be tagged but isn't.
Could we perhaps make a public list of possible missing or poor tags by author, and then ask the author or an editor to fix it?
Could someone involved with TDT justify the expectation of "timeless trade" among post-singularity superintelligences? Why can't they just care about their individual future light-cones and ignore everything else?
I am doing a study on pick-up artistry. Currently I'm doing exploratory work to develop/find an abbreviated pick-up curriculum and operationalize pick-up success. I've been able to find some pretty good online resources*, but would appreciate any suggestions for further places to look. As this is undergraduate research I'm on a pretty nonexistent budget, so free stuff is greatly preferred. That said I can drop some out of pocket cash if necessary. If anyone with pick-up experience can talk to me, especially to give feedback on the completed materials that would be great.
*Seduction Chronicles and Attractology have been particularly useful
If you will need to convince a professor to someday give you a passing grade on this work I hope you are taking into account that most professors would consider what you are doing to be evil. Never, ever describe this kind of work on any type of graduate school application. Trust me, I know a lot about this kind of thing.
I wrote up what happened for Forbes. I later found out that it was Smith's President, not its Board of Trustees, that finally decided to give me tenure.
Racism, sexism and homophobia are the three primary evils for politically correct professors. From what I've read of pick-up (i.e. Roissy's blog) it is in part predicated on a negative view of women's intelligence, standards and ethics making it indeed sexist.
See this to get a feel for how feminists react to criticisms of women. Truth is not considered a defense for this kind of "sexism". (A professor suggested I should not be teaching at Smith College because during a panel discussion on free speech I said Summers was probably correct.)
I've never discussed pick-up with another professor, but systematically manipulating women into having sex by convincing them that you are something that you are not (alpha) would be considered by many feminists, I suspect, as a form of non-consensual sex.
People here generally agree that reading a big part of the Sequences is important for participating in debate. Yet I see a large influence on the thinking of people on LessWrong from non-Sequence and indeed non-LW writing, such as Paul Graham's essays "Keeping Your Identity Small" and "What You Can't Say". Why don't we include these in the promotion of material aspiring rationalists should ideally read?
Now consider building such a list. Don't include entire books. While a required reading list might complement the sequences nicely especially when Eliezer finally gets around ...
Disturbed to see two people I know linking to Dale Carrico on Twitter. Is there a standalone article somewhere that tries to explain the perils of trying to use literary criticism to predict the future? [EDIT: fixed name, thanks for the private message!]
What does it mean for a hypothesis to "have no moving parts"? Is that a technical thing or just a saying?
Meta
Guys I'd like your opinion on something.
Do you think LessWrong is too intellectually insular? What I mean by this is that we very seldom seem to adopt useful vocabulary or arguments or information from outside of LessWrong. For example, all I can think of is some of Robin Hanson's and Paul Graham's stuff. But I don't think Robin Hanson really counts, since LessWrong grew out of Overcoming Bias.
But for the most part not only has the LessWrong community not updated on ideas and concepts that haven't grown here. The only major examples fellow LWers brought up ...
Post some insightful, LW-relevant image macros.
Very difficult words to spell, arranged for maximum errors-- the discussion includes descriptions of flash recognition of errors.
Theories of Big Universes or Multiverses abound-- Inflation, Many Worlds, mathematical universes etc. Given a certain plausible, naturalistic account of personal identity (that for you to exist merely requires there to be something psychologically continuous with earlier stages of your existence) if any of these theories is true we are immortal (though not necessarily in the pleasant sense).
Questions: Is the argument valid? What are the chances that none of the multiverse theories are true? What, if anything, can we say about the likely character of this a...
A disproportionate number of people involved with AI risk mitigation and the Singularity Institute have graduated from "Elite Universities" such as Princeton, Harvard, Yale, Berkeley, and so on and so forth. How important are elite universities, apart from signalling status and intelligence? How important is signalling status by going to an elite university? Are they worth the investment?
Liron's post about the Atkins Diet got me thinking. I'd often heard that the vast majority of people who try to lose weight end up regaining most of it after 5 years, making permanent weight loss an extremely unlikely thing to succeed at. I checked out a few papers on the subject, but I'm not good at reading studies, so it would be great to get some help if any of you are interested. Here are the links (to pdfs) with a few notes. Anyone want to tell me if these papers really show what they say they do? Or at any rate, what do you think about the feasibilit...
It occurred to me that on this forum QM/MWI discussions are a mind-killer, for the same reasons as religion and politics are:
...As a rule, any mention of religion on an online forum degenerates into a religious argument. Why? Why does this happen with religion and not with Javascript or baking or other topics people talk about on forums?
What's different about religion is that people don't feel they need to have any particular expertise to have opinions about it. All they need is strongly held beliefs, and anyone can have those. No thread about Javascript wi
I'm thinking of writing a series of essays regarding applied rationality in terms of politics and utilitarianism, and the ways we can apply instrumental rationality to better help fight the mind-killingness of political arguments, but I'd like to make sure that lesswrong is open to this kind of thing. Is there any interest in this kind of thing? Is this against the no-politics rule?
Anyone else try the Bulletproof diet? Michael Vassar seems to have a high opinion of Dave Asprey, the diet's creator.
Anything non-obvious in job searching? I'm using my university's job listings and monster.com, but I welcome any and all advice as this is very new to me. While I won't ask, "What is the rational way of looking for jobs?" I will ask, "How can I look for jobs more effectively than with just online job postings?"
Anything non-obvious in job searching?
This depends on what you consider obvious. (Many things that seem obvious to me now would have been great advice 10 or 15 years ago; sometimes even 1 year ago.) Also there is a difference between knowing something and feeling it; or less mysteriously: between being vaguely aware that something "could help" and having an experience that something trivial and easy to miss did cause a 50% improvement in results. So at the risk of saying obvious things:
Don't be needy. Search for a job before you have to; that is before you run out of money. Some employers will take a lot of time; first interview, a week or two waiting, second interview, another week or two, third interview... be sure you have enough time to play this game. If a month later you get an offer that is not what you wanted, be sure to have a freedom to say "no".
Speak with more companies. If you get two great offers, you can always take one and refuse the other. If you get two bad offers (or one bad offer and one rejection), your only choices are to take a bad offer, or start the whole process again, losing a month of your time. How many companies is enough? You probably ...
For someone who hopes for lots of medical/bionic wonders going on the market within the next 2-3 decades, how stupid/costly is it really to start smoking a little bit today? I'm only asking because I tried it for the 1st time this week, and right now I'm sitting here smoking Dunhills, browsing stuff and listening to Alice in Chains, having a great night.
I insist on doing some light drug as I have an addictive personality that longs for a pleasant daily routine anyway - and I quit codeine this winter before it was made prescription-only (and not a moment t...
Post-Singularity you might BE a hoverboard.
(Of course, the premise of the comic is incompatible with the Singularity, since human-level AIs are widespread as companions, without ever going FOOM.)
Does a Tegmark Level IV type Big World completely break algorithmic probability? Is there any sort of probability that's equipped to deal with including a Big World as a possibility in your model?
Sean Carroll has a nice post at Cosmic Variance explaining how Occam's razor, properly interpreted, does not weigh against Many Worlds or multiverse theories. Sample quote:
...When it comes to the cosmological multiverse, and also the many-worlds interpretation of quantum mechanics, many people who are ordinarily quite careful fall into a certain kind of lazy thinking. The hidden idea seems to be (although they probably wouldn’t put it this way) that we carry around theories of the universe in a wheelbarrow, and that every different object in the theory take
I was annoyed after first hearing the Monty Hall problem. It wasn't clear that the host must always open the door, which fundamentally changes the problem. Glad to see that it's a recognized problem.
..."The problem is not well-formed," Mr. Gardner said, "unless it makes clear that the host must always open an empty door and offer the switch. Otherwise, if the host is malevolent, he may open another door only when it's to his advantage to let the player switch, and the probability of being right by switching could be as low as zero." Mr.
Hypothetical: what do you think would happen if, in a Western country with a more or less "average" culture of litigation - whether using trial by jury, by judge or a mix of both - all courts were allowed to judge not just the interpretation, applicability, spirit, etc, but also the constitutional merit of any law in every case (without any decision below the Supreme Court becoming precedent)?
Say, someone is arrested and brought to trial for illegal possession of firearms, but the judge just decides that the country's Constitution allows anyone ...
I'm Xom#1203 on Diablo 3. I have a lvl 60 Barb and a hilariously OP lvl ~30 DH. I'm willnewsome on chesscube.com, ShieldMantis on FICS. I like bullet 960 but I'm okay with more traditional games too. Currently rated like 2100 on chesscube, 1600 or something on FICS. Rarely use FICS. I'd like to play people who are better than me, gives me incentive to practice. (ETA: I'll probably re-post this next Open Thread, I hope people don't mind too much.)
Unless Clippy has been brainwashing some humans, the joys of paperclipping are not as alien to the human mind as we had thought:
Our key insight is a pessimistic one: this is the sort of situation which, though individuals and markets don’t handle it well, isn’t actually handled well by governments either. The fundamental mistake of statist thinking is to juxtapose the tragically, inevitably flawed response of individuals and markets to large collective-action problems like this one against the hypothetical perfection of idealized government action, without coping with the reality that government action is also tragically and inevitably flawed.
Or, more simply...
Is anyone else occasionally browsing LW on Android devices and finding the image based vote up, reply, parent etc. links on the comments much more difficult to hit correctly than regular links?
A user whose judgement I deeply admire has told me off-site that my posts are harmful to the community and it is better that I stop posting. I will respect his opinion and discontinue posting until further notice.
Please down vote this post if I make responses after it.
Thanks for all the fun and cool conversation! It was a great ride while it lasted, I will try to live up to the spirit of LW in the future.
First Checkpoint
I delayed the break from LW because of some of the feedback to this post as well as plain force of habit. I did some posts I considered...
until further notice.
You should note this on a calendar or something: two months from now you should re-evaluate your position. It seems to me like there's a chance you'll change to the point you're net positive; re-evaluation is cheap; that small chance should be allowed for, not discarded.
I'm sorry to see you go.
I do agree with gwern that your recent critical lamentations have been a negative contribution, particularly because I find it is too easy to be influenced towards cynicism. However, your recent dissatisfaction aside, your contributions in general are fine, making you a valuable community member. I never see the name "Konkvistador" and think "Oh damn, that moron is commenting again", which puts you ahead of rather a lot of people and almost constitutes high praise!
I can perhaps empathise with becoming disgruntled with intellectual standards on lesswrong. People are stupid and the world is mad - including most people here and everywhere else I have interacted with humans. I recently took a whole 30 days off, getting my score down to '0', weakening the addiction and also relieving a lot of frustration. I enjoy lesswrong much more after doing that. Hopefully you decide to return some time in the future as well.
I tend to agree with Shokwave's reply. Lesswrong users not learning a bunch of history is not a big deal. The subject is fairly boring. Someone else can learn it.
Lesswrong isn't supposed to be a site where all users must learn arbitrary amounts of information about arbitrary subjects. Most people have better things to do.
I find your style of commenting both fun to read and interesting. I think your posts are valuable even if they're more "thinking out loud" than "I have studied ALL THE LITERATURE". As a community I think we can and SHOULD be able to talk about things in ways that don't involve 50 citations at the bottom of the page, even though I think those posts are valuable. I don't know who you're scaring away with your amount of commenting, but I don't miss them.
Jeez.
You've been the top contributor in the past 30 days.
This departure of yours is the most harmful thing you've ever done to the community. I wish you'd stay.
This is bloody stupid.
Please don't go. If someone from my cluster of ideaspace told you that you detracted from the community - they are wrong.
whether Konkvistador's posts are slightly harmful for the community
It is ridiculous to argue that an eloquent and prolific poster who actually seems to have read the motherfucking sequences and doesn't get tired of trying to help new people access them (a rare trait these days) is causing harm.
Even if that were so for every single thing he wrote. And note that when Lukeprog cites recent articles against his argument that productivity and openness to outside ideas on LW is lower than it should be, the bundle includes many of Konkvistador's posts as examples of openness and productivity! Imagine that!
At the very least his excellent taste in outside links that he regularly shares with the community make him definitely a signal not a noise man.
But please, let's pile on him. I bet soon someone will bring up how he "violated the mindkilling taboo" or even accuse him of getting "mindkilled".
My rhetoric is what it is, I'm pissed. Feel free to make an argument for why Konkvistador's output is on net "harmful", I will try to consider it properly.
This is not my argument, please re-read the discussion when you calm down.
English is a viciously ambiguous language.
1) The preceding is not a quote, really, it's just a sentence I made up and want to analyze.
2) I think the sentence has more than an element of truth to it. While also being self-referential. This can be amusing in poetry, I guess, but I'm getting pretty sick of it right now.
3) I do not know what to do about this. I do not know how we even manage to talk to each other at all some times (!). Shades of meaning. Tones of voice running all out of sync to spoken words in order to hint at things that are better left u...
I want to talk about human intelligence amplification (IA), including things like brain-machine interfaces, brain/CNS mods, and perhaps eventually brute-force uploading/simulation. There are parallels between the dangers of AI and IA.
IA powerful enough to be or create an x-risk might be created before AGI. (E.g., successful IA might jump-start AGI development.) IA is likely to be created without a complete understanding of the human brain, because the task is just to modify existing brains, not to design one from scratch. We will then need FIA - the IA equ...
I just read the following comment from Ben123:
http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/6rzn
And it mentioned a chain of words I had not thought of before: "Multiplied by the reciprocal of the probability of this statement being true." At that point, I felt like I couldn't get much past "I notice I am confused." without feeling like I was making a math error. (And that it was possible the math error was in assuming the statement could be evaluated at all, which I wasn't sure of.)
In general, how should I asses...
A Transit of Venus finished a few hours ago. (100% overcast where I am, alas.)
The next one is in 2117. How many of us expect to see it?
ETA: So far, one nitpick and one Singularitarian prediction.
Personally, I expect to be dead in the usual way by the middle of this century at the latest, and even if I had myself frozen, I don't expect cryonic revival to be possible by 2117. I am not expecting a Singularity by then either. Twenty-year-olds today might reasonably have a hope of life extension of the necessary amount.
ETA2: A little sooner than that, there's ...
Would we lose much by not letting new, karmaless accounts post links? Active moderation is never going to be fast enough to keep stuff like this off the first page of http://lesswrong.com/comments, and it diminishes my enjoyment.
Or we could use some AI spam detection, I guess.
I've just realized that my information diet is well characterized as an eating disorder. Unfortunately, I'm not able to read about eating disorders (to see if their causes could plausibly result in information-diet analogs and whether their treatments can be used/adapted for information consumption disorders), because I get "sympathy attacks" (pain in my abdomen, weakness in my arms, mild nausea) when I see, hear salient descriptions of, or read about painful or very uncomfortable experiences.
I don't know what to do at this point. I'd like to hav...
To rationalize dust specks over torture, one can construct a utility function where utility of dust specks in n people is of the Zeno type, -(1-1/2^n), and the utility of torture is -2. Presumably, something else goes wrong when you do that. What is it?
As commenter Unknown pointed out in 2008, there must then exist two events A and B, with B only worse than A by an arbitrarily small amount, such that no number of As could be worse than some finite number of Bs.
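To make the arithmetic behind the proposed function explicit (just a sanity check on what it entails, which is exactly the structural feature Unknown's observation targets):

    def u_specks(n):
        # Zeno-type disutility of dust specks in n people, from the comment above.
        return -(1 - 0.5 ** n)

    U_TORTURE = -2
    # The speck disutility stays above -1 no matter how many victims, so under
    # this function no number of speck victims ever outweighs one torture.
    assert all(u_specks(n) > U_TORTURE for n in range(1, 10_000))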
Here's a little math problem that came up while I was cleaning up some decision theory math. Oh mighty LW, please solve this for me. If you fail me, I'll try MathOverflow :-)
Prove or disprove that for any real number p between 0 and 1, there exist finite or infinite sequences x_m and y_n of positive reals, and a finite or infinite matrix \varphi_{mn} of numbers each of which is either 0 or 1, such that:
1) \sum_m x_m = 1
2) \sum_n y_n = 1
3) \forall m,n: \varphi_{mn} = \varphi_{nm}
4) \forall n: \sum_m x_m \varphi_{mn} = p
5) \forall m: \sum_n y_n \varphi_{mn} = p
Right now I...
A graphical representation of sunk costs.
I just added a new post on my blog about some of my experiences with PredictionBook. It may be of interest to some here, but understand that the level of discourse is meant to be exactly in-between Less Wrong and my family and friends. It is very awkward for me to write this way and I don't really have the hang of it yet, so go easy. It is a very delicate balance between saying things imprecisely (and even knowingly wrong or incomplete) and keeping things jargon free and understandable to a wider audience.
As commenter Unknown pointed out in 2008, there must then exist two events A and B, with B only worse than A by an arbitrarily small amount, such that no number of As could be worse than some finite number of Bs.
Thanks, that's a valid point, pretty formal, too. I wonder if it invalidates the whole argument.