Open thread, Nov. 23 - Nov. 29, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
What would happen if an altcoin was developed where users had to precommit not to fork that coin?
How exactly could users of something anonymous precommit to not do something?
It wouldn't be much different from the status quo. None of the direct forks of bitcoin currently compete with bitcoin for the core purpose of being a currency rather than just a vehicle for speculation.
Do you know of any remedy or prevention for hiccups? I can't get anything trustworthy out of the internet or out of friends and family. All just anecdotes.
There's a very extensive medical literature - although mostly focusing on persistent (>48 hours) or intractable (>1 month) hiccups. One possible remedy jumped out at me from the Google Scholar results: the title alone gives the game away (albeit N=1):
Odeh, M., Bassan, H., & Oliven, A. (1990). Termination of intractable hiccups with digital rectal massage. Journal of internal medicine, 227(2), 145-146.
A very recent review by Steger et al (2015) gives good coverage of "state of the art" in acute hiccups:
before concluding in case of persistent/intractable hiccups:
Steger, M., Schneemann, M., & Fox, M. (2015). Systemic review: the pathogenesis and pharmacological treatment of hiccups. Alimentary pharmacology & therapeutics, 42(9), 1037-1050.
Further note: reference [23] above in Steger et al (2015) is "Watterson B. The Complete Calvin and Hobbes. Kansas City, MO: Andrews McMeel Publishing, 2005."
I've got a method that's reliable for me. I pay attention to how I feel between hiccups, observe what seems like a hiccuppy feeling (in the neighborhood of my diaphragm), and make myself stop feeling it.
Well, I know of some remedies, but they're also anecdotal :)
All the good ones I know are essentially breathing exercises, where you have to pay close attention to your breathing for a while (i.e. take control of your diaphragm). Like the classic "drink a glass of water from the far side of the glass" is actually a breathing exercise, which works just as well if you just do the breathing without the glass of water.
This works for me: Pour yourself a glass of water and hold it in one hand. Lift your arms up, reaching for the ceiling - this movement has the consequence of lifting your ribcage. Drink a few swallows from the glass of water without dropping your ribcage from its elevated orientation. Do this a few times.
I agree with the others: the diaphragm is the muscle underneath the lungs that controls your breathing, and hiccups are caused by irritation of the diaphragm. Knowing this, you are looking for methods of relaxing the diaphragm - which means generally trying to work out conscious control over an automatic muscle, and figuring out how to calm it down.
As for trustworthy, better-than-anecdote options: you can get surgery if it's a long-term (over several months) problem. How do you relax the diaphragm for your particular human hardware? Likely differently from other humans' hardware - so not much luck finding non-anecdotal solutions.
I've heard the Beatles have some recorded songs they never released because they were of too low quality. I think it would be worthwhile to study their material in its full breadth, mediocrity included, to get a sense for the true nature of the minds behind some greatness.
I've saved writings and poetry and raw, potentially embarrassing past creations for the sake of a similar understanding. I wish I had recordings of my initial fumblings with the instruments I now play rather better.
So it is in this general context of seeking fuller understanding that I ask whether anyone knows where to find those legendary old writings of Eliezer Yudkowsky, reputed to be embarrassing in their hubris, etc.
The "legendary old writings from Eliezer Yudkowsky" are probably easy to find, but I am not going to help you.
I do not like the idea of people (generally, not just EY) being judged for what they wrote dozens of years ago. (The "sense for the true nature" seems like the judgement is being prepared.)
Okay, I would make an exception in some situations; the rule of thumb being "more extreme things take longer time to forget". For example if someone would advocate genocide, or organize a murder of a specific person, then I would be suspicious of them even ten years later. But "embarrassing in their hubris"? Come on.
I don't think EY's ego got any smaller with time.
In the meantime he wrote the Sequences and HPMoR, and founded MIRI and CFAR. So maybe the distance between his ego and his real output got smaller.
Also, as Eliezer mentions in the Sequences, he used to have an "affective death spiral" about "intelligence", which is probably visible in his old writings, and contributes to the reader's perception of "big ego".
I don't really mind big egos as long as they drive people to produce something useful. (Yeah, we could have a separate debate about how much MIRI or HPMoR are really useful. But the old writings would be irrelevant for that debate.)
Here is what you sound like:
"But look at all this awesome fan fiction, and furthermore this 'big ego' is all your perception anyways, and furthermore I don't even mind it."
Why so defensive about EY's very common character flaws (which don't really require any exotic explanation, btw, e.g. think horses not zebras)? They don't reflect poorly on you.
EY's past stuff is evidence.
I'm defensive about digging into people's pasts, only to laugh that as teenagers they had the usual teenage hubris (and maybe, as highly intelligent people, kept it for a few more years)... and then using it to hint that even today, 'deep inside', they are 'essentially the same', i.e. not worth taking seriously.
What exactly are we punishing here; what exactly are we rewarding?
Ten or more years ago I also had a few weird ideas. My advantage is that I didn't publish them on visible places in English, and that I didn't become famous enough so people would now spend their time digging in my past. Also, I kept most of my ideas to myself, because I didn't try to organize people into anything. I didn't keep a regular diary, and when I find some old notes, I usually just cringe and quickly destroy them.
(So no, I don't care about any of Eliezer's flaws reflecting on me, or anything like that. Instead I imagine myself in a parallel universe, where I was more agenty and perhaps less introverted, so I started to spread my ideas sooner and wider, had the courage to try changing the world, and now people are digging up similar kinds of my writings. Generally, this is a mechanism for ruining sincere people's reputations: find something they wrote when they were just as sincere as now only less smart, and make people focus on that instead of what they are saying today.)
I guess I am oversensitive about this, because "pointing out that I failed at something a few years ago, therefore I shouldn't be trusted to do it, ever" was something my mother often did to me while I was a teenager. People grow up, damn it! It's not like once a baby, always a baby.
Everyone was a baby once. The difference is that for some people you have the records, and for other people you don't; so you can imagine that the former are still 'deep inside' baby-like and the latter are not. But that's confusing the map with the territory. As the saying goes, "an expert is a person who came from another city" (so you have never seen their younger self). As the fictional evidence proves, you could have literally godlike powers, and people would still diss you if they knew you as a kid. But on today's internet, everything is one big city, and anything you say can get documented forever. (Knowing this, I will forbid my children to use their real names online. Which probably will not help enough, because twenty years later there will be other methods for easily digging into people's pasts.)
Ah, whatever. It's already linked here anyway. So if it makes you feel better about yourself (returning the courtesy of online psychoanalysis) to read stupid stuff Eliezer wrote in the past, go ahead!
EDIT: I also see this as a part of a larger trend of intelligent people focusing too much on attacking each other instead of doing something meaningful. I understand the game-theoretical reasons for that (often it is easier to get status by attacking other people's work than presenting your own), but I don't want to support that trend.
EY is not a baby, and was not a baby in the time period under discussion. He is in his mid thirties today.
I have zero interest in gaining status in the LW/rationalist community. I already won the status tournament I care about. I have no interest in "crabbing" for that reason. I have no interest in being a "guru" to anyone. I am not EY's competitor, I am involved in a different game.
Whether me being free of the confounding influence of status in this context makes me a more reliable narrator I will let you decide.
What I am very interested in is decomposing cult behavior into constituent pieces to try to understand why it happens. This is what makes LW/rationalists so fascinating to me -- not quite a cult in the standard Scientology sense, but there is definitely something there.
Downvote explanation:
Using claim of immunity to status and authority games as evidence to assert a claim.
Which is to say, you are using a claim of immunity to status and authority games to assert status and authority.
Yes, that's right out of my own playbook, too. I welcome anybody who catches me at it to downvote me, and please let me know I've done it, as it is an insidious logical mistake I find it impossible to catch myself at.
I am not claiming status and authority (I don't want it), I am saying EY has a big ego. I don't think I need status and authority for that, right?
Say I did gain status and authority on LW. What would I do with it? I don't go to meetups, I hardly interact with the rationalist community in real life. What is this supposed status going to buy me, in practice? I am not trying to get laid. I am not looking to lead anybody, or live in a 'rationalist house,' or write long posts read by the community. Forget status, I don't even claim to be a community member, really.
I care about status in the context relevant to me (my academic community, for example, or my workplace).
Or, to put it simply, you guys are not my tribe. I just don't care enough about status here.
Bask in the glory? :-)
You might be an exception, but empirically speaking people tend to value their status in online communities, including communities members of which they will never meet in meatspace and which have no effect on their work/personal/etc. life.
Biologically hardwired instincts are hard to transcend :-/
You're claiming to have status and authority to make a particular claim about reality - "outsider" status, a status which gains you, with respect to adjudication of insider status and authority games... status and authority.
Now, your argument could stand or fall on its own merits, but you've chosen not to permit this, and instead have argued that you should be taken seriously on the merits of your personal relationship to the group (read: taken to have status and authority relative to the group, at least with respect to this claim).
I don't understand your objection.
Asserting a claim is not the same thing as asserting status and authority.
I'm not sure what you want from Ilya here. He seems to be describing his motivations in good faith. Do you think he's lying to gain status? Do you think he's telling the truth, but gaining status as a side effect, and he shouldn't do that?
Quick edit: Oh, I should probably have read the rest of the thread. I think I understand your objection now, but I disagree with it.
Welcome to the zoo! Please do not poke the animals with sticks or throw things at them to attract their attention. Do not push fingers or other objects through the fences. We would also ask you not to feed the animals, as it might lead to digestive problems.
It's an interesting zoo, where all the exhibits think they're the ones visiting and observing...
Of course :-)
The true observers we'll never know, because by definition they are not commenting here.
Mid thirties in 2015 means about twenty in 2001 (the date of most of the linked archives), right? That's halfway to baby from where I am now. Some of my cringeworthy diaries were written in my mid twenties.
He literally wrote plans about what he would do with the billions of dollars the Singularity Institute would be bringing in by 2005, using the words 'silicon crusade' to describe its actions to bring about the singularity and an interstellar supercivilization by 2010, so as to avoid the apocalyptic nanowar that would otherwise have started by then. He also went on and on about his SAT scores in middle school (which are lower than those of one of my friends, taken via the same program at the same age) and how they proved he was a mutant supergenius, the only possible person who could save the world.
I am distinctly unimpressed.
I can hardly wait to look back on his 'shameless blegging' post in a few years and compare it to reality. Pretty sure I know what the result will be.
Is it at all meaningful to you that EY writes this in his homepage?
It is true that EY has a big ego, but he also has the ability to renounce past opinions and admit his mistakes.
Absolutely, it is meaningful.
For many types of problems, analyzing how a system changed over time is a more effective method of understanding a problem than comparing one system's present state with another system's present state.
Is that true even with highly non-linear systems like humans?
Yes, it is.
Very interesting, thanks.
I don't think Eliezer's changes in hubris level are what's interesting -- he's had some influence, and no one seems to think his earliest work is his best. It might make sense to find out how his writing has changed over time.
These are so much fun to read!
(snapshot times chosen more or less at random, and specific pages are what I consider the highlights)
https://web.archive.org/web/20010204095400/http://sysopmind.com/beyond.html
(contains links to everything below and much more)
https://web.archive.org/web/20010213215810/http://sysopmind.com/sing/plan.html (his original founding plans for the singularity institute, extremely amusing)
https://web.archive.org/web/20010606183250/http://sysopmind.com/singularity.html
http://web.archive.org/web/20101227203946/http://www.acceleratingfuture.com/wiki/So_You_Want_To_Be_A_Seed_AI_Programmer (some... exceptional quotes in here and you can follow links)
https://web.archive.org/web/20010309014808/http://sysopmind.com/eliezer.html
https://web.archive.org/web/20010202171200/http://sysopmind.com/algernon.html
More can be found poking around on web archive and youtube and vimeo. Even more via PM.
From Omnilibrium:
Donald Trump and the Methods of Rationality: Part I. Deconstructing Political Motivations
Is there a scientific way to prove or disprove discrimination in the academy?
Will Paris attacks succeed in their objectives?
What's in a name?
Health care wait time in Europe
How do you estimate threats and your ability to cope; what advice can you share with others based on your experiences?
What kind of threats?
Any arbitrary threat?
404: Generalized model not found
If molecular interactions are deterministic, are all universes identical?
In a universe where molecular interactions are deterministic, I don't see any additional universes emerging.
Depends on what you mean by "deterministic" (and "universe").
1) Do you assume each interaction has only one outcome, or are multiple outcomes (in different Everett branches) possible?
2) Do you assume all universes started in the same state? Molecular interactions in an existing universe are a different topic than the "creation of the universe".
If by deterministic you mean informationally - that is, with complete information we could in principle predict any future state (barring complexity) - then we most definitely know that molecular interactions are not deterministic.
However, even hypothesizing a deterministic universe, you could have different starting conditions that would evolve into different universes, and while you are at it, why not postulate different deterministic laws?
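A toy sketch of the distinction: under a deterministic update rule, identical starting states always produce identical histories, while even a tiny difference in starting conditions can diverge wildly. The update rule below (a logistic-style map) is arbitrary and purely illustrative:

```python
# Toy "universe": a deterministic rule applied repeatedly to a state vector.
# Identical initial states stay identical forever; different ones diverge.

def step(state):
    # Arbitrary deterministic (and chaotic) rule: the logistic map with r = 4.
    return [4.0 * x * (1.0 - x) for x in state]

def evolve(state, steps):
    for _ in range(steps):
        state = step(state)
    return state

universe_a = evolve([0.1, 0.2, 0.3], 50)
universe_b = evolve([0.1, 0.2, 0.3], 50)       # same initial conditions
universe_c = evolve([0.1, 0.2, 0.30001], 50)   # tiny difference in one value

print(universe_a == universe_b)  # True: same laws + same start = same history
print(universe_a == universe_c)  # False: different start, different universe
```

So determinism alone only guarantees identical universes given identical laws and identical initial states, which is the second commenter's point.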
So, it seems like lots of people advise buying index funds, but how do I figure out which specific ones I should choose?
You need to figure out things like your own risk tolerance, your own time horizons for investments, and your own ideas about what might happen (or not) in the econo-financial world within your time horizons.
Short version: try something like Vanguard's online recommendation, or check out Wealthfront or Betterment. Probably you'll just end up buying VTSMX.
Long version: The basic argument for index funds over individual stocks is that you think that a <broad class> is going to outperform a <narrow subclass> because of general economic growth and reduced risk through pooling. So if you apply the same logic to index funds, what that argues is that you should find the index fund that covers the largest possible pool.
But it also becomes obvious that this logic only stretches so far--one might think that meta-indexing requires having a stock index fund and a bond index fund that are both held in proportion to the total value of stocks and bonds. So let's start looking at the factors that push in the opposite direction.
First, historically stocks have returned more than bonds long-term, with higher variability. It makes sense to balance your holdings based on your time and risk preferences, rather than the total market's time and risk preferences. (If you're young, preferentially own stocks.)
As well, you might live in the US, for example, and find it more legally convenient to own US stocks than international stocks. The corresponding fund is VTSMX, for the total US stock market. If you want the global fund, it's VTWSX.
You might have beliefs about small caps and large caps, or sectors, and so on and so on. One mistake to avoid here is saying "well, I have three options, so clearly I should put a third of my money into each option," especially because many of these options contain each other--the global fund mentioned earlier is also a US fund, because the US is part of the globe.
Asset allocation (what portion of your money is in stocks and bonds) is very important, depends on your age, and will get out of whack unless you rebalance. So use a Vanguard Target Retirement Date fund.
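To make the allocation-and-rebalancing idea concrete, here is a minimal sketch. The "110 minus age" rule used here is just one popular rule of thumb, not Vanguard's actual glide path, and all numbers are illustrative:

```python
# Sketch of an age-based stock/bond split and the trades needed to restore it.
# "110 minus age" is a common heuristic, not any fund company's actual policy.

def target_allocation(age):
    # Percent of the portfolio in stocks vs. bonds.
    stocks = max(0, min(100, 110 - age))
    return {"stocks": stocks, "bonds": 100 - stocks}

def rebalance_trades(holdings, age):
    total = sum(holdings.values())
    target = target_allocation(age)
    # Dollars to buy (+) or sell (-) in each class to hit the target mix.
    return {k: target[k] * total / 100 - holdings[k] for k in holdings}

print(target_allocation(30))
# {'stocks': 80, 'bonds': 20}
print(rebalance_trades({"stocks": 90_000, "bonds": 10_000}, 30))
# {'stocks': -10000.0, 'bonds': 10000.0}  (stocks drifted high; sell to rebalance)
```

A target-date fund does essentially this for you automatically, shifting the target toward bonds as you age.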
There are more financial assets than just stocks and bonds.
Yes, but those are the important ones. Stocks for high expected returns and bonds for stability. You can generalize "bonds" to include other things that return principal plus interest like cash and CDs.
What's the criterion of importance?
Um.... I hate to break it to you...
Important to the goal of increasing one's wealth while managing the risk of losing it. Certainly there are other possible goals (perhaps maximizing the chance of having a certain amount of money at a certain time, for example) but this is the most common, and the one that I assume people on LW discussing basic investing concepts would be interested in.
I'm not sure if you're referring to the fact that popular banks are returning virtually zero interest or if you're interpreting "cash" as "physical currency notes". If the former, I have cash in bank accounts that return .01%, 1%, and 4.09% (each serving different purposes). If the latter, I apologize for the confusion. The word is used to mean different things in different contexts. In the context of investing it is standard to include in its meaning checking and savings accounts, and often also CDs.
Given this definition, I don't see why only stocks and bonds qualify.
True, but given that you said "cash and CDs" I thought your idea of cash excludes deposits. Still, there are more asset classes than equity and fixed income.
My claim is that equity and fixed income are the important pieces for reaching that goal. With a total stock index fund and a total bond index fund you can achieve these goals almost as well as any other more complicated portfolio. Additional asset classes can add additional diversification or hedge against specific risks. What other asset classes do you have in mind? Real estate? Commodities? Currencies?
Fair enough. I was unclear.
The best argument for getting an index fund is the expense ratio, not broad versus narrow. Managed mutual funds have higher expense ratios because of the manager's salary. Frequent trading instead of buy-and-hold will similarly cost you more because of transaction costs. To justify those transactions, a manager doesn't just have to beat the market, but beat it by a large enough margin to cover those extra costs. Because of the number of managers out there, even if one has consistently beaten the market, it is impossible to determine whether that is due to skill or luck for any given manager. Large domestic index funds will generally have the lowest expense ratios.
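The compounding cost of expense ratios is easy to illustrate. The figures below (7% gross annual return, 0.05% vs. 1% expense ratios, 30 years) are illustrative assumptions, not historical data:

```python
# How much of final wealth an expense ratio consumes over a long horizon.
# Simplification: fees modeled as a flat annual drag on the gross return.

def final_value(principal, gross_return, expense_ratio, years):
    return principal * (1 + gross_return - expense_ratio) ** years

index_fund = final_value(10_000, 0.07, 0.0005, 30)  # low-cost index fund
managed    = final_value(10_000, 0.07, 0.0100, 30)  # typical managed fund

print(f"index fund: ${index_fund:,.0f}")
print(f"managed:    ${managed:,.0f}")
print(f"the extra fees consume {1 - managed / index_fund:.0%} of final wealth")
```

The point is that a seemingly small annual fee compounds into a loss of roughly a fifth to a quarter of the final balance over multi-decade horizons.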
I have a secondary question to that. These things seem to all operate online only, without bricks and mortar. How do I assure myself that a website that I have never seen before is trustworthy enough to invest, say, 6-figure sums of money in? Are there official ratings or registers, for probity rather than performance?
That's easy to answer for Vanguard, which has been around since 1975 and has $3T under management. It's not going anywhere. Both Wealthfront and Betterment were founded in 2008, in Palo Alto and NYC respectively, and have about $2B and $3B under management. I don't think there are any official ratings of probity out there; I'm not sure there's a good source besides trawling through the business press looking for red flags.
You may want to check if the brokerage firm/custodian is a member of SIPC, which provides a level of insurance against misappropriation. I think all the big names are members (Vanguard, Schwab, TD Ameritrade, Fidelity, etc.)
http://www.sipc.org/for-investors/what-sipc-protects
Why are there many LWers from, say, Europe, but not China?
I'm going to guess it's based on some of the East-West thinking differences outlined by Richard Nisbett in The Geography of Thought (I very highly recommend that book, BTW). I don't remember everything in the book, but I remember he had some stuff in there about why easterners are often less interested in, and have a harder time with, the sort of logical/scientific thinking that LW advocates.
Which is weird because, if you take the ethnic-IQ correlation seriously (which I don't), East Asians show a higher-than-Western average IQ.
Nothing to do with IQ, but with modes of thinking. According to Nisbett, Eastern thinking is more holistic and concrete vs. the Western formal and abstract approach. He says that Easterners often make fewer thinking mistakes when dealing with other people, where a more holistic approach is needed (for example, Easterners are much less prone to the Fundamental Attribution Error). But at the same time they tend to make more thinking mistakes when it comes to thinking about scientific questions, as that often requires formal, abstract thinking. Nisbett also speculates that this is why science developed only in the west even though China was way ahead of the west in (concrete-thinking-based) technological progress.
In general there's very little if any correlation between IQ and rationality. A lot of Keith Stanovich's work is on this.
I second the recommendation of The Geography of Thought.
I'm going to guess that English language proficiency is far higher in Europe than it is in China. But Asian Americans seem underrepresented on LW relative to the fields that LW draws heavily from, so that seems unlikely to be a complete explanation.
Post-human mathematics at arXiv.
It always seemed very strange to me how, despite the obvious similarities and overlaps between mathematics and computer science, the use of computers for mathematics has largely been a fringe movement and mathematicians mostly still do mathematics the way it was done in the 19th century. This even though precision and accuracy are highly valued in mathematics, and decades of experience in computer science has shown us just how prone humans are to making mistakes in programs, proofs, etc. and just how stubbornly these mistakes can evade the eyes of proof-checkers.
Converting most of existing math into formal developments suitable for computer use would be a huge undertaking, possibly requiring several hundred man-years of work. Most people aren't going to work on such a goal with any seriousness until it's clear to them that the results will in fact be widely used. This in turn requires further work in order to come up with lightweight, broadly-applicable logical foundations/frameworks, as well as more work on the usability of proof environments. Progress on these things has been quite slow, although we have seen some encouraging news lately, such as the recent 'formal proof' of the Kepler conjecture. And even that was actually a bunch of formal proofs developed under quite different systems, that can be argued to solve the conjecture only when they're somehow combined. I think this example makes it abundantly clear that current approaches to this field - even at their most successful - do have non-trivial drawbacks.
You're speaking of unifying all of math under the same system. I don't think that's strictly necessary, or even desirable. The computer science equivalent of that would be a development environment where every algorithm in the literature is implemented as a function.
I'm wondering more about why problem-specific computer-verifiable proofs aren't used.
The problem is, no matter how 'problem-specific' your proofs are, they aren't going to be 'verifiable' unless you specify them all the way down to some reasonable foundation. That's the really big undertaking, so you'll want to unify things as much as possible, if only to share whatever you can and avoid any duplication of effort.
If that's true then it logically follows that most existing mathematics literature is un-verifiable - a statement that I think mathematicians would take issue with. After all, that's not how most mathematics literature is presented.
I agree with that.
In the future, it would be best to derive everything from the axioms. (Using libraries where the frequently used theorems are already proved.) The problem is, the most simple theorems that we can derive from the axioms quickly are not important enough to pay for the development and use of the software.
So a better approach would be for the system to accept a few theorems as (temporary) axioms. Essentially, if it would be okay to use the Pythagorean theorem in a scientific paper without proving it, then in the first version of the program it would be okay to use the Pythagorean theorem as an axiom -- displaying a warning "I have used Pythagorean theorem without having a proof of it".
This first version would already be helpful at verifying current papers. And there is an option to provide the proof of the Pythagorean theorem from first principles later. If you add it later, you can re-run the papers and get the results with fewer warnings. If the Pythagorean theorem happens to be wrong, as long as you have provided the warnings for all papers, you know which ones to retract.
Actually, I believe such systems would be super helpful e.g. in set theory, when you want to verify whether the proof you used relies on the axiom of choice. Because even if you didn't use it directly, maybe one of the theorems you used was based on it. Generally, using different sets of axioms could become easier.
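For what it's worth, proof assistants already support roughly this workflow. In Lean 4, for instance, a statement can be assumed outright with `axiom` (the "temporary axiom" approach), and `#print axioms` lists every axiom a given proof ultimately depends on, including `Classical.choice`, i.e. the axiom of choice. A minimal sketch:

```lean
-- Assume a fact without proof (the "use it now, prove it later" approach).
axiom pythagoras : ∀ a b c : Nat, a = 3 → b = 4 → c = 5 → a^2 + b^2 = c^2

theorem uses_it : (3 : Nat)^2 + 4^2 = 5^2 :=
  pythagoras 3 4 5 rfl rfl rfl

-- Reports every axiom `uses_it` depends on (here: `pythagoras`);
-- the same command exposes hidden uses of `Classical.choice`.
#print axioms uses_it
```

If `pythagoras` is later proved from first principles, the axiom is replaced by the proof and the dependency report comes back clean, which is exactly the "re-run with fewer warnings" workflow described above.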
Yes that's an insightful way of looking at how computer verification could assist in real mathematics research.
Going back to the CS analogy, programmers started out by writing everything in machine language, then gradually people began to write commonly-used functions as libraries that you could just install and forget about (they didn't even have to be in the same language) and they wrote higher-level languages that could automatically compile to machine code. Higher and higher levels of abstraction were recognized and implemented over the years (for implementing things like parsers, data structures, databases, etc.) until we got to modern languages like python and java where programming almost feels like simply writing out your thoughts. There was very little universal coordination in all of this; it just grew out of the needs of various people. No one in 1960 sat down and said, "Ok, let's write python."
For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware.
As to "writing out your thoughts", people did design such a language in 1959...
P.S. Oh, and do your thoughts flow like this..?
LISP was definitely a thing in the 1960s, and python is not that different. For a long time, the former was pretty much 'the one' very-high-level, application-oriented language. Much like Python or Ruby today.
That the implementation of python is fairly slow is a different matter, and high-level languages need not be any slower than, say, C or Fortran, as modern JIT languages demonstrate. It just takes a lot of work to make them fast.
Lisp was also designed during that same period and probably proves your point even better. But 1960s Lisp was as bare-bones as it was high-level; you still had to write almost everything yourself from scratch.
I think the difficulty is in part due to the fact that mathematicians use classical metalogic (e.g. proof by contradiction) which is not easily implemented in a computer system. The most famous mathematical assistant, Coq, is based on a constructive type theory. Even the univalence program, which is ambitious in its goal to formalize all mathematics, is based on a variant of intuitionistic meta-logic.
Substantial work has been done on this. The two major systems I know of are Automath (defunct but historically important) and Mizar (still alive). Looking at those articles just now also turns up Metamath. Also of historical interest is QED, which never really got started, but is apparently still inspiring enough that a 20-year anniversary workshop was held last year.
Creating a medium for formally verified proofs is a frequently occurring idea, but no one has yet brought such a project to completion. These systems are still used mainly to demonstrate that it can be done; they are not used to write up new theorems.
I thought there were several examples of theorems that had only been proved by computers, like the Four Color Theorem, but that they're sort of in their own universe because they rely on checking thousands of cases, and so not only could a person not really be sure that they verified the proof (because the odds of them making a mistake would be so high) they couldn't get much in the way of intuition or shared technique from the proof.
Yes, although as far as I know things like that, and the FCT in particular, have only been proved by custom software written for the problem.
There's also a distinction between using a computer to find a proof, and using it to formalise a proof found by other means.
Indeed, the computer-generated proofs of 4CT were not only not formal proofs, they were not correct. Once a decade, someone would point out an error in the previous version and code his own. But now there is a version for an off the shelf verifier.
Correctness is essential, but another highly desirable property of a mathematical proof is insightfulness: whether it contains interesting and novel ideas that can later be reused in others' work (often these ideas are regarded as more important than the theorem itself). These others are humans, and they want, let's call it, "human-style" insights. Perhaps if we had AIs that "desired" "computer-style" insights, some people (and AIs) would write their papers to provide them and investigate problems that are most likely to lead to them. Proofs that involve computers are often criticized for being uninsightful.
Proofs that involve steps requiring the use of computers (as opposed to formal proofs that employ proof assistants) are sometimes also criticized for not being human-verifiable: while humans make mistakes and computer software can contain bugs, mathematicians can sometimes use their intuition and sanity checks to catch the former, but not necessarily the latter.
Mathematical intuition is developed by working in an area for a long time and being exposed to various insights, heuristics, and ideas (as mentioned in the first paragraph). Thus not only are computer-based proofs harder to verify; if an area relies on many proofs that are not human-verifiable, it might be significantly harder to develop an intuition in that area, which might then make it harder for humans to create new mathematical ideas. It is probably easier to understand a landscape of ideas that were created to be human-understandable.
That is neither to say that computers have little place in mathematics (they do: they can be used for formal proofs, generating conjectures, or gathering evidence for which approach to use on a problem), nor is it to say that computers will never make human mathematicians obsolete (perhaps they will become so good that humans will no longer be able to compete).
However, it should be noted that some people have different opinions.
Automated theorem proving is a different problem entirely and it's obviously not ready yet to take the place of human mathematicians. I'm not in disagreement with you here.
However there's no conflict between being 'insightful' and 'intuitive' and being computer-verifiable. In the ideal case you would have a language for expressing mathematics that mapped well to human intuition. I can't think of any reason this couldn't be done. But that's not even necessary -- you could simply write human-understandable versions of your proofs along with machine-verifiable versions, both proving the same statements.
People are working on changing that (at CMU for example).
What is the optimal amount of attention to pay to political news? I've been trying to cut down to reduce stress over things I can't control, but ignoring it entirely seems a little dangerous. For an extreme example, consider the Jews in Nazi Germany - I'd imagine those who kept an eye on what was going on were more likely to leave the country before the Holocaust. Of course something that bad is unlikely, but it seems like it could still be important to be aware of impactful new laws that are passed - eg anti-privacy laws, or internet piracy now much more heavily punishable, etc.
So what's the best way to keep up on things that might have an impact on one's life, without getting caught up in the back-and-forth of day-to-day politics?
For the extreme stuff, I think you'll get clues from things like how people like you are treated on the street -- if it's your country. If you're at risk of being conquered by a government that hates you, the estimate is more complicated.
For the more likely things to keep track of, think about what's likely to affect you (like changes in laws) and use specialist sources.
To electioneering, zero would be about right (unless you appreciate the entertainment value). To particular laws and/or regulations which might affect you personally, enough to know the landscape.
If you live in the US I would guess that if you read LW you will see comments about really important political events.
Some things to think about:
Are there actual political threats to you in your own polity (nation, state, etc.)? Do you belong to groups that there's a history of official repression or large-scale political violence against? Are there notable political voices or movements explicitly calling for the government to round you up, kill you, take away your citizenship or your children, etc.? (To be clear: An entertainer tweeting "kill all the lawyers" is not what I mean here.)
Are you engaged in fields of business or hobbies that are novel, scary, dangerous, or offensive to a lot of people in your polity, and that therefore might be subject to new regulation? This includes both things that you acknowledge as possibly harmful (say, working with poisonous chemicals that you take precautions against, but which the public might be exposed to) as well as things that you don't think are harmful, but about which other people might disagree. (Examples: Internet; fossil fuels; drones; guns; gambling; recreational drugs; pornography)
Internationally — In the past two hundred years, how often has your country been invaded or conquered? How many civil wars, coups d'état, or failed wars of independence have there been; especially ones sponsored by foreign powers? How much of your country's border is disputed with neighboring nations?
I do like the list :-)
Get weekly updates from light, happy sources (The Daily Show, The News Quiz, Mock the Week), and then specific searches for things that sound important.
I wondered how something called "Mock the Weak" would be considered a "happy source"... then I noticed the two "e"s
Those strike me as worse than useless for the kind of things ShardPhoenix is interested in, e.g., they are the kinds of shows that would mock the "idiots" who believe the "ridiculous conspiracy theory" that the Nazis are actually planning to systematically exterminate the Jews.
how I do it -
Things that I care about: local events (likelihood of terrorism, or safety threats nearby)
Things I don't care about: any politics that is further away than that. (and not likely to affect my life)
global, country-wide, natural disasters that are far away.
This is harder than it seems. For example, to find out when you need to withdraw your money ahead of a banking crisis, like what happened in Cyprus and Greece, you need to figure this out ahead of everybody else. Furthermore, the authorities are going to be doing their best to cover up the impending crisis.
Why is my karma so low? Is there something I'm consistently doing wrong that I can do less wrong? I'm sorry.
I think it's that you post a lot of questions and not a lot of content. Less Wrong is predisposed to upvoting high-content responses. I haven't had an account for very long, but I have lurked for ages. That's my impression, anyways. I recognize that since I haven't actually pulled comment karma data from the site and analyzed it, I could be totally off-base.
Maybe when you ask questions, use this form:
[This is a general response to the post] and [This is what is confusing me] but [I thought about it and I think I have the answer, is this correct?] or [I thought about it, came up with these conclusions, but rejected them for reasons listed here, I'm still confused]
EDIT: I just looked at your submitted history. You do post content in Main, apparently, but your posts seem to run counter to the popular ideas here. There is bias, and LessWrong has a lot of ideas deemed "settled." Effective Altruism appears to be one, and you have posted arguments against it. I've also seen some of your posts jump to conclusions without explaining your explicit reasons. LWers seem to appreciate having concepts reduced as much as possible to make reasoning more explicit.
Any group has a lot of ideas that are settled. If you want to convince any scientifically minded group that Aristotle's four elements theory is true, then you have to hit a high bar to avoid getting rejected. If anything, LW allows a wide array of contrarian points.
LW's second-highest-voted post is Holden's post against MIRI, and it is contrarian to core ideas of this community in the same sense as a post criticizing EA is. The difference is that the post actually goes deep and makes a substantive argument.
I want to say that that's what I was trying to imply, but that might be backwards-rationalization. I do have the impression that contrarian ideas are accepted and lauded if and only if they're presented with the reasoning standards of the community. I'll be honest: LW does strike me as far-fetched in some respects BUT I recognize that I haven't done enough reading on those subjects to have an informed opinion. I've lurked but am not an ingrained member of the community and can't give a detailed analysis of the standards. Only my impression.
AND I realize that this sounds defensive, and I know there's no real reason for my ego to be wounded. I appreciate your input! I hope that my advice to Clarity wasn't too far off the mark. I tried to be clear about my advice being based on impressions more than data.
EDIT: removed "biased," replaced with "far-fetched."
Yes, LW does have reasoning standards. That's part of what refining the art of human rationality is about.
What do you mean by "biased"? That LW is different from mainstream society in the ideas it values?
Do you think it's a bias to treat a badly reasoned post which might result in people dying differently than harmless badly reasoned posts?
Obviously it has reasoning standards. They are much higher than the average person might expect, because that's one of the goals of the community.
Bias was a poor word to use, and I retract my use of the term. I mean that as a relatively new participant, there are ideas that seem far-fetched because I have not examined the arguments for them. I admit that this is nothing more than my visceral reaction. Until I examine each issue thoroughly, I won't be able to say anything but "that viscerally strikes me as biased." Cryonics, for instance, is a conclusion that seems far-fetched because I have a very poor understanding of biology, and no exposure to the discussion around it. Without a better background in the science and philosophy of cryonics, I have no way of incorporating casual acceptance of the idea into my own conclusion. I recognize that, admit it, and am apparently not being clear about that fact. In trying to express empathy with a visceral reaction of disbelief, I misused the word "bias" and will be more clear in the future.
On the second point: I understand that there's a cost to treating every post with the same rigor. Posts that are poorly reasoned, and come to potentially dangerous conclusions, should be examined more rigorously. Posts that are just as bad, but whose conclusions are less dangerous, can probably be taken less seriously. Even so...someone who makes many such arguments, with a mix of dangerous and less-dangerous conclusions, might see a lack of negative feedback as positive feedback. That's an issue in itself, but newcomers wouldn't be in a position to recognize that.
Cryonics is not a discussion that's primarily about biology. A lot of outsiders will want to either think that cryonics works or that it doesn't. On LW there's a current that we don't make binary judgements like that but instead reason with probabilities. So thinking that there's a 20% chance that cryonics works is enough for people to go out and buy cryonics insurance, because of the huge value cryonics has if it succeeds. That's radically different from how most people outside of LW think.
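That kind of probabilistic reasoning is just an expected-value calculation. Here is a minimal sketch: the 20% success probability comes from the comment above, but the life-years gained and the cost are made-up illustrative assumptions, not claims about cryonics itself:

```python
# Expected-value sketch for cryonics, with illustrative numbers:
# the 20% success probability is from the comment; 50 extra life-years
# and a $100,000 lifetime cost are assumptions for the example.
p_works = 0.20
years_if_works = 50
cost_usd = 100_000

expected_years = p_works * years_if_works           # 10.0 expected life-years
cost_per_expected_year = cost_usd / expected_years  # 10000.0 dollars per expected year

print(expected_years, cost_per_expected_year)
```

Under linear valuation of life-years, the decision then reduces to whether an expected life-year is worth more than that price to you.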
I understand that; I'm still not comfortable enough with the discussion about cryonics to bet on it working.
Do you have a probability in your head about cryonics working or not working, or do you feel uncomfortable assigning a probability?
A little of both, I think.
Basically, it's a whole mess of things to come to terms with. The spouse thing is the biggest.
Well, the biological aspect is "where exactly in the body is 'me' located"?
For example, many people on LW seem to assume that the whole 'me' is in the head; so you can just freeze the head, and feed the rest to the worms. Maybe that's a wrong idea; maybe the 'me' is much more distributed in the body, and the head is merely a coordinating organ, plus a center of a few things that need to work really fast. Maybe if future science revives the head and connects it to some cloned/artificial average human body, we will see the original personality replaced by a more or less average personality; perhaps keeping the memories of the original, but unable to empathise with the hobbies or values of the original.
Whether you need to freeze the whole body or whether the head is enough is a meaningful debate, but it has little to do with why a lot of people oppose cryonics.
At this stage, I can see an argument for freezing the gut, or at least samples of the gut, so as to get the microbiome. Anyone know about reviving frozen microbes?
Many of your comment get downvoted, sometimes heavily. In every open thread you post a lot of questions, some of them completely off topic.
A single good question in the open thread can give you 2-3 karma, but a single bad one can go down to -7 or less. So stop asking so many irrelevant questions and start contributing.
The first association I have with your username is "spams Open Threads with not really interesting questions".
Note that there are two parts in that objection. Posting a boring question in an Open Thread is not a problem per se -- I don't really want to discourage people from doing that. It's just that when I open any Open Thread, and there are at least five boring top-level comments by the same user, instead of simply ignoring them I feel annoyed.
Many of your comments are very general debate-openers, where you expect others to entertain you, but don't provide anything in return. Choosing your recent downvoted question as an example:
First, how do you estimate "threats and your ability to cope"? If you ask other people to provide their data, it would be polite to provide your own.
Second, what is your goal here? Are you just bored and want to start a debate that could entertain you? Or are you thinking about a specific problem you are trying to solve? Then maybe being more specific in the question could help get you a more relevant answer. But the thing is, your not being specific seems like evidence for the "I am just bored and want you to entertain me" variant.
As I said before, I think it would be good if you get in the habit of trying to predict the votes that your posts get beforehand and then not post when you think that a post would produce negative karma.
One way to do this might be: whenever you write a post, keep it in a text file and wait a day. The next day, ask yourself whether there is anything you can do to improve it. If you feel you can improve it, do so. Then estimate a confidence interval for the karma you expect your post to get and take a note of it in a spreadsheet. If you think it will be positive, post your comment.
If you train that skill I would expect you to raise your karma and learn a generally valuable skill.
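The predict-then-check habit described above is essentially calibration tracking. A minimal sketch of what the spreadsheet check could look like, with made-up example entries:

```python
# Each entry: (predicted low, predicted high, actual karma) -- made-up data.
predictions = [
    (0, 3, 2),
    (-2, 1, -4),
    (1, 5, 4),
]

# How often did the actual score fall inside the predicted interval?
hits = sum(low <= actual <= high for low, high, actual in predictions)
print(f"{hits}/{len(predictions)} intervals contained the actual score")
```

If far fewer of your intervals contain the actual score than you expected, widen them; the same habit generalizes well beyond karma prediction.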
If at the end of writing a post you think "I’m not sure where I was going with this anymore.", as in http://lesswrong.com/r/discussion/lw/mzx/some_thoughts_on_decentralised_prediction_markets/ , don't publish the post. If you yourself don't see the point in your writing, it's unlikely that others will consider it valuable.
This is the best advice. The trick to keeping high karma is to cultivate your discernment. Each time you write a post, assess its value, and then delete it if you don't anticipate people appreciating it. View that deletion as a victory equal to the victory of posting a high-karma comment.
I would be concerned that you might end up posting in line with popular opinion rather than with valuable or worthwhile ideas. (If the caveat of posting worthwhile ideas even when they sound unpopular is included, then this is still a good strategy.)
I second this. This is also a very important skill for work and personal emails, and anything having to do with social sites like Facebook.
Thank you for asking. I've been trying to figure out what to say to you, but couldn't figure out quite what the issue is. One possibility in terms of karma is to bundle a number of comments into a single comment, but this doesn't address how the comments could be better.
A possible angle to work on is being more specific. It might be like the difference between a new computer user and a more sophisticated computer user. The new user says "My computer doesn't work!", and there is no way to help that person from a distance until they say what sort of computer it is, what they were trying to do, and some detail about what happened.
Being specific doesn't come naturally to all people on all subjects, but it's a learnable skill, and highly valued here.
A large proportion of your comments seem very distracting and sort of off-topic for Less Wrong.
Thanks. Can I have an example which is either self-evident as distracting and off-topic or explain why it is?
This is a sufficiently evident example.
I looked at a few pages of your comment history to see if I could find a particularly horrible example to base an explanation on (entirelyuseless's link is appropriate), but I was surprised to find that the vast majority of your comments had no karma rather than downvotes.
I'm not sure what you need to do to upgrade or edit out your typical comment. Possibly you could review your upvoted comments to see how they're different from your usual comments.
You use LW as a dumping ground for whatever crosses your mind at the moment, and that is usually random and transient noise.
Thanks. What counts as noise and what as signal to you, and what do you mean by transient?
By "transient" I mean that you mention a topic once and then never show any interest in it again. By "noise" I mean random pieces of text which neither contain useful information nor are interesting.
Usually, your questions feel more suited for a general-purpose forum than the narrowly specialized set of interests commonly discussed here. (We do have "Stupid Questions" and "Instrumental Rationality" threads, but even those follow the same standards for comment quality as the rest of LW.)
Also, posting a dozen questions in succession may give users the impression that you're trying to monopolize the discussion. Even if that's not your intention, I would understand it if some users ended up thinking it is.
I would suggest looking for specialized forums on some of the topics that interest you, and using LW only for topics likely to be of interest to rationalists.
Thanks. Do you have a suggestion for another forum you recommend I move to?
I don't know much about topic-specific forums, but seeing as you like to ask frequent questions, Reddit and Quora come to mind.
As a hard rule, when posting in the open thread, the ratio of your posts to posts by others should always be below 1:3 (others might want to comment and suggest 1:4). You should post fewer than 1 in 4 of the posts in the open thread. Your posts often read like a stream of consciousness (I think you know this already), and you might be better off taking on board the idea of sitting on thoughts for a day or so and re-evaluating them for yourself before posting.
As a side note: presentation of an idea can help the reception. We are still human; and do care for delicate wording on some topics.
Thanks. I do tend to sit on my ideas, or I like to post and update those posts or reply with reflections upon revisitations of those thoughts so that I and others can see how my thinking changes over time.
My ratio is only that high when there is a new open thread. Since I post in blocks by formulating several posts and then posting them when I next get a chance, it may appear early on that my ratio is high. But by the end of the month, I am certainly nowhere near that ratio.
I am continuously trying to improve my presentation. Unfortunately, to date I have received minimal specific feedback on how to improve it. Sometimes I feel the stream-of-consciousness approach better illustrates the way I'm thinking about a certain thing.
It may well do, but illustrating the way you're thinking about something isn't necessarily a good goal here. Why should anyone else care how you happen to be thinking about something?
There may be special cases in which they do. If you are a world-class expert on something it could be very enlightening to see how you think about it. If you are just a world-class thinker generally, it might be fascinating to see how you think about anything. Otherwise, not so much.
It may be worth releasing the posts gradually over the course of the week so as to not make it look like a clump (and again, paying attention to that ratio). I agree that you seem to post in a chunk once a week, but it may serve you better to spread out your posts.
In addition to what everyone else has said, here's a useful article on how to ask smart questions. It's talking about asking technical questions on support forums, but the matter generalises, especially the advice to make your best effort to answer it yourself, before asking it publicly, and when you do, to provide the context and where you have got to already.
Don't buy into these comments too much. I'm glancing through them; they're much too critical. Listen to Nancy if anyone.
MealSquares (the company I'm starting with fellow LW user RomeoStevens) is searching for nutrition experts to join our advisory team. The ideal person has a combination of formally recognized nutrition expertise & also at least a casual interest in things like study methodology and effect sizes (this unfortunately seems to be a rare combination). Advising us will be an opportunity to improve the diets of many people, it should not be much work, you'll get a small stake in our company, and you'll help us earn money for effective giving. Please get in touch with us (ideally using this page) if you or someone you know might be interested!
I'm not the right person at all, but if you ever want an amateur data enthusiast to help clean and present research results, I'd be willing to donate my time. The project is interesting and I would like to start stretching my skill set. I am pretty good at graphing in R and have a solid understanding of probability theory (undergrad level). I also have a good intuition for cleaning data sets.
All of that evaluation is based on what other math nerds have told me, so I understand if you're not interested!
How does your product compare to widely-available meal replacement foods, like, say: http://www.cookietime.co.nz/osm.html ?
MealSquares are nutritionally complete--5 MealSquares contain all the vitamins & minerals you need to survive for a day, in the amounts you need them. In principle you could eat only MealSquares and do quite well, although we don't officially recommend this. It's more about having an easy "default meal" that you can eat with confidence once or twice a day when you don't have something more interesting to do like get dinner with friends.
MealSquares is made from a variety of whole foods, and almost all of the vitamins and minerals are from whole food sources (as opposed to competing products like Soylent that use dubious vitamin powders). Virtually every nutrition expert in the past century has recommended eating a variety of whole foods, and MealSquares stuffs more than 10 whole food ingredients into a single convenient package, including 3 different fruits and 3 different vegetables.
We've put a lot of research into MealSquares to make it better for you than most or all competing products on the market. For example, the first ingredient in Clif Bar is brown rice syrup (basically a glorified form of sugar), and they get their protein from rice and soy (not as bioavailable as other sources). MealSquares contains only a bit of added sugar (dark chocolate chips) and bioavailable protein sources. I'm having a hard time finding solid nutrition info on the One Square Meal website. But you can see that our 400 calorie bar (120 grams) has only 12 grams of sugar, so 10% sugar by weight, whereas their bar is 17.1% sugar by weight.
Most competing meal bars are similar: non-bioavailable protein sources and lots of sugar, generally added sugar. Clif Bar is basically a candy bar disguised to be healthy: it has 23 grams of sugar in a 230 calorie bar, and a Hershey's Milk Chocolate with Almonds bar has 19 grams of sugar in a 210 calorie bar. Most meal bar makers are doing the nutritional equivalent of taking a Hershey bar, adding in some vitamin powders and soy protein isolate, and telling their customers that it's a healthy snack.
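The figures in these comments mix sugar-by-weight and sugar-per-bar. Normalizing to grams of sugar per 100 kcal, using only the numbers quoted above, makes the comparison direct:

```python
# (grams of sugar, calories) per bar, as quoted in the comments above.
bars = {
    "MealSquares": (12, 400),
    "Clif Bar": (23, 230),
    "Hershey's w/ almonds": (19, 210),
}

for name, (sugar_g, kcal) in bars.items():
    print(f"{name}: {100 * sugar_g / kcal:.1f} g sugar per 100 kcal")
# MealSquares: 3.0 g sugar per 100 kcal
# Clif Bar: 10.0 g sugar per 100 kcal
# Hershey's w/ almonds: 9.0 g sugar per 100 kcal
```

On a per-calorie basis, the quoted Clif Bar figure really is close to the chocolate bar, and MealSquares has roughly a third of their sugar density.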
The biggest practical difference between us and One Square Meal is probably that we are available in the US and they are available in New Zealand.
Interesting, thanks for the info. Yes, most meal replacement bars seem to be simply soy-augmented candy bars; however, there is of course a practical reason for this: sweet foods sell better.
It might be worth mentioning on your site that your product is healthier and has less sugar than the alternatives. Another problem is soy protein. Some research hints at soy protein having undesirable hormone-imitating effects (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3074428/), so this could be a selling point as well, as I presume you do not use soy protein.
Do you have any plans for international shipping? (Say, the UK)
We've experimented with doing international shipping. It gets expensive, and it's also a bit of a hassle. It makes more sense if you're doing a group buy (90+ squares). If you really want MealSquares and you're willing to pay a bunch extra for international shipping, contact us and we can work out details. Long term we would love to set up production facilities in foreign countries like a regular multinational, but that won't be for a while.
I realize you are in the startup phase now, and so it probably makes sense for you to put any surplus funds into growth rather than donating now. However, 2 questions:
Once you finish with your growth phase, about what percent of your net proceeds do you expect to donate?
What sorts of EA charities are you interested in?
I've been using MealSquares regularly, without realizing that you guys were LWers or EAs. As such, I've been using mostly Soylent because of the cost difference. (A 400 Calorie MealSquare is ~$3, a 400 Calorie jug of Soylent 2.0 is ~$2.83, 400 Calories worth of unmixed Soylent powder is ~$1.83, and the ingredients for 400 Calories worth of DIY People Chow are ~$0.70. All these are slightly cheaper with a subscription/large purchase.)
I ask, because if you happen to be interested in similar EA causes to me, and expect to eventually donate X% of proceeds, then I should be budgeting my expenses to factor that in. If (100%-X%) * MealSquaresCost < soylentCost, then I would buy much less soylent and much (/many?) more MealSquares. I'd be paying a premium to Soylent in order to add a bit more culinary variety. (Also, I realize this X isn't equal to the expected altruistic return on investment, but that would be even harder to estimate.)
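The inequality above can be solved for the break-even donation fraction X. A quick sketch, taking the per-400-kcal prices from this comment as given (they are the commenter's estimates, not official figures):

```python
# Buy MealSquares when (1 - X) * mealsquares_cost < soylent_cost,
# i.e. when X > 1 - soylent_cost / mealsquares_cost.
mealsquares = 3.00   # $ per 400 kcal, from the comment above
soylent_jug = 2.83
soylent_powder = 1.83

for name, cost in [("jug Soylent 2.0", soylent_jug),
                   ("Soylent powder", soylent_powder)]:
    x_breakeven = 1 - cost / mealsquares
    print(f"vs {name}: break-even X = {100 * x_breakeven:.1f}%")
# vs jug Soylent 2.0: break-even X = 5.7%
# vs Soylent powder: break-even X = 39.0%
```

So against jug Soylent, even a ~6% expected donation rate would tip the comparison; against the powder, X would need to reach ~39%.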
/chokes on his foie gras X-D
Someone gave you a downvote. If it was on my behalf or on the behalf of Soylent, then for the record I thought it was funny. :)
Yep, that's what we've been doing. (We've been providing free MealSquares to some EA organizations, but we haven't been donating a significant portion of our profits directly.)
At least 10%, hopefully significantly more.
We've been trying to focus on growing our business rather than evaluating EA giving opportunities. If we actually do make a lot of money to donate, it will make sense to spend a lot of time thinking about where to give it. And we'll try & focus on identifying opportunities that we have a comparative advantage in (opportunities that are more suited to large donors, like funding a new organization from scratch).
I'm not exactly sure why, but for some reason the idea of people buying our product because we are EAs makes me uncomfortable. I would much rather people buy it because it's good for you, convenient, tasty, etc. As you point out, we are less than 10% more expensive on a per-calorie basis than jug-form Soylent. Would you say that you are not interested in paying more for a healthier product, not convinced that MealSquares is better for you, or something else?
In retrospect, I think that would make me uncomfortable too. In your position, I'd probably feel like I'd delivered an ultimatum to someone else, even if they were the one who actually made the suggestion. On the other hand, maybe a deep feeling of obligation to charity isn't a bad thing?
Based on my (fairly limited) understanding of nutrition, I suspect that any marginal difference between your products is fairly small. I suspect humans get strongly diminishing returns (in the form of increased lifespan) once we have our basic nutritional requirements met in bioavailable forms and without huge amounts of anything harmful. After that, I'd expect the noise to overpower the signal. For example, perhaps unmeasured factors like my mood or eating habits change as a function of my Soylent/MealSquares choice, and I wind up getting fast food more often, or getting less work done or something. Let's say it would take me a month of solid researching and reading nutrition textbooks to make a semi-educated decision about which of two good things is better. Would the added health benefit give me an additional month of life? What if I value my healthy life, here and now, far more than 1 more month spent senile in a nursing home? What if I also apply hyperbolic discounting?
I've probably done more directed health-related reading than most people. (Maybe 24 hours total, over the past year or so?) Enough to minimize the biggest causes of death, and to have some vague idea of what "healthy" might look like. Enough to start fooling around with my own DIY soylent, even if I wouldn't want to eat that every day without more research. If someone who sounds knowledgeable sits down and does an independent review, I'd probably read it and scan the comments for critiques of the review.
I think many people would run the equation the other way -- buying from a company that gives a portion to charity is a way to pressure competing companies to do the same. In other words, MealSquares gives consumers a way to put pressure on the industry. Of course, there are a lot of ways that model could be flawed, but you're hardly abusing the people who make that choice.
Thanks for the explanation. I wrote up some of the details of our approach here. Nutrition is far from being settled, and major discoveries have been made just in the past 50 years. Therefore we take an approach that's fairly conservative, which means (among other things) getting most of our nutrients from whole foods, the way humans have been eating for virtually all of our species' history. We think the burden of proof should be on Soylent to show that their approach is a good one.
If anybody is interested in Moscow postrationality meetup, please comment here or pm me. Thanks!
Do transhumanist types tend to value years of life lived past however long they'd expect to live anyway linearly (i.e., if they'd pay a maximum of exactly n to live an extra year, would they also be willing to pay a maximum of exactly 100n to live 100 extra years)?
If so, the cost effectiveness of cryonics (in terms of added life years lived) could be compared with the cost effectiveness of other implementable health interventions would-be cryonicists are on the fence on. What's the marginal disutility that a given transhumanist might get from forcing themselves to eat a bit more healthily, and how much would that extend their life expectancy by? What about for exercise? Or going to the doctor over that odd itch in their throat that they'd like to ignore just one more day?
The point I'm coming to is that if I want my friends to live longer lives (or have more QALYs, or whatever) in expectation, it's probably better for me to pester them about certain lifestyle choices and preventative interventions than it is to pester them to sign up for cryonics. (By the same token, I seem to recall that Hanson or Yudkowsky once pointed out that cryonics would be expected to add more years to one's life than an open heart surgery (?) relative to the cost, or something like that.)
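One way to make the pestering-vs-cryonics comparison concrete is a toy expected-value calculation. Every number below is invented purely for illustration, and the whole exercise leans on the linear-valuation assumption from the question above:

```python
# Toy comparison: dollars per expected life-year added. All numbers hypothetical.
def cost_per_expected_year(cost, p_success, years_gained):
    """Cost divided by (probability of success * life-years gained on success)."""
    return cost / (p_success * years_gained)

cryonics = cost_per_expected_year(cost=80_000, p_success=0.05, years_gained=1_000)
lifestyle = cost_per_expected_year(cost=10_000, p_success=0.9, years_gained=3)

print(f"cryonics:  ${cryonics:,.0f} per expected year")   # $1,600
print(f"lifestyle: ${lifestyle:,.0f} per expected year")  # $3,704
```

Under these made-up numbers cryonics wins, but the ranking flips easily as the inputs move, which is exactly why the disutility and life-extension estimates asked about above matter.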
The levels of uncertainty make this really hard to work with.
On the one hand, perhaps it works and the person gets to live for billions of deeply fulfilling years until the heat death of the universe, experiencing 10x subjective time, for trillions of QALYs.
Or perhaps they get awoken into a world where life extension is possible but legally limited to a couple hundred years.
Or perhaps they get awoken into a world where they're considered on the same moral level as lab rats and millions of copies of their mind get to suffer in countless interesting ways.
So you end up with a very, very wide range of values, from negative to trillions of QALYs, with no way to assign reasonable probabilities to anything in the range, which makes cost-effectiveness calculations a little less convincing.
I also ask myself these questions and I'm unable to answer them. In the end, I exercise and modify my diet as much as my will allows without causing me too much stress.
As for valuing years of life: if I considered the very best outcome of cryonics (as HungryHobo described) to be certain, then even for very small values of n, cryonics would give me far more utility than exercise. I don't value the later years of my life that low.
Yudkowsky believes that cryonics has a greater than 50% chance of working, and that we will be able to have fun for any amount of time, so for him, the expected value of cryonics is ginormous.
I get quite a bit of disutility from forcing myself to eat a bit more healthily. My food diversity is very poor; if I try to ingest one of many foods I don't like, I will throw up. Attempting to eat those foods anyway causes me great discomfort. So that's not a great way for me to increase overall utility.
On the last paragraph: it appears to me that the two basics, avoiding obesity and not smoking, are the best things you can pester them about. The other lifestyle choices have an expected benefit of only a few years total, if you don't expect any new medical technology to be developed.
Not to be pedantic, but I thought this might be of interest: As I understand it, amount of exercise is a better predictor of lifespan than weight. That is, I would expect someone overweight but who exercises regularly to outlive someone skinny who never exercises.
For example, this life expectancy calculator outputs 70 years for a 5'6", 25-year-old male who weighs 300 lbs but exercises vigorously daily. Changing the weight to 150 lbs and putting in no exercise raised the life expectancy by only 1 year. (A bit less than I was expecting, actually. I was about to significantly update, but then it occurred to me that 300 lbs isn't the definition of obesity. I knew this previously, but apparently hadn't fully internalized it.) EDIT: This calculator may not work well for weights over ~250 lbs. See comment below.
So, my top two recommendations to friends would be quit smoking and exercise regularly. I'd recommend Less Wrongers either do high intensity workouts once a week to minimize the amount of time spent on non-productive activities, or pick a more frequent but lower intensity activity they can read or watch Khan Academy or listen to The Sequences audiobook while doing. I'm not an expert or anything. That's just the impression I've gotten from my own research.
First, there is no reason for you to care about ranking ("better"), you should only care whether something is a good predictor of lifespan. Predictors are not exclusive.
Second, weight effect on lifespan is nonlinear. As far as I remember it's basically a U-shaped curve.
I think it's only U-shaped if you're plotting mortality rather than lifespan on the y-axis...
Fair point.
This seems like good news to me, because I have greater control over my exercise than my weight.
I'm not sure I would trust that calculator. I'm not used to US units, so I put in 84kg (my weight) and it said "with that BMI you can't be alive", so I put in 840, thinking maybe it wanted the first decimal as well. Now I realize it wanted pounds. And for this, 840 lbs, it also output 70 years.
I'm not sure where the calculator gets its data from.
Hmmm, that's worrying. I played with some numbers for a 5'6" male, and got this:
99 lbs yields "Your BMI is way too low to be living"
100lbs yields 74 years
150lbs yields 76 years
200lbs yields 73 years
250lbs yields 69 years
300lbs yields 69 years
500lbs yields 69 years
999lbs yields 69 years
It looks to me like they are pulling data from a table, and the table maxes out under 250lbs?
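The table hypothesis is easy to sanity-check against the standard imperial BMI formula (703 × weight in pounds / height in inches squared). This sketch just prints the BMI the calculator would presumably be looking up for a 5'6" (66 inch) person at each of the weights above:

```python
# Standard imperial BMI formula; 66 inches = 5'6".
def bmi(pounds, inches=66):
    return 703 * pounds / inches ** 2

for w in [99, 100, 150, 200, 250, 300, 500, 999]:
    print(f"{w} lbs -> BMI {bmi(w):.1f}")
```

The life-expectancy outputs stop changing right around 250 lbs, which at this height is a BMI of about 40 (the usual class 3 obesity cutoff), consistent with the calculator's internal table topping out at its highest BMI bucket.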
This week on the slack: http://lesswrong.com/r/discussion/lw/mpq/lesswrong_real_time_chat/
Business and startups - CACE (Changing Anything Changes Everything) with respect to startups and machine learning. prediction.io. Meetings [each person speaks, so the length of the meeting is O(n), and there are n people, so the total meeting cost is O(n^2); on the margin, adding one person to the standup means they listen to n people speak, and n people listen to them speak] and how they cost businesses money. Machine speech ability. Data wrangling is tedious. Data processing resources: data sources, computing power, and blindness. "The whole world is simpler if greed is the primary motivator for everything." "People talk a lot about market failure but government failure is a thing too." VCs and extortionary practices. What is the intention of implementing UBI? (unanswered). "If the game-plan (the economy) changes - i.e. by automation, or basic income - the people with more resources will be able to adapt to it faster..." Wealth distribution.
Debating and rhetoric - we break apart the discussions and arguments from other places... We analysed where the first statement of an argument elsewhere shifted from discussion to disagreement (surprisingly early). A two-pronged approach to offence:
1: clean up the statement so that it is harder to take offence at (steelman); 2: encourage less personal offence in response to the original statement. Both sides are needed to make discussions more productive.
Grice's Maxims of communication - https://en.wikipedia.org/wiki/Cooperative_principle this is also interesting: http://www.smart-words.org/linking-words/transition-words.html
Effective altruism - EA Global have started hosting videos from this year's conference on their site. Duplicates of what is already up. Nothing at all from the Oxford conference yet. http://eaglobal.org/videos
goals of lesswrong - raising the sanity waterline, before we drive humanity extinct. How could the sanity waterline be raised?
human relationships - living in different places and the different cultures of doing so. Driving vs public transport and safety concerns. "Youthful optimism" and its contrasting "aging pessimism" as an exploration-exploitation problem. If we make the rough assumption that both things exist and at some point a youthful optimist transitions into an aging pessimist, what can we learn about that, and how can we benefit from knowing it is a natural process?
linguistics - the phrase "If I understand you correctly, you were saying...", followed by what you are saying next. It slows down a conversation, but keeps it clear.
Open - so many things! IQ / The Sports Gene (re: parable of talents), accountability groups, a big disagreement about this thing: http://lo-tho.blogspot.com/2014/12/epistemic-trust.html , http://www.informationisbeautiful.net/visualizations/rhetological-fallacies/ , QS data, case law and its influence on the law, with an analogy to edge testing in programming. Some discussion of the state of our Facebook feeds after the Paris events, some online courses, fighting death, advice about how to think about motivated cognition (clever-arguer) vs intellectual honesty (by which I just mean the lack of motivated cognition) in the case where one person has a really high probability for X and honestly believes that the argument is very one-sided.
The quotation you’re looking for is from Chesterton’s 1929 book, The Thing, in the chapter entitled, “The Drift from Domesticity”:
Parenting - (uncharacteristically quiet) some talk about video games that we let kids play
philosophy - is there a fundamental difference between the peer relationships among men and the peer relationships among women? I've often heard that men by default are indifferent to each other, while women by default are adversaries.
response: sounds like armchair philosophy. What evolutionary characteristics or behaviours did we or did we not pick up? Even if you found a population where that held true, I doubt it would hold true everywhere. It may have temporarily been true for some people at some point, but evolution is all about gaming the rules: as soon as anything becomes a "rule", in the sense of being a regularly repeated behaviour, some individual who was not winning at the rule would try to generate a different win-condition so that they can continue to win.
In summary: how could we know? And even if it were true for a temporary time and place, I doubt it would last more than a handful of generations. (By "generate" I mean: randomly evolve a different pattern of behaviour.)
"how should we feel, emotionally, about the real world when the real world kind of sucks, and is there anything we should do about it?" [various ideas; not completely answered]
political talk - article: does gifted education exacerbate social inequality? Feminism/anti-feminism, SJWs and memes associated with them, libertarianism,
programming - code academy!
Projects - Vlog plans, Nanowrimo, VR + presence and BDD, virtual assistant project, OKC method,
real life - joylent/soylent, food prep efficiency, vat-chicken-meat, making meat consumption more healthy, applying to universities, Nasa and how they code, feeling safe generally in the world...
rss feed - we have an RSS feed of any post on LW or SSC that notifies of posts if you are in the channel.
resources and links - http://betterexplained.com/ , http://www.mruniversity.com/ , https://www.kickstarter.com/projects/969324769/the-cold-shoulder-pro-calorie-burning-vest?ref=popular , https://class.coursera.org/modelthinking/lecture , https://www.duolingo.com/ , http://www.trutv.com/shows/adam-ruins-everything/index.html , http://diyhpl.us/wiki/ , https://medium.com/the-exofiles/why-do-we-need-friendly-artificial-intelligence-ce20112f532b
Science and technology - The capital costs of a transition to renewables and new energy forms in general are huge, legal issues of cryonics; and owning something when you are dead/not living (waiting for revival). our current legal system is set up so that dead people cannot own anything. DIYbio, autonomous vehicles and failures of them; also failures of non autonomous vehicles, space manufacturing...
welcome - everyone answers the questions: "Would you like to introduce yourself? Where are you from? What do you do with your time? What are you working on? What problems are you trying to solve?"
Feel free to join us. Active meetup time: A time to try to get lots of people online to talk about things is going to be chosen soon, probably a 12 hour window or so.
We have over 130 people who have signed up. Not nearly that many people are active, but each day something interesting happens...
last month on slack: http://lesswrong.com/r/discussion/lw/mwt/open_thread_oct_26_nov_01_2015/cuq5
More data on Kepler star KIC 8462852.
http://www.nasa.gov/feature/jpl/strange-star-likely-swarmed-by-comets
After going back through Spitzer space telescope infrared images, the star did not have an infrared excess as recently as earlier in 2015, meaning that there wasn't some kind of event that generated huge amounts of persistent dust between the last measurements of spectra and the Kepler dataset showing the dips in brightness. This bolsters the 'comet storm / icy body breakup' theory in that that would generate dust close to the star that rapidly goes away and is positioned such that we are primed to see large fractions of it as it is generated close to the star rather than a tiny fraction of dust further away.
(This comes after the Allen Telescope Array, failing to detect anything interesting, put an upper limit on radio radiation coming from the system at 'weaker than 400x the strength we could put out with Arecibo in narrow bands, or 5,000,000x in wide bands', for what that's worth)
Any US lawyers here?
A woman who once worked in a law office told me that clients come and go (she used the word "ephemeral"), so the real allegiance for a lawyer is to other lawyers. Because they will see them again and again.
And Game Theory has something to say about how to treat a person that you are not likely to see again.
Please, folks, do not ask me to justify this "hearsay". I found her credible, so please take this woman's word as gospel, as an axiom, and go from there.
Please confirm, deny, explain or comment on her statement.
TIA.
A "person that you are not likely to see again" is not a complete description of a lawyer's client; it's missing the part where "this person pays me for my services so I need many of this person in order to make a living."
Your post reminds me of something.
If there is a huge disparity of power between the lawyer and you, Game Theory kind of "goes out the window".
Right?
The fact that I have never hired a lawyer may be a factor in my difficulty imagining a scenario where your lawyer turns into your opponent in a power struggle; I see it more likely to happen between you and your opponent's lawyer.
High-profile lawyers with a lot of power don't tend to be hired by ordinary people with little power. In any case, it is in your lawyer's interests that your interests get served. Besides, what you could lose in the worst scenario is that one lawsuit (and possibly money and/or jail time); what your lawyer has to lose in the worst scenario is reputation, future clients, and the legal ability to practice law.
Imagine the following situation: we are having a lawsuit against each other. Let's say it is already obvious for both of our lawyers which side is going to win, but it is not so obvious for us.
The lawyers have an option to do it quickly and relatively cheaply. But they also have an option to charge each of us for extra hours of work, if they tell us it is necessary. Neither option will change the outcome of the lawsuit. But it will change how much money the lawyers get from us.
In such case, it would be rational for the lawyers to cooperate with each other, against our interests.
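As a toy model (all payoff numbers invented), the incentive can be written as a tiny two-player game in which padding the hours only pays if both lawyers do it, since one side can't stretch a case the other side is ending:

```python
# Hypothetical billable hours: the case resolves quickly unless BOTH lawyers
# pad it; the outcome of the lawsuit itself is the same either way.
def hours_billed(mine: str, theirs: str) -> int:
    return 20 if mine == "pad" and theirs == "pad" else 5

# (pad, pad) is a Nash equilibrium: neither lawyer gains by unilaterally
# switching to "quick", even though both clients would prefer that they did.
assert hours_billed("pad", "pad") >= hours_billed("quick", "pad")
assert hours_billed("pad", "pad") >= hours_billed("pad", "quick")
```

Repeated interaction is what lets the two lawyers coordinate on that equilibrium, which connects back to the original comment about where a lawyer's allegiance lies.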
In this example the obvious culprit is the practice of charging by the hour, which I've always found a terrible idea.
I just found out about the "hot hand fallacy fallacy" (Dan Kahan, Andrew Gelman, the Miller & Sanjuro paper) as a type of bias that more numerate people are likely more susceptible to, and for whom it's highly counterintuitive. It's described as a specific failure mode of the intuition used to get rid of the gambler's fallacy.
I understand the correct statement like this. Suppose we’re flipping a fair coin.
* If you're predicting future flips of the coin, the next flip is unaffected by the results of your previous flips, because the flips are independent. So far, so good.
* However, if you're predicting the next flip in a finite series of flips that has already occurred, it's actually more likely that you'll alternate between heads and tails.
The discussion is mostly about whether a streak of a given length will end or continue. This is for length of 1 and probability of 0.5. Another example is
...because heads occurring separately are on average balanced by heads occurring in long sequences; but limiting the length of the series puts a limit on the long sequences.
In other words, in infinite sequences, "heads preceded by heads" and "heads preceded by tails" would be in balance, but if you cut out a finite subsequence, if the first one was "head preceded by head", by cutting out the subsequence you have reclassified it.
Am I correct, or is there more?
I don't think this is correct. See my reply to AstraSequi.
(But I'm not certain I've understood what you're proposing, and if I haven't then of course your analysis and mine could both be right.)
Oops, you're right.
Using the words from my previous comment, now the trick seems to be that 'heads occurring separately are on average balanced by heads occurring in long sequences' -- but according to the rules of the game, you get only one point of reward for a long sequence, while you could get multiple punishments for the separately occurring heads, if they appear in different series. Well, approximately.
I think this is not quite right, and it's not-quite-right in an important way. It really isn't true in any sense that "it's more likely that you'll alternate between heads and tails". This is a Simpson's-paradox-y thing where "the average of the averages doesn't equal the average".
Suppose you flip a coin four times, and you do this 16 times, and happen to get each possible outcome once: TTTT TTTH TTHT TTHH THTT THTH THHT THHH HTTT HTTH HTHT HTHH HHTT HHTH HHHT HHHH.
What's going on here isn't any kind of tendency for heads and tails to alternate. It's that an individual head or tail "counts for more" when the denominator is smaller, i.e., when there are fewer heads in the sample.
My intuition is from the six points in Kahan's post. If the next flip is heads, then the flip after is more likely to be tails, relative to if the next flip is tails. If we have an equal number of heads and tails left, P(HT) > P(HH) for the next two flips. After the first heads, the probability for the next two might not give P(TH) > P(TT), but relative to independence it will be biased in that direction because the first T gets used up.
Is there a mistake? I haven't done any probability in a while.
No, that is not correct. Have a look at my list of 16 length-4 sequences. Exactly half of all flips-after-heads are heads, and the other half tails. Exactly half of all flips-after-tails are heads, and the other half tails.
The result of Miller and Sanjuro is very specifically about "averages of averages". Here's a key quotation:
"The relative frequency [average #1] is expected [average #2] to be ...". M&S are not saying that in finite sequences of trials successes are actually rarer after streaks of success. They're saying that if you compute their frequency separately for each of your finite sequences then the average frequency you'll get will be lower. These are not the same thing. If, e.g., you run a large number of those finite sequences and aggregate the counts of streaks and successes-after-streaks, the effect disappears.
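The averages-of-averages point is easy to verify by enumerating all 16 length-4 sequences. A minimal sketch (Python assumed; the helper name is mine):

```python
from itertools import product
from statistics import mean

def heads_after_heads(seq):
    """Fraction of flips immediately after an H that are also H; None if no flip follows an H."""
    follow = [b for a, b in zip(seq, seq[1:]) if a == "H"]
    return follow.count("H") / len(follow) if follow else None

seqs = ["".join(p) for p in product("HT", repeat=4)]

# Average #2 of average #1: the per-sequence frequencies, averaged across
# sequences (what Miller & Sanjuro describe) -- biased below 1/2.
per_seq = [f for f in (heads_after_heads(s) for s in seqs) if f is not None]
print(mean(per_seq))  # 17/42, about 0.405

# The same counts pooled across all sequences: exactly 1/2, no bias.
follow = [b for s in seqs for a, b in zip(s, s[1:]) if a == "H"]
print(follow.count("H") / len(follow))  # 0.5
```

The two sequences with no H in the first three flips (TTTT, TTTH) contribute nothing to the per-sequence average, and sequences with few heads weigh each head more heavily, which is the small-denominator effect described above.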
What, other than an interest in the commercial success of the car lot business, normative social influence and scrupulosity (all tenuous), stops someone from taking a second ticket (by foot) from a gated car park then immediately paying that one off when leaving, rather than paying the original entry ticket?
The gates usually only give tickets to large metal objects (like cars) because they have sensors in the road underneath the ticket machine. There was a Mr. Bean sketch about exactly this: he used a large metal rubbish bin to get a ticket.
Why don't people steal from other people if nobody is looking? General ethics.
These are what holds society together. These are what society is -- including the bit about commercial success.
But have you tried? The entry barriers only issue a ticket when there's a car in front of them. That's how it works at the car parks I'm familiar with that use that system.
And, to continue the discussion of why your karma is so persistently low, this is something you might have thought of before posting. See also.
'Noisy text analytics'. Has anyone trialed applying those algorithms in their own minds to human conversations or text messaging (say, through Facebook) to filter information in real life? Was it more efficient than your default or non-volitional approach?
Introverts, Extroverts, and Cooperation
As usual, a small hypothetical social science study, but I'm willing to play with the conclusion, which is that extroverts are more likely to cheat unless they're likely to get caught. It wouldn't surprise me if introverts are more likely to internalize social rules (or are people on the autism spectrum getting classified as introverts?).
Could "publicize your charity" be better advice for extroverts and/or majority extrovert subcultures than for introverts?
That's not what your link says. First, there is no cheating involved, we are talking about degrees of cooperation without any deceit. And second, it's not about "getting caught", it's about being exposed to the light of the public opinion which, of course, extroverts are more sensitive to.
I don't typically read a lot of sci-fi, but I did recently read Perfect State, by Brandon Sanderson (because I basically devour everything that guy writes) and I was wondering how it stacks up to typical post-singularity stories.
Has anyone here read it? If so, what did you think of the world that was presented there, would this be a good outcome of a singularity?
For people who haven't read it: I would recommend it only if you are either a sci-fi fan who wants to try something by Brandon Sanderson, or if you have read some Cosmere novels and would like a story that touches on some slightly more complex (and more LW-ish) themes than usual (and don't mind it being a bit darker than usual).
Are there any studies that highlight which biases become stronger when someone "falls in love"? (Assume the love is reciprocated.) I am mainly interested in biases that affect short- and medium-term decisions, since the state of mind in question usually doesn't last long.
One example is the apparent overblown usage of the affect heuristic when judging the goodness of the new partner's perceived characteristics and actions (the halo effect on steroids).
A study that relies only on self-reported claims of 'being in love' might be interesting to read, but such a study would be of higher quality if there was an objective way to take a group of people and sort them into one of two groups: "in love" or "not in love." Based on my own experience and experiences reported by others, I wouldn't reject the notion that such a sorting is possible in principle, although it may be beyond our current technological capability. The pain associated with being suddenly separated from someone that you have 'fallen in love with' can rival physical pain in intensity. What type of instrumentation would we need to detect when a person is primed for such a response? I have no idea.
No, not automatically. An objective measurement can be either worse or better than a self-reported measurement. There's no reason to believe that one is inherently better.
Why do you think "a person being primed for feeling pain when being separated from their new partner" matters here?
Are you thinking about studies that, at the very least, suggest the possibility of such a separation being an option that the subject will experience based on the outcome of some action/decision being studied? :( that's horrible ):
Here is a study finding that "high levels of passionate love of individuals in the early stage of a romantic relationship are associated with reduced cognitive control": free copy / springer link
Also, while I was searching for studies, I found a news article saying this about a study by Robin Dunbar:
"The research, led by Robin Dunbar, head of the Institute of Cognitive and Evolutionary Anthropology at Oxford University, showed that men and women were equally likely to lose their closest friends when they started a new relationship."
More specifically, the study found the average number of lost friends per new relationship was two.
Except there is no publicly published paper anywhere online, despite what the news article says, there are only quotes by Dunbar at the 2010 British Science Festival, which seems a bit suspicious to me, maybe suggesting that the study was retracted later.
Paper in Nature about differences in gene expression correlated with chronological age.
tl;dr -- "We identified 1,497 genes that are differentially expressed with chronological age."
Quickdraw conclusion: this will require A LOT of silver bullets.
I don't think we learn a lot from the number alone. It might be that multiple genes are regulated by the same mechanism, and turning that mechanism down would bring us forward.
Yeah it doesn't say much. For one thing I'd say it's just about all of the genes that are differentially expressed, if you look hard enough. Regardless, that doesn't tell us how many of them really matter with respect to the things we care about, how many causal factors are at work, or how difficult it will be to fix. Doesn't rule out a single silver bullet aging cure (though other things probably do)
Meta-research: Evaluation and Improvement of Research Methods and Practices by John P. A. Ioannidis , Daniele Fanelli, Debbie Drake Dunne, Steven N. Goodman.
Facebook question:
I have different types of 'friends' on Facebook, such as "Family", "Rationalists", "English-speaking", etc. Different materials I post are interesting for different groups. There is an option to select visibility of my posts, but that seems not exactly what I want.
What I'd like is to make my posts so that they are available to everyone, including people I don't know (e.g. if anyone clicks on my name, they will see everything I ever posted), but I don't want all my posts to appear automatically on all of my 'friends' home pages, if they follow me. In other words, I don't want to spam my 'friends'' pages with stuff they are unlikely to read, yet I want anyone to be able to read each of my posts if they wish so.
Is there an option "don't push this automatically to all people, but let them see it if they click on a permalink"?
The way Facebook works, you decide what's available, but each of your friends has to individually decide how much they want to see of you.
The problem is exactly the "how much they want to see of you" part, namely that there is only the one undifferentiated "you" instead of "your rationality posts", "your family photos", "your posts with kitten videos". I don't want to bother my family with rationality posts, and don't want to bother my LW friends with Slovak posts, but as long as I don't want to limit it all to 'friends of my friends' I don't have a choice.
Technically, the solution would be to create multiple accounts for multiple aspects of my life, and have different sets of 'friends' for each. But this is against Facebook's TOS, and is also technically inconvenient.
Actually, maybe I could use the "Pages" feature for this... That allows people to post under multiple identities, so each of them can have different followers. But officially, "Pages are for businesses, brands and organizations". Not sure if "Viliam's comments on politics in Slovakia" qualifies as any of that.
What you seem to be already doing, which is to manually select what group will see each post, seems to be good enough for your purposes. Anyone who actively wants to see more of you can simply go to your profile and see everything.
I don't understand why Facebook messes up the language issue so badly. It seems like the Americans at Facebook headquarters just don't care about bilinguals.
In the news:
Nassim Taleb is an inverse stopped clock.
The main complaint seems to be that Taleb violates an orthodoxy and not that he's factually wrong. On the issues of costs the cited paper says:
There are observed cases where homeopathy did lead to cost savings as Taleb suggests.
Interestingly the cited PLoS paper puts people who don't take homeopathy into the homeopathy group based on the fact that they could get it for free:
What are you working on?
Do you need help?
Are you offering help to people, or just curious about support networks? I'm mainly trying to motivate myself to write up a paper on relatively old data: dealing with my usual problem that I am more excited about newer projects, even though the older ones are not completed. Help would be nice but it's essentially my sole responsibility to prepare a first draft, after which my coauthors will contribute.
What are you working on, and do you need help?