Reminder Memes

-12 Mass_Driver 13 May 2013 11:56PM

EDIT: Apologies to anyone who wasted time with this; I did not intend it to go live. I left a draft post up on a computer that had an automatic system update; it must have posted as the window was terminated.

Comment author: jtolds 11 May 2013 06:59:07AM *  45 points [-]

There's kind of a growing movement around Rob Rhinehart's Soylent thing, dunno if you folks have heard of this.

Basically, he got tired of making food all the time and tried to figure out the minimum set of chemical compounds required for a healthy diet, then posted the full list, and has now been roughly food-free for three months, along with a bunch of other people.

It seems awesome to me and I'm hoping this sort of idea becomes more prevalent. My favorite quote from him I can't now find, but it's something along the lines of "I enjoy going to the movie theater, but I don't particularly feel the need to go three times a day."

There are small reddit community/discussion groups around mixing your own version.

Comment author: Mass_Driver 13 May 2013 04:20:23AM 1 point [-]

Is there more to the Soylent thing than mixing off-the-shelf protein shake powder, olive oil, multivitamin pills, and mineral supplement pills and then eating it?

Comment author: RichardKennaway 07 May 2013 04:22:01PM 3 points [-]

Is there any practical difference between "assuming independent results" and "assuming zero probability for all models which do not generate independent results"?

No.

If not then I think we've just been exposed to people using different terminology.

I think it's more than terminology. And if Mencius can be dismissed as someone who does not really get Bayesian inference, one can surely not say the same of Cosma Shalizi, who has made the same argument somewhere on his blog. (It was a few years ago and I can't easily find a link. It might have been in a technical report or a published paper instead.)

Suppose a Bayesian is trying to estimate the mean of a normal distribution from incoming data. He has a prior distribution over the mean, and each new observation updates that prior. But what if the data are drawn not from a normal distribution, but from the sum of two such distributions with well-separated peaks? The Bayesian (he says) can never discover that. Instead, his estimate of the position of the single peak that he is committed to will wander up and down between the two real peaks, like the Flying Dutchman cursed never to find a port, while the posterior probability of seeing the data that he has seen plummets (on the log-odds scale) towards minus infinity. But he cannot avoid this: no evidence can let him update towards anything his prior gives zero probability to.

What (he says) can save the Bayesian from this fate? Model-checking. Look at the data and see if they are actually consistent with any model in the class you are trying to fit. If not, think of a better model and fit that.
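Shalizi's scenario is simple enough to simulate. Here is a minimal sketch (all numbers are illustrative, not from his post: true peaks at ±5, and a modeller who assumes a single N(mu, 1) with a conjugate normal prior on mu):

```python
import math
import random

random.seed(0)

# True data: an even mixture of N(-5, 1) and N(5, 1) -- two well-separated peaks.
data = [random.gauss(-5, 1) for _ in range(500)] + \
       [random.gauss(5, 1) for _ in range(500)]
random.shuffle(data)

# The modeller's assumption: a single N(mu, 1), prior mu ~ N(0, 10^2).
# Sequential conjugate updating, in precision/mean form.
prec, mu_hat = 0.01, 0.0
for x in data:
    mu_hat = (prec * mu_hat + x) / (prec + 1.0)
    prec += 1.0

def log_normal(x, mu, sigma=1.0):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) \
           - (x - mu) ** 2 / (2 * sigma ** 2)

# Average log predictive density of the fitted single Gaussian vs. the truth.
single = sum(log_normal(x, mu_hat) for x in data) / len(data)
truth = sum(math.log(0.5 * math.exp(log_normal(x, -5)) +
                     0.5 * math.exp(log_normal(x, 5))) for x in data) / len(data)

print(mu_hat)          # settles near 0: between the peaks, where no data live
print(single, truth)   # the single-Gaussian fit is worse by roughly 12 nats per point
```

The posterior mean ends up in the empty valley between the two real peaks, and the log predictive density per observation is catastrophically worse than the truth's, yet nothing in the updating rule ever flags the problem.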

Andrew Gelman says the same; there's a chapter of his book devoted to model checking. And here's a paper by both of them on Bayesian inference and philosophy of science, in which they explicitly describe model-checking as "non-Bayesian checking of Bayesian models". My impression (not being a statistician) is that their view is currently the standard one.

I believe the hard-line Bayesian response to that would be that model checking should itself be a Bayesian process. (I'm distancing myself from this claim, because as a non-statistician, I don't need to have any position on this. I just want to see the position stated here.) The single-peaked prior in Shalizi's story was merely a conditional one: supposing the true distribution to be in that family, the Bayesian estimate does indeed behave in that way. But all we have to do to save the Bayesian from a fate worse than frequentism is to widen the picture. The single-Gaussian prior was merely a subset, worked with for computational convenience; in the true prior it accounts for only some fraction p<1 of the probability mass, the remaining 1-p being assigned to "something else". Then when the data fail to conform to any single Gaussian, the "something else" alternative will eventually overshadow the Gaussian model, and will need to be expanded into more detail.
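That widened picture can be sketched in a few lines. (A caveat on the sketch: the alternative model's parameters are fixed by hand rather than integrated over, so this accumulates a likelihood ratio between point hypotheses, not a full Bayes factor; the peak locations and the 0.99/0.01 prior split are purely illustrative.)

```python
import math
import random

random.seed(1)
data = [random.gauss(-5, 1) for _ in range(200)] + \
       [random.gauss(5, 1) for _ in range(200)]

def log_n(x, mu):
    return -0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2

# True prior: mass p = 0.99 on the single Gaussian N(0, 1), the remaining
# 0.01 on "something else" -- stood in for here by the two-peak mixture.
log_odds = math.log(0.99 / 0.01)  # prior log odds favouring the single Gaussian
for x in data:
    log_odds += log_n(x, 0.0) - math.log(
        0.5 * math.exp(log_n(x, -5.0)) + 0.5 * math.exp(log_n(x, 5.0)))

# The "something else" hypothesis overwhelms the single Gaussian.
# (Cap the exponent to avoid overflow; the answer is indistinguishable from 0.)
p_single = 1.0 / (1.0 + math.exp(min(-log_odds, 700.0)))
print(log_odds)  # hugely negative: roughly -12 nats per observation
print(p_single)  # effectively zero
```

Even a 99-to-1 prior in favour of the single Gaussian is demolished within a handful of observations, which is the hard-liner's point: nothing non-Bayesian was needed, only a prior that did not assign zero to the alternative.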

"But," the soft Bayesians might say, "how do you expand that 'something else' into new models by Bayesian means? You would need a universal prior, a prior whose support includes every possible hypothesis. Where do you get one of those? Solomonoff? Ha! And if what you actually do when your model doesn't fit looks the same as what we do, why pretend it's Bayesian inference?"

I suppose this would be Eliezer's answer to that last question.

I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.

Comment author: Mass_Driver 10 May 2013 09:15:53AM 1 point [-]

Isn't there a very wide middle ground between (1) assigning 100% of your mental probability to a single model, like a normal curve, and (2) assigning your mental probability proportionately across every conceivable model, à la Solomonoff?

I mean the whole approach here sounds more philosophical than practical. If you have any kind of constraint on your computing power, and you are trying to identify a model that most fully and simply explains a set of observed data, then it seems like the obvious way to use your computing power is to put about a quarter of your computing cycles on testing your preferred model, another quarter on testing mild variations on that model, another quarter on all different common distribution curves out of the back of your freshman statistics textbook, and the final quarter on brute-force fitting the data as best you can given that your priors about what kind of model to use for this data seem to be inaccurate.

I can't imagine any human being who is smart enough to run a statistical modeling exercise yet foolish enough to cycle between two peaks forever without ever questioning the assumption of a single peak, nor any human being foolish enough to test every imaginable hypothesis, even including hypotheses that are infinitely more complicated than the data they seek to explain. Why would we program computers (or design algorithms) to be stupider than we are? If you actually want to solve a problem, you try to get the computer to at least model your best cognitive features, if not improve on them. Am I missing something here?

Comment author: Vaniver 01 February 2013 09:27:35PM 17 points [-]

If you're not making quantitative predictions, you're probably doing it wrong.

--Gabe Newell during a talk. The whole talk is worthwhile if you're interested in institutional design or Valve.

Comment author: Mass_Driver 02 February 2013 08:20:12AM 13 points [-]

What's the percent chance that I'm doing it wrong?

Comment author: Mass_Driver 25 January 2013 10:00:41PM 16 points [-]

I once heard a story about the original writer of the Superman Radio Series. He wanted a pay rise, his employers didn't want to give him one. He decided to end the series with Superman trapped at the bottom of a well, tied down with kryptonite and surrounded by a hundred thousand tanks (or something along these lines). It was a cliffhanger. He then made his salary demands. His employers refused and went round every writer in America, but nobody could work out how the original writer was planning to have Superman escape. Eventually the radio guys had to go back to him and meet his wage demands. The first show of the next series began "Having escaped from the well, Superman hurried to..." There's a lesson in there somewhere, but I've no idea what it is.

-http://writebadlywell.blogspot.com/2010/05/write-yourself-into-corner.html

I would argue that the lesson is that when something valuable is at stake, we should focus on the simplest available solutions to the puzzles we face, rather than on ways to demonstrate our intelligence to ourselves or others.

In response to comment by [deleted] on Morality is Awesome
Comment author: Mass_Driver 08 January 2013 12:42:29AM 23 points [-]

Given at least moderate quality, upvotes correlate much more tightly with accessibility / scope of audience than quality of writing. Remember, the article score isn't an average of hundreds of scalar ratings -- it's the sum of thousands of ratings of [-1, 0, +1] -- and the default rating of anyone who doesn't see, doesn't care about, or doesn't understand the thrust of a post is 0. If you get a high score, that says more about how many people bothered to process your post than about how many people thought it was the best post ever.

Comment author: Mass_Driver 25 January 2013 09:58:27PM 2 points [-]

Ironically, this is my most-upvoted comment in several months.

Comment author: deathpigeon 12 January 2013 12:46:20PM 1 point [-]

Those are both good points. I view it as a bug because I feel like too much ethical thought bypasses conscious thought to ill effect. This can range from people not thinking about the ethics of homosexuality because their pastor tells them it's a sin, to not thinking about the ethics of invading a country because people believe it is responsible for an attack of some kind, whether it is or not. However, Nyan_Sandwich's ethics of awesome does appear to bypass such problems, to an extent. It's hardly perfect, but it appears like it would do its job better than many other ethical systems in place today.

I should note that it wasn't ever intended to be a very strong objection. As a matter of fact, the original objection wasn't to the conclusions made, but to the path taken to get to them. If an argument for a conclusion I agree with is faulty, I usually attempt to point out the faults in the argument so that the argument can be better.

Also, I apologize for taking so long to respond. Life (and Minecraft playing) interfered with me checking LessWrong, and I'm not yet used to checking it regularly as I'm new here.

Comment author: Mass_Driver 25 January 2013 09:57:42PM 1 point [-]

OK, so how else might we get people to gate-check the troublesome, philosophical, misleading parts of their moral intuitions that would have fewer undesirable side effects? I tend to agree with you that it's good when people pause to reflect on consequences -- but then when they evaluate those consequences I want them to just consult their gut feeling, as it were. Sooner or later the train of conscious reasoning had better dead-end in an intuitively held preference, or it's spectacularly unlikely to fulfill anyone's intuitively held preferences. (I, of course, intuitively prefer that such preferences be fulfilled.)

How do we prompt that kind of behavior? How can we get people to turn the logical brain on for consequentialism but off for normative ethics?

In response to Morality is Awesome
Comment author: [deleted] 06 January 2013 09:04:34PM 32 points [-]

[META] Why is this so heavily upvoted? Does that indicate actual value to LW, or just a majority of lurking septemberites captivated by cute pixel art?

It was just hacked out in a couple of hours to organize my thoughts for the meetup. It has little justification for anything, very little coherent overarching structure, and it's not even really serious. It's only 90% true, with many bugs. Very much a worse-is-better sort of post.

Now it's promoted with 50-something upvotes. I notice that I would not have predicted this, and feel the need to update.

What should I (we) learn from this?

  • Am I underestimating the value of a given post-idea? (i.e. should we all err on the side of writing more?)

  • Are structure, seriousness, watertightness, and such trumped by fun and clarity? Is it safe to run with this? This could save a lot of work.

  • Are people just really interested in morality, or re-framing of problems, or well-linked integration posts?

Comment author: deathpigeon 06 January 2013 11:23:45PM 0 points [-]

That misses my point. When people say awesome, they don't think back on past consequences or look forward to future ones. People say awesome without thinking about it AT ALL.

Comment author: Mass_Driver 07 January 2013 08:19:12PM 1 point [-]

OK, let's say you're right, and people say "awesome" without thinking at all. I imagine Nyan_Sandwich would view that as a feature of the word, rather than as a bug. The point of using "awesome" in moral discourse is precisely to bypass conscious thought (which a quick review of formal philosophy suggests is highly misleading) and access common-sense intuitions.

I think it's fair to be concerned that people are mistaken about what is awesome, in the sense that (a) they can't accurately predict ex ante what states of the world they will wind up approving of, or in the sense that (b) what you think is awesome significantly diverges from what I (and perhaps from what a supermajority of people) think is awesome, or in the sense that (c) it shouldn't matter what people approve of, because the 'right' thing to do is something else entirely that doesn't depend on what people approve of.

But merely to point out that saying "awesome" involves no conscious thought is not a very strong objection. Why should we always have to use conscious thought when we make moral judgments?

In response to Morality is Awesome
Comment author: deathpigeon 06 January 2013 11:11:46AM 3 points [-]

"Awesome" is implicitly consequentialist.

Not necessarily. If I tell a story of how I went white water rafting, and the person I'm talking to tells me that what I did was "awesome," is he or she really thinking of the consequences of my white water rafting? Probably not. Instead, he or she probably thought very little before declaring the white water rafting awesome. That's an inherent problem with using awesome in morality. Awesome is usually used without thought. If you determine morality based on awesomeness, then you are moralizing without thinking at all, which can often be a problem.

Comment author: Mass_Driver 06 January 2013 08:08:54PM 0 points [-]

To say that something's 'consequentialist' doesn't have to mean that it's literally forward-looking about each item under consideration. Like any other ethical theory, consequentialism can look back at an event and determine whether it was good/awesome. If your going white-water rafting was a good/awesome consequence, then your decision to go white-water rafting and the conditions of the universe that let you do so were good/awesome.
