
People who lie about how much they eat are jerks

-10 Elo 08 August 2016 03:45AM

Originally posted here: http://bearlamp.com.au/people-who-lie-about-how-much-they-eat-are-jerks/


A weight-loss journey is a long and complicated problem-solving adventure.  This is one small factor that adds to the confusion.  You probably have that one friend who appears to eat a whole bunch and yet doesn't put on weight.  If you've ever had that conversation, it goes something like:

"How are you so thin?"
"raah raah metabolism"
"raah raah I don't know why I don't put on weight"
"Take advantage of the habit"

Well, I have had enough.  You're wrong.  You're lying, and you probably don't even know it.  It's not possible (within a reasonable scope of human variation).  Calories and energy are a black-box system: calories in, work out; leftovers become weight gain, a deficit is weight loss.  If a human could eat significantly more calories for the same amount of work and not put on weight, we would be prodding them in a lab for breaking the laws of physics on conservation of mass and conservation of energy.

So this is you: you say you gain weight no matter what you eat, and that's scientifically impossible.  Now what?  You probably don't mean to break the laws of physics (and you probably don't actually break them).  You genuinely, absentmindedly don't notice when you scoff down whole plates of food, or when you skip dinner because you didn't feel like it (absentmindedly balancing the calories automatically).  It's all the same to you, because you naturally do that.

This is very likely about habits, the natural habits that people have.  John, for example, has the habit of getting home, going to the fridge, and making dinner, because it's usually the evening.  Wendy doesn't have the habit; she eats when she is hungry.  Not having a set mealtime sometimes means that she gets tired-hungry: too exhausted to decide what to eat and too hungry to do anything else that would help solve the problem.  But Wendy doesn't get home and automatically cook dinner.  (Good things and bad things come from habits.)

Wendy and John go to a big lunch together.  They both eat 150% of the calories they should be eating for that meal, and they don't mind - enjoying food is part of enjoying life.  It was a fancy restaurant with good food.  Later that evening, when Wendy gets home, she doesn't feel hungry and goes off to read a book or talk to friends on the internet.  Eventually she has a light snack (of 10% of her "dinner" calories) and heads off to bed, totalling 160% of the calories for the two meals.  Effectively under-eating for the day.  John, on the other hand, has his habit of heading home and making dinner.  Even after the big lunch, his automatic systems take over and he makes an ordinary dinner of 100% of his calories for that meal.  John's total for the day is 250% for two meals, or effectively half a meal extra for that day.

If Wendy and John do this every week (assuming the rest of their diets are perfectly balanced), John's weight will have an upwards trajectory and Wendy's a downwards one.  John might ask Wendy how she stays so skinny, and Wendy wouldn't know.  After all, they eat about the same amount when they are together.
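The arithmetic above can be sketched in a few lines. A minimal sketch: the 700 kcal "100% meal" budget is an invented figure for illustration; only the percentages come from the story.

```python
# Sketch of the Wendy/John arithmetic. The 700-kcal "100% meal"
# budget is a made-up figure; only the percentages are from the text.
MEAL = 700  # kcal for a "100%" meal

lunch = 1.5 * MEAL          # both eat 150% at the fancy lunch
wendy_dinner = 0.1 * MEAL   # Wendy's light snack: 10% of a dinner
john_dinner = 1.0 * MEAL    # John's habitual full dinner: 100%

wendy_total = lunch + wendy_dinner   # 160% of one meal
john_total = lunch + john_dinner     # 250% of one meal
budget = 2 * MEAL                    # what two normal meals would be

print(f"Wendy: {wendy_total:.0f} kcal ({wendy_total - budget:+.0f} vs budget)")
print(f"John:  {john_total:.0f} kcal ({john_total - budget:+.0f} vs budget)")
# Wendy ends the day in deficit; John is half a meal (350 kcal) over.
```

Run weekly, that half-meal surplus compounds for John while Wendy's deficit compounds the other way, even though they "eat about the same" whenever they're together.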

No one understands this.  


What can we do about it?

1. We can hire scientists to follow both J and W around for a week and write down every time they eat something. (this is impractical - maybe if we are in an isolated environment like a weekend retreat it would be easier to do this)
2. We can get them to self report via an app (but people are usually pretty bad at that)
3. We can ask more specifically - "what do you eat in a day?" or "what have you eaten since this time yesterday?" - and gather data points to try to build a picture of what a person eats.
4. We can search for people with similar habits around food to us and ask them how they stay healthy.
5. We can look for people with successful habits around food, ask them for advice and then figure out why that advice works, and how to make that advice work for us.

On the noticing level: notice that every single thing you eat adds to your caloric intake, and every single piece of work you do adds to your burn. It's easier to eat another piece of chocolate (5 seconds) than to run another 15 minutes to burn that chocolate off.  If something is not working towards your dieting success, it's probably working against it.


Meta: this took one hour to write.

The Ultimate Testing Grounds

6 Stuart_Armstrong 28 October 2015 05:08PM

A putative new idea for AI control; index here.

EDIT: the mathematics of this approach can be found here.

In a previous post, I described a procedure for querying an AI about whether a certain outcome was desirable for it. When I did so, I didn't realise the power of the idea.

In brief, that idea allows us to query an AI with utility v as to whether an outcome X or an outcome ¬X was the one it most desired in a particular situation. By judicious use of 'false beliefs', the AI would react as if it had miraculous powers to determine X vs ¬X in ways that no-one could ever know about.

Many (including Toby Ord) have suggested variants on the 'honey pot' idea: leave something available to the AI that looks like an oversight but is actually a trap, to see if the AI goes beyond the parameters of what we'd expect - maybe a wire to the internet that the AI is not supposed to use, but that seems to be unobserved.

The problem with these ideas is that the honey pots had to be physically constructed, physically accessible to the AI, and the AI had to be wrong about whether it was a trap or not.

But using the method above, we can go much further. We can test the AI in any scenario which we can imagine, as long as we can precisely physically describe it. We can check what would happen if the AI was given the ability to launch the world's nuclear arsenals, or divert an asteroid towards Earth. We can check what it would design if it was given unrestricted access to a DNA synthesiser. We could see what it would do if it could rewrite the text of a speech an important world leader was due to give. We could see whether it would sabotage the creation of other AIs - and if so, which ones. We could see what it did to a probe humanity planned to launch to the stars, what numbers it would call if it had access to a phone, what it would purchase if it had capital.

And, in all cases, the AI would be behaving as if its actions were undetectable and certain to succeed, while we ensured that they were (almost certainly) detectable and certain to fail. There is no "almost" in the "certain to fail" condition, because the AI is acting as if it had miraculous powers that it doesn't have.

Now, we can't use testing as our sole method of AI control (for reasons similar to here), but it does allow us some pretty effective ways of catching bad AIs.

[LINK] The future of the Turing test and intelligence measures

1 Stuart_Armstrong 17 June 2014 11:12AM

Following the recent hype over a machine's supposed passing of the Turing test, Adam Ford interviews Stuart Armstrong (me) of the FHI about the meaning of the test, how we can expect a future of many upcoming "Turing test passings" according to varying criteria of strictness, and how and why we test for intelligence in the first place.

I predict that we are entering an era where "X passed the Turing test" will be a more and more common announcement, followed by long discussions as to whether that was a true pass or not.

Come up with better Turing Tests

13 Stuart_Armstrong 10 June 2014 10:47AM

So the Turing test has been "passed", and the general consensus is that this was achieved in a very unimpressive way - the 13-year-old Ukrainian persona was a cheat, the judges were incompetent, etc. These criticisms are all true, though the test did pass Turing's original criteria - and there are far more people willing to be dismissive of those criteria in retrospect than there were in advance. It happened about 14 years later than Turing had anticipated, which makes it quite a good prediction for 1950 (in my personal view, Turing made two mistakes that compensated for each other: the "average interrogator" was a much lower bar than he thought, but progress on the subject was much slower than he thought).

But anyway, the main goal now, as suggested by Toby Ord and others, is to design a better Turing test, something that can give AI designers something to aim at, and that would be a meaningful test of abilities. The aim is to ensure that if a program passes these new tests, we won't be dismissive of how it was achieved.

Here are a few suggestions I've heard about or thought about recently; can people suggest more and better ideas?

  1. Use proper control groups. 30% of judges thinking that a program is human is meaningless unless the judges also compare with actual humans. Pair up a human subject with a program, and the role of the judge is to establish which of the two subjects is the human and which is not.
  2. Toss out the persona tricks - no 13-year-olds, nobody with poor English skills. It was informative about human psychology that these tricks worked, but we shouldn't allow them in future. All human subjects should have adequate English and typing skills.
  3. On that subject, make sure the judges and subjects are properly motivated (financial rewards, prizes, prestige...) to detect, or to appear, human. We should also brief them that our usual conversational approach - establishing which kind of human we are dealing with - is not useful for determining whether we are dealing with a human at all.
  4. Use only elite judges. For instance, if Scott Aaronson can't figure it out, the program must have some competence.
  5. Make a collection of generally applicable approaches (such as the Winograd Schemas) available to the judges, while emphasising they will have to come up with their own exact sentences, since anything online could have been used to optimise the program already.
  6. My favourite approach is to test the program on a task it was not optimised for. A cheap and easy way of doing that would be to test it on novel ASCII art.
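Suggestion 1 above can be sketched as a small simulation: each judge faces a (human, program) pair and must pick the human, so a program "passes" when judges do no better than the 50% chance baseline. This is my illustration, not part of the post, and the judge-accuracy figures are arbitrary assumptions.

```python
# Paired forced-choice judging: with a control human present, the
# pass criterion becomes "judges score no better than chance (50%)".
# The judge_accuracy parameters below are made-up assumptions.
import random

def observed_accuracy(judge_accuracy: float, n_judges: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    correct = sum(rng.random() < judge_accuracy for _ in range(n_judges))
    return correct / n_judges

weak_program = observed_accuracy(judge_accuracy=0.7, n_judges=10_000)
strong_program = observed_accuracy(judge_accuracy=0.5, n_judges=10_000)
print(f"judges vs weak program:   {weak_program:.2f}")    # well above chance
print(f"judges vs strong program: {strong_program:.2f}")  # near 0.50: indistinguishable
```

The design choice here is that "30% of judges fooled" is meaningless on its own; what matters is whether judges beat the 50% baseline by more than sampling noise allows.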

My current method would be the lazy one of simply typing this, then waiting, arms folded:

"If you want to prove you're human, simply do nothing for 4 minutes, then re-type this sentence I've just written here, skipping one word out of 2".
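As a rough sketch of the reply the lazy method expects - interpreting "skipping one word out of 2" as keeping every other word, which is my reading, not something the post pins down:

```python
# Hypothetical checker for the lazy test: take the challenge sentence
# and keep every other word (dropping each second word). The exact
# rule is an assumption; the post leaves it ambiguous.
def expected_reply(sentence: str) -> str:
    words = sentence.split()
    return " ".join(words[::2])  # keep the 1st, 3rd, 5th... word

print(expected_reply("do nothing for 4 minutes then re-type this sentence"))
# → "do for minutes re-type sentence"
```

Of course, the real bite of the test is the four minutes of doing nothing and the instruction-following, which no string transform captures.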

 

Other minds and bats: the vampire Turing test

1 Stuart_Armstrong 25 March 2014 01:36PM

Thoughts inspired by Yvain's philosophical role-playing post.

Thomas Nagel produced a famous philosophical thought experiment, "What Is It Like to Be a Bat?". In it, he argued that the reductionist understanding of consciousness is insufficient, since there exist beings - bats - that have conscious experiences that humans cannot understand. We cannot know what "it is like to be a bat", and looking reductively at bat brains, bat neurones, or the laws of physics cannot (allegedly) grant us any understanding of this subjective experience. Therefore there remains an unavoidable subjective component to the problem of consciousness.

I won't address this issue directly (see for instance this, on the closely related subject of qualia), but instead look at the question: suppose someone told us that they actually knew what it was like to be a bat (as well as what it was like to be a human). Call such a being a vampire, for obvious reasons. So if someone claimed they were a vampire, how would we test this?

We can't simply ask them to describe what it's like to be a bat - it's perfectly possible they know what it's like to be a bat, but cannot describe it in human terms (just as we often fail to describe certain types of experiences to those who haven't experienced them). Could we run a sort of Turing test - maybe implant the putative vampire's brain into a bat body, and see how bat-like it behaved? But, as Nagel pointed out, this could be a test of whether they know how to behave like a bat behaves, not whether they know what it's like to be a bat.

I posit that one possible solution is to use the approach laid out in my post "the flawed Turing test". We need to pay attention to how the "vampire" got their knowledge. If the vampire is a renowned expert on bat behaviour and social interactions, who is also interested in sonar and paragliding, then their functioning as a bat is weak evidence that they actually know what it is like to be a bat. But suppose instead that their knowledge comes from another source - maybe the vampire is a renowned brain expert, who has grappled with the philosophy of mind and spent many years examining the functioning of bat brains. But, crucially, they have never seen a full living bat in the wild or in the lab, they've never watched a nature documentary on bats, they've never even seen a photo of a bat. In that case, if they behave correctly when transplanted into a bat body, it's strong evidence that they actually understand what it's like to be a bat.

Similarly, maybe they got their knowledge after a long conversation with another "vampire". We have the recording of the conversation, and it's all about mental states, imagery, emotional descriptions and visualisation exercises - but not about physical descriptions or bat behaviour. In that case, as above, if they can function successfully as a bat, this is evidence of them really "getting it".

In summary, we can say "that person likely knows what it is like to be a bat" if "knowing what it's like to be a bat" is the most likely explanation for what we see. If they behave exactly like a bat when in a bat body, and we know they have no prior experience that teaches them how to behave like a bat (but a lot about the bat's mental states), then we can conclude that it's likely that they genuinely know what it's like to be a bat, and are implementing this knowledge, rather than imitating behaviour.

Is Politics the Mindkiller? An Inconclusive Test

14 OrphanWilde 27 July 2012 05:45PM

Or is the convention against discussing politics here silly?

I propose a test.  I'm going to try to lay down some rules on voting on comments for the test here (not that I can force anybody to abide by them):

1.) Top-level comments should introduce arguments (or ridicule me and/or this test); responses should be responses to those arguments.

2.) Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised.  That means judging whether it's a good argument against the argument it is responding to, not whether there's a good or obvious counterargument to it; if you have a good counterargument, raise it.  If it's a convincing argument, and the counterargument is also convincing, upvote both.  If both arguments are unconvincing, downvote both.

3.) Try not to downvote particular comments excessively, if they're legitimate lines of argument.  A faulty line of argument provides opportunity for rebuttal, and so for our test has value even then; that is, I want some faulty lines of argument here.  If you disagree, please downvote me, instead of the faulty comments, because this post is what you want less of, not those comments.  This necessarily implies, for balance, that we not excessively upvote comments.  I'd suggest fairly arbitrary limits of 3/-3?

Edit: 4.) A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.  (My apologies about missing this, folks.)

I'm going to try really hard not to get personally involved, except to lay down a leading comment posing an argument against abortion, a position I don't hold, for the record.  The core of the argument isn't disingenuous; I hold that the basic argument is true, it just doesn't lead me to oppose abortion, because I don't hold the moral axiom by which the basic argument gets extended into an argument against abortion.  I'm playing the devil's advocate to keep myself from getting sucked into the argument while providing an initial point of discussion.

Which leads me to the next point: If you see a hole in an argument, even if it's an argument for a perspective you agree with, poke through it.  The goal is to see whether we can have a constructive political argument here.

The fact that this is a test, and known to be a test, means this isn't a blind study.  Uh, try to act as if you're not being tested?

After it's gone on a little while, if this post hasn't been hopelessly downvoted and ridiculed (and thus the premise and test discarded as undesirable to begin with), we can put up a poll to see whether people found the political debates helpful, not helpful, and so on.

The Criminal Stupidity of Intelligent People

-14 fare 27 July 2012 04:08AM

What always fascinates me when I meet a group of very intelligent people is the very elaborate bullshit that they believe in. The naive theory of intelligence I first posited when I was a kid was that intelligence is a tool to avoid false beliefs and find the truth. Surrounded by mediocre minds who held obviously absurd beliefs not only without the ability to coherently argue why they held these beliefs, but without the ability even to understand basic arguments about them, I believed as a child that the vast amount of superstition and false belief in the world was due to people both being stupid and following the authority of insufficiently intelligent teachers and leaders. More intelligent people, and people following more intelligent authorities, would thus automatically hold better beliefs and avoid disproven superstitions. However, as a grown-up, I got the opportunity to actually meet and mingle with a whole lot of intelligent people, including many whom I readily admit are vastly more intelligent than I am. And then I had to find that my naive theory of intelligence didn't hold water: intelligent people were just as prone as less intelligent people to believing in obviously absurd superstitions. Only their superstitions would be much more complex, elaborate, rich, and far-reaching than an inferior mind's superstitions.

For instance, I remember a ride with an extremely intelligent and interesting man (RIP Bob Desmarets); he was describing his current pursuit, which struck me as a brilliant mathematical mind's version of mysticism: the difference was that instead of marveling at some trivial picture of an incarnate god like some lesser minds might have done, he was seeking some Ultimate Answer to the Universe in the branching structures of ever more complex algebras of numbers, real numbers, complex numbers, quaternions, octonions, and beyond, in ever higher dimensions (notably in relation to super-string theories). I have no doubt that there is something deep, and probably enlightening and even useful, in such theories, and I readily disqualify myself from judging the contributions that my friend made to the topic from a technical point of view; no doubt they were brilliant in one way or another. Yet the way he was talking about this topic immediately triggered the "crackpot" flag; he was looking there for much more than could possibly be found, and anyone (like me) capable of acknowledging being too stupid to fathom the Full Glory of these number structures yet able to find some meaning in life could have told him that no, this topic doesn't hold the key to The Ultimate Source of All Meaning in Life. Bob's intellectual quest, as exaggeratedly exalted as it might have been, and as interesting as it was to his own exceptional mind, was on the grand scale of things but a modestly useful research venue at best, and an inoffensive pastime at worst. Perhaps Bob could conceivably have used his vast intellect towards pursuits more useful to you and me; but we didn't own his mind, and we have no claims to lay on the wonders he could have created but failed to by putting his mind into one quest rather than another. First, Do No Harm. Bob didn't harm anyone, and his ideas certainly contained no hint of any harm to be done to anyone.

Unhappily, that is not always the case with every intelligent man's fantasies. Let's consider a discussion I was having recently, which prompted this article. Last week, I joined a dinner-discussion with a lesswrong meetup group: radical believers in rationality and its power to improve life in general and one's own life in particular. As you can imagine, the attendance was largely, though not exclusively, composed of male computer geeks. But then again, any club that accepts me as a member will probably be biased that way: birds of a feather flock together. No doubt there are plenty of meetup groups with the opposite bias, gathering desperately non-geeky females to the almost total exclusion of males. Anyway, the theme of the dinner was "optimal philanthropy", or how to give time and money to charities in a way that maximizes the positive impact of your giving. So far, so good.

But then, I found myself in a most disturbing private side conversation with the organizer, Jeff Kaufman (a colleague, I later found out), someone I strongly suspect of being in many ways saner and more intelligent than I am. While discussing utilitarian ways of evaluating charitable action, he at some point mentioned a quite intelligent acquaintance of his who believed that morality was about minimizing the suffering of living beings; from there, that acquaintance logically concluded that wiping out all life on earth with sufficient nuclear bombs (or with grey goo) in a surprise simultaneous attack would be the best possible way to optimize the world, though one would have to make triple sure of involving enough destructive power that not one single strand of life should survive, or else the suffering would go on and the destruction would have been just gratuitous suffering. We all seemed to agree that this was an absurd and criminal idea, and that we should be glad the guy, brilliant as he may be, doesn't remotely have the ability to implement his crazy scheme; we shuddered, though, at the idea of a future super-human AI having this ability and being convinced of such theories.

That was not the disturbing part though. What tipped me off was when Jeff, taking the "opposite" stance of "happiness maximization" to the discussed acquaintance's "suffering minimization", seriously defended the concept of wireheading as a way that happiness may be maximized in the future: putting humans into vats where the pleasure centers of their brains will be constantly stimulated, possibly using force. Or perhaps, instead of humans, using rats, or ants, or some brain cell cultures, or perhaps nano-electronic simulations of such electro-chemical stimulations; in the latter cases, biological humans, being less efficient forms of happiness substrate, would be done away with, or at least not renewed, as embodiments of the Holy Happiness to be maximized. He even wrote at least two blog posts on this theme: hedonic vs preference utilitarianism in the Context of Wireheading, and Value of a Computational Process. In the former, he admits to some doubts, but concludes that "the ways a value system grounded on happiness differ from my intuitions are problems with my intuitions".

I expect that most people would, and rightfully so, find Jeff's ideas, as well as his acquaintance's ideas, to be ridiculous and absurd on their face; they would judge any attempt to use force to implement them as criminal, and they would consider their fantasized implementation to be the worst of possible mass murders. Of course, I also expect that most people would be incapable of arguing their case rationally against Jeff, who is much more intelligent, educated and knowledgeable in these issues than they are. And yet, though most of them would have to admit their lack of understanding and their absence of a rational response to his arguments, they'd be completely right in rejecting his conclusion and in refusing to hear his arguments, for he is indeed the sorely mistaken one, despite his vast intellectual advantages.

I wilfully defer any detailed rational refutation of Jeff's idea to some future article (can you write a valuable one without reading mine?). In this post, I rather want to address the meta-point of how to address the seemingly crazy ideas of our intellectual superiors. First, I will invoke the "conservative" principle (as I'll call it), well defended by Hayek (who is not a conservative): we must often reject the well-argued ideas of intelligent people, sometimes more intelligent than we are, sometimes without giving them a detailed hearing, and instead stand by our intuitions, traditions and secular rules, which are the stable fruit of millennia of evolution. We should not lightly reject those rules, certainly not without a clear, testable understanding of why they were valid where they are known to have worked, and why they would cease to be in another context. Second, we should not hesitate to use a proxy in an eristic argument: if we are to bow to the superior intellect of our betters, it should not be without having pitted said presumed intellects against each other in a fair debate, to find out whether there is indeed a better whose superior arguments can convince the others or reveal their error. Last but not least, beyond mere conservatism or debate, mine is the Libertarian point: there is Universal Law, which everyone must respect, whereby peace between humans is possible inasmuch, and only inasmuch, as they don't initiate violence against other persons and their property. And as I have argued in a previous essay (hardscrapple), this generalizes to maintaining peace between sentient beings of all levels of intelligence, including any future AI that Jeff may be prone to consider. Whatever one's prevailing or dissenting opinions, the initiation of force is never to be allowed as a means to further any ends.
Rather than doubt his intuition, Jeff should have been tipped off that his theory was wrong, and had been taken out of its context, by the very fact that it advocates or condones massive violations of this Universal Law. Criminal urges, mass-criminal at that, give off a strong stench that should alert anyone that some ideas have gone astray, even when it is not immediately obvious where exactly they parted from the path of sanity.

Now, you might ask, it is all well and good to poke fun at the crazy ideas that some otherwise intelligent people may hold; it may even allow one to wallow in a somewhat justified sense of intellectual superiority over people who otherwise are actually and objectively one's intellectual superiors. But is there a deeper point? Does it matter what crazy ideas intellectuals hold, whether inoffensive or criminal? Sadly, it does. As John McCarthy put it, "Soccer riots kill at most tens. Intellectuals' ideological riots sometimes kill millions." Jeff's particular crazy idea may be mostly harmless: the criminal raptures of the overintelligent nerd, being so elaborate as to be unfathomable to 99.9% of the population, are unlikely to ever spread to enough of the power elite to be implemented. That is, unless by some exceptional circumstance there is a short and brutal transition to power by some overfriendly AI programmed to follow such an idea. On the other hand, the criminal raptures of a majority of the more mediocre intellectual elite, when they further possess simple variants that can intoxicate the ignorant and stupid masses, are not just theoretically able to lead to mass murder, but have historically been the source of all large-scale mass murders so far; and these mass murders can be counted in hundreds of millions, in the 20th century alone, for Socialism alone. Nationalism, Islamism and Social-democracy (the attenuated strand of socialism that now reigns in Western "Democracies") count their victims in millions only. And every time, the most well-meaning of intellectuals built and spread the ideologies behind these mass murders. A little initial conceptual mistake, properly amplified, can do that.

And so I am reminded of the meetings of some communist cells that I attended out of curiosity when I was in high school. Indeed, Trotskyites very openly recruit in "good" French high schools. It was amazing the kind of nonsensical crap that these obviously above-average adolescents could repeat. "The morale of the workers is low." Whoa. Or "the petite-bourgeoisie is plotting" this or that. Apparently, grossly cut social classes spanning millions of individuals act as one man, either afflicted with depression or making Machiavellian plans. Not that any of them knew much of either salaried workers or entrepreneurs except through one-sided socialist literature. If you think that the nonsense of the intellectual elite is inoffensive, consider what happens when some of them actually act on those nonsensical beliefs: you get terrorists who kill tens of people; when they lead ignorant masses, they end up killing millions of people in extermination camps or plain massacres. And when they take control of entire universities, and train generations of scholars, who teach generations of bureaucrats, politicians, journalists, then you suddenly find that all politicians agree on slowly implementing the same totalitarian agenda, one way or another.

If you think that control of universities by left-wing ideologists is just a French thing, consider how, for instance, America just elected a president whose mentor and ghostwriter was the chief of a terrorist group made of Ivy League-educated intellectuals, whose overriding concern about the country they claimed to rule was how to slaughter ten percent of its population in concentration camps. And then consider that the policies of this president's "right wing" opponent are indistinguishable from the policies of said president. The violent revolution has given way to the slow replacement of the elite, towards the same totalitarian ideals, coming to you slowly but relentlessly rather than through a single mass criminal event. Welcome to a world where the crazy ideas of intelligent people are imposed by force, cunning and superior organization upon a mass of less intelligent yet less crazy people.

Ideas have consequences. That's why everyone Needs Philosophy.

Crossposted from my livejournal: http://fare.livejournal.com/168376.html

Low legibility of Cognitive Reflection Test dramatically improves performance?

12 uzalud 08 November 2011 09:46AM

I'm reading Kahneman's Thinking, Fast and Slow and I've stopped on this:

90% of the students who saw the CRT in normal font made at least one mistake in the test, but the proportion dropped to 35% when the font was barely legible. You read this correctly: performance was better with the bad font.

This seems like an important finding, but I can't find references in the book (Kindle) or on the Web. Does anybody know any real evidence for this claim? EDIT: I found the original paper

Do you think that people could behave rationally with such a simple intervention?

simple intro to CRT

EDIT: fixed spelling in title

A fun estimation test, is it useful?

5 mwengler 20 December 2010 09:09PM

So you think it's important to be able to estimate how well you are estimating something?  Here is a fun test that has been given to plenty of other people.  

I highly recommend you take the test before reading any more.  

http://www.codinghorror.com/blog/2006/06/how-good-an-estimator-are-you.html

 

The discussion of this test on the blog it is quoted from is quite interesting, but I recommend reading it after taking the test.  Similarly, one might anticipate there will be interesting discussion here on the test and whether it means what we want it to mean, and so on.  

My great apologies if this has been posted before.  I did my best with Google trying to find any trace of this test, but if this has already been done, please let me know, and ideally, let me know how I can remove my own duplicate post.

 

PS: The Southern California meetup 19 Dec 2010 was fantastic, thanks so much JenniferRM for setting it up.  This post on my part is an indirect result of what we discussed and a fun game we played while we were there.