You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Stupid Questions December 2014

16 Post author: Gondolinian 08 December 2014 03:39PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

Comments (341)

Comment author: CBHacking 08 December 2014 06:24:50PM 6 points [-]

Can anybody give me a good description of the term "metaphysical" or "metaphysics" in a way that is likely to stick in my head and be applicable to future contemplations and conversations? I have tried to read a few definitions and descriptions, but I've never been able to really grok any of them, and even when I thought I had a working definition it slipped out of my head when I tried to use it later. Right now its default function in my brain is, when uttered, to raise a flag that signifies "I can't tell if this person is speaking at a level significantly above my comprehension or is just spouting bullshit, but either way I'm not likely to make sense of what they're saying", and it therefore tends to just kind of kill the mental process that was trying to follow what somebody was saying to me / what I was reading.

Given how often it comes up, and often from people I respect, I'm pretty sure that's not the correct behavior, so I figured it was worth asking here. In case it wasn't obvious, I have virtually no background in philosophy (though I've been looking to change that).

Comment author: Anatoly_Vorobey 08 December 2014 06:39:43PM *  13 points [-]

Metaphysics: what's out there? Epistemology: how do I learn about it? Ethics: what should I do with it?

Basically, think of any questions that are of the form "what's there in the world", "what is the world made of", and now take away actual science. What's left is metaphysics. "Is the world real or a figment of my imagination?", "is there such a thing as a soul?", "is there such a thing as the color blue, as opposed to objects that are blue or not blue?", "is there life after death?", "are there higher beings?", "can infinity exist?", etc. etc.

Note that "metaphysical" also tends to be used as a feel-good word, meaning something like "nobly philosophical, concerned with questions of a higher nature than the everyday and the mundane".

Comment author: polymathwannabe 08 December 2014 06:40:38PM 1 point [-]

Metaphysics: what's out there?

Isn't that ontology? What's the difference?

Comment author: ChristianKl 08 December 2014 06:45:31PM 1 point [-]

Ontology is a subdiscipline of metaphysics.

"Is the many-worlds hypothesis true?" might be a metaphysical question that is not directly ontology.

Comment author: Anatoly_Vorobey 08 December 2014 06:54:11PM 12 points [-]

"Ontology" is firmly dedicated to "exist or doesn't exist". Metaphysics is more broadly "what's the world like?" and includes ontology as a central subfield.

Whether there is free will is a metaphysical question, but not, I think, an ontological one (at least not necessarily). "Free will" is not a thing or a category or a property, it's a claim that in some broad aspects the world is like this and not like that.

Whether such things as desires or intentions exist or are made-up fictions is an ontological question.

Comment author: Gvaerg 08 December 2014 07:05:39PM *  1 point [-]

Thanks! I've seen many times the statement that ontology is strictly included in metaphysics, but this is the first time I've seen an example of something that's in the set-theoretic difference.

Comment author: CBHacking 08 December 2014 09:54:15PM 0 points [-]

Thanks. That's still not even a little intuitive to me, but it's a Monday and I had to be up absurdly early, so if it makes any sense to me right now (and it does), I have hope that I'll be able to internalize it even if I always need to think about it a bit. We'll see, probably no sooner than tomorrow though (sleeeeeeeeeep...).

I suspect that part of my problem is that I keep trying to decompose "metaphysics" into "physics about/describing/in the area of physics" and my brain helpfully points out that not only is it questionable whether that makes any sense to begin with, it almost never makes any sense whatsoever in context. If I just need to install a linguistic override for that word, I can do it, but I want to know what the override is supposed to be before I go to the effort.

The feel-good-word meaning seems likely to be a close relative of the flag-statement-as-bullshit meaning. That feels like a mental trap, though. The problem is, at least half the "concrete" examples that I've seen in this thread also seem likely to have little to no utility (certainly not enough to justify thinking about it for any length of time). Epistemology and ethics have obvious value, but it seems metaphysics comes up all the time in philosophical discussion too.

Comment author: [deleted] 08 December 2014 11:23:28PM 0 points [-]

A confusion of mine: how is epistemology a separate thing? Or is it just a flag for "we're going to go meta-level", applied to some particular topic?

E.g. I read a bit of Kant about experience, which I suppose is metaphysics (right?) but it seems like if he's making any positive claim, the debate about the claim is going to be about the arguments for the claim, which is settled via epistemology?

Comment author: Anatoly_Vorobey 09 December 2014 07:47:35AM 5 points [-]

Hmm, I would disagree. If you have a metaphysical claim, then arguments for or against this claim are not normally epistemological; they're just arguments.

Think of epistemology as "being meta about knowledge, all the time, and nothing else".

What does it mean to know something? How can we know something? What's the difference between "knowing" a definition and "knowing" a theorem? Are there statements such that to know them true, you need no input from the outside world at all? (Kant's analytic vs synthetic distinction). Is 2+2=4 one such? If you know something is true, but it turns out later it was false, did you actually "know" it? (many millions of words have been written on this question alone).

Now, take some metaphysical claim, and let's take an especially grand one, say "God is infinite and omnipresent" or something. You could argue for or against that claim without ever going into epistemology. You could maybe argue that the idea of God as absolute perfection more or less requires Him to be present everywhere, in the smallest atom and the remotest star, at all times because otherwise it would be short of perfection, or something like this. Or you could say that if God is present everywhere, that's the same as if He was present nowhere, because presence manifests by the difference between presence and absence.

But of course if you are a modern person and especially one inclined to scientific thinking, you would likely respond to all this "Hey, what does it even mean to say all this or for me to argue this? How would I know if God is omnipresent or not omnipresent, what would change in the world for me to perceive it? Without some sort of epistemological underpinning to this claim, what's the difference between it and a string of empty words?"

And then you would be proceeding in the tradition started by Descartes, who arguably moved the center of philosophical thinking from metaphysics to epistemology in what's called the "epistemological turn", later boosted in the 20th century by the "linguistic turn" (attributed among others to Wittgenstein).

Metaphysics: X, amirite? Epistemological turn: What does it even mean to know X? Linguistic turn: What does it even mean to say X?

Comment author: gjm 08 December 2014 07:56:56PM 10 points [-]

This is in no way an answer to your actual question (Anatoly's is good) but it might amuse you.

"Meta" in Greek means something like "after" (but also "beside", "among", and various other things). So there is a

Common misapprehension: metaphysics is so called because it goes beyond physics -- it's more abstract, more subtle, more elevated, more fundamental, etc.

This turns out not to be quite where the word comes from, so there is a

Common response": actually, it's all because Aristotle wrote a book called "Physics" and another, for which he left no title, that was commonly shelved after the "Physics" -- *meta ta Phusika -- and was commonly called the "Metaphysics". And the topics treated in that book came to be called by that name. So the "meta" in the name really has nothing at all to do with the relationship between the subjects.

But actually it's a bit more complicated than that; here's the

Truth (so far as I understand it): indeed Aristotle wrote those books, and indeed the "Metaphysics" is concerned with, well, metaphysics, and indeed the "Metaphysics" is called that because it comes "after the Physics". But the earliest sources we have suggest that the reason why the Metaphysics came after the Physics is that Aristotle thought it was important for physics to be taught first. So actually it's not far off to say that metaphysics is so called because it goes beyond physics, at least in the sense of being a more advanced topic (in Aristotle's time).

Comment author: TheOtherDave 08 December 2014 09:01:07PM *  1 point [-]

In my experience people use "metaphysics" to refer to philosophical exploration of what kinds of things exist and what the nature, behavior, etc. of those things is.

This is usually treated as distinct from scientific/experimental exploration of what kinds of things exist and what the nature, behavior, etc. of those things is, although those lines are blurry. So, for example, when Yudkowsky cites Barbour discussing the configuration spaces underlying experienced reality, there will be some disagreement/confusion about whether this is a conversation about physics or metaphysics, and it's not clear that there's a fact of the matter.

This is also usually treated as distinct from exploration of objects and experiences that present themselves to our senses and our intuitive reasoning... e.g. shoes and ducks and chocolate cake. As a consequence, describing a thought or worldview or other cognitive act as "metaphysical" can become a status maneuver... a way of distinguishing it from object-level cognition in an implied context where more object-level (aka "superficial") cognition is seen as less sophisticated or deep or otherwise less valuable.

Some people also use "metaphysical" to refer to a class of events also sometimes referred to as "mystical," "occult," "supernatural," etc. Sometimes this usage is consistent with the above -- that is, sometimes people are articulating a model of the world in which those events can best be understood by understanding the reality which underlies our experience of the world.

Other times it's at best metaphorical, or just outright bullshit.

As far as correct behavior goes... asking people to taboo "metaphysical" is often helpful.

Comment author: CBHacking 08 December 2014 10:10:13PM 1 point [-]

The rationalist taboo is one of the tools I have most enjoyed learning and found most useful in face-to-face conversations since discovering the Sequences. Unfortunately, it's not practical when dealing with mass-broadcast or time-shifted material, which makes it of limited use in dealing with the scenarios where I most frequently encounter the concept of metaphysics.

I tend to (over)react poorly to status maneuvers, which is probably part of why I've had a hard time with the word; it gets used in an information-free way sufficiently often that I'm tempted to just always shelve it there, and that in turn leads me to discount or even ignore the entire thought which contained it. This is a bias I'm actively trying to brainhack away, and I'm now tempted to go find some of my philosophically-inclined social circle and see if I can avoid that automatic reaction at least where this specific word is concerned (and then taboo it anyhow, for the sake of communication being informative).

I still haven't fully internalized the concept, but I'm getting closer. "The kinds of things that exist, and their natures" is something I can see a use for, and hopefully I can make it stick in my head this time.

Comment author: TheOtherDave 09 December 2014 07:24:51PM 0 points [-]

it gets used in an information-free way sufficiently often that I'm tempted to just always shelve it there, and that in turn leads me to discount or even ignore the entire thought which contained it.

This seems like a broader concern, and one worth addressing. People drop content-free words into their speech/writing all the time, either as filler or as "leftovers" from precursor sentences.

What happens if you treat it as an empty modifier, like "really" or "totally"?

Comment author: CBHacking 09 December 2014 11:20:57PM 0 points [-]

Leaving aside the fact that, by default, I don't consider "totally" to be content-free (I'm aware a lot of people use it that way, but I still often need to consciously discard the word when I encounter it), that still seems like at best it only works when used as a modifier. It doesn't help if somebody is actually talking about metaphysics. I'll keep it in mind as a backup option, though; "if I can't process that sentence when I include all the words they said, and one of them is 'metaphysical', what happens if I drop that word?"

Comment author: advancedatheist 08 December 2014 07:44:20PM *  1 point [-]

Did organized Objectivist activism, at least in some of its nuttier phases, offer to turn its adherents who get it right into a kind of superhuman entity? I guess you could call such enhanced people "Operating Objectivists," analogous to the enhanced state promised by another cult.

Interestingly enough Rand seems to make a disclaimer about that in her novel Atlas Shrugged. The philosophy professor character Hugh Akston says of his star students, Ragnar Danneskjold, John Galt and Francisco d'Anconia:

"Don't be astonished, Miss Taggart," said Dr. Akston, smiling, "and don't make the mistake of thinking that these three pupils of mine are some sort of superhuman creatures. They're something much greater and more astounding than that: they're normal men—a thing the world has never seen—and their feat is that they managed to survive as such. It does take an exceptional mind and a still more exceptional integrity to remain untouched by the brain-destroying influences of the world's doctrines, the accumulated evil of centuries—to remain human, since the human is the rational."

But then look at what Rand shows these allegedly "normal men" can do as Operating Objectivists:

Hank Rearden, a kind of self-trained Operating Objectivist who never studied under Akston, can design a new kind of railroad bridge in his mind which exploits the characteristics of his new alloy, even though he has never built a bridge before.

Francisco d'Anconia can deceive the whole world as he depletes his inherited fortune while making everyone believe that he spends his days as a playboy pickup artist, when in fact he has lived without sex since his youthful sexual relationship with Dagny.

John Galt can build a motor which violates the conservation of energy and the laws of thermodynamics. Oh, and he can also confidently master Dagny's unexpected intrusion into Galt's Gulch despite his secret crush on her, his implied adult virginity and his lack of an adult man's skill set for handling women. (You need life experience for that, not education in philosophy.) On top of that, he can survive torture without suffering from post-traumatic stress symptoms.

So despite Rand's disclaimer, if you view Atlas Shrugged as "advertising" for the abilities Rand's philosophy promises as it unlocks your potentials as a "normal man," then the Objectivist organizations which work with this idea implicitly do seem to offer to turn you into a "superhuman creature."

Comment author: buybuydandavis 08 December 2014 08:36:29PM *  -1 points [-]

Not quite in the spirit of admitting ignorance, but since it's in this thread, I'll answer it.

Did organized Objectivist activism, at least in some of its nuttier phases, offer to turn its adherents who get it right into a kind of superhuman entity? ...
another cult

No.

So despite Rand's disclaimer, if you view....

So despite what Rand or any Objectivist ever said or did, if you choose to view Objectivism as a nutty cult, you can.

If you were actually interested in why Rand's characters are the way they are, you could read her book on art, "The Romantic Manifesto". Probably a quick google search on the book would give you your answer.

Comment author: fubarobfusco 08 December 2014 09:09:28PM 4 points [-]

Did organized Objectivist activism, at least in some of its nuttier phases, offer to turn its adherents who get it right into a kind of superhuman entity? I guess you could call such enhanced people "Operating Objectivists," analogous to the enhanced state promised by another cult.

Not that I'm aware of, but you might also be interested in A. E. Van Vogt's "Null-A" novels, which attempted to do this for a fictionalized version of Korzybski's General Semantics.

(Van Vogt later did become involved in Scientology, as did his (and Hubbard's) editor John W. Campbell.)

Comment author: NancyLebovitz 08 December 2014 09:22:33PM 3 points [-]

For what it's worth, Rand was an unusually capable person in her specialty (she wrote two popular, and somewhat politically influential, novels in her second language), but still not in the same class as her heroes.

I'm not sure you've got the bit about Rearden right. I don't think there's any evidence that he came up with the final design for the bridge. There's a mention that he worked with a team to discover Rearden Metal, and presumably he also had an engineering team. The point was that he (presumably) knew enough engineering to come up with something plausible, and that he was fascinated enough by producing great things to be distracted from something major going wrong (I don't remember what).

I have no idea whether Rand knew Galt's engine was physically impossible, though I think she should have, considering that other parts of the book were well-researched. Dagny's situation at Taggart Transcontinental was probably typical for an Operations vice-president in a family owned business. The description of her doing cementless masonry matched with a book on the subject. Atlas Shrugged was the only place I saw the possibility of shale oil mentioned until, decades later, it turned out to be a possible technology.

Comment author: CBHacking 08 December 2014 10:27:43PM 1 point [-]

The research fail that jumped out at me hardest in Atlas Shrugged was the idea that so many people would consider a metal both stronger and lighter than steel physically impossible. By the time the book was published, not only was titanium fairly well understood, it was also being widely used for military and (some; what could be spared from Cold War efforts) commercial purposes. Its properties don't exactly match Rearden Metal's (even ignoring the color and other mostly-unimportant characteristics), but they're close enough that it should be obvious that such materials are completely possible. Of course, that part of the book also talks about making steel rails last longer by making them denser, which seems completely bizarre to me; there are ways to increase the hardness of steel, but they involve things like heat-treating it.

TL;DR: I'm not sure I'd call the book "well-researched" as a whole, though some parts may well have been.

Comment author: Alsadius 08 December 2014 11:56:34PM 2 points [-]

The book exists in a deliberately timeless setting - it has elements of everything from about a century of span. Railroads weren't exactly building massive new lines in 1957, either.

Comment author: gattsuru 08 December 2014 09:27:18PM *  2 points [-]

A number of these matters seem more like narrative or genre conveniences: Francisco acts as a playboy in the same way Bruce Wayne does; Rearden's bridge development passes a lot of work to his specialist engineers (similarly to Rearden Metal having a team of scientists skeptically helping him) while pretending the man is still a one-man designer (among other handwaves). At the same time, Batman is not described as a superhuman engineer or playboy, nor would he act as those types of heroes do. I'm also not sure we can know the long-term negative repercussions John Galt experiences, given the length of the book, and not all people who experience torture display clinically relevant post-traumatic stress symptoms; many who do show them only sporadically. His engine is based on now-debunked theories of physics that weren't so obviously thermodynamics-violating at the time, similarly to Project Xylophone.

These men are intended to be top-of-field capability from the perspective of a post-Soviet writer who knew little about their fields and could easily research less. Many of the people who show up under Galt's tutelage are similarly exceptionally skilled, but even more are not so hugely capable.

On the other hand, the ability of her protagonists to persuade others and evaluate the risk of getting shot starts at superhuman and quickly becomes ridiculous.

On the gripping hand, I'm a little cautious about emphasizing fictional characters and acknowledgedly Heroic abilities as evidence, especially when the author wrote a number of non-fiction philosophy texts related to this topic.

Comment author: mgin 08 December 2014 10:05:37PM 0 points [-]

Not to my knowledge, but they should have! PM me.

Comment author: Viliam_Bur 08 December 2014 11:10:49PM *  5 points [-]

Seems to me that Rand's model is similar to LessWrong's "rationality as non-self-destruction".

Objectivism in the novels doesn't give the heroes any positive powers. It merely helps them avoid some harmful beliefs and behaviors, which are extremely common. Not burdened by these negative beliefs and behaviors, these "normal men" can fully focus on what they are good at, and if they have high intelligence and make the right choices, they can achieve impressive results.

(The harmful beliefs and behaviors include: feeling guilty for being good at something, focusing on exploiting other people instead of developing one's own skills.)

Hank Rearden's design of a new railroad bridge was completely unrelated to his political beliefs. It was a consequence of his natural talent and hard work, perhaps some luck. The political beliefs only influenced his decision about what to do with the invented technology. I don't remember exactly what his options were, but I think one of them was "archive the technology, to prevent changes in the industry, to preserve the existing social order", and as a consequence of his beliefs he refused to consider this option. And even this was before he became a full Objectivist. (The only perfect Objectivist in the novel is Galt; and perhaps the people who later accept Galt's views.)

Francisco d'Anconia's fortune, as you wrote, was inherited. That's a random factor, unrelated to Objectivism.

John Galt's "magical" motor was also a result of his natural talent and hard work, plus some luck. The political beliefs only influenced his decision to hide the motor from public, using a private investor and a secret place.

Violating the law of thermodynamics, and surviving the torture without damage... that's fairy-tale stuff. But I think none of them is an in-universe consequence of Objectivism.

So, what exactly does Objectivism (or Hank Rearden's beliefs, which are partial Objectivism plus some compartmentalization) cause, in-universe?

It makes the heroes focus on their technical skills, and the more enlightened heroes on keeping their technical inventions for themselves. As opposed to attempting a political career or serving the existing political powers. Instead of networking, Rearden focuses on studying metal. Instead of donating the magical machine to the government, Galt keeps it secret. Instead of having his fortune taken by the government, d'Anconia destroys it... probably because of a lack of smarter alternative (or maybe he somehow secretly preserves a part of his fortune, and ostentatiously destroys the rest to draw attention away; I don't remember the details here).

Without Objectivism, the heroes would most likely become clueless nerds serving the elite, because they couldn't win at the political fight (requires a completely different set of skills that people like Mouch are experts in), but they also wouldn't understand that the system is intentionally designed against them, so they would spend their energy in a futile fight, winning a few battles but losing the war.

Understanding the system allows one to focus on finding an "out of the box" solution. John Galt's victory is his ability to use his natural talent and work to devise a solution where he can live without political masters. He is economically independent, thanks to his magical motor, but also mentally independent. (If we removed the magic, his victory would be understanding the system, and the ability to resist its emotional blackmail and optimize for himself.)

The lack of this understanding made Rearden vulnerable to blackmail from his wife, and in a way cost Eddie Willers his life. (And James Taggart his sanity, if I remember correctly.)

tl;dr: (According to Rand) Objectivism makes you able to understand how the system works, so you can more realistically optimize for your values. Objectivism doesn't give you talent, skills, or luck; but it gives you a chance to use them more efficiently, instead of wasting them in a fight you cannot win.

EDIT: In real life, I expect that an Objectivist training could make people be more aware of their goals and negotiate harder. Maybe increase their work ethic.

Comment author: alienist 11 December 2014 04:56:30AM 9 points [-]

On top of that, he can survive torture without suffering from post-traumatic stress symptoms.

PTSS almost seems like a culture-bound syndrome of the modern West. In particular there don't seem to be any references to it before WWI and even there (and in subsequent wars) all the references seem to be from the western allies. Furthermore, the reaction to "shell shock", as it was then called, during WWI suggests that this was something new that the established structures didn't know how to deal with.

Comment author: bogus 11 December 2014 09:32:20AM 4 points [-]

PTSS almost seems like a culture-bound syndrome of the modern West.

There are significant confounders here, as modern science-based psychology got started around the same time - and WWI really was very different from earlier conflicts, not least in its sheer scale. But the idea is nonetheless intriguing; the West really is quite different from traditional societies, along lines that could plausibly make folks more vulnerable to traumatic shock.

Comment author: NancyLebovitz 11 December 2014 05:54:15PM *  6 points [-]

Not everyone who's had traumatic experiences has PTSD.

More information

The scientists have a theory, and it has to do with the root causes of PTSD, previously undocumented. As compared with the resilient Danish soldiers, all those who developed PTSD were much more likely to have suffered emotional problems and traumatic events prior to deployment. In fact, the onset of PTSD was not predicted by traumatic war experiences but rather by childhood experiences of violence, especially punishment severe enough to cause bruises, cuts, burns and broken bones. PTSD sufferers were also more likely to have witnessed family violence and to have experienced physical attacks, stalking or death threats by a spouse. They also more often had past experiences that they could not, or would not, talk about.

Comment author: JQuinton 08 December 2014 08:11:57PM 2 points [-]

Looking for some people to refute this harebrained idea I recently came up with.

The time period from the advent of the industrial revolution to the so-called digital revolution was about 150 - 200 years. Even though computers were being used around WWII, widespread computer use didn't start to shake things up until 1990 or so. I would imagine that AI would constitute a similar fundamental shift in how we live our lives. So would it be a reasonable extrapolation to think that widespread AI would be about 150 - 200 years after the beginning of the information age?

Comment author: sixes_and_sevens 08 December 2014 08:41:17PM 18 points [-]

By what principle would such an extrapolation be reasonable?

Comment author: shminux 08 December 2014 09:15:28PM 9 points [-]

If you are doing reference class forecasting, you need at least a few members in your reference class and a few outside of it, together with the reasons why some are in and others out. If you are generalizing from one example, then, well...

Comment author: NobodyToday 08 December 2014 09:40:32PM 3 points [-]

I'm a first-year AI student, and we are currently in the middle of exploring AI 'history'. Of course I don't know a lot about AI yet, but the interesting part about learning the history of AI is that in some sense the climax of AI research is already behind us. People got very interested in AI after the Dartmouth conference (http://en.wikipedia.org/wiki/Dartmouth_Conferences) and were so optimistic that they thought they could make an artificially intelligent system in 20 years. And here we are, still struggling with the seemingly simplest things, such as computer vision.

The problem is that they came across some hard problems which they can't really ignore. One of them is the frame problem (http://www-formal.stanford.edu/leora/fp.pdf); another is the common-sense problem.

Solutions to many of them (I believe) are either 1) huge brute-force power or 2) machine learning. And machine learning is a thing which we can't seem to get very far with. Programming a computer to program itself -- I can understand why that must be quite difficult to accomplish. So since the 80s AI researchers have mainly focused on building expert systems: systems which can do a certain task much better than humans. But they lack many abilities that are very easy for humans (which is apparently called Moravec's paradox).

Anyway, the point I'm trying to get across, and I'm interested in hearing whether you agree or not, is that AI was/is very overrated. I doubt we can ever make a real artificially intelligent agent, unless we can solve the machine learning problem for real. And I doubt whether that is ever truly possible.

Comment author: Daniel_Burfoot 08 December 2014 11:02:55PM 2 points [-]

And machine learning is a thing which we can't seem to get very far with.

Standard vanilla supervised machine learning (e.g. backprop neural networks and SVMs) is not going anywhere fast, but deep learning is really a new thing under the sun.

Comment author: Punoxysm 10 December 2014 05:17:31AM *  1 point [-]

but deep learning is really a new thing under the sun.

On the contrary, the idea of making deeper nets is nearly as old as ordinary 2-layer neural nets; successful implementations date back to the late 90s in the form of convolutional neural nets, and they had another burst of popularity in 2006.

Advances in hardware, data availability, heuristics about architecture and training, and large-scale corporate attention have allowed the current burst of rapid progress.

This is both heartening, because the foundations of its success are deep, and tempering, because the limitations that have held it back before could resurface to some degree.

Comment author: DanielLC 09 December 2014 06:18:25PM 0 points [-]

And I doubt whether that is ever truly possible.

It's possible. We're an example of that. The question is if it's humanly possible.

There's a common idea of an AI being able to make another twice as smart as itself, which could make another twice as smart as itself, etc. causing an exponential increase in intelligence. But it seems just as likely that an AI could only make one half as smart as itself, in which case we'll never even be able to get the first human-level AI.

Comment author: ctintera 10 December 2014 11:40:00AM *  0 points [-]

The example you give to prove plausibility is also a counterexample to the argument you make immediately afterwards. We know that less-intelligent or even non-intelligent things can produce greater intelligence because humans evolved, and evolution is not intelligent.

It's more a matter of whether we have enough time to dredge something reasonable out of the problem space. If we were smarter we could search it faster.

Comment author: DanielLC 10 December 2014 07:06:52PM 0 points [-]

Evolution is an optimization process. It might not be "intelligent" depending on your definition, but it's good enough for this. Of course, that just means that a rather powerful optimization process occurred just by chance. The real problem is, as you said, it's extremely slow. We could probably search it faster, but that doesn't mean that we can search it fast.

Comment author: Punoxysm 08 December 2014 09:00:49PM 7 points [-]

Can anyone link a deep discussion, including energy and time requirements, issues with spaceship shielding from radiation and collisions, etc., that would be involved in interstellar travel? I ask because I am wondering whether this is substantially more difficult than we often imagine, and perhaps a bottleneck in the Drake Equation.

Comment author: shminux 08 December 2014 09:08:45PM 4 points [-]

Project Icarus seems like a decent place to start.

Comment author: Alsadius 09 December 2014 12:06:15AM *  9 points [-]

tl;dr: It is definitely more difficult than most people think, because most people's thoughts (even scientifically educated ones) are heavily influenced by sci-fi, which is almost invariably premised on having easy interstellar transport. Even authors like Clarke with difficult interstellar transport assume that the obvious problems (e.g., lightspeed) remain, but the non-obvious problems (e.g., what happens when something breaks when you're two light-years from the nearest macroscopic object) disappear.

Comment author: gjm 09 December 2014 02:02:17AM 4 points [-]

Some comments on this from Charles Stross. Not optimistic about the prospects. Somewhat quantitative, at the back-of-envelope level of detail.

Comment author: lukeprog 10 December 2014 03:46:42AM 2 points [-]

A fair bit of this is either cited or calculated within "Eternity in six hours." See also my interview with one of its authors, and this review by Nick Beckstead.

Comment author: Eniac 10 December 2014 04:41:14AM 2 points [-]

You might want to check out Centauri Dreams, best blog ever and dedicated to this issue.

Comment author: Dahlen 08 December 2014 09:20:39PM *  4 points [-]

Is it possible even in principle to perform a "consciousness transfer" from one human body to another? On the same principle as mind uploading, only the mind ends up in another biological body rather than a computer. Can you transfer "software" from one brain to another in a purely informational way, while preserving the anatomical integrity of the second organism? If so, would the recipient organism come from a fully alive and functional human who would be basically killed for this purpose? Or bred for this purpose? Or would it require a complete brain transplant? (If so, how would neural structures found in the second body heal & connect with the transplanted brain so that a functional central nervous system results?) Wouldn't the person whose consciousness is being transferred experience some sort of personality change due to "inhabiting" a structurally different brain or body?

Is this whole hypothesis just an artifact of reminiscent introjected mind-body dualism, not compatible with modern science? Does the science world even know enough about consciousness and the brain to be able to answer this question?

I'm asking this because ever since I found out about ems and mind uploading, having minds moved to bodies rather than computers seemed to me a more appealing hypothetical solution to the problem of death/mortality. Unfortunately, I lack the necessary background knowledge to think coherently about this idea, so I figured there are many people on LW who don't, and could explain to me whether this whole idea makes sense.

Comment author: ChristianKl 08 December 2014 10:52:14PM 1 point [-]

There's no such thing as "purely informational" when it comes to brains.

I'm asking this because ever since I found out about ems and mind uploading, having minds moved to bodies rather than computers seemed to me a more appealing hypothetical solution to the problem of death/mortality.

If you want to focus on that problem it's likely easier to simply fix up whatever is wrong in the body you are starting with than doing complex uploading.

Comment author: Dahlen 12 December 2014 01:24:22AM 0 points [-]

There's no such thing as "purely informational" when it comes to brains.

It's good to know, but can you elaborate more on this in the context of the grandparent comment? Perhaps with an analogy to computers.

If you want to focus on that problem it's likely easier to simply fix up whatever is wrong in the body you are starting with than doing complex uploading.

It occurred to me too, but I'm not sure this is the definite conclusion. Fully healing an aging organism suffering from at least one severe disease, while closer to current medical technology, wouldn't leave the patient in as good a state as simply moving to a 20-year-old body.

Comment author: CBHacking 08 December 2014 10:58:14PM 4 points [-]

I don't think anybody has hard evidence of answers to any of those questions yet (though I'd be fascinated to learn otherwise) but I can offer some conjectures:

Possible in principle? Yes. I see no evidence that sentience and identity are anything other than information stored in the nervous system, and in theory the cognitive portion of a nervous system is an organ and could be transplanted like any other.

Preserving anatomical integrity? Not with anything like current science. We can take non-intrusive brain scans, but they're pretty low-resolution and (so far as I know) strictly read-only. Even simply stimulating parts of the brain isn't enough to basically re-write it in such a way that it becomes another person's brain.

Need to kill donors? To the best of my knowledge, it's theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that's still a potential human - the vegetativeness needs to be reversible for this to be useful - so the ethics are still highly questionable. It's probably possible to do it without a full brain at all, which seems less evil if you can somehow do it by some mechanism other than what amounts to a pre-natal full lobotomy, but would require the physical brain transplant option for transference.

Nerves connecting and healing? Nerves can repair themselves, though it's usually extremely slow. Stem cell therapies have potential here, though. Connecting the brain to the rest of the body is a lot of nerves, but they're pretty much all sensory and motor nerves so far as I know; the brain itself is fairly self-contained.

Personality change? That depends on how different the new body is from the old, I would guess. The obviously-preferable body is a clone, for many reasons including avoiding the need to avoid immune system rejection of the new brain. Personality is always going to be somewhat externally-driven, so I wouldn't expect somebody transferred from a 90-year-old body to a 20-year-old one to have the same personality regardless of any other information because the body will just be younger. On the other hand, if you use a clone body that's the same age as the transferee, it wouldn't shock me if the personality didn't actually change significantly; it should basically feel like going under for surgery and then coming out again with nothing changed.

Now, mind you, I'm no brain surgeon (or medical professional of any sort), nor have I studied any significant amount of psychology. Nor am I a philosopher (see my question above). However, I don't really see how the mind could be anything except a characteristic of the body. Altering (intentionally or otherwise) the part of the body responsible for thought alters the mind. Our current attempted maps of the mind don't come close to fully representing the territory, but I firmly believe it is mappable. Whether an existing one is re-mappable I can't say, but the idea of transplanting a brain has been explored in science fiction for decades, and in theory I see no logical reason why it couldn't work.

Comment author: Gunnar_Zarncke 09 December 2014 09:46:58PM *  3 points [-]

To the best of my knowledge, it's theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time.

I don't think this is currently possible. The body just wouldn't work. A large part of the 'wiring' during infancy and childhood is connecting body parts and functions with higher and higher level concepts. Think about toilet training. You aren't even aware of how it works, but it nonetheless somehow connects large-scale planning (how urgent is it, when and where are toilets) to the actual control of the organs. Considering how different minds (including the connection to the body) are, I think the minimum requirement (short of singularity-level interventions) is an identical twin.

That said, I think the existing techniques for transferring motion from one brain to another, combined with advanced hypnosis and drugs, could conceivably be developed to a point where it is possible to transfer noticeable parts of your identity over to another body - at least over an extended period of time where the new brain 'learns' to be you. To also transfer memory is comparably easy. Whether the result can be called 'you' or is sufficiently alike to you is another question.

Comment author: Dahlen 12 December 2014 01:14:05AM *  1 point [-]

Need to kill donors? To the best of my knowledge, it's theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that's still a potential human - the vegetativeness needs to be reversible for this to be useful - so the ethics are still highly questionable.

That's how I pictured it, yes. At this point I wouldn't concern myself with the ethics of it, because, if our technology advances this much, then simply the fact that humanity can perform such a feat is an extremely positive thing, and probably the end of death as we know it. What worries me more is that this wouldn't result in a functional mature individual. For instance: in order to develop the muscular system, the body's skeletal muscles would have to experience some sort of stress, i.e. be used. If you grow the organism in a jar from birth to consciousness transfer (as is probably most ethical), it wouldn't have moved at all its entire life up to that point, and would therefore have extremely weak musculature. What to do in the meantime, electrically stimulate the muscles? Maybe, but it probably wouldn't have results comparable to natural usage. Besides, there are probably many other body subsystems that would suffer similarly without much you could do about it. See Gunnar Zarncke's comment below.

On the other hand, if you use a clone body that's the same age as the transferee, it wouldn't shock me if the personality didn't actually change significantly; it should basically feel like going under for surgery and then coming out again with nothing changed.

Yes, but I imagine most uses to be related to rejuvenation. It would mean that the genetic info required for cloning would have to be gathered basically at birth (and the cloning process begun shortly thereafter), and there would still be a 9-month age difference. There's little point in growing a backup clone for an organism so soon after birth. An age difference of 20 years between person and clone seems more reasonable.

Comment author: Alsadius 09 December 2014 12:09:43AM 1 point [-]

In order to provide a definite answer to this question, we'd need to know how the brain produces consciousness and personality, as well as the exact mechanism of the upload (e.g., can it rewire synapses?).

Comment author: Eniac 10 December 2014 05:06:05AM 0 points [-]

The task you describe, at least the part where no whole brain transplant is involved, can be divided into two parts: 1) extracting the essential information about your mind from your brain, and 2) implanting that same information back into another brain.

Either of these could be achieved in two radically different ways: a) psychologically, i.e. by interview or memoir writing on the extraction side and "brain-washing" on the implanting side, or b) technologically, i.e. by functional MRI, electro-encephalography, etc on the extraction side. It is hard for me to envision a technological implantation method.

Either way, it seems to me that once we understand the mind enough to do any of this, it will turn out the easiest to just do the extraction part and then simulate the mind on a computer, instead of implanting it into a new body. Eliminate the wetware, and gain the benefit of regular backups, copious copies, and Moore's law for increasing effectiveness. Also, this would be ethically much more tractable.

It seems to me this could also be the solution to the unfriendly AI problem. What if the AI are us? Then yielding the world to them would not be so much of a problem, suddenly.

Comment author: mwengler 11 December 2014 04:27:16PM 0 points [-]

psychologically, i.e. by interview or memoir writing on the extraction side and "brain-washing" on the implanting side,

I would expect recreating a mind from interviews and memoirs to be about as accurate as building a car based on interviews and memoirs written by someone who had driven cars. Which is to say, the part of our mind that talks and writes is not noted for its brilliant and detailed insight into how the vast majority of the mind works.

Comment author: mwengler 11 December 2014 04:35:24PM -1 points [-]

Suppose all the memories in one person were wiped and replaced with your memories. I believe the new body would claim to be you. It would introspect as you might now, and find your memories as its own, and say "I am Dahlen in a new body."

But would it be you? If the copying had been non-destructive, then Dahlen in the old body still exists and would "know" on meeting Dahlen in the new body that Dahlen in the new body was really someone else who just got all Dahlen's memories up to that point.

Meanwhile, Dahlen in the new body would have capabilities, moods, reactions, which would depend on the substrate more than the memories. The functional parts of the brain, the wiring-other-than-memories as it were, would be different in the new body. Dahlen in the new body would probably behave in ways that were similar to how the old body with its old memories behaved. It would still think it was Dahlen, but as Dahlen in the old body might think, that would just be its opinion and obviously it is mistaken.

As to uploading, it is more than the brain that needs to be emulated. We have hormonal systems that mediate fear and joy and probably a broad range of other feelings. I have a sense of my body that I am in some sense constantly aware of which would have to be simulated and would probably be different in an em of me than it is in me, just as it would be different if my memories were put in another body.

Would anybody other than Dahlen in the old body have a reason to doubt that Dahlen in the new body wasn't really Dahlen? I don't think so, and especially Dahlen in the new body would probably be pretty sure it was Dahlen, even if it claimed to rationally understand how it might not be. It would know it was somebody, and wouldn't be able to come up with any other compelling idea for who it was other than Dahlen.

Comment author: Dahlen 12 December 2014 12:53:39AM *  1 point [-]

I understand all this. And it's precisely the sort of personality preservation that I find largely useless and would like to avoid. I'm not talking about copying memories from one brain to another; I'm talking about preserving the sense of self in such a way that the person undergoing this procedure would have the following subjective experience: be anesthetized (probably), undergo surgery (because I picture it as some form of surgery), "wake up in new body". (The old body would likely get buried, because the whole purpose of performing such a transfer would be to save dying -- very old or terminally ill -- people's lives.) There would be only one extant copy of that person's memories, and yet they wouldn't "die"; there would be the same sort of continuity of self experienced by people before and after going to sleep. The one who would "die" is technically the person in the body which constitutes the recipient of the transfer (who may have been grown just for this purpose and kept unconscious its whole life). That's what I mean. Think of it as more or less what happens to the main character in the movie Avatar.

I realize the whole thing doesn't sound very scientific, but have I managed to get my point across?

As to uploading, it is more than the brain that needs to be emulated. We have hormonal systems that mediate fear and joy and probably a broad range of other feelings. I have a sense of my body that I am in some sense constantly aware of which would have to be simulated and would probably be different in an em of me than it is in me, just as it would be different if my memories were put in another body.

Yes, but... Everybody's physiological basis for feelings is more or less the same; granted, there are structural differences that cause variation in innate personality traits and other mental functions, and a different brain might employ the body's neurotransmitter reserve in different ways (I think), but the whole system is sufficiently similar from human to human that we can relate to each other's experiences. There would be differences, and the differences would cause the person to behave differently in the "new body" than it did in the "old body", but I don't think one would have to move the glands or limbic system or what-have-you in addition to just the brain.

Comment author: NancyLebovitz 08 December 2014 10:05:27PM 10 points [-]

Is there any plausible way the earth could be moved away from the sun and into an orbit which would keep the earth habitable when the sun becomes a red giant?

Comment author: calef 08 December 2014 10:59:42PM *  15 points [-]

According to http://arxiv.org/abs/astro-ph/0503520 we would need to be able to boost our current orbital radius to about 7 AU.

This corresponds to a change in specific orbital energy from -mu/(2 * 1 AU) to -mu/(2 * 7 AU), where mu = 1.327 * 10^20 m^3/s^2 is the standard gravitational parameter of the sun. That works out to about 3.8 * 10^8 Joules / Kilogram, or about 2.3 * 10^33 Joules when we restore the reduced mass of the earth/sun (which I'm approximating as just the mass of the earth).

Wolframalpha helpfully supplies that this is about a fifth of the total energy released by the sun in 1 year.

Or, if you like, it's equivalent to the total mass energy of ~2.5 * 10^16 Kilograms of matter (about 0.01% the mass of the asteroid Vesta).

So until we're able to harness and control energy on the order of a sizable fraction of the sun's total yearly output, we won't be able to do this any time soon.

There might be an exceedingly clever way to do this by playing with orbits of nearby asteroids to perturb the orbit of the earth over long timescales, but the change in energy we're talking about here is pretty huge.
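Sketching the same arithmetic in a few lines of Python (the constants are standard textbook values):

```python
# Energy needed to raise Earth's circular orbit from 1 AU to 7 AU.
# Specific orbital energy of a circular orbit of radius a: eps = -mu / (2 a).
MU_SUN = 1.32712440018e20   # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11         # metres
M_EARTH = 5.972e24          # kg
L_SUN = 3.828e26            # solar luminosity, W
YEAR = 3.156e7              # seconds

eps_1 = -MU_SUN / (2 * 1 * AU)   # J/kg at 1 AU
eps_7 = -MU_SUN / (2 * 7 * AU)   # J/kg at 7 AU
d_eps = eps_7 - eps_1            # ~3.8e8 J/kg
d_E = d_eps * M_EARTH            # ~2.3e33 J

print(f"delta-eps: {d_eps:.2e} J/kg")
print(f"delta-E:   {d_E:.2e} J")
print(f"fraction of one year of solar output: {d_E / (L_SUN * YEAR):.2f}")
```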

Comment author: Eniac 09 December 2014 01:10:30AM 11 points [-]

I think you have something there. You could design a complex, but at least metastable orbit for an asteroid sized object that, in each period, would fly by both Earth and, say, Jupiter. Because it is metastable, only very small course corrections would be necessary to keep it going, and it could be arranged such that at every pass Earth gets pushed out just a little bit, and Jupiter pulled in. With the right sized asteroid, it seems feasible that this process could yield the desired results after billions of years.

Comment author: Kyre 09 December 2014 05:13:05AM 10 points [-]
Comment author: Eniac 10 December 2014 04:24:40AM 2 points [-]

Hah, thanks for pointing this out. I must have read or heard of this before and then forgotten about it, except in my subconscious. Looks like they have done the math, too, and it figures. Cool!

Comment author: Daniel_Burfoot 08 December 2014 11:00:24PM 1 point [-]

This is a fascinating question. Very speculatively, I could imagine somehow using energy gained by pushing other objects closer to the Sun, to move the Earth away from the Sun. Like some sort of immense elastic band stretching between Mars and Earth, pulling Earth "up" and Mars "down".

Comment author: DanielLC 09 December 2014 06:04:16PM 1 point [-]

That is essentially what would happen if you used gravitational assistance and orbited asteroids between Mars and Earth.

Comment author: CBHacking 08 December 2014 11:07:32PM 4 points [-]

Ignoring the concept of "can we apply that much delta-V to a planet?", I'd be interested to know whether it's believed that there exists a "Goldilocks zone" suitable for life at all stages of a star's life. Intuitively it seems like there should be, but I'm not sure.

Of course, it should be pointed out that the common understanding of "when the sun becomes a red giant" may be a bit flawed; the sun will cool and expand, then collapse. On a human time scale, it will spend a lot of that time as a red giant, but if you simply took the Earth when its orbit started to be crowded by the inner edge of the Goldilocks zone and put it in a new orbit, that new orbit wouldn't be anywhere close to an eternally safe one. Indeed, I suspect that the outermost of the orbits required for the giant-stage sun would be too far from the sun at the time we'd first need to move the Earth.

Comment author: JoshuaZ 08 December 2014 11:52:54PM 1 point [-]

Yes, a few years ago I saw an article with a back-of-the-envelope estimate suggesting this would be doable if one could turn mass on the moon more or less directly into energy and use the moon as a gravitational tug to slowly move Earth out of the way. You can change mass almost directly into energy by feeding it into a few smallish black holes.

Comment author: blogospheroid 09 December 2014 09:44:00AM *  0 points [-]

How do they propose to move the black holes? Nothing can touch a black hole, right?

Comment author: gjm 09 December 2014 12:26:45PM 5 points [-]

Black holes feel gravity just like any other massive body. And they can be electrically charged. So you can move them around with strong enough gravitational and/or electric fields.

Comment author: DanielLC 09 December 2014 06:03:21PM 1 point [-]

It can, as long as you don't mind that you won't get it back when you're done. You have to constantly fuel the black hole anyway. Just throw the fuel in from the opposite direction that you want the black hole to go.

Comment author: Eniac 10 December 2014 04:34:38AM 4 points [-]

Throwing mass into a black hole is harder than it sounds. Conveniently sized black holes that you actually would have a chance at moving around are extremely small, much smaller than atoms, I believe. I think they would just sit there without eating much, despite strenuous efforts at feeding them. The cross-section is way too small.

To make matters worse, such holes would emit a lot of Hawking radiation, which would a) interfere with trying to feed them, and b) quickly evaporate them ending in an intense flash of gamma rays.

Comment author: DanielLC 10 December 2014 06:56:13AM 0 points [-]

The problem is throwing mass into other mass hard enough to make a black hole in the first place.

Hawking radiation isn't a big deal. In fact, the problem is making a black hole small enough to get a significant amount of it. An atom-sized black hole has around a tenth of a watt of Hawking radiation. I think it might be possible to get extra energy from it. From what I understand, Hawking radiation is just what doesn't fall back in. If you enclose the black hole, you might be able to absorb some of this energy.

Comment author: Eniac 11 December 2014 03:39:52AM *  1 point [-]

Yes, making them would be incredibly hard, and because of their relatively short lifetimes, it would be extremely surprising to find any lying around somewhere. Atom-sized black holes would be very heavy and not produce much Hawking radiation, as you say. Smaller ones would produce more Hawking radiation, be even harder to feed, and evaporate much faster.

Comment author: DaFranker 09 December 2014 03:21:35PM *  2 points [-]

I'm curious about the thought process that led to this being asked in the "stupid questions" thread rather than the "very advanced theoretical speculation of future technology" thread. =P

As a more serious answer: Anything that would effectively give us a means to alter mass and/or the effects of gravity in some way (if there turns out to be a difference) would help a lot.

Comment author: NancyLebovitz 09 December 2014 04:02:35PM 2 points [-]

I wasn't sure there was a way to do it within current physics.

Now we get to the hard question: supposing we (broadly interpreted, it will probably be a successor species) want to move the earth outwards using those little gravitational nudges, how do we get civilizations with a sufficiently long attention span?

Comment author: DaFranker 09 December 2014 04:49:08PM 0 points [-]

[...] how do we get civilizations with a sufficiently long attention span?

I heard Ritalin has a solution. Couldn't pay attention long enough to verify. ba-dum tish

On a serious note, isn't the whole killing-the-Earth-for-our-children thing a rather interesting scenario? I've never seen it mentioned in my game theory-related reading, and I find that to be somewhat sad. I'm pretty sure a proper modeling of the game scenario would cover both climate change and eaten-by-red-giant.

Comment author: NancyLebovitz 09 December 2014 05:10:01PM 0 points [-]

I don't see the connection to killing the earth for our children. Moving the earth outwards is an effort to save the earth for our far future selves and our children.

Comment author: gjm 09 December 2014 07:11:39PM 3 points [-]

I think "for our children" means "as far as our children are concerned" and failing to move the earth's orbit so it doesn't get eaten by the sun (despite being able to do it) would qualify as "killing the earth for our children". (The more usual referents being things like resource depletion and pollution with potentially disastrous long-term effects.)

Comment author: NancyLebovitz 09 December 2014 07:17:26PM 0 points [-]

Thanks. That makes sense.

Comment author: DanielLC 09 December 2014 06:01:23PM 1 point [-]

If we haven't gotten one by then, we're doomed. Or at least, we don't get a very good planet. We could still have space-stations or live on planets where we have to bring our own atmosphere.

Comment author: shminux 09 December 2014 07:45:48PM *  3 points [-]

Not "when the sun becomes a red giant", because red giants are variable on a much too short time scale, but, as others mentioned, we can probably keep the earth in a habitable zone for another 5 billion years or so. We have more than enough hydrogen on earth to provide the necessary potential energy increase with fusion-based propulsion, though building something like a 100 petawatt engine is problematic at this point (for comparison, it is a significant fraction of the total solar radiation hitting the earth).

EDIT: I suspect that terraforming Mars (and/or cooling down the Earth more efficiently when the Sun gets brighter) would require less energy than moving the Earth to the Mars orbit. My calculations could be off, though, hopefully someone can do them independently.

Comment author: Anomylous 09 December 2014 08:31:17PM 4 points [-]

Only major problem I know of with terraforming Mars is how to give it a magnetic field. We'd have to somehow re-melt the interior of the planet. Otherwise, we could just put up with constant intense solar radiation, and atmosphere off-gassing into space. Maybe if we built a big fusion reactor in the middle of the planet...?

Comment author: shminux 09 December 2014 09:40:09PM *  9 points [-]

I recall estimating the power required to run an equatorial superconducting ring a few meters thick 1 km or so under the Mars surface with enough current to simulate Earth-like magnetic field. If I recall correctly, it would require about the current level of power generation on Earth to ramp it up over a century or so to the desired level. Then whatever is required to maintain it (mostly cooling the ring), which is very little. Of course, an accident interrupting the current flow would be an epic disaster.
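For scale, here's a rough sketch that treats the buried ring as a single circular current loop of roughly Mars's radius and asks for a dipole moment giving an Earth-like ~50 μT field at one Mars radius from the centre (the dipole formula breaks down right next to the wire itself, but this sets the overall scale):

```python
# Ring current for an Earth-like dipole field around Mars.
# Dipole field at distance R on the equatorial plane: B = (mu0/4pi) * m / R^3,
# with dipole moment m = I * pi * R^2 for a circular loop carrying current I.
import math

MU0_OVER_4PI = 1e-7       # T m / A
R_MARS = 3.3895e6         # m (ring buried ~1 km down, so loop radius ~ R_MARS)
B_TARGET = 5e-5           # T, roughly Earth's surface field strength

m_dipole = B_TARGET * R_MARS**3 / MU0_OVER_4PI   # required dipole moment, A m^2
current = m_dipole / (math.pi * R_MARS**2)       # required loop current, A

print(f"dipole moment: {m_dipole:.2e} A m^2")
print(f"ring current:  {current:.2e} A")
```

The current comes out around half a billion amperes, which gives a feel for why ramping it up (and protecting it) is the hard part.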

Comment author: alienist 11 December 2014 06:17:38AM 5 points [-]

Wouldn't it be more efficient to use that energy to destroy Mars and start building a Dyson swarm from the debris?

Comment author: shminux 11 December 2014 04:13:49PM 3 points [-]

Let's do a quick estimate. Destroying a Mars-like planet requires expending the equivalent of its gravitational self-energy, ~GM^2/R, which is about 10^31 J (which we could easily obtain from a comet a few km in radius... consisting of antimatter!) For comparison, the Earth's magnetic field has about 10^26 J of energy, some hundred thousand times less. I leave it to you to draw the conclusions.
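Plugging standard values for Mars's mass and radius into GM^2/R gives the self-energy scale directly:

```python
# Gravitational self-energy scale of Mars, ~ G M^2 / R.
G = 6.6743e-11      # m^3 kg^-1 s^-2
M_MARS = 6.4171e23  # kg
R_MARS = 3.3895e6   # m

E_grav = G * M_MARS**2 / R_MARS
print(f"G M^2 / R for Mars: {E_grav:.1e} J")
```

That comes out around 8 * 10^30 J; the exact binding energy depends on the density profile, but the order of magnitude is what matters here.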

Comment author: mwengler 11 December 2014 09:33:32PM 3 points [-]

The sun's luminosity will rise by around 300X as it turns into a giant. If we wish to keep the same energy flux onto the earth at that point, we must increase the earth's orbit by a factor of sqrt(300) = 17X. The total energy of the earth's current orbit is 2.65E33 J. We must reduce this to 1/17 of its current value, or reduce it by (16/17)*2.65E33 J = 2.5E33 J. The current total annual energy production in the world is about 5E20 J. The sun will be a red giant in about 7.6E9 years. So we would need nearly a thousand times current global energy production running full time into rocket motors to push the earth out to a safe orbit by the time the sun has expanded.

But it is worse than that. The sun actually expands over a scant 5 million years near the end of that 7.6E9 years. So to avoid freezing for billions of years because we have started moving away from the sun too soon, we essentially will need a million times current energy production running into rocket engines for those 5 million years of solar expansion. But the good news is we have 7.6E9 years to figure out how to do that.

If we use plasma rockets which push reaction mass out at 1% the speed of light, the slow outward spiral costs a delta-v of roughly 23 km/s (the difference between circular orbital speeds at 1 AU and 17 AU), so we will need a total of about 4.5E22 kg of reaction mass, a bit under 1% of the earth's total mass. The total mass of water on the earth is about 1E21 kg, so water alone would not cover it at that exhaust velocity; we would have to throw rock as well, or push the exhaust out faster.
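The same back-of-envelope numbers as a Python sketch (modelling the transfer as a slow spiral and the thrust as pure momentum exchange, both simplifications):

```python
# Back-of-envelope for pushing Earth from 1 AU out to 17 AU.
import math

MU_SUN = 1.32712440018e20   # m^3 s^-2
AU = 1.495978707e11         # m
M_EARTH = 5.972e24          # kg
C = 2.998e8                 # m/s

# Orbital energy to remove: |E| = mu * M / (2 a), reduced to 1/17 of its value.
E_orbit = MU_SUN * M_EARTH / (2 * AU)   # ~2.65e33 J
dE = (16 / 17) * E_orbit                # ~2.5e33 J

# Delta-v for a slow low-thrust spiral: difference of circular orbital speeds.
v1 = math.sqrt(MU_SUN / AU)             # ~29.8 km/s at 1 AU
v2 = math.sqrt(MU_SUN / (17 * AU))      # ~7.2 km/s at 17 AU
dv = v1 - v2

# Reaction mass at exhaust velocity 0.01 c (momentum balance, dv << v_e).
v_e = 0.01 * C
m_reaction = M_EARTH * dv / v_e

print(f"energy to remove: {dE:.2e} J")
print(f"delta-v:          {dv / 1e3:.1f} km/s")
print(f"reaction mass:    {m_reaction:.1e} kg")
```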

Comment author: Nornagest 11 December 2014 10:28:47PM 1 point [-]

I wonder what the exhaust plume of an engine like that would look like, and how far away from it you'd have to be standing to still be capable of looking at anything after a second or two.

Comment author: gattsuru 08 December 2014 10:06:04PM 7 points [-]

Are there any good trust, value, or reputation metrics in the open source space? I've recently established a small internal-use Discourse forum and been rather appalled by the limitations of what is intended to be a next-generation system (status flag, number of posts, tagging), and from a quick overview most competitors don't seem to be much stronger. Even fairly specialist fora only seem marginally more capable.

This is obviously a really hard problem and conflux of many other hard problems, but it seems odd that there are so many obvious improvements available.

((Inspired somewhat by my frustration with Karma, but I'm honestly more interested in its relevance for outside situations.))

Comment author: fubarobfusco 08 December 2014 11:32:41PM 1 point [-]

I don't know of one. I doubt that everyone wants the same sort of thing out of such a metric. Just off the top of my head, some possible conflicts:

  • Is a post good because it attracts a lot of responses? Then a flamebait post that riles people into an unproductive squabble is a good post.
  • Is a post good because it leads to increased readership? Then spamming other forums to promote a post makes it a better post, and posting porn (or something else irrelevant that attracts attention) is really very good.
  • Is a post good because a lot of users upvote it? Then people who create sock-puppet accounts to upvote themselves are better posters; as are people who recruit their friends to mass-upvote their posts.
  • Is a post good because the moderator approves of it? Then as the forum becomes more popular, if the moderator has no additional time to review posts, a diminishing fraction of posts are good.

The old wiki-oid site Everything2 explicitly assigns "levels" to users, based on how popular their posts are. Users who have proven themselves have the ability to signal-boost posts they like with a super-upvote.

It seems to me that something analogous to PageRank would be an interesting approach: the estimated quality of a post is specifically an estimate of how likely a high-quality forum member is to appreciate that post. Long-term high-quality posters' upvotes should probably count for a lot more than newcomers' votes. And moderators or other central, core-team users should probably be able to manually adjust a poster's quality score to compensate for things like a formerly-good poster going off the deep end, the revelation that someone is a troll or saboteur, or (in the positive direction) someone of known-good offline reputation joining the forum.
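The PageRank analogy can be sketched concretely: treat each upvote as an endorsement edge and iterate, so that a vote from a high-scoring user is worth more than a newcomer's. A toy illustration (the damping constant, names, and vote graph are all hypothetical, not any existing forum's system):

```python
def reputation(votes, damping=0.85, iters=50):
    """votes maps each voter to the list of users they upvoted.
    Returns PageRank-style scores: endorsements from high-scoring
    users carry more weight than endorsements from newcomers."""
    users = set(votes)
    for voted in votes.values():
        users.update(voted)
    n = len(users)
    score = {u: 1.0 / n for u in users}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in users}
        for voter, voted in votes.items():
            if voted:
                share = damping * score[voter] / len(voted)
                for u in voted:
                    new[u] += share
        score = new
    return score

# Toy vote graph: carol is upvoted by two newcomers and in turn
# endorses dave, so dave inherits carol's accumulated weight.
scores = reputation({"alice": ["carol"], "bob": ["carol"],
                     "carol": ["dave"], "dave": []})
print(sorted(scores, key=scores.get, reverse=True))   # ['dave', 'carol', ...]
```

One vote each, yet dave outranks the rest, because his single endorsement comes from a well-endorsed user.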

Comment author: Viliam_Bur 09 December 2014 10:42:17AM 8 points [-]

Tangentially, is it possible for a good reputation metric to survive attacks in real life?

Imagine that you become e.g. a famous computer programmer. But although you are a celebrity among free software people, you fail to convert this fame to money. So you must keep a day job at a computer company which produces shitty software.

One day your boss will realize that you have high prestige in the given metric, and the company has low prestige. So the boss will ask you to "recommend" the company on your social network page (which would increase the company prestige and hopefully increase the profit; might decrease your prestige as a side effect). Maybe this would be illegal, but let's suppose it isn't, or that you are not in a position to refuse. Or you could imagine a more dramatic situation: you are a widely respected political or economical expert, it is 12 hours before election, and a political party has kidnapped your family and threatens to kill them unless you "recommend" this party, which according to their model would help them win the election.

In other words, even a digital system that works well could be vulnerable to attacks from outside of the system, where otherwise trustworthy people are forced to act against their will. A possible defense would be if people could somehow hide their votes; e.g. your boss might know that you have high prestige and the company has low prestige, but has no methods to verify whether you have "recommended" the company or not (so you could just lie that you did). But if we make everything secret, is there a way to verify whether the system is really working as described? (The owner of the system could just add 9000 trust points to his favorite political party and no one would ever find out.)

I suspect this is all confused and I am asking a wrong question. So feel free to answer to question I should have asked.

Comment author: kpreid 09 December 2014 06:07:27PM 3 points [-]

I don't have a solution for you, but a related probably-unsolvable problem is what some friends of mine call “cashing in your reputation capital”: having done the work to build up a reputation (for trustworthiness, in particular), you betray it in a profitable way and run.

… otherwise trustworthy people are forced to act against their will. … But if we make everything secret, is there a way to verify whether the system is really working as described?

This is a problem in elections. In the US (I believe depending on state) there are rules which are intended to prevent someone from being able to provide proof that they have voted a particular way (to make coercion futile), and the question then is whether the vote counting is accurate. I would suggest that the topic of designing fair elections contains the answer to your question insofar as an answer exists.

Comment author: alienist 11 December 2014 06:57:51AM 6 points [-]

In the US (I believe depending on state) there are rules which are intended to prevent someone from being able to provide proof that they have voted a particular way (to make coercion futile),

And then there are absentee ballots which potentially make said laws a joke.

Comment author: gattsuru 09 December 2014 08:19:03PM *  2 points [-]

There are simultaneously a large number of laws prohibiting employers from retaliating against persons for voting, and a number of accusations of retaliation for voting. So this isn't a theoretical issue. I'm not sure it's distinct from other methods of compromising trusted users -- the effects are similar whether the compromised node was beaten with a wrench, got brain-eaten, or just trusted Microsoft with their Certificates -- but it's a good demonstration that you simply can't trust any node inside a network.

(There's some interesting overlap with MIRI's value stability questions, but they're probably outside the scope of this thread and possibly only metaphor-level.)

Interestingly, there are some security metrics designed with the assumption that some number of their nodes will be compromised, and with some resistance to such attacks. I've not seen this expanded to reputation metrics, though, and there are technical limitations. Tor, for example, can only resist about a third of its nodes being compromised, and possibly fewer than that. Other setups have higher theoretical resistance, but are dependent on central high-value nodes that trade that resistance for vulnerability to spoofing.

It seems like there's some value in closing the gap between carrier wave and signal in reputation systems, rather than a discrete reputation system, but my sketched out implementations become computationally intractable quickly.

Comment author: Lumifer 09 December 2014 06:39:18PM 4 points [-]

Are there any good trust, value, or reputation metrics

The first problem is defining what do you want to measure. "Trust" and "reputation" are two-argument functions and "value" is notoriously vague.

Comment author: gattsuru 09 December 2014 08:32:08PM 4 points [-]

For clarity, I meant "trust" and "reputation" in the technical senses, where "trust" is authentication, and where "reputation" is an assessment or group of assessments for (ideally trusted) user ratings of another user.

But good point, especially for value systems.

Comment author: Lumifer 09 December 2014 09:10:41PM *  1 point [-]

I am still confused. When you say that trust is authentication, what is it that you authenticate? Do you mean trust in the same sense as "web of trust" in PGP-type crypto systems?

For reputation as an assessment of user ratings, you can obviously build a bunch of various metrics, but the real question is which one is the best. And that question implies another one: Best for what?

Note that weeding out idiots, sockpuppets, and trolls is much easier than constructing a useful-for-everyone ranking of legitimate users. Different people will expect and want your rankings to do different things.

Comment author: gattsuru 09 December 2014 11:34:00PM *  3 points [-]

what is it that you authenticate? Do you mean trust in the same sense as "web of trust" in PGP-type crypto systems?

For starters, a system to be sure that a user or service is the same user or service it was previously. Web of trusts /or/ a central authority would work, but honestly we run into limits even before the gap between electronic worlds and meatspace. PGP would be nice, but PGP itself is closed-source, and neither PGP nor OpenPGP/GPG is user-accessible enough to survive even in the e-mail sphere they were originally intended to operate in. SSL allows for server authentication (ignoring the technical issues), but isn't great for user authentication.

I'm not aware of any generalized implementation for other use, and the closest precursors (keychain management in Murmur/Mumble server control?) are both limited and intended to be application-specific. But at the same time, I recognize that I don't follow the security or open-source worlds as much as I should.

For reputation as an assessment of user ratings, you can obviously build a bunch of various metrics, but the real question is which one is the best. And that question implies another one: Best for what?

Right, it's not an easy problem to solve.

I'm more interested in if anyone's trying to solve it. I can see a lot of issues with a user-based reputation even in addition to the obvious limitation and tradeoffs that fubarobfusco provides -- a visible metric is more prone to being gamed but obscuring the metric reduces its utility as a feedback for 'good' posting, value drift without a defined root versus possible closure without, so on.

What surprises me is that there are so few attempts to improve the system beyond the basics. IP.Board, vBulletin, and phpBB plugins are usually pretty similar -- the best I've seen merely lets you disable them on a per-subfora basis rather than globally, and they otherwise use a single point score. Reddit uses the same Karma system whether you're answering a complex scientific question or making a bad joke. LessWrong improves on that only by allowing users to see how contentious a comment's scoring is. Discourse uses count of posts and tags, which is almost embarrassingly minimalistic. I've seen a few systems that make moderator and admin 'likes' count for more. I think that's about the fanciest.

I don't expect them to have an implementation that matches my desires, but I'm really surprised that there are no attempts to run multi-dimensional reputation systems, or to weight votes by length of post, age of poster, or spellcheck and capitalization thresholds. These might even be /bad/ decisions, but usually you see someone making them.

I expect Twitter or Facebook have something complex underneath the hood, but if they do, they're not talking about the specifics and not doing a very good job. Maybe it's their dominance in the social development community, but I dunno.

Comment author: Lumifer 10 December 2014 02:00:48AM 1 point [-]

For starters, a system to be sure that a user or service is the same user or service it was previously.

That seems to be pretty trivial. What's wrong with a username/password combo (besides all the usual things) or, if you want to get a bit more sophisticated, with having the user generate a private key for himself?

You don't need a web of trust or any central authority to verify that the user named X is in possession of a private key which the user named X had before.

I'm more interested in if anyone's trying to solve it.

Well, again, the critical question is: What are you really trying to achieve?

If you want the online equivalent of the meatspace reputation, well, first meatspace reputation does not exist as one convenient number, and second it's still a two-argument function.

there are no attempts to run multi-dimensional reputation systems, or to weight votes by length of post, age of poster, or spellcheck and capitalization thresholds.

Once again, with feeling :-D -- to which purpose? Generally speaking, if you run a forum all you need is a way to filter out idiots and trolls. Your regular users will figure out reputation on their own, and their conclusions will all be different. You can build an automated system to suit your fancy, but there's no guarantee that it will suit other people well (and, actually, a pretty solid bet that it won't).

I expect Twitter or FaceBook have something complex underneath the hood

Why would Twitter or FB bother assigning reputation to users? They want to filter out bad actors and maximize their eyeballs and their revenue which generally means keeping users sufficiently happy and well-measured.

Comment author: fubarobfusco 10 December 2014 02:30:11AM 2 points [-]

That seems to be pretty trivial. What's wrong with a username/password combo (besides all the usual things)

"All the usual things" are many, and some of them are quite wrong indeed.

If you need solid long-term authentication, outsource it to someone whose business depends on doing it right. Google for instance is really quite good at detecting unauthorized use of an account (i.e. your Gmail getting hacked). It's better (for a number of reasons) not to be beholden to a single authentication provider, though, which is why there are things like OpenID Connect that let users authenticate using Google, Facebook, or various other sources.

On the other hand, if you need authorization without (much) authentication — for instance, to let anonymous users delete their own posts, but not other people's — maybe you want tripcodes.

And if you need to detect sock puppets (one person pretending to be several people), you may have an easy time or you may be in hard machine-learning territory. (See the obvious recent thread for more.) Some services — like Wikipedia — seem to attract some really dedicated puppeteers.

Comment author: knb 08 December 2014 11:29:52PM 9 points [-]

Would it be possible to slow down or stop the rise of sea level (due to global warming) by pumping water out of the oceans and onto the continents?

Comment author: CBHacking 09 December 2014 12:10:17AM *  3 points [-]

Where does the water go? Assuming you want to reduce sea level by half an inch using this mechanism, you have to do the equivalent of covering the entire land area of earth in a full inch of water (what's worse, seawater; you'd want to desalinate it). Even assuming you can find room on land for all this water and the pump capacity to displace it all, what's to stop it from washing right back out to sea? Some of it can be used to refill aquifers, but the capacity of those is trivial next to that of the oceans. Some of it can be stored as ice and snow, but global warming will reduce (actually, has already quite visibly reduced) land glaciation; even if you can somehow induce the water to freeze, the heat you extract from it will have to go somewhere, and unless you can dump it out of the atmosphere entirely it will just contribute to the warming. The rest of the water will just flood the existing rivers in its mad rush to do what nearly all continental water is always doing anyhow: flowing to the sea.
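The land-versus-ocean ratio behind that estimate is a one-liner to check (rough reference areas):

```python
ocean_area = 361e6   # km^2, ~71% of Earth's surface
land_area = 149e6    # km^2, the remaining land

# Depth of water that must sit on land to drop the sea by half an inch
depth_on_land = 0.5 * ocean_area / land_area
print(f"{depth_on_land:.2f} inches")   # ~1.21 inches spread over all land
```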

Comment author: TheOtherDave 09 December 2014 06:56:31PM 2 points [-]

Clearly, the solution is to build a space elevator and ship water into orbit. We lower the sea levels, the water is there if we need it later, and in the meantime we get to enjoy the pretty rings.

(No, I'm not serious.)

Comment author: Vaniver 09 December 2014 07:02:03PM 0 points [-]

in the meantime we get to enjoy the pretty rings.

Now I'm curious how much energy it would take to set up a stable ring orbit made of ice crystals for Earth, or if that would be impossible without stationkeeping corrections.

Comment author: Lumifer 09 December 2014 07:29:53PM 1 point [-]

How long will ice survive in Earth's orbit, anyway?

Comment author: CBHacking 09 December 2014 11:26:13PM -1 points [-]

I think it would depend on the orbit? Obviously it would need to be in an orbit that does not collide with our artificial satellites, and it would need to be high enough to make atmospheric drag negligible, but that leaves a lot of potential orbits. I can't think of any reason ice would go away with any particular haste from any of them, but I'm not an expert in this area.

Orbital decay aside, why might ice (once placed into an at-the-time stable orbit) not survive?

Comment author: Lumifer 10 December 2014 01:49:15AM *  1 point [-]

why might ice (once placed into an at-the-time stable orbit) not survive?

Sun.

Solar radiation at 1 AU is about 1.3 kW/sq.m. Ice that is not permanently in the shade will disappear rather rapidly, I would think.

Comment author: CBHacking 10 December 2014 07:52:05AM 0 points [-]

I would think it would lose heat to space fast enough, but maybe not. I know heat dissipation is a major concern for spacecraft, but those are usually generating their own heat rather than just trying to dump what they pick up from the sun. What would happen to the ice / water? It's not like it can just evaporate into the atmosphere...

Comment author: RichardKennaway 10 December 2014 01:59:05PM *  3 points [-]

It's not like it can just evaporate into the atmosphere...

Vapour doesn't need an atmosphere to take it up. Empty space does just as well.

So, how long would a snowball in high orbit last? Sounds like a question for xkcd. A brief attempt at a lower bound that is probably a substantial underestimate:

How much energy has to be pumped in per kilogram to turn ice at whatever the "temperature" is in orbit into water vapour? Call that E. Let S be the solar insolation of 1.3 kW/m^2. Imagine the ice is a spherical cow, er, a rectangular block directly facing the sun. According to Wikipedia the albedo of sea ice is in the range 0.5 to 0.7. Take that as 0.6, so the fraction of energy retained is A = 0.4. The density of ice is D = 916.7 kg/m^3. Ignore radiative cooling, conduction to the cold side of the iceberg, and time spent in the Earth's shadow, and assume that the water vapour instantly vanishes. Then the surface will ablate at a rate of SA/ED m/s. Equivalently, ED/(86400 SA) days per metre.

For simplicity I'll take the ice to be at freezing point. Then:

E = 334 kJ/kg to melt + 420 kJ/kg to reach boiling point + 2260 kJ/kg to boil = 3014 kJ/kg.

For a lower starting temperature, increase E accordingly.

3014 * 916.7 / (86400 * 1.3 * 0.4) = 61 days per metre. Not all that long, but meanwhile, you've created a hazard for space flight and for the skyhook.

I suspect that ignoring radiative cooling will be the largest source of error here, but this isn't a black body, so I don't know how closely the Stefan-Boltzmann law will apply, and I haven't calculated the results if it did. (ETA: The black body temperature of the Moon is just under freezing.)

(ETA: fixed an error in the calculation of E, whereby I had 4200 instead of 420 kJ/kg to reach boiling point. Also, pasting in all the significant figures from the sources doesn't mean this is claimed to be anything more than a rough estimate.)
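The ablation rate can be checked in a few lines (same assumptions as above: no radiative cooling, ice starting at freezing point):

```python
E = (334 + 420 + 2260) * 1e3   # J/kg: melt + heat to 100 C + boil
D = 916.7                      # kg/m^3, density of ice
S = 1300.0                     # W/m^2, solar flux at 1 AU
A = 0.4                        # absorbed fraction (albedo 0.6)

# Seconds to ablate one metre of depth, converted to days
days_per_metre = E * D / (S * A * 86400)
print(f"{days_per_metre:.0f} days per metre")   # ~61
```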

Comment author: Lumifer 10 December 2014 03:38:13PM *  3 points [-]

to reach boiling point

This is vacuum -- all liquid water will boil immediately, at zero Celsius. Besides I'm sure there will be some sublimation of ice directly to water vapor.

In fact, looking at water's phase diagram, in high vacuum liquid water just doesn't exist so I think ice will simply sublimate without the intermediate liquid stage.

Comment author: Eniac 09 December 2014 01:31:18AM 1 point [-]

Well, this is not pumping, but it might be much more efficient: As I understand, the polar ice caps are in an equilibrium between snowfall and runoff. If you could somehow wall in a large portion of polar ice, such that it cannot flow away, it might rise to a much higher level and sequester enough water to make a difference in sea levels. A super-large version of a hydroelectric dam, in effect, for ice.

It might also help to have a very high wall around the patch to keep air from circulating, keeping the cold polar air where it is and reduce evaporation/sublimation.

Comment author: Falacer 09 December 2014 02:05:38AM 16 points [-]

We could really use a new Aral sea, but intuitively I'd expected that this would be a tiny dent in the depth of the oceans. So, to the maths:

Wikipedia claims that from 1960 to 1998 the volume of the Aral sea dropped from its 1960 amount of 1,100 km^3 by 80%.

I'm going to give that another 5% for more loss since then, as the South Aral Sea has now lost its eastern half entirely.

This gives ~1100 * .85 = 935km^3 of water that we're looking to replace.

The Earth is ~500m km^2 in surface area, approx. 70% of which is water = 350m km^2 in water.

935 km^3 over an area of 350m km^2 comes to a depth of 2.6 mm.

This is massively larger than I would have predicted, and it gets better. The current salinity of the Aral Sea is 100 g/l, which is way higher than that of seawater at 35 g/l, so we could pretty much pump the seawater straight in and still get a net environmental gain. In fact this is a solution to the crisis that has been previously proposed, although it looks like most people would rather dilute the seawater first.

To achieve the desired result of a 1 inch drop in sea level, we only need to find 9 equivalent projects around the world. Sadly, the only other one I know of is Lake Chad, which is significantly smaller than the Aral Sea. However, since the loss of the Aral Sea is due to over-intensive use of the water for farming, this gives us an idea of how much water can be contained on land in plants: I would expect that we might be able to get this amount again if we undertook a desalination/irrigation program in the Sahara.
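The arithmetic above can be reproduced directly (same round figures as in the comment):

```python
aral_lost = 1100 * 0.85        # km^3 of water lost, 85% of the 1960 volume
ocean_area = 350e6             # km^2, ~70% of Earth's ~500M km^2 surface

drop_mm = aral_lost / ocean_area * 1e6   # km of sea level -> mm
projects_per_inch = 25.4 / drop_mm
print(f"{drop_mm:.1f} mm per Aral Sea; {projects_per_inch:.1f} such projects per inch")   # ~2.7 mm and ~9.5
```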

Comment author: mwengler 11 December 2014 04:00:19PM 2 points [-]

Dead Sea and Salton Sea leap to mind as good projects.

Also could we store more water in the atmosphere? If we just poured water into a desert like the Sahara, most of it would evaporate before it flowed back to the sea. This would seem to raise the average moisture content of the atmosphere. Sure eventually it gets rained back down, but this would seem to be a feature more than a bug for a world that keeps looking for more fresh water. Indeed my mind is currently inventing interesting methods for moving the water around using purely the heat from the sun as an energy source.

Comment author: DanielLC 09 December 2014 06:07:14PM *  2 points [-]

One possibility would be to replace the ice caps by hand. Run a heated pipeline from the ocean to the icecaps, pump water there, and let it freeze on its own. I don't know how well that would work, and I suspect you're better off just letting sea levels rise. If you need the land that bad, just make floating platforms.

Edit: Replace "ice caps" with "Antarctica". Adding ice to the northern icecap, or even the southern one out where it's floating, won't alter the sea level, since floating objects displace their mass in water.

Comment author: mwengler 11 December 2014 03:54:59PM 9 points [-]

I recommend googling "geoengineering global warming" and reading some of the top hits. There are numerous proposals for reducing or reversing global warming which are astoundingly less expensive than reducing carbon dioxide emissions, and also much more likely to be effective.

To your direct question about storing more water on land, this would be a geoengineering project. Some straightforward approaches to doing it:

Use rainfall as your "pump" in order to save having to build massive, energy-hungry water pumps. Without any effort on our part, nature naturally lifts water a km or more above sea level and then drops it, much of it onto land. That water generally is funneled back to the ocean in rivers. With just the construction of walls, some rivers might be prevented from draining into the ocean. Large areas would be flooded by the river, storing water other than in the ocean.

Use gravity as your pump. There are many large locations on earth that are below sea level. Aqueducts requiring no net pumping energy could be built that would essentially gravity-feed ocean water into these areas. These areas can be hundreds of meters below sea level, so if even 1% of the earth's surface is 100 m below sea level, then the oceans could be lowered by a bit more than 1 m by filling these depressions with ocean water.

Of course either one of these approaches will cause massive other changes, although probably in a positive direction as far as climate is concerned. More water surface on the planet should mean more evaporation of water, which creates more clouds, which reflect more energy from the sun, lowering the heating of the earth. But of course a non-trivial analysis might yield a rich detail of effects worth pondering.

In the past, features like the Salton Sea and the Dead Sea have been filled by fresh-water rivers, essentially meaning that rain was used as the pump to fill them. The demand for fresh water has stopped these features from being filled. It seems to me that an aqueduct to refill these features with salt water from the ocean would be relatively benign in impact, since in nature these features have been fuller in the past, and so the impact of that water might be blessed by humanity as "natural" instead of cursed by humanity as "man made."
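The gravity-fed depression arithmetic is quick to verify (the 1% figure is a hypothetical round number, as in the comment):

```python
earth_surface = 510e6    # km^2, total surface of Earth
ocean_area = 361e6       # km^2
frac_below = 0.01        # assume 1% of the surface sits 100 m below sea level
depth_km = 0.1           # 100 m expressed in km

stored = earth_surface * frac_below * depth_km   # km^3 of seawater diverted
sea_drop_m = stored / ocean_area * 1000          # km of sea level -> m
print(f"{sea_drop_m:.2f} m")   # ~1.41 m, i.e. "a bit more than 1 m"
```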

Comment author: Capla 12 December 2014 02:06:07AM *  0 points [-]

This should be a What If question. I'd like to see what Randall would do with it.

Comment author: knb 12 December 2014 04:29:28AM 0 points [-]

I don't know what you mean. Who is Randal?

Comment author: Capla 12 December 2014 05:07:04AM 2 points [-]

Randall Munroe is the person who draws xkcd. He also has a blog where he gives in-depth answers to unusual questions.

Comment author: timujin 09 December 2014 09:15:30AM 5 points [-]

I have a constant impression that everyone around me is more competent than me at everything. Does it actually mean that I am, or is there some sort of strong psychological effect that can create that impression, even if it is not actually true? If there is, is it a problem you should see your therapist about?

Comment author: LizzardWizzard 09 December 2014 10:30:45AM 1 point [-]

I suppose the problem emerged only because you communicate exclusively with people of your own sort and level of awareness. Try going on a trip to some rural village, or start conversations with taxi drivers, dishwashers, janitors, cooks, security guards, etc.

Comment author: NancyLebovitz 09 December 2014 10:34:29AM 3 points [-]

Possibly parallel-- I've had a feeling for a long time that something bad was about to happen. Relatively recently, I've come to believe that this isn't necessarily an accurate intuition about the world, it's muscle tightness in my abdomen. It's probably part of a larger pattern, since just letting go in the area where I feel it doesn't make much difference.

I believe that patterns of muscle tension and emotions are related and tend to maintain each other.

It's extremely unlikely that everyone is more competent than you at everything. If nothing else, your writing is better than that of a high proportion of people on the internet. Also, a lot of people have painful mental habits and have no idea that they have a problem.

More generally, you could explore the idea of everyone being more competent than you at everything. Is there evidence for this? Evidence against it? Is it likely that you're at the bottom of ability at everything?

This sounds to me like something worth taking to a therapist, bearing in mind that you may have to try more than one therapist to find one that's a good fit.

I believe there's strong psychological effect which can create that impression-- growing up around people who expect you to be incompetent. Now that I think about it, there may be genetic vulnerability involved, too.

Possibly worth exploring: free monthly Feldenkrais exercise sessions -- these are patterns of gentle movement which produce deep relaxation and easier movement. The reason I think you can get some evidence about your situation by trying Feldenkrais is that, if you find your belief about other people being more competent at everything goes away, even briefly, then you have some evidence that the belief is habitual.

Comment author: timujin 09 December 2014 11:34:27AM 1 point [-]

If nothing else, your writing is better than that of a high proportion of people on the internet.

Do you know me?

More generally, you could explore the idea of everyone being more competent than you at everything. Is there evidence for this? Evidence against it? Is it likely that you're at the bottom of ability at everything?

I find a lot of evidence for it, but I am not sure I am not being selective. For example, I am the only one in my peer group who never did any extra-curricular activities at school. While everyone had something like sports or hobbies, I seemed to only study at school and waste all my other time surfing the internet and playing the same video games over and over.

Comment author: NancyLebovitz 09 December 2014 11:45:17AM 0 points [-]

I don't think I know you, but I'm not that great at remembering people. I made the claim about your writing because I've spent a lot of time online.

I'm sure you're being selective about the people you're comparing yourself to.

Comment author: MathiasZaman 09 December 2014 12:01:28PM 0 points [-]

I seemed to only study at school and waste all my other time surfing the internet and playing the same video games over and over.

Obvious question: Are you better at those games than other people? (On average, don't compare yourself to the elite.)

How easy did studying come to you?

Comment author: timujin 09 December 2014 07:10:13PM 1 point [-]

At THOSE games? Yes. I can complete about half of American McGee's Alice blindfolded. Other games? General gaming? No. Or, okay, I am better than non-gamers, but my kinda-gamer peers are curb-stomping me at multiplayer in every game.

Studying - very easy. Now, when I am a university student - quite hard.

Comment author: MathiasZaman 10 December 2014 01:19:53PM 6 points [-]

Studying - very easy. Now, when I am a university student - quite hard.

Seems like you fell prey to the classic scenario of "being intelligent enough to breeze through high school and all I ended up with is a crappy work ethic."

University is as good a place as any to fix this problem. First of all, I encourage you to do all the things people tell you you should do, but most people don't: read up before classes, review after classes, read the extra material, ask your professors questions or for help, schedule periodic review sessions of the stuff you're supposed to know... You'll regret not doing those things when you get your degree but don't feel very competent about your knowledge. Try to make a habit out of this and it'll get easier in other aspects of your life.

And try new things. This is probably a cliché in the LW-sphere by now, but really try a lot of new things.

Comment author: timujin 10 December 2014 01:55:59PM 0 points [-]

Thanks. Still, should I take it as "yes, you are less competent than people around you"?

Comment author: polymathwannabe 10 December 2014 02:29:53PM 2 points [-]

Maybe just less disciplined than you need to be. "Less competent" is too confusingly relative to mean anything solid.

Comment author: timujin 10 December 2014 02:37:49PM 1 point [-]

Well, here's a confusing part. I didn't tell the whole truth in the parent post; there are actually two areas in which I am probably more competent than my peers, where others openly envy me instead of the other way around. One is the ability to speak English (a foreign language; most of my peers wouldn't be able to ask this question here), the other is discipline. Everyone actually envies me for almost never procrastinating, never forgetting anything, etc. Are we talking about different disciplines here?

Comment author: polymathwannabe 10 December 2014 02:45:58PM 0 points [-]

If you already have discipline, what exactly is the difficulty you're finding to study now as compared to previous years?

Comment author: ChristianKl 09 December 2014 12:41:31PM 5 points [-]

The idea that playing an instrument is a hobby while playing a video game isn't is completely cultural. It says something about values but little about competence.

Comment author: NancyLebovitz 09 December 2014 04:04:01PM 1 point [-]

Having a background belief that you're worse than everyone at everything probably lowered your initiative.

Comment author: mwengler 11 December 2014 03:03:39PM 1 point [-]

I've had a feeling for a long time that something bad was about to happen.

Nancy, I believe you are describing anxiety: that you are anxious, and that if you went to a psychologist for therapy and were covered by insurance, they would list your diagnosis on the reimbursement form as "generalized anxiety disorder."

I say this not as a psychologist but as someone who was anxious much of his life. For me it was worth doing regular talking therapy and (it seems to me) hacking my anxiety levels slowly downward through directed introspection. I am still more timid than I would like in situations where, for example, I might be very direct telling a woman (of the appropriate sex) I love her, or putting my own ideas forward forcefully at work. But all of these things I do better now than I did in the past, and I don't consider my self-adjustment to be finished yet.

Anyway, if you haven't named what is happening to you as "anxiety," it might be helpful to consider that some of what has been learned about anxiety over time might be interesting to you, and that people who are discussing anxiety may often be discussing something relevant to you.

Comment author: Viliam_Bur 09 December 2014 10:50:04AM 6 points [-]

Impostor syndrome:

Despite external evidence of their competence, those with the syndrome remain convinced that they are frauds and do not deserve the success they have achieved. Proof of success is dismissed as luck, timing, or as a result of deceiving others into thinking they are more intelligent and competent than they believe themselves to be.

Psychological research done in the early 1980s estimated that two out of five successful people consider themselves frauds and other studies have found that 70 percent of all people feel like impostors at one time or another. It is not considered a psychological disorder, and is not among the conditions described in the Diagnostic and Statistical Manual of Mental Disorders.

Comment author: timujin 09 December 2014 11:30:42AM 3 points [-]

Err, that's not it. I am no more successful than them. Or, at least, I kinda feel that everyone else is more successful than me as well.

Comment author: MathiasZaman 09 December 2014 12:00:18PM *  0 points [-]

I frequently feel similar and I haven't found a good way to deal with those feelings, but it's implausible that everyone around you is more competent at everything. Some things to take into account:

  • Who are you comparing yourself to? Peers? Everyone you meet? Successful people?
  • What traits are you comparing? It's unlikely that someone who is, for example, better at math than you are is also superior in every other area.
  • Maybe you haven't found your advantage or a way to exploit it.
  • Maybe you haven't spent enough time on one thing to get really good at it.

Long shot: Do you think you might have ADHD? (pdf warning) Alternatively, go over the diagnostic criteria

Comment author: gjm 09 December 2014 12:14:55PM 3 points [-]

Your link is broken because it has parentheses in the URL. Escape them with backslashes to unbreak it.

Comment author: MathiasZaman 09 December 2014 12:18:55PM 3 points [-]

Thank you very much.

Comment author: gjm 09 December 2014 01:09:15PM 3 points [-]

You're welcome!

Comment author: elharo 09 December 2014 12:26:23PM *  1 point [-]

Possible, but unlikely. We're all just winging it and as others have pointed out, impostor syndrome is a thing.

Comment author: IlyaShpitser 09 December 2014 01:47:07PM *  2 points [-]

There are two separate issues: morale management and being calibrated about your own abilities.

I think the best way to be well-calibrated is to approximate PageRank: to get a sense of your competence, don't ask yourself; instead, average the opinions of others who are themselves considered competent and have no incentive to mislead you (this last bit is tricky, and the extracting process may have to be slightly indirect).
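To make that concrete, here is a tiny, purely illustrative version of that PageRank-style averaging (the function name and graph are made up; a real version would have to extract endorsements indirectly, as noted above): everyone's score is the endorsement-weighted average of the scores of the people who vouch for them, iterated with the usual damping so it settles.

```python
def competence_scores(endorsements, iters=50, damping=0.85):
    """endorsements[i][j] = 1 if person i considers person j competent.
    Returns scores where being vouched for by well-regarded people
    counts for more than a raw count of endorsements."""
    n = len(endorsements)
    scores = [1.0 / n] * n
    for _ in range(iters):
        # Everyone keeps a small baseline; the rest flows along endorsements.
        new = [(1.0 - damping) / n] * n
        for i, row in enumerate(endorsements):
            outgoing = sum(row)
            if outgoing == 0:
                continue  # someone who vouches for nobody passes nothing on
            for j, w in enumerate(row):
                if w:
                    new[j] += damping * scores[i] * w / outgoing
        scores = new
    return scores

# Two people vouch for person 0; person 0 vouches for both of them.
# Person 0 ends up with the highest score.
print(competence_scores([[0, 1, 1], [1, 0, 0], [1, 0, 0]]))
```

The point of the iteration is exactly the indirection mentioned above: an endorsement from someone who is themselves well-endorsed carries more weight than one from a random bystander.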

Morale is hard, and person specific. My experience is that in long term projects/goals, morale becomes a serious problem long before the situation actually becomes bad. I think having "wolverine morale" ("You know what Mr. Grizzly? You look like a wuss, I can totally take you!") is a huge chunk of success, bigger than raw ability.

Comment author: Lumifer 09 December 2014 06:46:03PM 0 points [-]

I think having "wolverine morale" ("You know what Mr. Grizzly? You look like a wuss, I can totally take you!") is a huge chunk of success, bigger than raw ability.

Is Zuckerberg's "Move fast, break things" similar/related?

Comment author: Toggle 09 December 2014 06:37:29PM *  16 points [-]

Reminds me of something Scott said once:

And when I tried to analyze my certainty that – even despite the whole multiple intelligences thing – I couldn’t possibly be as good as them, it boiled down to something like this: they were talented at hard things, but I was only talented at easy things.

It took me about ten years to figure out the flaw in this argument, by the way.

Comment author: Gunnar_Zarncke 09 December 2014 09:34:42PM *  5 points [-]

This reminds me of my criterion for learning: "You have understood something when it appears to be easy." Mathematicians call this state 'trivial'. It has become easy because you trained on the topic until the key aspects became part of your unconscious competence. Then it appears easy to you - because you no longer need to think about it.

Comment author: Gondolinian 12 December 2014 01:28:23AM *  7 points [-]

See also: The Illusion of Winning by Scott Adams (h/t Kaj_Sotala)

Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.

But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.

I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.

It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes and simply compared. It feels redundant to play the actual games.

Comment author: [deleted] 09 December 2014 06:40:34PM 4 points [-]

I think people are quick to challenge this type of impression because it pattern matches to known cognitive distortions involved in things like depression, or known insecurities in certain competitive situations.

For example, consider that most everyone will structure their lives such that their weaknesses are downplayed and their positive features are more prominent. This can happen either by choice of activity (e.g. the stereotypical geek avoids social games) or by more overt communication filtering (e.g. most people don't talk about their anger problems). Accordingly, it's never hard to find information that confirms your own relative incompetence, if there's some emotional tendency to look for it.

Aside from that, a great question is "to what ends am I making this comparison?" I find it unlikely that you have a purely academic interest in the question of your relative competence.

First, it can often be useful to know your relative competence in a specific competitive domain. But even here, this information is only one part of your decision process: You may be okay with e.g. choosing a lower expected rank in one career over a higher rank in another because you enjoy the work more, or find it more compatible with your values, or because it pays better, or leaves more time for your family, or you're risk averse, or it's more altruistic, etc. But knowing your likely rank along some dimension will tell you a bit about the likely pay-offs of competing along that dimension.

But what is the use of making an across-the-board self-comparison?

Suppose you constructed some general measure of competence across all domains. Suppose you found out you were below average (or even above average). Then what? It seems you're still in the same situation as before: You still must choose how to spend your time. The general self-comparison measure is nothing more than the aggregate of your expected relative ranks on specific sub-domains, which are more relevant to any specific choice. And as I said above, your expected rank in some area is far from the only bit of information you care about.

As an aside, a positive use for a self-comparison is to provide a role-model. If you find yourself unfavorably compared to almost everyone, consider yourself lucky that you have so many role-models to choose from! Since you are probably like other people in most respects, you can expect to find low-hanging fruit in many areas where you have poor relative performance.

But if you find (as many people will) that you've hit the point of diminishing returns regarding the time you spend comparing yourself to others, perhaps you can simply recognize this and realize that it's neither cowardly nor avoidant to spend your mental energy elsewhere.

Comment author: Lumifer 09 December 2014 06:53:17PM 0 points [-]

Is that basically a self-confidence problem?

Comment author: timujin 09 December 2014 07:01:59PM 2 points [-]

Is it? I don't know.

Comment author: Lumifer 09 December 2014 07:34:26PM 0 points [-]

Well, does it impact what you are willing to do or try? Or it's just an abstract "I wish I were as cool" feeling?

If you imagine yourself lacking that perception (e.g. imagine everyone's IQ -- except yours -- dropping by 20 points), would the things you do in life change?

Comment author: timujin 09 December 2014 08:37:55PM 0 points [-]

Guesses here. I would be taking more risks in areas where success depends on competition. I would become less conforming, more arrogant and cynical. I would care less about producing good art, and good things in general. I would try less to improve my social skills, empathy and networking, and focus more on self-sufficiency. I wouldn't have asked this question here, on LW.

Comment author: mwengler 11 December 2014 03:12:12PM 1 point [-]

I personally am a fan of talking therapy. If you are thinking something is worth asking a therapist about, it is worth asking a therapist about. But beyond the generalities, thinking you are not good enough is absolutely right in the targets of the kinds of things it can be helpful to discuss with a therapist.

Consider the propositions: 1) everyone is more competent than you at everything and 2) you can carry on a coherent conversation on Less Wrong. I am pretty sure that these are mutually exclusive propositions. I'm pretty sure just from reading some of your comments that you are more competent than plenty of other people at a reasonable range of intellectual pursuits.

Anything you can talk to a therapist about you can talk to your friends about. Do they think you are less competent than everybody else? They might point out to you in a discussion some fairly obvious evidence for or against this proposition that you are overlooking.

Comment author: EphemeralNight 12 December 2014 03:26:22AM 2 points [-]

I sometimes have a similar experience, and when I do, it is almost always simply an effect of my own standards of competence being higher than those around me.

Imagine some sort of problem arises in the presence of a small group. The members of that group look at each other, and whoever signals the most confidence gets first crack at the problem. But this more-confident person then does not reveal any knowledge or skill that the others do not possess, because said confidence was entirely due to a higher willingness to potentially make the problem worse through trial and error.

So, in this scenario, feeling less competent does not mean you are less competent; it means you are more risk-averse. Do you have a generalized paralyzing fear of making the problem worse? If so, welcome to the club. If not, never mind.

Comment author: Anatoly_Vorobey 09 December 2014 10:35:29PM *  7 points [-]

Is there a causal link between being relatively lonely and isolated during school years and (higher chance of) ending up a more intelligent, less shallow, more successful adult?

Imagine that you have a pre-school child who has socialization problems, finds it difficult to do anything in a group of other kids, to acquire friends, etc., but cognitively the kid's fine. If nothing changes, the kid is looking at being shunned or mocked as weird throughout school. You work hard on overcoming the social issues, maybe you go with the kid to a therapist, you arrange play-dates, you play-act social scenarios with them...

Then your friend comes up to have a heart-to-heart talk with you. Look, your friend says. You were a nerd at school. I was a nerd at school. We each had one or two friends at best and never hung out with popular kids. We were never part of any crowd. Instead we read books under our desks during lessons and read SF novels during the breaks and read science encyclopedias during dinner at home, and started programming at 10, and and and. Now you're working so hard to give your kid a full social life. You barely had any, are you sure now you'd rather you had it otherwise? Let me be frank. You have a smart kid. It's normal for a smart kid to be kind of lonely throughout school, and never hang out with lots of other kids, and read books instead. It builds substance. Having a lousy social life is not the failure scenario. The failure scenario is to have a very full and happy school experience and end up a ditzy adolescent. You should worry about that much much more, and distribute your efforts accordingly.

Is your friend completely asinine, or do they have a point?

Comment author: philh 09 December 2014 10:57:17PM 6 points [-]

My friend isn't obviously-to-me wrong, but their argument is unconvincing to me.

It's normal for a smart kid to be kind of lonely - if true, that's sad, and by default we should try to fix it.

It builds substance - citation needed. It seems like it could just as easily build insecurity, resentment, etc.

Lousy social life - this is a failure mode. It might not be the worst one, but it seems like the most likely one, so deserving of attention.

Ditzy adolescent - how likely is this?

FWIW, I'm an adult who was kind of lonely as a kid, and on the margin I think that having a more active social life then would have had positive effects on me now.

Comment author: NancyLebovitz 10 December 2014 04:16:44AM 3 points [-]

There may be a choice between a lot of time thinking/learning vs. a lot of time socializing.

It seems to me that a lot of famous creative people were childhood invalids, though I haven't heard of any such from recent decades. It may be that the right level of invalidism isn't common any more.

Comment author: dxu 11 December 2014 05:08:15AM *  5 points [-]

It's normal for a smart kid to be kind of lonely - if true, that's sad, and by default we should try to fix it.

True, but it may be one of those problems that's just not fixable without seriously restructuring the school system, especially if something like Viliam_Bur's theory is true.

It builds substance - citation neded. It seems like it could just as easily build insecurity, resentment, etc.

Speaking from experience, I can tell you that I know a lot more than any of my peers (I'm 16), and practically all of that is due to the reading I did and am still doing. That reading was a direct result of my isolation and would likely not have occurred had I been more socially accepted. I should add that I have never once felt resentment or insecurity due to this, though I have developed a slight sense of superiority. (That last part is something I am working to fix.)

Lousy social life - this is a failure mode. It might not be the worst one, but it seems like the most likely one, so deserving of attention.

I suppose this one depends on how you define a "failure mode". I have never viewed my lack of social life as a bad thing or even a hindrance, and it doesn't seem like it will have many long-term effects either--it's not like I'll be regularly interacting with my current peers for the rest of my life.

Ditzy adolescent - how likely is this?

Again, this depends on how you define "ditzy". Based on my observations of a typical high school student at my age, I would not hesitate to classify over 90% of them as "ditzy", if by "ditzy" you mean "playing social status games that will have little impact later on in life". I shudder at the thought of ever becoming like that, which to me sounds like a much worse prospect than not having much of a social life.

FWIW, I'm an adult who was kind of lonely as a kid, and on the margin I think that having a more active social life then would have had positive effects on me now.

I see. Well, to each his own. I myself cannot imagine growing up with anything other than the childhood I did, but that may just be lack of imagination on my part. Who knows; maybe I would have turned out better than I did if I had had more social interaction during childhood. Then again, I might not have. Without concrete data, it's really hard to say.

Comment author: Viliam_Bur 09 December 2014 11:15:38PM 10 points [-]

Seems to me that very high intelligence can cause problems with socialization: you are different from your peers, so it is more difficult for you to model them, and for them to model you. You see each other as "weird". (Similar problem for very low intelligence.) Intelligence causes loneliness, not the other way round.

But this depends on the environment. If you are highly intelligent person surrounded by enough highly intelligent people, then you do have a company of intellectual peers, and you will not feel alone.

I am not sure about the relation between reading many books and being "less shallow". Do intelligent kids surrounded by intelligent kids also read a lot?

Comment author: dxu 11 December 2014 04:53:11AM *  4 points [-]

All of this is very true (for me, anyway--typical mind fallacy and all that). High intelligence does seem to cause social isolation in most situations. However, I also agree with this:

But this depends on the environment. If you are highly intelligent person surrounded by enough highly intelligent people, then you do have a company of intellectual peers, and you will not feel alone.

High intelligence does not intrinsically have a negative effect on your social skills. Rather, I feel that it's the lack of peers that does that. Lack of peers leads to lack of relatability leads to lack of socialization leads to lack of practice leads to (eventually) poor social skills. Worse yet, eventually that starts feeling like the norm to you; it no longer feels strange to be the only one without any real friends. When you do find a suitable social group, on the other hand, I can testify from experience that the feeling is absolutely exhilarating. That's pretty much the main reason I'm glad I found Less Wrong.

Comment author: alienist 11 December 2014 05:24:24AM 9 points [-]

Here is Paul Graham's essay on the subject.

Comment author: Toggle 09 December 2014 10:37:17PM *  3 points [-]

Maneki Neko is a short story about an AI that manages a kind of gift economy. It's an enjoyable read.

I've been curious about this 'class' of systems for a while now, but I don't think I know enough about economics to ask the questions well. For example, the story supplies a superintelligence to function as a competent central manager, but could such a gift network theoretically exist without being centrally managed (and without trivially reducing to modern forms of currency exchange)? Could a variant of Watson be used to automate the distribution of capital in the same way that it makes a medical diagnosis? And so on.

In particular, I'm looking for the intellectual tools that would be used to ask these questions in a more rigorous way; it would be great if I had better ways of figuring out which of these questions are obviously stupid and which are not. Specific disciplines in economics or game theory, perhaps. Things along the lines of LW's Mechanism Design sequence would be fantastic. Can anyone give me a few pointers?

Comment author: ChristianKl 10 December 2014 03:22:30PM 1 point [-]

Could a variant of Watson be used to automate the distribution of capital in the same way that it makes a medical diagnosis?

The stock market has a lot of capable AIs that manage capital allocation.

Comment author: Toggle 10 December 2014 07:15:30PM 1 point [-]

Fair point. It's my understanding that this is limited to rapid day trades, with implications for the price of a stock but not cash-on-hand for the actual company. I was imagining something more like a helper algorithm for venture capital or angel investors, comparable to the PGMs underpinning the insurance industry.

Comment author: badger 10 December 2014 07:35:24PM 7 points [-]

My intuition is that every good allocation system will use prices somewhere, whether the users see them or not. The main perk of the story's economy is getting things you need without having to explicitly decide to buy them (i.e., the down-on-his-luck guy unexpectedly gifted his favorite coffee), and that could be implemented through individual AI agents rather than a central AI.

Fleshing out how this might play out, if I'm feeling sick, my AI agent notices and broadcasts a bid for hot soup. The agents of people nearby respond with offers. The lowest offer might come from someone already in a soup shop who lives next door to me since they'll hardly have to go out of their way. Their agent would notify them to buy something extra and deliver it to me. Once the task is fulfilled, my agent would send the agreed-upon payment. As long as the agents are well-calibrated to our needs and costs, it'd feel like a great gift even if there are auctions and payments behind the scenes.
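A toy sketch of that clearing step (the function name, agents, and numbers are all made up for illustration; real agents would need far richer cost models): a sick user's agent broadcasts a request, nearby agents respond with the cost their owner would incur to fulfil it, and the lowest offer wins the task.

```python
def run_gift_auction(request, offers):
    """Return (winning_agent, agreed_payment) for the lowest-cost offer,
    or None if nobody responded. `offers` maps agent id -> cost."""
    if not offers:
        return None
    winner = min(offers, key=offers.get)
    return winner, offers[winner]

# The neighbour already sitting in the soup shop hardly goes out of
# their way, so their agent bids lowest and gets the job.
offers = {"neighbour_in_soup_shop": 2.50, "stranger_across_town": 9.00}
print(run_gift_auction("hot soup", offers))  # → ('neighbour_in_soup_shop', 2.5)
```

As long as each agent's cost estimates are well calibrated, the recipient just sees soup appear; the auction and payment stay invisible behind the scenes.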

For pointers, general equilibrium theory studies how to allocate all the goods in an economy. Depending on how you squint at the model, it could be studying centralized or decentralized markets based on money or pure exchange. A Toolbox for Economic Design is a fairly accessible textbook on mechanism design that covers lots of allocation topics.

Comment author: Toggle 10 December 2014 08:23:06PM 1 point [-]

This looks very useful. Thanks!

Another one of those interesting questions is whether the pricing system must be equivalent to currency exchange. To what extent are the traditional modes of transaction a legacy of the limitations behind physical coinage, and what degrees of freedom are offered by ubiquitous computation and connectivity? Etc. (I have a lot of questions.)

Comment author: badger 10 December 2014 09:09:08PM 1 point [-]

Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggests it must be equivalent in theory.

Eric Budish has done some interesting work changing the course allocation system at Wharton to use general equilibrium theory behind the scenes. In the previous system, courses were allocated via a fake money auction where students had to actually make bids. In the new system, students submit preferences and the allocation is computed as the equilibrium starting from "equal incomes".

What benefits do you think a different system might provide, or what problems does monetary exchange have that you're trying to avoid? Extra computation and connectivity should just open opportunities for new markets and dynamic pricing, rather than suggest we need something new.

Comment author: Lumifer 10 December 2014 07:35:30PM 2 points [-]

I'm looking for the intellectual tools that would be used to ask these questions in a more rigorous way

The field of study that deals with this is called economics. Any reason an intro textbook won't suit you?

Comment author: torekp 10 December 2014 12:11:03AM 5 points [-]

True, false, or neither?: It is currently an open/controversial/speculative question in physics whether time is discretized.

Comment author: polymathwannabe 10 December 2014 01:37:28AM 8 points [-]

The Wikipedia article on Planck time says:

Theoretically, this is the smallest time measurement that will ever be possible, roughly 10^−43 seconds. Within the framework of the laws of physics as we understand them today, for times less than one Planck time apart, we can neither measure nor detect any change.

However, the article on Chronon says:

The Planck time is a theoretical lower-bound on the length of time that could exist between two connected events, but it is not a quantization of time itself since there is no requirement that the time between two events be separated by a discrete number of Planck times.

Comment author: Grothor 10 December 2014 05:08:48AM 1 point [-]

Many things in our best models of physics are discrete, but as far as I know, our coordinates (time, space, or four-dimensional space-time coordinates) are never discrete. Even something like quantum field theory, which treats things in a non-intuitively discrete way, does not do this. For example, we might view the process of an electron scattering off another electron as an exchange of many discrete photons between the two electrons, but it is all written in terms of integrals or derivatives, rather than differences or sums.

Comment author: Punoxysm 10 December 2014 01:58:09AM *  0 points [-]

If the Bay Area has such a high concentration of rationalists, shouldn't it have more-rational-than-average housing, transportation and legislation?

Sadly, I know the stupid answers to these stupid questions. I just want to vent a bit.

Comment author: Lumifer 10 December 2014 02:03:56AM 3 points [-]

It is mostly rational for generating advantage to people with political pull and power.

Comment author: fubarobfusco 10 December 2014 02:13:35AM 5 points [-]

Are rationalists more or less likely than non-rationalists to participate in local government?

Comment author: NancyLebovitz 10 December 2014 04:19:06AM 8 points [-]

The Bay Area has a high concentration of rationalists compared to most places, but I don't think it's very high compared to the local population. How many rationalists are we talking about?

Comment author: Ebthgidr 10 December 2014 03:04:02AM 3 points [-]

A question about Löb's theorem: assume not provable(X). Then, by the rules of if-then statements, "if provable(X) then X" is provable. But then, by Löb's theorem, provable(X), which is a contradiction. What am I missing here?

Comment author: DanielFilan 10 December 2014 03:35:53AM 2 points [-]

I'm not sure how you're getting from not provable(X) to provable(provable(X) -> X), and I think you might be mixing meta levels. If you could prove not provable(X), then I think you could prove (provable(X) ->X), which then gives you provable(X). Perhaps the solution is that you can never prove not provable(X)? I'm not sure about this though.

Comment author: Ebthgidr 10 December 2014 10:37:36AM 0 points [-]

I forget the formal name for the theorem, but isn't (if X then Y) iff (not-X or Y) provable in PA? Because I was pretty sure that's a fundamental theorem in first-order logic. Your solution is the one that looked best, but it still feels wrong. Here's why: Say P is provable. Then not-P is provably false. Then not(provable(not-P)) is provable. Not being able to prove not(provable(X)) means nothing is provable.

Comment author: DanielFilan 10 December 2014 11:09:14AM *  0 points [-]

You're right that (if X then Y) is just fancy notation for (not(X) or Y). However, I think you're mixing up levels of where things are being proved. For the purposes of the rest of this comment, I'll use provable(X) to mean that PA or whatever proves X, and not that we can prove X. Now, suppose provable(P). Then provable(not(not(P))) is derivable in PA. You then claim that not(provable(not(P))) follows in PA, that is to say, that provable(not(Q)) -> not(provable(Q)). However, this is precisely the statement that PA is consistent, which is not provable in PA. Therefore, even though we can go on to prove not(provable(not(P))), PA can't, so that last step doesn't work.
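Spelled out, with □X abbreviating provable(X) inside PA (this is just the standard route through Gödel's second incompleteness theorem, nothing beyond what's said above):

```latex
% Suppose PA proved the schema  \Box\neg Q \to \neg\Box Q  for all Q.
% Instantiating with Q = \bot (falsity):
\begin{align*}
  &\vdash_{\mathrm{PA}} \neg\bot && \text{(trivially)}\\
  &\vdash_{\mathrm{PA}} \Box\neg\bot && \text{(PA verifies its own proofs)}\\
  &\vdash_{\mathrm{PA}} \Box\neg\bot \to \neg\Box\bot && \text{(instance of the schema)}\\
  &\vdash_{\mathrm{PA}} \neg\Box\bot && \text{(modus ponens)}
\end{align*}
% But \neg\Box\bot is exactly Con(PA), which PA cannot prove if it is
% consistent; so the schema is unprovable in PA.
```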

Comment author: Ebthgidr 10 December 2014 11:17:23AM 0 points [-]

Ok, thanks for clearing that up.

Comment author: Grothor 10 December 2014 05:31:19AM 16 points [-]

It seems like we suck at using scales "from one to ten". Video game reviews nearly always give a 7-10 rating. Competitions with scores from judges seem to always give numbers between eight and ten, unless you crash or fall, and get a five or six. If I tell someone my mood is a 5/10, they seem to think I'm having a bad day. That is, we seem to compress things into the last few numbers of the scale. Does anybody know why this happens? Possible explanations that come to mind include:

  • People are scoring with reference to the high end, where "nothing is wrong", and they do not want to label things as more than two or three points worse than perfect

  • People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)

  • I'm succumbing to confirmation bias and this isn't a real pattern

Comment author: jaime2000 10 December 2014 11:22:27AM 12 points [-]

I'm succumbing to confirmation bias and this isn't a real pattern

No, this is definitely a real pattern. YouTube switched from a 5-star rating system to a like/dislike system when they noticed, and videogames are notorious for rank inflation.

Comment author: MathiasZaman 10 December 2014 01:10:40PM 9 points [-]

People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)

I don't think it's this. Belgium doesn't use letter-grading and still succumbs to the problem you mentioned in areas outside the classroom.

Comment author: Capla 12 December 2014 02:02:36AM 0 points [-]

What do they use instead?

Comment author: gjm 10 December 2014 03:46:05PM 9 points [-]

Partial explanation: we interpret these scales as going from worst possible to best possible, and

  • games that get as far as being on sale and getting reviews are usually at least pretty good because otherwise there'd be no point selling them and no point reviewing them
  • people entering competitions are usually at least pretty good because otherwise they wouldn't be there
  • a typical day is actually quite a bit closer to best possible than worst possible, because there are so many at-least-kinda-plausible ways for it to go badly

One reason why this is only a partial explanation is that "possible" obviously really means something like "at least semi-plausible" and what's at least semi-plausible depends on context and whim. But, e.g., suppose we take it to mean something like: take past history, discard outliers at both ends, and expand the range slightly. Then I bet what you find is that

  • most games that go on sale and attract enough attention to get reviewed are broadly of comparable quality
    • but a non-negligible fraction are quite a lot worse because of some serious failing in design or management or something
  • most performances in competitions at a given level are broadly of comparable quality
    • but a non-negligible fraction are quite a lot worse because the competitor made a mistake of some kind
  • most of a given person's days are roughly equally satisfactory
    • but a non-negligible fraction are quite a lot worse because of illness, work stress, argument with a family member, etc.

so that in order for a scale to be able to cover (say) 99% of cases it needs to extend quite a bit further downward than upward relative to the median case.
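A toy simulation makes this concrete (the particular distribution is my own invention, purely for illustration): if quality is "near the ceiling, minus occasional large failures", the median lands high on the scale while the low tail stretches far down.

```python
import random

random.seed(0)
# Toy model: quality starts near 9, minus an occasional large failure.
scores = []
for _ in range(10_000):
    score = 9 - random.expovariate(1.0)   # usually a small deduction...
    if random.random() < 0.1:             # ...but 10% of the time, a big one
        score -= random.uniform(3, 6)
    scores.append(max(score, 0))

scores.sort()
median = scores[len(scores) // 2]
low_1pct = scores[len(scores) // 100]
print(f"median ~ {median:.1f}, 1st percentile ~ {low_1pct:.1f}")
```

The median comes out around 8, while the bottom percentile sits several points lower: a scale covering 99% of cases has to reach much further down than up.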

Comment author: Capla 12 December 2014 02:02:01AM 3 points [-]

a typical day is actually quite a bit closer to best possible than worst possible, because there are so many at-least-kinda-plausible ways for it to go badly

Think about it in terms of probability space. If something is basically functional, then there are a near-infinite number of ways for it to be worse, but only a finite number of ways for it to get better.


http://xkcd.com/883/

Comment author: Kindly 10 December 2014 06:13:53PM 4 points [-]

Math competitions often have the opposite problem. The Putnam competition, for example, often has a median score of 0 or 1 out of 120.

I'm not sure this is a good thing. Participating in a math competition and getting 0 points is pretty discouraging, in a field where self-esteem is already an issue.

Comment author: alienist 11 December 2014 05:30:27AM 9 points [-]

Interestingly enough, the scores on individual questions are extremely bimodal. They're theoretically out of 10 but the numbers between 3 and 7 are never used.

Comment author: gwern 10 December 2014 07:13:59PM 7 points [-]

You may find the work of the authors of http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2369332 interesting.

Comment author: atorm 10 December 2014 07:30:24PM -1 points [-]

I think it's the C thing. I have no evidence for this.

Comment author: Gavin 10 December 2014 09:41:14PM 10 points [-]

RottenTomatoes has much broader ratings. The current box office hits range from 7% to 94%. This is because they aggregate binary "positive" and "negative" reviews. As jaime2000 notes, Youtube has switched to a similar rating system and it seems to keep things very sensitive.

Comment author: wadavis 10 December 2014 10:18:02PM 3 points [-]

I tried to change out the 10 rating for a z-score rating in my own conversations. It failed due to my social circles not being familiar with the normal bell curve.
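For anyone curious about the idea, a minimal sketch (the function name and sample data are mine, purely illustrative): a z-score rating expresses a score as standard deviations above or below your own historical mean, which undoes the compression at the top of the scale.

```python
from statistics import mean, stdev

def z_score_rating(rating, past_ratings):
    """Express a rating as standard deviations above/below your own mean."""
    return (rating - mean(past_ratings)) / stdev(past_ratings)

# If most of your ratings cluster at 7-9, an "8" is merely average:
history = [7, 8, 8, 9, 7, 8, 9, 8]
print(round(z_score_rating(8, history), 2))  # 0.0 -- an 8 is just "typical"
print(round(z_score_rating(5, history), 2))  # -3.97 -- a genuine outlier
```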

Comment author: gwern 11 December 2014 12:00:11AM 4 points [-]

If you wanted to maximize the informational content of your ratings, wouldn't you try to mimic a uniform distribution?

Comment author: knb 11 December 2014 02:19:31AM 4 points [-]

I've noticed the same thing. Part of it might be that reviewers are reluctant to alienate fans of [thing being reviewed]. Another explanation is that they are intuitively norming against a wider range of things than they actually review. For example, I was buying a smartphone recently, and a lot of lower-end devices I was considering had few reviews, but famous high-end brands (like the iPhone, Galaxy S, etc.) are reviewed by pretty much everyone.

Playing devil's advocate, it might be that there are more perceivable degrees of badness/more ways to fail than there are of goodness, so we need a wider range of numbers to describe and fairly rank the failures.

Comment author: alienist 11 December 2014 05:48:45AM 9 points [-]

Well, here is an article by Megan McArdle that talks about how insider-outsider dynamics can lead to this kind of rank inflation.

Comment author: timujin 10 December 2014 07:46:04AM 4 points [-]

In dietary and health articles they often speak about "processed food". What exactly is processed food and what is unprocessed food?

Comment author: polymathwannabe 10 December 2014 02:14:57PM 0 points [-]

Anything that you could have picked from the plant yourself (a pear, a carrot, a berry) AND has not been sprinkled with preservatives/pesticides/shiny gloss is unprocessed. If it comes in a package and looks nothing like what nature gives (noodles, cookies, jell-o), it's been processed.

Raw milk also counts as unprocessed, but in the 21st century there's no excuse to be drinking raw milk.

Comment author: timujin 10 December 2014 02:21:11PM *  0 points [-]

So, it doesn't make sense to talk about processed meats, if you can't pick them from plants?

If I roast my carrot, does it become processed?

Comment author: polymathwannabe 10 December 2014 02:37:17PM 0 points [-]

I'm assuming you value your health and thus don't eat any raw meat, so all of it is going to be processed, if only in your own kitchen.

By the same standard, a roasted carrot is, technically speaking, "processed." However, what food geeks usually think of when they say "processed" involves a massive industrial plant where your food is filled with additives to compensate for all the vitamins it loses after being crushed and dehydrated. Too often it ends up with an inhuman amount of salt and/or sugar added to it, too.

Comment author: Lumifer 10 December 2014 04:05:55PM 2 points [-]

in the 21st century there's no excuse to be drinking raw milk

That's debatable -- some people believe raw milk to be very beneficial.

Comment author: polymathwannabe 10 December 2014 04:21:00PM 1 point [-]
Comment author: Lumifer 10 December 2014 04:30:28PM 1 point [-]

Oh, I'm sure the government wants you to believe raw milk is the devil :-)

In reality I think it depends, in particular on how good your immune system is. If you're immunocompromised, it's probably wise to avoid raw milk (as well as, say, raw lettuce in salads). On the other hand, if your immune system is capable, I've seen no data that raw milk presents an unacceptable risk -- of course how much risk is unacceptable varies by person.

Comment author: AlexSchell 11 December 2014 02:42:20AM 5 points [-]

Do you have any sources that quantify the risk?

Comment author: Lumifer 10 December 2014 04:05:21PM 10 points [-]

Definitions will vary depending on the purity obsession of the speaker :-) but as a rough guide, most things in cans, jars, boxes, bottles, and cartons will be processed. Things that are, more or less, just raw plants and animals (or parts of them) will be unprocessed.

There are boundary cases about which people argue -- e.g. is pasteurized milk a processed food? -- but for most things in a food store it's pretty clear what's what.

Comment author: timujin 10 December 2014 04:21:36PM 0 points [-]

Thanks! That does make sense.

Comment author: Kaura 10 December 2014 02:54:19PM 2 points [-]

Assuming for a moment that Everett's interpretation is correct, there will eventually be a way to very confidently deduce this (and that time, identity and consciousness work pretty much as described by Drescher IIRC - there is no continuation of consciousness, just memories, and nothing meaningful separates your identity from your copies):

Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be? Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.

This is obviously not applicable to e.g. humanity as it is, where self-destruction on any level is inconvenient, if at all possible, and generally not a nice thing to do. But would it theoretically make sense for intelligences like this to develop, and maybe even have an overwhelming tendency to develop in the long term? What if this is one of the vast amount of branches where everyone in the observable universe pretty much failed to have a good enough time and a bright enough future and just offed themselves before interstellar travel etc., because a sufficiently advanced civilization sees it's just not a big deal in an Everett multiverse?

(There's probably a lot that I've missed here as I have no deep knowledge regarding the MWI, and my reading history so far only touches on this kind of stuff in general, but yay stupid questions thread.)

Comment author: DanielFilan 11 December 2014 12:01:00AM 4 points [-]

Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be?

Not really. If you're in a suboptimal branch, but still doing better than if you didn't exist at all, then you aren't making the world better off by self-destructing regardless of whether other branches exist.

Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.

It would not increase the proportion (technically, you want to be talking about measure here, but the distinction isn't important for this particular discussion) of branches where everything is stellar - just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive, which isn't so important. To see this, imagine you have two branches, one where things are going poorly and one where things are going great. The proportion of branches where things are going stellar is 1/2. Now suppose that the being/society/system that is going poorly self-destructs. The proportion of branches where things are going stellar is still 1/2, but now you have a branch where instead of having a being/society/system that is going poorly, you have no being/society/system at all.

Comment author: Kaura 11 December 2014 04:53:18PM 0 points [-]

Thanks! Ah, I'm probably just typical-minding like there's no tomorrow, but I find it inconceivable to place much value on the amount of branches you exist in. The perceived continuation of your consciousness will still go on as long as there are beings with your memories in some branch: in general, it seems to me that if you say you "want to keep living", you mean you want there to be copies of you in some of the possible futures, waking up the next morning doing stuff present-you would have done, recalling what present-you thought yesterday, and so on (in addition you will probably want a low probability for this future to include significant suffering). Likewise, if you say you "want to see humanity flourish indefinitely", you want a future that includes your biological or cultural peers and offspring colonizing space and all that, remembering and cherishing many of the values you once had (sans significant suffering). To me it seems impossible to assign value to the amount of MWI-copies of you, not least because there is no way you could even conceive their number, or usually make meaningful ethical decisions where you weigh their amounts.* Instead, what matters overwhelmingly more is the probability of any given copy living a high quality life.

just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive

Yes, this is obvious of course. What I meant was exactly this, because from the point of view of a set of observers, eliminating the set of observers from a branch <=> rendering the branch irrelevant, pretty much.

which isn't so important.

To me it did feel like this is obviously what's important, and the branches where you don't exist simply don't matter - there's no one there to observe anything after all, or judge the lack of you to be a loss or morally bad (again, not applicable to individual humans).

If I learned today that I have a 1% chance to develop a maybe-terminal, certainly suffering-causing cancer tomorrow, and I could press a button to just eliminate the branches where that happens, I would not have thought I am committing a moral atrocity. I would not feel like I am killing myself just because part of my future copies never get to exist, nor would I feel bad for the copies of the rest of all people - no one would ever notice anything, vast amounts of future copies of current people would wake up just like they thought they would the next morning, and carry on with their lives and aspirations. But this is certainly something I should learn to understand better before anyone gives me a world-destroying cancer cure button.

*Which is one main difference when comparing this to regular old population ethics, I suppose.

Comment author: DanielFilan 12 December 2014 01:07:35AM *  0 points [-]

To me it seems impossible to assign value to the amount of MWI-copies of you, not least because there is no way you could even conceive their number, or usually make meaningful ethical decisions where you weigh their amounts.

As it happens, you totally can (it's called the Born measure, and it's the same number as what people used to think was the probabilities of different branches occurring), and agents that satisfy sane decision-theoretic criteria weight branches by their Born measure - see this paper for the details.
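A minimal numerical sketch of what "weight branches by their Born measure" means in practice (the amplitudes and utilities below are made up for illustration, not from the linked paper):

```python
# Two branches with amplitudes a1, a2 (normalized so |a1|^2 + |a2|^2 = 1).
a1, a2 = 0.6, 0.8           # real amplitudes, for simplicity
w1, w2 = a1**2, a2**2       # Born measures: 0.36 and 0.64

# A Born-rational agent values the outcome by measure-weighted utility,
# exactly as a classical agent would use probabilities:
u_dead, u_alive_good = 0, 10
expected_utility = w1 * u_dead + w2 * u_alive_good
print(round(expected_utility, 2))  # 6.4
```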

I would not feel like I am killing myself just because part of my future copies never get to exist, nor would I feel bad for the copies of the rest of all people - no one would ever notice anything, vast amounts of future copies of current people would wake up just like they thought they would the next morning, and carry on with their lives and aspirations.

This is a good place to strengthen intuition, since if you replace "killing myself" with "torturing myself", it's still true that none of your future selves who remain alive/untortured "would ever notice anything, vast amounts of future copies of [yourself] would wake up just like they thought they would the next morning, and carry on with their lives and aspirations". If you arrange for yourself to be tortured in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life - but you also wake up and get tortured. Similarly, if you arrange for yourself to be killed in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life - but you also get killed (which is presumably a bad thing even or especially if everybody else also dies).

One way to intuitively see that this way of thinking is going to get you in trouble is to note that your preferences, as stated, aren't continuous as a function of reality. You're saying that universes where (1-x) proportion of branches feature you being dead and x proportion of branches feature you being alive are all equally fine for all x > 0, but that a universe where you are dead with proportion 1 and alive with proportion 0 would be awful (well, you didn't actually say that, but otherwise you would be fine with killing some of your possible future selves in a classical universe). However, there is basically no difference between a universe where (1-epsilon) proportion of branches feature you being dead and epsilon proportion of branches feature you being alive, and a universe where 1 proportion of branches feature you being dead and 0 proportion of branches feature you being alive (since don't forget, MWI looks like a superposition of waves, not a collection of separate universes). This is the sort of thing which is liable to lead to crazy behaviour.