
Stupid Questions December 2014

16 Post author: Gondolinian 08 December 2014 03:39PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

Comments (341)

Comment author: Grothor 10 December 2014 05:31:19AM 16 points [-]

It seems like we suck at using scales "from one to ten". Video game reviews nearly always give a 7-10 rating. Competitions with scores from judges seem to always give numbers between eight and ten, unless you crash or fall, and get a five or six. If I tell someone my mood is a 5/10, they seem to think I'm having a bad day. That is, we seem to compress things into the last few numbers of the scale. Does anybody know why this happens? Possible explanations that come to mind include:

  • People are scoring with reference to the high end, where "nothing is wrong", and they do not want to label things as more than two or three points worse than perfect

  • People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)

  • I'm succumbing to confirmation bias and this isn't a real pattern

Comment author: jaime2000 10 December 2014 11:22:27AM 12 points [-]

I'm succumbing to confirmation bias and this isn't a real pattern

No, this is definitely a real pattern. YouTube switched from a 5-star rating system to a like/dislike system when they noticed, and videogames are notorious for rank inflation.

Comment author: Gavin 10 December 2014 09:41:14PM 10 points [-]

RottenTomatoes has much broader ratings. The current box office hits range from 7% to 94%. This is because they aggregate binary "positive" and "negative" reviews. As jaime2000 notes, YouTube has switched to a similar rating system and it seems to keep things very sensitive.

Comment author: alienist 11 December 2014 05:48:45AM 9 points [-]

Well, here is an article by Megan McArdle that talks about how insider-outsider dynamics can lead to this kind of rank inflation.

Comment author: gjm 10 December 2014 03:46:05PM 9 points [-]

Partial explanation: we interpret these scales as going from worst possible to best possible, and

  • games that get as far as being on sale and getting reviews are usually at least pretty good because otherwise there'd be no point selling them and no point reviewing them
  • people entering competitions are usually at least pretty good because otherwise they wouldn't be there
  • a typical day is actually quite a bit closer to best possible than worst possible, because there are so many at-least-kinda-plausible ways for it to go badly

One reason why this is only a partial explanation is that "possible" obviously really means something like "at least semi-plausible" and what's at least semi-plausible depends on context and whim. But, e.g., suppose we take it to mean something like: take past history, discard outliers at both ends, and expand the range slightly. Then I bet what you find is that

  • most games that go on sale and attract enough attention to get reviewed are broadly of comparable quality
    • but a non-negligible fraction are quite a lot worse because of some serious failing in design or management or something
  • most performances in competitions at a given level are broadly of comparable quality
    • but a non-negligible fraction are quite a lot worse because the competitor made a mistake of some kind
  • most of a given person's days are roughly equally satisfactory
    • but a non-negligible fraction are quite a lot worse because of illness, work stress, argument with a family member, etc.

so that in order for a scale to be able to cover (say) 99% of cases it needs to extend quite a bit further downward than upward relative to the median case.

Comment author: Capla 12 December 2014 02:02:01AM 3 points [-]

a typical day is actually quite a bit closer to best possible than worst possible, because there are so many at-least-kinda-plausible ways for it to go badly

Think about it in terms of probability space. If something is basically functional, then there are a near-infinite number of ways for it to be worse, but only a finite number of ways for it to get better.

http://xkcd.com/883/

Comment author: MathiasZaman 10 December 2014 01:10:40PM 9 points [-]

People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)

I don't think it's this. Belgium doesn't use letter-grading and still succumbs to the problem you mentioned in areas outside the classroom.

Comment author: gwern 10 December 2014 07:13:59PM 7 points [-]

You may find the work of the authors of http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2369332 interesting.

Comment author: someonewrongonthenet 16 December 2014 11:45:21PM *  6 points [-]

People are thinking in terms of grades

That's not an explanation, just a symptom of the problem. People of mediocre talent and high talent both get an A; that's part of the reason why we have to use standardized tests with a higher ceiling.

My intuition is that the top few notches are satisficing, whereas all lower ratings are varying degrees of non-satisficing. The degree to which everything tends to cluster at the top represents the degree to which everything is satisfactory for practical purposes. In situations where the majority of the rated things are not satisfactory (like the Putnam - nothing less than a correct proof is truly satisfactory), the ratings will cluster near the bottom.

For example, compare motels to hotels. Motels always have fewer stars, because motels in general are worse. Whereas, say, video games will tend to cluster at the top because video games in general are satisfactorily fun.

Or, think Humanities vs. Engineering grades. Humanities students in general satisfy the requirements to be historians and writers or liberal-arts-educated-white-collar workers more than Engineering students satisfy the requirements to be engineers.

Comment author: Grothor 17 December 2014 05:17:18AM 1 point [-]

That's not an explanation, just a symptom of the problem.

This is what I was trying to convey when I said it might be another example of the problem.

I think it's reasonable, in many contexts, to say that achieving 75% of the highest possible score on an exam should earn you what most people think of as a C grade (that is, good enough to proceed with the next part of your education, but not good enough to be competitive).

I would say that games are different. There is not, as far as I know, a quantitative rubric for scoring a game. A 6/10 rating on a game does not indicate that the game meets 60% of the requirements for a perfect game. It really just means that it's similar in quality to other games that have received the same score, and usually a 6/10 game is pretty lousy. I found a histogram of scores on metacritic:

http://www.giantbomb.com/profile/dry_carton/blog/metacritic-score-distribution-graphs/82409/

The peak of the distribution seems to be around 80%, while I'd eyeball the median to be around 70-75%. There is a long tail of bad games. You may be right that this distribution does, in some sense, reflect the actual distribution of game quality. My complaint is that this scoring system is good at resolving bad games from truly awful games from comically terrible games, but it is bad at resolving a good game from a mediocre game.

What I think it should be is a percentile-based score, like Lumifer describes:

Consider this example: I come up to you and ask "So, how was the movie?". You answer "I give it a 6 out of 10". Fine. I have some vague idea of what you mean. Now we wave a magic wand and bifurcate reality.

In branch 1 you then add "The distribution of my ratings follows the distribution of movie quality, savvy?" and let's say I'm sufficiently statistically savvy to understand that. But... does it help me? I don't know the distribution of movie quality. It's probably bell-shaped, maybe, but not quite normal if only because it has to be bounded; I have no idea if it's skewed, etc.

In branch 2 you then add "The rating of 6 means I rate the movie to be in the sixth decile". Ah, that's much better. I now know that out of 10 movies that you've seen five were probably worse and three were probably better. That, to me, is a more useful piece of information.

Then again, maybe it's difficult to discern a difference in quality between a 60th percentile game and an 80th percentile game.

Comment author: someonewrongonthenet 17 December 2014 10:40:24PM 0 points [-]

This is what I was trying to convey when I said it might be another example of the problem.

Oh right, I didn't read carefully sorry.

Comment author: Ixiel 12 December 2014 07:46:47PM 4 points [-]

This is exactly why in my family we use +2/-2. 0 really does feel like average in a way 5-6/10 or 3/5 doesn't.

Comment author: knb 11 December 2014 02:19:31AM 4 points [-]

I've noticed the same thing. Part of it might be that reviewers are reluctant to alienate fans of [thing being reviewed]. Another explanation is that they are intuitively norming against a wider range of things than they actually review. For example, I was buying a smartphone recently, and a lot of lower-end devices I was considering had few reviews, but famous high-end brands (like the iPhone, Galaxy S, etc.) are reviewed by pretty much everyone.

Playing devil's advocate, it might be that there are more perceivable degrees of badness/more ways to fail than there are of goodness, so we need a wider range of numbers to describe and fairly rank the failures.

Comment author: Kindly 10 December 2014 06:13:53PM 4 points [-]

Math competitions often have the opposite problem. The Putnam competition, for example, often has a median score of 0 or 1 out of 120.

I'm not sure this is a good thing. Participating in a math competition and getting 0 points is pretty discouraging, in a field where self-esteem is already an issue.

Comment author: alienist 11 December 2014 05:30:27AM 9 points [-]

Interestingly enough, the scores on individual questions are extremely bimodal. They're theoretically out of 10 but the numbers between 3 and 7 are never used.

Comment author: hyporational 15 December 2014 02:16:14AM 3 points [-]

In medicine we try to make people rate their symptoms, like pain, from one to ten. It's pretty much never under 5. Of course there's a selection effect and people don't like to look like whiners but I'm not convinced these fully explain the situation.

In Finland the lowest grade you can get from primary education to high school is 4 so that probably affects the situation too.

Comment author: DanArmak 21 December 2014 08:50:37PM 1 point [-]

In medicine we try to make people rate their symptoms, like pain, from one to ten. It's pretty much never under 5.

How do you then interpret their responses? Do you compare only the responses of the same person at different times, or between persons (or to guide initial treatment)? Do you have a reference scale that translates self-reported pain to something with an objective referent?

Comment author: hyporational 22 December 2014 12:20:48PM 2 points [-]

Do you compare only the responses of the same person at different times

Yes. There's too much variation between persons. I also think there's variation between types of pain and variation depending on whether there are other symptoms. There are no objective specific referents, but people who are in actual serious pain usually look like it: they are tachycardic, hypertensive, aggressive, sweating, writhing or very still depending on what type of pain we're talking about. Real pain is also aggravated by relevant manual examinations.

Comment author: Grothor 15 December 2014 06:33:06PM 0 points [-]

In medicine we try to make people rate their symptoms, like pain, from one to ten. It's pretty much never under 5.

This is actually what initially got me thinking about this. I read a half-satire thing about people misusing pain scales. Since my only source for the claim that people do this was a somewhat satirical article, I didn't bring it up initially.

I was surprised when I heard that people do this, because I figured most people getting asked that question aren't in nearly as much pain as they could be, and they don't have much to gain by inflating their answer. When I've been asked to give an answer on the pain scale, I've almost always felt like I'm much closer to no pain than to "the worst pain I can imagine" (which is what I was told a ten is), and I can imagine being in such awful pain that I can't answer the question. I think I answered seven one time when I had a bone sticking through my skin (which actually hurt less than I might have thought).

Comment author: DanArmak 21 December 2014 08:48:14PM 0 points [-]

most people getting asked that question aren't in near as much pain as they could be, and they don't have much to gain by inflating their answer.

Maybe they think that by inflating their answer they gain, on the margin, better / more intensive / more prompt medical service. Especially in an ER setting where they may intuit themselves to be competing against other patients being triaged and asked the same question, they might perceive themselves (consciously or not) to be in an arms race where the person who claims to be experiencing the most pain gets treated first.

Comment author: wadavis 10 December 2014 10:18:02PM 3 points [-]

I tried to change out the 1-to-10 rating for a z-score rating in my own conversations. It failed because my social circles were not familiar with the normal bell curve.

Comment author: gwern 11 December 2014 12:00:11AM 4 points [-]

If you wanted to maximize the informational content of your ratings, wouldn't you try to mimic a uniform distribution?
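To see why uniform use of the scale maximizes information: the Shannon entropy of the empirical rating distribution measures how many bits each rating conveys, and it peaks when every rating is used equally often. A minimal sketch (both rating lists are invented for illustration):

```python
from collections import Counter
from math import log2

def entropy(ratings):
    """Shannon entropy in bits of the empirical distribution of ratings."""
    counts = Counter(ratings)
    n = len(ratings)
    return -sum((c / n) * log2(c / n) for c in counts.values())

uniform = list(range(1, 11))                   # every rating 1-10 used equally often
compressed = [7, 7, 8, 8, 8, 9, 9, 9, 10, 10]  # "everything scores 7-10" usage

print(entropy(uniform))     # ~3.32 bits per rating
print(entropy(compressed))  # ~1.97 bits per rating
```

On these numbers, compressing everything into the top of the scale costs more than a bit of information per rating.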

Comment author: wadavis 12 December 2014 03:54:32PM 1 point [-]

The intent was to communicate one piece of information without confusion: where on the measurement spectrum the item fits relative to others in its group. As opposed to delivering as much information as possible, for which there are more nuanced systems.

Most things I am rating do not have a uniform distribution; I tried to follow a normal distribution because it would fit the great majority of cases. We lose information and make assumptions when we measure data on the wrong distribution. Did you fit to uniform by volume or by value? It was another source of confusion.

As mentioned, this method did fail. I changed my methods to saying 'better than 90% of the items in its grouping' and had moderate success. While that solves the uniform/normal/chi-squared distribution problem, it is still too long-winded for my tastes.

Comment author: Lumifer 12 December 2014 04:00:23PM 2 points [-]

Most things I am rating do not have a uniform distribution

The distribution of your ratings does not need to follow the distribution of what you are rating. For maximum information your (integer) rating should point to a quantile -- e.g. if you're rating on a 1-10 scale your rating should match the decile into which the thing being rated falls. And if your ratings correspond to quantiles, the ratings themselves are uniformly distributed.
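A decile rating of this kind is easy to compute from one's own rating history. A minimal sketch (the past movie scores are invented for illustration):

```python
from bisect import bisect_right

def decile_rating(value, history):
    """Rate `value` on a 1-10 scale by the decile of `history` it falls into."""
    ranked = sorted(history)
    idx = bisect_right(ranked, value)     # how many past items value beats or ties
    return max(1, (10 * idx) // len(ranked))

# Invented raw quality scores for previously seen movies.
past = [3.1, 4.0, 4.2, 5.5, 5.6, 5.9, 6.3, 7.0, 8.2, 9.5]

print(decile_rating(6.0, past))  # 6: better than or equal to six of ten past movies
```

A rating produced this way carries the "relative position in the group" information directly, with no need for the listener to know the shape of the underlying quality distribution.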

Comment author: wadavis 12 December 2014 04:30:35PM 1 point [-]

We have different goals. I want my rating to reflect the item's relative position in its group; you want a rating to reflect the item's value independent of the group.

Is this accurate?

Comment author: Lumifer 12 December 2014 04:56:48PM *  2 points [-]

Doesn't seem so. If you rate by quintiles your rating effectively indicates the rank of the bucket to which the thing-being-rated belongs. This reflects "the item's relative position in its group".

If you want your rating to reflect not a rank but something external, you can set up a variety of systems, but I would expect that for max information your rating would have to point to a quintile of that external measure of the "value independent of the group".

Comment author: wadavis 12 December 2014 06:47:30PM 0 points [-]

Trying to stab at the heart of the issue: I want the distribution of the ratings to follow the distribution of the rated because when looking at the group this provides an additional piece of information.

Comment author: Lumifer 12 December 2014 08:31:51PM 4 points [-]

Well, at this point the issue becomes who's looking at your rating. This "additional piece of information" exists only for people who have a sufficiently large sample of your previous ratings so they understand where the latest rating fits in the overall shape of all your ratings.

Consider this example: I come up to you and ask "So, how was the movie?". You answer "I give it a 6 out of 10". Fine. I have some vague idea of what you mean. Now we wave a magic wand and bifurcate reality.

In branch 1 you then add "The distribution of my ratings follows the distribution of movie quality, savvy?" and let's say I'm sufficiently statistically savvy to understand that. But... does it help me? I don't know the distribution of movie quality. It's probably bell-shaped, maybe, but not quite normal if only because it has to be bounded; I have no idea if it's skewed, etc.

In branch 2 you then add "The rating of 6 means I rate the movie to be in the sixth decile". Ah, that's much better. I now know that out of 10 movies that you've seen five were probably worse and three were probably better. That, to me, is a more useful piece of information.

Comment author: wadavis 15 December 2014 03:35:13PM 0 points [-]

I understand and concede to the better logic. This provides greater insight on why the original attempt to use these ratings failed.

Comment author: NancyLebovitz 08 December 2014 10:05:27PM 10 points [-]

Is there any plausible way the earth could be moved away from the sun and into an orbit which would keep the earth habitable when the sun becomes a red giant?

Comment author: calef 08 December 2014 10:59:42PM *  15 points [-]

According to http://arxiv.org/abs/astro-ph/0503520, we would need to be able to boost our current orbital radius to about 7 AU.

This would correspond to a change in specific orbital energy from -μ/(2(1 AU)) to -μ/(2(7 AU)), where μ = 132712440018 km^3/s^2 is the standard gravitational parameter of the sun. This is about 3.8 * 10^8 in Joules / Kilogram, or about 2.3 * 10^33 Joules when we restore the reduced mass of the earth/sun (which I'm approximating as just the mass of the earth).

Wolfram Alpha helpfully supplies that this is about a fifth of the total energy released by the sun in 1 year.

Or, if you like, it's equivalent to the total mass energy of ~2.5 * 10^16 Kilograms of matter (about 0.01% the mass of the asteroid Vesta).

So until we're able to harness and control energy on a scale comparable to the sun's total output over months, we won't be able to do this.

There might be an exceedingly clever way to do this by playing with orbits of nearby asteroids to perturb the orbit of the earth over long timescales, but the change in energy we're talking about here is pretty huge.
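The orbital-energy figures are easy to recompute from standard constants. A rough sketch, assuming circular orbits and using Earth's mass in place of the reduced mass:

```python
MU_SUN = 1.32712440018e20   # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11         # meters
M_EARTH = 5.972e24          # kg
L_SUN = 3.828e26            # solar luminosity, W
YEAR = 3.156e7              # seconds

# Specific energy of a circular orbit of radius r is -mu / (2r).
delta_specific = (-MU_SUN / (2 * 7 * AU)) - (-MU_SUN / (2 * AU))  # J/kg
delta_total = delta_specific * M_EARTH                            # J
fraction_of_solar_year = delta_total / (L_SUN * YEAR)

print(delta_specific)          # ~3.8e8 J/kg
print(delta_total)             # ~2.3e33 J
print(fraction_of_solar_year)  # ~0.19 of a year of total solar output
```

Either way the conclusion stands: this is an astronomically large energy budget, even if it is "only" a fraction of a year of the sun's total output.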

Comment author: Eniac 09 December 2014 01:10:30AM 11 points [-]

I think you have something there. You could design a complex, but at least metastable orbit for an asteroid sized object that, in each period, would fly by both Earth and, say, Jupiter. Because it is metastable, only very small course corrections would be necessary to keep it going, and it could be arranged such that at every pass Earth gets pushed out just a little bit, and Jupiter pulled in. With the right sized asteroid, it seems feasible that this process could yield the desired results after billions of years.

Comment author: Kyre 09 December 2014 05:13:05AM 10 points [-]
Comment author: Eniac 10 December 2014 04:24:40AM 2 points [-]

Hah, thanks for pointing this out. I must have read or heard of this before and then forgotten about it, except in my subconscious. Looks like they have done the math, too, and it figures. Cool!

Comment author: CBHacking 08 December 2014 11:07:32PM 4 points [-]

Ignoring the concept of "can we apply that much delta-V to a planet?", I'd be interested to know whether it's believed that there exists a "Goldilocks zone" suitable for life at all stages of a star's life. Intuitively it seems like there should be, but I'm not sure.

Of course, it should be pointed out that the common understanding of "when the sun becomes a red giant" may be a bit flawed; the sun will cool and expand, then collapse. On a human time scale, it will spend a lot of that time as a red giant, but if you simply took the Earth when its orbit started to be crowded by the inner edge of the Goldilocks zone and put it in a new orbit, that new orbit wouldn't be anywhere close to an eternally safe one. Indeed, I suspect that the outermost of the orbits required for the giant-stage sun would be too far from the sun at the time we'd first need to move the Earth.

Comment author: mwengler 11 December 2014 09:33:32PM 3 points [-]

The sun's luminosity will rise by around 300X as it turns into a giant. If we wish to keep the same energy flux onto the earth at that point, we must increase the earth's orbital radius by a factor of sqrt(300) ≈ 17X. The total energy of the earth's current orbit is 2.65E33 J. We must reduce this to 1/17 of its current value, or reduce it by (16/17)*2.65E33 J = 2.5E33 J. The current total annual energy production in the world is about 5E17 J. The sun will be a red giant in about 7.6E9 years. So we would need about a million times current global energy production running full time into rocket motors to push the earth out to a safe orbit by the time the sun has expanded.

But it is worse than that. The Sun actually expands over a scant 5 million years near the end of those 7.6E9 years. So to avoid freezing for billions of years because we started moving away from the sun too soon, we will essentially need a billion times current energy production running into rocket engines for those 5 million years of solar expansion. But the good news is we have 7.6E9 years to figure out how to do that.

If we use plasma rockets which push reaction mass out at 1% the speed of light, then we will need a total of about 6E16 kg reaction mass, or about 0.000001% of the earth's total mass. The total mass of water on the earth is about 1E21 kg so we could do all of this using water as reaction mass and still have 99.99% of the water left when we are done.
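As a cross-check, here is a crude momentum-budget sketch of the plasma-rocket scheme (my own simplification: a slow spiral from 1 AU to 17 AU costs roughly the difference in circular orbital speeds, and gravity losses and the exhaust's kinetic energy are ignored). It lands well above the reaction-mass figure quoted above, so the exact requirement is quite sensitive to the assumptions:

```python
import math

MU_SUN = 1.32712440018e20   # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11         # meters
M_EARTH = 5.972e24          # kg
V_EXHAUST = 0.01 * 2.998e8  # exhaust speed: 1% of the speed of light, m/s

# Circular orbital speed is sqrt(mu / r); a slow spiral costs ~ v_inner - v_outer.
delta_v = math.sqrt(MU_SUN / AU) - math.sqrt(MU_SUN / (17 * AU))

# Momentum balance: reaction mass * exhaust speed = Earth mass * delta-v.
m_reaction = M_EARTH * delta_v / V_EXHAUST

print(delta_v)     # ~2.3e4 m/s
print(m_reaction)  # ~4.5e22 kg
```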

Comment author: Nornagest 11 December 2014 10:28:47PM 1 point [-]

I wonder what the exhaust plume of an engine like that would look like, and how far away from it you'd have to be standing to still be capable of looking at anything after a second or two.

Comment author: shminux 09 December 2014 07:45:48PM *  3 points [-]

Not "when the sun becomes a red giant", because red giants are variable on a much too short time scale, but, as others mentioned, we can probably keep the earth in a habitable zone for another 5 billion years or so. We have more than enough hydrogen on earth to provide the necessary potential energy increase with fusion-based propulsion, though building something like a 100-petawatt engine is problematic at this point (for comparison, that is a significant fraction of the total solar radiation hitting the earth).

EDIT: I suspect that terraforming Mars (and/or cooling down the Earth more efficiently when the Sun gets brighter) would require less energy than moving the Earth to the Mars orbit. My calculations could be off, though, hopefully someone can do them independently.

Comment author: Anomylous 09 December 2014 08:31:17PM 4 points [-]

Only major problem I know of with terraforming Mars is how to give it a magnetic field. We'd have to somehow re-melt the interior of the planet. Otherwise, we could just put up with constant intense solar radiation, and atmosphere off-gassing into space. Maybe if we built a big fusion reactor in the middle of the planet...?

Comment author: shminux 09 December 2014 09:40:09PM *  9 points [-]

I recall estimating the power required to run an equatorial superconducting ring a few meters thick 1 km or so under the Mars surface with enough current to simulate an Earth-like magnetic field. If I recall correctly, it would require about the current level of power generation on Earth to ramp it up over a century or so to the desired level. Then whatever is required to maintain it (mostly cooling the ring), which is very little. Of course, an accident interrupting the current flow would be an epic disaster.

Comment author: alienist 11 December 2014 06:17:38AM 5 points [-]

Wouldn't it be more efficient to use that energy to destroy Mars and start building a Dyson swarm from the debris?

Comment author: shminux 11 December 2014 04:13:49PM 3 points [-]

Let's do a quick estimate. Destroying a Mars-like planet requires expending the equivalent of its gravitational self-energy, ~GM^2/R, which is about 10^31 J (which we could easily obtain from a comet 10 km in radius... consisting of antimatter!) For comparison, the Earth's magnetic field has about 10^26 J of energy, a hundred thousand times less. I leave it to you to draw the conclusions.
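For concreteness, plugging standard Mars values into the GM^2/R estimate (a rough sketch; only the order of magnitude matters for the comparison with the field energy):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_MARS = 6.417e23   # mass of Mars, kg
R_MARS = 3.3895e6   # mean radius of Mars, m

# Gravitational self-energy scale of the planet.
e_self = G * M_MARS**2 / R_MARS
print(e_self)  # ~8e30 J
```

Whatever rounding one uses, the result exceeds the quoted magnetic-field energy by several orders of magnitude.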

Comment author: DaFranker 09 December 2014 03:21:35PM *  2 points [-]

I'm curious about the thought process that led to this being asked in the "stupid questions" thread rather than the "very advanced theoretical speculation of future technology" thread. =P

As a more serious answer: Anything that would effectively give us a means to alter mass and/or the effects of gravity in some way (if there turns out to be a difference) would help a lot.

Comment author: NancyLebovitz 09 December 2014 04:02:35PM 2 points [-]

I wasn't sure there was a way to do it within current physics.

Now we get to the hard question: supposing we (broadly interpreted, it will probably be a successor species) want to move the earth outwards using those little gravitational nudges, how do we get civilizations with a sufficiently long attention span?

Comment author: DanielLC 09 December 2014 06:01:23PM 1 point [-]

If we haven't gotten one by then, we're doomed. Or at least, we don't get a very good planet. We could still have space-stations or live on planets where we have to bring our own atmosphere.

Comment author: JoshuaZ 08 December 2014 11:52:54PM 1 point [-]

Yes, I saw an article a few years ago with a back-of-the-envelope estimate suggesting this would be doable if one could turn mass on the moon more or less directly into energy and use the moon as a gravitational tug to slowly move the Earth outward. You can change mass almost directly into energy by feeding it into a few smallish black holes.

Comment author: Daniel_Burfoot 08 December 2014 11:00:24PM 1 point [-]

This is a fascinating question. Very speculatively, I could imagine somehow using energy gained by pushing other objects closer to the Sun, to move the Earth away from the Sun. Like some sort of immense elastic band stretching between Mars and Earth, pulling Earth "up" and Mars "down".

Comment author: DanielLC 09 December 2014 06:04:16PM 1 point [-]

That is essentially what would happen if you used gravitational assistance and orbited asteroids between Mars and Earth.

Comment author: DanArmak 21 December 2014 09:08:26PM 0 points [-]

I don't really know if it's plausible, but Larry Niven's far-future fiction A World Out of Time (the novel, not the original short story of the same name) deals with exactly this problem.

His solution is a "fusion candle": build a huge double-ended fusion tube, put it in the atmosphere of a gas giant, and light it up. The thrust downwards keeps the tube floating in the atmosphere. The thrust upwards provides an engine to push the gas giant around. In the book, they pushed Uranus to Earth, and then moved it outwards again, gravitationally pulling the Earth along.

Comment author: knb 08 December 2014 11:29:52PM 9 points [-]

Would it be possible to slow down or stop the rise of sea level (due to global warming) by pumping water out of the oceans and onto the continents?

Comment author: Falacer 09 December 2014 02:05:38AM 16 points [-]

We could really use a new Aral Sea, but intuitively I'd have expected this to be a tiny dent in the depth of the oceans. So, to the maths:

Wikipedia claims that from 1960 to 1998 the volume of the Aral sea dropped from its 1960 amount of 1,100 km^3 by 80%.

I'm going to give that another 5% for further loss since then, as the South Aral Sea has now lost its eastern half entirely.

This gives ~1100 * .85 = 935km^3 of water that we're looking to replace.

The Earth is ~500m km^2 in surface area, approx. 70% of which is water = 350m km^2 in water.

935 km^3 over an area of 350m km^2 comes to a depth of 2.6 mm.

This is massively larger than I would have predicted, and it gets better. The current salinity of the Aral Sea is 100 g/l, which is way higher than that of seawater at 35 g/l, so we could pretty much pump the water straight in with a net environmental gain. In fact this is a solution to the crisis that has been previously proposed, although it looks like most people would rather dilute the seawater first.

To achieve the desired result of a 1 inch drop in sea level, we only need to find 9 equivalent projects around the world. Sadly, the only other one I know of is Lake Chad, which is significantly smaller than the Aral Sea. However, since the loss of the Aral Sea is due to over-intensive use of the water for farming, this gives us an idea of how much water can be contained on land in plants: I would expect that we might be able to get this amount again if we undertook a desalination/irrigation program in the Sahara.
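The arithmetic above is easy to check (a quick sketch with the same round figures):

```python
ARAL_VOLUME_1960_KM3 = 1100.0
FRACTION_LOST = 0.85            # 80% by 1998, plus ~5% since
OCEAN_AREA_KM2 = 0.70 * 500e6   # ~70% of Earth's ~500M km^2 surface

water_km3 = ARAL_VOLUME_1960_KM3 * FRACTION_LOST   # ~935 km^3 to replace
drop_mm = water_km3 / OCEAN_AREA_KM2 * 1e6         # km -> mm

print(drop_mm)         # ~2.7 mm of global sea level
print(25.4 / drop_mm)  # ~9.5 Aral-sized projects per inch
```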

Comment author: mwengler 11 December 2014 04:00:19PM 2 points [-]

Dead Sea and Salton Sea leap to mind as good projects.

Also could we store more water in the atmosphere? If we just poured water into a desert like the Sahara, most of it would evaporate before it flowed back to the sea. This would seem to raise the average moisture content of the atmosphere. Sure eventually it gets rained back down, but this would seem to be a feature more than a bug for a world that keeps looking for more fresh water. Indeed my mind is currently inventing interesting methods for moving the water around using purely the heat from the sun as an energy source.

Comment author: DanArmak 21 December 2014 09:13:29PM 0 points [-]

However, since the loss of the Aral Sea is due to over-intensive use of the water for farming, this gives us an idea of how much water can be contained on land in plants

Isn't it more of an indication of how much water can be contained in the Aral Sea basin? The plants don't need to contain all of the missing Aral Sea water at once, they just need to be watered faster than the Sea is being refilled by rainfall. How much water does rainfall supply every year, as a percentage of the Sea's total volume?

Comment author: mwengler 11 December 2014 03:54:59PM 9 points [-]

I recommend googling "geoengineering global warming" and reading some of the top hits. There are numerous proposals for reducing or reversing global warming which are astoundingly less expensive than reducing carbon dioxide emissions, and also much more likely to be effective.

To your direct question about storing more water on land, this would be a geoengineering project. Some straightforward approaches to doing it:

Use rainfall as your "pump" in order to save having to build massive energy-using water pumps. Without any effort on our part, nature naturally lifts water a km or more above sea level and then drops it, much of it onto land. That water is generally funneled back to the ocean in rivers. With just the construction of walls, some rivers might be prevented from draining into the ocean. Large areas would be flooded by the river, storing water other than in the ocean.

Use gravity as your pump. There are many large locations on earth that are below sea level. Aqueducts that took no net energy for pumping could be built that would essentially gravity-feed ocean water into these areas. These areas can be hundreds of meters below sea level, so if even 1% of the earth's surface is 100 m below sea level, then the oceans could be lowered by a bit more than 1 m by filling these depressions with ocean water.
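The arithmetic behind that estimate is easy to check. The 1% and 100 m figures are the comment's hypotheticals, not measured values; the only added assumption is that oceans cover roughly 71% of Earth's surface:

```python
# Check the claim: if 1% of Earth's surface lies 100 m below sea level,
# filling those depressions lowers the oceans by "a bit more than 1 m".
EARTH_SURFACE_KM2 = 510e6      # total surface area of Earth, ~510 million km^2
OCEAN_FRACTION = 0.71          # oceans cover roughly 71% of the surface

depression_fraction = 0.01     # hypothetical: 1% of surface below sea level
depression_depth_m = 100.0     # hypothetical average depth below sea level

# Volume diverted = depression area * depth; spread that missing volume
# over the ocean surface to get the drop in sea level.
ocean_area = EARTH_SURFACE_KM2 * OCEAN_FRACTION
depression_area = EARTH_SURFACE_KM2 * depression_fraction
sea_level_drop_m = depression_area * depression_depth_m / ocean_area
print(round(sea_level_drop_m, 2))  # ~1.41 m, consistent with "a bit more than 1 m"
```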

Of course either one of these approaches will cause massive other changes, although probably in a positive direction as far as climate is concerned. More water surface on the planet should mean more evaporation of water, which creates more clouds, which reflect more energy from the sun, lowering the heating of the earth. But of course a non-trivial analysis might yield a rich detail of effects worth pondering.

In the past features like the Salton sea and the Dead sea have been filled by fresh-water rivers, essentially meaning that rain was used as the pump to fill them. The demand for fresh water has stopped these features from being filled. It seems to me that an aqueduct to refill these features with salt water from the ocean would be relatively benign in impact, since in nature these features have been fuller of salt water in the past, and so the impact of that water might be blessed by humanity as "natural" instead of cursed by humanity as "man made."

Comment author: CBHacking 09 December 2014 12:10:17AM *  3 points [-]

Where does the water go? Assuming you want to reduce sea level by a 1/2 inch using this mechanism, you have to do the equivalent of covering the entire ETA: land area of earth in a full inch of water (what's worse, seawater; you'd want to desalinate it). Even assuming you can find room on land for all this water and the pump capacity to displace it all, what's to stop it from washing right back out to sea? Some of it can be used to refill aquifers, but the capacity of those is trivial next to that of the oceans. Some of it can be stored as ice and snow, but global warming will reduce (actually, has already quite visibly reduced) land glaciation; even if you can somehow induce the water to freeze, that heat you extract from it will have to go somewhere and unless you can dump it out of the atmosphere entirely it will just contribute to the warming. The rest of the water will just flood the existing rivers in its mad rush to do what nearly all continental water is always doing anyhow: flowing to sea.
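For what it's worth, the land-versus-ocean bookkeeping behind that equivalence checks out, assuming the usual ~71%/29% ocean/land split of Earth's surface:

```python
# Check the claim: lowering sea level by 1/2 inch means storing roughly
# a full inch of (sea)water over all of Earth's land area.
OCEAN_FRACTION = 0.71   # oceans cover ~71% of Earth's surface
LAND_FRACTION = 0.29    # land covers the remaining ~29%

sea_level_drop_in = 0.5
# Same volume of water, spread over a smaller area, stands deeper.
water_depth_over_land_in = sea_level_drop_in * OCEAN_FRACTION / LAND_FRACTION
print(round(water_depth_over_land_in, 2))  # ~1.22 inches, i.e. about "a full inch"
```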

Comment author: TheOtherDave 09 December 2014 06:56:31PM 2 points [-]

Clearly, the solution is to build a space elevator and ship water into orbit. We lower the sea levels, the water is there if we need it later, and in the meantime we get to enjoy the pretty rings.

(No, I'm not serious.)

Comment author: DanielLC 09 December 2014 06:07:14PM *  2 points [-]

One possibility would be to replace the ice caps by hand. Run a heated pipeline from the ocean to the icecaps, pump water there, and let it freeze on its own. I don't know how well that would work, and I suspect you're better off just letting sea levels rise. If you need the land that bad, just make floating platforms.

Edit: Replace "ice caps" with "Antarctica". Adding ice to the northern icecap, or even the southern one out where it's floating, won't alter the sea level, since floating objects displace their mass in water.

Comment author: Eniac 09 December 2014 01:31:18AM 1 point [-]

Well, this is not pumping, but it might be much more efficient: As I understand, the polar ice caps are in an equilibrium between snowfall and runoff. If you could somehow wall in a large portion of polar ice, such that it cannot flow away, it might rise to a much higher level and sequester enough water to make a difference in sea levels. A super-large version of a hydroelectric dam, in effect, for ice.

It might also help to have a very high wall around the patch to keep air from circulating, keeping the cold polar air where it is and reduce evaporation/sublimation.

Comment author: Anatoly_Vorobey 09 December 2014 10:35:29PM *  7 points [-]

Is there a causal link between being relatively lonely and isolated during school years and (higher chance of) ending up a more intelligent, less shallow, more successful adult?

Imagine that you have a pre-school child who has socialization problems, finds it difficult to do anything in a group of other kids, to acquire friends, etc., but cognitively the kid's fine. If nothing changes, the kid is looking at being shunned or mocked as weird throughout school. You work hard on overcoming the social issues, maybe you go with the kid to a therapist, you arrange play-dates, you play-act social scenarios with them..

Then your friend comes up to have a heart-to-heart talk with you. Look, your friend says. You were a nerd at school. I was a nerd at school. We each had one or two friends at best and never hung out with popular kids. We were never part of any crowd. Instead we read books under our desks during lessons and read SF novels during the breaks and read science encyclopedias during dinner at home, and started programming at 10, and and and. Now you're working so hard to give your kid a full social life. You barely had any, are you sure now you'd rather you had it otherwise? Let me be frank. You have a smart kid. It's normal for a smart kid to be kind of lonely throughout school, and never hang out with lots of other kids, and read books instead. It builds substance. Having a lousy social life is not the failure scenario. The failure scenario is to have a very full and happy school experience and end up a ditzy adolescent. You should worry about that much much more, and distribute your efforts accordingly.

Is your friend completely asinine, or do they have a point?

Comment author: Viliam_Bur 09 December 2014 11:15:38PM 10 points [-]

Seems to me that very high intelligence can cause problems with socialization: you are different from your peers, so it is more difficult for you to model them, and for them to model you. You see each other as "weird". (Similar problem for very low intelligence.) Intelligence causes loneliness, not the other way round.

But this depends on the environment. If you are highly intelligent person surrounded by enough highly intelligent people, then you do have a company of intellectual peers, and you will not feel alone.

I am not sure about the relation between reading many books and being "less shallow". Do intelligent kids surrounded by intelligent kids also read a lot?

Comment author: dxu 11 December 2014 04:53:11AM *  4 points [-]

All of this is very true (for me, anyway--typical mind fallacy and all that). High intelligence does seem to cause social isolation in most situations. However, I also agree with this:

But this depends on the environment. If you are highly intelligent person surrounded by enough highly intelligent people, then you do have a company of intellectual peers, and you will not feel alone.

High intelligence does not intrinsically have a negative effect on your social skills. Rather, I feel that it's the lack of peers that does that. Lack of peers leads to lack of relatability leads to lack of socialization leads to lack of practice leads to (eventually) poor social skills. Worse yet, eventually that starts feeling like the norm to you; it no longer feels strange to be the only one without any real friends. When you do find a suitable social group, on the other hand, I can testify from experience that the feeling is absolutely exhilarating. That's pretty much the main reason I'm glad I found Less Wrong.

Comment author: Tem42 04 July 2015 03:15:47AM *  1 point [-]

It is not true that people cannot - or do not - interact successfully with people that are less intelligent than they are. Many children get along well with their younger siblings. Many adults love being kindergarten teachers... Or feel highly engaged working in the dementia wing of the rest home. Many people of all intelligence levels love having very dumb pets. These are not people (or beings) that you relate to because of their 'relatability' in the sense that they are like you, but because they are meaningful to you. And interacting with people builds social skills appropriate to those people -- which may not be very generalizable when you are practicing interacting with kindergarten students, but is certainly a useful skill when you are interacting with average people.

I personally would think that the problem under discussion is not related to intelligence, but in trying to help an introvert identify the most fulfilling interpersonal bonds without making them more social in a general sense. However, I don't know the kid in question, so I can't say.

Comment author: alienist 11 December 2014 05:24:24AM 9 points [-]

Here is Paul Graham's essay on the subject.

Comment author: philh 09 December 2014 10:57:17PM 6 points [-]

My friend isn't obviously-to-me wrong, but their argument is unconvincing to me.

It's normal for a smart kid to be kind of lonely - if true, that's sad, and by default we should try to fix it.

It builds substance - citation needed. It seems like it could just as easily build insecurity, resentment, etc.

Lousy social life - this is a failure mode. It might not be the worst one, but it seems like the most likely one, so deserving of attention.

Ditzy adolescent - how likely is this?

FWIW, I'm an adult who was kind of lonely as a kid, and on the margin I think that having a more active social life then would have had positive effects on me now.

Comment author: dxu 11 December 2014 05:08:15AM *  5 points [-]

It's normal for a smart kid to be kind of lonely - if true, that's sad, and by default we should try to fix it.

True, but it may be one of those problems that's just not fixable without seriously restructuring the school system, especially if something like Viliam_Bur's theory is true.

It builds substance - citation needed. It seems like it could just as easily build insecurity, resentment, etc.

Speaking from experience, I can tell you that I know a lot more than any of my peers (I'm 16), and practically all of that is due to the reading I did and am still doing. That reading was a direct result of my isolation and would likely not have occurred had I been more socially accepted. I should add that I have never once felt resentment or insecurity due to this, though I have developed a slight sense of superiority. (That last part is something I am working to fix.)

Lousy social life - this is a failure mode. It might not be the worst one, but it seems like the most likely one, so deserving of attention.

I suppose this one depends on how you define a "failure mode". I have never viewed my lack of social life as a bad thing or even a hindrance, and it doesn't seem like it will have many long-term effects either--it's not like I'll be regularly interacting with my current peers for the rest of my life.

Ditzy adolescent - how likely is this?

Again, this depends on how you define "ditzy". Based on my observations of a typical high school student at my age, I would not hesitate to classify over 90% of them as "ditzy", if by "ditzy" you mean "playing social status games that will have little impact later on in life". I shudder at the thought of ever becoming like that, which to me sounds like a much worse prospect than not having much of a social life.

FWIW, I'm an adult who was kind of lonely as a kid, and on the margin I think that having a more active social life then would have had positive effects on me now.

I see. Well, to each his own. I myself cannot imagine growing up with anything other than the childhood I did, but that may just be lack of imagination on my part. Who knows; maybe I would have turned out better than I did if I had had more social interaction during childhood. Then again, I might not have. Without concrete data, it's really hard to say.

Comment author: mindspillage 13 December 2014 08:11:33AM 1 point [-]

It builds substance - citation needed. It seems like it could just as easily build insecurity, resentment, etc.

Speaking from experience, I can tell you that I know a lot more than any of my peers (I'm 16), and practically all of that is due to the reading I did and am still doing. That reading was a direct result of my isolation and would likely not have occurred had I been more socially accepted. I should add that I have never once felt resentment or insecurity due to this, though I have developed a slight sense of superiority. (That last part is something I am working to fix.)

Reading a ton as a teen was very helpful to me also, but I think I would have still done it if I had a rich social life of people who were also smart and enjoyed reading. Ultimately being around peers who challenge me is more motivating than being isolated; I don't want to be the one dragging behind.

I do feel that I had to learn a fair amount of basic social skills through deliberately watching and taking apart, rather than just learning through doing--making me somewhat the social equivalent of someone who has learned a foreign language through study rather than by growing up a native speaker; I have the pattern of strengths and weaknesses associated with the different approach.

Comment author: NancyLebovitz 10 December 2014 04:16:44AM 3 points [-]

There may be a choice between a lot of time thinking/learning vs. a lot of time socializing.

It seems to me that a lot of famous creative people were childhood invalids, though I haven't heard of any such from recent decades. It may be that the right level of invalidism isn't common any more.

Comment author: John_Maxwell_IV 13 December 2014 09:17:35AM 2 points [-]

I think I remember reading that famous inventors were likely to be isolated due to illness as children. I think it's unlikely that intelligence is decreased by being well-socialized, but it seems possible to me that people who are very well-socialized might find themselves thinking of fewer original ideas.

Comment author: gattsuru 08 December 2014 10:06:04PM 7 points [-]

Are there any good trust, value, or reputation metrics in the open source space? I've recently established a small internal-use Discourse forum and been rather appalled by the limitations of what is intended to be a next-generation system (status flag, number of posts, tagging), and from a quick overview most competitors don't seem to be much stronger. Even fairly specialist fora only seem marginally more capable.

This is obviously a really hard problem and conflux of many other hard problems, but it seems odd that there are so many obvious improvements available.

((Inspired somewhat by my frustration with Karma, but I'm honestly more interested in its relevance for outside situations.))

Comment author: Viliam_Bur 09 December 2014 10:42:17AM 8 points [-]

Tangentially, is it possible for a good reputation metric to survive attacks in real life?

Imagine that you become e.g. a famous computer programmer. But although you are a celebrity among free software people, you fail to convert this fame to money. So you must keep a day job at a computer company which produces shitty software.

One day your boss will realize that you have high prestige in the given metric, and the company has low prestige. So the boss will ask you to "recommend" the company on your social network page (which would increase the company prestige and hopefully increase the profit; might decrease your prestige as a side effect). Maybe this would be illegal, but let's suppose it isn't, or that you are not in a position to refuse. Or you could imagine a more dramatic situation: you are a widely respected political or economical expert, it is 12 hours before election, and a political party has kidnapped your family and threatens to kill them unless you "recommend" this party, which according to their model would help them win the election.

In other words, even a digital system that works well could be vulnerable to attacks from outside of the system, where otherwise trustworthy people are forced to act against their will. A possible defense would be if people could somehow hide their votes; e.g. your boss might know that you have high prestige and the company has low prestige, but has no methods to verify whether you have "recommended" the company or not (so you could just lie that you did). But if we make everything secret, is there a way to verify whether the system is really working as described? (The owner of the system could just add 9000 trust points to his favorite political party and no one would ever find out.)

I suspect this is all confused and I am asking a wrong question. So feel free to answer to question I should have asked.

Comment author: kpreid 09 December 2014 06:07:27PM 3 points [-]

I don't have a solution for you, but a related probably-unsolvable problem is what some friends of mine call “cashing in your reputation capital”: having done the work to build up a reputation (for trustworthiness, in particular), you betray it in a profitable way and run.

… otherwise trustworthy people are forced to act against their will. … But if we make everything secret, is there a way to verify whether the system is really working as described?

This is a problem in elections. In the US (I believe depending on state) there are rules which are intended to prevent someone from being able to provide proof that they have voted a particular way (to make coercion futile), and the question then is whether the vote counting is accurate. I would suggest that the topic of designing fair elections contains the answer to your question insofar as an answer exists.

Comment author: alienist 11 December 2014 06:57:51AM 6 points [-]

In the US (I believe depending on state) there are rules which are intended to prevent someone from being able to provide proof that they have voted a particular way (to make coercion futile),

And then there are absentee ballots which potentially make said laws a joke.

Comment author: gattsuru 09 December 2014 08:19:03PM *  2 points [-]

There are simultaneously a large number of laws prohibiting employers from retaliating against persons for voting, and a number of accusations of retaliation for voting. So this isn't a theoretical issue. I'm not sure it's distinct from other methods of compromising trusted users -- the effects are similar whether the compromised node was beaten with a wrench, got brain-eaten, or just trusted Microsoft with their Certificates -- but it's a good demonstration that you simply can't trust any node inside a network.

(There's some interesting overlap with MIRI's value stability questions, but they're probably outside the scope of this thread and possibly only metaphor-level.)

Interestingly, there are some security metrics designed with the assumption that some number of their nodes will be compromised, and with some resistance to such attacks. I've not seen this expanded to reputation metrics, though, and there are technical limitations. Tor, for example, can only resist about a third of its nodes being compromised, and possibly fewer than that. Other setups have higher theoretical resistance, but are dependent on central high-value nodes that trade resistance to compromise for vulnerability to spoofing.

It seems like there's some value in closing the gap between carrier wave and signal in reputation systems, rather than a discrete reputation system, but my sketched out implementations become computationally intractable quickly.

Comment author: Lumifer 09 December 2014 06:39:18PM 4 points [-]

Are there any good trust, value, or reputation metrics

The first problem is defining what do you want to measure. "Trust" and "reputation" are two-argument functions and "value" is notoriously vague.

Comment author: gattsuru 09 December 2014 08:32:08PM 4 points [-]

For clarity, I meant "trust" and "reputation" in the technical senses, where "trust" is authentication, and where "reputation" is an assessment or group of assessments for (ideally trusted) user ratings of another user.

But good point, especially for value systems.

Comment author: Lumifer 09 December 2014 09:10:41PM *  1 point [-]

I am still confused. When you say that trust is authentication, what is it that you authenticate? Do you mean trust in the same sense as "web of trust" in PGP-type crypto systems?

For reputation as an assessment of user ratings, you can obviously build a bunch of various metrics, but the real question is which one is the best. And that question implies another one: Best for what?

Note that weeding out idiots, sockpuppets, and trolls is much easier than constructing a useful-for-everyone ranking of legitimate users. Different people will expect and want your rankings to do different things.

Comment author: gattsuru 09 December 2014 11:34:00PM *  3 points [-]

what is it that you authenticate? Do you mean trust in the same sense as "web of trust" in PGP-type crypto systems?

For starters, a system to be sure that a user or service is the same user or service it was previously. A web of trust /or/ a central authority would work, but honestly we run into limits even before the gap between electronic worlds and meatspace. PGP would be nice, but PGP itself is closed-source, and neither PGP nor OpenPGP/GPG are user-accessible enough to even survive in the e-mail sphere they were originally intended to operate in. SSL allows for server authentication (ignoring the technical issues), but isn't great for user authentication.

I'm not aware of any generalized implementation for other use, and the closest precursors (keychain management in Murmur/Mumble server control?) are both limited and intended to be application-specific. But at the same time, I recognize that I don't follow the security or open-source worlds as much as I should.

For reputation as an assessment of user ratings, you can obviously build a bunch of various metrics, but the real question is which one is the best. And that question implies another one: Best for what?

Oh, yeah. It's not an easy problem to solve Right.

I'm more interested in if anyone's trying to solve it. I can see a lot of issues with a user-based reputation even in addition to the obvious limitation and tradeoffs that fubarobfusco provides -- a visible metric is more prone to being gamed but obscuring the metric reduces its utility as a feedback for 'good' posting, value drift without a defined root versus possible closure without, so on.

What surprises me is that there are so few attempts to improve the system beyond the basics. IP.Board, vBulletin, and phpBB plugins are usually pretty similar -- the best I've seen merely lets you disable them on a per-subforum basis rather than globally, and they otherwise use a single point score. Reddit uses the same Karma system whether you're answering a complex scientific question or making a bad joke. LessWrong improves on that only by allowing users to see how contentious a comment's scoring is. Discourse uses a count of posts and tags, which is almost embarrassingly minimalistic. I've seen a few systems that make moderator and admin 'likes' count for more. I think that's about the fanciest.

I don't expect them to have an implementation that matches my desires, but I'm really surprised that there are no attempts to run multi-dimensional reputation systems, to weigh votes by length of post or age of poster, or to apply spellcheck or capitalization thresholds. These might even be /bad/ decisions, but usually you see someone making them.
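To make the idea concrete, here is a toy sketch of one such scheme, weighting votes by voter account age and activity and exposing a "contentiousness" measure. Every threshold and weight below is invented for illustration and matches no real forum package:

```python
from dataclasses import dataclass

@dataclass
class Voter:
    account_age_days: int
    post_count: int

def vote_weight(voter: Voter) -> float:
    """Weigh a vote by account age and activity; new or inactive
    accounts count for less, capping the damage a batch of fresh
    sockpuppet accounts can do. All thresholds are illustrative."""
    age_factor = min(voter.account_age_days / 365.0, 1.0)   # full weight after a year
    activity_factor = min(voter.post_count / 100.0, 1.0)    # full weight after 100 posts
    return 0.5 + 0.5 * age_factor * activity_factor         # weight in [0.5, 1.0]

def comment_score(votes):
    """votes: list of (Voter, +1 or -1) pairs. Returns the weighted net
    score and a contentiousness measure: 0 when everyone agrees, near 1
    when heavy voting in both directions nearly cancels out."""
    net = sum(vote_weight(v) * direction for v, direction in votes)
    total = sum(vote_weight(v) for v, _ in votes)
    contentiousness = 1 - abs(net) / total if total else 0.0
    return net, contentiousness
```

A vote from `Voter(account_age_days=10, post_count=2)` then counts for about half as much as one from a year-old, active account, which is exactly the kind of trade-off the existing single-number systems don't let an administrator make.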

I expect Twitter or Facebook have something complex underneath the hood, but if they do, they're not talking about the specifics and not doing a very good job. Maybe it's their dominance in the social development community, but I dunno.

Comment author: Lumifer 10 December 2014 02:00:48AM 1 point [-]

For starters, a system to be sure that a user or service is the same user or service it was previously.

That seems to be pretty trivial. What's wrong with a username/password combo (besides all the usual things) or, if you want to get a bit more sophisticated, with having the user generate a private key for himself?

You don't need a web of trust or any central authority to verify that the user named X is in possession of a private key which the user named X had before.
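The sort of check being described is a challenge-response protocol. A real deployment would use asymmetric signatures (e.g. Ed25519) so the server never holds the user's secret; Python's standard library lacks those, so this sketch substitutes an HMAC over a shared secret to show the shape of the protocol:

```python
import hashlib
import hmac
import secrets

# Challenge-response sketch of "same key as before" authentication.
# The server stores the key the user registered with; at each login it
# issues a fresh random nonce and checks that the response was computed
# with the same key. The nonce prevents replay of old responses.

def issue_challenge() -> bytes:
    return secrets.token_bytes(32)          # fresh random nonce per login

def respond(user_key: bytes, challenge: bytes) -> bytes:
    return hmac.new(user_key, challenge, hashlib.sha256).digest()

def verify(stored_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(stored_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)               # key the user generated earlier
challenge = issue_challenge()
assert verify(key, challenge, respond(key, challenge))        # same key: passes
assert not verify(secrets.token_bytes(32), challenge,
                  respond(key, challenge))                    # different key: fails
```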

I'm more interested in if anyone's trying to solve it.

Well, again, the critical question is: What are you really trying to achieve?

If you want the online equivalent of the meatspace reputation, well, first meatspace reputation does not exist as one convenient number, and second it's still a two-argument function.

there's no attempts to run multi-dimensional reputation systems, to weigh votes by length of post or age of poster, spellcheck or capitalizations thresholds.

Once again, with feeling :-D -- to which purpose? Generally speaking, if you run a forum all you need is a way to filter out idiots and trolls. Your regular users will figure out reputation on their own and their conclusions will be all different. You can build an automated system to suit your fancy, but there's no guarantee (and, actually, a pretty solid bet) that it won't suit other people well.

I expect Twitter or FaceBook have something complex underneath the hood

Why would Twitter or FB bother assigning reputation to users? They want to filter out bad actors and maximize their eyeballs and their revenue which generally means keeping users sufficiently happy and well-measured.

Comment author: fubarobfusco 10 December 2014 02:30:11AM 2 points [-]

That seems to be pretty trivial. What's wrong with a username/password combo (besides all the usual things)

"All the usual things" are many, and some of them are quite wrong indeed.

If you need solid long-term authentication, outsource it to someone whose business depends on doing it right. Google for instance is really quite good at detecting unauthorized use of an account (i.e. your Gmail getting hacked). It's better (for a number of reasons) not to be beholden to a single authentication provider, though, which is why there are things like OpenID Connect that let users authenticate using Google, Facebook, or various other sources.

On the other hand, if you need authorization without (much) authentication — for instance, to let anonymous users delete their own posts, but not other people's — maybe you want tripcodes.
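The idea behind tripcodes fits in a few lines: the user supplies a secret, only a short hash of it is published, and supplying the same secret later reproduces the same public tag. This is a simplified sketch, not the DES-crypt-based algorithm actual imageboards use, and the salt value is made up:

```python
import hashlib

def tripcode(secret: str, site_salt: bytes = b"example-site-salt") -> str:
    """Derive a short public tag from a user-chosen secret. Anyone
    posting with the same secret gets the same tag, so an anonymous
    user can later prove "I wrote that earlier post" (e.g. to delete
    it) without holding any account."""
    digest = hashlib.sha256(site_salt + secret.encode("utf-8")).hexdigest()
    return digest[:10]

# The same secret always yields the same tag; a different secret does not.
assert tripcode("hunter2") == tripcode("hunter2")
assert tripcode("hunter2") != tripcode("swordfish")
```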

And if you need to detect sock puppets (one person pretending to be several people), you may have an easy time or you may be in hard machine-learning territory. (See the obvious recent thread for more.) Some services — like Wikipedia — seem to attract some really dedicated puppeteers.

Comment author: gattsuru 15 December 2014 09:00:47PM 0 points [-]

What's wrong with a username/password combo (besides all the usual things) or, if you want to get a bit more sophisticated, with having the user generate a private key for himself?

In addition to the usual problems, which are pretty serious to start with, you're relying on the client. To borrow from information security, the client is in the hands of the enemy. Sockpuppet attacks (sybil attacks, in trust-network terms), where one entity pretends to be many different users, and impersonation attacks, where a user pretends to be someone they are not, are both well-documented and exceptionally common. Every forum package I can find relies on social taboos or simply ignoring the problem, followed by direct human administrator intervention, and most don't even make administrator intervention easy.

There are also very few sites that have integrated support for private-key-like technologies, and most forum packages are not readily compatible with even all password managers.

This isn't a problem that can be perfectly solved, true. But right now it's not even got bandaids.

Once again, with feeling :-D -- to which purpose? Generally speaking, if you run a forum all you need is a way to filter out idiots and trolls. Your regular users will figure out reputation on their own and their conclusions will be all different.

"Normal" social reputation runs into pretty significant issues as soon as your group size exceeds even fairly small groups -- I can imagine folk who could handle a couple thousand names, but it's common for a site to have orders of magnitude more users. These systems can provide useful tools for noticing and handling matters that are much more evident in pure data than in "expert judgments". But these are relatively minor benefits.

At a deeper level, a well-formed reputation system should encourage 'good' posting (posting that matches the expressed desires of the forum community) and discourage 'bad' posts (posting that goes against the expressed desires of the forum community), as well as reduce incentives toward me-too or this-is-wrong-stop responses.

This isn't without trade-offs : you'll implicitly make the forum's culture drift more slowly, and encourage surviving dissenters to be contrarians for whom the reputation system doesn't matter. But the existing reputation systems don't let you make that trade-off, and instead you have to decide whether to use a far more naive system that is very vulnerable to attack.

You can build an automated system to suit your fancy, but there's no guarantee (and, actually, a pretty solid bet) that it won't suit other people well.

To some extent -- spell-check and capitalization expectations for a writing community will be different than that of a video game or chemistry forum, help forums will expect shorter-lifespan users than the median community -- but a sizable number of these aspects are common to nearly all communities.

Why would Twitter or FB bother assigning reputation to users? They want to filter out bad actors and maximize their eyeballs and their revenue which generally means keeping users sufficiently happy and well-measured.

They have incentives toward keeping users. "Bad" posters are tautologically a disincentive for most users (exceptions: some folk do show revealed preferences for hearing from terrible people).

Comment author: Lumifer 15 December 2014 09:21:36PM 1 point [-]

the client is in the hands of the enemy

Yes, of course, but if we start to talk in these terms, the first in line is the standard question: What is your threat model?

I also don't think there's a good solution to sockpuppetry short of mandatory biometrics.

But the existing reputation systems don't let you make that trade-off

Why not? The trade-off is in the details of how much reputation matters. There is a large space between reputation being just a number that's not used anywhere and reputation determining what, how, and when can you post.

very vulnerable to attack

Attack? Again, threat model, please.

"Bad" posters are tautologically a disincentive for most users

Not if you can trivially easy block/ignore them which is the case for Twitter and FB.

Comment author: gattsuru 15 December 2014 11:57:16PM *  0 points [-]

What is your threat model?

An attacker creates a large number of nodes and overwhelms any signal in the initial system.

For the specific example of a reddit-based forum, it's trivial for an attacker to make up a sizable proportion of assigned reputation points through the use of sockpuppets. It is only moderately difficult for an attacker to automate the time-consuming portions of this process.

I also don't think there's a good solution to sockpuppetry short of mandatory biometrics.

10% of the problem is hard. That does not explain the small amount of work done on the other 90%. The vast majority of sockpuppets aren't that complicated: most don't use VPNs or anonymizers, most don't use large stylistic variation, and many even use the same browser from one persona to the next. It's also common for a sockpuppet to have certain network attributes in common with its original persona. Full authorship analysis has both structural (primarily training bias) and pragmatic (CPU time) limitations that would make it infeasible for large forums...

But there are a number of fairly simple steps to fight sockpuppets that computers handle better than humans, and yet still require often-unpleasant manual work to check.
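A minimal sketch of the kind of simple, automatable check described above: group accounts that share a network/browser fingerprint and surface the clusters for manual review. The field names (`ip`, `user_agent`) and the account records are invented for illustration; a real system would use more signals (cookies, timing, writing style).

```python
from collections import defaultdict

def flag_possible_sockpuppets(accounts):
    """Group accounts that share a network/browser fingerprint.

    `accounts` is a list of dicts with illustrative keys 'name',
    'ip', and 'user_agent'. Returns clusters of account names that
    share a fingerprint and deserve a manual look.
    """
    by_fingerprint = defaultdict(list)
    for acct in accounts:
        fingerprint = (acct["ip"], acct["user_agent"])
        by_fingerprint[fingerprint].append(acct["name"])
    # Any fingerprint shared by two or more accounts is suspicious.
    return [names for names in by_fingerprint.values() if len(names) > 1]

accounts = [
    {"name": "alice", "ip": "10.0.0.1", "user_agent": "Firefox/34"},
    {"name": "alice_2", "ip": "10.0.0.1", "user_agent": "Firefox/34"},
    {"name": "bob", "ip": "10.0.0.7", "user_agent": "Chrome/39"},
]
print(flag_possible_sockpuppets(accounts))  # [['alice', 'alice_2']]
```

This catches only the laziest sockpuppets, of course, but per the comment above, that is most of them.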

Why not? The trade-off is in the details of how much reputation matters. There is a large space between reputation being just a number that's not used anywhere and reputation determining what, how, and when can you post.

Yes, but there aren't open-source systems that exist and have documentation which do these things beyond the most basic level. At most, there are simple reputation systems where a small amount has an impact on site functionality, such as this site. But Reddit's codebase does not allow upvotes to be limited or weighted based on the age of an account, and would require pretty significant work to change any of these attributes. (The main site at least acts against some of the more overt mass-downvoting by acting against downvotes applied to the profile page, but this doesn't seem present here?)

Not if you can trivially easy block/ignore them which is the case for Twitter and FB.

If a large enough percentage of outside user content is "bad", users begin to treat that space as advertising and ignore it. Many forums also don't make it easy to block users, and almost none handle blocking even the most overt of sockpuppets well.

Comment author: alienist 19 December 2014 06:11:28AM 7 points [-]

An attacker creates a large number of nodes and overwhelms any signal in the initial system.

For the specific example of a reddit-based forum, it's trivial for an attacker to make up a sizable proportion of assigned reputation points through the use of sockpuppets. It is only moderately difficult for an attacker to automate the time-consuming portions of this process.

Limit the ability of low karma users to upvote.
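A minimal sketch of that rule, with thresholds and parameter names invented for illustration: votes from accounts below a karma floor, or younger than some minimum age, simply carry no weight.

```python
# Illustrative thresholds -- not any real forum's values.
MIN_KARMA_TO_VOTE = 20
MIN_ACCOUNT_AGE_DAYS = 7

def vote_weight(voter_karma, account_age_days):
    """Weight of one upvote: 0 for accounts that are too new or
    too low-karma, 1 for established accounts."""
    if voter_karma < MIN_KARMA_TO_VOTE or account_age_days < MIN_ACCOUNT_AGE_DAYS:
        return 0  # fresh sockpuppets contribute nothing
    return 1      # established accounts vote normally

print(vote_weight(0, 1))      # 0 -- a brand-new sockpuppet
print(vote_weight(150, 400))  # 1 -- an established account
```

This forces a sockpuppet army to first earn karma the slow way before its votes count, which raises the cost of the attack considerably.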

Comment author: Lumifer 16 December 2014 01:34:54AM *  1 point [-]

You seem to want to build a massive sledgehammer-wielding mech to solve the problem of fruit flies on a banana.

So the attacker expends a not inconsiderable amount of effort to build his sockpuppet army and achieves sky-high karma on a forum. And..? It's not like you can sell karma, or even gain respect for your posts from anyone other than newbies. What would be the point?

Not to mention that there is a lot of empirical evidence out there -- formal reputation systems on forums go back at least as far as early Slashdot and, y'know, they kinda work. They don't achieve anything spectacular, but they also tend not to have massive failure modes. Once the sockpuppet general gains the attention of an admin or at least a moderator, his army is useless.

You want to write a library which will attempt to identify sockpuppets through some kind of multifactor analysis? Sure, that would be a nice thing to have -- as long as it's reasonable about things. One of the problems with automated defense mechanisms is that they can often be used as DOS tools if the admin is not careful.

If a large enough percentage of outside user content is "bad"

That still actually is the case for Twitter and FB.

Comment author: iamthelowercase 12 June 2015 04:25:04PM 0 points [-]

Inre: Facebook/Twitter:

TL;DR: I think Twitter, Facebook, et al. do have something complex, but it is outside the hood rather than under it. (I guess they could have both.)

The "friending" system takes advantage of human's built-in reputation system. When I look at X's user page, it tells me that W, Y, and Z also follow/"friended" X. Then when I make my judgement of X, X leaches some amount of "free" "reputation points" from Z's "reputation". Of course, if W, Y, and Z all have bad reputations, that is reflected. Maybe W and Z have good reputations, but Y does not -- now I'm not sure what X's reputation should be like and need to look at X more closely.

Of course, this doesn't scale beyond a couple hundred people.

Comment author: Lumifer 17 December 2014 10:31:36PM 1 point [-]

You may be interested in the new system called Dissent

Comment author: fubarobfusco 08 December 2014 11:32:41PM 1 point [-]

I don't know of one. I doubt that everyone wants the same sort of thing out of such a metric. Just off the top of my head, some possible conflicts:

  • Is a post good because it attracts a lot of responses? Then a flamebait post that riles people into an unproductive squabble is a good post.
  • Is a post good because it leads to increased readership? Then spamming other forums to promote a post makes it a better post, and posting porn (or something else irrelevant that attracts attention) is really very good.
  • Is a post good because a lot of users upvote it? Then people who create sock-puppet accounts to upvote themselves are better posters; as are people who recruit their friends to mass-upvote their posts.
  • Is a post good because the moderator approves of it? Then as the forum becomes more popular, if the moderator has no additional time to review posts, a diminishing fraction of posts are good.

The old wiki-oid site Everything2 explicitly assigns "levels" to users, based on how popular their posts are. Users who have proven themselves have the ability to signal-boost posts they like with a super-upvote.

It seems to me that something analogous to PageRank would be an interesting approach: the estimated quality of a post is specifically an estimate of how likely a high-quality forum member is to appreciate that post. Long-term high-quality posters' upvotes should probably count for a lot more than newcomers' votes. And moderators or other central, core-team users should probably be able to manually adjust a poster's quality score to compensate for things like a formerly-good poster going off the deep end, the revelation that someone is a troll or saboteur, or (in the positive direction) someone of known-good offline reputation joining the forum.
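The PageRank-style idea above can be sketched in a few lines: iterate a fixed point where each upvote transfers reputation from voter to author, so an upvote from a high-reputation user is worth more than one from a newcomer. The `(voter, author)` pair format and the damping value are assumptions for the sketch, not any real forum's schema; users who cast no votes simply don't redistribute their share, which is a simplification a real system would handle.

```python
def reputation(upvotes, n_users, iterations=50, damping=0.85):
    """PageRank-style reputation over upvotes.

    `upvotes` is a list of (voter, author) index pairs. Each voter
    splits their influence evenly across everyone they upvoted.
    """
    rep = [1.0 / n_users] * n_users
    outdeg = [0] * n_users
    for voter, _ in upvotes:
        outdeg[voter] += 1
    for _ in range(iterations):
        # Everyone keeps a small baseline (the damping term), the rest
        # flows along upvote edges weighted by the voter's reputation.
        new = [(1 - damping) / n_users] * n_users
        for voter, author in upvotes:
            new[author] += damping * rep[voter] / outdeg[voter]
        rep = new
    return rep

# User 1 is upvoted by users 0 and 2; user 2 is upvoted only by user 1.
rep = reputation([(0, 1), (2, 1), (1, 2)], n_users=3)
print(rep)  # user 1 ends up with the highest score
```

This captures the property fubarobfusco wants: upvotes from long-term high-quality posters count for more, and a ring of zero-reputation sockpuppets upvoting each other gains little.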

Comment author: Punoxysm 08 December 2014 09:00:49PM 7 points [-]

Can anyone link a deep discussion, including energy and time requirements, issues with spaceship shielding from radiation and collisions, etc., that would be involved in interstellar travel? I ask because I am wondering whether this is substantially more difficult than we often imagine, and perhaps a bottleneck in the Drake Equation

Comment author: Alsadius 09 December 2014 12:06:15AM *  9 points [-]

tl;dr: It is definitely more difficult than most people think, because most people's thoughts (even scientifically educated ones) are heavily influenced by sci-fi, which is almost invariably premised on having easy interstellar transport. Even authors like Clarke who posit difficult interstellar transport assume that the obvious problems (e.g., lightspeed) remain, but the non-obvious problems (e.g., what happens when something breaks when you're two light-years from the nearest macroscopic object) disappear.

Comment author: gjm 09 December 2014 02:02:17AM 4 points [-]

Some comments on this from Charles Stross. Not optimistic about the prospects. Somewhat quantitative, at the back-of-envelope level of detail.

Comment author: shminux 08 December 2014 09:08:45PM 4 points [-]

Project Icarus seems like a decent place to start.

Comment author: Eniac 10 December 2014 04:41:14AM 2 points [-]

You might want to check out Centauri Dreams, best blog ever and dedicated to this issue.

Comment author: lukeprog 10 December 2014 03:46:42AM 2 points [-]

A fair bit of this is either cited or calculated within "Eternity in six hours." See also my interview with one of its authors, and this review by Nick Beckstead.

Comment author: CBHacking 08 December 2014 06:24:50PM 6 points [-]

Can anybody give me a good description of the term "metaphysical" or "metaphysics" in a way that is likely to stick in my head and be applicable to future contemplations and conversations? I have tried to read a few definitions and descriptions, but I've never been able to really grok any of them and even when I thought I had a working definition it slipped out of my head when I tried to use it later. Right now its default function in my brain is, when uttered, to raise a flag that signifies "I can't tell if this person is speaking at a level significantly above my comprehension or is just spouting bullshit, but either way I'm not likely to make sense of what they're saying" and therefore tends to just kind of kill the mental process that that was trying to follow what somebody was saying to me / what I was reading.

Given how often it comes up, and often from people I respect, I'm pretty sure that's not the correct behavior. Figured it's worth asking here. In case it wasn't obvious, I have virtually no background in philosophy (though I've been looking to change that).

Comment author: Anatoly_Vorobey 08 December 2014 06:39:43PM *  13 points [-]

Metaphysics: what's out there? Epistemology: how do I learn about it? Ethics: what should I do with it?

Basically, think of any questions that are of the form "what's there in the world", "what is the world made of", and now take away actual science. What's left is metaphysics. "Is the world real or a figment of my imagination?", "is there such a thing as a soul?", "is there such a thing as the color blue, as opposed to objects that are blue or not blue?", "is there life after death?", "are there higher beings?", "can infinity exist?", etc. etc.

Note that "metaphysical" also tends to be used as a feel-good word, meaning something like "nobly philosophical, concerned with questions of a higher nature than the everyday and the mundane".

Comment author: polymathwannabe 08 December 2014 06:40:38PM 1 point [-]

Metaphysics: what's out there?

Isn't that ontology? What's the difference?

Comment author: Anatoly_Vorobey 08 December 2014 06:54:11PM 12 points [-]

"Ontology" is firmly dedicated to "exist or doesn't exist". Metaphysics is more broadly "what's the world like?" and includes ontology as a central subfield.

Whether there is free will is a metaphysical question, but not, I think, an ontological one (at least not necessarily). "Free will" is not a thing or a category or a property, it's a claim that in some broad aspects the world is like this and not like that.

Whether such things as desires or intentions exist or are made-up fictions is an ontological question.

Comment author: Gvaerg 08 December 2014 07:05:39PM *  1 point [-]

Thanks! I've seen many times the statement that ontology is strictly included in metaphysics, but this is the first time I've seen an example of something that's in the set-theoretic difference.

Comment author: ChristianKl 08 December 2014 06:45:31PM 1 point [-]

Ontology is a subdiscipline of metaphysics.

Is the many-worlds hypothesis true? That might be a metaphysical question that is not directly ontological.

Comment author: gjm 08 December 2014 07:56:56PM 10 points [-]

This is in no way an answer to your actual question (Anatoly's is good) but it might amuse you.

"Meta" in Greek means something like "after" (but also "beside", "among", and various other things). So there is a

Common misapprehension: metaphysics is so called because it goes beyond physics -- it's more abstract, more subtle, more elevated, more fundamental, etc.

This turns out not to be quite where the word comes from, so there is a

Common response": actually, it's all because Aristotle wrote a book called "Physics" and another, for which he left no title, that was commonly shelved after the "Physics" -- *meta ta Phusika -- and was commonly called the "Metaphysics". And the topics treated in that book came to be called by that name. So the "meta" in the name really has nothing at all to do with the relationship between the subjects.

But actually it's a bit more complicated than that; here's the

Truth (so far as I understand it): indeed Aristotle wrote those books, and indeed the "Metaphysics" is concerned with, well, metaphysics, and indeed the "Metaphysics" is called that because it comes "after the Physics". But the earliest sources we have suggest that the reason why the Metaphysics came after the Physics is that Aristotle thought it was important for physics to be taught first. So actually it's not far off to say that metaphysics is so called because it goes beyond physics, at least in the sense of being a more advanced topic (in Aristotle's time).

Comment author: TheOtherDave 08 December 2014 09:01:07PM *  1 point [-]

In my experience people use "metaphysics" to refer to philosophical exploration of what kinds of things exist and what the nature, behavior, etc. of those things is.

This is usually treated as distinct from scientific/experimental exploration of what kinds of things exist and what the nature, behavior, etc. of those things is, although those lines are blurry. So, for example, when Yudkowsky cites Barbour discussing the configuration spaces underlying experienced reality, there will be some disagreement/confusion about whether this is a conversation about physics or metaphysics, and it's not clear that there's a fact of the matter.

This is also usually treated as distinct from exploration of objects and experiences that present themselves to our senses and our intuitive reasoning... e.g. shoes and ducks and chocolate cake. As a consequence, describing a thought or worldview or other cognitive act as "metaphysical" can become a status maneuver... a way of distinguishing it from object-level cognition in an implied context where more object-level (aka "superficial") cognition is seen as less sophisticated or deep or otherwise less valuable.

Some people also use "metaphysical" to refer to a class of events also sometimes referred to as "mystical," "occult," "supernatural," etc. Sometimes this usage is consistent with the above -- that is, sometimes people are articulating a model of the world in which those events can best be understood by understanding the reality which underlies our experience of the world.

Other times it's at best metaphorical, or just outright bullshit.

As far as correct behavior goes... asking people to taboo "metaphysical" is often helpful.

Comment author: CBHacking 08 December 2014 10:10:13PM 1 point [-]

The rationalist taboo is one of the tools I have most enjoyed learning and found most useful in face-to-face conversations since discovering the Sequences. Unfortunately, it's not practical when dealing with mass-broadcast or time-shifted material, which makes it of limited use in dealing with the scenarios where I most frequently encounter the concept of metaphysics.

I tend to (over)react poorly to status maneuvers, which is probably part of why I've had a hard time with the word; it gets used in an information-free way sufficiently often that I'm tempted to just always shelve it there, and that in turn leads me to discount or even ignore the entire thought which contained it. This is a bias I'm actively trying to brainhack away, and I'm now tempted to go find some of my philosophically-inclined social circle and see if I can avoid that automatic reaction at least where this specific word is concerned (and then taboo it anyhow, for the sake of communication being informative).

I still haven't fully internalized the concept, but I'm getting closer. "The kinds of things that exist, and their natures" is something I can see a use for, and hopefully I can make it stick in my head this time.

Comment author: torekp 10 December 2014 12:11:03AM 5 points [-]

True, false, or neither?: It is currently an open/controversial/speculative question in physics whether time is discretized.

Comment author: polymathwannabe 10 December 2014 01:37:28AM 8 points [-]

The Wikipedia article on Planck time says:

Theoretically, this is the smallest time measurement that will ever be possible, roughly 10^−43 seconds. Within the framework of the laws of physics as we understand them today, for times less than one Planck time apart, we can neither measure nor detect any change.

However, the article on Chronon says:

The Planck time is a theoretical lower-bound on the length of time that could exist between two connected events, but it is not a quantization of time itself since there is no requirement that the time between two events be separated by a discrete number of Planck times.
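For a sense of where that "roughly 10^−43 seconds" figure comes from: the Planck time follows directly from three fundamental constants, t_P = sqrt(ħG/c⁵). A quick check (constant values are CODATA-style approximations):

```python
import math

# Planck time from fundamental constants: t_P = sqrt(hbar * G / c**5)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(f"{t_planck:.3e} s")  # ~5.391e-44 s
```

So the exact value is about 5.4 × 10⁻⁴⁴ s, consistent with the article's "roughly 10^−43" figure.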

Comment author: iamthelowercase 12 June 2015 08:20:30AM 0 points [-]

So, if I understand this rightly-

Any two events must take place at least one Planck time apart. But so long as they do, it can be any number of Planck times -- even, say, pi. Right?

Comment author: Grothor 10 December 2014 05:08:48AM 1 point [-]

Many things in our best models of physics are discrete, but as far as I know, our coordinates (time, space, or four-dimensional space-time coordinates) are never discrete. Even quantum field theory, which treats things in a non-intuitively discrete way, does not do this. For example, we might view the process of an electron scattering off another electron as an exchange of many discrete photons between the two electrons, but it is all written in terms of integrals or derivatives, rather than differences or sums.

Comment author: timujin 09 December 2014 09:15:30AM 5 points [-]

I have a constant impression that everyone around me is more competent than me at everything. Does it actually mean that I am, or is there some sort of strong psychological effect that can create that impression, even if it is not actually true? If there is, is it a problem you should see your therapist about?

Comment author: Toggle 09 December 2014 06:37:29PM *  16 points [-]

Reminds me of something Scott said once:

And when I tried to analyze my certainty that -- even despite the whole multiple intelligences thing -- I couldn't possibly be as good as them, it boiled down to something like this: they were talented at hard things, but I was only talented at easy things.

It took me about ten years to figure out the flaw in this argument, by the way.

Comment author: Gondolinian 12 December 2014 01:28:23AM *  7 points [-]

See also: The Illusion of Winning by Scott Adams (h/t Kaj_Sotala)

Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.

But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.

I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.

It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes and simply compared. It feels redundant to play the actual games.
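Adams's claim that a best-of-five series eliminates most of the luck can be checked with a little probability. If the stronger player wins each independent game with probability p, the chance they take the series (first to three wins) is strictly larger than p whenever p > 1/2:

```python
from math import comb

def win_best_of_5(p):
    """P(winning 3 games before the opponent does), games independent.

    The series ends when someone reaches 3 wins; sum over winning
    the clinching game on game 3, 4, or 5, having lost k earlier games.
    """
    q = 1 - p
    return sum(comb(2 + k, k) * p**3 * q**k for k in range(3))

print(win_best_of_5(0.95))  # ~0.9988 -- a 95% per-game edge becomes near-certainty
print(win_best_of_5(0.60))  # ~0.6826 -- even a modest edge is amplified
```

So with Adams's 95% per-game figure, the best-of-five winner is determined to about 99.9% certainty, which is his point: playing the series is nearly redundant.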

Comment author: Gunnar_Zarncke 09 December 2014 09:34:42PM *  5 points [-]

This reminds me of my criteria for learning: "You have understood something when it appears to be easy." The mathematicians call this state 'trivial'. It has become easy because you trained the topic until the key aspects became part of your unconscious competence. Then it appears to yourself as easy - because you no longer need to think about it.

Comment author: Viliam_Bur 09 December 2014 10:50:04AM 6 points [-]

Impostor syndrome:

Despite external evidence of their competence, those with the syndrome remain convinced that they are frauds and do not deserve the success they have achieved. Proof of success is dismissed as luck, timing, or as a result of deceiving others into thinking they are more intelligent and competent than they believe themselves to be.

Psychological research done in the early 1980s estimated that two out of five successful people consider themselves frauds and other studies have found that 70 percent of all people feel like impostors at one time or another. It is not considered a psychological disorder, and is not among the conditions described in the Diagnostic and Statistical Manual of Mental Disorders.

Comment author: timujin 09 December 2014 11:30:42AM 3 points [-]

Err, that's not it. I am no more successful than them. Or, at least, I kinda feel that everyone else is more successful than me as well.

Comment author: Elo 25 December 2014 01:11:48AM 0 points [-]

Consider that maybe you might be wrong about the impostor syndrome. As a person without it, it's hard to know how you think/feel and how you concluded that you couldn't have it. But maybe it's worth asking: how would someone convince you to change your mind on this topic?

Comment author: timujin 25 December 2014 05:40:06PM *  0 points [-]

By entering some important situation where my and his comparative advantage in some sort of competence comes into play, and losing.

Comment author: Elo 28 December 2014 08:57:21AM 1 point [-]

What if you developed a few bad heuristics along the way, about how other successful people were not inherently more successful but just got lucky (or had success granted to them externally), whereas your own successes were due to personal skill: hard-earned, personally achieved?

It's probably possible to see a therapist about it, but I would suggest you can work your own way around it (consider it a challenge that can be overcome with the correct growth mindset).

Comment author: [deleted] 09 December 2014 06:40:34PM 4 points [-]

I think people are quick to challenge this type of impression because it pattern matches to known cognitive distortions involved in things like depression, or known insecurities in certain competitive situations.

For example, consider that most everyone will structure their lives such that their weaknesses are downplayed and their positive features are more prominent. This can happen either by choice of activity (e.g. the stereotypical geek avoids social games) or by more overt communication filtering (e.g. most people don't talk about their anger problems). Accordingly, it's never hard to find information that confirms your own relative incompetence, if there's some emotional tendency to look for it.

Aside from that, a great question is "to what ends am I making this comparison?" I find it unlikely that you have a purely academic interest in the question of your relative competence.

First, it can often be useful to know your relative competence in a specific competitive domain. But even here, this information is only one part of your decision process: You may be okay with e.g. choosing a lower expected rank in one career over a higher rank in another because you enjoy the work more, or find it more compatible with your values, or because it pays better, or leaves more time for your family, or you're risk averse, or it's more altruistic, etc. But knowing your likely rank along some dimension will tell you a bit about the likely pay-offs of competing along that dimension.

But what is the use of making an across-the-board self-comparison?

Suppose you constructed some general measure of competence across all domains. Suppose you found out you were below average (or even above average). Then what? It seems you're still in the same situation as before: You still must choose how to spend your time. The general self-comparison measure is nothing more than the aggregate of your expected relative ranks on specific sub-domains, which are more relevant to any specific choice. And as I said above, your expected rank in some area is far from the only bit of information you care about.

As an aside, a positive use for a self-comparison is to provide a role-model. If you find yourself unfavorably compared to almost everyone, consider yourself lucky that you have so many role-models to choose from! Since you are probably like other people in most respects, you can expect to find low-hanging fruit in many areas where you have poor relative performance.

But if you find (as many people will) that you've hit the point of diminishing returns regarding the time you spend comparing yourself to others, perhaps you can simply recognize this and realize that it's neither cowardly nor avoidant to spend your mental energy elsewhere.

Comment author: NancyLebovitz 09 December 2014 10:34:29AM 3 points [-]

Possibly parallel-- I've had a feeling for a long time that something bad was about to happen. Relatively recently, I've come to believe that this isn't necessarily an accurate intuition about the world, it's muscle tightness in my abdomen. It's probably part of a larger pattern, since just letting go in the area where I feel it doesn't make much difference.

I believe that patterns of muscle tension and emotions are related and tend to maintain each other.

It's extremely unlikely that everyone is more competent than you at everything. If nothing else, your writing is better than that of a high proportion of people on the internet. Also, a lot of people have painful mental habits and have no idea that they have a problem.

More generally, you could explore the idea of everyone being more competent than you at everything. Is there evidence for this? Evidence against it? Is it likely that you're at the bottom of ability at everything?

This sounds to me like something worth taking to a therapist, bearing in mind that you may have to try more than one therapist to find one that's a good fit.

I believe there's strong psychological effect which can create that impression-- growing up around people who expect you to be incompetent. Now that I think about it, there may be genetic vulnerability involved, too.

Possibly worth exploring: free monthly Feldenkrais exercises -- these are patterns of gentle movement which produce deep relaxation and easier movement. The reason I think you can get some evidence about your situation by trying Feldenkrais is that, if you find your belief about other people being more competent at everything goes away, even briefly, then you have some evidence that the belief is habitual.

Comment author: mwengler 11 December 2014 03:03:39PM 1 point [-]

I've had a feeling for a long time that something bad was about to happen.

Nancy, I believe you are describing anxiety. That you are anxious, that if you went to a psychologist for therapy and you were covered by insurance that they would list your diagnosis on the reimbursement form as "generalized anxiety disorder."

I say this not as a psychologist but as someone who was anxious much of his life. For me it was worth doing regular talking therapy and (it seems to me) hacking my anxiety levels slowly downward through directed introspection. I am still more timid than I would like in situations where, for example, I might be very direct telling a woman (of the appropriate sex) I love her, or putting my own ideas forward forcefully at work. But all of these things I do better now than I did in the past, and I don't consider my self-adjustment to be finished yet.

Anyway, if you haven't named what is happening to you as "anxiety," it might be helpful to consider that some of what has been learned about anxiety over time might be interesting to you, and that people who are discussing anxiety may often be discussing something relevant to you.

Comment author: timujin 09 December 2014 11:34:27AM 1 point [-]

If nothing else, your writing is better than that of a high proportion of people on the internet.

Do you know me?

More generally, you could explore the idea of everyone being more competent than you at everything. Is there evidence for this? Evidence against it? Is it likely that you're at the bottom of ability at everything?

I find a lot of evidence for it, but I am not sure I am not being selective. For example, I am the only one in my peer group who never did any extra-curricular activities at school. While everyone else had something like sports or hobbies, I seemed to only study at school and waste all my other time surfing the internet and playing the same video games over and over.

Comment author: ChristianKl 09 December 2014 12:41:31PM 5 points [-]

The idea that playing an instrument is a hobby while playing a video game isn't is completely cultural. It says something about values but little about competence.

Comment author: jaime2000 12 December 2014 05:12:32PM *  3 points [-]

One important difference is that video games are optimized to be fun while musical instruments aren't. Therefore, playing an instrument can signal discipline in a way that playing a game can't.

Comment author: ChristianKl 12 December 2014 06:43:44PM 0 points [-]

I'm not sure that's true. There's selection pressure on musical instruments to make them fun to use. Most of the corresponding training also mostly isn't optimised for learning but for fun.

Comment author: NancyLebovitz 09 December 2014 04:04:01PM 1 point [-]

Having a background belief that you're worse than everyone at everything probably lowered your initiative.

Comment author: EphemeralNight 12 December 2014 03:26:22AM 2 points [-]

I sometimes have a similar experience, and when I do, it is almost always simply an effect of my own standards of competence being higher than those around me.

Imagine some sort of problem arises in the presence of a small group. The members of that group look at each other, and whoever signals the most confidence gets first crack at the problem. But this more-confident person then does not reveal any knowledge or skill that the others do not possess, because said confidence was entirely due to a higher willingness to potentially make the problem worse through trial and error.

So, in this scenario, feeling less competent does not mean you are less competent; it means you are more risk-averse. Do you have a generalized paralyzing fear of making the problem worse? If so, welcome to the club. If not, never mind.

Comment author: IlyaShpitser 09 December 2014 01:47:07PM *  2 points [-]

There are two separate issues: morale management and being calibrated about your own abilities.

I think the best way to be well-calibrated is to approximate PageRank -- to get a sense of your competence, don't just ask yourself; average the extracted opinions of others who are considered competent and have no incentive to mislead you (this last bit is tricky; also, the extracting process may have to be slightly indirect).

Morale is hard, and person specific. My experience is that in long term projects/goals, morale becomes a serious problem long before the situation actually becomes bad. I think having "wolverine morale" ("You know what Mr. Grizzly? You look like a wuss, I can totally take you!") is a huge chunk of success, bigger than raw ability.

Comment author: Elo 25 December 2014 01:09:46AM *  1 point [-]

Look up impostor syndrome. And try not to automatically say, "I don't have it because I never did anything noteworthy."

---Oh dang; someone else got to it first.

How did you go with your opinions of imposter syndrome now?

Comment author: mwengler 11 December 2014 03:12:12PM 1 point [-]

I personally am a fan of talking therapy. If you are thinking something is worth asking a therapist about, it is worth asking a therapist about. But beyond the generalities, thinking you are not good enough is absolutely right in the targets of the kinds of things it can be helpful to discuss with a therapist.

Consider the propositions: 1) everyone is more competent than you at everything, and 2) you can carry on a coherent conversation on LessWrong. I am pretty sure that these are mutually exclusive propositions. I'm pretty sure just from reading some of your comments that you are more competent than plenty of other people at a reasonable range of intellectual pursuits.

Anything you can talk to a therapist about you can talk to your friends about. Do they think you are less competent than everybody else? They might point out to you in a discussion some fairly obvious evidence for or against this proposition that you are overlooking.

Comment author: timujin 12 December 2014 02:38:30PM 1 point [-]

I asked my friends around. Most were unable to point out a single thing I am good at, except speaking English very well for a foreign language, and having good willpower. One said "hmmm, maybe math?" (as it turned out, he was fast-talked by the math babble that was auraing around me for some time after having read Godel, Escher, Bach), and several pointed out that I am handsome (while a nice perk, I don't want that to be my defining proficiency).

Comment author: mwengler 12 December 2014 02:52:28PM 3 points [-]

Originally you expressed concern that all other people were better than you at all the things you might do.

But here you find out from your friends that for each thing you do there are other people around you who do it better.

In a world with 6 billion people, essentially every one of us can find people who are better at what we are good at than we are. So join the club. What works is to take some pleasure in doing things.

Only you can improve your understanding of the world, for instance. No one in the world is better at increasing your understanding of the world than you are. I read comments here and post "answers" here to increase my understanding of the world. It doesn't matter that other people here are better at answering these questions, or that other people here have a better understanding of the world than I do. I want to increase my understanding of the world and I am the only person in the world who can do that.

I also wish to understand taking pleasure and joy from the world, and work to increase my pleasure and joy in the world. No one can do that for me better than I can. You might take more joy than me in kissing that girl over there. Still, I will kiss her if I can, because having you kiss her gives me much less joy and pleasure than kissing her myself, even if I am getting less joy from kissing her than you would get for yourself if you kissed her.

The concern you express about only participating in things where you are better than everybody else is just a result of your evolution as a human being. The genes that make you care about being better than others around you have, in the past, helped your ancestors find effective and capable mates, able to keep their children alive and able to produce children who would find effective and capable mates. But your genes are just your genes; they are not the "truth of the world." You can make the choice to do things because you want the experience of doing them, and you will find you are better than anybody else in the world by far at giving yourself experiences.

Comment author: elharo 09 December 2014 12:26:23PM *  1 point [-]

Possible, but unlikely. We're all just winging it and as others have pointed out, impostor syndrome is a thing.

Comment author: LizzardWizzard 09 December 2014 10:30:45AM 1 point [-]

I suppose that the problem emerged only because you communicate only with people of your own sort and level of awareness. Try going on a trip to some rural village, or starting conversations with taxi drivers, dishwashers, janitors, cooks, security guards, etc.

Comment author: artemium 17 December 2014 06:59:56AM 4 points [-]

Ok, I have one meta-level super-stupid question. Would it be possible to improve some aspects of the LessWrong webpage? Like making it more readable for mobile devices? Every time I read LW on the tram while going to work, I go insane trying to hit super-small links on the website. As I work in web development/UI design, I would volunteer to work on this. I think in general that the LW website is a bit outdated in terms of both design and functionality, but I presume that this is not considered a priority. However, better readability on mobile screens would be a positive contribution to its purpose.

Comment author: timujin 10 December 2014 07:46:04AM 4 points [-]

In dietary and health articles they often speak about "processed food". What exactly is processed food and what is unprocessed food?

Comment author: Lumifer 10 December 2014 04:05:21PM 10 points [-]

Definitions will vary depending on the purity obsession of the speaker :-) but as a rough guide, most things in cans, jars, boxes, bottles, and cartons will be processed. Things that are, more or less, just raw plants and animals (or parts of them) will be unprocessed.

There are boundary cases about which people argue -- e.g. is pasteurized milk a processed food? -- but for most things in a food store it's pretty clear what's what.

Comment author: Dahlen 08 December 2014 09:20:39PM *  4 points [-]

Is it possible even in principle to perform a "consciousness transfer" from one human body to another? On the same principle as mind uploading, only the mind ends up in another biological body rather than a computer. Can you transfer "software" from one brain to another in a purely informational way, while preserving the anatomical integrity of the second organism? If so, would the recipient organism come from a fully alive and functional human who would be basically killed for this purpose? Or bred for this purpose? Or would it require a complete brain transplant? (If so, how would neural structures found in the second body heal & connect with the transplanted brain so that a functional central nervous system results?) Wouldn't the person whose consciousness is being transferred experience some sort of personality change due to "inhabiting" a structurally different brain or body?

Is this whole hypothesis just an artifact of reminiscent introjected mind-body dualism, not compatible with modern science? Does the science world even know enough about consciousness and the brain to be able to answer this question?

I'm asking this because ever since I found out about ems and mind uploading, having minds moved to bodies rather than computers seemed to me a more appealing hypothetical solution to the problem of death/mortality. Unfortunately, I lack the necessary background knowledge to think coherently about this idea, so I figured there are many people on LW who don't, and could explain to me whether this whole idea makes sense.

Comment author: CBHacking 08 December 2014 10:58:14PM 4 points [-]

I don't think anybody has hard evidence of answers to any of those questions yet (though I'd be fascinated to learn otherwise) but I can offer some conjectures:

Possible in principle? Yes. I see no evidence that sentience and identity are anything other than information stored in the nervous system, and in theory the cognitive portion of a nervous system is an organ and could be transplanted like any other.

Preserving anatomical integrity? Not with anything like current science. We can take non-intrusive brain scans, but they're pretty low-resolution and (so far as I know) strictly read-only. Even simply stimulating parts of the brain isn't enough to basically re-write it in such a way that it becomes another person's brain.

Need to kill donors? To the best of my knowledge, it's theoretically possible to basically mature a human body, including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that's still a potential human - the vegetativeness needs to be reversible for this to be useful - so the ethics are still highly questionable. It's probably possible to do it without a full brain at all, which seems less evil if you can somehow do it by some mechanism other than what amounts to a pre-natal full lobotomy, but that would require the physical brain transplant option for transference.

Nerves connecting and healing? Nerves can repair themselves, though it's usually extremely slow. Stem cell therapies have potential here, though. Connecting the brain to the rest of the body is a lot of nerves, but they're pretty much all sensory and motor nerves so far as I know; the brain itself is fairly self-contained.

Personality change? That depends on how different the new body is from the old, I would guess. The obviously-preferable body is a clone, for many reasons including avoiding the need to avoid immune system rejection of the new brain. Personality is always going to be somewhat externally-driven, so I wouldn't expect somebody transferred from a 90-year-old body to a 20-year-old one to have the same personality regardless of any other information because the body will just be younger. On the other hand, if you use a clone body that's the same age as the transferee, it wouldn't shock me if the personality didn't actually change significantly; it should basically feel like going under for surgery and then coming out again with nothing changed.

Now, mind you, I'm no brain surgeon (or medical professional of any sort), nor have I studied any significant amount of psychology. Nor am I a philosopher (see my question above). However, I don't really see how the mind could be anything except a characteristic of the body. Altering (intentionally or otherwise) the part of the body responsible for thought alters the mind. Our current attempted maps of the mind don't come close to fully representing the territory, but I firmly believe it is mappable. Whether an existing one is re-mappable I can't say, but the idea of transplanting a brain has been explored in science fiction for decades, and in theory I see no logical reason why it couldn't work.

Comment author: Gunnar_Zarncke 09 December 2014 09:46:58PM *  3 points [-]

To the best of my knowledge, it's theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time.

I don't think this is currently possible. The body just wouldn't work. A large part of the 'wiring' during infancy and childhood is connecting body parts and functions with higher and higher level concepts. Think about toilet training. You aren't even aware of how it works, but it nonetheless somehow connects large-scale planning (how urgent is it, when and where are toilets) to the actual control of the organs. Considering how different minds (including the connection to the body) are, I think the minimum requirement (short of singularity-level interventions) is an identical twin.

That said, I think the existing techniques for transferring motion from one brain to another, combined with advanced hypnosis and drugs, could conceivably be developed to a point where it is possible to transfer noticeable parts of your identity over to another body - at least over an extended period of time during which the new brain 'learns' to be you. Transferring memory as well is comparably easy. Whether the result can be called 'you', or is sufficiently alike to you, is another question.

Comment author: Dahlen 12 December 2014 01:14:05AM *  1 point [-]

Need to kill donors? To the best of my knowledge, it's theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that's still a potential human - the vegetativeness needs to be reversible for this to be useful - so the ethics are still highly questionable.

That's how I pictured it, yes. At this point I wouldn't concern myself with the ethics of it, because, if our technology advances this much, then simply the fact that humanity can perform such a feat is an extremely positive thing, and probably the end of death as we know it. What worries me more is that this wouldn't result in a functional mature individual. For instance: in order to develop the muscular system, the body's skeletal muscles would have to experience some sort of stress, i.e. be used. If you grow the organism in a jar from birth to consciousness transfer (as is probably most ethical), it wouldn't have moved at all its entire life up to that point, and would therefore have extremely weak musculature. What to do in the meantime, electrically stimulate the muscles? Maybe, but it probably wouldn't have results comparable to natural usage. Besides, there are probably many other body subsystems that would suffer similarly without much you could do about it. See Gunnar Zarncke's comment below.

On the other hand, if you use a clone body that's the same age as the transferee, it wouldn't shock me if the personality didn't actually change significantly; it should basically feel like going under for surgery and then coming out again with nothing changed.

Yes, but I imagine most uses to be related to rejuvenation. It would mean that the genetic info required for cloning would have to be gathered basically at birth (and the cloning process begun shortly thereafter), and there would still be a 9-month age difference. There's little point in growing a backup clone for an organism so soon after birth. An age difference of 20 years between person and clone seems more reasonable.

Comment author: hyporational 15 December 2014 02:33:26AM 1 point [-]

Can you transfer "software" from one brain to another in a purely informational way, while preserving the anatomical integrity of the second organism?

This can already be done via the senses. This also transfers consciousness of the content that is being transferred. What would consciousness without content look like?

Comment author: Alsadius 09 December 2014 12:09:43AM 1 point [-]

In order to provide a definite answer to this question, we'd need to know how the brain produces consciousness and personality, as well as the exact mechanism of the upload (e.g., can it rewire synapses?).

Comment author: ChristianKl 08 December 2014 10:52:14PM 1 point [-]

There's no such thing as "purely informational" when it comes to brains.

I'm asking this because ever since I found out about ems and mind uploading, having minds moved to bodies rather than computers seemed to me a more appealing hypothetical solution to the problem of death/mortality.

If you want to focus on that problem it's likely easier to simply fix up whatever is wrong in the body you are starting with than doing complex uploading.

Comment author: Ebthgidr 10 December 2014 03:04:02AM 3 points [-]

A question about Löb's theorem: assume not provable(X). Then, by the rules of if-then statements, "if provable(X) then X" is provable. But then, by Löb's theorem, provable(X), which is a contradiction. What am I missing here?

Comment author: DanielFilan 10 December 2014 03:35:53AM 2 points [-]

I'm not sure how you're getting from not provable(X) to provable(provable(X) -> X), and I think you might be mixing meta levels. If you could prove not provable(X), then I think you could prove (provable(X) ->X), which then gives you provable(X). Perhaps the solution is that you can never prove not provable(X)? I'm not sure about this though.
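For what it's worth, here is a sketch of where the meta levels separate, in standard provability-logic notation (□ abbreviates "provable"; this is the textbook statement of the theorem, not anything specific to the comments above):

```latex
% Löb's theorem:  if  \vdash \Box X \to X  then  \vdash X
% (internalized form:  \vdash \Box(\Box X \to X) \to \Box X).
%
% From the assumption \neg\Box X, the implication \Box X \to X is
% vacuously TRUE in the metatheory. But Löb's hypothesis requires
%   \vdash \Box X \to X,
% i.e. that the implication be PROVABLE inside the system, not merely
% true. To get that, the system would have to prove \neg\Box X, which
% by Gödel's second incompleteness theorem it cannot do in general
% (it cannot even prove its own consistency). So Löb's hypothesis is
% never discharged, and no contradiction arises.
```

This matches DanielFilan's suggestion above: the step that fails is precisely getting from "not provable(X)" (true outside the system) to a proof of the implication inside the system.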

Comment author: Toggle 09 December 2014 10:37:17PM *  3 points [-]

Maneki Neko is a short story about an AI that manages a kind of gift economy. It's an enjoyable read.

I've been curious about this 'class' of systems for a while now, but I don't think I know enough about economics to ask the questions well. For example: the story supplies a superintelligence to function as a competent central manager, but could such a gift network theoretically exist without being centrally managed (and without trivially reducing to modern forms of currency exchange)? Could a variant of Watson be used to automate the distribution of capital in the same way that it makes a medical diagnosis? And so on.

In particular, I'm looking for the intellectual tools that would be used to ask these questions in a more rigorous way; it would be great if I had better ways of figuring out which of these questions are obviously stupid and which are not. Specific disciplines in economics or game theory, perhaps. Things along the lines of LW's Mechanism Design sequence would be fantastic. Can anyone give me a few pointers?

Comment author: badger 10 December 2014 07:35:24PM 7 points [-]

My intuition is every good allocation system will use prices somewhere, whether the users see them or not. The main perk of the story's economy is getting things you need without having to explicitly decide to buy them (i.e. the down-on-his-luck guy unexpectedly gifted his favorite coffee), and that could be implemented through individual AI agents rather than a central AI.

Fleshing out how this might play out, if I'm feeling sick, my AI agent notices and broadcasts a bid for hot soup. The agents of people nearby respond with offers. The lowest offer might come from someone already in a soup shop who lives next door to me since they'll hardly have to go out of their way. Their agent would notify them to buy something extra and deliver it to me. Once the task is fulfilled, my agent would send the agreed-upon payment. As long as the agents are well-calibrated to our needs and costs, it'd feel like a great gift even if there are auctions and payments behind the scenes.
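That hidden auction could be sketched in a few lines. This is a deliberately minimal illustration of the mechanism described above, not a real protocol; the agents, prices, and "lowest offer wins" rule are all assumptions for the example:

```python
# Minimal sketch of the behind-the-scenes auction: one agent broadcasts
# a task, nearby agents reply with offers (their cost to fulfill it),
# and the lowest offer wins. All agents and prices are hypothetical.

def run_gift_auction(task, offers):
    """offers: {agent: asking_price}. Returns (winner, price), or None
    if no agent responded to the broadcast."""
    if not offers:
        return None  # nobody nearby can help with this task
    winner = min(offers, key=offers.get)
    return winner, offers[winner]

# The sick user's agent broadcasts "hot soup"; neighbors' agents reply.
offers = {
    "neighbor_in_soup_shop": 4.50,   # already there, tiny detour
    "friend_across_town": 12.00,     # would have to travel
}
print(run_gift_auction("hot soup", offers))
```

The recipient only ever sees the soup arrive; the bids, offers, and payment all happen between the agents.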

For pointers, general equilibrium theory studies how to allocate all the goods in an economy. Depending on how you squint at the model, it could be studying centralized or decentralized markets based on money or pure exchange. A Toolbox for Economic Design is a fairly accessible textbook on mechanism design that covers lots of allocation topics.

Comment author: Toggle 10 December 2014 08:23:06PM 1 point [-]

This looks very useful. Thanks!

Another one of those interesting questions is whether the pricing system must be equivalent to currency exchange. To what extent are the traditional modes of transaction a legacy of the limitations behind physical coinage, and what degrees of freedom are offered by ubiquitous computation and connectivity? Etc. (I have a lot of questions.)

Comment author: badger 10 December 2014 09:09:08PM 1 point [-]

Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggest it must be equivalent in theory.

Eric Budish has done some interesting work changing the course allocation system at Wharton to use general equilibrium theory behind the scenes. In the previous system, courses were allocated via a fake money auction where students had to actually make bids. In the new system, students submit preferences and the allocation is computed as the equilibrium starting from "equal incomes".

What benefits do you think a different system might provide, or what problems does monetary exchange have that you're trying to avoid? Extra computation and connectivity should just open opportunities for new markets and dynamic pricing, rather than suggest we need something new.

Comment author: Lumifer 10 December 2014 07:35:30PM 2 points [-]

I'm looking for the intellectual tools that would be used to ask these questions in a more rigorous way

The field of study that deals with this is called economics. Any reason an intro textbook won't suit you?

Comment author: ChristianKl 10 December 2014 03:22:30PM 1 point [-]

Could a variant of Watson be used to automate the distribution of capital in the same way that it makes a medical diagnosis?

The stock market has a lot of capable AIs that manage capital allocation.

Comment author: Toggle 10 December 2014 07:15:30PM 1 point [-]

Fair point. It's my understanding that this is limited to rapid day trades, with implications for the price of a stock but not cash-on-hand for the actual company. I was imagining something more like a helper algorithm for venture capital or angel investors, comparable to the PGMs underpinning the insurance industry.

Comment author: hamnox 18 December 2014 12:10:03AM 2 points [-]

Here I be, looking at a decade-old Kurzweil book, and I want to know whether the trends he's graphing hold up in later years. I have no inkling of where on earth one GETs these kinds of factoids, except by some mystical voodoo powers of Research bestowed by Higher Education. It's not just guesstimation... probably.

Bits per Second per Dollar for wireless devices? Smallest DRAM Half Pitches? Rates of adoption for pre-industrial inventions? From whence do all these numbers come and how does one get more recent collections of numbers?

Comment author: knb 18 December 2014 05:45:26AM *  1 point [-]

LW user Stuart Armstrong did a number of posts assessing Kurzweil's predictions: Here, here, here, and here.

Comment author: Gondolinian 17 December 2014 03:42:57PM *  2 points [-]

Mostly just out of curiosity:

What happens karma-wise when you submit a post to Discussion, it gets some up/downvotes, you resubmit it to Main, and it gets up/downvotes there? Does the post's score transfer, or does it start from 0?

Comment author: Kaj_Sotala 19 December 2014 08:59:34PM 2 points [-]

The post's score transfers, but I think that the votes that were applied when it was in Discussion don't get the x10 karma multiplier that posts in Main otherwise do.

Comment author: Gondolinian 19 December 2014 09:05:28PM *  0 points [-]

Thanks!

Comment author: ilzolende 14 December 2014 08:35:09PM 2 points [-]

How do I improve my ability to simulate/guess other people's internal states and future behaviors? I can, just barely, read emotions, but I make the average human look like a telepath.

Comment author: hyporational 15 December 2014 01:51:04AM *  1 point [-]

It's trial and error mostly, paying attention to other people doing well or making mistakes, getting honest feedback from a skilled and trusted friend. Learning social skills is like learning to ride a bike, reading about it doesn't give you much of an advantage.

The younger you are the less it costs to make mistakes. I think a social job is a good way to learn because customers are way less forgiving than other people you randomly meet. You could volunteer for some social tasks too.

If your native hardware is somehow socially limited then you might benefit from reading a little bit more and you might have to develop workarounds to use what you've got to read people. It's difficult to learn from mistakes if you don't know you're making them.

One thing I've learned about the average human looking like a telepath is that most people are way too certain about their particular assumption when there are actually multiple possible ways to understand a situation. People generally aren't as great at reading each other as they think they are.

Comment author: ilzolende 21 December 2014 05:52:14AM 0 points [-]

My native hardware is definitely limited - I'm autistic.

The standard quick-and-dirty method of predicting others seems to be "model them as slightly modified versions of you", but when other people's minds are more similar to each other than they are to you, the method works far better for them than it does for you.

My realtime modeling isn't that much worse than other people's, but other people can do a lot more with a couple of minutes and no distractions than I can.

Thanks a bunch for the suggestions!

Comment author: hyporational 23 December 2014 05:08:27PM *  1 point [-]

The standard quick-and-dirty method of predicting others seems to be "model them as slightly modified versions of you"

It certainly doesn't feel that way to me, but I might have inherited some autistic characteristics since there are a couple of autistic people in my extended family. Now that I've worked with people more, it's more like I have several basic models of people like "rational", "emotional", "aggressive", "submissive", "assertive", "polite", "stupid", "smart", and then modify those first impressions according to additional information.

I definitely try not to model other people based on my own preferences since they're pretty unusual, and I hate it when other people try to model me based on their own preferences especially if they're emotional and extroverted. I find that kind of empathy very limited, and these days I think I can model a wider variety of people than many natural extroverts can, in the limited types of situations where I need to.

Comment author: ilzolende 25 December 2014 07:40:32AM *  0 points [-]

Thanks! Your personality archetypes/stereotypes sound like a quick-and-dirty modeling system that I can actually use, but one that I shouldn't explain to the people who know me by my true name.

That probably explains why I hadn't heard about it already: if it were less offensive-sounding, then someone would have told me about it. Instead, we get the really-nice-sounding but not very practical suggestions about putting yourself in other peoples' shoes, which is better for basic* morality than it is for prediction.

*By "basic", I mean "stuff all currently used ethical systems would agree on", like 'don't hit someone in order to acquire their toys.'

Comment author: Capla 12 December 2014 07:11:21PM 2 points [-]

Is "how do I get better at sex?" a solved problem?

Is it just a matter of getting a partner who will given you feedback and practicing?

Comment author: Kaura 10 December 2014 02:54:19PM 2 points [-]

Assuming for a moment that Everett's interpretation is correct, there will eventually be a way to very confidently deduce this (and time, identity and consciousness work pretty much like described by Drescher IIRC - there is no continuation of consciousness, just memories, and nothing meaningful separates your identity from your copies):

Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be? Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.

This is obviously not applicable to e.g. humanity as it is, where self-destruction on any level is inconvenient, if at all possible, and generally not a nice thing to do. But would it theoretically make sense for intelligences like this to develop, and maybe even have an overwhelming tendency to develop in the long term? What if this is one of the vast amount of branches where everyone in the observable universe pretty much failed to have a good enough time and a bright enough future and just offed themselves before interstellar travel etc., because a sufficiently advanced civilization sees it's just not a big deal in an Everett multiverse?

(There's probably a lot that I've missed here as I have no deep knowledge regarding the MWI, and my reading history so far only touches on this kind of stuff in general, but yay stupid questions thread.)

Comment author: DanielFilan 11 December 2014 12:01:00AM 4 points [-]

Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be?

Not really. If you're in a suboptimal branch, but still doing better than if you didn't exist at all, then you aren't making the world better off by self-destructing regardless of whether other branches exist.

Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.

It would not increase the proportion (technically, you want to be talking about measure here, but the distinction isn't important for this particular discussion) of branches where everything is stellar - just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive, which isn't so important. To see this, imagine you have two branches, one where things are going poorly and one where things are going great. The proportion of branches where things are going stellar is 1/2. Now suppose that the being/society/system that is going poorly self-destructs. The proportion of branches where things are going stellar is still 1/2, but now you have a branch where instead of having a being/society/system that is going poorly, you have no being/society/system at all.

Comment author: JQuinton 08 December 2014 08:11:57PM 2 points [-]

Looking for some people to refute this hare-brained idea I recently came up with.

The time period from the advent of the industrial revolution to the so-called digital revolution was about 150 - 200 years. Even though computers were being used around WWII, widespread computer use didn't start to shake things up until 1990 or so. I would imagine that AI would constitute a similar fundamental shift in how we live our lives. So would it be a reasonable extrapolation to think that widespread AI would be about 150 - 200 years after the beginning of the information age?

Comment author: sixes_and_sevens 08 December 2014 08:41:17PM 18 points [-]

By what principle would such an extrapolation be reasonable?

Comment author: shminux 08 December 2014 09:15:28PM 9 points [-]

If you are doing reference class forecasting, you need at least a few members in your reference class and a few outside of it, together with the reasons why some are in and others out. If you are generalizing from one example, then, well...

Comment author: NobodyToday 08 December 2014 09:40:32PM 3 points [-]

I'm a first-year AI student, and we are currently in the middle of exploring AI 'history'. Of course I don't know a lot about AI yet, but the interesting part about learning the history of AI is that in some sense the climax of AI research is already behind us. People got very interested in AI after the Dartmouth conference ( http://en.wikipedia.org/wiki/Dartmouth_Conferences ) and were so optimistic that they thought they could make an artificially intelligent system in 20 years. And here we are, still struggling with the seemingly simplest things such as computer vision etc.

The problem is they came across some hard problems which they can't really ignore. One of them is the frame problem. http://www-formal.stanford.edu/leora/fp.pdf Another is the common sense problem.

Solutions to many of them (I believe) are either 1) huge brute-force power or 2) machine learning. And machine learning is a thing we can't seem to get very far with. Programming a computer to program itself - I can understand why that must be quite difficult to accomplish. So since the 80s AI researchers have mainly focused on building expert systems: systems which can do a certain task much better than humans. But they lack many things that are very easy for humans (which is apparently called Moravec's paradox).

Anyway, the point I'm trying to get across, and I'm interested in hearing whether you agree or not, is that AI was/is very overrated. I doubt we can ever make a real artificially intelligent agent, unless we can solve the machine learning problem for real. And I doubt whether that is ever truly possible.

Comment author: Daniel_Burfoot 08 December 2014 11:02:55PM 2 points [-]

And machine learning is a thing which we can't seem to get very far with.

Standard vanilla supervised machine learning (e.g. backprop neural networks and SVMs) is not going anywhere fast, but deep learning is really a new thing under the sun.

Comment author: Punoxysm 10 December 2014 05:17:31AM *  1 point [-]

but deep learning is really a new thing under the sun.

On the contrary, the idea of making deeper nets is nearly as old as ordinary 2-layer neural nets; successful implementations date back to the late 90's in the form of convolutional neural nets, and they had another burst of popularity in 2006.

Advances in hardware, data availability, heuristics about architecture and training, and large-scale corporate attention have allowed the current burst of rapid progress.

This is both heartening, because the foundations of its success are deep, and tempering, because the limitations that have held it back before could resurface to some degree.

Comment author: Gram_Stone 30 December 2014 05:27:09AM *  1 point [-]

Is it a LessWrongian faux pas to comment only to agree with someone? Here's the context:

That's the kind of person that goes on to join LW and tell you. There are also people who read a sequence post or two because they followed a link from somewhere, weren't shocked at all, maybe learned something, and left. In fact I'd expect they're the vast majority.

I was going to say that I agree and that I had not considered my observation as an effect of survivorship bias.

I guess I thought it might be useful to explicitly relate what he said to a bias. Maybe that's just stating the obvious here? Maybe I should do it anyway because it might help someone?

I'd also like to know about this in less specific contexts.

Comment author: Gram_Stone 30 December 2014 05:21:20AM 1 point [-]

What prerequisite knowledge is necessary to read and understand Nick Bostrom's Superintelligence?

Comment author: knb 18 December 2014 05:37:59AM 1 point [-]

I have a vague notion from reading science fiction stories that black holes may be extremely useful for highly advanced (as in, post-singularity/space-faring) civilizations. For example, IIRC, in John C. Wright's Golden Age series, a colony formed near a black hole became fantastically wealthy.

I did some googling, but all I found was that they would be great at cooling computer systems in space. That seems useful, but I was expecting something more dramatic. Am I missing something?

Comment author: alienist 19 December 2014 05:50:41AM *  9 points [-]

I did some googling, but all I found was that they would be great at cooling computer systems in space.

When you're sufficiently advanced, cooling your systems, technically disposing of entropy, is one of the main limiting constraints on your system. Also if you throw matter into a black hole just right you can get its equivalent (or half its equivalent I forgot which) out in energy.

Edit: thinking about it, it is half the mass.

Comment author: orthonormal 26 December 2014 10:26:50PM 0 points [-]

Also if you throw matter into a black hole just right you can get its equivalent (or half its equivalent I forgot which) out in energy.

Not in useful energy, if you're thinking of using Hawking radiation; it comes out in very high-entropy form. I was so sad when I realized that the "Hawking reactor" I'd invented in fifth grade would violate the Second Law of Thermodynamics.

Comment author: alienist 27 December 2014 01:49:34AM 8 points [-]

I wasn't talking about Hawking radiation. If I throw matter in a black hole just right, I can get half the mass to come out in low-entropy photons. That's why the brightest objects in the universe are black holes that are currently eating something.
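[An aside, not from the commenters: the textbook accretion figures put "half the mass" a bit high, but in the right ballpark. The fraction of rest-mass energy radiated by matter spiraling down to the innermost stable circular orbit (ISCO) is:]

```latex
% Radiated fraction of rest-mass energy: \eta = 1 - E_{\mathrm{ISCO}}/(m c^2)
% Non-rotating (Schwarzschild) black hole:
\eta_{\mathrm{Schw}} = 1 - \sqrt{\tfrac{8}{9}} \approx 0.057
% Maximally rotating (extremal Kerr) black hole, prograde orbit:
\eta_{\mathrm{Kerr}} = 1 - \tfrac{1}{\sqrt{3}} \approx 0.42
```

For comparison, nuclear fusion releases only about 0.007 of rest mass, which is why accreting black holes (quasars) outshine everything else.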

Comment author: orthonormal 27 December 2014 02:36:39AM 3 points [-]

Ah, cool! Forgot about how quasars are hypothesized to work.

Comment author: JoshuaZ 26 December 2014 11:47:47PM *  0 points [-]

It is usable if you use small black holes. You don't need to be able to use all of the energy for lots of purposes, since a tiny bit of mass yields so much energy.

Comment author: Lumifer 18 December 2014 06:00:43AM 0 points [-]

They make awesome garbage disposal units :-)

Comment author: advancedatheist 08 December 2014 07:44:20PM *  1 point [-]

Did organized Objectivist activism, at least in some of its nuttier phases, offer to turn its adherents who get it right into a kind of superhuman entity? I guess you could call such enhanced people "Operating Objectivists," analogous to the enhanced state promised by another cult.

Interestingly enough Rand seems to make a disclaimer about that in her novel Atlas Shrugged. The philosophy professor character Hugh Akston says of his star students, Ragnar Danneskjold, John Galt and Francisco d'Anconia:

"Don't be astonished, Miss Taggart," said Dr. Akston, smiling, "and don't make the mistake of thinking that these three pupils of mine are some sort of superhuman creatures. They're something much greater and more astounding than that: they're normal men—a thing the world has never seen—and their feat is that they managed to survive as such. It does take an exceptional mind and a still more exceptional integrity to remain untouched by the brain-destroying influences of the world's doctrines, the accumulated evil of centuries—to remain human, since the human is the rational."

But then look at what Rand shows these allegedly "normal men" can do as Operating Objectivists:

Hank Rearden, a kind of self-trained Operating Objectivist who never studied under Akston, can design a new kind of railroad bridge in his mind which exploits the characteristics of his new alloy, even though he has never built a bridge before.

Francisco d'Anconia can deceive the whole world as he depletes his inherited fortune while making everyone believe that he spends his days as a playboy pickup artist, when in fact he has lived without sex since his youthful sexual relationship with Dagny.

John Galt can build a motor which violates the conservation of energy and the laws of thermodynamics. Oh, and he can also confidently master Dagny's unexpected intrusion into Galt's Gulch despite his secret crush on her, his implied adult virginity and his lack of an adult man's skill set for handling women. (You need life experience for that, not education in philosophy.) On top of that, he can survive torture without suffering from post-traumatic stress symptoms.

So despite Rand's disclaimer, if you view Atlas Shrugged as "advertising" for the abilities Rand's philosophy promises as it unlocks your potentials as a "normal man," then the Objectivist organizations which work with this idea implicitly do seem to offer to turn you into a "superhuman creature."

Comment author: alienist 11 December 2014 04:56:30AM 9 points [-]

On top of that, he can survive torture without suffering from post-traumatic stress symptoms.

PTSS almost seems like a culture-bound syndrome of the modern West. In particular there don't seem to be any references to it before WWI and even there (and in subsequent wars) all the references seem to be from the western allies. Furthermore, the reaction to "shell shock", as it was then called, during WWI suggests that this was something new that the established structures didn't know how to deal with.

Comment author: NancyLebovitz 11 December 2014 05:54:15PM *  6 points [-]

Not everyone who's had traumatic experiences has PTSD.

More information

The scientists have a theory, and it has to do with the root causes of PTSD, previously undocumented. As compared with the resilient Danish soldiers, all those who developed PTSD were much more likely to have suffered emotional problems and traumatic events prior to deployment. In fact, the onset of PTSD was not predicted by traumatic war experiences but rather by childhood experiences of violence, especially punishment severe enough to cause bruises, cuts, burns and broken bones. PTSD sufferers were also more likely to have witnessed family violence and to have experienced physical attacks, stalking or death threats by a spouse. They also more often had past experiences that they could not, or would not, talk about.

Comment author: bogus 11 December 2014 09:32:20AM 4 points [-]

PTSS almost seems like a culture-bound syndrome of the modern West.

There are significant confounders here, as modern science-based psychology got started around the same time - and WWI really was very different from earlier conflicts, not least in its sheer scale. But the idea is nonetheless intriguing; the West really is quite different from traditional societies, along lines that could plausibly make folks more vulnerable to traumatic shock.

Comment author: Viliam_Bur 08 December 2014 11:10:49PM *  5 points [-]

Seems to me that Rand's model is similar to LessWrong's "rationality as non-self-destruction".

Objectivism in the novels doesn't give the heroes any positive powers. It merely helps them avoid some harmful beliefs and behaviors, which are extremely common. Not burdened by these negative beliefs and behaviors, these "normal men" can fully focus on what they are good at, and if they have high intelligence and make the right choices, they can achieve impressive results.

(The harmful beliefs and behaviors include: feeling guilty for being good at something, focusing on exploiting other people instead of developing one's own skills.)

Hank Rearden's design of a new railroad bridge was completely unrelated to his political beliefs. It was a consequence of his natural talent and hard work, perhaps some luck. The political beliefs only influenced his decision of what to do with the invented technology. I don't remember what exactly were his options, but I think one of them was "archive the technology, to prevent changes in the industry, to preserve existing social order", and as a consequence of his beliefs he refused to consider this option. And even this was before he became a full Objectivist. (The only perfect Objectivist in the novel is Galt; and perhaps the people who later accept Galt's views.)

Francisco d'Anconia's fortune, as you wrote, was inherited. That's a random factor, unrelated to Objectivism.

John Galt's "magical" motor was also a result of his natural talent and hard work, plus some luck. The political beliefs only influenced his decision to hide the motor from public, using a private investor and a secret place.

Violating the law of thermodynamics, and surviving the torture without damage... that's fairy-tale stuff. But I think none of them is an in-universe consequence of Objectivism.

So, what exactly does Objectivism (or Hank Rearden's beliefs, which are partial Objectivism plus some compartmentalization) cause, in-universe?

It makes the heroes focus on their technical skills, and the more enlightened heroes on keeping their technical inventions for themselves. As opposed to attempting a political career or serving the existing political powers. Instead of networking, Rearden focuses on studying metal. Instead of donating the magical machine to the government, Galt keeps it secret. Instead of having his fortune taken by government, d'Anconia destroys it... probably because of a lack of smarter alternative (or maybe he somehow secretly preserves a part of his fortune, and ostentatiously destroys the rest to draw away attention; I don't remember the details here).

Without Objectivism, the heroes would most likely become clueless nerds serving the elite, because they couldn't win at the political fight (requires a completely different set of skills that people like Mouch are experts in), but they also wouldn't understand that the system is intentionally designed against them, so they would spend their energy in a futile fight, winning a few battles but losing the war.

Understanding the system allows one to focus on finding an "out of the box" solution. John Galt's victory is his ability to use his natural talent and work to devise a solution where he can live without political masters. He is economically independent, thanks to his magical motor, but also mentally independent. (If we removed the magic, his victory would be understanding the system, and the ability to resist its emotional blackmail and optimize for himself.)

The lack of this understanding made Rearden vulnerable to blackmail from his wife, and in a way cost Eddie Willers his life. (And James Taggart his sanity, if I remember correctly.)

tl;dr: (According to Rand) Objectivism makes you able to understand how the system works, so you can more realistically optimize for your values. Objectivism doesn't give you talent, skills, or luck; but it gives you a chance to use them more efficiently, instead of wasting them in a fight you cannot win.

EDIT: In real life, I expect that Objectivist training could make people more aware of their goals and make them negotiate harder. Maybe strengthen their work ethic.

Comment author: fubarobfusco 08 December 2014 09:09:28PM 4 points [-]

Did organized Objectivist activism, at least in some of its nuttier phases, offer to turn its adherents who get it right into a kind of superhuman entity? I guess you could call such enhanced people "Operating Objectivists," analogous to the enhanced state promised by another cult.

Not that I'm aware of, but you might also be interested in A. E. Van Vogt's "Null-A" novels, which attempted to do this for a fictionalized version of Korzybski's General Semantics.

(Van Vogt later did become involved in Scientology, as did his (and Hubbard's) editor John W. Campbell.)

Comment author: NancyLebovitz 08 December 2014 09:22:33PM 3 points [-]

For what it's worth, Rand was an unusually capable person in her specialty (she wrote two popular, and somewhat politically influential, novels in her second language), but still not in the same class as her heroes.

I'm not sure you've got the bit about Rearden right. I don't think there's any evidence that he came up with the final design for the bridge. There's a mention that he worked with a team to discover Rearden metal, and presumably he also had an engineering team. The point was that he (presumably) knew enough engineering to come up with something plausible, and that he was fascinated enough by producing great things to be distracted from something major going wrong that I don't remember.

I have no idea whether Rand knew Galt's engine was physically impossible, though I think she should have, considering that other parts of the book were well-researched. Dagny's situation at Taggart Transcontinental was probably typical for an Operations vice-president in a family owned business. The description of her doing cementless masonry matched with a book on the subject. Atlas Shrugged was the only place I saw the possibility of shale oil mentioned until, decades later, it turned out to be a possible technology.

Comment author: CBHacking 08 December 2014 10:27:43PM 1 point [-]

The research fail that jumped out at me hardest in Atlas Shrugged was the idea that so many people would consider a metal both stronger and lighter than steel physically impossible. By the time the book was published, not only was titanium fairly well understood, it was also being widely used for military purposes and (to the extent it could be spared from Cold War efforts) some commercial ones. Its properties don't exactly match Rearden Metal's (even ignoring the color and other mostly-unimportant characteristics), but they're close enough that it should be obvious that such materials are completely possible. Of course, that part of the book also talks about making steel rails last longer by making them denser, which seems completely bizarre to me; there are ways to increase the hardness of steel, but they involve things like heat-treating it.

TL;DR: I'm not sure I'd call the book "well-researched" as a whole, though some parts may well have been.

Comment author: Alsadius 08 December 2014 11:56:34PM 2 points [-]

The book exists in a deliberately timeless setting - it has elements of everything from about a century of span. Railroads weren't exactly building massive new lines in 1957, either.

Comment author: NancyLebovitz 12 December 2014 10:10:27PM 2 points [-]

The three people Akston was talking about didn't include Rearden. They were D'Anconia, Galt, and Danneskjold (the mostly off-stage pirate). I feel as though I've lost, not just geek points, but objectivist points both for forgetting something from the book, but also because I went along with everyone else who got it wrong.

The remarkable thing about Galt and torture isn't that he didn't get PTSD, it's that he completely kept his head, and over-awed his torturers. He broke James Taggart's mind, not that Taggart's mind was in such great shape to begin with.

Comment author: gattsuru 08 December 2014 09:27:18PM *  2 points [-]

A number of these matters seem more like narrative or genre conveniences: Francisco acts as a playboy in the same way Bruce Wayne does; Rearden's bridge development passes a lot of work to his specialist engineers (similarly to Rearden metal having a team of scientists skeptically helping him) and pretends that the man is still a one-man designer (among other handwaves). At the same time, Batman is not described as a superhuman engineer or playboy, nor would he act as those types of heroes do. I'm also not sure we can know the long-term negative repercussions John Galt experiences given the length of the book, and not all people who experience torture display clinically relevant post-traumatic stress symptoms; many who do show them only sporadically. His engine is based on now-debunked theories of physics that weren't so obviously thermodynamics-violating at the time, similarly to Project Xylophone.

These men are intended to be top-of-field capability from the perspective of a post-Soviet writer who knew little about their fields and could easily research less. Many of the people who show up under Galt's tutelage are similarly exceptionally skilled, but even more are not so hugely capable.

On the other hand, the ability of her protagonists to persuade others and evaluate the risk of getting shot starts at superhuman and quickly becomes ridiculous.

On the gripping hand, I'm a little cautious about emphasizing fictional characters and acknowledgedly Heroic abilities as evidence, especially when the author wrote a number of non-fiction philosophy texts related to this topic.

Comment author: Interpolate 23 December 2014 03:07:40AM 0 points [-]

These aren't so much "stupid" questions as ones which have no clear answer, and I'm curious what people here have to say about this.

-Why should (or shouldn't) one aspire to be "good" in the sense of prosocial, altruistic etc.?

-Why should (or shouldn't) one attempt to be as honest as possible in their day to day lives?

I have strong altruistic inclinations because that's how I'm predisposed to be and often because coincides with my values; other people's suffering upsets me and I would prefer to live a world in which people are kind and supportive of each other. I want to be nice, but I don't want to want to be nice; I can't find strong rational reasons to be altruistic.

I'm honest with people I voluntarily interact with, but ambivalent about lying in general. For example, I'm currently on a sort of intermittent fasting regimen, and if someone I'm not particularly familiar with offers food, I tend to say "I've already eaten" rather than giving my real reason for abstaining. I've seen it argued that lying to others will make you more likely to lie to yourself, but I'm unconvinced this is the case.

Comment author: Evan_Gaensbauer 16 December 2014 05:18:37AM 0 points [-]

[Meta]

In the last 'stupid questions' thread, I posed the suggestion that I write a post called "Non-Snappy Answers to Stupid Questions", which would be a summary post with a list of the most popular stupid questions asked, or stupid questions with popular answers. That is, I'm taking how many upvotes each pair of questions and answers got as an indicator of how many people care about them, or how many people at least thought the answer to a question was a good one. I'm doing this so there will be a single spot where interesting answers can be found, rather than members of LessWrong having to dig through hundreds of comments on multiple threads to discover useful answers to simple questions.

I'll publish this post at the end of December, or beginning of January, when this thread is complete. It could be updated in the future, but, by that point, it will include questions asked from ten separate threads over the course of more than a year, which is a lot. It will include this thread, which will be the most recent.

My question is: how should I organize it? Should I sort questions by topic? By how popular the question was? By how popular the answer was? By some other means? Leave your feedback below.

Comment author: [deleted] 15 December 2014 03:56:52PM *  0 points [-]

Back in 2010, Will Newsome posted this as a joke:

Sure, everything you [said] made sense within your frame of reference, but there are no privileged frames of reference. Indeed, proving that there are privileged frames of reference requires a privileged frame of reference and is thus an impossible philosophical act. I can't prove anything I just said, which proves my point, depending on whether you think it did or not.

But isn't it actually true?

Comment author: TheOtherDave 15 December 2014 04:55:42PM 0 points [-]

What would I do differently if I believed it was true, or wasn't?
What expectations about future events would I have in one case, that I wouldn't have in the other?
What beliefs about past events would I have in one case, that I wouldn't have in the other?

Comment author: [deleted] 15 December 2014 06:54:19PM *  0 points [-]

I understand that this has no decision-making value. I'm only interested in the philosophical meaning of this point.

Comment author: TheOtherDave 16 December 2014 01:09:45AM 0 points [-]

Hm.
Can you say more about what you're trying to convey by "philosophical meaning"?

For example, what is the philosophical meaning of your question?

Comment author: [deleted] 16 December 2014 03:52:47PM 0 points [-]

That if we are to be completely intellectually honest and rigorous, we must accept complete skepticism.

Comment author: TheOtherDave 16 December 2014 09:36:12PM 0 points [-]

Hm.
OK. Thanks for replying, tapping out here.

Comment author: Viliam_Bur 16 December 2014 08:35:21PM 0 points [-]

Maybe we could honestly accept that impossible demands of rigor are indeed impossible. And focus on what is possible.

You can't convince a rock to agree with you on something. There is still some chance with humans.

Comment author: [deleted] 28 December 2014 06:16:19PM *  0 points [-]

The Tortoise's mind needs the dynamic of adding Y to the belief pool when X and (X→Y) are previously in the belief pool. If this dynamic is not present—a rock, for example, lacks it—then you can go on adding in X and (X→Y) and (X⋀(X→Y))→Y until the end of eternity, without ever getting to Y.

This appears to be a circular argument.

Maybe we could honestly accept that impossible demands of rigor are indeed impossible. And focus on what is possible.

This is why I wrote this:

I understand that this has no decision-making value.

Comment author: IlyaShpitser 16 December 2014 12:09:10AM 0 points [-]

It means you should learn to like learning other languages/ways of thinking.