This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

Stupid Questions December 2014

It seems like we suck at using scales "from one to ten". Video game reviews nearly always give a 7-10 rating. Competitions with scores from judges seem to always give numbers between eight and ten, unless you crash or fall, and get a five or six. If I tell someone my mood is a 5/10, they seem to think I'm having a bad day. That is, we seem to compress things into the last few numbers of the scale. Does anybody know why this happens? Possible explanations that come to mind include:

  • People are scoring with reference to the high end, where "nothing is wrong", and they do not want to label things as more than two or three points worse than perfect

  • People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)

  • I'm succumbing to confirmation bias and this isn't a real pattern

I'm succumbing to confirmation bias and this isn't a real pattern

No, this is definitely a real pattern. YouTube switched from a 5-star rating system to a like/dislike system when they noticed, and videogames are notorious for rank inflation.

[-]gjm150

Partial explanation: we interpret these scales as going from worst possible to best possible, and

  • games that get as far as being on sale and getting reviews are usually at least pretty good because otherwise there'd be no point selling them and no point reviewing them
  • people entering competitions are usually at least pretty good because otherwise they wouldn't be there
  • a typical day is actually quite a bit closer to best possible than worst possible, because there are so many at-least-kinda-plausible ways for it to go badly

One reason why this is only a partial explanation is that "possible" obviously really means something like "at least semi-plausible" and what's at least semi-plausible depends on context and whim. But, e.g., suppose we take it to mean something like: take past history, discard outliers at both ends, and expand the range slightly. Then I bet what you find is that

  • most games that go on sale and attract enough attention to get reviewed are broadly of comparable quality
    • but a non-negligible fraction are quite a lot worse because of some serious failing in design or management or something
  • most performances in competitions at a given level ar
... (read more)
5Capla
Think about it in terms of probability space. If something is basically functional, then there are a near-infinite number of ways for it to be worse, but only a finite number of ways for it to get better. http://xkcd.com/883/
[-]Gavin130

RottenTomatoes has much broader ratings: the current box office hits range from 7% to 94%. This is because they aggregate binary "positive" and "negative" reviews. As jaime2000 notes, YouTube has switched to a similar rating system, and it seems to keep the ratings well spread out.

People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)

I don't think it's this. Belgium doesn't use letter-grading and still succumbs to the problem you mentioned in areas outside the classroom.

0Capla
What do they use instead?
3MathiasZaman
Points out of a maximum. The teacher is supposed to decide in advance how many points a test will be worth (5, 10, 20 and 25 being common options, but I've also had tests where I scored 17.26/27) and then decides how many points each question will be worth. You need to get half of the maximum or more for a passing grade. That's in high school. In university everything is scored out of a maximum of 20 points.
9gwern
You may find the work of the authors of http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2369332 interesting.
8someonewrongonthenet
That's not an explanation, just a symptom of the problem. People of mediocre talent and high talent both get an A; that's part of the reason why we have to use standardized tests with a higher ceiling.

My intuition is that the top few notches are satisficing, whereas all lower ratings are varying degrees of non-satisficing. The degree to which everything tends to cluster at the top represents the degree to which everything is satisfactory for practical purposes. In situations where the majority of the rated things are not satisfactory (like the Putnam, where nothing less than a correct proof is truly satisfactory), the ratings will cluster near the bottom.

For example, compare motels to hotels. Motels always have fewer stars, because motels in general are worse. Whereas, say, video games will tend to cluster at the top because video games in general are satisfactorily fun. Or, think Humanities vs. Engineering grades. Humanities students in general satisfy the requirements to be historians, writers, or liberal-arts-educated white-collar workers more than Engineering students satisfy the requirements to be engineers.
2Richard Korzekwa
This is what I was trying to convey when I said it might be another example of the problem. I think it's reasonable, in many contexts, to say that achieving 75% of the highest possible score on an exam should earn you what most people think of as a C grade (that is, good enough to proceed with the next part of your education, but not good enough to be competitive).

I would say that games are different. There is not, as far as I know, a quantitative rubric for scoring a game. A 6/10 rating on a game does not indicate that the game meets 60% of the requirements for a perfect game. It really just means that it's similar in quality to other games that have received the same score, and usually a 6/10 game is pretty lousy.

I found a histogram of scores on Metacritic: http://www.giantbomb.com/profile/dry_carton/blog/metacritic-score-distribution-graphs/82409/ The peak of the distribution seems to be around 80%, while I'd eyeball the median to be around 70-75%. There is a long tail of bad games. You may be right that this distribution does, in some sense, reflect the actual distribution of game quality. My complaint is that this scoring system is good at resolving bad games from truly awful games from comically terrible games, but it is bad at resolving a good game from a mediocre game. What I think it should be is a percentile-based score, like Lumifer describes. Then again, maybe it's difficult to discern a difference in quality between a 60th-percentile game and an 80th-percentile game.
0someonewrongonthenet
Oh right, I didn't read carefully sorry.
7knb
I've noticed the same thing. Part of it might be that reviewers are reluctant to alienate fans of [thing being reviewed]. Another explanation is that they are intuitively norming against a wider range of things than they actually review. For example, I was buying a smartphone recently, and a lot of lower-end devices I was considering had few reviews, but famous high-end brands (like the iPhone, Galaxy S, etc.) are reviewed by pretty much everyone. Playing devil's advocate, it might be that there are more perceivable degrees of badness/more ways to fail than there are of goodness, so we need a wider range of numbers to describe and fairly rank the failures.
6alienist
Well, here is an article by Megan McArdle that talks about how insider-outsider dynamics can lead to this kind of rank inflation.
5Kindly
Math competitions often have the opposite problem. The Putnam competition, for example, often has a median score of 0 or 1 out of 120. I'm not sure this is a good thing. Participating in a math competition and getting 0 points is pretty discouraging, in a field where self-esteem is already an issue.
5alienist
Interestingly enough, the scores on individual questions are extremely bimodal. They're theoretically out of 10 but the numbers between 3 and 7 are never used.
4hyporational
In medicine we try to make people rate their symptoms, like pain, from one to ten. It's pretty much never under 5. Of course there's a selection effect, and people don't like to look like whiners, but I'm not convinced these fully explain the situation. In Finland the lowest grade you can get, from primary education through high school, is 4, so that probably affects the situation too.
0DanArmak
How do you then interpret their responses? Do you compare only the responses of the same person at different times, or between persons (or to guide initial treatment)? Do you have a reference scale that translates self-reported pain to something with an objective referent?
3hyporational
Yes. There's too much variation between persons. I also think there's variation between types of pain and variation depending on whether there are other symptoms. There are no objective specific referents, but people who are in actual serious pain usually look like it: they are tachycardic, hypertensive, aggressive, sweating, writhing or very still, depending on what type of pain we're talking about. Real pain is also aggravated by relevant manual examinations.
0Richard Korzekwa
This is actually what initially got me thinking about this. I read a half-satire piece about people misusing pain scales. Since my only source for the claim that people do this was a somewhat satirical article, I didn't bring it up initially. I was surprised when I heard that people do this, because I figured most people getting asked that question aren't in nearly as much pain as they could be, and they don't have much to gain by inflating their answer. When I've been asked to give an answer on the pain scale, I've almost always felt like I'm much closer to no pain than to "the worst pain I can imagine" (which is what I was told a ten is), and I can imagine being in such awful pain that I can't answer the question. I think I answered seven one time when I had a bone sticking through my skin (which actually hurt less than I might have expected).
0DanArmak
Maybe they think that by inflating their answer they gain, on the margin, better / more intensive / more prompt medical service. Especially in an ER setting where they may intuit themselves to be competing against other patients being triaged and asked the same question, they might perceive themselves (consciously or not) to be in an arms race where the person who claims to be experiencing the most pain gets treated first.
4Ixiel
This is exactly why in my family we use +2/-2. 0 really does feel like average in a way 5-6/10 or 3/5 doesn't.
4wadavis
I tried to change out the 10 rating for a z-score rating in my own conversations. It failed due to my social circles not being familiar with the normal bell curve.
5gwern
If you wanted to maximize the informational content of your ratings, wouldn't you try to mimick a uniform distribution?
1wadavis
The intent was to communicate one piece of information without confusion: where on the measurement spectrum the item fits relative to others in its group. As opposed to delivering as much information as possible, for which there are more nuanced systems. Most things I am rating do not have a uniform distribution; I tried to follow a normal distribution because it would fit the great majority of cases. We lose information and make assumptions when we measure data on the wrong distribution: did you fit to uniform by volume or by value? It was another source of confusion. As mentioned, this method did fail. I changed my methods to saying 'better than 90% of the items in its grouping' and had moderate success. While solving the uniform/normal/chi-squared distribution problem, it is still too long-winded for my tastes.
2Lumifer
The distribution of your ratings does not need to follow the distribution of what you are rating. For maximum information your (integer) rating should point to a quantile -- e.g. if you're rating on a 1-10 scale your rating should match the decile into which the thing being rated falls. And if your ratings correspond to quantiles, the ratings themselves are uniformly distributed.
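A minimal sketch of this quantile scheme in Python; the history list, bucket count, and function name are illustrative assumptions rather than anything from the thread:

```python
import bisect

def quantile_rating(value, history, buckets=10):
    """Rate `value` from 1 to `buckets` by its rank among past values.

    Ratings assigned this way are spread uniformly over 1..buckets,
    whatever the distribution of the underlying values.
    """
    ranked = sorted(history)
    below = bisect.bisect_left(ranked, value)   # past values strictly below
    return min(below * buckets // len(ranked) + 1, buckets)

# Tightly clustered raw scores still map to spread-out ratings:
past = [70, 72, 74, 75, 76, 78, 80, 81, 83, 90]
print(quantile_rating(77, past))   # -> 6 (better than about half of `past`)
```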
1wadavis
We have different goals. I want my rating to reflect the item's relative position in its group; you want a rating to reflect the item's value independent of the group. Is this accurate?
1Lumifer
Doesn't seem so. If you rate by quintiles, your rating effectively indicates the rank of the bucket to which the thing-being-rated belongs. This reflects "the item's relative position in its group". If you want your rating to reflect not a rank but something external, you can set up a variety of systems, but I would expect that for max information your rating would have to point to a quantile of that external measure of the "value independent of the group".
0wadavis
Trying to stab at the heart of the issue: I want the distribution of the ratings to follow the distribution of the rated because when looking at the group this provides an additional piece of information.
6Lumifer
Well, at this point the issue becomes who's looking at your rating. This "additional piece of information" exists only for people who have a sufficiently large sample of your previous ratings, so that they understand where the latest rating fits in the overall shape of all your ratings. Consider this example: I come up to you and ask "So, how was the movie?". You answer "I give it a 6 out of 10". Fine. I have some vague idea of what you mean. Now we wave a magic wand and bifurcate reality. In branch 1 you then add "The distribution of my ratings follows the distribution of movie quality, savvy?" and let's say I'm sufficiently statistically savvy to understand that. But... does it help me? I don't know the distribution of movie quality. It's probably bell-shaped, maybe, but not quite normal, if only because it has to be bounded; I have no idea if it's skewed, etc. In branch 2 you then add "The rating of 6 means I rate the movie to be in the sixth decile". Ah, that's much better. I now know that out of 10 movies that you've seen, five were probably worse and four were probably better. That, to me, is a more useful piece of information.
0wadavis
I understand and concede to the better logic. This provides greater insight on why the original attempt to use these ratings failed.
0ChristianKl
Quite often the spread of quality within the top 10 percent is wider than the spread between the 45th and 55th percentiles. IQ scales have more people in the middle than at the edges.
2Lumifer
As far as I remember, IQ scores are normalized ranks, so to answer the question of which 10% is "wider" you need to specify by which measure.
0atorm
I think it's the C thing. I have no evidence for this.

Is there any plausible way the earth could be moved away from the sun and into an orbit which would keep the earth habitable when the sun becomes a red giant?

[-]calef220

According to http://arxiv.org/abs/astro-ph/0503520 we would need to be able to boost our current orbital radius to about 7 AU.

This would correspond to a change in specific orbital energy from -μ/(2 × 1 AU) to -μ/(2 × 7 AU), where μ ≈ 1.327 × 10^20 m^3/s^2 is the standard gravitational parameter of the sun. That comes to about 3.8 × 10^8 joules per kilogram, or about 2.3 × 10^33 joules when we restore the reduced mass of the earth/sun system (which I'm approximating as just the mass of the earth).

For scale, that is about a fifth of the total energy released by the sun in 1 year.

Or, if you like, it's equivalent to the total mass-energy of ~2.5 × 10^16 kilograms of matter (about 0.01% of the mass of the asteroid Vesta).

So until we're able to harness and control energy on the order of magnitude of the sun's total output for months at a time, we won't be able to do this.

There might be an exceedingly clever way to do this by playing with orbits of nearby asteroids to perturb the orbit of the earth over long timescales, but the change in energy we're talking about here is pretty huge.
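Since numbers like these are easy to get wrong, here is a small Python sketch of the same arithmetic (standard constants; the approximation of using Earth's mass for the reduced mass is carried over from the comment):

```python
# Energy to raise Earth's circular orbit from 1 AU to 7 AU (rough check).
MU_SUN = 1.32712440018e20   # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11         # metres
M_EARTH = 5.972e24          # kg
C = 2.998e8                 # m/s

# Specific orbital energy of a circular orbit is -mu/(2a).
d_eps = MU_SUN / 2 * (1 / AU - 1 / (7 * AU))   # ~3.8e8 J/kg
total = d_eps * M_EARTH                        # ~2.3e33 J

SUN_YEAR = 3.828e26 * 3.156e7                  # solar luminosity x one year, ~1.2e34 J
print(f"{d_eps:.1e} J/kg; {total:.1e} J; "
      f"{total / SUN_YEAR:.2f} of the sun's annual output; "
      f"{total / C**2:.1e} kg of mass-energy")
```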

[-]Eniac180

I think you have something there. You could design a complex, but at least metastable orbit for an asteroid sized object that, in each period, would fly by both Earth and, say, Jupiter. Because it is metastable, only very small course corrections would be necessary to keep it going, and it could be arranged such that at every pass Earth gets pushed out just a little bit, and Jupiter pulled in. With the right sized asteroid, it seems feasible that this process could yield the desired results after billions of years.

3Eniac
Hah, thanks for pointing this out. I must have read or heard of this before and then forgotten about it, except in my subconscious. Looks like they have done the math, too, and it figures. Cool!
5CBHacking
Ignoring the question of "can we apply that much delta-V to a planet?", I'd be interested to know whether it's believed that there exists a "Goldilocks zone" suitable for life at all stages of a star's life. Intuitively it seems like there should be, but I'm not sure. Of course, it should be pointed out that the common understanding of "when the sun becomes a red giant" may be a bit flawed; the sun will cool and expand, then collapse. On a human time scale, it will spend a lot of that time as a red giant, but if you simply took the Earth when its orbit started to be crowded by the inner edge of the Goldilocks zone and put it in a new orbit, that new orbit wouldn't be anywhere close to an eternally safe one. Indeed, I suspect that the outermost of the orbits required for the giant-stage sun would be too far from the sun at the time we'd first need to move the Earth.
3mwengler
The sun's luminosity will rise by around 300X as it turns into a giant. If we wish to keep the same energy flux onto the earth at that point, we must increase the earth's orbital radius by a factor of sqrt(300) ≈ 17X.

The magnitude of the total energy of the earth's current orbit is 2.65E33 J. We must reduce this to 1/17 of its current value, i.e. reduce it by (16/17) × 2.65E33 J ≈ 2.5E33 J.

The current total annual energy production in the world is about 5E20 J. The sun will be a red giant in about 7.6E9 years, so spread over that whole time we would need several hundred times current global energy production running full time into rocket motors (assuming perfect efficiency) to push the earth out to a safe orbit by the time the sun has expanded.

But it is worse than that. The sun actually expands over a scant 5 million years near the end of those 7.6E9 years. So to avoid freezing for billions of years because we started moving away from the sun too soon, we essentially need about a million times current energy production running into rocket engines for those 5 million years of solar expansion. The good news is we have 7.6E9 years to figure out how to do that.

If we use plasma rockets which push reaction mass out at 1% the speed of light, the rocket equation (with the ~23 km/s difference between circular orbital velocities at 1 AU and 17 AU as the delta-V) calls for a total of about 4.5E22 kg of reaction mass, a bit under 1% of the earth's total mass. Unfortunately that is several tens of times the total mass of water on the earth (about 1.4E21 kg), so we would have to quarry rock rather than just drain the oceans.
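A sketch of the timescale comparison, using the parent comment's energy figure and a roughly 2014-era world energy production figure as assumptions:

```python
DELTA_E = 2.5e33               # J, orbital energy change from the parent comment
WORLD = 5e20                   # J/yr, rough current world energy production

slow = DELTA_E / 7.6e9         # J/yr if spread over the sun's remaining lifetime
fast = DELTA_E / 5e6           # J/yr if crammed into the ~5 Myr of solar expansion

print(f"slow plan: {slow / WORLD:,.0f}x world energy production")  # ~700x
print(f"fast plan: {fast / WORLD:,.0f}x world energy production")  # ~1,000,000x
```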
0Nornagest
I wonder what the exhaust plume of an engine like that would look like, and how far away from it you'd have to be standing to still be capable of looking at anything after a second or two.
3DaFranker
I'm curious about the thought process that led to this being asked in the "stupid questions" thread rather than the "very advanced theoretical speculation of future technology" thread. =P As a more serious answer: Anything that would effectively give us a means to alter mass and/or the effects of gravity in some way (if there turns out to be a difference) would help a lot.
2NancyLebovitz
I wasn't sure there was a way to do it within current physics. Now we get to the hard question: supposing we (broadly interpreted, it will probably be a successor species) want to move the earth outwards using those little gravitational nudges, how do we get civilizations with a sufficiently long attention span?
1DaFranker
I heard Ritalin has a solution. Couldn't pay attention long enough to verify. ba-dum tish On a serious note, isn't the whole killing-the-Earth-for-our-children thing a rather interesting scenario? I've never seen it mentioned in my game theory-related reading, and I find that to be somewhat sad. I'm pretty sure a proper modeling of the game scenario would cover both climate change and eaten-by-red-giant.
0NancyLebovitz
I don't see the connection to killing the earth for our children. Moving the earth outwards is an effort to save the earth for our far future selves and our children.
4gjm
I think "for our children" means "as far as our children are concerned" and failing to move the earth's orbit so it doesn't get eaten by the sun (despite being able to do it) would qualify as "killing the earth for our children". (The more usual referents being things like resource depletion and pollution with potentially disastrous long-term effects.)
0NancyLebovitz
Thanks. That makes sense.
0DanielLC
If we haven't gotten one by then, we're doomed. Or at least, we don't get a very good planet. We could still have space-stations or live on planets where we have to bring our own atmosphere.
2Shmi
Not "when the sun becomes a red giant", because red giants are variable on a much too short time scale, but, as others mentioned, we can probably keep the earth in a habitable zone for another 5 billion years or so. We have more than enough hydrogen on earth to provide the necessary potential energy increase with fusion-based propulsion, though building something like a 100 petaWatt engine is problematic at this point, (for comparison, it is a significant fraction of the total solar radiation hitting the earth). EDIT: I suspect that terraforming Mars (and/or cooling down the Earth more efficiently when the Sun gets brighter) would require less energy than moving the Earth to the Mars orbit. My calculations could be off, though, hopefully someone can do them independently.
3Anomylous
The only major problem I know of with terraforming Mars is how to give it a magnetic field. We'd have to somehow re-melt the interior of the planet. Otherwise, we'd just have to put up with constant intense solar radiation and the atmosphere off-gassing into space. Maybe if we built a big fusion reactor in the middle of the planet...?
[-]Shmi110

I recall estimating the power required to run an equatorial superconducting ring a few meters thick 1 km or so under the Mars surface with enough current to simulate Earth-like magnetic field. If I recall correctly, it would require about the current level of power generation on Earth to ramp it up over a century or so to the desired level. Then whatever is required to maintain it (mostly cooling the ring), which is very little. Of course, an accident interrupting the current flow would be an epic disaster.

0alienist
Wouldn't it be more efficient to use that energy to destroy Mars and start building a Dyson swarm from the debris?
4Shmi
Let's do a quick estimate. Destroying a Mars-like planet requires expending the equivalent of its gravitational self-energy, ~GM^2/R, which is about 10^32 J (which we could easily obtain from a comet 10 km in radius... consisting of antimatter!). For comparison, the Earth's magnetic field has about 10^26 J of energy, a million times less. I leave it to you to draw the conclusions.
2JoshuaZ
Yes. A few years back I saw an article with a back-of-the-envelope estimate suggesting this would be doable if one could turn mass on the moon more or less directly into energy and use the moon as a gravitational tug to slowly move Earth out of the way. You can change mass almost directly into energy by feeding it into a few smallish black holes.
0blogospheroid
How do they propose to move the black holes? Nothing can touch a black hole, right?
6gjm
Black holes feel gravity just like any other massive body. And they can be electrically charged. So you can move them around with strong enough gravitational and/or electric fields.
2DanielLC
It can, as long as you don't mind that you won't get it back when you're done. You have to constantly fuel the black hole anyway. Just throw the fuel in from the opposite direction that you want the black hole to go.
4Eniac
Throwing mass into a black hole is harder than it sounds. Conveniently sized black holes that you actually would have a chance at moving around are extremely small, much smaller than atoms, I believe. I think they would just sit there without eating much, despite strenuous efforts at feeding them. The cross-section is way too small. To make matters worse, such holes would emit a lot of Hawking radiation, which would a) interfere with trying to feed them, and b) quickly evaporate them, ending in an intense flash of gamma rays.
0DanielLC
The problem is throwing mass into other mass hard enough to make a black hole in the first place. Hawking radiation isn't a big deal. In fact, the problem is making a black hole small enough to get a significant amount of it. An atom-sized black hole has around a tenth of a watt of Hawking radiation. I think it might be possible to get extra energy from it. From what I understand, Hawking radiation is just what doesn't fall back in. If you enclose the black hole, you might be able to absorb some of this energy.
1Eniac
Yes, making them would be incredibly hard, and because of their relatively short lifetimes, it would be extremely surprising to find any lying around somewhere. Atom-sized black holes would be very heavy and not produce much Hawking radiation, as you say. Smaller ones would produce more Hawking radiation, be even harder to feed, and evaporate much faster.
0DanArmak
I don't really know if it's plausible, but Larry Niven's far-future fiction A World Out of Time (the novel, not the original short story of the same name) deals with exactly this problem. His solution is a "fusion candle": build a huge double-ended fusion tube, put it in the atmosphere of a gas giant, and light it up. The thrust downwards keeps the tube floating in the atmosphere. The thrust upwards provides an engine to push the gas giant around. In the book, they pushed Uranus to Earth, and then moved it outwards again, gravitationally pulling the Earth along.
0Daniel_Burfoot
This is a fascinating question. Very speculatively, I could imagine somehow using energy gained by pushing other objects closer to the Sun, to move the Earth away from the Sun. Like some sort of immense elastic band stretching between Mars and Earth, pulling Earth "up" and Mars "down".
1DanielLC
That is essentially what would happen if you used gravitational assistance and orbited asteroids between Mars and Earth.
[-]knb130

Would it be possible to slow down or stop the rise of sea level (due to global warming) by pumping water out of the oceans and onto the continents?

We could really use a new Aral Sea, but intuitively I'd have expected this to make only a tiny dent in the depth of the oceans. So, to the maths:

Wikipedia claims that from 1960 to 1998 the volume of the Aral sea dropped from its 1960 amount of 1,100 km^3 by 80%.

I'm going to give that another 5% for more loss since then, as the South Aral Sea has now lost its eastern half entirely.

This gives ~1100 * .85 = 935km^3 of water that we're looking to replace.

The Earth is ~500m km^2 in surface area, approx. 70% of which is water = 350m km^2 in water.

935 km^3 over an area of 350m km^2 comes to a depth of 2.6 mm.

This is massively larger than I would have predicted, and it gets better. The current salinity of the Aral Sea is 100 g/l, which is way higher than that of seawater at 35 g/l, so we could pretty much pump the water straight in and still come out with a net environmental gain. In fact this is a solution to the crisis that has been previously proposed, although it looks like most people would rather dilute the seawater first.

To achieve the desired result of a 1-inch drop in sea level, we only need to find 9 equivalent projects around the world. Sadly, the only other one I know of is Lake Chad, which is significantly smaller than the Aral Sea. However, since the loss of the Aral Sea is due to over-intensive use of the water for farming, this gives us an idea of how much water can be contained on land in plants: I would expect that we might be able to get this amount again if we undertook a desalination/irrigation program in the Sahara.
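The same back-of-envelope as a short Python sketch, using the rounded figures above:

```python
# Sea-level effect of refilling the Aral Sea (inputs as in the comment).
aral_lost_km3 = 1100 * 0.85          # ~935 km^3 missing from the Aral Sea
ocean_area_km2 = 510e6 * 0.70        # ~357 million km^2 of ocean surface

drop_mm = aral_lost_km3 / ocean_area_km2 * 1e6   # km of depth -> mm
print(f"sea level drop: {drop_mm:.1f} mm")        # ~2.6 mm

inch_mm = 25.4
# ~10 Aral-sized projects in total, i.e. 9 more beyond the Aral Sea itself:
print(f"projects per inch of drop: {inch_mm / drop_mm:.0f}")
```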

3mwengler
Dead Sea and Salton Sea leap to mind as good projects. Also could we store more water in the atmosphere? If we just poured water into a desert like the Sahara, most of it would evaporate before it flowed back to the sea. This would seem to raise the average moisture content of the atmosphere. Sure eventually it gets rained back down, but this would seem to be a feature more than a bug for a world that keeps looking for more fresh water. Indeed my mind is currently inventing interesting methods for moving the water around using purely the heat from the sun as an energy source.
0DanArmak
Isn't it more of an indication of how much water can be contained in the Aral Sea basin? The plants don't need to contain all of the missing Aral Sea water at once, they just need to be watered faster than the Sea is being refilled by rainfall. How much water does rainfall supply every year, as a percentage of the Sea's total volume?

I recommend googling "geoengineering global warming" and reading some of the top hits. There are numerous proposals for reducing or reversing global warming which are astoundingly less expensive than reducing carbon dioxide emissions, and also much more likely to be effective.

To your direct question about storing more water on land, this would be a geoengineering project. Some straightforward approaches to doing it:

Use rainfall as your "pump" in order to save having to build massive energy using water pumps. Without any effort on our part, nature natually lifts water a km or more above sea level and then drops it, much of it dropped onto land. That water generally is funneled back to the ocean in rivers. With just the constructino of walls, some rivers might be prevented from draining into the ocean. Large areas would be flooded by the river, storing water other than in the ocean.

Use gravity as your pump. There are many large locations on earth that are below sea level. Aqueducts that took no net energy for pumping could be built that would essentially gravity-feed ocean water into these areas. These areas can be hundreds of meters below sea level, ... (read more)

4CBHacking
Where does the water go? Assuming you want to reduce sea level by a 1/2 inch using this mechanism, you have to do the equivalent of covering the entire land area of earth in a full inch of water (what's worse, seawater; you'd want to desalinate it). Even assuming you can find room on land for all this water and the pump capacity to displace it all, what's to stop it from washing right back out to sea? Some of it can be used to refill aquifers, but the capacity of those is trivial next to that of the oceans. Some of it can be stored as ice and snow, but global warming will reduce (actually, has already quite visibly reduced) land glaciation; even if you can somehow induce the water to freeze, the heat you extract from it will have to go somewhere, and unless you can dump it out of the atmosphere entirely it will just contribute to the warming. The rest of the water will just flood the existing rivers in its mad rush to do what nearly all continental water is always doing anyhow: flowing to the sea.
3TheOtherDave
Clearly, the solution is to build a space elevator and ship water into orbit. We lower the sea levels, the water is there if we need it later, and in the meantime we get to enjoy the pretty rings. (No, I'm not serious.)
0Vaniver
Now I'm curious how much energy it would take to set up a stable ring orbit made of ice crystals for Earth, or if that would be impossible without stationkeeping corrections.
0Lumifer
How long will ice survive in Earth's orbit, anyway?
-1CBHacking
I think it would depend on the orbit? Obviously it would need to be in an orbit that does not collide with our artificial satellites, and it would need to be high enough to make atmospheric drag negligible, but that leaves a lot of potential orbits. I can't think of any reason ice would go away with any particular haste from any of them, but I'm not an expert in this area. Orbital decay aside, why might ice (once placed into an at-the-time stable orbit) not survive?
1Lumifer
Sun. Solar radiation at 1 AU is about 1.3 kW/m^2. Ice that is not permanently in the shade will disappear rather rapidly, I would think.
0CBHacking
I would think it would lose heat to space fast enough, but maybe not. I know heat dissipation is a major concern for spacecraft, but those are usually generating their own heat rather than just trying to dump what they pick up from the sun. What would happen to the ice / water? It's not like it can just evaporate into the atmosphere...
4Richard_Kennaway
Vapour doesn't need an atmosphere to take it up. Empty space does just as well.

So, how long would a snowball in high orbit last? Sounds like a question for xkcd. A brief attempt at a lower bound that is probably a substantial underestimate:

How much energy has to be pumped in per kilogram to turn ice at whatever the "temperature" is in orbit into water vapour? Call that E. Let S be the solar insolation of 1.3 kW/m^2. Imagine the ice is a spherical cow, er, a rectangular block directly facing the sun. According to Wikipedia the albedo of sea ice is in the range 0.5 to 0.7. Take that as 0.6, so the fraction of energy retained is A = 0.4. The density of ice is D = 916.7 kg/m^3. Ignore radiative cooling, conduction to the cold side of the iceberg, and time spent in the Earth's shadow, and assume that the water vapour instantly vanishes. Then the surface will ablate at a rate of SA/ED m/s. Equivalently, ED/(86400 × S × A) days per metre.

For simplicity I'll take the ice to be at freezing point. Then: E = 334 kJ/kg to melt + 420 kJ/kg to reach boiling point + 2260 kJ/kg to boil = 3014 kJ/kg. For a lower starting temperature, increase E accordingly.

3014 × 916.7 / (86400 × 1.3 × 0.4) = 61 days per metre.

Not all that long, but meanwhile, you've created a hazard for space flight and for the skyhook. I suspect that ignoring radiative cooling will be the largest source of error here, but this isn't a black body, so I don't know how closely the Stefan-Boltzmann law will apply, and I haven't calculated the results if it did. (ETA: The black body temperature of the Moon is just under freezing.) (ETA: fixed an error in the calculation of E, whereby I had 4200 instead of 420 kJ/kg to reach boiling point. Also, pasting in all the significant figures from the sources doesn't mean this is claimed to be anything more than a rough estimate.)
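And the same estimate as a Python sketch, with the inputs exactly as above:

```python
# Time for sunlight to ablate orbiting ice, ignoring radiative cooling etc.
E = (334 + 420 + 2260) * 1e3   # J/kg: melt, heat to boiling point, boil
S = 1300.0                     # W/m^2, solar insolation at 1 AU
A = 0.4                        # fraction of energy retained (albedo 0.6)
D = 916.7                      # kg/m^3, density of ice

rate = S * A / (E * D)                              # m/s of surface ablated
print(f"{1 / (rate * 86400):.0f} days per metre")   # ~61
```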
3Lumifer
This is vacuum -- all liquid water will boil immediately, at zero Celsius. Besides I'm sure there will be some sublimation of ice directly to water vapor. In fact, looking at water's phase diagram, in high vacuum liquid water just doesn't exist so I think ice will simply sublimate without the intermediate liquid stage.
0Richard_Kennaway
Right, I forgot the effect of pressure. So E will be different, perhaps very different. What will it be?
0Lumifer
Here is the proper math. This is expressed in terms of ice temperature, though, so we'll need to figure out how much the solar flux would heat the outer layer of ice first.
3DanielLC
One possibility would be to replace the ice caps by hand. Run a heated pipeline from the ocean to the ice caps, pump water there, and let it freeze on its own. I don't know how well that would work, and I suspect you're better off just letting sea levels rise. If you need the land that badly, just make floating platforms. Edit: Replace "ice caps" with "Antarctica". Adding ice to the northern icecap, or even the southern one out where it's floating, won't alter the sea level, since floating objects displace their mass in water.
3Eniac
Well, this is not pumping, but it might be much more efficient: As I understand, the polar ice caps are in an equilibrium between snowfall and runoff. If you could somehow wall in a large portion of polar ice, such that it cannot flow away, it might rise to a much higher level and sequester enough water to make a difference in sea levels. A super-large version of a hydroelectric dam, in effect, for ice. It might also help to have a very high wall around the patch to keep air from circulating, keeping the cold polar air where it is and reduce evaporation/sublimation.
2Capla
This should be a what if question. I'd like to see what Randall would do with it.
0knb
I don't know what you mean. Who is Randal?
5Capla
Randall Munroe is the person who draws xkcd. He also has a blog where he gives in-depth answers to unusual questions.
4Lumifer
Randal is Randall Munroe who makes xkcd and, notably, answers what-if questions.

Can anyone link a deep discussion, including energy and time requirements, issues with spaceship shielding from radiation and collisions, etc., that would be involved in interstellar travel? I ask because I am wondering whether this is substantially more difficult than we often imagine, and perhaps a bottleneck in the Drake Equation

tl;dr: It is definitely more difficult than most people think, because most people's thoughts (even scientifically educated ones) are heavily influenced by sci-fi, which is almost invariably premised on having easy interstellar transport. Even the authors like Clarke who write difficult interstellar transport assume that the obvious problems (e.g., lightspeed) remain, but that the non-obvious problems (e.g., what happens when something breaks when you're two light-years from the nearest macroscopic object) disappear.

7gjm
Some comments on this from Charles Stross. Not optimistic about the prospects. Somewhat quantitative, at the back-of-envelope level of detail.
6Shmi
Project Icarus seems like a decent place to start.
3Eniac
You might want to check out Centauri Dreams, best blog ever and dedicated to this issue.
2lukeprog
A fair bit of this is either cited or calculated within "Eternity in six hours." See also my interview with one of its authors, and this review by Nick Beckstead.

Is there a causal link between being relatively lonely and isolated during school years and (higher chance of) ending up a more intelligent, less shallow, more successful adult?

Imagine that you have a pre-school child who has socialization problems, finds it difficult to do anything in a group of other kids, to acquire friends, etc., but cognitively the kid's fine. If nothing changes, the kid is looking at being shunned or mocked as weird throughout school. You work hard on overcoming the social issues, maybe you go with the kid to a therapist, you arrange play-dates, you play-act social scenarios with them..

Then your friend comes up to have a heart-to-heart talk with you. Look, your friend says. You were a nerd at school. I was a nerd at school. We each had one or two friends at best and never hung out with popular kids. We were never part of any crowd. Instead we read books under our desks during lessons and read SF novels during the breaks and read science encyclopedias during dinner at home, and started programming at 10, and and and. Now you're working so hard to give your kid a full social life. You barely had any, are you sure now you'd rather you had it otherwise? Let me be... (read more)

Seems to me that very high intelligence can cause problems with socialization: you are different from your peers, so it is more difficult for you to model them, and for them to model you. You see each other as "weird". (Similar problem for very low intelligence.) Intelligence causes loneliness, not the other way round.

But this depends on the environment. If you are highly intelligent person surrounded by enough highly intelligent people, then you do have a company of intellectual peers, and you will not feel alone.

I am not sure about the relation between reading many books and being "less shallow". Do intelligent kids surrounded by intelligent kids also read a lot?

5dxu
All of this is very true (for me, anyway--typical mind fallacy and all that). High intelligence does seem to cause social isolation in most situations. However, I also agree with this: High intelligence does not intrinsically have a negative effect on your social skills. Rather, I feel that it's the lack of peers that does that. Lack of peers leads to lack of relatability leads to lack of socialization leads to lack of practice leads to (eventually) poor social skills. Worse yet, eventually that starts feeling like the norm to you; it no longer feels strange to be the only one without any real friends. When you do find a suitable social group, on the other hand, I can testify from experience that the feeling is absolutely exhilarating. That's pretty much the main reason I'm glad I found Less Wrong.
1Tem42
It is not true that people cannot - or do not - interact successfully with people who are less intelligent than they are. Many children get along well with their younger siblings. Many adults love being kindergarten teachers, or feel highly engaged working in the dementia wing of the rest home. Many people of all intelligence levels love having very dumb pets. These are not people (or beings) that you relate to because of their 'relatability' in the sense that they are like you, but because they are meaningful to you. And interacting with people builds social skills appropriate to those people -- which may not be very generalizable when you are practicing interacting with kindergarten students, but is certainly a useful skill when you are interacting with average people. I personally would think that the problem under discussion is not related to intelligence, but to trying to help an introvert identify the most fulfilling interpersonal bonds without making them more social in a general sense. However, I don't know the kid in question, so I can't say.
[-]philh120

My friend isn't obviously-to-me wrong, but their argument is unconvincing to me.

It's normal for a smart kid to be kind of lonely - if true, that's sad, and by default we should try to fix it.

It builds substance - citation needed. It seems like it could just as easily build insecurity, resentment, etc.

Lousy social life - this is a failure mode. It might not be the worst one, but it seems like the most likely one, so deserving of attention.

Ditzy adolescent - how likely is this?

FWIW, I'm an adult who was kind of lonely as a kid, and on the margin I think that having a more active social life then would have had positive effects on me now.

7dxu
True, but it may be one of those problems that's just not fixable without seriously restructuring the school system, especially if something like Villiam_Bur's theory is true. Speaking from experience, I can tell you that I know a lot more than any of my peers (I'm 16), and practically all of that is due to the reading I did and am still doing. That reading was a direct result of my isolation and would likely not have occurred had I been more socially accepted. I should add that I have never once felt resentment or insecurity due to this, though I have developed a slight sense of superiority. (That last part is something I am working to fix.) I suppose this one depends on how you define a "failure mode". I have never viewed my lack of social life as a bad thing or even a hindrance, and it doesn't seem like it will have many long-term effects either--it's not like I'll be regularly interacting with my current peers for the rest of my life. Again, this depends on how you define "ditzy". Based on my observations of a typical high school student at my age, I would not hesitate to classify over 90% of them as "ditzy", if by "ditzy" you mean "playing social status games that will have little impact later on in life". I shudder at the thought of ever becoming like that, which to me sounds like a much worse prospect than not having much of a social life. I see. Well, to each his own. I myself cannot imagine growing up with anything other than the childhood I did, but that may just be lack of imagination on my part. Who knows; maybe I would have turned out better than I did if I had had more social interaction during childhood. Then again, I might not have. Without concrete data, it's really hard to say.
1mindspillage
Reading a ton as a teen was very helpful to me also, but I think I would have still done it if I had a rich social life of people who were also smart and enjoyed reading. Ultimately being around peers who challenge me is more motivating than being isolated; I don't want to be the one dragging behind. I do feel that I had to learn a fair amount of basic social skills through deliberately watching and taking apart, rather than just learning through doing--making me somewhat the social equivalent of someone who has learned a foreign language through study rather than by growing up a native speaker; I have the pattern of strengths and weaknesses associated with the different approach.
2NancyLebovitz
There may be a choice between a lot of time thinking/learning vs. a lot of time socializing. It seems to me that a lot of famous creative people were childhood invalids, though I haven't heard of any such from recent decades. It may be that the right level of invalidism isn't common any more.
6alienist
Here is Paul Graham's essay on the subject.
3John_Maxwell
I think I remember reading that famous inventors were likely to be isolated due to illness as children. I think it's unlikely that intelligence is decreased by being well-socialized, but it seems possible to me that people who are very well-socialized might find themselves thinking of fewer original ideas.

Are there any good trust, value, or reputation metrics in the open source space? I've recently established a small internal-use Discourse forum and been rather appalled by the limitations of what is intended to be a next-generation system (status flag, number of posts, tagging), and from a quick overview most competitors don't seem to be much stronger. Even fairly specialist fora only seem marginally more capable.

This is obviously a really hard problem and a conflux of many other hard problems, but it seems odd that so many obvious improvements remain unmade.

((Inspired somewhat by my frustration with Karma, but I'm honestly more interested in its relevance for outside situations.))

Tangentially, is it possible for a good reputation metric to survive attacks in real life?

Imagine that you become e.g. a famous computer programmer. But although you are a celebrity among free software people, you fail to convert this fame into money. So you must keep a day job at a computer company which produces shitty software.

One day your boss will realize that you have high prestige in the given metric, and the company has low prestige. So the boss will ask you to "recommend" the company on your social network page (which would increase the company's prestige and hopefully its profit, and might decrease your prestige as a side effect). Maybe this would be illegal, but let's suppose it isn't, or that you are not in a position to refuse. Or you could imagine a more dramatic situation: you are a widely respected political or economic expert, it is 12 hours before an election, and a political party has kidnapped your family and threatens to kill them unless you "recommend" this party, which according to their model would help them win the election.

In other words, even a digital system that works well could be vulnerable to attacks from outside of the system, where ... (read more)

3gattsuru
There are simultaneously a large number of laws prohibiting employers from retaliating against persons for voting, and a number of accusations of retaliation for voting, so this isn't a theoretical issue. I'm not sure it's distinct from other methods of compromising trusted users -- the effects are similar whether the compromised node was beaten with a wrench, got brain-eaten, or just trusted Microsoft with their Certificates -- but it's a good demonstration that you simply can't trust any node inside a network. (There's some interesting overlap with MIRI's value stability questions, but they're probably outside the scope of this thread and possibly only metaphor-level.) Interestingly, there are some security metrics designed with the assumption that some number of their nodes will be compromised, and with some resistance to such attacks. I've not seen this expanded to reputation metrics, though, and there are technical limitations. Tor, for example, can only resist about a third of its nodes being compromised, and possibly fewer than that. Other setups have higher theoretical resistance, but are dependent on central high-value nodes that trade resistance to compromise for vulnerability to spoofing. It seems like there's some value in closing the gap between carrier wave and signal in reputation systems, rather than a discrete reputation system, but my sketched-out implementations become computationally intractable quickly.
3kpreid
I don't have a solution for you, but a related probably-unsolvable problem is what some friends of mine call “cashing in your reputation capital”: having done the work to build up a reputation (for trustworthiness, in particular), you betray it in a profitable way and run. This is a problem in elections. In the US (I believe depending on state) there are rules which are intended to prevent someone from being able to provide proof that they have voted a particular way (to make coercion futile), and the question then is whether the vote counting is accurate. I would suggest that the topic of designing fair elections contains the answer to your question insofar as an answer exists.
2alienist
And then there are absentee ballots which potentially make said laws a joke.
5Lumifer
The first problem is defining what do you want to measure. "Trust" and "reputation" are two-argument functions and "value" is notoriously vague.
6gattsuru
For clarity, I meant "trust" and "reputation" in the technical senses, where "trust" is authentication, and where "reputation" is an assessment or group of assessments for (ideally trusted) user ratings of another user. But good point, especially for value systems.
0Lumifer
I am still confused. When you say that trust is authentication, what is it that you authenticate? Do you mean trust in the same sense as "web of trust" in PGP-type crypto systems? For reputation as an assessment of user ratings, you can obviously build a bunch of various metrics, but the real question is which one is the best. And that question implies another one: Best for what? Note that weeding out idiots, sockpuppets, and trolls is much easier than constructing a useful-for-everyone ranking of legitimate users. Different people will expect and want your rankings to do different things.
4gattsuru
For starters, a system to be sure that a user or service is the same user or service it was previously. A web of trust /or/ a central authority would work, but honestly we run into limits even before the gap between electronic worlds and meatspace. PGP would be nice, but PGP itself is closed-source, and neither PGP nor OpenPGP/GPG is user-accessible enough to have survived even in the e-mail sphere in which they were originally intended to operate. SSL allows for server authentication (ignoring the technical issues), but isn't great for user authentication. I'm not aware of any generalized implementation for other uses, and the closest precursors (keychain management in Murmur/Mumble server control?) are both limited and intended to be application-specific. But at the same time, I recognize that I don't follow the security or open-source worlds as much as I should. Oh, yeah. It's not an easy problem to solve. Right. I'm more interested in whether anyone's trying to solve it. I can see a lot of issues with a user-based reputation even in addition to the obvious limitations and tradeoffs that fubarobfusco provides -- a visible metric is more prone to being gamed but obscuring the metric reduces its utility as feedback for 'good' posting, value drift without a defined root versus possible closure without one, and so on. What surprises me is that there are so few attempts to improve the system beyond the basics. IP.Board, vBulletin, and phpBB plugins are usually pretty similar -- the best I've seen merely lets you disable them on a per-subfora basis rather than globally, and they otherwise use a single point score. Reddit uses the same Karma system whether you're answering a complex scientific question or making a bad joke. LessWrong improves on that only by allowing users to see how contentious a comment's scoring is. Discourse uses count of posts and tags, almost embarrassingly minimalistic. I've seen a few systems that make moderator and admin 'likes' count for more. I think that's about the
1Lumifer
That seems to be pretty trivial. What's wrong with a username/password combo (besides all the usual things) or, if you want to get a bit more sophisticated, with having the user generate a private key for himself? You don't need a web of trust or any central authority to verify that the user named X is in possession of a private key which the user named X had before. Well, again, the critical question is: What are you really trying to achieve? If you want the online equivalent of the meatspace reputation, well, first meatspace reputation does not exist as one convenient number, and second it's still a two-argument function. Once again, with feeling :-D -- to which purpose? Generally speaking, if you run a forum all you need is a way to filter out idiots and trolls. Your regular users will figure out reputation on their own and their conclusions will be all different. You can build an automated system to suit your fancy, but there's no guarantee (and, actually, a pretty solid bet) that it won't suit other people well. Why would Twitter or FB bother assigning reputation to users? They want to filter out bad actors and maximize their eyeballs and their revenue which generally means keeping users sufficiently happy and well-measured.
4fubarobfusco
"All the usual things" are many, and some of them are quite wrong indeed. If you need solid long-term authentication, outsource it to someone whose business depends on doing it right. Google for instance is really quite good at detecting unauthorized use of an account (i.e. your Gmail getting hacked). It's better (for a number of reasons) not to be beholden to a single authentication provider, though, which is why there are things like OpenID Connect that let users authenticate using Google, Facebook, or various other sources. On the other hand, if you need authorization without (much) authentication — for instance, to let anonymous users delete their own posts, but not other people's — maybe you want tripcodes. And if you need to detect sock puppets (one person pretending to be several people), you may have an easy time or you may be in hard machine-learning territory. (See the obvious recent thread for more.) Some services — like Wikipedia — seem to attract some really dedicated puppeteers.
0gattsuru
In addition to the usual problems, which are pretty serious to start with, you're relying on the client. To borrow from information security: the client is in the hands of the enemy. Sockpuppet attacks (sybil attacks, in trust networks), where one entity pretends to be many different users (aka sockpuppets), and impersonation attacks, where a user pretends to be someone they are not, are both well-documented and exceptionally common. Every forum package I can find relies on social taboos or simply ignoring the problem, followed by direct human administrator intervention, and most don't even make administrator intervention easy. There are also very few sites that have integrated support for private-key-like technologies, and most forum packages are not readily compatible with even all password managers. This isn't a problem that can be perfectly solved, true. But right now it hasn't even got bandaids. "Normal" social reputation runs into pretty significant issues as soon as your group size exceeds even fairly small numbers -- I can imagine folk who could handle a couple thousand names, but it's common for a site to have orders of magnitude more users. These systems can provide useful tools for noticing and handling matters that are much more evident in pure data than in "expert judgments". But these are relatively minor benefits. At a deeper level, a well-formed reputation system should encourage 'good' posting (posting that matches the expressed desires of the forum community) and discourage 'bad' posts (posting that goes against the expressed desires of the forum community), as well as reduce incentives toward me-too or this-is-wrong-stop responses. This isn't without trade-offs: you'll implicitly make the forum's culture drift more slowly, and encourage surviving dissenters to be contrarians for whom the reputation system doesn't matter. But the existing reputation systems don't let you make that trade-off, and instead you have to decide whether to use a far more naive sys
0Lumifer
Yes, of course, but if we start to talk in these terms, the first in line is the standard question: what is your threat model?

I also don't think there's a good solution to sockpuppetry short of mandatory biometrics.

Why not? The trade-off is in the details of how much reputation matters. There is a large space between reputation being just a number that's not used anywhere and reputation determining what, how, and when you can post.

Attack? Again, threat model, please.

Not if you can trivially easily block/ignore them, which is the case for Twitter and FB.
0gattsuru
An attacker creates a large number of nodes and overwhelms any signal in the initial system. For the specific example of a reddit-based forum, it's trivial for an attacker to make up a sizable proportion of assigned reputation points through the use of sockpuppets. It is only moderately difficult for an attacker to automate the time-consuming portions of this process.

10% of the problem is hard. That does not explain the small amount of work done on the other 90%. The vast majority of sockpuppets aren't that complicated: most don't use VPNs or anonymizers, most don't use large stylistic variation, and many even use the same browser from one persona to the next. It's also common for sockpuppets to have certain network attributes in common with their original persona. Full authorship analysis has both structural (primarily training bias) and pragmatic (CPU time) limitations that would make it unfeasible for large forums... but there are a number of fairly simple steps to fight sockpuppets that computers handle better than humans, and that still require often-unpleasant manual work to check.

Yes, but there aren't open-source systems that exist and have documentation which do these things beyond the most basic level. At most, there are simple reputation systems where a small amount has an impact on site functionality, such as this site. But Reddit's codebase does not allow upvotes to be limited or weighted based on the age of an account, does not have […], and would require pretty significant work to change any of these attributes. (The main site at least acts against some of the more overt mass-downvoting by acting against downvotes applied to the profile page, but this doesn't seem present here?)

If a large enough percentage of outside user content is "bad", users begin to treat that space as advertising and ignore it. Many forums also don't make it easy to block users (see here), and almost none handle blocking even the most overt sockpuppets well.
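None of this is Reddit's actual code; the following is only a hypothetical sketch, in Python, of the kind of weighting rule being described, with the thresholds (90 days, 100 karma) invented for illustration:

```python
def vote_weight(account_age_days, karma):
    """Hypothetical rule: a vote counts for little until the account
    has accumulated some age and some on-site history."""
    age_factor = min(account_age_days / 90.0, 1.0)    # full weight after ~90 days
    karma_factor = min(max(karma, 0) / 100.0, 1.0)    # full weight at 100 karma
    return age_factor * (0.5 + 0.5 * karma_factor)

def post_score(votes):
    """votes: list of (direction, account_age_days, karma) tuples,
    where direction is +1 for an upvote and -1 for a downvote."""
    return sum(d * vote_weight(age, k) for d, age, k in votes)

# A freshly created sockpuppet's upvote barely moves the score:
print(post_score([(+1, 2, 0), (+1, 400, 250), (-1, 30, 10)]))
```

A rule like this doesn't stop a patient attacker, but it raises the cost of each sockpuppet from "one registration" to "one registration plus months of plausible activity".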
1alienist
Limit the ability of low karma users to upvote.
1Lumifer
You seem to want to build a massive sledgehammer-wielding mech to solve the problem of fruit flies on a banana. So the attacker expends a not inconsiderable amount of effort to build his sockpuppet army and achieves sky-high karma on a forum. And..? It's not like you can sell karma, or even gain respect for your posts from anyone but newbies. What would be the point?

Not to mention that there is a lot of empirical evidence out there -- formal reputation systems on forums go back at least as far as early Slashdot and, y'know? they kinda work. They don't achieve anything spectacular, but they also tend not to have massive failure modes. Once the sockpuppet general gains the attention of an admin or at least a moderator, his army is useless.

You want to write a library which will attempt to identify sockpuppets through some kind of multifactor analysis? Sure, that would be a nice thing to have -- as long as it's reasonable about things. One of the problems with automated defense mechanisms is that they can often be used as DOS tools if the admin is not careful.

That still actually is the case for Twitter and FB.
0iamthelowercase
In re: Facebook/Twitter: TL;DR: I think Twitter, Facebook, et al. do have something complex, but it is outside the hood rather than under it. (I guess they could have both.)

The "friending" system takes advantage of humans' built-in reputation system. When I look at X's user page, it tells me that W, Y, and Z also follow/"friended" X. Then when I make my judgment of X, X leeches some amount of "free" "reputation points" from Z's "reputation". Of course, if W, Y, and Z all have bad reputations, that is reflected. Maybe W and Z have good reputations, but Y does not -- now I'm not sure what X's reputation should be like and need to look at X more closely.

Of course, this doesn't scale beyond a couple hundred people.
3fubarobfusco
I don't know of one. I doubt that everyone wants the same sort of thing out of such a metric. Just off the top of my head, some possible conflicts:

* Is a post good because it attracts a lot of responses? Then a flamebait post that riles people into an unproductive squabble is a good post.
* Is a post good because it leads to increased readership? Then spamming other forums to promote a post makes it a better post, and posting porn (or something else irrelevant that attracts attention) is really very good.
* Is a post good because a lot of users upvote it? Then people who create sock-puppet accounts to upvote themselves are better posters; as are people who recruit their friends to mass-upvote their posts.
* Is a post good because the moderator approves of it? Then as the forum becomes more popular, if the moderator has no additional time to review posts, a diminishing fraction of posts are good.

The old wiki-oid site Everything2 explicitly assigns "levels" to users, based on how popular their posts are. Users who have proven themselves have the ability to signal-boost posts they like with a super-upvote.

It seems to me that something analogous to PageRank would be an interesting approach: the estimated quality of a post is specifically an estimate of how likely a high-quality forum member is to appreciate that post. Long-term high-quality posters' upvotes should probably count for a lot more than newcomers' votes. And moderators or other central, core-team users should probably be able to manually adjust a poster's quality score to compensate for things like a formerly-good poster going off the deep end, the revelation that someone is a troll or saboteur, or (in the positive direction) someone of known-good offline reputation joining the forum.
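To make that circularity concrete ("a vote counts for more when it comes from someone whose own posts attract heavily-weighted votes"), here is a minimal PageRank-style sketch, assuming nothing but a table of who upvoted whom; the names and constants are made up:

```python
def reputation(votes, damping=0.85, iters=50):
    """votes: dict mapping each voter to the set of authors they upvoted.
    Returns a score per user in which upvotes from high-scoring users
    count for more (a PageRank-style fixed point)."""
    users = set(votes) | {a for authors in votes.values() for a in authors}
    n = len(users)
    rep = {u: 1.0 / n for u in users}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in users}
        for voter, authors in votes.items():
            if authors:  # users who upvote nobody pass on no weight
                share = damping * rep[voter] / len(authors)
                for author in authors:
                    new[author] += share
        rep = new
    return rep

votes = {"alice": {"bob"}, "bob": {"carol"}, "carol": {"bob"}, "dave": {"bob", "carol"}}
print(reputation(votes))  # bob and carol end up well above alice and dave
```

The manual adjustments mentioned above would then just be moderator-applied overrides to `rep` before or between iterations.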
0Lumifer
You may be interested in the new system called Dissent.

Can anybody give me a good description of the term "metaphysical" or "metaphysics" in a way that is likely to stick in my head and be applicable to future contemplations and conversations? I have tried to read a few definitions and descriptions, but I've never been able to really grok any of them and even when I thought I had a working definition it slipped out of my head when I tried to use it later. Right now its default function in my brain is, when uttered, to raise a flag that signifies "I can't tell if this person is speaking…

Metaphysics: what's out there? Epistemology: how do I learn about it? Ethics: what should I do with it?

Basically, think of any questions that are of the form "what's there in the world", "what is the world made of", and now take away actual science. What's left is metaphysics. "Is the world real or a figment of my imagination?", "is there such a thing as a soul?", "is there such a thing as the color blue, as opposed to objects that are blue or not blue?", "is there life after death?", "are there higher beings?", "can infinity exist?", etc. etc.

Note that "metaphysical" also tends to be used as a feel-good word, meaning something like "nobly philosophical, concerned with questions of a higher nature than the everyday and the mundane".

2polymathwannabe
Isn't that ontology? What's the difference?

"Ontology" is firmly dedicated to "exist or doesn't exist". Metaphysics is more broadly "what's the world like?" and includes ontology as a central subfield.

Whether there is free will is a metaphysical question, but not, I think, an ontological one (at least not necessarily). "Free will" is not a thing or a category or a property, it's a claim that in some broad aspects the world is like this and not like that.

Whether such things as desires or intentions exist or are made-up fictions is an ontological question.

1Gvaerg
Thanks! I've seen the statement that ontology is strictly included in metaphysics many times, but this is the first time I've seen an example of something that's in the set-theoretic difference.
0ChristianKl
Ontology is a subdiscipline of metaphysics. "Is the many-worlds hypothesis true?" might be a metaphysical question that isn't directly ontological.
0[anonymous]
A confusion of mine: how is epistemology a separate thing? Or is it just a flag for "we're going to go meta-level", applied to some particular topic? E.g. I read a bit of Kant about experience, which I suppose is metaphysics (right?), but it seems like if he's making any positive claim, the debate about the claim is going to be about the arguments for the claim, which is settled via epistemology?
5Anatoly_Vorobey
Hmm, I would disagree. If you have a metaphysical claim, then arguments for or against this claim are not normally epistemological; they're just arguments.

Think of epistemology as "being meta about knowledge, all the time, and nothing else". What does it mean to know something? How can we know something? What's the difference between "knowing" a definition and "knowing" a theorem? Are there statements such that to know them true, you need no input from the outside world at all? (Kant's analytic vs. synthetic distinction.) Is 2+2=4 one such? If you know something is true, but it turns out later it was false, did you actually "know" it? (Many millions of words have been written on this question alone.)

Now, take some metaphysical claim, and let's take an especially grand one, say "God is infinite and omnipresent" or something. You could argue for or against that claim without ever going into epistemology. You could maybe argue that the idea of God as absolute perfection more or less requires Him to be present everywhere, in the smallest atom and the remotest star, at all times, because otherwise it would be short of perfection, or something like this. Or you could say that if God is present everywhere, that's the same as if He was present nowhere, because presence manifests by the difference between presence and absence.

But of course if you are a modern person, and especially one inclined to scientific thinking, you would likely respond to all this: "Hey, what does it even mean to say all this or for me to argue this? How would I know if God is omnipresent or not omnipresent? What would change in the world for me to perceive it? Without some sort of epistemological underpinning to this claim, what's the difference between it and a string of empty words?" And then you would be proceeding in the tradition started by Descartes, who arguably moved the center of philosophical thinking from metaphysics to epistemology in what's called the "epistemological turn", later bo…
0CBHacking
Thanks. That's still not even a little intuitive to me, but it's a Monday and I had to be up absurdly early, so if it makes any sense to me right now (and it does), I have hope that I'll be able to internalize it even if I always need to think about it a bit. We'll see, probably no sooner than tomorrow though (sleeeeeeeeeep...).

I suspect that part of my problem is that I keep trying to decompose "metaphysics" into "physics about/describing/in the area of physics", and my brain helpfully points out that not only is it questionable whether that makes any sense to begin with, it almost never makes any sense whatsoever in context. If I just need to install a linguistic override for that word, I can do it, but I want to know what the override is supposed to be before I go to the effort.

The feel-good-word meaning seems likely to be a close relative of the flag-statement-as-bullshit meaning. That feels like a mental trap, though. The problem is, at least half the "concrete" examples that I've seen in this thread also seem likely to have little to no utility (certainly not enough to justify thinking about it for any length of time). Epistemology and ethics have obvious value, but it seems metaphysics comes up all the time in philosophical discussion too.
[-]gjm120

This is in no way an answer to your actual question (Anatoly's is good) but it might amuse you.

"Meta" in Greek means something like "after" (but also "beside", "among", and various other things). So there is a

Common misapprehension: metaphysics is so called because it goes beyond physics -- it's more abstract, more subtle, more elevated, more fundamental, etc.

This turns out not to be quite where the word comes from, so there is a

Common response: actually, it's all because Aristotle wrote a book called "Physics" and another, for which he left no title, that was commonly shelved after the "Physics" -- *meta ta Phusika* -- and was commonly called the "Metaphysics". And the topics treated in that book came to be called by that name. So the "meta" in the name really has nothing at all to do with the relationship between the subjects.

But actually it's a bit more complicated than that; here's the

Truth (so far as I understand it): indeed Aristotle wrote those books, and indeed the "Metaphysics" is concerned with, well, metaphysics, and indeed the "Metaphysics" is called that because it …

1TheOtherDave
In my experience people use "metaphysics" to refer to philosophical exploration of what kinds of things exist and what the nature, behavior, etc. of those things is. This is usually treated as distinct from scientific/experimental exploration of what kinds of things exist and what the nature, behavior, etc. of those things is, although those lines are blurry. So, for example, when Yudkowsky cites Barbour discussing the configuration spaces underlying experienced reality, there will be some disagreement/confusion about whether this is a conversation about physics or metaphysics, and it's not clear that there's a fact of the matter.

This is also usually treated as distinct from exploration of objects and experiences that present themselves to our senses and our intuitive reasoning... e.g. shoes and ducks and chocolate cake. As a consequence, describing a thought or worldview or other cognitive act as "metaphysical" can become a status maneuver... a way of distinguishing it from object-level cognition in an implied context where more object-level (aka "superficial") cognition is seen as less sophisticated or deep or otherwise less valuable.

Some people also use "metaphysical" to refer to a class of events also sometimes referred to as "mystical," "occult," "supernatural," etc. Sometimes this usage is consistent with the above -- that is, sometimes people are articulating a model of the world in which those events can best be understood by understanding the reality which underlies our experience of the world. Other times it's at best metaphorical, or just outright bullshit.

As far as correct behavior goes... asking people to taboo "metaphysical" is often helpful.
2CBHacking
The rationalist taboo is one of the tools I have most enjoyed learning and found most useful in face-to-face conversations since discovering the Sequences. Unfortunately, it's not practical when dealing with mass-broadcast or time-shifted material, which makes it of limited use in dealing with the scenarios where I most frequently encounter the concept of metaphysics. I tend to (over)react poorly to status maneuvers, which is probably part of why I've had a hard time with the word; it gets used in an information-free way sufficiently often that I'm tempted to just always shelve it there, and that in turn leads me to discount or even ignore the entire thought which contained it. This is a bias I'm actively trying to brainhack away, and I'm now tempted to go find some of my philosophically-inclined social circle and see if I can avoid that automatic reaction at least where this specific word is concerned (and then taboo it anyhow, for the sake of communication being informative). I still haven't fully internalized the concept, but I'm getting closer. "The kinds of things that exist, and their natures" is something I can see a use for, and hopefully I can make it stick in my head this time.
0TheOtherDave
This seems like a broader concern, and one worth addressing. People drop content-free words into their speech/writing all the time, either as filler or as "leftovers" from precursor sentences. What happens if you treat it as an empty modifier, like "really" or "totally"?
0CBHacking
Leaving aside the fact that, by default, I don't consider "totally" to be content-free (I'm aware a lot of people use it that way, but I still often need to consciously discard the word when I encounter it), that still seems like at best it only works when used as a modifier. It doesn't help if somebody is actually talking about metaphysics. I'll keep it in mind as a backup option, though; "if I can't process that sentence when I include all the words they said, and one of them is 'metaphysical', what happens if I drop that word?"

Ok, I have one meta-level super-stupid question. Would it be possible to improve some aspects of the LessWrong webpage? Like making it more readable for mobile devices? Every time I read LW in the tram while going to work, I go insane trying to hit super-small links on the website. As I work in web development/UI design, I would volunteer to work on this. I think in general that the LW website is a bit outdated in terms of both design and functionality, but I presume that this is not considered a priority. However, better readability on mobile screens would be a positive contribution to its purpose.

True, false, or neither?: It is currently an open/controversial/speculative question in physics whether time is discretized.

The Wikipedia article on Planck time says:

Theoretically, this is the smallest time measurement that will ever be possible, roughly 10^−43 seconds. Within the framework of the laws of physics as we understand them today, for times less than one Planck time apart, we can neither measure nor detect any change.

However, the article on Chronon says:

The Planck time is a theoretical lower-bound on the length of time that could exist between two connected events, but it is not a quantization of time itself since there is no requirement that the time between two events be separated by a discrete number of Planck times.
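For reference, the Planck time quoted in both articles is built from fundamental constants:

$$t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.39 \times 10^{-44}\ \text{s}$$

which the first quote rounds up to "roughly 10^−43 seconds".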

0iamthelowercase
So, if I understand this rightly: any two events must take place at least one Planck time apart. But so long as they do, it can be any number of Planck times -- even, say, pi. Right?
2Richard Korzekwa
Many things in our best models of physics are discrete, but as far as I know, our coordinates (time, space, or four-dimensional space-time coordinates) are never discrete. Even something like quantum field theory, which treats things in a non-intuitively discrete way does not do this. For example, we might view the process of an electron scattering off another electron as an exchange of many discrete photons between the two electrons, but it is all written in terms of integrals or derivatives, rather than differences or sums.

Maneki Neko is a short story about an AI that manages a kind of gift economy. It's an enjoyable read.

I've been curious about this 'class' of systems for a while now, but I don't think I know enough about economics to ask the questions well. For example: the story supplies a superintelligence to function as a competent central manager, but could such a gift network theoretically exist without being centrally managed (and without trivially reducing to modern forms of currency exchange)? Could a variant of Watson be used to automate the distribution of capital…

[-]badger110

My intuition is that every good allocation system will use prices somewhere, whether the users see them or not. The main perk of the story's economy is getting things you need without having to explicitly decide to buy them (i.e. the down-on-his-luck guy unexpectedly being gifted his favorite coffee), and that could be implemented through individual AI agents rather than a central AI.

Fleshing out how this might play out, if I'm feeling sick, my AI agent notices and broadcasts a bid for hot soup. The agents of people nearby respond with offers. The lowest offer might come from someone already in a soup shop who lives next door to me since they'll hardly have to go out of their way. Their agent would notify them to buy something extra and deliver it to me. Once the task is fulfilled, my agent would send the agreed-upon payment. As long as the agents are well-calibrated to our needs and costs, it'd feel like a great gift even if there are auctions and payments behind the scenes.
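A minimal sketch of that protocol in Python, with every name and number invented for illustration — a broadcast request, competing offers, and the lowest viable offer winning:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    agent: str
    price: float  # the offerer's cost of fulfilling the request

def fill_request(offers, reserve_price):
    """Pick the cheapest offer at or below the requester's reserve price,
    or return None if nobody can fulfill the request cheaply enough."""
    viable = [o for o in offers if o.price <= reserve_price]
    return min(viable, key=lambda o: o.price) if viable else None

# My agent broadcasts "hot soup, willing to pay up to 6.00" and collects offers:
offers = [Offer("neighbor_already_in_soup_shop", 3.50),
          Offer("delivery_service", 8.00)]
print(fill_request(offers, reserve_price=6.00))
# -> the neighbor wins; payment settles agent-to-agent, invisibly to both humans
```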

For pointers, general equilibrium theory studies how to allocate all the goods in an economy. Depending on how you squint at the model, it could be studying centralized or decentralized markets based on money or pure exchange. A Toolbox for Economic Design is a fairly accessible textbook on mechanism design that covers lots of allocation topics.

1Toggle
This looks very useful. Thanks! Another one of those interesting questions is whether the pricing system must be equivalent to currency exchange. To what extent are the traditional modes of transaction a legacy of the limitations behind physical coinage, and what degrees of freedom are offered by ubiquitous computation and connectivity? Etc. (I have a lot of questions.)
1badger
Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggest it must be equivalent in theory. Eric Budish has done some interesting work changing the course allocation system at Wharton to use general equilibrium theory behind the scenes. In the previous system, courses were allocated via a fake-money auction where students had to actually make bids. In the new system, students submit preferences and the allocation is computed as the equilibrium starting from "equal incomes".

What benefits do you think a different system might provide, or what problems does monetary exchange have that you're trying to avoid? Extra computation and connectivity should just open opportunities for new markets and dynamic pricing, rather than suggest we need something new.
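As a toy illustration of the kind of computation such a system runs behind the scenes, here is a tiny Cobb-Douglas exchange economy whose market-clearing prices are found by fixed-point iteration. The preferences and endowments are invented, and real mechanisms like Budish's involve far more machinery; this only shows that "compute the equilibrium" is an ordinary numerical task:

```python
import numpy as np

def equilibrium_prices(alpha, endowments, iters=500):
    """alpha[i, j]: share of agent i's budget spent on good j (rows sum to 1,
    the Cobb-Douglas case). endowments[i, j]: agent i's holding of good j.
    Iterates p_j <- (total spending on good j) / (supply of good j), which
    converges to market-clearing prices for this economy."""
    supply = endowments.sum(axis=0)
    p = np.ones(endowments.shape[1])
    for _ in range(iters):
        budgets = endowments @ p                     # income = value of endowment
        p = (alpha * budgets[:, None]).sum(axis=0) / supply
        p = p / p[0]                                 # normalize: good 0 is numeraire
    return p

alpha = np.array([[0.7, 0.3],    # agent 0 mostly wants good 0
                  [0.2, 0.8]])   # agent 1 mostly wants good 1
endowments = np.array([[1.0, 0.0],
                       [0.0, 1.0]])
print(equilibrium_prices(alpha, endowments))  # -> [1.0, 1.5]
```

The "equal incomes" variant would just replace the endowment-derived budgets with a constant vector of fake money.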
1Lumifer
The field of study that deals with this is called economics. Any reason an intro textbook won't suit you?
1ChristianKl
The stock market has a lot of capable AIs that manage capital allocation.
0Toggle
Fair point. It's my understanding that this is limited to rapid day trades, with implications for the price of a stock but not cash-on-hand for the actual company. I was imagining something more like a helper algorithm for venture capital or angel investors, comparable to the PGMs underpinning the insurance industry.

Is it possible even in principle to perform a "consciousness transfer" from one human body to another? On the same principle as mind uploading, only the mind ends up in another biological body rather than a computer. Can you transfer "software" from one brain to another in a purely informational way, while preserving the anatomical integrity of the second organism? If so, would the recipient organism come from a fully alive and functional human who would be basically killed for this purpose? Or bred for this purpose? Or would it require…

7CBHacking
I don't think anybody has hard evidence of answers to any of those questions yet (though I'd be fascinated to learn otherwise), but I can offer some conjectures:

Possible in principle? Yes. I see no evidence that sentience and identity are anything other than information stored in the nervous system, and in theory the cognitive portion of a nervous system is an organ and could be transplanted like any other.

Preserving anatomical integrity? Not with anything like current science. We can take non-intrusive brain scans, but they're pretty low-resolution and (so far as I know) strictly read-only. Even simply stimulating parts of the brain isn't enough to basically re-write it in such a way that it becomes another person's brain.

Need to kill donors? To the best of my knowledge, it's theoretically possible to basically mature a human body, including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that's still a potential human - the vegetativeness needs to be reversible for this to be useful - so the ethics are still highly questionable. It's probably possible to do it without a full brain at all, which seems less evil if you can somehow do it by some mechanism other than what amounts to a pre-natal full lobotomy, but that would require the physical brain transplant option for transference.

Nerves connecting and healing? Nerves can repair themselves, though it's usually extremely slow. Stem cell therapies have potential here, though. Connecting the brain to the rest of the body is a lot of nerves, but they're pretty much all sensory and motor nerves so far as I know; the brain itself is fairly self-contained.

Personality change? That depends on how different the new body is from the old, I would guess. The obviously-preferable body is a clone, for many reasons including avoiding immune-system rejection of the new brain. Personality is always going to be somewhat externally-driven, so I wouldn't expect…
5Gunnar_Zarncke
I don't think this is currently possible. The body just wouldn't work. A large part of the 'wiring' during infancy and childhood is connecting body parts and functions with higher and higher level concepts. Think about toilet training. You aren't even aware of how it works, but it nonetheless somehow connects large-scale planning (how urgent is it, when and where are toilets) to the actual control of the organs. Considering how different minds (including the connection to the body) are, I think the minimum requirement (short of singularity-level interventions) is an identical twin.

That said, I think the existing techniques for transferring motion from one brain to another, combined with advanced hypnosis and drugs, could conceivably be developed to a point where it is possible to transfer noticeable parts of your identity over to another body - at least over an extended period of time where the new brain 'learns' to be you. To also transfer memory is comparably easy. Whether the result can be called 'you', or is sufficiently like you, is another question.
1Dahlen
That's how I pictured it, yes. At this point I wouldn't concern myself with the ethics of it, because, if our technology advances this much, then simply the fact that humanity can perform such a feat is an extremely positive thing, and probably the end of death as we know it. What worries me more is that this wouldn't result in a functional mature individual. For instance: in order to develop the muscular system, the body's skeletal muscles would have to experience some sort of stress, i.e. be used. If you grow the organism in a jar from birth to consciousness transfer (as is probably most ethical), it wouldn't have moved at all its entire life up to that point, and would therefore have extremely weak musculature. What to do in the meantime, electrically stimulate the muscles? Maybe, but it probably wouldn't have results comparable to natural usage. Besides, there are probably many other body subsystems that would suffer similarly without much you could do about it. See Gunnar Zarncke's comment below. Yes, but I imagine most uses to be related to rejuvenation. It would mean that the genetic info required for cloning would have to be gathered basically at birth (and the cloning process begun shortly thereafter), and there would still be a 9-month age difference. There's little point in growing a backup clone for an organism so soon after birth. An age difference of 20 years between person and clone seems more reasonable.
2Alsadius
In order to provide a definite answer to this question, we'd need to know how the brain produces consciousness and personality, as well as the exact mechanism of the upload (e.g., can it rewire synapses?).
0Tem42
Not exactly true; we probably don't need to know how consciousness arises. We would certainly have to rewire synapses to match the original brain, and it is likely that if we exactly replicated brain structure neuron by neuron, synapse by synapse, we would still not know where consciousness lies, but would have a conscious duplicate of the original. Alternatively you could hypothesize a soul, but that seems like worry for worry's sake.

The flip side to this is that there is no measurable difference between 'someone who is you and feels conscious' and 'someone who is exactly like you in every way but does not feel conscious (but will continue to claim that e does)'. Even if you identified a mental state on a brain scan that you felt certain was causing the experience of consciousness, in order to approximate a proof of this you would have to be able to measure a group of subjects that are nearly identical except not experiencing consciousness, a group that has not yet been found in nature.
1hyporational
This can already be done via the senses. This also transfers consciousness of the content that is being transferred. What would consciousness without content look like?
1ChristianKl
There's no such thing as "purely informational" when it comes to brains. If you want to focus on that problem, it's likely easier to simply fix up whatever is wrong in the body you are starting with than to do complex uploading.
0Dahlen
It's good to know, but can you elaborate more on this in the context of the grandparent comment? Perhaps with an analogy to computers. It occurred to me too, but I'm not sure this is the definite conclusion. Fully healing an aging organism suffering from at least one severe disease, while admittedly closer to current medical technology, wouldn't leave the patient in as good a state as simply moving to a 20-year-old body.
0ChristianKl
Brains are not computers. Of course you wouldn't only heal one severe disease. You would also lengthen telomeres and do all sorts of other things that reduce aging effects.
0mwengler
Suppose all the memories in one person were wiped and replaced with your memories. I believe the new body would claim to be you. It would introspect as you might now, and find your memories as its own, and say "I am Dahlen in a new body." But would it be you?

If the copying had been non-destructive, then Dahlen in the old body still exists and would "know" on meeting Dahlen in the new body that Dahlen in the new body was really someone else who just got all Dahlen's memories up to that point. Meanwhile, Dahlen in the new body would have capabilities, moods, and reactions which would depend on the substrate more than the memories. The functional parts of the brain, the wiring-other-than-memories as it were, would be different in the new body. Dahlen in the new body would probably behave in ways that were similar to how the old body with its old memories behaved. It would still think it was Dahlen, but as Dahlen in the old body might think, that would just be its opinion and obviously it is mistaken.

As to uploading, it is more than the brain that needs to be emulated. We have hormonal systems that mediate fear and joy and probably a broad range of other feelings. I have a sense of my body that I am in some sense constantly aware of, which would have to be simulated and would probably be different in an em of me than it is in me, just as it would be different if my memories were put in another body.

Would anybody other than Dahlen in the old body have a reason to doubt that Dahlen in the new body was really Dahlen? I don't think so, and especially Dahlen in the new body would probably be pretty sure it was Dahlen, even if it claimed to rationally understand how it might not be. It would know it was somebody, and wouldn't be able to come up with any other compelling idea for who it was other than Dahlen.
0Dahlen
I understand all this. And it's precisely the sort of personality preservation that I find largely useless and would like to avoid. I'm not talking about copying memories from one brain to another; I'm talking about preserving the sense of self in such a way that the person undergoing this procedure would have the following subjective experience: be anesthetized (probably), undergo surgery (because I picture it as some form of surgery), "wake up in new body". (The old body would likely get buried, because the whole purpose of performing such a transfer would be to save dying -- very old or terminally ill -- people's lives.) There would be only one extant copy of that person's memories, and yet they wouldn't "die"; there would be the same sort of continuity of self experienced by people before and after going to sleep. The one who would "die" is technically the person in the body which constitutes the recipient of the transfer (who may have been grown just for this purpose and kept unconscious its whole life). That's what I mean. Think of it as more or less what happens to the main character in the movie Avatar. I realize the whole thing doesn't sound very scientific, but have I managed to get my point across? Yes, but... Everybody's physiological basis for feelings is more or less the same; granted, there are structural differences that cause variation in innate personality traits and other mental functions, and a different brain might employ the body's neurotransmitter reserve in different ways (I think), but the whole system is sufficiently similar from human to human that we can relate to each other's experiences. There would be differences, and the differences would cause the person to behave differently in the "new body" than it did in the "old body", but I don't think one would have to move the glands or limbic system or what-have-you in addition to just the brain.
1mwengler
I understand what you are going for. And I present the following problem with it. Dahlen A is rendered unconscious. While A is unconscious, memories are completely copied to unconscious body B. Dahlen B is woken up. Your scenario is fulfilled: Dahlen B has entirely the memories of being put to sleep in body A and waking up in body B. Dahlen B examines his memories and sees no gap in his existence other than the "normal" one of the anesthesia used to render Dahlen A unconscious. Your desires for a transfer scenario are fulfilled!

Scenario 1: Dahlen A is killed while unconscious and the body disposed of. Nothing ever interferes with the perception of Dahlen A and everyone around that there has been a transfer of consciousness from Dahlen A to Dahlen B.

Scenario 2: A few days later, Dahlen A is woken up. Dahlen A of course has the sense of continuous consciousness, just as he would if he had undergone a gall bladder surgery. Dahlen A and Dahlen B are brought together with other friends of Dahlen. Dahlen A is introspectively sure that he is the "real" Dahlen and no transfer ever took place. Dahlen B is introspectively sure that he is the "real" Dahlen and that a transfer did take place.

Your scenario assumes that there can be only one Dahlen. That the essence of Dahlen is a unique thing in the universe, and that it cannot be copied so that there are two. I think this assumption is false. I think if you make a "good enough" copy of Dahlen, you will have two essences of Dahlen, and at no point does a single essence of Dahlen exist and move from one body to another.

Further, if I am right and the essence of Dahlen can be copied and multiplied, and each possessor of a copy has the complete introspective property of seeing that it is in fact Dahlen, then it is unscientific to think that in the absence of copying, your day-to-day existence is anything more than this. That each day you wake up, each moment you experience, your "continuity" is something you experience subjectively…
1Jiro
By this reasoning, isn't it okay to kill someone (or at least to kill them in their sleep)? After all, if everyone's life is a constant sequence of different entities, what you're killing would have ceased existing anyway. You're just preventing a new entity from coming into existence. But preventing a new entity from coming into existence isn't murder, even if the new entity resembles a previous one.
-1mwengler
You tell me. If you don't like the moral implications of a certain hypothesis, this should have precisely zero effect on your estimation of the probability that this hypothesis is correct. The entire history of the growing acceptance of evolution as a "true" theory follows precisely this course. Many people HATED the implication that man is just another animal. That a sentiment for morality evolved because groups in which that sentiment existed were able to out-compete groups in which that sentiment was weaker. That the statue of David or the theory of General Relativity, or the love you feel for your mother or your dog arise as a consequence, ultimately, of mindless random variations producing populations from which some do better than others and pass down the variations they have to the next generation. So if the implications of the continuity of consciousness are morally distasteful to you, do not make the mistake of thinking that makes them any less likely to be true. A study of science and scientific progress should cure you of this very human tendency.
1Jiro
If your reasoning implies ~X, then X implies that your reasoning is wrong. And if X implies that your reasoning is wrong, then evidence for X is evidence against your reasoning. In other words, you have no idea what you are talking about. The fact that something has "distasteful implications" (that is, that it implies ~X, and there is evidence for X) does mean it is less likely to be true.
1mwengler
Help me out, readers. The fact that something has distasteful implications means it is less likely to be true. [pollid:802]
1mwengler
Historically, the hypothesis that the earth orbited the sun had the distasteful implications that we were not the center of the universe. Galileo was prosecuted for this belief and recanted it under threat. I am surprised that you think the distasteful implications for this belief were evidence that the earth did not in fact orbit the sun. Historically the hypothesis that humans evolved from non-human animals had the distasteful implications that humans had not been created by god in his image and provided with immortal souls by god. I am surprised that you consider this distaste to be evidence that evolution is an incorrect theory of the origin of species, including our own. This is a rationality message board, devoted to, among other things, listing the common mistakes that humans make in trying to determine the truth. I would have bet dollars against donuts that rejecting the truth of a hypothesis because its implications were distasteful would have been an obvious candidate for that list, and I would have apparently lost.
0Jiro
If you had reason to believe that the Earth is the center of the universe, the fact that orbiting the sun contradicts that is evidence against the Earth orbiting the sun. It is related to proof by contradiction; if your premises lead you to a contradictory conclusion, then one of your premises is bad. And if one of your premises is something in which you are justified in having extremely high confidence, such as "there is such a thing as murder", it's probably the other premise that needs to be discarded. If you have reason to believe that humans have souls, and evolution implies that they don't, that is evidence against evolution. Of course, how good that is as evidence against evolution depends on how good your reason is to believe that humans have souls. In the case of souls, that isn't really very good.
-2Tem42
Evidence that killing is wrong is certainly possible, but your statement "I think that killing is wrong" is such weak evidence that it is fair for us to dismiss it. You may provide reasons why we should think killing is wrong, and maybe we will accept your reasons, but so far you have not given us anything worth considering. I think that you are also equivocating on the word 'imply', suggesting that 'distasteful implications' means something like 'logical implications'.
0Eniac
The task you describe, at least the part where no whole brain transplant is involved, can be divided into two parts: 1) extracting the essential information about your mind from your brain, and 2) implanting that same information back into another brain. Either of these could be achieved in two radically different ways: a) psychologically, i.e. by interview or memoir writing on the extraction side and "brain-washing" on the implanting side, or b) technologically, i.e. by functional MRI, electro-encephalography, etc on the extraction side. It is hard for me to envision a technological implantation method. Either way, it seems to me that once we understand the mind enough to do any of this, it will turn out the easiest to just do the extraction part and then simulate the mind on a computer, instead of implanting it into a new body. Eliminate the wetware, and gain the benefit of regular backups, copious copies, and Moore's law for increasing effectiveness. Also, this would be ethically much more tractable. It seems to me this could also be the solution to the unfriendly AI problem. What if the AI are us? Then yielding the world to them would not be so much of a problem, suddenly.
1mwengler
I would expect recreating a mind from interviews and memoirs to be about as accurate as building a car based on interviews and memoirs written by someone who had driven cars. Which is to say: the part of our mind that talks and writes is not noted for its brilliant and detailed insight into how the vast majority of the mind works.
0Eniac
Good point. I suppose it boils down to what you include when you say "mind". I think the part of our mind that talks and writes is not very different from the part that thinks. So, if you narrowly, but reasonably, define the "mind" as only the conscious, thinking part of our personality, it might not be so farfetched to think a reasonable reconstruction of it from writings is possible. Thought and language are closely related. Ask yourself: How many of my thoughts could I put into language, given a good effort? My gut feeling is "most of them", but I could be wrong. The same goes for memories. If a memory can not be expressed, can it even be called a memory?

In dietary and health articles they often speak about "processed food". What exactly is processed food and what is unprocessed food?

Definitions will vary depending on the purity obsession of the speaker :-) but as a rough guide, most things in cans, jars, boxes, bottles, and cartons will be processed. Things that are, more or less, just raw plants and animals (or parts of them) will be unprocessed.

There are boundary cases about which people argue -- e.g. is pasteurized milk a processed food? -- but for most things in a food store it's pretty clear what's what.

0timujin
Thanks! That does make sense.
1polymathwannabe
Anything that you could have picked from the plant yourself (a pear, a carrot, a berry) AND has not been sprinkled with preservatives/pesticides/shiny gloss is unprocessed. If it comes in a package and looks nothing like what nature gives (noodles, cookies, jell-o), it's been processed. Raw milk also counts as unprocessed, but in the 21st century there's no excuse to be drinking raw milk.
1Lumifer
That's debatable -- some people believe raw milk to be very beneficial.
2polymathwannabe
Absolutely not worth the risk.
7AlexSchell
Do you have any sources that quantify the risk?
-1Lumifer
Oh, I'm sure the government wants you to believe raw milk is the devil :-) In reality I think it depends, in particular on how good your immune system is. If you're immunocompromised, it's probably wise to avoid raw milk (as well as, say, raw lettuce in salads). On the other hand, if your immune system is capable, I've seen no data that raw milk presents an unacceptable risk -- of course how much risk is unacceptable varies by person.
0Tem42
More relevant may be your supply chain. If you have given your cow all required shots and drink the milk within a day -- and without mixing it with the milk of dozens of other cows -- you are going to be a lot better off than if you stop off at a random roadside stand and buy a gallon of raw milk.
0timujin
So, it doesn't make sense to talk about processed meats, if you can't pick them from plants? If I roast my carrot, does it become processed?
0polymathwannabe
I'm assuming you value your health and thus don't eat any raw meat, so all of it is going to be processed---if only in your own kitchen. By the same standard, a roasted carrot is, technically speaking, "processed." However, what food geeks usually think of when they say "processed" involves a massive industrial plant where your food is filled with additives to compensate for all the vitamins it loses after being crushed and dehydrated. Too often it ends up with an inhuman amount of salt and/or sugar added as well.

I have a constant impression that everyone around me is more competent than me at everything. Does it actually mean that I am less competent, or is there some sort of strong psychological effect that can create that impression even if it is not actually true? If there is, is it a problem you should see your therapist about?

[-]Toggle250

Reminds me of something Scott said once:

And when I tried to analyze my certainty that – even despite the whole multiple intelligences thing – I couldn't possibly be as good as them, it boiled down to something like this: they were talented at hard things, but I was only talented at easy things.

It took me about ten years to figure out the flaw in this argument, by the way.

See also: The Illusion of Winning by Scott Adams (h/t Kaj_Sotala)

Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.

But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.

I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.

It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes and simply compared. It feels redundant to play the actual games.

7Gunnar_Zarncke
This reminds me of my criterion for learning: "You have understood something when it appears to be easy." Mathematicians call this state 'trivial'. It has become easy because you trained on the topic until the key aspects became part of your unconscious competence. Then it appears easy to you - because you no longer need to think about it.

Impostor syndrome:

Despite external evidence of their competence, those with the syndrome remain convinced that they are frauds and do not deserve the success they have achieved. Proof of success is dismissed as luck, timing, or as a result of deceiving others into thinking they are more intelligent and competent than they believe themselves to be.

Psychological research done in the early 1980s estimated that two out of five successful people consider themselves frauds and other studies have found that 70 percent of all people feel like impostors at one time or another. It is not considered a psychological disorder, and is not among the conditions described in the Diagnostic and Statistical Manual of Mental Disorders.

4timujin
Err, that's not it. I am no more successful than them. Or, at least, I kinda feel that everyone else is more successful than me as well.
0Elo
Consider that maybe you might be wrong about the impostor syndrome. As a person without it, it's hard to know how you think/feel and how you concluded that you couldn't have it. But maybe it's worth asking: how would someone convince you to change your mind on this topic?
0timujin
By entering some important situation where my and his comparative advantage in some sort of competence comes into play, and losing.
2Elo
What if you developed a few bad heuristics along the way, about how other successful people were not inherently more successful but just got lucky (or received some other external grant of success), whereas your hard-earned successes were due to personal skill... hard-earned, personally achieved success. It's probably possible to see a therapist about it, but I would suggest you can work your own way around it (consider it a challenge that can be overcome with the correct growth mindset).
5[anonymous]
I think people are quick to challenge this type of impression because it pattern matches to known cognitive distortions involved in things like depression, or known insecurities in certain competitive situations. For example, consider that most everyone will structure their lives such that their weaknesses are downplayed and their positive features are more prominent. This can happen either by choice of activity (e.g. the stereotypical geek avoids social games) or by more overt communication filtering (e.g. most people don't talk about their anger problems). Accordingly, it's never hard to find information that confirms your own relative incompetence, if there's some emotional tendency to look for it.

Aside from that, a great question is "to what ends am I making this comparison?" I find it unlikely that you have a purely academic interest in the question of your relative competence. First, it can often be useful to know your relative competence in a specific competitive domain. But even here, this information is only one part of your decision process: you may be okay with e.g. choosing a lower expected rank in one career over a higher rank in another because you enjoy the work more, or find it more compatible with your values, or because it pays better, or leaves more time for your family, or you're risk averse, or it's more altruistic, etc. But knowing your likely rank along some dimension will tell you a bit about the likely pay-offs of competing along that dimension.

But what is the use of making an across-the-board self-comparison? Suppose you constructed some general measure of competence across all domains. Suppose you found out you were below average (or even above average). Then what? It seems you're still in the same situation as before: you still must choose how to spend your time. The general self-comparison measure is nothing more than the aggregate of your expected relative ranks on specific sub-domains, which are more relevant to any specific…
4NancyLebovitz
Possibly parallel -- I've had a feeling for a long time that something bad was about to happen. Relatively recently, I've come to believe that this isn't necessarily an accurate intuition about the world; it's muscle tightness in my abdomen. It's probably part of a larger pattern, since just letting go in the area where I feel it doesn't make much difference. I believe that patterns of muscle tension and emotions are related and tend to maintain each other.

It's extremely unlikely that everyone is more competent than you at everything. If nothing else, your writing is better than that of a high proportion of people on the internet. Also, a lot of people have painful mental habits and have no idea that they have a problem.

More generally, you could explore the idea of everyone being more competent than you at everything. Is there evidence for this? Evidence against it? Is it likely that you're at the bottom of ability at everything?

This sounds to me like something worth taking to a therapist, bearing in mind that you may have to try more than one therapist to find one that's a good fit. I believe there's a strong psychological effect which can create that impression -- growing up around people who expect you to be incompetent. Now that I think about it, there may be genetic vulnerability involved, too.

Possibly worth exploring: free monthly Feldenkrais exercises -- these are patterns of gentle movement which produce deep relaxation and easier movement. The reason I think you can get some evidence about your situation by trying Feldenkrais is that, if you find your belief about other people being more competent at everything goes away, even briefly, then you have some evidence that the belief is habitual.
1mwengler
Nancy, I believe you are describing anxiety. That you are anxious, and that if you went to a psychologist for therapy and were covered by insurance, they would list your diagnosis on the reimbursement form as "generalized anxiety disorder." I say this not as a psychologist but as someone who was anxious much of his life. For me it was worth doing regular talking therapy and (it seems to me) hacking my anxiety levels slowly downward through directed introspection. I am still more timid than I would like in situations where, for example, I might be very direct telling a woman (of the appropriate sex) I love her, or putting my own ideas forward forcefully at work. But all of these things I do better now than I did in the past, and I don't consider my self-adjustment to be finished yet.

Anyway, if you haven't named what is happening to you as "anxiety," it might be helpful to consider that some of what has been learned about anxiety over time might be interesting to you, and that people who are discussing anxiety may often be discussing something relevant to you.
1timujin
Do you know me? I find a lot of evidence for it, but I am not sure I am not being selective. For example, I am the only one in my peer group who never did any extra-curricular activities at school. While everyone had something like sports or hobbies, I seemed to only study at school and waste all my other time surfing the internet and playing the same video games over and over.
8ChristianKl
The idea that playing an instrument is a hobby while playing a video game isn't is completely cultural. It says something about values but little about competence.
3jaime2000
One important difference is that video games are optimized to be fun while musical instruments aren't. Therefore, playing an instrument can signal discipline in a way that playing a game can't.
0ChristianKl
I'm not sure that's true. There's selection pressure on musical instruments to make them fun to use. Most of the corresponding training also mostly isn't optimised for learning but for fun.
1alienist
There's also selection pressure on instruments to make them pleasant to listen to. There's no corresponding constraint on video games.
0ChristianKl
In an age of eSports I'm not sure that's true. Quite a lot of games are not balanced to make them fun for the average player but balanced for high level tournament play.
2NancyLebovitz
Having a background belief that you're worse than everyone at everything probably lowered your initiative.
0MathiasZaman
Obvious question: Are you better at those games than other people? (On average, don't compare yourself to the elite.) How easy did studying come to you?
1timujin
At THOSE games? Yes. I can complete about half of American McGee's Alice blindfolded. Other games? General gaming? No. Or, okay, I am better than non-gamers, but my kinda-gamer peers are curb-stomping me at multiplayer in every game. Studying - very easy. Now that I am a university student - quite hard.
8MathiasZaman
Seems like you fell prey to the classic scenario of "being intelligent enough to breeze through high school and all I ended up with is a crappy work ethic." University is as good of a place as any to fix this problem. First of all, I encourage you to do all the things people tell you you should do, but most people don't: read up before classes, review after classes, read the extra material, ask your professors questions or ask for help, schedule periodic review sessions of the stuff you're supposed to know... You'll regret not doing those things when you get your degree but don't feel very competent about your knowledge. Try to make a habit out of this and it'll get easier in other aspects of your life. And try new things. This is probably a cliché in the LW-sphere by now, but really try a lot of new things.
0timujin
Thanks. Still, should I take it as "yes, you are less competent than people around you"?
4polymathwannabe
Maybe just less disciplined than you need to be. "Less competent" is too confusingly relative to mean anything solid.
1timujin
Well, here's a confusing part. I didn't tell the whole truth in the parent post; there are actually two areas where I am probably more competent than my peers, in which others openly envy me instead of the other way around. One is the ability to speak English (a foreign language; most of my peers wouldn't be able to ask this question here), the other is discipline. Everyone actually envies me for almost never procrastinating, never forgetting anything, etc. Are we talking about different disciplines here?
2polymathwannabe
If you already have discipline, what exactly is the difficulty you're finding to study now as compared to previous years?
3timujin
Sometimes, I just have trouble understanding the subject areas. I am going to take MathiasZaman's advice: I always used my discipline to complete, on time and with quality, what needed to be completed, but never put it into anything extra. Mostly, though, it is (social) anxiety - I can't approach a professor with anything unless I have a pack of companions backing me up, or can't start a project unless a friend confirms that I correctly understand what it is that has to be done. And my companions have awful discipline, the worst of anyone I have ever worked with (which is not many). So I end up, for example, preparing all assignments on time, but handing them in only long after they are due, when a friend has prepared them. I am working on that problem, and it becomes less severe as time goes on.
3polymathwannabe
I agree; group assignments are the worst. Is there any way you can get the university to let you take unique tests for the themes you already master?
3timujin
First of all: I don't agree that group assignments are bad. Those problems are my problems, and most complex tasks in real life really benefit from, or require, collaboration. I think that universities should have more group assignments and projects, even if it would mean I'll drop out. Second, I wasn't talking about group assignments in my post. I was talking about being too anxious to work on your own personal assignment, unless a friend has already done it and can provide confirmation.
0Viliam_Bur
So it seems like you can solve the problems... but then you are somehow frozen by fear that maybe your solution is not correct. Until someone else confirms that it is correct, and then you are able to continue. Solving the problem is not a problem; giving it to the teacher is. On the intellectual level, you should update the prior probability that your solutions are correct. On the emotional level... what exactly is this horrible outcome your imagination shows you if you gave the professor a wrong solution? It is probably something that feels stupid when you try to explain it. (Maybe you imagine the professor screaming at you loudly, and the whole university laughing at you. It's not realistic, but it may feel so.) But that's exactly the point. On some level, something stupid happens in your mind, because otherwise you wouldn't have this irrational problem. It doesn't make sense, but it's there in your head, influencing your emotions and actions. So the proper way is to describe your silent horrible vision explicitly, as specifically as you can (bring it from the darkness into the light), until your own mind finally notices that it really was stupid.
1timujin
I have no trouble imagining all the horrible outcomes, because I did get into trouble several times in similar scenarios, where getting confirmation from a friend would have saved me. For example, a couple of hours after giving my work to a teacher, I remembered that my friend hadn't been there, even though he was ready. I asked him about it, and it then turned out that I had given my work to the wrong teacher, and getting all my hand-crafted drawings back ended up being a very time- and effort-consuming task.
0ChristianKl
Reading that, it sounds like your core issue is low self-confidence. Taking an IQ test might help to dispel the idea that you are below average. You might be under the LW average IQ of 140, but you are probably well above 100, which is the average in society.
0timujin
I can guess that my IQ has three digits. It's just that it doesn't enable me to do things better than others. Except solving IQ tests, I guess.
1ChristianKl
It seems that you have a decent IQ. Additionally, you seem to be conscientious and can avoid procrastination, which is a very, very valuable characteristic. On the other hand, you have issues with self-esteem. As far as I understand, IQ testing gets used by real psychologists in cases like this. Taking David Burns' CBT book, "The Feeling Good Handbook", and doing the exercises every day for 15 minutes would likely do a lot for you, especially if you can get yourself to do the exercises regularly. I also support Nancy's suggestion of Feldenkrais.
3timujin
Another stupid question to boot: will all this make me more content with my current situation? While not a pleasant feeling, my discontent with my competence does serve as a motivator to actually study. I wouldn't have asked this question here and wouldn't have received all this advice if I were less competent than everyone else and okay with it.
1NancyLebovitz
That's a really interesting question, and I don't have an answer to it. Do you have any ideas about how your life might be different in positive ways if you didn't think you were less competent than everyone about everything? Is there anything you'd like to do just because it's important to you?
0timujin
Not anything specific. I have goals and values beyond being content or happy, but they are more than a couple of inferential steps away from my day-to-day routine, and I don't have that inner fire thingy that would bridge the gap. So, more often than not, they are not the main component of my actual motivation. Also, I am afraid of possibility of having my values changed.
0NancyLebovitz
I don't think I know you, but I'm not that great at remembering people. I made the claim about your writing because I've spent a lot of time online. I'm sure you're being selective about the people you're comparing yourself to.
3IlyaShpitser
There are two separate issues: morale management and being calibrated about your own abilities. I think the best way to be well-calibrated is to approximate pagerank -- to get a sense of your competence, don't ask yourself; average the extracted opinions of others who are considered competent and have no incentives to mislead you (this last bit is tricky; also, the extraction process may have to be slightly indirect). Morale is hard, and person-specific. My experience is that in long-term projects/goals, morale becomes a serious problem long before the situation actually becomes bad. I think having "wolverine morale" ("You know what, Mr. Grizzly? You look like a wuss, I can totally take you!") is a huge chunk of success, bigger than raw ability.
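This isn't something IlyaShpitser spelled out, but here is a toy sketch of what "approximate pagerank" could mean in practice, assuming you can put rough numbers on who vouches for whom; every name and number below is invented for illustration:

```python
# Toy "pagerank-style" competence estimate (all names and numbers invented):
# weight each person's opinion of you by how much the rest of the network
# vouches for that person, iterating until the trust weights stabilize.

def trust_weights(endorsements, iterations=50, damping=0.85):
    """endorsements[a][b]: how strongly person a vouches for person b (0..1)."""
    people = list(endorsements)
    n = len(people)
    weight = {p: 1.0 / n for p in people}
    for _ in range(iterations):
        # The comprehension reads the *old* weights before rebinding.
        weight = {
            p: (1 - damping) / n
               + damping * sum(weight[q] * endorsements[q].get(p, 0.0)
                               for q in people)
            for p in people
        }
    return weight

endorsements = {
    "alice": {"bob": 0.9, "carol": 0.4},
    "bob":   {"alice": 0.8, "carol": 0.7},
    "carol": {"alice": 0.5, "bob": 0.6},
}
weights = trust_weights(endorsements)

# Each person's stated opinion of your competence (0..1), averaged using
# the trust weight the network assigns to that person.
opinions_of_me = {"alice": 0.6, "bob": 0.7, "carol": 0.5}
estimate = (sum(weights[p] * opinions_of_me[p] for p in opinions_of_me)
            / sum(weights[p] for p in opinions_of_me))
print(round(estimate, 3))
```

The point of the iteration is the indirection mentioned above: flattery from someone whom nobody else vouches for gets discounted automatically.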
0Lumifer
Is Zuckerberg's "Move fast, break things" similar/related?
2Elo
Look up impostor syndrome. And try not to automatically say, "I don't have it because I never did anything of noteworthiness." ---Oh dang; someone else got to it first. How did you go with your opinions on impostor syndrome now?
2elharo
Possible, but unlikely. We're all just winging it and as others have pointed out, impostor syndrome is a thing.
1EphemeralNight
I sometimes have a similar experience, and when I do, it is almost always simply an effect of my own standards of competence being higher than those of the people around me. Imagine some sort of problem arises in the presence of a small group. The members of that group look at each other, and whoever signals the most confidence gets first crack at the problem. But this more-confident person then does not reveal any knowledge or skill that the others do not possess, because said confidence was entirely due to a higher willingness to potentially make the problem worse through trial and error. So, in this scenario, feeling less competent does not mean you are less competent; it means you are more risk-averse. Do you have a generalized paralyzing fear of making the problem worse? If so, welcome to the club. If not, never mind.
1mwengler
I personally am a fan of talking therapy. If you are thinking something is worth asking a therapist about, it is worth asking a therapist about. And beyond the generalities, thinking you are not good enough is squarely among the kinds of things it can be helpful to discuss with a therapist. Consider the propositions: 1) everyone is more competent than you at everything, and 2) you can carry on a coherent conversation on LessWrong. I am pretty sure that these are mutually exclusive propositions. I'm pretty sure just from reading some of your comments that you are more competent than plenty of other people at a reasonable range of intellectual pursuits. Anything you can talk to a therapist about you can also talk to your friends about. Do they think you are less competent than everybody else? They might point out to you in a discussion some fairly obvious evidence for or against this proposition that you are overlooking.
1timujin
I asked my friends around. Most were unable to point out a single thing I am good at, except speaking English very well for a foreign language, and having good willpower. One said "hmmm, maybe math?" (as it turned out, he was fast-talked by the math babble that had been radiating off me for some time after I read Godel, Escher, Bach), and several pointed out that I am handsome (while a nice perk, I don't want that to be my defining proficiency).
4mwengler
Originally you expressed concern that all other people were better than you at all the things you might do. But here you find out from your friends that for each thing you do there are other people around you who do it better. In a world with billions of people, essentially every one of us can find people who are better at what we are good at than we are. So join the club. What works is to take some pleasure in doing things. Only you can improve your understanding of the world, for instance. No one in the world is better at increasing your understanding of the world than you are. I read comments here and post "answers" here to increase my understanding of the world. It doesn't matter that other people here are better at answering these questions, or that other people here have a better understanding of the world than I do. I want to increase my understanding of the world, and I am the only person in the world who can do that. I also wish to understand taking pleasure and joy from the world, and I work to increase my pleasure and joy in the world. No one can do that for me better than I can. You might take more joy than me in kissing that girl over there. Still, I will kiss her if I can, because having you kiss her gives me much less joy and pleasure than kissing her myself, even if I am getting less joy from kissing her than you would get for yourself if you kissed her. The concern you express to only participate in things where you are better than everybody else is just a result of your evolution as a human being. The genes that make you care about being better than others around you have, in the past, caused your ancestors to find effective and capable mates, able to keep their children alive and able to produce children who would find effective and capable mates. But your genes are just your genes; they are not the "truth of the world." You can make the choice to do things because you want the experience of doing them, and you will find you are better than anybody else
1LizzardWizzard
I suppose the problem emerged only because you communicate exclusively with people of your own sort and level of awareness. Try going on a trip to some rural village, or striking up conversations with taxi drivers, dishwashers, janitors, cooks, security guards, etc.
0Lumifer
Is that basically a self-confidence problem?
4timujin
Is it? I don't know.
0Lumifer
Well, does it impact what you are willing to do or try? Or it's just an abstract "I wish I were as cool" feeling? If you imagine yourself lacking that perception (e.g. imagine everyone's IQ -- except yours -- dropping by 20 points), would the things you do in life change?
0timujin
Guesses here. I would be taking up more risks in areas where success depends on competition. I would become less conforming, more arrogant and cynical. I would care less about producing good art, and good things in general. I would try less to improve my social skills, empathy and networking, and focus more on self-sufficiency. I wouldn't have asked this question here, on LW.
0MathiasZaman
I frequently feel similar and I haven't found a good way to deal with those feelings, but it's implausible that everyone around you is more competent at everything. Some things to take into account:
* Who are you comparing yourself to? Peers? Everyone you meet? Successful people?
* What traits are you comparing? It's unlikely that someone who is, for example, better at math than you are is also superior in every other area.
* Maybe you haven't found your advantage, or a way to exploit it.
* Maybe you haven't spent enough time on one thing to get really good at it.
Long shot: Do you think you might have ADHD?.pdf) (pdf warning) Alternatively, go over the diagnostic criteria
4gjm
Your link is broken because it has parentheses in the URL. Escape them with backslashes to unbreak it.
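To illustrate the fix with a made-up URL (percent-encoding the parentheses as %28 and %29 also works in most Markdown implementations):

```
Broken: [diagnostic criteria](http://example.com/ADHD(adult).pdf)
Fixed:  [diagnostic criteria](http://example.com/ADHD\(adult\).pdf)
Or:     [diagnostic criteria](http://example.com/ADHD%28adult%29.pdf)
```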
4MathiasZaman
Thank you very much.
4gjm
You're welcome!
0timujin
Peers. It being unlikely and still seeming to happen is the reason I asked this question. Maybe. And everyone else did, thus denying me a competitive advantage?

Is it a LessWrongian faux pas to comment only to agree with someone? Here's the context:

That's the kind of person that goes on to join LW and tell you. There are also people who read a sequence post or two because they followed a link from somewhere, weren't shocked at all, maybe learned something, and left. In fact I'd expect they're the vast majority.

I was going to say that I agree and that I had not considered my observation as an effect of survivorship bias.

I guess I thought it might be useful to explicitly relate what he said to a bias. Maybe that... (read more)

What prerequisite knowledge is necessary to read and understand Nick Bostrom's Superintelligence?

Mostly just out of curiosity:

What happens karma-wise when you submit a post to Discussion, it gets some up/downvotes, you resubmit it to Main, and it gets up/downvotes there? Does the post's score transfer, or does it start from 0?

2Kaj_Sotala
The post's score transfers, but I think that the votes that were applied when it was in Discussion don't get the x10 karma multiplier that posts in Main otherwise do.
0Gondolinian
Thanks!

How do I improve my ability to simulate/guess other people's internal states and future behaviors? I can, just barely, read emotions, but I make the average human look like a telepath.

1hyporational
It's trial and error mostly: paying attention to other people doing well or making mistakes, getting honest feedback from a skilled and trusted friend. Learning social skills is like learning to ride a bike; reading about it doesn't give you much of an advantage. The younger you are, the less it costs to make mistakes. I think a social job is a good way to learn, because customers are way less forgiving than other people you randomly meet. You could volunteer for some social tasks too. If your native hardware is somehow socially limited, then you might benefit from reading a little bit more, and you might have to develop workarounds to use what you've got to read people. It's difficult to learn from mistakes if you don't know you're making them. One thing I've learned about the average human looking like a telepath is that most people are way too certain about their particular assumption when there are actually multiple possible ways to understand a situation. People generally aren't as great at reading each other as they think they are.
0ilzolende
My native hardware is definitely limited - I'm autistic. The standard quick-and-dirty method of predicting others seems to be "model them as slightly modified versions of you", but when other people's minds are more similar to each other than they are to you, the method works far better for them than it does for you. My realtime modeling isn't that much worse than other people's, but other people can do a lot more with a couple of minutes and no distractions than I can. Thanks a bunch for the suggestions!
1hyporational
It certainly doesn't feel that way to me, but I might have inherited some autistic characteristics since there are a couple of autistic people in my extended family. Now that I've worked with people more, it's more like I have several basic models of people like "rational", "emotional", "aggressive", "submissive", "assertive", "polite", "stupid", "smart", and then modify those first impressions according to additional information. I definitely try not to model other people based on my own preferences since they're pretty unusual, and I hate it when other people try to model me based on their own preferences especially if they're emotional and extroverted. I find that kind of empathy very limited, and these days I think I can model a wider variety of people than many natural extroverts can, in the limited types of situations where I need to.
0ilzolende
Thanks! Your personality archetypes/stereotypes sound like a quick-and-dirty modeling system that I can actually use, but one that I shouldn't explain to the people who know me by my true name. That probably explains why I hadn't heard about it already: if it were less offensive-sounding, then someone would have told me about it. Instead, we get the really-nice-sounding but not very practical suggestions about putting yourself in other peoples' shoes, which is better for basic* morality than it is for prediction. *By "basic", I mean "stuff all currently used ethical systems would agree on", like 'don't hit someone in order to acquire their toys.'

Is "how do I get better at sex?" a solved problem?

Is it just a matter of getting a partner who will give you feedback, and practicing?

0Lumifer
I think "how do you get better", mostly yes, but "how do you get to be very very good", mostly no.
1Capla
Ok. Is there a trick to that one or do you just need to have gotten the lucky genes?
0Lumifer
"No", as in "not a solved problem" implies that no one knows :-) Whether you need lucky genes is hard to tell. Maybe all you need is lack of unlucky ones :-/
1Capla
Is it a problem that anyone has put significant effort into? What's the state of the evidence? Now that I think about it, I'm a little surprised there isn't a subculture of people trying to excel at sex, sort of the way pickup artists try to excel at getting sex. Is this because there is no technique for doing sex well? Because most people think there's no technique for doing sex well? Because sex is good enough already? Because sex is actually more about status than pleasure? Because such a subculture exists and I'm ignorant of it?
2ChristianKl
Data suggest that a fair number of women don't get orgasms during sex, but the literature suggests that they could, given the proper environment. Squirting in women seems to happen seldom enough that the UK bans it in their porn for being abnormal. But of course sex is about more than just orgasm length and intensity ;) Yes. In general, one of the things that distinguishes the pickup artist community is that it's full of people who would rather sit in front of their computers and talk about techniques than interact face to face. That means you find a lot of information about it on the internet. Many of the people who are very kinesthetic don't spend much time on the net. But that doesn't mean there's no information available on the internet. Getting ideas about how sex is supposed to work from porn is very bad. Porn is created to please the viewer, not the actors. Porn producers have to worry about issues like camera angles. Sensual touch can create feelings without looking good on camera. Porn often ignores the state of mind of the actors. Books, on the other hand, do provide some knowledge, even if they alone aren't enough. Tim Ferriss has two chapters on the subject in his book "The 4-Hour Body", including the basic anatomy lesson of how the g-spot works. Apart from that, I'm not familiar with the English literature on the subject, but Tim Ferriss suggests among others http://www.tinynibbles.com/ for further reading. The community in which I would expect the most knowledge is polyamorous people, who speak very openly with each other. Using our cherished rationality skills, we can start to break the skill down into subareas:
1) Everybody is different. Don't assume that every man or woman wants the same thing.
2) Consent: Don't do something that your partner doesn't want you to do to them. When in doubt, ask.
3) Mindset: Lack of confidence and feeling pressure to perform can get in the way of being present. Various forms of "sex is bad"-beliefs can reduce enjoyment. Authentic e
2Lumifer
I'm sure there is, but I don't think it would want to be very... public about it. For one thing, I wouldn't be surprised if competent professionals were very good (and very expensive). Given Christianity's prudishness (thank you, St. Augustine), you may also want to search outside of the Western world -- Asia, including India, sounds promising. But as usual, one of the first questions is what you want to optimize for. And don't forget that men and women start from quite different positions.
2Capla
I don't know what you mean by this.
0Lumifer
The physiology of men and women is significantly different.

Here I be, looking at a decade-old Kurzweil book, and I want to know whether the trends he's graphing hold up in later years. I have no inkling of where on earth one GETs these kinds of factoids, except by some mystical voodoo powers of Research bestowed by Higher Education. It's not just guesstimation... probably.

Bits per Second per Dollar for wireless devices? Smallest DRAM Half Pitches? Rates of adoption for pre-industrial inventions? From whence do all these numbers come and how does one get more recent collections of numbers?

1knb
LW user Stuart Armstrong did a number of posts assessing Kurzweil's predictions: Here, here, here, and here.

A question about Lob's theorem: assume not provable(X). Then, by the rules of if-then statements, "if provable(X) then X" is provable. But then, by Lob's theorem, provable(X), which is a contradiction. What am I missing here?

3DanielFilan
I'm not sure how you're getting from not provable(X) to provable(provable(X) -> X), and I think you might be mixing meta levels. If you could prove not provable(X), then I think you could prove (provable(X) ->X), which then gives you provable(X). Perhaps the solution is that you can never prove not provable(X)? I'm not sure about this though.
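For anyone following along, here is the textbook statement written out with the level distinction made explicit. This is a sketch of standard material, nothing beyond what DanielFilan says above; Box X abbreviates "PA proves X":

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Write $\Box X$ for ``PA proves $X$''. L\"ob's theorem says:
\[
  \text{if } \mathrm{PA} \vdash (\Box X \rightarrow X),
  \text{ then } \mathrm{PA} \vdash X.
\]
The hypothesis is a proof \emph{inside} PA. Reading the conditional as
$\neg\Box X \lor X$, the proposed argument would need
$\mathrm{PA} \vdash \neg\Box X$. But PA proves the formalized explosion
principle $\neg\mathrm{Con}(\mathrm{PA}) \rightarrow \Box X$, so
\[
  \mathrm{PA} \vdash \neg\Box X
  \quad\text{would give}\quad
  \mathrm{PA} \vdash \mathrm{Con}(\mathrm{PA}),
\]
contradicting G\"odel's second incompleteness theorem (assuming PA is
consistent). The mere \emph{truth} of $\neg\Box X$ never delivers the
provability that L\"ob's theorem needs.
\end{document}
```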
0Ebthgidr
I forget the formal name for the theorem, but isn't (if X then Y) iff (not-X or Y) provable in PA? Because I was pretty sure that's a fundamental theorem of first-order logic. Your solution is the one that looked best, but it still feels wrong. Here's why: say P is provable. Then not-P is provably false. Then not(provable(not-P)) is provable. Not being able to prove not(provable(x)) means nothing is provable.
0DanielFilan
You're right that (if X then Y) is just fancy notation for (not(X) or Y). However, I think you're mixing up levels of where things are being proved. For the purposes of the rest of this comment, I'll use provable(X) to mean that PA or whatever proves X, and not that we can prove X. Now, suppose provable(P). Then provable(not(not(P))) is derivable in PA. You then claim that not(provable(not(P))) follows in PA, that is to say, that provable(not(Q)) -> not(provable(Q)). However, this is precisely the statement that PA is consistent, which is not provable in PA. Therefore, even though we can go on to prove not(provable(not(P))), PA can't, so that last step doesn't work.
0Ebthgidr
Wait. Not(provable(consistency)) is provable in PA? Then run that through the above.
0DanielFilan
I'm not sure that this is true. I can't find anything that says either way, but there's a section on Godel's second incompleteness theorem in the book "Set theory and the continuum hypothesis" by Paul Cohen that implies that the theorem is not provable in the theory that it applies to.
0Ebthgidr
I'll rephrase it this way. For all C:
1. Either provable(C) or not(provable(C)).
2. If provable(C), then provable(C).
3. If not(provable(C)), then use the above logic to prove provable(C).
Therefore all C are provable.
0DanielFilan
Which "above logic" are you referring to? If you mean your OP, I don't think that the logic holds, for reasons that I've explained in my replies.
0Ebthgidr
Your reasons were that not(provable(C)) isn't provable in PA, right? If so, then I will rebut thusly: the setup in my comment immediately above (i.e., either provable(C) or not(provable(C))) gets rid of that.
0DanielFilan
I'm not claiming that there is no proposition C such that not(provable(C)), I'm saying that there is no proposition C such that provable(not(provable(C))) (again, where all of these 'provable's are with respect to PA, not our whole ability to prove things). I'm not seeing how you're getting from not(provable(not(provable(C)))) to provable(C), unless you're commuting 'not's and 'provable's, which I don't think you can do for reasons that I've stated in an ancestor to this comment.
0Ebthgidr
Well, there is, unless I misunderstand what meta level provable(not(provable(consistency))) is on.
0DanielFilan
I think you do misunderstand that, and that the proof of not(provable(consistency(PA))) is not in fact in PA (remember that the "provable()" function refers to provability in PA). Furthermore, regarding your comment before the one that I am responding to now, just because not(provable(C)) isn't provable in PA, doesn't mean that provable(C) is provable in PA: there are lots of statements P such that neither provable(P) nor provable(not(P)), since PA is incomplete (because it's consistent).
0Ebthgidr
That doesn't actually answer my original question; I'll try writing out the full proof.
Premises:
1. P or not-P is true in PA.
2. Also, because of that, if (p -> q) and (not(p) -> q), then q (use rules of distribution over and/or).
So:
1. provable(P) or not(provable(P)), by premise 1.
2. If provable(P), then provable(P): switch "if p then p" to "not-p or p", premise 1.
3. If not(provable(P)), then provable(if provable(P) then P): since "if p then q" = "not-p or q", and not(not(p)) = p.
4. Therefore, if not(provable(P)), then provable(P): 3 and Lob's theorem.
5. Therefore provable(P): by premise 2, line 2, and line 4.
Where's the flaw? Is it between lines 3 and 4?
0DanielFilan
I think step 3 is wrong. Expanding out your logic, you are saying that if not(provable(P)), then (if provable(P) then P), then provable(if provable(P) then P). The second step in this chain is wrong, because there are true facts about PA that we can prove but that PA cannot prove.
0Ebthgidr
So the statement (if not(p) then (if p then q)) is not provable in PA? Doesn't it follow immediately from the definition of if-then in PA?
0DanielFilan
(if not(p) then (if p then q)) is provable. What I'm claiming isn't necessarily provable is (if not(p) then provable(if provable(p) then q)), which is a different statement.
0Ebthgidr
Oh, that's what I've been failing to get across. I'm not saying if not(p) then (if provable(p) then q). I'm saying if not provable(p) then (if provable(p) then q)
0DanielFilan
You aren't saying that though. In the post where you numbered your arguments, you said (bolding mine) which is different, because it has an extra 'provable'.
0Ebthgidr
So then here's a smaller lemma: for all x and all q, if not(x), then provable(if x then q), by the definition of if-then. Now replace x by provable(P) and q by P. Where's the flaw?
0DanielFilan
The flaw is that you are correctly noticing that provable(if not(x) then (if x then q)), and incorrectly concluding if not(x) then provable(if x then q). It is true that if not(x) then (if x then q), but not(x) is not necessarily provable, so (if x then q) is also not necessarily provable.
0Ebthgidr
Is "x or not-x" provable? Then use my proof structure again.
0DanielFilan
The whole point of this discussion is that I don't think that your proof structure is valid. To be honest, I'm not sure where your confusion lies here. Do you think that all statements that are true in PA are provable in PA? If not, how are you deriving provable(if x then q) from (if x then q)? In regards to your above comment, just because you have provable(x or not(x)) doesn't mean you have provable(not(x)), which is what you need to deduce provable(if x then q).
0Ebthgidr
To answer the below: I'm not saying that provable(X or not-X) implies provable(not-X). I'll just put it in lemma form (P(x) means provable(x)):
If P(if x then Q) and P(if not-x then Q),
then P(not-x or Q) and P(x or Q), by rules of if-then;
then P((x and not-x) or Q), by rules of distribution;
then P(Q), by rules of or-statements.
So my proof structure is as follows: prove that both provable(P) and not(provable(P)) imply provable(P). Then, by the above lemma, provable(P). I don't need to prove provable(not(provable(P))); that's not required by the lemma. All I need to prove is that the logical operations that lead from not(provable(P)) to provable(P) are truth- and provability-preserving.
0DanielFilan
Breaking my no-comment commitment because I think I might know what you were thinking that I didn't realise you were thinking (won't comment after this, though): if you start with (provable(provable(P)) or provable(not(provable(P)))), then you can get your desired result, and indeed, provable(provable(P) or not(provable(P))). However, provable(Q or not(Q)) does not imply provable(Q) or provable(not(Q)), since there are undecidable questions in PA.
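A concrete instance of such an undecidable Q, in case it helps (again, standard textbook material): take G to be a Godel sentence for PA.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $G$ be a G\"odel sentence for PA. Then (assuming PA is consistent,
and $\omega$-consistent for the second non-provability):
\begin{itemize}
  \item $\mathrm{PA} \vdash G \lor \neg G$ \quad (law of excluded middle);
  \item $\mathrm{PA} \nvdash G$ and $\mathrm{PA} \nvdash \neg G$
        \quad (first incompleteness theorem).
\end{itemize}
So provability commutes with ``and'' ($\vdash A \land B$ iff $\vdash A$
and $\vdash B$) but not with ``or'', which is exactly the step the
proof structure above needs and cannot have.
\end{document}
```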
1Ebthgidr
Ohhh, thanks. That explains it. I feel like there should exist things for which provable(not(p)), but I can't think of any offhand, so that'll do for now.
0DanielFilan
I agree that if you could prove that (if not(provable(P)) then provable(P)), then you could prove provable(P). That being said, I don't think that you can actually prove (if not(provable(P)) then provable(P)). A few times in this thread, I've shown what I think the problem is with your attempted proof - the second half of step 3 does not follow from the first half. You are assuming X, proving Y, then concluding provable(Y), which is false, because X itself might not have been provable. I am really tired of this thread, and will no longer comment.
0Ebthgidr
Ok, thanks for clearing that up.
0[anonymous]
As far as I know, that is actually the solution. If you could prove "not provable(X)" then in particular you have proven that the proof system you're working in is consistent (an inconsistent system proves everything by explosion). But Godel.

Looking for some people to refute this hare-brained idea I recently came up with.

The time period from the advent of the industrial revolution to the so-called digital revolution was about 150 - 200 years. Even though computers were being used around WWII, widespread computer use didn't start to shake things up until 1990 or so. I would imagine that AI would constitute a similar fundamental shift in how we live our lives. So would it be a reasonable extrapolation to think that widespread AI would be about 150 - 200 years after the beginning of the information age?

By what principle would such an extrapolation be reasonable?

[-]Shmi150

If you are doing reference class forecasting, you need at least a few members in your reference class and a few outside of it, together with the reasons why some are in and others out. If you are generalizing from one example, then, well...

4NobodyToday
I'm a first-year AI student, and we are currently in the middle of exploring AI 'history'. Of course I don't know a lot about AI yet, but the interesting part about learning the history of AI is that in some sense the climax of AI research is already behind us. People got very interested in AI after the Dartmouth conference ( http://en.wikipedia.org/wiki/Dartmouth_Conferences ) and were so optimistic that they thought they could make an artificially intelligent system in 20 years. And here we are, still struggling with the seemingly simplest things such as computer vision etc. The problem is they came across some hard problems which they can't really ignore. One of them is the frame problem. http://www-formal.stanford.edu/leora/fp.pdf Another is the common-sense problem. Solutions to many of them (I believe) are either 1) huge brute-force power or 2) machine learning. And machine learning is a thing we can't seem to get very far with. Programming a computer to program itself - I can understand why that must be quite difficult to accomplish. So since the 80s AI researchers have mainly focused on building expert systems: systems which can do a certain task much better than humans, but which lack many things that are very easy for humans (which is apparently called Moravec's paradox). Anyway, the point I'm trying to get across, and I'm interested in hearing whether you agree or not, is that AI was/is very overrated. I doubt we can ever make a real artificially intelligent agent, unless we can solve the machine learning problem for real. And I doubt whether that is ever truly possible.
3Daniel_Burfoot
Standard vanilla supervised machine learning (e.g. backprop neural networks and SVMs) is not going anywhere fast, but deep learning is really a new thing under the sun.
2Punoxysm
On the contrary, the idea of making deeper nets is nearly as old as ordinary 2-layer neural nets; successful implementations date back to the late '90s in the form of convolutional neural nets, and they had another burst of popularity in 2006. Advances in hardware, data availability, heuristics about architecture and training, and large-scale corporate attention have allowed the current burst of rapid progress. This is both heartening, because the foundations of its success are deep, and tempering, because the limitations that have held it back before could resurface to some degree.
1DanielLC
It's possible. We're an example of that. The question is if it's humanly possible. There's a common idea of an AI being able to make another twice as smart as itself, which could make another twice as smart as itself, etc. causing an exponential increase in intelligence. But it seems just as likely that an AI could only make one half as smart as itself, in which case we'll never even be able to get the first human-level AI.
1ctintera
The example you give to prove plausibility is also a counterexample to the argument you make immediately afterwards. We know that less-intelligent or even non-intelligent things can produce greater intelligence, because humans evolved, and evolution is not intelligent. It's more a matter of whether we have enough time to dredge something reasonable out of the problem space. If we were smarter we could search it faster.
0DanielLC
Evolution is an optimization process. It might not be "intelligent" depending on your definition, but it's good enough for this. Of course, that just means that a rather powerful optimization process occurred just by chance. The real problem is, as you said, it's extremely slow. We could probably search it faster, but that doesn't mean that we can search it fast.

Assuming for a moment that Everett's interpretation is correct, that there will eventually be a way to very confidently deduce this, and that time, identity and consciousness work pretty much as described by Drescher (IIRC - there is no continuation of consciousness, just memories, and nothing meaningful separates your identity from your copies):

Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise... (read more)

4DanielFilan
Not really. If you're in a suboptimal branch, but still doing better than if you didn't exist at all, then you aren't making the world better off by self-destructing regardless of whether other branches exist. It would not increase the proportion (technically, you want to be talking about measure here, but the distinction isn't important for this particular discussion) of branches where everything is stellar - just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive, which isn't so important. To see this, imagine you have two branches, one where things are going poorly and one where things are going great. The proportion of branches where things are going stellar is 1/2. Now suppose that the being/society/system that is going poorly self-destructs. The proportion of branches where things are going stellar is still 1/2, but now you have a branch where instead of having a being/society/system that is going poorly, you have no being/society/system at all.
0Kaura
Thanks! Ah, I'm probably just typical-minding like there's no tomorrow, but I find it inconceivable to place much value on the number of branches you exist in. The perceived continuation of your consciousness will still go on as long as there are beings with your memories in some branch: in general, it seems to me that if you say you "want to keep living", you mean you want there to be copies of you in some of the possible futures, waking up the next morning doing stuff present-you would have done, recalling what present-you thought yesterday, and so on (in addition you will probably want a low probability for this future to include significant suffering). Likewise, if you say you "want to see humanity flourish indefinitely", you want a future that includes your biological or cultural peers and offspring colonizing space and all that, remembering and cherishing many of the values you once had (sans significant suffering). To me it seems impossible to assign value to the number of MWI-copies of you, not least because there is no way you could even conceive of their number, or usually make meaningful ethical decisions where you weigh their numbers.* Instead, what matters overwhelmingly more is the probability of any given copy living a high quality life. Yes, this is obvious of course. What I meant was exactly this, because from the point of view of a set of observers, eliminating the set of observers from a branch <=> rendering the branch irrelevant, pretty much. To me it did feel like this is obviously what's important, and the branches where you don't exist simply don't matter - there's no one there to observe anything after all, or judge the lack of you to be a loss or morally bad (again, not applicable to individual humans). If I learned today that I have a 1% chance to develop a maybe-terminal, certainly suffering-causing cancer tomorrow, and I could press a button to just eliminate the branches where that happens, I would not have thought I am committing a mora
0DanielFilan
As it happens, you totally can (it's called the Born measure, and it's the same number as what people used to think was the probabilities of different branches occurring), and agents that satisfy sane decision-theoretic criteria weight branches by their Born measure - see this paper for the details. This is a good place to strengthen intuition, since if you replace "killing myself" with "torturing myself", it's still true that none of your future selves who remain alive/untortured "would ever notice anything, vast amounts of future copies of [yourself] would wake up just like they thought they would the next morning, and carry on with their lives and aspirations". If you arrange for yourself to be tortured in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life - but you also wake up and get tortured. Similarly, if you arrange for yourself to be killed in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life - but you also get killed (which is presumably a bad thing even or especially if everybody else also dies). One way to intuitively see that this way of thinking is going to get you in trouble is to note that your preferences, as stated, aren't continuous as a function of reality. You're saying that universes where (1-x) proportion of branches feature you being dead and x proportion of branches feature you being alive are all equally fine for all x > 0, but that a universe where you are dead with proportion 1 and alive with proportion 0 would be awful (well, you didn't actually say that, but otherwise you would be fine with killing some of your possible future selves in a classical universe). However, there is basically no difference between a universe where (1-epsilon) proportion of branches feature you being dead and epsilon proportion of branches feature you being alive, and a universe where 1 proportion of branches feature you being dead and 0 proportion of branches feature
0ike
I'm sorry, but "sort of thing which is liable to lead to crazy behaviour" won't cut it. Could you give an example of crazy behaviour with this preference ordering? I still think this approach (not counting measure as long as some of me exists) feels right and is what I want. I'm not too worried about discontinuity at only x=0 (and if you look at larger multiverses, x probably never equals 0.) To argue over a specific example: if I set up something that chooses a number randomly with quantum noise, then buys a lottery ticket, then kills me (in my sleep) only if the ticket doesn't win, then I assign positive utility to turning the machine on. (Assuming I don't give a damn about the rest of the world who will have to manage without me.) Can you turn this into either an incoherent preference, or an obviously wrong preference? (Personally, I've thought about the TDT argument for not doing that; because you don't want everyone else to do it and create worlds in which only 1 person who would do it is left in each, but I'm not convinced that there are a significant number of people who would follow my decision on this. If I ever meet someone like that, I might team up with them to ensure we'd both end up in the same world. I haven't seen any analysis of TDT/anthropics applied to this problem, perhaps because other people care more about the world?)
1DanielFilan
Another way to look at it is this: imagine you wake up after the bet, and don't yet know whether you are going to quickly be killed or whether you are about to receive a large cash prize. It turns out that your subjective credence for which branch you are in is given by the Born measure. Therefore (assuming that not taking the bet maximises expected utility in the single-world case), you're going to wish that you hadn't taken the bet immediately after taking it, without learning anything new or changing your mind about anything. Thus, your preferences as stated either involve weird time inconsistencies, or care about whether there's a tiny sliver of time between the worlds branching off and you being killed. At any rate, in any practical situation, that tiny sliver of time is going to exist, so if you don't want to immediately regret your decision, you should maximise expected utility with respect to the Born measure, and not discount worlds where you die.
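To put numbers on the disagreement (all utilities and measures below are invented for illustration): suppose the quantum lottery machine leaves a surviving branch of Born measure 0.001 and a killed branch of measure 0.999.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
With invented utilities $U(\text{win}) = 10^{6}$,
$U(\text{killed}) = -10^{7}$, and $U(\text{no bet}) = 0$, weighting
branches by Born measure gives
\[
  \mathbb{E}[U \mid \text{bet}]
  = 0.001 \cdot 10^{6} + 0.999 \cdot (-10^{7})
  \approx -9.99 \times 10^{6}
  \;<\; 0 = \mathbb{E}[U \mid \text{no bet}],
\]
while conditioning on survival, as ike proposes, drops the
$0.999 \cdot U(\text{killed})$ term and makes the bet look like a free
$10^{6}$. The whole disagreement is over whether that dropped term
belongs in the sum.
\end{document}
```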
0DanielFilan
Your preference already feels "obviously wrong" to me, and I'll try to explain why. If we imagine that only one world exists, but we don't know how it will evolve, I wouldn't take the analogue of your lottery ticket example, and I suspect that you wouldn't either. The reason I wouldn't do this is that I care about the possible future worlds where I would die, despite the fact that I wouldn't exist there (after very long). I'm not sure what other reason there would be to reject this bet in the single-world case. However, you are saying that you don't care about the actual future worlds where you die in the many-worlds case, which seems bizarre and inconsistent with what I imagine your preferences would be in the single-world case. It's possible that I'm wrong about what your preferences would be in the single-world case, but then you're acting according to the Born rule anyway, and whether the MWI is true doesn't enter into it. (EDIT: that last sentence is wrong, you aren't acting according to the Born rule anyway.) In regards to my point about discontinuity, it's worth knowing that to know whether x = 0 or x > 0, you need infinitely precise knowledge of the wave function. It strikes me as unreasonable and off-putting that no finite amount of information about the state of the universe can distinguish between one universe which you think is totally fantastic and another universe which you think is terrible and awful. That being said, I can imagine someone being unpersuaded by this argument. If you are willing to accept discontinuity, then you get a theory where you are still maximising expected utility with respect to the Born rule, but your utilities can be infinite or infinitesimal. On a slightly different note, I would highly recommend reading the paper which I linked (most of which I think is comprehensible without a huge amount of technical background), which motivates the axioms you need for the Born rule to work, and argues against other decision rules.
0ike
I downloaded the paper you linked to and will read it shortly. I'm totally sympathetic to the "didn't want to make a long comment longer" excuse, having felt that way many times myself. I agree that in the single-world case I wouldn't want to do it. That's not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself, who would not exist with ~1 probability. In a multiverse, I still exist with ~1 probability. You can argue that I can't know for sure that I live in a multiverse, which is one of the reasons I'm still alive in your world (the main reason being it's not practical for me right now, and I'm not really confident enough to bother researching and setting something like that up.) However, you also don't know that anything you do is safe, by which I mean things like driving, walking outside, etc. (I'd say those things are far more rational in a multiverse, anyway, but even people who believe in a single world still do these things.) Another reason I don't have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don't feel like that argument is convincing. I don't think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher-level multiverses. You don't need to know for sure that x>0 (as you can't know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks. If I wake up afterwards, in the case I laid out, that would mean I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don't have to worry about it. That said, I don't think the question of my subjective expectation of no longer existing is well-defined, because I don't have a subjective experience if I no longer e
0DanielFilan
Another, more concise way of putting my troubles with discontinuity: I think that your utility function over universes should be a computable function, and the computable functions are continuous. Also - what, you have better things to do with your time than read long academic papers about philosophy of physics right now because an internet stranger told you to?!
0DanielFilan
Here's the thing: you obviously think that you dying is a bad thing. You apparently like living. Even if the odds of you dying were 20-80, I imagine you still wouldn't take the bet (in the single-world case) if the reward were only a few dollars, even though you would likely survive. This indicates that you care about possible futures where you don't exist - not in the sense that you care about people in those futures, but that you count those futures in your decision algorithm, and weigh them negatively. By analogy, I think you should care about branches where you die - not in the sense that you care about the welfare of the people in them, but that you should take those branches into account in your decision algorithm, and weigh them negatively. I'm not sure what you can mean by this comment, especially "the whole problem". My arguments against discontinuity still apply even if you only have a superposition of two worlds, one with amplitude sqrt(x) and another with amplitude sqrt(1-x). ... I promise that you aren't going to be able to perform a test on a qubit that you can expect to tell you with 100% certainty that x > 0, even if you have multiple identical qubits. This wasn't my point. My point was that your preferences make huge value distinctions between universes that are almost identical (and in fact arbitrarily close to identical). Even though your value function is technically a function of the physical state of the universe, it's like it may as well not be, because arbitrary amounts of knowledge about the physical state of the universe still can't distinguish between types of universes which you value very different amounts. This intuitively seems irrational and crazy to me in and of itself, but YMMV. I find it highly implausible that this should make a difference for your decision algorithm. Imagine that you could extend your life in all branches by a few seconds in which you are totally blissful. I imagine that this would be a pleasant change, and

These aren't so much "stupid" questions as ones which have no clear answer, and I'm curious what people here have to say about this.

-Why should (or shouldn't) one aspire to be "good" in the sense of prosocial, altruistic etc.?

-Why should (or shouldn't) one attempt to be as honest as possible in their day to day lives?

I have strong altruistic inclinations because that's how I'm predisposed to be, and often because it coincides with my values; other people's suffering upsets me and I would prefer to live in a world in which people are ki... (read more)

[-]knb00

I have a vague notion from reading science fiction stories that black holes may be extremely useful for highly advanced (as in, post-singularity/space-faring) civilizations. For example, IIRC, in John C. Wright's Golden Age series, a colony formed near a black hole became fantastically wealthy.

I did some googling, but all I found was that they would be great at cooling computer systems in space. That seems useful, but I was expecting something more dramatic. Am I missing something?

6alienist
When you're sufficiently advanced, cooling your systems - technically, disposing of entropy - is one of the main limiting constraints on your system. Also, if you throw matter into a black hole just right, you can get its mass-equivalent (or half its equivalent, I forget which) out in energy. Edit: thinking about it, it is half the mass.
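For a sense of scale (hedging the "half" figure: the usual textbook accretion efficiencies are around 6% of mc^2 for a non-rotating black hole and up to around 42% for a maximally rotating one, so "half" is the right ballpark for the latter):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
With accretion efficiency $\eta$ (roughly $0.06$ for a non-rotating
hole, up to roughly $0.42$ for a maximally rotating one), each kilogram
of infalling matter can yield
\[
  E = \eta\, m c^{2}
    \approx 0.42 \times 1\,\mathrm{kg} \times (3 \times 10^{8}\,\mathrm{m/s})^{2}
    \approx 3.8 \times 10^{16}\,\mathrm{J},
\]
compared with $\eta \approx 0.007$ for hydrogen fusion.
\end{document}
```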
0orthonormal
Not in useful energy, if you're thinking of using Hawking radiation; it comes out in very high-entropy form. I was so sad when I realized that the "Hawking reactor" I'd invented in fifth grade would violate the Second Law of Thermodynamics.
4alienist
I wasn't talking about Hawking radiation. If I throw matter into a black hole just right, I can get half the mass to come out in low-entropy photons. That's why the brightest objects in the universe are black holes that are currently eating something.
2orthonormal
Ah, cool! Forgot about how quasars are hypothesized to work.
0JoshuaZ
It is usable if you use small black holes. For lots of purposes you don't need to be able to use all of the energy, since a tiny bit of mass yields so much energy.
0Lumifer
They make awesome garbage disposal units :-)

[Meta]

In the last 'stupid questions' thread, I suggested writing a post called "Non-Snappy Answers to Stupid Questions", which would be a summary post with a list of the most popular stupid questions asked, or stupid questions with popular answers. That is, I'm taking how many upvotes each pair of questions and answers got as an indicator of how many people care about them, or of how many people at least thought the answer to a question was a good one. I'm doing this so there will be a single spot where interesting answers can be fou... (read more)

[-][anonymous]00

Back in 2010, Will Newsome posted this as a joke:

Sure, everything you [said] made sense within your frame of reference, but there are no privileged frames of reference. Indeed, proving that there are privileged frames of reference requires a privileged frame of reference and is thus an impossible philosophical act. I can't prove anything I just said, which proves my point, depending on whether you think it did or not.

But isn't it actually true?

0TheOtherDave
What would I do differently if I believed it was true, or wasn't? What expectations about future events would I have in one case, that I wouldn't have in the other? What beliefs about past events would I have in one case, that I wouldn't have in the other?
0[anonymous]
I understand that this has no decision-making value. I'm only interested in the philosophical meaning of this point.
0TheOtherDave
Hm. Can you say more about what you're trying to convey by "philosophical meaning"? For example, what is the philosophical meaning of your question?
0[anonymous]
That if we are to be completely intellectually honest and rigorous, we must accept complete skepticism.
0TheOtherDave
Hm. OK. Thanks for replying, tapping out here.
0Viliam_Bur
Maybe we could honestly accept that impossible demands of rigor are indeed impossible. And focus on what is possible. You can't convince a rock to agree with you on something. There is still some chance with humans.
0[anonymous]
This appears to be a circular argument. This is why I wrote this:
0IlyaShpitser
It means you should learn to like learning other languages/ways of thinking.

If the Bay Area has such a high concentration of rationalists, shouldn't it have more-rational-than-average housing, transportation and legislation?

Sadly, I know the stupid answers to this stupid questions. I just want to vent a bit.

The Bay Area has a high concentration of rationalists compared to most places, but I don't think it's very high compared to the local population. How many rationalists are we talking about?

Are rationalists more or less likely than non-rationalists to participate in local government?

2Lumifer
It is mostly rational - at generating advantage for people with political pull and power.
0IlyaShpitser
Should start with toothpaste first.

Did organized Objectivist activism, at least in some of its nuttier phases, offer to turn its adherents who get it right into a kind of superhuman entity? I guess you could call such enhanced people "Operating Objectivists," analogous to the enhanced state promised by another cult.

Interestingly enough Rand seems to make a disclaimer about that in her novel Atlas Shrugged. The philosophy professor character Hugh Akston says of his star students, Ragnar Danneskjold, John Galt and Francisco d'Anconia:

"Don't be astonished, Miss Taggart,"

... (read more)
7Viliam_Bur
Seems to me that Rand's model is similar to LessWrong's "rationality as non-self-destruction". Objectivism in the novels doesn't give the heroes any positive powers. It merely helps them avoid some harmful beliefs and behaviors, which are extremely common. Not burdened by these negative beliefs and behaviors, these "normal men" can fully focus on what they are good at, and if they have high intelligence and make the right choices, they can achieve impressive results. (The harmful beliefs and behaviors include: feeling guilty for being good at something, focusing on exploiting other people instead of developing one's own skills.) Hank Rearden's design of a new railroad bridge was completely unrelated to his political beliefs. It was a consequence of his natural talent and hard work, perhaps some luck. The political beliefs only influenced his decision of what to do with the invented technology. I don't remember exactly what his options were, but I think one of them was "archive the technology, to prevent changes in the industry, to preserve existing social order", and as a consequence of his beliefs he refused to consider this option. And even this was before he became a full Objectivist. (The only perfect Objectivist in the novel is Galt; and perhaps the people who later accept Galt's views.) Francisco d'Anconia's fortune, as you wrote, was inherited. That's a random factor, unrelated to Objectivism. John Galt's "magical" motor was also a result of his natural talent and hard work, plus some luck. The political beliefs only influenced his decision to hide the motor from the public, using a private investor and a secret place. Violating the laws of thermodynamics, and surviving the torture without damage... that's fairy-tale stuff. But I think none of these is an in-universe consequence of Objectivism. So, what exactly does Objectivism (or Hank Rearden's beliefs, which are partial Objectivism plus some compartmentalization) cause, in-universe? It makes the heroes f
7fubarobfusco
Not that I'm aware of, but you might also be interested in A. E. Van Vogt's "Null-A" novels, which attempted to do this for a fictionalized version of Korzybski's General Semantics. (Van Vogt later did become involved in Scientology, as did his (and Hubbard's) editor John W. Campbell.)
6alienist
PTSD almost seems like a culture-bound syndrome of the modern West. In particular, there don't seem to be any references to it before WWI, and even there (and in subsequent wars) all the references seem to come from the Western Allies. Furthermore, the reaction to "shell shock", as it was then called, during WWI suggests that this was something new that the established structures didn't know how to deal with.
8NancyLebovitz
Not everyone who's had traumatic experiences has PTSD. More information
5bogus
There are significant confounders here, as modern science-based psychology got started around the same time - and WWI really was very different from earlier conflicts, not least in its sheer scale. But the idea is nonetheless intriguing; the West really is quite different from traditional societies, along lines that could plausibly make folks more vulnerable to traumatic shock.
2NancyLebovitz
For what it's worth, Rand was an unusually capable person in her specialty (she wrote two popular and somewhat politically influential novels in her second language), but still not in the same class as her heroes.

I'm not sure you've got the bit about Rearden right. I don't think there's any evidence that he came up with the final design for the bridge. There's a mention that he worked with a team to discover Rearden Metal, and presumably he also had an engineering team. The point was that he (presumably) knew enough engineering to come up with something plausible, and that his fascination with producing great things was enough to distract him from something major going wrong (I don't remember what).

I have no idea whether Rand knew Galt's engine was physically impossible, though I think she should have, considering that other parts of the book were well-researched. Dagny's situation at Taggart Transcontinental was probably typical for an operations vice-president in a family-owned business. The description of her doing cementless masonry matched a book on the subject. Atlas Shrugged was the only place I saw the possibility of shale oil mentioned until, decades later, it turned out to be a viable technology.
2CBHacking
The research fail that jumped out at me hardest in Atlas Shrugged was the idea that so many people would consider a metal both stronger and lighter than steel physically impossible. By the time the book was published, not only was titanium fairly well understood, it was also being widely used for military and (to the extent it could be spared from Cold War efforts) commercial purposes. Its properties don't exactly match Rearden Metal's (even ignoring the color and other mostly-unimportant characteristics), but they're close enough that it should have been obvious that such materials are completely possible.

Of course, that part of the book also talks about making steel rails last longer by making them denser, which seems completely bizarre to me; there are ways to increase the hardness of steel, but they involve things like heat-treating it.

TL;DR: I'm not sure I'd call the book "well-researched" as a whole, though some parts may well have been.
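For a rough sense of the numbers, here is a back-of-the-envelope specific-strength (yield strength over density) comparison. The figures below are approximate handbook values for a typical titanium alloy (Ti-6Al-4V) and ordinary structural steel; exact values vary by grade, so treat this as an illustrative sketch rather than a precise claim:

```latex
% Specific strength = yield strength / density
% Approximate, grade-dependent values assumed for illustration:
%   Ti-6Al-4V:        sigma_y ~ 880 MPa, density ~ 4430 kg/m^3
%   structural steel: sigma_y ~ 350 MPa, density ~ 7850 kg/m^3
\[
\frac{\sigma_y^{\text{Ti-6Al-4V}}}{\rho_{\text{Ti}}}
  \approx \frac{880\ \text{MPa}}{4430\ \text{kg/m}^3}
  \approx 199\ \text{kN}\cdot\text{m/kg},
\qquad
\frac{\sigma_y^{\text{steel}}}{\rho_{\text{steel}}}
  \approx \frac{350\ \text{MPa}}{7850\ \text{kg/m}^3}
  \approx 45\ \text{kN}\cdot\text{m/kg}.
\]
```

On these assumed figures, a mid-century titanium alloy already offered roughly four times the strength-to-weight of common structural steel, which supports the point above: a metal "stronger and lighter than steel" was demonstrably possible well before 1957.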
1Alsadius
The book exists in a deliberately timeless setting - it has elements drawn from about a century's span. Railroads weren't exactly building massive new lines in 1957, either.
1NancyLebovitz
The three people Akston was talking about didn't include Rearden. They were d'Anconia, Galt, and Danneskjold (the mostly off-stage pirate). I feel as though I've lost not just geek points but Objectivist points, both for forgetting something from the book and for going along with everyone else who got it wrong.

The remarkable thing about Galt and torture isn't that he didn't get PTSD; it's that he completely kept his head and over-awed his torturers. He broke James Taggart's mind - not that Taggart's mind was in such great shape to begin with.
1gattsuru
A number of these matters seem more like narrative or genre conveniences: Francisco acts as a playboy in the same way Bruce Wayne does, and Rearden's bridge development passes a lot of the work to his specialist engineers (just as Rearden Metal had a team of scientists skeptically helping him) while pretending the man is still a one-man designer (among other handwaves). At the same time, Batman is not described as a superhuman engineer or playboy, nor would he act as those types of heroes do.

I'm also not sure we can know the long-term negative repercussions John Galt experiences, given the length of the book; not all people who experience torture display clinically relevant post-traumatic stress symptoms, and many who do show them only sporadically. His engine is based on now-debunked theories of physics that weren't so obviously thermodynamics-violating at the time, similarly to Project Xylophone.

These men are intended to be top-of-field in capability, from the perspective of a post-Soviet writer who knew little about their fields and could easily have researched less. Many of the people who show up under Galt's tutelage are similarly exceptionally skilled, but even more are not so hugely capable. On the other hand, the ability of her protagonists to persuade others and evaluate the risk of getting shot starts at superhuman and quickly becomes ridiculous.

On the gripping hand, I'm a little cautious about treating fictional characters and acknowledged Heroic abilities as evidence, especially when the author wrote a number of non-fiction philosophy texts related to this topic.
0mgin
Not to my knowledge, but they should have! PM me.
-3buybuydandavis
Not quite in the spirit of admitting ignorance, but since it's in this thread, I'll answer it. No.

Of course, despite what Rand or any Objectivist ever said or did, if you choose to view Objectivism as a nutty cult, you can. If you were actually interested in why Rand's characters are the way they are, you could read her book on art, "The Romantic Manifesto". A quick Google search on the book would probably give you your answer.