It seems like we suck at using scales "from one to ten". Video game reviews nearly always give a 7-10 rating. Competitions with scores from judges seem to always give numbers between eight and ten, unless you crash or fall, in which case you get a five or six. If I tell someone my mood is a 5/10, they seem to think I'm having a bad day. That is, we seem to compress things into the top few numbers of the scale. Does anybody know why this happens? Possible explanations that come to mind include:
People are scoring with reference to the high end, where "nothing is wrong", and they do not want to label things as more than two or three points worse than perfect
People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)
I'm succumbing to confirmation bias and this isn't a real pattern
I'm succumbing to confirmation bias and this isn't a real pattern
No, this is definitely a real pattern. YouTube switched from a 5-star rating system to a like/dislike system when they noticed this, and video games are notorious for rating inflation.
Partial explanation: we interpret these scales as going from worst possible to best possible, and
One reason why this is only a partial explanation is that "possible" obviously really means something like "at least semi-plausible" and what's at least semi-plausible depends on context and whim. But, e.g., suppose we take it to mean something like: take past history, discard outliers at both ends, and expand the range slightly. Then I bet what you find is that
Rotten Tomatoes has much broader ratings: the current box office hits range from 7% to 94%. This is because it aggregates binary "positive" and "negative" reviews. As jaime2000 notes, YouTube has switched to a similar rating system and it seems to keep things very sensitive.
People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)
I don't think it's this. Belgium doesn't use letter-grading and still succumbs to the problem you mentioned in areas outside the classroom.
Is there any plausible way the earth could be moved away from the sun and into an orbit which would keep the earth habitable when the sun becomes a red giant?
According to http://arxiv.org/abs/astro-ph/0503520 we would need to be able to boost our current orbital radius to about 7 AU.
This corresponds to a change in specific orbital energy from -μ/(2 · 1 AU) to -μ/(2 · 7 AU), where μ ≈ 1.327 × 10^20 m^3/s^2 is the standard gravitational parameter of the sun. That difference works out to about 3.8 × 10^8 joules per kilogram, or about 2.3 × 10^33 joules once we restore the reduced mass of the earth/sun system (which I'm approximating as just the mass of the earth).
For scale, that's roughly a fifth of the total energy the sun radiates in one year.
Or, if you like, it's equivalent to the total mass-energy of about 2.5 × 10^16 kg of matter (roughly 0.01% of the mass of the asteroid Vesta).
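If you want to sanity-check those numbers, here is a minimal back-of-the-envelope script; it assumes circular orbits and standard textbook values for the constants, and the variable names are just mine:

```python
# Rough check of the energy needed to move Earth from a 1 AU to a 7 AU circular orbit.
MU_SUN = 1.32712440018e20    # standard gravitational parameter of the sun, m^3/s^2
AU = 1.495978707e11          # astronomical unit, m
M_EARTH = 5.972e24           # mass of the earth, kg
L_SUN = 3.828e26             # solar luminosity, W
YEAR = 3.156e7               # seconds per year
C = 2.998e8                  # speed of light, m/s

# Specific orbital energy of a circular orbit of radius r is -mu / (2 r).
delta_eps = MU_SUN / 2 * (1 / AU - 1 / (7 * AU))  # J/kg, ~3.8e8
delta_E = delta_eps * M_EARTH                     # J,    ~2.3e33

print(f"specific energy change:  {delta_eps:.2e} J/kg")
print(f"total energy:            {delta_E:.2e} J")
print(f"years of solar output:   {delta_E / (L_SUN * YEAR):.2f}")  # ~0.19
print(f"mass-energy equivalent:  {delta_E / C**2:.2e} kg")         # ~2.5e16
```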
So until we're able to harness and control energy comparable to a sizable fraction of the sun's yearly output, we won't be able to do this.
There might be an exceedingly clever way to do this by playing with orbits of nearby asteroids to perturb the orbit of the earth over long timescales, but the change in energy we're talking about here is pretty huge.
I think you have something there. You could design a complex, but at least metastable orbit for an asteroid sized object that, in each period, would fly by both Earth and, say, Jupiter. Because it is metastable, only very small course corrections would be necessary to keep it going, and it could be arranged such that at every pass Earth gets pushed out just a little bit, and Jupiter pulled in. With the right sized asteroid, it seems feasible that this process could yield the desired results after billions of years.
I recall estimating the power required to run an equatorial superconducting ring, a few meters thick and buried a kilometer or so under the Martian surface, carrying enough current to simulate an Earth-like magnetic field. If I recall correctly, it would take about the current level of power generation on Earth, sustained over a century or so, to ramp it up to the desired level. Then whatever is required to maintain it (mostly cooling the ring), which is very little. Of course, an accident interrupting the current flow would be an epic disaster.
Would it be possible to slow down or stop the rise of sea level (due to global warming) by pumping water out of the oceans and onto the continents?
We could really use a new Aral Sea, but intuitively I'd have expected this to make only a tiny dent in the depth of the oceans. So, to the maths:
Wikipedia claims that from 1960 to 1998 the volume of the Aral Sea dropped by 80% from its 1960 volume of 1,100 km^3.
I'm going to add another 5% for losses since then, as the South Aral Sea has now lost its eastern half entirely.
This gives ~1100 * 0.85 = 935 km^3 of water that we're looking to replace.
The Earth is ~500m km^2 in surface area, approx. 70% of which is water = 350m km^2 in water.
935 km^3 over an area of 350m km^2 comes to a depth of 2.6 mm.
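Here's the same arithmetic as a quick script you can tweak; the volumes are the rough figures above, and I'm using a slightly more precise ocean area than the rounded 350m km^2:

```python
# Back-of-the-envelope: how much does refilling the Aral Sea lower global sea level?
aral_1960_volume_km3 = 1100        # Wikipedia's figure for the 1960 volume
fraction_lost = 0.85               # ~80% lost by 1998, plus ~5% since
water_needed_km3 = aral_1960_volume_km3 * fraction_lost   # ~935 km^3

ocean_area_km2 = 361e6             # ~361 million km^2 (rounded to 350m km^2 above)

drop_mm = water_needed_km3 / ocean_area_km2 * 1e6         # km -> mm
print(f"sea level drop: {drop_mm:.1f} mm")                # ~2.6 mm
```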
This is massively larger than I would have predicted, and it gets better. The current salinity of the Aral Sea is 100 g/l, which is way higher than that of seawater at 35 g/l, so we could pretty much pump seawater straight in and still come out with a net environmental gain. In fact this solution to the crisis has been proposed before, although it looks like most people would rather dilute the seawater first.
To achieve the desired result of a one-inch drop in sea level, we only need to find 9 equivalent projects around the world. Sadly, the only other one I know of is Lake Chad, which is significantly smaller than the Aral Sea. However, since the loss of the Aral Sea is due to over-intensive use of the water for farming, this gives us an idea of how much water can be held on land in plants: I would expect that we might be able to get this amount again if we undertook a desalination/irrigation program in the Sahara.
I recommend googling "geoengineering global warming" and reading some of the top hits. There are numerous proposals for reducing or reversing global warming which are astoundingly less expensive than reducing carbon dioxide emissions, and also much more likely to be effective.
To your direct question about storing more water on land, this would be a geoengineering project. Some straightforward approaches to doing it:
Use rainfall as your "pump", to save having to build massive, energy-hungry water pumps. Without any effort on our part, nature naturally lifts water a km or more above sea level and then drops it, much of it onto land. That water is generally funneled back to the ocean in rivers. With just the construction of walls, some rivers could be prevented from draining into the ocean. Large areas would be flooded by the river, storing water on land rather than in the ocean.
Use gravity as your pump. There are many large areas on earth that are below sea level. Channels could be built to gravity-feed ocean water into these areas with no net pumping energy. These areas can be hundreds of meters below sea level, ...
Can anyone link a deep discussion, including energy and time requirements, issues with spaceship shielding from radiation and collisions, etc., that would be involved in interstellar travel? I ask because I am wondering whether this is substantially more difficult than we often imagine, and perhaps a bottleneck in the Drake Equation
tl;dr: It is definitely more difficult than most people think, because most people's thoughts (even scientifically educated ones) are heavily influenced by sci-fi, which is almost invariably premised on having easy interstellar transport. Even authors like Clarke who assume difficult interstellar transport keep the obvious problems (e.g., lightspeed) but make the non-obvious problems (e.g., what happens when something breaks when you're two light-years from the nearest macroscopic object) disappear.
Is there a causal link between being relatively lonely and isolated during school years and (higher chance of) ending up a more intelligent, less shallow, more successful adult?
Imagine that you have a pre-school child who has socialization problems, finds it difficult to do anything in a group of other kids, to acquire friends, etc., but cognitively the kid's fine. If nothing changes, the kid is looking at being shunned or mocked as weird throughout school. You work hard on overcoming the social issues, maybe you go with the kid to a therapist, you arrange play-dates, you play-act social scenarios with them..
Then your friend comes up to have a heart-to-heart talk with you. Look, your friend says. You were a nerd at school. I was a nerd at school. We each had one or two friends at best and never hung out with popular kids. We were never part of any crowd. Instead we read books under our desks during lessons and read SF novels during the breaks and read science encyclopedias during dinner at home, and started programming at 10, and and and. Now you're working so hard to give your kid a full social life. You barely had any, are you sure now you'd rather you had it otherwise? Let me be...
Seems to me that very high intelligence can cause problems with socialization: you are different from your peers, so it is more difficult for you to model them, and for them to model you. You see each other as "weird". (Similar problem for very low intelligence.) Intelligence causes loneliness, not the other way round.
But this depends on the environment. If you are a highly intelligent person surrounded by enough highly intelligent people, then you do have the company of intellectual peers, and you will not feel alone.
I am not sure about the relation between reading many books and being "less shallow". Do intelligent kids surrounded by intelligent kids also read a lot?
My friend isn't obviously-to-me wrong, but their argument is unconvincing to me.
It's normal for a smart kid to be kind of lonely - if true, that's sad, and by default we should try to fix it.
It builds substance - citation needed. It seems like it could just as easily build insecurity, resentment, etc.
Lousy social life - this is a failure mode. It might not be the worst one, but it seems like the most likely one, so deserving of attention.
Ditzy adolescent - how likely is this?
FWIW, I'm an adult who was kind of lonely as a kid, and on the margin I think that having a more active social life then would have had positive effects on me now.
Are there any good trust, value, or reputation metrics in the open source space? I've recently established a small internal-use Discourse forum and been rather appalled by the limitations of what is intended to be a next-generation system (status flag, number of posts, tagging), and from a quick overview most competitors don't seem to be much stronger. Even fairly specialist fora only seem marginally more capable.
This is obviously a really hard problem and conflux of many other hard problems, but it seems odd that there are so many obvious improvements available.
((Inspired somewhat by my frustration with Karma, but I'm honestly more interested in its relevance for outside situations.))
Tangentially, is it possible for a good reputation metric to survive attacks in real life?
Imagine that you become e.g. a famous computer programmer. But although you are a celebrity among free software people, you fail to convert this fame into money. So you must keep a day job at a computer company which produces shitty software.
One day your boss will realize that you have high prestige in the given metric, and the company has low prestige. So the boss will ask you to "recommend" the company on your social network page (which would increase the company's prestige and hopefully its profit; it might decrease your prestige as a side effect). Maybe this would be illegal, but let's suppose it isn't, or that you are not in a position to refuse. Or you could imagine a more dramatic situation: you are a widely respected political or economic expert, it is 12 hours before an election, and a political party has kidnapped your family and threatens to kill them unless you "recommend" this party, which according to their model would help them win the election.
In other words, even a digital system that works well could be vulnerable to attacks from outside of the system, where ...
Can anybody give me a good description of the term "metaphysical" or "metaphysics" in a way that is likely to stick in my head and be applicable to future contemplations and conversations? I have tried to read a few definitions and descriptions, but I've never been able to really grok any of them and even when I thought I had a working definition it slipped out of my head when I tried to use it later. Right now its default function in my brain is, when uttered, to raise a flag that signifies "I can't tell if this person is speaking...
Metaphysics: what's out there? Epistemology: how do I learn about it? Ethics: what should I do with it?
Basically, think of any questions that are of the form "what's there in the world", "what is the world made of", and now take away actual science. What's left is metaphysics. "Is the world real or a figment of my imagination?", "is there such a thing as a soul?", "is there such a thing as the color blue, as opposed to objects that are blue or not blue?", "is there life after death?", "are there higher beings?", "can infinity exist?", etc. etc.
Note that "metaphysical" also tends to be used as a feel-good word, meaning something like "nobly philosophical, concerned with questions of a higher nature than the everyday and the mundane".
"Ontology" is firmly dedicated to "exist or doesn't exist". Metaphysics is more broadly "what's the world like?" and includes ontology as a central subfield.
Whether there is free will is a metaphysical question, but not, I think, an ontological one (at least not necessarily). "Free will" is not a thing or a category or a property, it's a claim that in some broad aspects the world is like this and not like that.
Whether such things as desires or intentions exist or are made-up fictions is an ontological question.
This is in no way an answer to your actual question (Anatoly's is good) but it might amuse you.
"Meta" in Greek means something like "after" (but also "beside", "among", and various other things). So there is a
Common misapprehension: metaphysics is so called because it goes beyond physics -- it's more abstract, more subtle, more elevated, more fundamental, etc.
This turns out not to be quite where the word comes from, so there is a
Common response: actually, it's all because Aristotle wrote a book called "Physics" and another, for which he left no title, that was commonly shelved after the "Physics" -- meta ta phusika -- and was commonly called the "Metaphysics". And the topics treated in that book came to be called by that name. So the "meta" in the name really has nothing at all to do with the relationship between the subjects.
But actually it's a bit more complicated than that; here's the
Truth (so far as I understand it): indeed Aristotle wrote those books, and indeed the "Metaphysics" is concerned with, well, metaphysics, and indeed the "Metaphysics" is called that because it ...
Ok, I have one meta-level super-stupid question. Would it be possible to improve some aspects of the LessWrong webpage? Like making it more readable for mobile devices? Every time I read LW in the tram while going to work, I go insane trying to hit super-small links on the website. As I work in web development/UI design, I would volunteer to work on this. I think in general that the LW website is a bit outdated in terms of both design and functionality, but I presume that this is not considered a priority. However, better readability on mobile screens would be a positive contribution to its purpose.
True, false, or neither?: It is currently an open/controversial/speculative question in physics whether time is discretized.
The Wikipedia article on Planck time says:
Theoretically, this is the smallest time measurement that will ever be possible, roughly 10^−43 seconds. Within the framework of the laws of physics as we understand them today, for times less than one Planck time apart, we can neither measure nor detect any change.
However, the article on Chronon says:
The Planck time is a theoretical lower-bound on the length of time that could exist between two connected events, but it is not a quantization of time itself since there is no requirement that the time between two events be separated by a discrete number of Planck times.
Maneki Neko is a short story about an AI that manages a kind of gift economy. It's an enjoyable read.
I've been curious about this 'class' of systems for a while now, but I don't think I know enough about economics to ask the questions well. For example- the story supplies a superintelligence to function as a competent central manager, but could such a gift network theoretically exist without being centrally managed (and without trivially reducing to modern forms of currency exchange)? Could a variant of Watson be used to automate the distribution of capi...
My intuition is that every good allocation system will use prices somewhere, whether the users see them or not. The main perk of the story's economy is getting things you need without having to explicitly decide to buy them (i.e., the down-on-his-luck guy unexpectedly being gifted his favorite coffee), and that could be implemented through individual AI agents rather than a central AI.
Fleshing out how this might play out, if I'm feeling sick, my AI agent notices and broadcasts a bid for hot soup. The agents of people nearby respond with offers. The lowest offer might come from someone already in a soup shop who lives next door to me since they'll hardly have to go out of their way. Their agent would notify them to buy something extra and deliver it to me. Once the task is fulfilled, my agent would send the agreed-upon payment. As long as the agents are well-calibrated to our needs and costs, it'd feel like a great gift even if there are auctions and payments behind the scenes.
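To make the shape of that concrete, here's a toy sketch of the kind of agent-to-agent auction I have in mind; every name and number in it is invented for illustration, not a real system or API:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    cost: float  # the provider's estimated cost/inconvenience of fulfilling the request

def pick_offer(offers):
    """Accept whichever offer is cheapest for its provider to fulfil."""
    return min(offers, key=lambda o: o.cost) if offers else None

# My agent broadcasts a bid for hot soup; nearby agents answer with offers.
offers = [
    Offer("person already in the soup shop next door", 1.0),
    Offer("friend across town", 6.0),
]
winner = pick_offer(offers)
print(f"Ask {winner.provider}; send the agreed payment of {winner.cost} once it's delivered.")
```

The point of the sketch is just that the matching and payment can happen entirely between agents, so from the recipient's side it still feels like an unprompted gift.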
For pointers, general equilibrium theory studies how to allocate all the goods in an economy. Depending on how you squint at the model, it could be studying centralized or decentralized markets based on money or pure exchange. A Toolbox for Economic Design is a fairly accessible textbook on mechanism design that covers lots of allocation topics.
Is it possible even in principle to perform a "consciousness transfer" from one human body to another? On the same principle as mind uploading, only the mind ends up in another biological body rather than a computer. Can you transfer "software" from one brain to another in a purely informational way, while preserving the anatomical integrity of the second organism? If so, would the recipient organism come from a fully alive and functional human who would be basically killed for this purpose? Or bred for this purpose? Or would it require...
In dietary and health articles they often speak about "processed food". What exactly is processed food and what is unprocessed food?
Definitions will vary depending on the purity obsession of the speaker :-) but as a rough guide, most things in cans, jars, boxes, bottles, and cartons will be processed. Things that are, more or less, just raw plants and animals (or parts of them) will be unprocessed.
There are boundary cases about which people argue -- e.g. is pasteurized milk a processed food? -- but for most things in a food store it's pretty clear what's what.
I have a constant impression that everyone around me is more competent than me at everything. Does it actually mean that I am, or is there some sort of strong psychological effect that can create that impression, even if it is not actually true? If there is, is it a problem you should see your therapist about?
Reminds me of something Scott said once:
And when I tried to analyze my certainty that – even despite the whole multiple intelligences thing – I couldn’t possibly be as good as them, it boiled down to something like this: they were talented at hard things, but I was only talented at easy things.
It took me about ten years to figure out the flaw in this argument, by the way.
See also: The Illusion of Winning by Scott Adams (h/t Kaj_Sotala)
Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.
But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.
I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.
It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes and simply compared. It feels redundant to play the actual games.
Despite external evidence of their competence, those with the syndrome remain convinced that they are frauds and do not deserve the success they have achieved. Proof of success is dismissed as luck, timing, or as a result of deceiving others into thinking they are more intelligent and competent than they believe themselves to be.
Psychological research done in the early 1980s estimated that two out of five successful people consider themselves frauds and other studies have found that 70 percent of all people feel like impostors at one time or another. It is not considered a psychological disorder, and is not among the conditions described in the Diagnostic and Statistical Manual of Mental Disorders.
Is it a LessWrongian faux pas to comment only to agree with someone? Here's the context:
That's the kind of person that goes on to join LW and tell you. There are also people who read a sequence post or two because they followed a link from somewhere, weren't shocked at all, maybe learned something, and left. In fact I'd expect they're the vast majority.
I was going to say that I agree and that I had not considered my observation as an effect of survivorship bias.
I guess I thought it might be useful to explicitly relate what he said to a bias. Maybe that...
What prerequisite knowledge is necessary to read and understand Nick Bostrom's Superintelligence?
Mostly just out of curiosity:
What happens karma-wise when you submit a post to Discussion, it gets some up/downvotes, you resubmit it to Main, and it gets up/downvotes there? Does the post's score transfer, or does it start from 0?
How do I improve my ability to simulate/guess other people's internal states and future behaviors? I can, just barely, read emotions, but I make the average human look like a telepath.
Is "how do I get better at sex?" a solved problem?
Is it just a matter of getting a partner who will give you feedback, and practicing?
Here I be, looking at a decade-old Kurzweil book, and I want to know whether the trends he's graphing hold up in later years. I have no inkling of where on earth one GETs these kinds of factoids, except by some mystical voodoo powers of Research bestowed by Higher Education. It's not just guesstimation... probably.
Bits per Second per Dollar for wireless devices? Smallest DRAM Half Pitches? Rates of adoption for pre-industrial inventions? From whence do all these numbers come and how does one get more recent collections of numbers?
A question about Lob's theorem: assume not provable(X). Then, by the rules of if-then statements, "if provable(X) then X" is provable. But then, by Lob's theorem, provable(X), which is a contradiction. What am I missing here?
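To make the argument easier to poke at, here it is in the usual provability-logic notation (writing □X for provable(X)); the numbering and the "(claimed)" label are mine:

```latex
\begin{align*}
&\text{1. Assume } \lnot\Box X.\\
&\text{2. The antecedent of } \Box X \to X \text{ is false, so the conditional holds, and (claimed) } \Box(\Box X \to X).\\
&\text{3. By L\"ob's theorem, } \Box(\Box X \to X) \to \Box X \text{, so } \Box X.\\
&\text{4. This contradicts 1.}
\end{align*}
```

Laid out this way, the "(claimed)" in step 2 is the part to scrutinize: a false antecedent makes □X → X true, but truth and provability of □X → X are different things, and Löb's theorem needs the latter.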
Looking for some people to refute this hare-brained idea I recently came up with.
The time period from the advent of the industrial revolution to the so-called digital revolution was about 150 - 200 years. Even though computers were being used around WWII, widespread computer use didn't start to shake things up until 1990 or so. I would imagine that AI would constitute a similar fundamental shift in how we live our lives. So would it be a reasonable extrapolation to think that widespread AI would be about 150 - 200 years after the beginning of the information age?
If you are doing reference class forecasting, you need at least a few members in your reference class and a few outside of it, together with the reasons why some are in and others out. If you are generalizing from one example, then, well...
Assuming for a moment that Everett's interpretation is correct, there will eventually be a way to very confidently deduce this (and that time, identity and consciousness work pretty much as described by Drescher, IIRC - there is no continuation of consciousness, just memories, and nothing meaningful separates your identity from your copies):
Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise...
These aren't so much "stupid" questions as ones which have no clear answer, and I'm curious what people here have to say about them.
-Why should (or shouldn't) one aspire to be "good" in the sense of prosocial, altruistic etc.?
-Why should (or shouldn't) one attempt to be as honest as possible in their day to day lives?
I have strong altruistic inclinations because that's how I'm predisposed to be, and often because it coincides with my values; other people's suffering upsets me and I would prefer to live in a world in which people are ki...
I have a vague notion from reading science fiction stories that black holes may be extremely useful for highly advanced (as in, post-singularity/space-faring) civilizations. For example, IIRC, in John C. Wright's Golden Age series, a colony formed near a black hole became fantastically wealthy.
I did some googling, but all I found was that they would be great at cooling computer systems in space. That seems useful, but I was expecting something more dramatic. Am I missing something?
[Meta]
In the last 'stupid questions' thread, I posed the suggestion that I write a post called "Non-Snappy Answers to Stupid Questions", which would be a summary post with a list of the most popular stupid questions asked, or stupid questions with popular answers. That is, I'm taking how many upvotes each pair of questions and answers got as an indicator of how many people care about them, or how many people at least thought the answer to a question was a good one. I'm doing this so there will be a single spot where interesting answers can be fou...
Back in 2010, Will Newsome posted this as a joke:
Sure, everything you [said] made sense within your frame of reference, but there are no privileged frames of reference. Indeed, proving that there are privileged frames of reference requires a privileged frame of reference and is thus an impossible philosophical act. I can't prove anything I just said, which proves my point, depending on whether you think it did or not.
But isn't it actually true?
If the Bay Area has such a high concentration of rationalists, shouldn't it have more-rational-than-average housing, transportation and legislation?
Sadly, I know the stupid answers to this stupid question. I just want to vent a bit.
The Bay Area has a high concentration of rationalists compared to most places, but I don't think it's very high compared to the local population. How many rationalists are we talking about?
Are rationalists more or less likely than non-rationalists to participate in local government?
Did organized Objectivist activism, at least in some of its nuttier phases, offer to turn its adherents who get it right into a kind of superhuman entity? I guess you could call such enhanced people "Operating Objectivists," analogous to the enhanced state promised by another cult.
Interestingly enough Rand seems to make a disclaimer about that in her novel Atlas Shrugged. The philosophy professor character Hugh Akston says of his star students, Ragnar Danneskjold, John Galt and Francisco d'Anconia:
..."Don't be astonished, Miss Taggart,"
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.