What are your contrarian views?
As per a recent comment, this thread is meant to voice contrarian opinions, that is, anything this community tends not to agree with. Thus I ask you to post your contrarian views and upvote anything you do not agree with based on personal beliefs. Spam and trolling still need to be downvoted.
Comments (806)
I think raising the sanity waterline is the most important thing we can do, and we do too little of it because our discussions tend to happen amongst ourselves, i.e. with people who are far from that waterline.
Any attempt to educate people, including the attempt to educate them about rationality, should focus on teens, or where possible on children, in order to create maximum impact. HPMOR does that to some degree, but Less Wrong usually presupposes cognitive skills that the very people who'd benefit most from rationality do not possess. It is very much in-group discussion. If "refining the art of human rationality" is our goal, we should be doing a lot more outreach and a lot more production of very accessible rationality materials. Simplified versions of the sequences, with more pictures and more happiness. CC licensed leaflets and posters. Classroom materials. Videos (compare the SciShow video on Bayes' Theorem), because that's how many curious young minds get their extracurricular knowledge these days.
In fact, if we crowdfunded somebody with education materials production experience to do that (or better yet, crowdfund two or three and let them compete for the next round), I'd contribute significantly.
I think videos are the wrong medium. Videos have the problem of making people think they understand something when they don't. People learn all the right buzzwords, but that doesn't mean they actually become more rational.
Kaj Sotala, for example, is designing a game for his master's thesis that's intended to teach Bayes' theorem. I think such a game would be much more valuable than a video that explains Bayes' theorem.
We have PredictionBook and the Credence game as tools to teach people to be more rational. They aren't yet at a quality level where the average person will use them. Focusing more energy on refining those tools and making them work better is more valuable than producing videos.
CFAR also develops teaching materials. A core feature of CFAR is that it actually focuses on producing quality instead of just producing videos and hoping those videos will have an impact. I know there is someone in Germany who teaches a high school class based on CFAR-inspired material.
Is this supposed to be a contrarian view on LW? If it is, I am going to cry.
Unless we reach a lot of young people, we risk that in 30-40 years the "rationalist movement" will be mostly a group of old people spending most of their time complaining about how things were better when they were young. And the change will come so gradually we may not even notice it.
There are some I hold:
These are 10 different propositions. Fortunately I disagree with most of them so can upvote the whole bag with a clear conscience, but it would be better for this if you separated them out.
I agree with this meta-comment. Should I downvote it?
See my earlier comment on this.
Care to explain this one?
Yes.
A big pie, rotating in the sky, should have an apparently shorter circumference than a non-rotating one, even though both have the same radius.
I can't swallow this. Not because it is weird, but because it is inconsistent.
Why is it inconsistent?
I have two photos of two different pies, one of the rotating one and one of the non-rotating one. The photos are indistinguishable; I can't tell which is which.
On the other hand, both pies are in one-to-one correspondence with their photos, and one should be slightly deformed at the edge.
Even if the pie is, the photo can't be. The photo is perfectly Euclidean. I have measured no Lorentz contraction.
Place red and white equal-length rulers on the edge of the cylinder. The rotating cylinder will have more, and shorter, rulers. Thus the photos are not the same. Even better, have the cylinder slowly pulse in different colors. The edges will pulse more slowly, thus not being in sync with the center.
A related phenomenon is that moving ladders fit into garages that stationary ones would not.
Saying that a moving ladder "fits" means that the start of the ladder is in the garage at the same time that the end of the ladder is. If the ladder is moving and contracted because of relativity, these two events are not simultaneous in all reference frames. Thus, you cannot definitely say that the moving ladder fits--whether it fits depends on your reference frame. (In another reference frame you would see the ladder longer than the garage, but you would also see the start of the ladder pass out of the garage before the end of the ladder passes into it.)
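The frame-dependence of "fitting" is easy to check numerically. Here is a minimal sketch (the 10 m garage and 12 m ladder are assumed numbers for illustration, not from the thread): pick a speed at which the ladder contracts to exactly the garage length, then Lorentz-transform the two "fit" events into the ladder's rest frame.

```python
import math

c = 299_792_458.0          # speed of light, m/s
L_garage = 10.0            # garage length in its own rest frame, m (assumed)
L_ladder = 12.0            # ladder rest length, m (assumed)

# Choose v so the moving ladder contracts to exactly the garage length:
gamma = L_ladder / L_garage            # required Lorentz factor, 1.2
v = c * math.sqrt(1 - 1 / gamma**2)    # about 0.553 c

def to_ladder_frame(t, x):
    """Lorentz-transform an event (t, x) from the garage frame
    into the ladder's rest frame."""
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

# Garage frame: at t = 0 the ladder exactly fits.
# Event A: ladder's rear end at the garage entrance (x = 0, t = 0)
# Event B: ladder's front end at the garage exit (x = L_garage, t = 0)
tA, _ = to_ladder_frame(0.0, 0.0)
tB, _ = to_ladder_frame(0.0, L_garage)

print(tA, tB)   # tB < tA: in the ladder frame the front exits before the rear enters
```

In the garage frame the two events are simultaneous (the ladder fits); in the ladder frame the exit event happens first, so the ladder is never entirely inside, which is exactly the point about "fit" not being frame-independent.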
Why have that definition of "fit"? I could equally well say that fitting means there is a reference frame in which, at some moment, the ladder is completely inside.
If you had the garage loop back so that its end were glued to its start, you could still spin the ladder inside it. From the point of view of the ladder, it would appear to need to pass through the garage multiple times to fit one ladder length, but from the outside it would appear as if the ladder fits within one loop completely. From either perspective, the one garage is space enough to contain the ladder without collisions. In this way it most definitely fits. Usually garages are thought to be space-limited but not time-limited. Thus eating into the time dimension is a perfectly valid way of staying within the spatial limits.
edit: actually there is a good reason to privilege the rest frame of the garage as the one that counts as regards fitting, since then all of the fitting happens within its space and time.
In that case, the ladder fits.
Each rung of the ladder has a distinct reference frame. "From the point of the ladder" is meaningless.
Will they multiply as the orbital speed increases? Say that Arabic numerals are written on the rulers, and that there are 77 of them at the beginning. Will this system know when to engage number 78?
Or will there be two number 57s at first? Or how is it going to work?
I was thinking of an already-spinning cylinder, with the sticks then added by accelerating them into place.
If you had the same sticks already in place, the sticks would feel a stretch. If they resist this stretch they will pull apart, so there will be bigger gaps between them. Separate measuring sticks have no tensile strength in the gaps between them. However, if you had a measuring rope with continuous tensile strength, whose start point is fixed but from whose end point new rope could be freely pulled, you would see the numbers increase (much like waist measurements when getting fatter). However, the purported cylinder has maximum tensile strength everywhere, continuously. Thus that strength would actually work against the rotating force, making it resist rotation. A non-rigid body will rupture and start to look like a star.
So no, there would not be duplicate sticks, but yes, the rope would know to engage number 78.
If you filled up a rotating cylinder with sticks and spun it down, the sticks would press against each other, crushing to a smaller length. A measuring rope with a small pull to take in loose rope would reel in. A non-rigid body slowing down would spit out material in bursts that might come to resemble volcanoes.
If the rotating pie is one that, when non-rotating, had the same radius as the other, then when it rotates it has a slightly larger radius (and circumference) because of centrifugal forces. This effect completely dominates any relativistic one.
The centrifugal force can be arbitrarily small. Say we have only the outer rim of the pie, but as large as a galaxy. The centrifugal force at half the speed of light is negligible, far less than all the everyday centrifugal forces we deal with.
Now say that the rim has near-zero velocity at first, and we are watching it from the centre. Gradually, say over a million years, it accelerates to a relativistic speed. The forces involved are a millionth of a newton per kilogram of mass. No big deal.
The problem is only this - where's the Lorentz contraction?
As long as we have only one spaceship orbiting the galaxy, we can imagine this Lorentzian shrinking. In the case where there are so many that they stretch all the way around, we can't.
If you have a large number of spaceships, each will notice the spaceship in front of it getting closer, and the circle of spaceships forming into an ellipse.
At least, that's assuming the spaceships have some kind of tachyon sensor to see where all the other ships are from the reference frame of the ship looking, or something like that. If they're using light to tell where all of the other ships are, then there are a few optical effects that will appear.
The question is what the stationary observer at the centre sees when the galactic carousel goes around him. Even at quite a moderate speed, the observer has precise instruments to measure the Lorentzian contraction, if there is any.
At first, there is none, because the carousel isn't moving. But slowly, over many millions of years, it accelerates to, say, 0.1 c. What does the central observer see then? Contraction or no contraction?
He will see each spaceship contract. The distance between the centers of the spaceships will remain the same.
In other news, the earth is really flat because photographs of the earth are flat.
Just to clarify, is the spinning pie a set of particles in the same relative position as with a still pie, but rotating around the origin? Is it a set of masses connected by springs that has reached equilibrium (none of the springs are stretching or compressing) and the whole system is spinning? Is the pie a solid body?
What exactly we're looking at depends on which of the first two you picked. If you picked the third, it is contradictory with special relativity, but there's a lot more evidence for special relativity than there is for the existence of a solid body. Granted, a sufficiently rigid body will still be inconsistent with special relativity, but all that means is that there's a maximum possible rigidity. Large objects are held together by photons, so we wouldn't expect sound to travel through them faster than light.
The spinning set of particles is a torus with, let's say, a large radius R of 1 million light years and a small radius r of just 1 centimetre. It is painted in alternating red and white one-metre segments.
The whole assembly starts to rotate slowly on a signal from the centre, and very slowly accelerates to reach a speed of 0.1 c over several million years.
Now, do we see any Lorentzian contraction, as SR suggests, or none, as GR might imply?
(Small rockets powered by radioactive decay are more than enough to compensate for the acceleration and the centrifugal force, both incredibly small. This is the reason we have chosen such a big scale.)
I'm going to assume the mass is small enough that GR doesn't come into play.
From the point of view of a particle on the torus, the band it's in will stretch to about 1.005 meters long. Due to Lorentz contraction, from the point of reference of someone in the center it will appear one meter long.
The question is ONLY for the central observer. At first he sees 1 m long stripes, but when the whole thing reaches the speed of 0.1 c, how long is each stripe?
One meter.
I just want to clarify. I'm assuming the particles are not connected, or are elastic enough that stretching them by a factor of 1.005 isn't a huge deal. If you tried that with solid glass, it would probably shatter.
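The 1.005 figure quoted above is just the Lorentz factor at 0.1 c, which is easy to verify (a minimal sketch):

```python
import math

beta = 0.1                         # rim speed as a fraction of c
gamma = 1 / math.sqrt(1 - beta**2) # Lorentz factor at 0.1 c

# Each co-rotating stripe must have a rest length of gamma metres so that
# its contracted length, as measured from the centre, is exactly 1 m.
rest_length = gamma * 1.0
print(round(rest_length, 5))   # ≈ 1.00504
```

So the central observer keeps seeing 1 m stripes, while each stripe, in its own momentary rest frame, has been stretched by about half a percent.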
Come to think of it, this looks like a more complicated form of Bell's spaceship paradox.
Special Relativity + some basic mechanics leads to an apparent contradiction in the expected measurements, which is only resolved by introducing a curved space(time). So this would be a failure of self-consistency: the same theory leads to two different results for the same experiment.
However, the two measurements of ostensibly the same thing are done by different observers, so there is no requirement that they should agree. Introducing curved space for the rotating disk shows how to calculate distances consistently.
The problem is that it's inconsistent with solid-body physics?
Solid-body physics is an approximation. This isn't hard to show. Just bend something.
Consider the model of masses connected by springs. This is consistent with special relativity, and can be used to model solid-body physics. In fact, it's a more accurate model of reality than solid-body physics.
No, that's not the issue. The problem is that no flat-space configuration works.
There's a reason it's called special relativity: it only works in special cases. Euclidean geometry and Newtonian mechanics are inconsistent, btw. Special relativity solves these inconsistencies in the special contexts where they originally came up (predicting the Lorentz contraction and time dilation which are experimentally observed). It wasn't until the curved space of general relativity was discovered that we had a fully consistent model.
And yes, the curved space of general relativity fully explains the rotating disc in a way that is self-consistent and in agreement with observed results (as proven by Gravity Probe B, among other things).
Is any Lorentz contraction visible in the case of the galaxy rim?
Are all the Lorentzian contractions just cancelled out?
I'd really like to know that.
Special relativity is consistent. It just isn't completely accurate.
It's inconsistent with solid-body physics, but that's due to the oversimplifications inherent in solid-body physics, not the ones inherent in special relativity.
Trying to fit solid-body physics into general relativity is even worse. With special relativity, it works fine as long as it doesn't rotate or accelerate. Under general relativity, it can only exist on flat space-time, which basically means that nothing in the universe can have any mass whatsoever, including the object in question.
Twin paradox.
What about the twin paradox?
You don't need GR for a rotating disk; you only need GR when there is gravity.
Rotation drags spacetime.
Only if the rotating object is sufficiently massive.
Only if the rotating object has any mass at all.
You need GR if you want to talk about the rotating reference frame of the disk. Otherwise SR is fine.
There is no inconsistency. In one case you are measuring the circumference with moving rulers, while in the other case you are measuring the circumference with stationary rulers. It's not inconsistent for these two different measurements to give different results.
No. I am measuring from here, from the centre with some measuring device.
First I measure the stationary pie, then I measure the rotating one. Those red-white stripes are either constant or they shrink. If they are shrinking, they should multiply as well. If they are not shrinking, what happened to Mr. Lorentz's contraction?
If you measure a wheel with a ruler, and the wheel is moving relative to the ruler, then your measurement assumes that both ends of a piece of the wheel line up with both ends of a piece of the ruler at the same time. Whether these events happen at the same time, and therefore whether this is a measurement of the wheel, are different depending on the frame of reference.
[Please read the OP before voting. Special voting rules apply.]
The truth of a statement depends on the context in which the statement is made.
How is that a contrarian statement? Obviously natural language is heavily context-dependent. So what exactly do you mean when you say that?
I'm not talking just about natural language. I think it's true for any statement.
If you look at http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/ there's no mention of context and how a statement can be true in context A but false in context B.
I think this is uncontroversial if taken as referring to the following two things:
and controversial but not startlingly so if taken as referring to the following:
Are you intending to state something more than those?
There are some people who believe that there is something called objective reality and that you can check whether a statement is true in objective reality.
I say that a statement might be true in context A but false in context B.
I don't think you answered my question. (Perhaps because you think it's meaningless or embodies false presuppositions or something.)
Aside from the facts that (1) the same utterance can mean different things in different contexts, (2) indexical terms can refer differently in different contexts, and (3) different values and preferences may be implicit in different contexts, do you think there are further instances in which the same statement may have different truth values in different contexts?
(I think the boundary between #1 and "real" differences in truth value is rather fuzzy, which I concede might make my question unanswerable.)
Some concrete examples may be useful. The following seem like examples where one can avoid 1,2,3. Are they ones where you think the truth value might be context-dependent, and if so could you briefly explain what sort of context differences would change the truth value?
The fact that you claim to get 7 digits of accuracy by multiplying two 4-digit numbers is very questionable. Going by my physics textbook, 1234 times 4321 = 5332000 would be the preferred answer, and 1234 times 4321 = 5332114 would be wrong, as the number falsely gained 3 additional digits of accuracy.
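The significant-figures convention being invoked here can be sketched in a few lines (the helper `round_sig` is an illustrative function, not from any textbook):

```python
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0
    digits = sig - int(math.floor(math.log10(abs(x)))) - 1
    return round(x, digits)

exact = 1234 * 4321
print(exact)                 # 5332114 -- the mathematically exact product
print(round_sig(exact, 4))   # 5332000 -- what the 4-significant-figure convention prefers
```

Both answers are "true", just in different contexts: exact arithmetic versus measurement arithmetic, which is the point in dispute.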
A more exotic issue is whether "times" is left- or right-associative. The Python PEP on matrix multiplication is quite interesting; it goes through edge cases such as whether matrix multiplication is right- or left-associative.
Red is actually quite a nice example. Does it mean #FF0000? If so, the one that my monitor displays? The one that my printer prints? Or is red not a property of an object but a property of the light, meaning light with a certain wavelength? In that case, if I light the room a certain way, the colors of objects change. If it's a property of the object, what about when the object emits red light but doesn't reflect it? Alternatively, red could be something that triggers the color receptors of humans in a specific way. In that case small DNA changes in the person who perceives red slightly alter what red means. But "human red" is even more complex, because the brain does complex postprocessing after the color receptors have given a certain output.
If red means #FF0000, then is #EE0000 also red, or is it obviously not red because it's not #FF0000? What do you do when someone with design experience, who therefore has many names for colors, comes along and says that freshly spilled human blood is crimson rather than red? If you look up the color crimson, you will find that Indiana University has IU crimson and the University of Kansas has KU crimson. Different values for crimson make it hard to decide whether or not the blood is actually colored crimson.
Depending on how you define red mixing it with green and blue might give you white or it might give you black.
I used to naively think that I could calculate the difference between two colors by computing the distance between the hex values. There is even a W3C recommendation defining color distance for website design that way. It turns out you actually need a more complex formula, and I'm still not sure whether the one the folks gave me on ux.stackexchange is correct for human color perception. Of course you need a concept of distance if you want to say that red is #FF0000 ± X.
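To make the contrast concrete, here is a sketch of the naive approach next to one commonly cited low-cost improvement (the "redmean" weighting; this is my illustration, not the ux.stackexchange formula, and perceptually accurate comparison would use something like CIEDE2000 instead):

```python
import math

def hex_to_rgb(h):
    """'#FF0000' -> (255, 0, 0)"""
    h = h.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def naive_distance(c1, c2):
    """Plain Euclidean distance in RGB space: the naive approach."""
    return math.dist(hex_to_rgb(c1), hex_to_rgb(c2))

def redmean_distance(c1, c2):
    """A commonly cited approximation that weights the channels
    by perceptual sensitivity (the 'redmean' formula)."""
    (r1, g1, b1), (r2, g2, b2) = hex_to_rgb(c1), hex_to_rgb(c2)
    rm = (r1 + r2) / 2
    dr, dg, db = r1 - r2, g1 - g2, b1 - b2
    return math.sqrt((2 + rm / 256) * dr**2
                     + 4 * dg**2
                     + (2 + (255 - rm) / 256) * db**2)

print(naive_distance('#FF0000', '#EE0000'))    # 17.0
print(redmean_distance('#FF0000', '#EE0000'))  # noticeably larger: red channel weighted up
```

The two metrics rank color pairs differently, which is exactly why "how far apart are these colors?" has no context-free answer.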
Lately I also had a disagreement on LW about what colors mean, when I used red to mean whatever my monitor shows me for red/#FF0000, because my monitor might not be correctly calibrated.
You might naively think that the day after September 2 is always September 3. That turns out not to be true: there is also a case where September 14 follows September 2. Some people think that a minute always has 60 seconds, but officially it can sometimes have 61. It gets worse. You don't know how many leap seconds will be introduced in the next ten years; they get announced only 6 months in advance. That means it's practically impossible to build a clock that tells the time accurately down to the second over ten years. If you look closely at statements, things usually get really messy.
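The September case is Britain's 1752 switch from the Julian to the Gregorian calendar, when eleven days were dropped. A sketch (the helper `next_civil_day_britain` is a hypothetical function that hard-codes only this one anomaly and ignores pre-1752 Julian leap rules, since Python's `datetime` uses the proleptic Gregorian calendar and knows nothing about the switch):

```python
from datetime import date, timedelta

def next_civil_day_britain(d):
    """Next day in the British civil calendar, handling the one
    well-known anomaly: in 1752 Britain jumped from September 2
    straight to September 14 when adopting the Gregorian calendar.
    (Hypothetical helper; pre-1752 Julian leap rules are ignored.)"""
    if d == date(1752, 9, 2):
        return date(1752, 9, 14)
    return d + timedelta(days=1)

print(next_civil_day_britain(date(1752, 9, 1)))   # 1752-09-02
print(next_civil_day_britain(date(1752, 9, 2)))   # 1752-09-14, not 09-03
```

So "the day after September 2 is September 3" is true in the context of the proleptic Gregorian calendar and false in the context of the British civil calendar of 1752.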
The US Air Force shot down a US helicopter in Iraq partly because it didn't consider helicopters to be aircraft. Most of the time you can get away with making vague statements for practical purposes, but sometimes a change in context changes the truth value of a statement, and then you are screwed.
Multiplication: so this looks like you're again referring to meanings being context-dependent (in this case the meaning of "= 5332114"). So far as I can see, associativity has nothing whatever to do with the point at issue here and I don't understand why you bring it up; what am I missing?
Redness: yeah, again in some contexts "red" might be taken to mean some very specific colour; and yes, colour is a really complicated business, though most of that complexity seems to me to have as little to do with the point at issue as associativity has to do with the question of what 1234x4321 is.
So: It appears to me that what you mean by saying that statements' truth values are context-dependent is that (1) their meanings are context-dependent and (2) people are often less than perfectly precise and their statements apply to cases they hadn't considered. All of which is true, but none of which seems terribly controversial. So, sorry, no upvote for contrarianism from me on this occasion :-).
The full meaning of a statement depends on the context in which it is made.
Just to clarify, you mean that there is a context in which "0 = 1" is a true statement, which is not tantamount to redefining "0", "=", or "1"? That is, in some alternate universe, "0 = 1" is consistent with the axioms of Peano arithmetic?
In most cases the numbers that normal humans use don't strictly follow the Peano axioms. Most of the time the dates of a month follow Peano: most of the time the day after September 2 is September 3. But not always.
On a computer you can't store every natural number as specified by Peano with common integers.
If you start counting apples and get really many apples you suddenly have a black hole and no apples anymore.
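The point about common integers can be shown directly. Python's own integers are arbitrary-precision, so this sketch masks explicitly to emulate what a fixed-width 32-bit machine integer does:

```python
def add_int32(a, b):
    """Addition with 32-bit two's-complement wraparound, emulating a
    fixed-width machine integer (Python ints never overflow, so we
    mask to 32 bits by hand)."""
    result = (a + b) & 0xFFFFFFFF
    return result - 0x100000000 if result >= 0x80000000 else result

# Peano says every number has a successor; a 32-bit int disagrees:
print(add_int32(2_147_483_647, 1))   # -2147483648: the "successor" wraps around
```

In the context of mathematical naturals, "n + 1 > n" is always true; in the context of a 32-bit signed integer (or of apples dense enough to collapse), it isn't.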
I don't know enough about the philosophy of math to go really deep, but we lately had someone writing posts about constructivist math that also contained the notion that there are no absolute mathematical truths.
In that case, it sounds like you're just not a math realist. There are plenty of people who believe that Peano arithmetic somehow exists on its own. Or possibly people who have a different definition of "exist" from me. It's hard to tell the difference. But I don't think disagreeing with that is all that unusual.
I'm not good enough at math to confidently answer that question. I'm good enough at math to know that people want to debate whether or not something like infinitely small numbers exists.
I don't care primarily about math. I see math as a tool. I'm happy that there are some people who build useful math and I'm happy to use it when convenient but it's not central for me.
Social problems are nearly impossible to solve. The methods we have developed in the hard sciences and engineering are insufficient to solve them.
Would you disagree with the claim that several significant social problems have in fact been solved over the history of human civilization, at least in parts of the world? Or are you saying that those were the low-hanging fruit and the social problems that remain are nearly impossible to solve?
What would you say about the progress that has been made towards satisfying the Millennium Development Goals?
Looking at the list, I would say that to the extent progress has been made towards them (and to the extent they're worthy goals, the "sustainable development" one is trying to solve the wrong problem and the "gender equality" one is just incoherent) it is incidental to the efforts of the UN.
Yvain seems to agree.
The problem with Yvain's argument is that it appears to be an example of the PHB fallacy: "anything I don't understand is easy to do". Or rather the "a little knowledge" problem: "anything I sort of understand is easy to do".
During the Enlightenment, when people first started talking about reorganizing society on a large scale, it seemed like a panacea. Now that we have several centuries of extremely messy experience with it, we know that it's harder than it first appeared and that there are many complications. Now that developments in biology seem to make it possible to change biology, it again looks like a panacea (at least to people who haven't learned the lessons of the previous failure). And just as before, I predict people will discover that it's a lot more complicated, probably just as messily.
Finding better ways for structuring knowledge is more important than faster knowledge transfer through devices such as high throughput Brain Computer Interfaces.
It's a travesty that, outside of computer programming languages, few new languages have been invented in the last two decades.
These are two separate (though related) propositions. For the purpose of this thread it would be better to separate them. (You'd probably also get more karma that way :-).)
I don't think they are separate. Languages are ways of structuring information. That might be the most contrarian thought in this post ;)
I understand; that's why they're related. But they're not the same statement; someone could agree with your first statement and disagree with your second. In fact, they could agree that finding better ways of structuring knowledge is really important, and agree that languages are ways of structuring information, but not think it's a bad thing that languages aren't being invented faster -- e.g., they might hold that outside of computer programming, there are almost always better ways to improve how we structure information than by inventing new languages.
[Please read the OP before voting. Special voting rules apply.]
The study and analysis of human movement is very underfunded. There is a lot of research into getting static information, such as DNA or X-rays, but very little into getting dynamic information about how humans move.
I agree with this, so I'm telling you instead of upvoting.
[Please read the OP before voting. Special voting rules apply.]
Toxicology research is underfunded. Investing more money into finding tools to measure toxicity makes more sense than spending money on trying to understand the functioning of various genes.
A word of advice: Perhaps anyone posting a comment here with the intention of voicing a contrarian opinion and getting upvotes for disagreement should indicate the fact explicitly in their comment. Otherwise I predict that the upvote/downvote signal will be severely corrupted by people voting "normally". (Especially if these comments produce discussion -- if A posts something you strongly disagree with and B posts a very good and clearly-explained reason for disagreeing, what are you supposed to do? I suggest the right thing here is to upvote both A and B, but it's liable to be easy to get confused...)
[EDITED to add: 1. For the avoidance of doubt, of course the above is not intended to be a controversial opinion and if you vote on it you should do so according to the normal conventions, not the special ones governing this discussion. 2. It is possible to edit your own comments; if you read the above and think it's sensible, but have already posted a contrarian opinion here, you can fix it.]
Open borders is a terrible idea and could possibly lead to the collapse of civilization as we know it.
EDIT: I should clarify:
Whether you want open borders and whether you want the immigration status quo are different questions. I happen to be against both, but it is perfectly consistent for somebody to be against open borders but be in favor of the current level of immigration. The claim is specifically about completely unrestricted migration as advocated by folks like Bryan Caplan. Please direct your upvotes/downvotes to the former claim, rather than the latter.
Why do you believe this? Countries with the most liberal immigration policies today don't seem to be on the verge of collapse.
You should visit Bradford someday.
I'm sure Bradford isn't the greatest place to live, but (1) it's better than many US inner cities, (2) the UK seems quite far from collapse, and generally (3) "such-and-such a country allows quite a lot of immigration, and there is one city there that has a lot of immigrants and isn't a very nice place" seems a very very very weak argument against liberal immigration policies.
I'm being flippant of course. I didn't intend it as a serious argument.
Quick response:
1) You cannot compare the UK's cities to the US' cities because the US has a 14% black population and the UK does not. "Inner city" is a codeword for the kind of black dysfunction that thankfully the UK does not possess.
2) The UK is not close to collapse because we don't have fully Open Borders yet. For all its faults, the EU's migration framework isn't quite letting in millions of third-worlders yet.
3) Of course.
If you don't mind, I don't want to get into a lengthy debate on the subject.
I am quite happy not to have a lengthy debate with you on this topic.
On the other hand, "such-and-such a country allows quite a lot of immigration, and the niceness of a city inversely correlates with the number of immigrants there" is a stronger argument. Especially if I can get an even stronger correlation by conditioning on types of immigrants.
Stronger, yes. But ...
Actually they probably do. That's why they immigrated in the first place.
Well it's remarkable how strong a correlation there is between one's support for immigration and how strong a bubble one has around oneself to protect oneself from them. Look how many of the most prominent immigration advocates live in gated communities.
Ebola?
Ebola is more an argument for colonialism than against open borders but let's not be picky.
Ebola is an example of a locally originated virulent existential threat, biological, social, or otherwise, that open borders fail to contain. Controlled borders, despite all the issues, can at least act as an immune system of sorts.
Yes I agree, I was just being facetious :s
[Please read the OP before voting. Special voting rules apply.]
Current levels of immigration are also terrible, and will significantly speed up the collapse of the Western world.
You can't solve AI friendliness in a vacuum. To build a friendly AI, you have to simultaneously work on the AI and the code of ethics it should use, because they are interdependent. Until you know how the AI models reality most effectively you can't know if your code of ethics uses atoms that make sense to the AI. You can try to always prioritize the ethics aspects and not make the AI any smarter until you have to do so, but you can't first make sure that you have an infallible code of ethics and only start building the AI afterwards.
How is this different from the LW mainstream?
The last time I saw someone suggest that one should build an AI without first solving friendliness completely, he was heavily downvoted. I found that excessive, which is why I posted this. I am positively surprised to see that my statement basically got no reaction. My memory must have been exaggerated by time, or maybe it was just a fluke.
edit: I now seriously doubt my previous statement. I just got downvoted in a thread in which I was explicitly instructed to post contrarian opinions, and where the only things that should get downvotes are spam and trolling, neither of which I did. Of course it's also possible that someone didn't read the OP and used normal voting rules.
Any work on AI implementation is seriously downvoted here.
Anti-contrarianism.
My current understanding of U.S. laws on cryonics is that you have to be legally pronounced brain-dead before you can be frozen. I think that defeats the entire purpose of cryonics; I can't trust attempts to reverse-engineer my brain if I'm already brain-dead; that is, if my brain cells are already damaged beyond resuscitation. I don't live in the U.S. anyway, but sometimes I consider moving there just to be close to cryonics facilities. However, as long as I can't freeze my intact brain, I can't trust the procedure.
I sense this opinion is not that marginal here, but it does go against the established orthodoxy: I'm pro-specks.
Define?
Meaning, in this scenario, I prefer 3^^^3 specks to 50 years of torture for one person.
I think that my objection is that the analysis sneaks in an ontological assumption: sensory experiences are comparable across a huge range. I'm not very sure that's true.
What does it mean for something to be incomparable? You can't just not decide.
I've always had problems with MWI, but it's just a gut feeling. I don't have the necessary specialized knowledge to be able to make a decent argument for or against it. I do concede it one advantage: it's a Copernican explanation, and so far Copernican explanations have a perfect record of having been right every time. Other than that, I find it irritating, most probably because it's the laziest plot device in science-fiction.
What's a Copernican explanation?
I've never heard the term before, but in context I'd guess it means something like "an explanation that implies we're less important than the previous explanation did". Heliocentrism vs. geocentrism, evolution vs. a supernatural creation narrative culminating in people, etc.
Why is MWI more Copernican than the Copenhagen interpretation?
You do realize that an "observer" doesn't have to be conscious, right? The Copenhagen interpretation doesn't treat humans specially. If anything, I'd say that the Copenhagen interpretation is more Copernican, since it explains the Born probabilities without requiring anthropics.
My comment was not intended to be an endorsement of polymathwannabe's analysis. I'm not a QM expert and am not qualified to comment on the details of either interpretation.
The Copernican principle states that there's nothing special or unique or privileged about our local frame of reference: we're not at the center of the solar system, we're not at the center of the galaxy, this is not the only galaxy, and the next logical step would be to posit that this is not the only universe.
I do not believe in reincarnation of any sort. I believe this is my only life.
It has been believed that the Earth was flat. I'm sure at least someone had considered the implication that the Earth goes on forever. This has turned out to be false. The Earth has positive curvature, and thus only finite surface area.
Quite a few people have considered the idea that atoms are little solar systems, which could have their own life. It turns out that electrons are almost certainly fundamental particles. And even if they're not, the way physics works on a small scale is such that life would be impossible.
Similarly, galaxies do not make up molecules. Even if there are other forces as would be necessary, the light speed limit combined with the expansion of the universe creates a cosmological event horizon. Beyond a certain scale, it is physically impossible for anything to interact.
There are a variety of physical theories that predict other universes. They work in different ways, and tend not to be contradictory. It would be unwise to reject them out of hand, but it would be equally unwise to automatically accept them.
Technical explanation: the problem with MWI is that it makes the fact that density matrices work seem like a complete epistemological coincidence.
Incidentally, I remember a debate between Eliezer and Scott Aaronson where the former confessed he stopped reading his QM textbook right before the chapter on density matrices.
[Please read the OP before voting. Special voting rules apply.]
Reductionism as a cognitive strategy has proven useful in a number of scientific and technical disciplines. However, reductionism as a metaphysical thesis (as presented in this post) is wrong. Verging on incoherent, even. I'm specifically talking about the claim that in reality "there is only the most basic level".
[Please read the OP before voting. Special voting rules apply.]
Causal connections should not be part of our most fundamental model of the Universe. Everything that is useful about causal narratives is a consequence of the Second Law of Thermodynamics, which is irrelevant when we're talking about microscopic interactions. Extrapolating our macroscopic fascination with causation into the microscopic realm has actually impeded the exploration of promising possibilities in fundamental physics.
That would explain why it took so long for someone to discover timeless physics.
Dualism is a coherent theory of mind and the only tenable one in light of our current scientific knowledge.
I upvoted because I disagree (strongly) with the second conjunct, but I do agree that certain varieties of dualism are coherent, and even attractive, theories of mind.
Do you mean that, without strong evidence that we don't have, we should assume dualism, or that we have strong evidence for dualism?
If it's the second one, can you give me an example of such a piece of evidence?
Our society is ruled by a Narrative which has no basis in reality and is essentially religious in character. Every component of the Narrative is at best unjustified by actual evidence, and at worst absurd on the face of it. Moreover, most leading public intellectuals never seriously question the Narrative because to do so is to be expelled from their positions of prestige. The only people who can really poke holes in the Narrative are people like Peter Thiel and Nassim Taleb, whose positions of wealth and prestige are independently guaranteed.
The lesson is that in the modern world, if you want to be a philosopher, you should first become a billionaire. Then and only then will you have the independence necessary to pursue truth.
What exactly does that Narrative say?
Why would he answer you without first being a billionaire?
This looks like two posts I saw quite a while ago where contrarian posts were also intended to be up-voted. I can't seem to find those posts (searching for contrarian doesn't match anything and searching for 'vote' is obviously useless). Nonetheless those posts urged to mark each contrarian comment to clearly indicate the opposite voting semantics to avoid unsuspecting readers being misled by the votes. Maybe someone can provide the links?
I only remember this one:
http://lesswrong.com/r/discussion/lw/jvg/irrationality_game_iii/
Yupp. That one. Maybe the OP could change the title to include 'irrationality' (or add a tag) to stay in the original spirit.
My understanding is that this is explicitly meant not to be quite the same thing as the Irrationality Game. Specifically, in the IG the idea is to find things you think are more likely to be true than the LW consensus reckons them; here (I think) the idea is to find things you think are actually likely to be true despite the LW consensus.
That can only be answered by the OP.
Now that there's the karma toll, using downvotes to mean anything other than ‘I don't think this comment or post belongs here’ is a bad idea. Also, now we have poll syntax.
I'd want to vote comments in this thread according to whether they're interesting or boring, regardless of whether I agree with them.
I really wish there was a way to suspend the toll for irrationality game posts.
There is no territory, it's maps all the way down.
That sounds awfully like social constructionism.
Never heard of it until now, had to look it up, couldn't find a decent writeup about it. This link seems to be the best, yet it does not even give a clear definition.
Executive summary of social constructionism: all of reality is socially agreed; nothing is objective.
I'm lost at "socially agreed". I define models as useful if they make good predictions. This definition does not rely on some social agreement, only on the ability to replicate the tests of said predictions.
Every computation requires something that instantiates it, i.e. an abstract or concrete machine to run on. In a very extreme case you might come up with a very abstract idea; however, the provider of the instantiation is then the imaginer. Every bit of information requires a transfer of energy. Instantiation is a transitive relation. If there is a simulation of me, it necessarily instantiates my thoughts too.
Also, the parent comment implies a belief in panpsychism.
Is that contrarian? In the community I come from (physics), that's a pretty commonly considered theory, even if not commonly held as most probable.
I'm an ex-physicist, and I am pretty sure that realism, and more specifically scientific realism, is the standard, if implicit, ontology in physics.
"The territory" is just whatever exists. It may well be an infinite series of entities, each more refined than the last. It's still a territory.
If there is no territory, what is a map?
I don't normally call it a map, I call it a model, but whatever the name, it's something that turns observations into predictions of future observations, without claiming that the source of these observations is something called "reality". This can go as much meta as you like. The map-territory model is one such useful model, except when it's not.
Are you saying that the universe is built like Solomonoff induction? It randomly produces observations and eliminates possibilities that don't follow them? I'd still consider that as having a territory, but it's certainly contrarian.
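The Solomonoff-style picture described above — the universe producing observations and eliminating hypotheses that don't fit them — can be sketched as a toy program. This is a hedged illustration only: the tiny hypothesis set and the made-up complexity weights are my assumptions, not part of any real formalization.

```python
# Toy sketch of Solomonoff-style elimination: hypotheses predict the next
# bit of a sequence, are weighted by 2^-complexity, and any hypothesis
# contradicted by an observation is discarded.

def constant(bit):
    # Always predicts the same bit.
    return lambda history: bit

def alternating(start):
    # Predicts start, 1-start, start, ... based on how much has been seen.
    return lambda history: start if len(history) % 2 == 0 else 1 - start

# (predictor, complexity in "bits of program length" -- illustrative numbers)
hypotheses = [
    (constant(0), 2),
    (constant(1), 2),
    (alternating(0), 3),
    (alternating(1), 3),
]

def surviving(observations):
    """Return the hypotheses consistent with every observation so far."""
    alive = []
    for predict, complexity in hypotheses:
        history = []
        consistent = True
        for obs in observations:
            if predict(history) != obs:
                consistent = False
                break
            history.append(obs)
        if consistent:
            alive.append((predict, complexity))
    return alive

obs = [0, 1, 0, 1]
alive = surviving(obs)
# Renormalize the 2^-complexity weights over the survivors.
total = sum(2.0 ** -c for _, c in alive)
posterior = [(2.0 ** -c) / total for _, c in alive]
print(len(alive), posterior)  # only alternating(0) survives these observations
```

Whether "the surviving weighted hypotheses" counts as a territory or as just another map is exactly the disagreement in this subthread.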
At the very least, your model of the universe implies the existence of a series of maps along a timeline.
I think this post should win the thread for blowing the most minds. (I'll upvote even though I think your position is tenable, since I only assign it 20% probability or so.)
Meta: It is easy to take a position that is held by a significant number of people and exaggerate it to the point where nobody holds the exaggerated version. Does that count as a contrarian opinion (since nobody believes the exaggerated version that was stated) or as a non-contrarian opinion (since people believe the non-exaggerated version)?
(Edit: This is not intended to be a controversial opinion. It's meta.)
My understanding is that the idea is to post opinions you actually hold that count as contrarian.
I was mostly thinking of the one about open borders. Hardly anyone thinks that open borders would destroy civilization, but that's an exaggerated version of "open borders are a bad idea". If I disagree that they would destroy civilization, but I agree that they are a bad idea, should I treat it as a contrarian opinion or a non-contrarian opinion?
Furthermore, it sounds like that would not qualify as "opinions you actually hold" unless the poster thought it would destroy civilization.
Really? I consider it obvious for a sufficiently strong definition of "open borders".
Also, it wouldn't completely destroy civilization, because the open-borders aspect would collapse before all of civilization did.
[META]
Previous incarnations of this idea: Closet survey #1, The Irrationality Game (More, II, III)
AI boxing will work.
EDIT: Used to be "AI boxing can work." My intent was to contradict the common LW positions that AI boxing is either (1) a logical impossibility, or (2) more difficult or more likely to fail than FAI.
"Can" is a very weak claim. With what probability will it work?
It seems unlikely that the first people to build fooming AGI will box it sufficiently thoughtfully.
I think it's likely to work if implemented very carefully by the first people to build AGI. For instance, if they were careful, a team of 100 people could manually watch everything the AI thinks, stopping its execution after every step and spending a year poring over its thoughts. With lots of fail-safes, with people assigned to watch researchers in case they try anything, with several nested layers of people watching so that if the AI infects an inner layer of people, an outer layer can just pull a lever and kill them all, etc. And with the AI inside several layers of simulated realities, so that if it does bad things in an inner layer we just kill it, and so on. Plus a thousand other precautions that we can think up if we have a couple centuries. Basically, there are asymmetries such that a little bit of human effort can make it astronomically more difficult for an AI to escape. But it seems likely that we won't take advantage of all these asymmetries, especially if e.g. there's something like an arms race.
(See also this, which details several ways to box AIs.)
Seems like an ad hominem attack. Why wouldn't the people working on this be aware of the issues? My contrarian point is that people concerned about FAI should be working on AI boxing instead.
Meta
I think LW is already too biased towards contrarian ideas - we don't need to encourage them more with threads like this.
Treated as a "contrarian opinion" and upvoted.
I think this thread is for opinions that are contrarian relative to LW, and not to the mainstream.
e.g. my opinion on open borders is something that a great majority of people share but is contrarian here, shown by the fact that as of the time of writing it is currently tied for highest-voted in the thread.
I think it's still a problem relative to LW.
Meta-comment: I'm not sure that structure or voting scheme is particularly useful. The hope would be to allow conversation about contrarian viewpoints which are actually worth investigating. I'm not sure how you separate the wheat from the chaff, but that should be the goal...
Yes. Contrarian position: This thread would be better if we upvoted contrarian positions that are interesting or caused updates, not those that we disagree with.
I think it might be better to have one where you upvote things you agree with, and just never downvote.
[Please read the OP before voting. Special voting rules apply.]
Sitting down and thinking really hard is a bad way of deciding what to do. A much better way is to find several trusted advisors with relevant experience and ask their advice.
[Please read the OP before voting. Special voting rules apply.]
Frequentist statistics are at least as appropriate as, if not more appropriate than, Bayesian statistics for approaching most problems.
[Please read the OP before voting. Special voting rules apply.]
The replication initiative (the push to replicate the majority of scientific studies) is reasonably likely to do more harm than good. Most of the points raised by Jason Mitchell in The Emptiness of Failed Replications are correct.
[Please read the OP before voting. Special voting rules apply.]
For many smart people, academia is one of the highest-value careers they could pursue.
Clarify "many"?
~30% maybe?
Highest value for the person, for society, or both?
Also, by "high value" do you mean purely monetary or do you mean other benefits?
Society. For the second question, not quite sure what it would mean to provide monetary value to society, since money is how people trade for things within society rather than some extrinsic good.
[meta]
Is there some way to encourage coherence in people's stated views? For some of the posts in this thread I can't tell whether I agree or disagree because I can't understand what the view is. I feel an urge to downvote such posts, although this could easily be a bad idea, since extreme contrarian views will probably seem less coherent. On the other hand, if I can't even understand what is being claimed in the first place then it's hard for me to get much benefit out of it.
This thread is mixed up. For example, a top-level meta comment (like in the irrationality games) is missing.
[Please read the OP before voting. Special voting rules apply.]
Artificial intelligences are overrated as a threat, and institutional intelligences are underrated.
[Please read the OP before voting. Special voting rules apply.]
Utilitarianism relies on so many levels of abstraction as to be practically useless in most situations.
I denotationally agree. In a given situation, utilitarianism will most likely have negligible value. But I think those other situations are a big deal. Knowing where to donate money makes a much larger difference than every other choice I make combined. In my experience, utilitarians are much better at deciding where to donate.
Having political beliefs is silly. Movements like neoreaction or libertarianism or whatever will succeed or fail mostly independently of whether their claims are true. Lies aren't threatened by the truth per se, they're threatened by more virulent lies and more virulent truths. Various political beliefs, while fascinating and perhaps true, are unimportant and worthless.
Arguing for or against various political beliefs functions mostly (1) to signal intelligence or allegiance or whatever, and (2) as mental masturbation, like playing Scrabble. "I want to improve politics" is just a thin veil that system 2 throws over system 1's urges to achieve (1) and (2).
If you actually think that improving politics is a productive thing to do, your best bet is probably something like "ensure more salt gets iodized so people will be smarter", or "build an FAI to govern us". But those options don't sound nearly as fun as writing political screeds.
(While "politics is the mind-killer" is LW canon, "believing political things is stupid" seems less widely-held.)
I agree that forming political beliefs is not a productive use of my time in the same way that earning a salary to donate to SCI to cure people of parasites is. I disagree that this makes it silly. The reasons you gave may not be the most noble of reasons, but they are still perfectly valid.
[Please read the OP before voting. Special voting rules apply.]
American intellectual discourse, including within the LW community, is informed to a significant extent by folk beliefs existing in the culture at large. One of these folk beliefs is an emphasis on individualism -- both methodological and prescriptive. This is harmful: methodological individualism ignores the existence of shared cultures and coordination mechanisms that can be meaningfully abstracted across groups of individuals, and prescriptive individualism deprives those who take it seriously of community, identity, and ritual, all of which are basic human needs.
Dollars and utilons are not meaningfully comparable.
Edited to restate: Dollars (or any physical, countable object) cannot stand in for utilons.
Can you explain what is wrong with the following comparison?
The value of a dollar in utilons is equal to the increase in expected utilons brought by being given another dollar.
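The comparison in the parent comment can be made concrete with a toy expected-utility calculation. This is a sketch under an assumed logarithmic utility function; the function and the wealth figures are illustrative assumptions, not a claim about anyone's actual utility.

```python
import math

def utility(wealth):
    """A toy diminishing-returns utility function (assumed, not canonical)."""
    return math.log(wealth)

def marginal_utilons_per_dollar(wealth):
    """Increase in utilons from being given one more dollar."""
    return utility(wealth + 1) - utility(wealth)

# The exchange rate between dollars and utilons depends on current wealth,
# so dollars track utilons only locally, not as a fixed conversion:
poor = marginal_utilons_per_dollar(1_000)
rich = marginal_utilons_per_dollar(1_000_000)
print(poor > rich)  # a dollar buys more utilons for the poorer agent
```

One way to read the disagreement: the comparison is well-defined at the margin, but no constant number of dollars stands in for a utilon across contexts.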
Coherent Extrapolated Volition is a bad way of approaching friendliness.
CEV assumes that things can be made coherent. This ignores how tendencies that pull in opposite directions play out in practice. This might be a feature and not a bug. The extrapolation part is like predicting growth without actually growing; it will end up somewhere different than natural growth would. It also assumes that humans will on the whole cooperate. How should the AI approach conflicts of interest among its constituents? If there is another cold-war-style scenario, does that mean most of the AI's power is wasted on staying neutral? Or does the AI just amplify individual symmetry-breaking decisions to big scales, so that the AI isn't the one that ruins everything; it merely supplies the party at fault with the tools to make it big?
Organizations tend to be able to form stances that are far narrower than individual living philosophies. Humanity can't settle on a volition broad enough to serve as a personal moral action-guiding principle. If a dictator forces a "balanced" view, why not just go all the way and impose Corrective Values?
Developing a rationalist identity is harmful. Promoting an "-ism" or group affiliation under the label "rational" is harmful.
Making your mind work better should not be a special action but a constant virtue. Categorising people into class A, who reach this high on the sanity waterline, and class B, who reach that high, isn't healthy. Being rational isn't about backing a particular answer to some central question. Being rational shouldn't be about social momentum and its inertia, but about arguments and dissolving confusions.
The word choice itself is misleading, as the word has a very different meaning in mainstream use. It enforces and communicates an aura of superiority that hinders interaction with other disciplines at the same level. If being rational is about "prosperity by choice" or "prosperity by cognition", there exist other branches of "winning" that should be positioned not as enemies but as accomplices. There is at least "prosperity by trust" and "prosperity by accumulation".
Friendliness by mathematical proof about exact trustworthiness of future computing principles is misguided.
Changing minds is usually impossible. People will only be shifted on things they didn't feel confident about in the first place. Changes in confidence are only weakly influenced by system 2 reasoning.
[Please read the OP before voting. Special voting rules apply.]
Human value is not complex, wireheading is the optimal state, and Fun Theory is mostly wrong.
[opening post special voting rules yadda yadda]
Biological hominids descended from modern humans will be the keystone species of biomes loosely descended from farms, pastures, and cities optimized for symbiosis and matter/energy flow between organisms, covering large fractions of the Earth's land, for tens of millions of years. In special cases there may be sub-biomes in which non-biological energy is converted into biomass, and it is possible that human-keystone ocean-based biomes might appear as well. Living things will continue to be the driving force of non-geological activity on Earth, with hominid-driven symbiosis (of which agriculture is an inefficient first draft) producing interesting new patterns, materials, and ecosystems.
Upvoted because it is much too specific (too many conjunctions) to be true. Even if many of them sound plausible.
Bah, I'm always doing that. I have clusters of related suspicions which I put down in one big chunk rather than as separate possibly independent points.
If I had to extract a main point it would be the first bit, biological hominids descended from modern humans existing tens of millions of years from now with their most obvious alterations to the world being an extension of what we have begun with agriculture.
Roko's Basilisk legitimately demonstrates a problem with LW. "Rationality" that leads people to believe such absurd ideas is messed up, and 1) the presence of a significant number of people psychologically affected by the basilisk and 2) the fact that Eliezer accepts that basilisk-like ideas can be dangerous are signs that there is something wrong with the rationality practiced here.
I agree with this so much that, in order to not affect the mechanics of this thread, I'm going to upvote some other post of yours.
wait. now I'm not sure how to vote on THIS comment, which is brilliant.
Once you actually take human nature into account (especially the things that cause us to feel happiness, pride, regret, empathy), then most seemingly-irrational human behavior actually turns out to be quite rational.
Conscious thought processes are often deficient in comparison to subconscious ones, both in terms of speed and in terms of amount of information they can integrate together to make decisions.
From 1 and 2 it follows that most attempts at trying to consciously improve 'rational' behavior will end up falling short or backfiring.
[Please read the OP before voting. Special voting rules apply.]
The dangers of UFAI are minimal.
Please do elaborate!
Do you think that it is unlikely for a UFAI to be created, that if a UFAI is created it will not be dangerous, or both?
[Please read the OP before voting. Special voting rules apply.]
Politically, the traditional left is broadly correct.
"Traditional" left meaning what? Communism? Socialism? Democrats?
Correct meaning what? I'm interpreting "the traditional left" as a value system instead of a set of statements about the world.
[Please read the OP before voting. Special voting rules apply.]
The notion of freedom is incoherent. People would be better off abandoning the pursuit of it.
Where does the incoherence lie?
The way freedom is usually formulated, in the notion of free will or free choices.
To make sure I'm getting this right: is this the school of anti-freedom where the notion of moral responsibility is also deemed incoherent?
Freedom meaning what?
Free choice? I don't believe in that.
The right to make any choice which doesn't impair the choices of others? I strongly agree with that.
What do you think of Free Will Is as Real as Baseball?
[Please read the OP before voting. Special voting rules apply.]
An AI which followed humanity's CEV would make most people on this site dramatically less happy.
My intuition is in agreement with this, but I would love a more worked out description of your own thoughts (in part because my own thoughts aren't clear).
Do you mean that, if shown the results, we would decide that we don't like humanity's CEV, or that humanity desires that we be unhappy?
[Please read the OP before voting. Special voting rules apply.]
As a first approximation, people get what they deserve in life. Then add the random effects of luck.
Max L.
What ethical theory are you using for your definition of "deserve"?
[Please read the OP before voting. Special voting rules apply.]
The SF Bay Area is a lousy place to live.
Max L.