What are your contrarian views?
As per a recent comment this thread is meant to voice contrarian opinions, that is, anything this community tends not to agree with. Thus I ask you to post your contrarian views and upvote anything you do not agree with based on personal beliefs. Spam and trolling still need to be downvoted.
Comments (806)
[Please read the OP before voting. Special voting rules apply.]
As long as you get the gist (think in probability instead of certainty, update incrementally when new evidence comes along), there's no additional benefit to learning Bayes' Theorem.
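The gist described above (think in probabilities, update incrementally as evidence arrives) fits in a few lines. A minimal sketch, with likelihood numbers chosen purely for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior P(H) and the likelihoods of the evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Incremental updating: each observation, twice as likely under H as under not-H,
# nudges the belief upward; certainty is never required or produced.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(round(belief, 3))  # belief climbs from 0.5 to ~0.889
```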
[Read the OP before voting for special voting rules.]
The many worlds interpretation of quantum mechanics is categorically confused nonsense. Its origins lie in a map/territory confusion, and in the mind projection fallacy. Configuration space is a map, not territory—it is an abstraction used for describing the way that things are laid out in physical space. The density matrix (or in the special case of pure states, the state vector, or the wave function) is a subjective calculational tool used for finding probabilities. It's something that exists in the mind. Any 'interpretation' of quantum mechanics which claims that any of these things exists in reality (e.g. MWI) therefore commits the mind projection fallacy.
While I am not pro-wireheading and I expect this to be only a semi-contrarian position here...
Happiness is actually far more important than people give it credit for, as a component of a reflectively coherent human utility function. About two thirds of all statements made of the form, "$HIGHERORDERVALUE is more important than being happy!" are reflectively incoherent and/or pure status-signaling. The basic problem that needs addressing is of distinction between simplistic pleasures and a genuinely happy life full of variety, complexity, and subtlety, but the signaling games keep this otherwise obvious distinction from entering the conversation simply because happiness of all kinds is signaled to be low-status.
What do you mean by "pure signalling"? You evolve to show the signal. Whether the actual mental process that produces the signal is a cost-benefit analysis on lying or actually believing what you say doesn't matter. Does evolving to automatically feel more empathy for children as a way of signalling that you'd be a good parent count as "pure signalling"?
History isn't over in any Fukuyamian sense; in fact the turmoil of the twenty-first century will dwarf the twentieth. A US-centered empire will likely take shape by century's end.
I will elaborate if requested.
I agree with the first sentence, I disagree with the second. Few countries managed more than a century of world domination and the US is already showing the classic signs of decay.
So? The Roman Republic managed to expand its holdings even as it was decaying into the Empire.
Which would be?
[Please read the OP before voting. Special voting rules apply.]
The necessary components of AGI are quite simple, and have already been worked out in most cases. All that is required is a small amount of integrative work to build the first UFAI.
What do you mean by that? Technically, all that is required is the proper arrangement of transistors.
I mean that the component pieces such as planning algorithms, logic engines, pattern extractors, evolutionary search, etc. have already been worked out, and that there exist implementable designs for combining these pieces together into an AGI. There aren't any significant known unknowns left to be resolved.
I don't see anything in there about a goal system -- not even one that optimizes for paperclips. Goertzel and his lot are dualists and panpsychists: how can we expect them to complete a UFAI when they turn to mysticism when asked to design its soul?
Then where's the AI?
All the pieces for bitcoin were known and available in 1999. Why did it take 10 years to emerge?
So, um, what's the problem, then?
There are no problems. UFAI could be constructed by a few people who know what they are doing on today's commodity hardware with only a few years effort.
The outside view on this is that such predictions have been made since the start of A(G)I 50 or 60 years ago, and it's never panned out. What are the inside-view reasons to believe that this time it will? I've only looked through the table of contents of the Goertzel book -- is it more than a detailed survey of AGI work to date and speculations about the future, or are he and his co-workers really onto something?
My prediction / contrarian belief is that they are really onto something, with caveats (did you look at the second book? that's where their own design is outlined).
At the very highest level I think their CogPrime design is correct in the sense that it implements a human-level or better AGI that can solve many useful categories of real world problems, and learn / self-modify to solve those categories it is not well adapted to out of the box.
I do take issue with some of the specific choices they made, both in fleshing out components and in the current implementation, OpenCog. For example I think using the rule-based PLN logic engine was a critical mistake, but at an architectural level that is a simple change to make, since the logic engine is / should be loosely coupled to the rest of the design (it's not in OpenCog, but c'est la vie; I think a rewrite is necessary anyway for other reasons). I'd swap it out for a form of logical inference based on Bayesian probabilistic graph models a la Pearl. There are various other tweaks I would make regarding the atom space, sub-program representation, and embodiment. I'd also implement the components within the VM language of the AI itself, such that it is able to self-modify its own core capabilities. But at the architectural level these are tweaks of implementation details. It remains largely the same design outlined by Goertzel et al.
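For concreteness, here is a toy sketch of the kind of inference "a la Pearl" meant above: exact inference by enumeration in a tiny two-cause Bayesian network. The network, variable names, and probabilities are invented for illustration, not taken from OpenCog:

```python
from itertools import product

# Toy network: Rain and Sprinkler are independent causes of WetGrass.
P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: 0.1, False: 0.9}
P_WET = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.05,
}

def p_rain_given_wet():
    """Exact inference by enumeration: P(Rain=True | WetGrass=True)."""
    joint = {
        (r, s): P_RAIN[r] * P_SPRINKLER[s] * P_WET[(r, s)]
        for r, s in product([True, False], repeat=2)
    }
    evidence = sum(joint.values())
    return sum(v for (r, _), v in joint.items() if r) / evidence

print(round(p_rain_given_wet(), 3))  # observing wet grass raises P(Rain) from 0.2 to ~0.645
```

Real systems replace brute-force enumeration with belief propagation or sampling, but the semantics (a directed graph of conditional probabilities, queried for posteriors) are the same.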
AI has been around for almost 60 years. However AGI as a discipline was invented by Goertzel et al only in the last 10 to 15 years or so. The story before that is honestly quite a bit more complex, with much of the first 50 years of AI being spent working on the sub-component projects of an integrative AGI. So without prototype solutions to the component problems, I don't find it at all surprising that progress was not made on integrating the whole.
Any evidence for that particular belief?
What do you think is missing from the implementation strategy outlined in Goertzel's Engineering General Intelligence?
Haven't read it, but I'm guessing a prototype..?
If you had that, then you wouldn't need a few years to implement it now, would you?
[Please read the OP before voting. Special voting rules apply.]
Moral realism is true.
Certainly when I dissolved the concept of universal normativity into agent-design normativity, I found myself looking at something that more closely resembles moral realism than any non-realist position I've seen.
Do you mean this (i.e. that a specific morality has or had an evolutionary advantage) or something else?
I mean that moral statements have a truth-value, some moral statements are true, and the truth of moral statements isn't determined by opinion.
What does it mean for a moral statement to be true? After all, it is not a mathematical statement. How does one tell if a moral statement is true?
EDIT: it seems like a category error to me (morality is evaluated as if it were math), but maybe I am missing something.
In many religions (which do tend towards moral realism :-/) morality is quite similar to physics: it describes the way our world is constructed. Good people go to heaven, evil people go to hell, karma determines your rebirth, etc. etc. Morality is objective, it can be discovered (though not exactly by a scientific method), and "this moral statement is true" means the usual thing -- correspondence to reality.
It's hard to give a general answer to this, as different moral realists would answer this question differently. Most would agree that it means that there are facts about what one ought to do and not do.
How do you tell if something is a fact?
That depends on what it's a fact about. If it's a fact about the physical world, I use my senses. If it's about mathematics, I use mathematical methods (e.g. proofs). If it's a moral fact, I reason about whether it's something that one should do.
How do you know if your reasoning is correct and someone else's (who disagrees with you) isn't?
By engaging with their arguments, seeing what they're based on, whether they really are what one ought to do, etc.
So, what do you do if you start from the same premises but then diverge? Is there an "objective" way to figure out who is right in absence of some mathematical theory of morality?
[Contrarian thread special voting rules]
I bite the bullet on the repugnant conclusion
[Contrarian thread special voting rules]
I would not want to be cryonically frozen and resurrected as my sense of who I am is tied into social factors that would be lost
I can't really disagree with the statement as is, because it is about your wants, not mine, but "I" do not feel the same way.
Would you be willing to freeze if your family did? Your friends and family? Your whole country? Or even if everyone in the world was preserved, would you expect the structure of society post-resurrection be different enough that you would refuse preservation?
I'm not sure about the friends and family examples; it would depend on what I thought that future society would be like. If cryonics was the norm I probably wouldn't opt out of it, because I would have a reasonable expectation that, if resurrection was successful, there would be other people in the same situation, so there would be infrastructure to support us.
The social factors I'm thinking of include the skills, qualifications and experience that I have developed in my life, which would likely be irrelevant in a world that can resurrect me. At best I would be a historical curiosity with nothing to contribute.
[Contrarian thread, special voting rules apply]
Engaging in political processes (and learning how to do so) is a useful thing, and is consistently underrated by the LW consensus.
Just a reminder, the local meme "politics is the mind killer" is an injunction not against discussing politics, but against using political examples in a non-political argument.
Agreed. But there is also a generally negative attitude towards politics
[ Please read the OP before voting. Special voting rules apply.]
MWI is wrong, and relational QM is right.
Physicalism is wrong, because of the mind body problem, and other considerations, and dual aspect neutral monism is right.
STEM types are too quick to reject ethical Objectivism. Moreover moral subjectivism is horribly wrong. Don't know what the right answer is, but it could be some kind of Kantianism or Contractarianism.
Arguing to win is good; or, to be precise, it largely coincides with truth-seeking.
There is no kind of smart that makes you uniformly good at everything.
Even though philosophy has no established body of facts, it is possible to be bad at philosophy and make mistakes in it. Scientists who try to solve longstanding philosophical problems in their lunch breaks end up making fools of themselves. Philosophy is not broken science.
A physicalistically respectable form of free will is defensible.
Bayes is oversold. Quantifying what you haven't first understood is pointless. Being a good rationalist at the day-to-day level has more to do with noticing your own biases, and with emotional maturity, than with mental arithmetic.
MIRI hasn't made a strong case for AI dangers.
The standard theism/atheism debate is stale, broken and pointless: people who can't understand metaphysics arguing with people who believe it but can't articulate it.
All epistemological positions boil down to fundamental, unprovable intuitions. Empiricism doesn't escape, because it is based on the intuition that if you can see something, it is really there. STEM types have an overly optimistic view of their epistemology, because they are acculturated out of worrying about fundamental issues.
Rationality is more than one thing.
There are so many problems with this post I wish I could vote several times.
One example: how can you claim both "A physicalistically respectable form of free will is defensible" and "Physicalism is wrong?"
Easily. The wrongness of physicalism doesn't imply the wrongness of everything that is merely compatible with it.
Too many statements in a single post.
[Please read the OP before voting. Special voting rules apply.]
It would be of significant advantage to the world if most people started living on houseboats.
Waste management?
Is there even enough coast for that?
If people didn't live in cities, they'd have to commute more. There would be a large increase in transportation costs.
Where I live there is an abundance of canals. "Most people" is perhaps an exaggeration, but the main points in defence of increased houseboating would be:
(1) a house is a large, expensive, immobile and illiquid asset. A houseboat is rather less expensive, which frees up capital for other purposes.
(2) the internet makes it less necessary for most people to live in cities.
(3) there would be lower costs associated with moving between different areas.
Your mileage may vary. Getting internet made me yearn to move to a larger city where I could meet more interesting people and do more interesting stuff---which in the end I did.
Sounds like a Dutch city.
But, it seems, no less desired. See e.g. LW meetups.
If you don't want much cost of moving you can simply rent a flat.
I am pretty sure that out of two equivalent houses the one which floats would be noticeably more expensive, and more expensive to maintain, too. Houseboats are typically less expensive than houses because they are smaller and less convenient.
Aren't RVs even cheaper?
And shacks made out of plywood and corrugated iron are cheaper still.
Indeed. I would in principle be willing to apply a similar argument to RVs, but (since living in an RV holds no aesthetic appeal for me, whereas houseboating does) I am rather less aware of what the logistics would be like.
I find it difficult to believe that houseboats are inherently less expensive. It seems more likely that there's some reason house boats cannot be made as large and expensive as regular houses, so the average houseboat is much cheaper than the average house, even if it's more expensive than a house of the same quality.
Internet access gets much more difficult if you don't live in cities. And while the internet mitigates the costs of people not living near each other, it does not remove them. There are still lots of people putting large amounts of time into physically commuting.
Why not use mobile homes? They can't be stacked in three dimensions like apartments, but at least you can put them in two-dimensional grids.
There certainly are houseboats much larger and more expensive than regular houses.
Your link is broken. I'm not sure the proper way to fix it, but it's hard to have links to pages with end parentheses in them.
Whoops. Fixed.
Motor homes might well make more sense for this. The reason I came to this view is that I like canals and so houseboating seemed like a pleasant idea; at around the same time, I read this NY Times piece suggesting that home ownership is not necessarily a good thing. Houseboating seemed like a way of dealing with that; motorhomes simply didn't occur to me as a (probably better) alternative.
[Please read the OP before voting. Special voting rules apply.]
There probably exists - or has existed at some time in the past - at least one entity best described as a deity.
Define deity?
[Please read the OP before voting. Special voting rules apply.]
Homeownership is not a good idea for most people.
Please elaborate.
The largest avoidable source of pain and boredom in the life of a typical western citizen is their commute. The sane response to this problem is to live as close to your place of employment as is at all practical. Valuing the time spent commuting at the same rate as your hourly earnings, the monthly penalty for living any significant distance at all from your job can get quite absurd for professionals in financial terms alone, and the human cost is greater, because commuting sucks up the time you actually can dispose of as you wish.
Home ownership increases the costs of moving residence dramatically compared to renting, and is thus not a good idea unless you have a job which you anticipate keeping for a far greater period of time than is typical in modern society. I.e., do you have tenure or the effective equivalent? Then buying over renting makes sense. If you don't, all buying does is make it hurt a lot more to move when you get a new job.
[Please read the OP before voting. Special voting rules apply.]
You can expect to have about as much success effectively and systematically teaching rationality as you could in effectively and systematically teaching wisdom. Attempts for a systematic rationality curriculum will end up as cargo cultism and hollow ingroup signaling at worst and heuristics and biases research literature scholarship at best. Once you know someone's SAT score, knowing whether they participated in rationality training will give very little additional predictive power on whether they will win at life.
What's the difference?
Upvoted because I disagree with the implicit assumption that the best way of teaching rationality-as-winning would look like heuristics and biases scholarship, rather than teaching charisma, networking, action, signaling strategies, and how to stop thinking.
No, I'm saying it's another failure mode for producing general awesomeness, but at least it might produce some useful scholarship.
EDIT: I also don't think that your description would go very far, it'd still end up with the innately clever people dominating, and the rest just stuck in the general arms race and the confusion of actually effectively applying the skills to the real world, just like all the self-help we already have that teaches that stuff seems to end up.
I'd like to hear a more substantive argument if you've got one. Do you think there are few general-purpose life skills (e.g. those purportedly taught in Getting Things Done, How to Win Friends and Influence People, etc.)? What's your best evidence for this?
I think that there is a huge unseen component in life skills where in addition to knowing about a skill, you need to recognize a situation where the skill might apply, remember about the skill, figure out if the skill is really appropriate given what's going on, know exactly how you should apply the skill in that given situation and so on. There isn't really an algorithm you can follow without also constantly reflecting on what is actually going on, and I think that in what basically looks like another instance of Moravec's paradox, the big difficult part is actually in the unconscious situation awareness and the things you can write in a book like GTD and give to people are a tiny offshoot on that.
No solid evidence for this except for the observation that there don't seem to be self-helpy systems for general awesomeness that actually do consistently make people who stick with them more awesome.
OK, what if you were to, say, at the end of each day brainstorm situations during the day when skill X could have been useful in order to get better at recognizing them?
Could meditation be useful for this?
Sounds like this would still run into the problem I anticipate and be hindered by poor innate memory and pattern matching abilities or low conscientiousness. Some people just won't recognize the situation even in retrospect or have already forgotten about it.
Here's an example of what a less-than-ideal teaching scenario might look like. If MIT graduates are one end of the spectrum, that's close to the other end, and most people are going to be somewhere in between.
Meditation is definitely one of the more interesting self-improvement techniques where you basically just follow an algorithm. Still, it probably won't increase your innate g, much like nothing else seems to. And there are some not entirely healthy subcultures around extensive meditation practices (detachment from the physical world as in "the only difference between an ideal monk and a corpse is that the monk still has a beating heart" and so on), which might be trouble for someone who really wants an algorithm to follow and grabs on to meditation without having much of a counterweight in their worldview.
"There exists no rationality curriculum such that a person of average IQ can benefit from it" and "there exists no rationality curriculum such that a person of LW-typical IQ can benefit from it" are not the same statement.
*Shrug* It sounds as though you want a rationality curriculum to fail, given that you are brainstorming this kind of creative failure mode.
I want to believe that the rationality curriculum will fail iff it is the case that the rationality curriculum will fail.
[Please read the OP before voting. Special voting rules apply.]
The humanities are not only a useful way of knowing about the world; properly interfaced, they ought to be able to significantly speed up science.
(I have a large interval for how controversial this is, so pardon me if you think it's not.)
Although the social sciences have undeniably helped a lot with our understanding of ourselves, their refusal to follow the scientific method is disgraceful.
As a social scientist (who spends a LOT of time and effort developing rigorous methodology in keeping with the scientific method), I find your dismissal of my entire academic superfield disgraceful. Perhaps you've confused social science with punditry?
What kind of social science do you do?
Computational Social Science (which is extremely methodology-oriented). I was trained in Political Science, but the lines between the social sciences are pretty fuzzy. I do substantive work which could be called Political Science, Sociology, or Economics.
The definitions that I found are very wide and very fuzzy, and, essentially, boil down to "social science but with computers!". Is it, basically, statistics (which nowadays is often called by the fancier name of "data science")?
I doubt you can find a widely-acceptable definition of Data Science which is any less fuzzy. Computational Social Science (CSS) is a subset of Data Science. Take Drew Conway's Data Science Venn Diagram: If your Substantive Expertise is a Social Science, you're doing Computational Social Science.
Statistics is an important tool in CSS, but it doesn't cover the other types of modeling we do: Agent-Based, System Dynamic, and Algorithmic Game Theoretic to name a few.
Ah, I see, so you're coming from that direction.
But let me ask a different question: what kind of business are you in? Are you in the business of making predictions? In the business of constructing explanations of how the world works? In the business of coming up with ways to manipulate the world to achieve desired ends?
I'm in the business of modeling. I do all three of those tasks, but the emphasis is definitely on the last.
Could you give examples of successful interventions that your field has come up with that wouldn't otherwise have been put into practice?
Perhaps you were exposed to better education. In Latin American universities, the humanities are plagued with antipositivism. If you've managed to stay away from it, kudos to you.
Oof. You just trampled one of my pet peeves: Social Science is a subset of the Sciences, not the Humanities.
There's still a persistent anti-positivist streak in the Humanities in the US, but mostly positivism has just been irrelevant to the work of Humanities scholars (though this is changing in some interesting and exciting ways).
More importantly, the social sciences in the US are overwhelmingly positivist, even amongst researchers whose work is not strictly empirical. I wish I could take credit for those good influences, but I think you're probably the one deserving of kudos for managing to become a rationalist in such a hostile environment.
When I said humanities I didn't mean social sciences; in fact, I thought social sciences explicitly followed the scientific method. Maybe the word points to something different in your head, or you slipped up. Either way, when I say humanities, I actually mean fields like philosophy and literature and sociology which go around talking about things by taking the human mind as a primitive.
The whole point of the humanities is that it's a way of doing things that isn't the scientific method. The disgraceful thing is the refusal to interface properly with scientists and scientific things - but there's no shortage of scientists who refuse to interface with humanities either, when you come down to it. My head's canonical example is Indian geneticists who try to go around finding genetic caste differences; Romila Thapar once gave an entertaining rant about how anything they found they'd be reading noise as signal because the history of caste was nothing like these people imagined.
And, on the other hand, we have many Rortys and Bostroms and Thapars in the humanities who do interface.
Funny, humanities people were saying the same thing about genetic racial differences until said differences started showing up.
a) Actually Thapar's point wasn't that there were no genetic differences (in fact, the theory of caste promulgated by Dalit activists is that it's created by the prohibition of inter-caste marriage and therefore pretty much predicts genetic differences) - but that the groupings done by the researchers wasn't the correct one.
b) I should actually check that what I surmised is what she said. Thanks for alerting me to the possibility.
So, do you have independent evidence that the theory promulgated by the Dalit activists is correct? Theories promulgated by activists don't exactly have the best track record.
Actually, with the caveat that I don't have any object-level research, I doubt it; they assign a rigidity to the whole thing that seems hard to institute. My point was that 'do there exist genetic differences' is not the issue here.
So what is the issue, that geneticists didn't consult with Dalit activists before designing their experiment?
So, Romila Thapar is not a Dalit activist, just a historian (I'm guessing this is a source of confusion; I could be wrong).
I'm saying they should have read up before starting their project.
I can't find the study for some reason, so I'll try to do it from memory. They randomly picked, from a city, Dalits (Dalit is a catch-all term coined by B. R. Ambedkar for people of the lowest castes, and people outside the caste system, all of whom were treated horribly) and people from the merchant castes, to look for genetic differences. Which is all fine and dandy, but for the fact that neither 'Dalit' nor 'merchant-caste' is an actual caste; there are many castes which come under those two categories. So, assuming a simple no-inter-caste-marriage model of caste, a merchant family from village A thousands of kilometres from village B has about as much (or, considering marginal things like babies born out of rape, even less) genetic material in common as a merchant and a Dalit family from the same village, unless there's a common genetic ancestor to all merchant families. And that's where reading historical literature comes in: the history of caste is much more complicated, involving for example periods when it was barely enforced, and shuffling, and all sorts of stuff. So they will find differences in their study, but the differences won't reflect actual caste differences.
The problem is that they're trying to study areas where it's really hard to get enough scientific evidence.
Do you mean humanities in the abstract or the people currently occupying humanities departments?
In the abstract. Though, undoubtedly, many of the people can do wonders too.
The United States prison system is a tragedy on par with, or exceeding, the horror of the Soviet gulags. In my opinion the only legitimate reason for incarcerating people is to prevent crime. The USA currently has 7 times the OECD average number of prisoners and crime rates similar to the OECD average. 6/7 of the US penal system population is a little over 2 million people. If we are unnecessarily incarcerating anywhere close to 2 million people right now, then the USA is a morally hellish country.
Note: less than half of the inmates in the USA are there for drug-related charges. It is very close to 50% federally, but less at the state level. Immediately pardoning all drug criminals only gets us to 3.5 times the OECD average.
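Taking the comment's figures at face value (they are claims from the comment, not independently verified statistics), the arithmetic works out roughly like this:

```python
# Figures as claimed above: US incarceration is 7x the OECD average,
# and the US penal population is roughly 2.4 million people.
oecd_multiple = 7.0
us_prisoners = 2.4e6

# "6/7 of the US penal system population": the excess over a 1x-OECD baseline.
excess = us_prisoners * (oecd_multiple - 1) / oecd_multiple
print(round(excess / 1e6, 2))  # ~2.06 million, i.e. "a little over 2 million"

# If about half of inmates were held on drug charges, pardoning them
# would cut the multiple from 7x to 3.5x the OECD average.
print(oecd_multiple * (1 - 0.5))  # 3.5
```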
Is your claim that they're in prison for crimes they didn't commit, or that we should let more crimes go unpunished?
False dichotomy. It's about sentence length, e.g. three strikes.
So if we reduced sentences what effect do you think that would have on crime rates? Remember three strikes was passed in response to crime rates being too high.
Drastically increasing sentences didn't drastically reduce crime, so...
Comparable countries have lower crime rates and lower prison populations, so they must be doing something right.
You don't have to keep moving the big lever up and down: you can get Smart on Crime.
Well, the crime did fall. Whether it was due to increased sentences or something else is still being debated.
They also have fewer people from populations with high predisposition to violence (and yes, I mean blacks).
The last was disappointingly predictable.
I'm not the OP, but I'll throw a quote into this thread:
So which crimes would you take off the books and what percent of prisoners would that remove?
We can start with the drug war, things like civil forfeiture, and go on from there. You might be interested in this book.
The problems with the US criminal justice system go much deeper than just the abundance of laws, of course.
Civil forfeiture doesn't fill prisons.
The problem with having too many felonies is not that prisons get filled with people being punished for silly things; it's that the people who do get punished for silly things tend to correlate with the people actively opposing the current administration.
There are a LOT of problems with having too many felonies, but that's a large discussion not quite in the LW bailiwick...
Agreed, but the discussion was about there supposedly being too many people in prison.
This seems close to the (liberal) mainstream. Why do you think it is contrarian on LW?
I do not think most people consider this a problem on the par of the Soviet Gulag. Though possibly I am wrong.
The problem with the Soviet Gulag wasn't so much its size, but rather the whole system it was part of and things which got you sent to it.
Bitcoin and a few different altcoins can all coexist in the future and each have significant value, each fulfilling different functions based on their technical details.
That doesn't seem especially contrarian to me, given the base premise that cryptocurrency has legs in the first place. At the very least, it seems obvious that easy-to-trace and difficult-to-trace transaction systems have different and complementary niches.
I thought it was contrarian, but perhaps I am wrong? I've seen plenty of 'every altcoin is worthless, don't ever buy any' comments in discussions in the past.
I think it's a small (but loud and motivated) group of bitcoin fans that think that, with most people taking your position (at least conditional on the statement that any cryptocurrency is useful at all).
I'm upvoting top-level comments which I think are in the spirit of this post but I personally disagree with (in the case of comments with several sentences, if I disagree with their conjunction), downvoting ones I don't think are in the spirit of this post (e.g. spam, trolling, views which are clearly not contrarian either on LW nor in the mainstream), and leaving alone ones which are in the spirit of this post but I already agree with. Is that right?
What about comments I'm undecided about? I'm upvoting them if I consider them less likely than my model of the average LWer does and leaving them alone otherwise. Is that right?
I interpret the intention as "upvote serious ones you disagree with, downvote trolls, ignore those you agree with". In other words, you are not judging what you think LW finds contrarian; you are reporting whether you agree with the views posters perceive as contrarian, without penalizing people for misjudging what is contrarian.
Hopefully this thread is a useful tool for figuring out which views are the most out of the LW mainstream, but still are taken seriously by the community. 10+ upvotes would probably be in the ballpark.
The universe we perceive is probably a simulation of a more complex universe. In a break with the standard simulation hypothesis, however, the simulation is not originated by humans. Instead, our existence is simply an emergent property of the physics (and stochasticity) of the simulation.
Why? This looks as if you're taking a hammer to Ockham's razor.
In the strictest sense, yes I am. I design, build and test social models for a living (so this may simply be a case of me holding Maslow's Hammer). The universe exhibits a number of physical properties which resemble modeling assumptions. For example, speed is absolutely bounded at c. If I were designing an actual universe (not a model), I wouldn't enforce upper bounds--what purpose would they serve? If I were designing a model, however, boundaries of this sort would be critical to reducing the complexity of the model universe to the realm of tractable computability.
On any given day, I'll instantiate thousands of models. Having many models running in parallel is useful! We observe one universe, but if there's a non-zero probability that the universe is a model of something else (a possibility which Ockham's Razor certainly doesn't refute), the fact that I generate so many models is indicative of the possibility that a super-universal process or entity may be doing the same thing, of which our universe is one instance.
I do think it's useful to use what we know about simulations to inform whether or not we live in one. As I said in my other comment, I don't think a finite speed of light, etc., says much either way, but I do want to note a few things that I think would be suggestive.
If time was discrete and the time step appeared to be a function of known time step limits (e.g., the CFL condition), I would consider that to be good evidence in favor of the simulation hypothesis.
The jury is still out on whether time is discrete, so we can't evaluate the second necessary condition. If time were discrete, this would be interesting and could be evidence for the simulation hypothesis, but it'd be pretty weak. You'd need something further that indicates something about the algorithm, like the time step limit, to make a stronger conclusion.
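In code, the kind of limit being described might look like the following sketch (the function name and numbers are illustrative only; real solvers are far more involved):

```python
def cfl_time_step(dx, u, courant=0.5):
    """Largest stable time step for 1D advection at speed u on grid
    spacing dx, per the CFL condition dt <= C * dx / |u|."""
    return courant * dx / abs(u)

# Halving the grid spacing halves the allowed time step:
dt_coarse = cfl_time_step(dx=0.02, u=2.0)  # 0.005
dt_fine = cfl_time_step(dx=0.01, u=2.0)    # 0.0025
```

The point is that an observed universal time step tracking a formula like this would hint at an underlying explicit solver.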
Another possibility is if some conservation principle were violated in a way that would reduce computational complexity. In the water sprinkler simulations I've run, droplets are removed from the simulation when their size drops below a certain (arbitrary) limit as these droplets have little impact on the physics, and mostly serve to slow down the computation. Strictly speaking, this violates conservation of mass. I haven't seen anything like this in physics, but its existence could be evidence for the simulation hypothesis.
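A toy version of the droplet culling described above might look like this (names and numbers are invented for illustration):

```python
MIN_RADIUS = 1e-4  # arbitrary cutoff below which droplets barely affect the physics

def cull_droplets(droplets):
    """Drop droplets below the size cutoff, returning the survivors and
    the deliberately discarded mass: a controlled violation of mass
    conservation traded for speed."""
    kept = [d for d in droplets if d["radius"] >= MIN_RADIUS]
    lost_mass = sum(d["mass"] for d in droplets) - sum(d["mass"] for d in kept)
    return kept, lost_mass

droplets = [{"radius": 1e-3, "mass": 4.0}, {"radius": 1e-5, "mass": 0.01}]
kept, lost = cull_droplets(droplets)  # keeps only the first droplet
```

An in-simulation observer who could audit the bookkeeping would see total mass quietly decreasing at each culling step.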
This is not true in general. I've considered a similar idea before, but as a reason to believe we don't live in a simulation (not that I think this is a very convincing argument). I work in computational fluid dynamics. "Low-Mach"/incompressible fluid simulations where the speed of sound is assumed infinite are much more easily tractable than the same situation run on a "high Mach" code, even if the actual fluid speeds are very subsonic. The difference of running time is at least an order of magnitude.
To be fair, it can go either way. The speed of the fluid is not "absolutely bounded" in these simulations. These simulations are not relativistic, and treating them as relativistic would make things more complicated. The speed of acoustic waves, however, is treated as infinite in the low Mach limit. I imagine there are situations in other branches of mathematical physics where treating a speed as infinite (as in the case of acoustic waves) or zero (as in the non-relativistic case) simplifies certain situations. In the end, it seems like a wash to me, and this offers little evidence in favor or against the simulation hypothesis.
Huh. It never occurred to me that imposing finite bounds might increase the complexity of a simulation, but I can see how that could be true for physical models. Is the assumption you're making in the Low Mach/incompressible fluid models that the speed of sound is explicitly infinite, or is it that the speed of sound lacks an upper bound? (i.e., is there a point in the code where you have to declare something like "sound.speed = infinity"?)
Anyway, I've certainly never encountered any such situation in models of social systems. I'll keep an eye out for it now. Thanks for sharing!
As a trivial point, imposing finite bounds means that you can't use the normal distribution, for example :-)
Not true: it means you shouldn't use a normal distribution, and when you do you should say so up front. I see no reason not to apply normal distributions if your limit is high (say, greater than 4 sigmas--social science is much fuzzier than physical science). Better yet, make your limit a function of the number of observations you have. As the probability of getting into the long tail gets higher, make the tail longer.
Truncated normal is not the same thing as a plain-vanilla normal. And using it does mean increasing the complexity of the simulation.
Sentence 1: True, fair point. Sentence 2: This isn't obvious to me. Selecting random values from a truncated normal distribution is (slightly) more complex than, say, a uniform distribution over the same range, but it is demonstrably (slightly) less complex than selecting random values from an unbounded normal distribution. Without finite boundaries, you'd need infinite precision arithmetic just to draw a value.
The problem is not with value selection, the problem is with model manipulation. The normal distribution is very well-studied, it has a number of appealing properties which make working with it rather convenient, there is a lot of code written to work with it, etc. Replace it with a truncated normal and suddenly a lot of things break.
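For what it's worth, the sampling step being compared above can be done with nothing but the standard library via rejection sampling. This sketch assumes wide bounds (e.g. plus or minus 4 sigma), where rejections are rare:

```python
import random

def truncated_normal(mu, sigma, lo, hi):
    """Draw one sample from a normal truncated to [lo, hi] by rejection:
    sample the plain normal and discard draws outside the bounds.
    Cheap for wide bounds, wasteful for tight ones."""
    while True:
        x = random.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

samples = [truncated_normal(0.0, 1.0, -4.0, 4.0) for _ in range(1000)]
# Every sample is guaranteed to lie within the stated bounds.
```

This sidesteps none of the analytical objections (moments, closed-form results, and existing library code for the plain normal no longer apply), but the sampling itself is trivial.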
Glad you found my post interesting. I found yours interesting as well, as I thought I was the only one who made any argument along those lines.
There's no explicit step where you say the speed of sound is infinite. That's just the net effect of how you model the pressure field. In reality, the pressure comes from thermodynamics at some level. In the low-Mach/incompressible model, the pressure only exists to enforce mass conservation, and in some sense is "junk" (though still compares favorably against exact solutions). Basically, you do some math to decouple the thermodynamic and "fluctuating" pressure (this is really the only change; the remainder are implications of the change). You end up with a Poisson equation for ("fluctuating") pressure, and this equation lacks the ability to take into account finite pressure/acoustic wave speeds. The wave speed is effectively infinite.
To be honest, I need to read papers like this to gain a fuller appreciation of all the implications of this approximation. But what I describe is accurate if lacking in some of the details.
In some ways, this does make things more complicated (pressure boundary conditions being one area). But in terms of speed, it's a huge benefit.
Here's another example from my field: thermal radiation modeling. If you use ray tracing (like 3D rendering) then it's often practical to assume that the speed of light is infinite, because it basically is relative to the other processes you are looking at. The "speed" of heat conduction, for example, is much slower. If you used a finite wave speed for the rays then things would be much slower.
That makes a lot of sense. I asked about explicit declaration versus implicit assumption because assumptions of this sort do exist in social models. They're just treated as unmodeled characteristics either of agents or of reality. We can make these assumptions because they either don't inform the phenomenon we're investigating (e.g. infinite ammunition can be implicitly assumed in an agent-based model of battlefield medic behavior because we're not interested in the draw-down or conclusion of the battle in the absence of a decisive victory) or the model's purpose is to investigate relationships within a plausible range (which sounds like your use case). That said, I'm very curious about the existence of models for which explicitly setting a boundary of infinity can reduce computational complexity. It seems like such a thing is either provably possible or (more likely) provably impossible. Know of anything like that?
I see your distinction now. That is a good classification.
To go back to the low-Mach/incompressible flow model, I have seen series expansions in terms of the Mach number applied to (subsets of) the fluid flow equations, and the low-Mach approximation is found by setting the Mach number to zero. (Ma = v / c, so if c, the speed of sound, approaches infinity, then Ma goes to 0.) So it seems that you can go the other direction to derive equations starting with the goal of modeling a low-Mach flow, but that's not typically what I see. There's no "Mach number dial" in the original equations, so you basically have to modify the equations in some way to see what changes as the Mach number goes to zero.
For this entire class of problems, even if there were a "Mach number dial", you wouldn't recover the nice mathematical features you want for speed by setting the Mach number to zero in a code that can handle high Mach physics. So, for fluid flow simulations, I don't think an explicit declaration of infinite sound speed reducing computational time is possible.
From the perspective of someone in a fluid-flow simulation (if such a thing is possible), however, I don't think the explicit-implicit classification matters. For all someone inside the simulation knows, the model (their "reality") explicitly uses an infinite acoustic wave speed. This person might falsely conclude that they don't live in a simulation because their speed of sound appears to be infinite.
Btrettel's example of ray tracing in thermal radiation is such a model. Another example from social science: basic economic and game theory often assume the agents are omniscient or nearly omniscient.
False: Assuming something is infinite (unbounded) is not the same as coercing it to a representation of infinity. Neither of those examples when represented in code would require a declaration that thing=infinity. That aside, game theory often assumes players have unbounded computational resources and a perfect understanding of the game, but never omniscience.
A better term is "logical omniscience".
Provably-secure computing is undervalued as a mechanism for guaranteeing Friendliness from an AI.
I'm not sure what you mean by provably-secure, care to elaborate?
It sounds like it might possibly be required and is certainly not sufficient.
Provably-secure computing is a means by which you have a one-to-one mapping between your code and a proof that, relative to a certain specification, the results will not give you bad outcomes. The standard method is to implement a very simple language and prove that it works as a formal verifier, use this language to write a more complex formal verifying language, possibly recurse that, then use the final verifying language to write programs that specify start conditions and guarantee that, given those conditions, outcomes will be confined to a specified set.
It seems to be taken for granted on LW and within MIRI that this does not provide much value because we cannot trust the proofs to describe the actual effects of the programs, and therefore it's discounted entirely as a useful technique. I think it would substantially reduce the difficulty of the problem which needs to be solved for a fairly minor cost.
I don't think it's true that it's generally considered not useful. One of MIRI's interviews was with a person engaged in provably-secure computing, and I didn't see any issues in that post. It's just that provably-secure computing is not enough when you don't have a good specification.
[Please read the OP before voting. Special voting rules apply.]
There is nothing morally wrong about eating meat, and vegetarianism/veganism aren't morally superior to meat-eating.
For most of the vegetarians I know, the issue isn't inherently eating meat. It's the way the animals are treated before they are killed.
Maybe you know a weird subset of vegetarians, but I don't think most would be fine with eating a dead animal that has been very well treated throughout its life.
That looks like a mainstream position, not contrarian.
It's contrarian among LWers, which is what the OP asked for.
Is that so? I know there are some vocal vegetarians on LW, I am not sure that makes them the local mainstream.
I think there are more LW members who eat meat and feel hypocritical/guilty about it than there are actual vegetarians.
Looking at the 2013 poll:
I can't speak to the feeling of guilt, but vegetarians are a small minority here.
At the time of the poll people had requested more granularity in the answers. I think a lot of folks leaned towards veggie ideas in the sense of reduced or substantially below-average meat consumption without actually being vegetarians.
As to how many a lot was, who knows. I think it's likely that vegetarianism is more acceptable here than in the population at large, and so disagreeing with it would be more contrarian.
Agree (mostly). (Not a vegetarian.) Would you prefer to eat a bacterially produced meat product? Assuming it could be made to taste the same...
If its price was less than or equal to the price of normal meat, I'd buy it, otherwise, I'd stick with normal meat.
I suspect it will end up being cheaper because it would be faster to produce than an entire life-cycle of an animal...
[Please read the OP before voting. Special voting rules apply.]
Superintelligence is an incoherent concept. Intelligence explosion isn't possible.
How smart does a mind have to be to qualify as a "superintelligence"? It's pretty clear that intelligence can go a lot higher than current levels.
What do you predict would happen if we uploaded Von Neumann's brain onto an extremely fast, planet-sized supercomputer? What do you predict would happen if we selectively bred humans for intelligence for a couple million years? "Impractical" would be understandable, but I don't see how you can believe superintelligence is "incoherent".
As for "Intelligence explosion isn't possible", that's a lot more reasonable, e.g. see the entire AI foom debate.
Possibly the concept of intelligence as something that can increase in a linear fashion is in itself incoherent.
Well, I will predict this
Very bored Von Neumann.
People that are very good at solving tests which you use to measure intelligence.
[Please read the OP before voting. Special voting rules apply.]
Somewhere between 1950 and 1970 too many people started studying physics, and now the community of physicists has entered a self-sustaining state where writing about other people's work is valued much, much more than forming ideas. Many modern theories (string theory, AdS/CFT correspondence, renormalisation of QFT) are hard to explain because they do not consist of an idea backed by a mathematical framework but solely of this mathematical framework.
Agree with the first half, disagree with the second
[Please read the OP before voting. Special voting rules apply.]
Fossil fuels will remain the dominant source of energy until we build something much smarter than ourselves. Efforts spent on alternative energy sources are enormously inefficient and mostly pointless.
Related claim: the average STEM-type person has no gut-level grasp of the quantity of energy consumed by the economy and this leads to popular utopian claims about alternative energy.
Is this a claim about the choices we will make, or about what is possible? If the former, I can buy it as an argument that states will not be rational enough to choose better options; if the latter, I think it's false.
It isn't very hard to do a little digging here. http://en.wikipedia.org/wiki/Electricity_generation#mediaviewer/File:Annual_electricity_net_generation_in_the_world.svg
China's aggressive nuclear strategy seems reasonable.
Not exactly sure what you mean by "digging." I already comprehend the quantities of energy being consumed because of my education and experience in related fields, it's the average person who I think does not, since I hear them saying things about how a small increase in solar panel efficiency is going to completely and rapidly "cure us of our fossil fuel addiction."
Also, your figure only reflects electricity generation, not total energy consumption, which is a much higher figure. Currently, non-hydrocarbon fuel sources for transportation are very fringe.
The truth is that the price of fossil fuels has always and will continue to fluctuate in accord with simple supply-demand economics for a long time to come; the cheaper it gets to make energy via alternative methods, the cheaper fossil fuels will become to undercut those alternative sources.
We have seen roughly a doubling in solar panel efficiency every 7 years. That's not what I would call a "small increase".
Even if solar panels were 100% efficient it would not change the overall picture very much. Solar panels are expensive and do not use space efficiently.
By efficiency I meant the amount you pay per kilowatt hour. It's a variable that has seen a consistent doubling every 7 years over the last two decades.
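The claimed trend can be sketched as a simple halving law (purely illustrative; the 7-year figure is the commenter's claim, not an established constant):

```python
def projected_cost_per_kwh(initial_cost, years, halving_period=7.0):
    """Cost per kWh after `years`, assuming cost halves every
    `halving_period` years."""
    return initial_cost * 0.5 ** (years / halving_period)

# Three halving periods (21 years) cut the cost by a factor of 8:
cost = projected_cost_per_kwh(1.0, 21)  # 0.125
```

Whether the historical trend actually extrapolates this way is, of course, the whole dispute.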
Space on top of most buildings is unused and there are huge deserts that aren't used.
Does the include the subsidies many governments have been providing to solar?
Subsidies per kilowatt hour didn't rise exponentially. I'm not sure to what extent they are factored out.
Solar is also not the only form of energy that gets subsidized. In Germany we used to pay billions per year in coal subsidies.
They started from zero, so it's technically super-exponential.
I looked through the numbers and the trend line. I updated in your direction. Even nuclear can't make a big dent without true mass production of reactors, which almost certainly will not happen.
I give it well over a 70 percent chance of happening, mostly because I am expecting coal and gas to get really unpleasantly expensive in the next two decades. The remaining 30 percent is mostly taken up by "technological surprise rendering all extant generation tech obsolete" - one of the small-scale fusion plants working out very well, for example.
The only reason they have been getting expensive at all is that governments have been over-regulating them.
If you don't regulate them, you don't pay directly but you pay in medical costs for conditions such as asthma. You also get lower children's IQ, which is worth something. According to the EPA's calculations, the children's IQ gains alone are worth more than the increased monetary cost that mercury regulation imposes on coal plants.
Ehrr... just no. Nuclear might be able to make that case, though mostly the problem there is sticking with overgrown submarine reactors (PWRs are an asinine choice for use on land), but coal and gas? Those are, if anything, underregulated due to excessive political clout. Fossil fuels will get more costly for straightforward reasons of supply and demand. The third world is industrializing, and the first world is going to use ever more electricity due to very predictable changes like the coming switch to all-electric motoring (which, again, will not be driven by government policy, but by better batteries making the combustion engine a strictly inferior technology for cars). Thus, worldwide electricity demand is going to go up. By a lot. That, in turn, is going to bid up sea-borne coal and liquefied natural gas to ridiculous heights, because there just isn't any way to increase the supply to match. Very shortly after that, resistance to more reactors is going to keel over and die (high electricity costs to industry being entirely unacceptable to people with lots of political clout and lots of media ownership), and suddenly mass production is going to be on the menu. Hopefully of more sensible designs: molten salt, molten lead, even sodium. Any design that doesn't require the power to be on for shut-down cooling to work, basically.
Unfortunately it is not quite this simple. The current oil price is on the order of $100 per barrel, but it never broke $40 per barrel prior to 1998. See figure. Also see this figure, which is in terms of inflation-adjusted dollars and shows another huge spike around 1980. The reason for these tremendous spikes in price isn't simple supply-demand - complex nonlinear political factors are almost certainly to blame, and price stickiness is partially why oil remains as expensive as it is. It doesn't cost even in the ballpark of $100 per barrel to get oil out of the ground, and it won't for a very very long time. The upshot is that the price of oil will continue to beat out other sources of energy by just enough to keep those sources of energy at a marginal level of profitability, because oil (and other fossil fuels) can remain profitable at much lower prices.
I would also point out that the scenario you have just described is highly complex and conjunctive, while "oil continues to do what it has been doing" is an intrinsically simple hypothesis.
Price is set on the margins. The marginal barrels of oil coming out of the ground are certainly in the ballpark of $100, from various shale and tight deposits.
The oil prices do not play by the economics textbook rules because most of the world's oil production is controlled by governments and governments have a variety of interests and incentives beyond what a profit-maximizing purely economic agent might have.
I assure you that this is not true, unless I misunderstand you.
edit:
The Finding and Development cost of a typical worthwhile shale play is $1.50/Mcf (many are even better), the current natural gas price is $3.50/Mcf. Of course there are crappy fields with higher F&D cost, and these won't be drilled until prices are high enough to justify it. In effect there is a continuum of price/barrel out there in the world and this is not what controls present day prices.
Eh, I'll stand by my reasoning, but I agree other people might not assign as high probabilities to each step in the chain as I do, so here is a much simpler causative chain that is going to lead to the same place.
China isn't going to keep sacrificing tens of thousands of its people to the demon smog every year. And once the Chinese are knocking out reactors at a high pace, the rest of the world will follow.
And a simple solution to this is just to copy the current-day US which does not use a lot of nuclear power and also does not sacrifice many people to the demon smog.
[Please read the OP before voting. Special voting rules apply.]
Feminism is a good thing. Privilege is real. Scott Alexander is extremely uncharitable towards feminism over at SSC.
According to the 2013 LW survey, when asked their opinion of feminism on a scale from 1 (low) to 5 (high), the mean response was 3.8, and social justice got a 3.6. So it seems that "feminism is a good thing" is actually not a contrarian view.
If I might speculate for a moment, it might be that LW is less feminist than most places, while still having an overall pro-feminist bias.
If by most places you're talking about the world (or Western/American world) in general, that's pretty clearly false. The considerable majority of Americans reject the feminist label, for example. If you're talking about internet communities with well-educated members, then it probably is true.
Like a few others, I agree with the first two but emphatically disagree with the last. And if you were right about it, I'd expect Ozy to have taken Scott to task about it, and him to have admitted to being somewhat wrong and updated on it.
EDIT: This has, in fact, happened.
See this tumblr post for an example of Ozy expressing dissatisfaction with Scott's lack of charity in his analysis of SJ (specifically in the "Words, Words, Words" post). My impression is that this is a fairly regular occurrence.
You might be right about him not having updated. If anything it seems that his updates on the earlier superweapons discussion have been reverted. I'm not sure I've seen anything comparably charitable from him on the subject since. I don't follow his thoughts on feminism particularly closely, so I could easily be wrong (and would be glad to find I'm wrong here).
Imo this quote from her response is a pretty weak argument:
"The concept of female privilege is, AFAICT, looking at the disadvantages gender-non-conforming men face, noticing that women with similar traits don’t face those disadvantages, and concluding that this is because women are advantaged in society. "
In order for this to be a sensible counterpoint you would need to either say "gender conforming male privilege" or you would need to show that there are few men who mind conforming to gender roles. I don't really see why anyone believes most men are fine with living out standard gender norms and I certainly don't see how anyone has evidence for this.
If a high percentage of men are gender non-conforming, and such men are at a disadvantage in society, then the concept of male privilege is seriously weakened. And using it is dangerous, as it might harm those men to hear that they are "privileged" when this is not the case (at least in terms of gender; maybe they are rich, etc.).
OK, those things have indeed happened, to some degree. Above comment corrected.
I still don't understand what is uncharitable about the Wordsx3 post specifically. It accurately describes the behavior of a number of people I know (as in, have met, in person, and interacted with socially, in several cases extensively in a friendly manner), and I have no reason to consider them weak examples of feminist advocacy and every reason to consider them typical (their demographics match the stereotype). I have carefully avoided catching the receiving end of it, because friends of mine have honestly challenged aspects of this kind of thing and been ostracized for their trouble.
There's something wrong with the first link (I guess you typed the URL on a smartphone autocorrecting keyboard or similar).
EDIT: I think this is the correct link.
Yeah, that happened when I edited a different part from my phone. Thanks, fixed.
Do you mind telling me how you think he's being uncharitable? I agree mostly with your first two statements. (If you don't want to put it on this public forum because hot debated topic etc I'd appreciate it if you could PM; I won't take you down the 'let's argue feminism' rabbit-hole.)
(I've always wondered if there was a way to rebut him, but I don't know enough of the relevant sciences to try and construct an argument myself, except in syllogistic form. And even then, it seems his statements on feminists are correct.)
For a very quick example, see this Tumblr post. Mr. Alexander finds an example of a neoreactionary leader trying to be mean to a transgender woman inside the NRx sphere, and then shows the vast majority response of (non-vile) neoreactionaries to at least be less exclusionary than that, even though they have ideological issues with the diagnosis or treatment of gender dysphoria. Then he describes a feminist tumblr which develops increasingly misgendering and rude ways to describe disagreeing transgender men.
I don't know that this is actually /wrong/. All the actual facts are true, and if anything understate their relevant aspects -- if anything, I expect Ozy's understated the level of anti-transmale bigotry floating around the 'enlightened' side of Tumblr. I don't find NRx very persuasive, but there are certainly worse things that could be done than using it as a blunt "you must behave at least this well to ride" test. I don't know that feminism really needs external heroes: it's certainly a large enough group that it should be able to present internal speakers with strong and well-grounded beliefs. And I can certainly empathize with holding feminists to a higher standard than neoreactionaries hold themselves.
The problem is that it's not very charitable. Scott's the person that's /come up/ with the term "Lizardman's Constant" to describe how a certain percentage of any population will give terrible answers to really obvious questions. He's a strong advocate of steelmanning opposing viewpoints, and he's written an article about the dangers of only looking at the .
But he's looking at a viewpoint shown primarily in the <5% margin feminist tumblr, and comparing them to a circle of the more polite neoreactionaries (damning with faint praise as that might be, still significant), and, uh, I'm not sure that we should be surprised if the worst of the best said meaner things than the best of the worst.
I'm not sure he /needs/ to be charitable, again -- feminism should have its own internal speakers, I think mainstream modern feminism could use better critics than whoever's on Fox News next, so on -- but it's an understandable criticism.
((Upvoting the thread starter, but more because one and two are mu statements; either closed questions or not meaningful. Weakly agree on third.))
Being 5% of the group doesn't mean they are 5% of the influence. The loudest 5% may get to set the agenda of the remaining 95% if the remaining ones are willing to go along with things they don't particularly care about, but don't oppose enough to make these things deal-breakers either.
See also: http://www.smbc-comics.com/?id=2939
It also helps if the 5% have arguments for their positions.
Fortunately, LW is not an appropriate forum for argument on this subject, but for an example of an uncharitable post, see Social Justice and Words, Words, Words.
How would you define "privilege"?
This is a good definition. In particular, "Anti-oppressionists use "privilege" to describe a set of advantages (or lack of disadvantages) enjoyed by a majority group, who are usually unaware of the privilege they possess. ... A privileged person is not necessarily prejudiced (sexist, racist, etc) as an individual, but may be part of a broader pattern of *-ism even though unaware of it."
No, this is not a motte.
Why focus only on specific majority groups, and thereby ignore things like men in domestic violence situations getting a lot less help from society than women?
Nearly everyone has some advantages and disadvantages. It's often not helpful to conflate that huge bag of advantages and disadvantages into a single variable.