If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?
Note that in the late 19th century, many leading intellectuals followed a scientific/rationalist/atheist/utopian philosophy, socialism, which later turned out to be a horrible way to arrange society. See my article on this. (And it's not good enough to say that we're really rational, scientific, altruist, utilitarian, etc, in contrast to those people -- they thought the same.)
So, how might we find that all these ideas are massively wrong?
Well, why do you think socialism is so horribly wrong? During the 20th century socialists more or less won and got what they wanted. Things like social security, governmental control over business, and redistribution of wealth in general are all socialist. All of this may be bad from some point of view, but that view is in no way the mainstream opinion.
Then, those people whom you mention in your article called themselves communists and Marxists. At most, they considered socialism an intermediate stage on the way to communism. And communism went bad because it was founded on wrong assumptions about how both the economy and human psychology work. So, which MIRI/LessWrong assumptions could be wrong and cause a lot of harm? Well, here are some examples.
1) Building FAI is possible, and there is a reliable way to tell if it is truly FAI before launching it. Result if wrong: paperclips.
2) Building FAI is much more difficult than AI. Launching a random AI is civilization-level suicide. Result if this idea becomes widespread: we don't launch any AI before civilization runs out of resources or collapses for some other reason.
3) Consciousness is a sort of optional feature; intelligence can work just as well without i...
Firstly, resources are spent on freezing people, keeping them frozen and researching how to improve cryonics. There may be fringe benefits to this (for example, researching how to freeze people more efficiently might lead to improvements in cold chains, which would be pretty snazzy). There would certainly be real resource wastage.
How does this connect with the funding process of cryonics? When someone signs up and buys life insurance, they forgo consuming the premiums during their lifetime and in effect invest them in the wider economy via the insurance company's holdings in bonds and the like. When they die and the insurance is cashed in for cryonics, some of it is used on the procedure itself, but a lot goes into the trust fund, where again it is invested in the wider economy. The trust fund uses the return for expenses like liquid nitrogen, but it is supposed to spend only part of the return (so the endowment builds up and there is protection against disasters), and in any case society's gain from the extra investment should exceed the fund's return (why would anyone offer the fund investments on which they would take a loss and overpay the fund?). And this gain ought to compound over the long run.
So it seems to me that the main effect of cryonics on the economy is to increase long-term growth.
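To make the compounding claim concrete, here is a minimal sketch with entirely made-up numbers (the payout size, return rate, and spending fraction are my assumptions, not figures from any actual trust fund):

```python
# Rough illustration of the compounding argument above; every number here is assumed.
principal = 80_000        # hypothetical insurance payout placed in the trust fund
gross_return = 0.05       # hypothetical annual return on the fund's investments
spend_fraction = 0.6      # fund spends only part of the return (liquid nitrogen, upkeep)

fund = principal
for year in range(50):
    earnings = fund * gross_return
    fund += earnings * (1 - spend_fraction)   # the unspent remainder compounds

print(round(fund))        # endowment after 50 years; it grows even while paying expenses
```

On these assumptions the endowment keeps growing while covering expenses, and the principal sits invested in the wider economy the whole time, which is the mechanism the comment above is pointing at.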
I think the whole MIRI/LessWrong memeplex is not massively confused.
But conditional on it turning out to be very very wrong, here is my answer:
A. MIRI
The future does indeed take radical new directions, but these directions are nothing remotely like the hard-takeoff de-novo-AI intelligence explosion which MIRI now treats as the max-prob scenario. Any sci-fi fan can imagine lots of weird futures, and maybe some other one will actually emerge.
MIRI's AI work turns out to trigger a massive negative outcome -- either the UFAI explosion they are trying to avoid, or something else almost as bad. This may result from fundamental mistakes in understanding, or because of some minor bug.
It turns out that the UFAI explosion really is the risk, but that MIRI's AI work is just the wrong direction; e.g., it turns out that building a community of AIs in rough power balance, or experimenting by trial and error with nascent AGIs, is the right solution.
B. CfAR
It turns out that the whole CfAR methodology is far inferior in instrumental outcomes to, say, Mormonism. Of course, CfAR would say that if another approach is instrumentally better, they would adopt it. But if they only f
If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?
A few that come to mind:
It could be that it's just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.
Otherwise, the LessWrong memeplex has the advantage of being very diverse. When it comes to a subject like politics we have people with mainstream views, but we also have people who think that democracy is wrong. Such a diversity of ideas makes it difficult for all of LessWrong to be wrong.
Some people paint a picture of LessWrong as a crowd of people who believe that everyone should sign up for cryonics. In reality, most of the participants aren't signed up for cryonics.
Take a figure like Nassim Taleb. He's frequently quoted on LessWrong so he's not really outside the LessWrong memeplex. But he's also a Christian.
There are a lot of memes floating around in the LessWrong memeplex that are present at a basic level but that most people don't take to their full conclusion.
So, how might we find that all these ideas are massively wrong?
It's a topic that's very difficult to talk about. Basically you try out different ideas and look at the effects of those ideas in the real world. Mainly because of QS data I delved into the system of Somato-Psychoeducation. The data I measure...
It could be that it's just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.
That's not the LW-memeplex being wrong; that's just an LW-meme which is slightly more pessimistic than the more customary "the vast majority of all AIs are unfriendly, but we might be able to make this work" view. I don't think any high-profile LWers who believed this would be absolutely shocked to find out that it was too optimistic.
MIRI-LW being plausibly wrong about AI friendliness is more like: "Actually, all the fears about unfriendly AI were completely overblown. Self-improving AIs don't actually 'FOOM' dramatically ... they simply get smarter at the same exponential rate that the rest of the humans-plus-tech system has been getting smarter all this time. There isn't much practical danger of them rapidly outracing the rest of the system, seizing power, and turning us all into paperclips, or anything like that."
If that sort of thing were true, it would imply that a lot of prominent rationalists have been wasting time (or at least, doing things which end up being useful for reasons entirely different from the reasons they were supposed to be useful for).
Do you want to have a career at a conservative institution such as a bank, or a career in politics? If so, it's probably a bad idea to create too much attack surface by using your real name.
Do you want to make as many connections with other people as possible? If so, using your real name helps. It increases the attention that other people pay to you. If you are smart and write insightful stuff, that can mean job offers and speaking gigs.
People you meet in real life might already know you from online commentary of yours that they have read, so you don't have to start by introducing yourself.
It's really a question of whether you think strangers are more likely to hurt or help you.
Do you want to make as many connections with other people as possible? If so, using your real name helps. It increases the attention that other people pay to you. If you are smart and write insightful stuff, that can mean job offers and speaking gigs.
I think the best long-term strategy would be to invent a different name and use that name consistently, even in real life, with everyone except the government. Of course your family and some close friends would know your real name, but you would tell them that you prefer to be called by the other name, especially in public.
So, you have one identity, you make it famous, and everyone knows you. Only when you want to be anonymous do you use your real name. And the advantage is that you have papers for it, so your employer will likely not notice. You just have to be careful never to use your real name together with your fake name.
Unless your first name is unusual, you can probably re-use your first name, which is what most people will call you anyway; so if you meet people who know your true name and people who know your fake name at the same time, the fact that you use two names will not be exposed.
Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?
ETA: Potentially less contentious rephrase: why isn't making a life as important as saving a life?
Whether this is so or not depends on whether you are assuming hedonistic or preference utilitarianism. For a hedonistic utilitarian, contraception is, in a sense, tantamount to murder, except that as a matter of fact murder causes much more suffering than contraception does: to the person who dies, to his or her loved ones, and to society at large (by increasing fear). By contrast, preference utilitarians can also appeal to the preferences of the individual who is killed: whereas murder causes the frustration of an existing preference, contraception doesn't, since nonexisting entities can't have preferences.
The question also turns on issues about population ethics. The previous paragraph assumes the "total view": that people who do not exist but could or will exist matter morally, and just as much. But some people reject this view. For these people, even hedonistic utilitarians can condemn murder more harshly than contraception, wholly apart from the indirect effects of murder on individuals and society. The pleasure not experienced by the person who fails to be conceived doesn't count, or counts less than the pleasure that the victim of murder is deprived of, since the latter exists but the former doesn't.
For further discussion, see Peter Singer's Practical Ethics, chap. 4 ("What's wrong with killing?").
Making a person and unmaking a person seem like utilitarian inverses
Doesn't seem that way at all to me. A person who already exists has friends, family, social commitments, etc. Killing that person would usually affect all of these things negatively, often to a pretty huge extent. Using contraception maybe creates some amount of disutility in certain cases (for staunch Catholics, for instance), but not nearly to the degree that killing someone does. If you're only focusing on the utility for the person made or unmade, then maybe (although see blacktrance's comment on that), but as a utilitarian you have no license for doing that.
Ah, in that specific sort of situation, I imagine hedonic (as opposed to preference) utilitarians would say that yes, Eve has done a good thing.
If you're asking me, I'd say no, but I'm not a utilitarian, partly because utilitarianism answers "yes" to questions similar to this one.
Once you've killed them and they've become nonexistent, then they don't have preferences either.
How much does a genius cost? MIRI seems intent on hiring a team of geniuses, and I'm curious what the payroll would look like. One of the conditions of Thiel's donations was that no one employed by MIRI can make more than one hundred thousand dollars a year. Is this high enough? One of the reasons I ask is that I just read a story about how Google pays an extremely talented programmer over 3 million dollars per year; doesn't MIRI also need extremely talented programmers? Do they expect the most talented to be more likely to accept a lower salary for a good cause?
Suppose someone has a preference to have sex each evening, and is in a relationship with someone with a similar level of sexual desire. So each evening they get into bed, undress, make love, get dressed again, and get out of bed. Repeat the next evening.
How is this different from having exploitable circular preferences? After all, the people involved clearly have cycles in their preferences - first they prefer getting undressed to not having sex, after which they prefer getting dressed to having (more) sex. And they're "clearly" being the victims of ...
The circular preferences that go against the axioms of utility theory, and which are Dutch book exploitable, are not of the kind "I prefer A to B at time t1 and B to A at time t2", like the ones of your example. They are more like "I prefer A to B and B to C and C to A, all at the same time".
The couple, if they had to pay a third party a cent to get undressed and then a cent to get dressed, would probably do it and consider it worth it---they end up two cents short but having had an enjoyable experience. Nothing irrational about that. To someone with the other "bad" kind of circular preferences, we can offer a sequence of trades (first A for B and a cent, then C for A and a cent, then B for C and a cent) after which they end up three cents short but otherwise exactly as they started (they didn't actually obtain enjoyable experiences, they made all the trades before anything happened). It is difficult to consider this rational.
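Here is a minimal sketch of that money pump, using the trade sequence from the comment above (the one-cent fee and the letter names for the goods are just illustrative):

```python
# Minimal money-pump sketch for strictly cyclic preferences: A > B, B > C, C > A.
# The agent will pay one cent to swap what it holds for anything it strictly prefers.

PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # (better, worse) pairs, all held at once

def accepts(offered, held):
    """The agent trades `held` (plus a cent) for `offered` iff it prefers `offered`."""
    return (offered, held) in PREFERS

holding, cents = "B", 0
for offered in ["A", "C", "B"]:          # the sequence of trades from the comment
    if accepts(offered, holding):
        holding, cents = offered, cents - 1

print(holding, cents)                    # -> B -3 : same good as before, three cents poorer
```

The agent ends up holding exactly what it started with, three cents poorer, with nothing enjoyable having happened in between, which is what makes this kind of cycle exploitable and the couple's time-indexed preferences not.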
On the Neil Degrasse Tyson Q&A on reddit, someone asked: "Since time slows relative to the speed of light, does this mean that photons are essentially not moving through time at all?"
Tyson responded: "yes. Precisely. Which means ----- are you seated? Photons have no ticking time at all, which means, as far as they are concerned, they are absorbed the instant they are emitted, even if the distance traveled is across the universe itself."
Is this true? I find it confusing. Does this mean that a photon emitted at location A at t0 is a...
There are no photons. There, you see? Problem solved.
(no, the author of the article is not a crank; he's a Nobel physicist, and everything he says about the laws of physics is mainstream)
Other people have explained this pretty well already, but here's a non-rigorous heuristic that might help. What follows is not technically precise, but I think it captures an important and helpful intuition.
In relativity, space and time are replaced by a single four-dimensional space-time. Instead of thinking of things moving through space and moving through time separately, think of them as moving through space-time. And it turns out that every single (non-accelerated) object travels through space-time at the exact same rate, call it c.
Now, when you construct a frame of reference, you're essentially separating out space and time artificially. Consequently, you're also separating an object's motion through space-time into motion through space and motion through time. Since every object moves through space-time at the same rate, when we separate out spatial and temporal motion, the faster the object travels through space the slower it will be traveling through time. The total speed, adding up speed through space and speed through time, has to equal the constant c.
So an object at rest in a particular frame of reference has all its motion along the temporal axis, and no motion at all ...
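For anyone who wants the heuristic above in symbols: in standard special relativity the "speed through space-time" is the magnitude of the four-velocity, which works out to the constant c for every massive object (the "adding up" is really a Minkowski combination with a minus sign, which is why this is only a heuristic):

```latex
u^{\mu} = \left(\gamma c,\ \gamma \vec{v}\right),
\qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\qquad
u^{\mu}u_{\mu} = \gamma^{2}c^{2} - \gamma^{2}v^{2}
             = \frac{c^{2}-v^{2}}{1 - v^{2}/c^{2}}
             = c^{2}.
```

For light itself the space-time interval along the ray is zero (c²dt² − dx² = 0), so no proper time elapses, which is the precise sense in which "no time passes" for a photon.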
What are your best arguments against the reality/validity/usefulness of IQ?
Improbable or unorthodox claims are welcome; appeals that would limit testing or research even if IQ's validity is established are not.
Doesn't cryonics (and subsequent rebooting of a person) seem obviously too difficult? People can't keep cars running indefinitely; wouldn't keeping a particular consciousness running be much harder?
I hinted at this in another discussion and got downvoted, but it seems obvious to me that the brain is the most complex machine around, so wouldn't it be tough to fix? Or does it all hinge on the "foom" idea where every problem is essentially trivial?
What motivates rationalists to have children? How much rational decision making is involved?
ETA: removed the unnecessary emotional anchor.
ETA2: I'm not asking this out of Spockness, I think I have a pretty good map of normal human drives. I'm asking because I want to know if people have actually looked into the benefits, costs and risks involved, and done explicit reasoning on the subject.
I wouldn't dream of speaking for rationalists generally, but in order to provide a data point I'll answer for myself. I have one child; my wife and I were ~35 years old when we decided to have one. I am by any reasonable definition a rationalist; my wife is intelligent and quite rational but not in any very strong sense a rationalist. Introspection is unreliable but is all I have. I think my motivations were something like the following.
Having children as a terminal value, presumably programmed in by Azathoth and the culture I'm immersed in. This shows up subjectively as a few different things: liking the idea of a dependent small person to love, wanting one's family line to continue, etc.
Having children as a terminal value for other people I care about (notably spouse and parents).
I think it's best for the fertility rate to be close to the replacement rate (i.e., about 2 in a prosperous modern society with low infant mortality), and I think I've got pretty good genes; the overall fertility rate in the country I'm in is a little below replacement, and while it's fairly densely populated I don't think it's pathologically so, so for me to have at least one child and probably
Why hasn't anyone ever come back from the future and stopped us all from suffering, making it so we never experience horrible things? Does that mean we never learn time travel, or at least time travel plus a way to make the original tough experiences be un-experienced?
When non-utilitarian rationalists consider big life changes, it seems to me that they don't decide based on how happy the change will make them. Why?
Utilitarians could say they are trying to maximize the World's something.
But non-utilitarians, like I used to be, and like most here still are, are just... doing it like everyone else does it! "Oh, that seems like a cool change, I'll do it! Yay!" Then two weeks later that particular thing has none of the coolness effect it had before, but they are stuck with the decision for years....... (in case of dec...
What amount of disutility does creating a new person generate in Negative Preference Utilitarian ethics?
I need to elaborate in order to explain exactly what question I am asking: I've been studying various forms of ethics, and when I was studying Negative Preference Utilitarianism (or anti-natalism, as I believe it's often also called) I came across what seems like a huge, titanic flaw that seems to destroy the entire system.
The flaw is this: The goal of negative preference utilitarianism is to prevent the existence of unsatisfied preferences. This means...
What does changing a core belief feel like? If I have a crisis of faith, how will I know?
I would particularly like to hear from people who have experienced this but never deconverted. Not only have I never been religious, no one in my immediate family is, none of the extended family I am close with is, and while I have friends who believe in religion I don't think I have any who believe their faith. So I have no real point of comparison.
I have tremendous trouble with hangnails. My cuticles start peeling a little bit, usually near the center of the base of my nail, and then either I remove the peeled piece (by pulling or clipping) or it starts getting bigger and I have to cut it off anyway. That leaves a small hole in my cuticle, the edges of which start to wear away and peel more, which makes me cut away more. This goes on until my fingertips are a big mess, often involving bleeding and bandages. What should I do with my damaged cuticles, and how do I stop this cycle from starting in the first place?
Computers work by performing a sequence of computations, one at a time: parallelization can cut down the time for repetitive tasks such as linear algebra, but hits diminishing returns very quickly. This is very different from the way the brain works; the brain is highly parallel. Is there any reason to think that our current techniques for making algorithms are powerful enough to produce "intelligence", whatever that means?
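On the diminishing-returns point: one standard way to quantify it (my addition, not the commenter's) is Amdahl's law, where even a small serial fraction of the work caps the achievable speedup:

```python
# Amdahl's law: with a fraction p of the work parallelizable across n processors,
# speedup = 1 / ((1 - p) + p / n). Returns diminish quickly once the serial
# fraction dominates. (p here is an assumed figure for illustration.)

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95   # even a highly parallel workload, e.g. big linear algebra
for n in (1, 4, 16, 64, 1024):
    print(n, round(amdahl_speedup(p, n), 1))
# 1 -> 1.0, 4 -> 3.5, 16 -> 9.1, 64 -> 15.4, 1024 -> 19.6 (capped at 20 as n grows)
```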
Society, through selection in the survival-of-the-fittest sense, pushes people to be of service: to be interesting, useful, effective, and even altruistic.
I suspect, and would like your opinion on this, that for that social and traditional reason we are biased against a life of personal hedonic exploration, even if for some particular kinds of minds that means, literally, reading internet comics, downloading movies and multiplayer games for free, exercising near your home, having a minimal number of friends and relationships, masturbating frequently, and eating unhealthily for as long as the cash lasts.
So, two questions: do you think we are biased against these things, and do you think doing this is a problem?
When will the experience machine be developed?
Average utilitarianism seems more plausible than total utilitarianism, as it avoids the repugnant conclusion. But what do average utilitarians have to say about animal welfare? Suppose a chicken's maximum capacity for pleasure/preference satisfaction is lower than a human's. Does this mean that creating maximally happy chickens could be less moral than creating non-maximally happy humans?
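A toy calculation with assumed numbers shows how a simple average view can answer yes to that question:

```latex
% Assumed numbers: 100 existing humans with total welfare 600 (average 6);
% the chicken's maximum welfare is 3; a merely content human sits at welfare 5.
\bar{w}_{\text{before}} = \tfrac{600}{100} = 6, \qquad
\bar{w}_{\text{+ maximally happy chicken}} = \tfrac{600+3}{101} \approx 5.97, \qquad
\bar{w}_{\text{+ non-maximally happy human}} = \tfrac{600+5}{101} \approx 5.99.
```

On these numbers the maximally happy chicken drags the average down more than the merely content human does, which is exactly the worry in the question.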
Don't raw utilitarians mind being killed by somebody who thinks they suffer too much?
_Stupid question: Wouldn't a calorie restriction diet allow Eliezer to lose weight?_
Not a single person who's done calorie restriction consistently for a long period of time is overweight. Hence, it seems that the problem of losing weight is straightforward: just eat fewer calories than you normally would.
I posted a version of this argument on Eliezer's Facebook wall and the response, which several people 'liked', was that there is a selection effect involved. But I don't understand this response, since "calorie restriction" is defined as restri...
Let's assume the following extremely simplified equation is true:
CALORIES_IN = WORK + FAT
Usually the conclusion is "fewer calories = less fat". But it could also be "fewer calories = less work". Not just in the sense that you consciously decide to work less, but also in the sense that your body can make you unable to work. Which means: you are extremely tired, unable to focus, and in the worst case you fall into a coma.
The problem with calorie restriction is that it doesn't come with a switch for "please don't make me tired or comatose, just reduce my fat". -- Finding that switch is the whole problem.
If your metabolic switch is broken, calorie restriction can simply send you into a zombie mode, and your weight remains the same.
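Here is a toy version of the equation above, with made-up numbers (this is an illustration of the argument, not a physiological model):

```python
# Toy version of CALORIES_IN = WORK + FAT from the comment above.
# Two hypothetical bodies: one keeps WORK fixed, one adapts WORK downward
# when intake drops ("zombie mode"). All numbers are made up for illustration.

def fat_change(calories_in, work):
    return calories_in - work          # whatever isn't spent is stored (or drawn from fat)

baseline_work = 2500
restricted_in = 2000

# Body 1: expenditure stays put, so the restriction comes out of fat.
print(fat_change(restricted_in, baseline_work))   # -500: fat is lost

# Body 2: expenditure adapts to match intake (tiredness, less activity, etc.).
print(fat_change(restricted_in, restricted_in))   # 0: weight stays the same
```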
I’m curious, but despite a lot of time poking around Wikipedia, I don’t have the means to discriminate between the possibilities. Please help me understand. Is there reason to believe that an infinite quantity of the conditions required for life is/was/will be available in any universe or combination of universes?
If the time an AGI needs to learn something is t, is it correct to assume that the time an FAI would need is t + x, with x > 0, given the necessary additional constraints?
If this is the case, then a non-Friendly AI would eventually (possibly quite quickly) become smarter than any FAI we build. Are there upper limits on intelligence, or would there be diminishing returns as intelligence grows?
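A toy model with assumed numbers of why any constant per-step handicap compounds into a large capability gap:

```python
# Toy model of the gap described above. Every number is assumed for illustration:
# each "learning step" multiplies capability by the same factor, but the constrained
# FAI needs t + x units of time per step instead of t.
t, x = 1.0, 0.25                 # hypothetical time per step: AGI vs. FAI
step_gain = 1.10                 # hypothetical capability multiplier per step

horizon = 100.0                  # total elapsed time
agi_capability = step_gain ** (horizon / t)
fai_capability = step_gain ** (horizon / (t + x))

print(round(agi_capability / fai_capability, 1))   # ~6.7x gap, and it keeps growing
```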
Haven't had one of these for a while. This thread is for questions or comments that you've felt silly about not knowing/understanding. Let's try to exchange info that seems obvious, knowing that due to the illusion of transparency it really isn't so obvious!