You have failed to answer my question. Why does anything at all matter? Why does anything care about anything at all? Why don't I want my dog to die? Obviously, when I'm actually dead, I won't want anything at all. But there is no reason I cannot have preferences now regarding events that will occur after I am dead. And I do.
Where I live, people sometimes organize "markets" where they bring stuff that is potentially useful but that they have no use for. Everyone brings whatever they want, and everyone takes whatever they want (first come, first served). Sometimes there is a specific topic, e.g. "clothes" or "stuff for kids"; sometimes there is no topic.
In theory, I would expect such a place to attract e.g. all the homeless people around, which could make it quite unpleasant for other participants. But in practice this doesn't happen, probably because these events are usually organized online or through personal connections, so it's mostly middle-class people who come, and many of them bring more than they take. Usually people take home whatever they brought that nobody else wanted; but sometimes there is an explicit rule (e.g. with the clothes) that at the end, all the untaken stuff will be collected by the organizers and donated to some charity (so it will "trickle down" towards poorer people until someone takes it).
So, if this is important to you, I recommend first doing some research (online, asking your neighbors), and if you can't find such an event, maybe you can organize one. Find a few people to help you, rent a room with some tables (in the best case, some organization sympathetic to your goals will lend you the room for free), and send invitations on Facebook. Call it a "no-money market" or "neighbors' exchange" or whatever. The first time you organize it, make sure you have at least five people who don't know each other and want to get rid of some potentially useful stuff.
Native Americans were "neutralized" mostly as a side effect of the diseases brought by colonists, and were then outcompeted by economically more successful cultures. Instead of a strategic effort to prevent WW1 and WW2 from happening on another continent, settlers from the different European nations actually had their own "violent clash over resources" with each other.
The reasoning may seem sound, but it doesn't correspond to historical facts.
I thought about it more after posting, and concluded the following:
Most likely the energy would be released below the Sun's photosphere, since the photosphere's density is very low, roughly 1/6000 that of air. This would prevent an immediately visible flash.
The resulting hot gas would eventually flow upward, but it would be cooler and its energy less concentrated. Even if it takes several minutes to surface, it could still produce burns on Earth.
Also, something like a large solar flash could happen because of the interaction of the hot gas from the comet with the Sun's magnetic field; hypothetically this would result in a superflare with a strong solar wind and magnetic effects on Earth.
The temperature during impact would be around 5 million K at the edge of the comet, as I calculated, which is not enough for any meaningful nuclear reactions. But this doesn't include any additional heating from rising pressure, and pressure would rise as the comet compresses while decelerating in the solar medium.
If such reactions did happen, they could add more energy to the explosion and also produce some radioactive isotopes, which could later become part of the solar wind and fall out on Earth. I saw an article a long time ago about the possibility of nuclear reactions during impacts; I will try to find it.
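As a sanity check on the temperature estimate above, here is a minimal back-of-envelope sketch (my own illustration; the assumptions — impact at solar escape velocity, full thermalization of the kinetic energy, a water-ice comet — are mine, not the author's). It gives an upper bound of tens to hundreds of millions of K depending on how many particles share the energy; the real shock loses energy to dissociation, ionization, radiation, and the surrounding gas, so a figure of order 5 million K at the comet's edge is plausible.

```python
# Rough shock-temperature bound for a comet hitting the Sun.
# Assumption (mine): each particle's kinetic energy (1/2) m v^2
# fully thermalizes into (3/2) k_B T. Real shocks lose energy to
# dissociation, ionization, and radiation, so these are upper bounds.
K_B = 1.380649e-23      # Boltzmann constant, J/K
AMU = 1.66054e-27       # atomic mass unit, kg
V_IMPACT = 617.5e3      # solar escape velocity at the photosphere, m/s

def shock_temperature(mean_particle_mass_amu):
    """Temperature if each particle's kinetic energy thermalizes."""
    m = mean_particle_mass_amu * AMU
    return m * V_IMPACT**2 / (3 * K_B)

# Intact H2O molecules (18 amu) vs. a fully ionized water plasma
# (3 nuclei + 10 electrons, so roughly 18/13 amu per particle).
print(f"molecular H2O: {shock_temperature(18.0):.1e} K")     # ~2.8e8 K
print(f"ionized H2O:   {shock_temperature(18.0/13):.1e} K")  # ~2.1e7 K
```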
Insert peg A into slot B. Pleasure should ensue for both parties. Follow emergent heuristics.
If pleasure is not evoked or in case of mismatching heuristics, try to vary peg and/or slot and/or frequency/speed/depth of insertion.
In case of further problems please call your local support.
Naturally, if I were mistaken, it would be appropriate to concede that I was mistaken. However, this was not about being mistaken. The point is that in arguments the truth is rarely all on one side; there is usually some truth in both. And in this case, in the way that matters, namely the way I was calling important, it is not possible to accidentally wipe out alien civilizations. But in another, unimportant way, it would be possible in the scenario under consideration (a scenario which is also very unlikely in the first place).
In particular, when someone fears something happening "accidentally", they mean to imply that it would be bad if that happened. But if you accidentally fulfill your true values, there is nothing bad about that, nor is it something to be feared, just as you do not fear accidentally winning the lottery. Especially since you would have done it anyway, if you had known it was contained in your true values.
In any case I do not concede that it is contained in people's true values, nor that there will be such an AI. But even apart from that, the important point is that it is not possible to accidentally wipe out alien civilizations, if that would be a bad thing.
Because you wrote one sentence without actually giving the argument. So I went with my prior on your argument.
That's what I'm suggesting you not do.
Writing out arguments, and in general, making one's thought processes transparent, is a lot of work. We benefit greatly by not having a norm of only stating conclusions that are a small inferential distance away from public knowledge.
I'm not saying you should (necessarily) believe what I say, just because I say it. You just shouldn't jump to the conclusion that I don't have justifications beyond what I have stated or am willing to bother stating.
Cf. Jonah's remark:
If I were to restrict myself to making claims that I could substantiate in a mere ~2 hours, that would preclude the possibility of me sharing the vast majority of what I know.
You make the decision to send the resources necessary to transform a galaxy without knowing much about the galaxy. The only things you know are based on the radiation that you can pick up many light years away.
Once you have sent your vehicle to the galaxy, it could of course decide to do nothing or fly into the sun, but that would be a waste of resources.
I think we can all agree that an entity's anticipated future experiences matter to that entity. I hope (but would be interested to learn otherwise) that imaginary events such as fiction don't matter. In between, there is a hugely wide range of how much it's worth caring about distant events.
I'd argue that outside your light-cone is pretty close to imaginary in terms of care level. I'd also argue that events after your death are pretty unlikely to affect you (modulo basilisk-like punishment or reward).
I actually buy the idea that you care about (and are willing to expend resources on) subjunctive realities on behalf of not-quite-real other people. You get present value from imagining good outcomes for imagined-possible people even if they're not you. This has to get weaker as it gets more distant in time and more tenuous in connection to reality, though.
But that's not even the point I meant to make. Even if you care deeply about the far future for some reason, why is it reasonable to prefer weak, backward, stupid entities over more intelligent and advanced ones? Just because they're made of similar meat-substance as you seems a bit parochial, and hypocritical given the way you treat slightly less-capable organic beings like lettuce.
Woodchopper's post indicated that he'd violently interfere with (indirectly via criminalization) activities that make it infinitesimally more likely to be identified and located by ETs. This is well beyond reason, even if I overstated my long-term lack of care.
I think converting galaxies already includes paying attention, since if you don't know what's there it's difficult to change it into something else.
Maybe you're thinking of this as though it were a fire that just burned things up, but I don't think "converting galaxies" can or will work that way.
"Well now I see we disagree at a much more fundamental level." Yes. I've been saying that since the beginning of this conversation.
If humans are optimizers, they must be optimizing for something. Now suppose someone comes to you and says, "do you agree to turn on this CEV machine?", when you respond, are you optimizing for the thing or not? If you say yes, and you are optimizing the original thing, then the CEV cannot (as far as you know) be compromising the thing you were optimizing for. If you say yes and are not optimizing for it, then you are not an optimizer. So you must agree with me on at least one point: either 1) you are not an optimizer, or 2) you should not agree with CEV if it compromises your personal values in any way. I maintain both of those, but you must maintain at least one of them.
In earlier posts (though not during this particular discussion) I have explained why it is not possible that you are really an optimizer. People here tend to neglect the fact that an intelligent thing has a body. So e.g. Eliezer believes that an AI is an algorithm and nothing else. But in fact an AI has a body just as much as we do. And those bodies have various tendencies, and they do not collectively add up to optimizing for anything, except in the abstract sense in which everything is an optimizer, the way a rock is an optimizer, and so on.
"We convert the resources of the world into the things we want." To some extent, but not infinitely, in a fanatical way. Again, that is the whole worry about AI -- that it might do that fanatically. We don't.
I understand you think that some creatures could have fundamental values that are perverse from your point of view. This is because you, like Eliezer, think that values are intrinsically arbitrary. I don't, and I have said so from the beginning. It might be true that slave-owning values could be fundamental in some extraterrestrial race, but if they were, slavery in that race would be very, very different from slavery in the human race, and there would be no reason to oppose it in that race. In fact, you could say that slavery exists in a fundamental way in the human race, and there is no reason to oppose it: parents can tell their kids to stay out of the road, and the kids have to obey, whether they want to or not. Note that this is very, very different from the kind of slavery you are concerned about, and there is no reason to oppose the real kind.
Now, about simulations. That they will be run serially is a priori very unlikely, so any probability shift from this will not be large. And it could not be known from inside the simulation; otherwise it is not a simulation, or at least not a completely isolated one. But that is not the main objection. The main one is that if I know I am at an exact moment of future time, I also know that I am in a simulation, since my time is not the same as the outside time provided to me. There are also problems with my many copies in the infinite number of simulated and real worlds, which make the total calculation even more difficult. The same "me" could appear both in the real world and in a simulation, so saying that I am in one specific type of world is meaningless until I get some evidence; I am the same in many worlds. But once I get evidence that I am in a simulation, it is no longer a (completely isolated) simulation.
Phil of FB: (A more concrete example: 10,000 people are traveling to Mars. 1,000 board a large slow shuttle that takes a single trip to Mars between t1 and t3. Meanwhile, a really fast smaller shuttle takes 10 people at a time to Mars (going back and forth 900 times) during this same period. At time t3, all 10,000 people have safely arrived on Mars. If asked, at t3, whether one took the large slow shuttle or the fast small shuttle, one should say the latter. (Right?) But this is the opposite answer, I believe, that one should give if in the middle of the journey, at time t2, one is aroused from one's hibernation (let's say) and asked whether they are at that very moment on the slow or fast shuttle. Thus, it seems to matter whether the relevant event is ongoing or over. But I’m not exactly clear about why.)
My reply: Imagine a random person, Bob. If Bob is asked before the flight to Mars, he will say that he will most likely fly on the small, quick shuttle. But if we ask a random person during the flight (and the important point here is that this person happens to be Bob), then Bob is most likely on the large, slow shuttle. The difference between the two situations is that in the second we must add the probability that the random person is Bob, and this probability is rather small and exactly compensates for the fast shuttle's larger total ridership. The fact not represented is that there is a third group of travellers, who have already arrived on Mars or are waiting to start on Earth; when I am told that at moment T2 I am still flying, I learn that I am not one of the 8,990 "waiters" and update my probabilities accordingly.
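A minimal simulation sketch of Phil's setup (my own illustration, not part of the original exchange): at any moment between t1 and t3, all 1,000 slow-shuttle passengers are in flight but only one fast trip of 10 people is, so a randomly sampled in-flight traveller is almost certainly on the slow shuttle, even though 90% of all travellers take the fast one.

```python
import random

SLOW = 1000         # passengers on the single slow shuttle (in flight from t1 to t3)
FAST_PER_TRIP = 10  # the fast shuttle carries 10 people per trip, 900 trips total
FAST_TOTAL = 9000

# Retrospective question at t3: which shuttle did a random traveller take?
print("P(fast | asked at t3) =", FAST_TOTAL / (SLOW + FAST_TOTAL))  # 0.9

# Mid-journey question at t2: which shuttle is a random *in-flight* traveller on?
# At any instant the in-flight population is 1000 slow + 10 fast passengers.
in_flight = ["slow"] * SLOW + ["fast"] * FAST_PER_TRIP
samples = [random.choice(in_flight) for _ in range(100_000)]
print("P(slow | in flight at t2) ~", samples.count("slow") / len(samples))  # ~0.99
```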
What are some rationality things I can do with my girlfriend? Any games? Preferably free and portable things (e.g. don't need any equipment). I don't want to go to rationality meetups with her, cause they're full of guys who'll hit on her no doubt! Also she's not a rationalist so no jargonny stuff.
I agree with your concerns regarding one world government. However, I am curious why you think that the following were "chance developments" of Britain: rule of law, property rights, contracts, education, reading, writing. Pretty much all of those things were in use in multiple times/locales throughout the ancient world. Are you arguing that Britain originated those things? Or that they were developed in Britain independently of their prior existence elsewhere?
Well now I see we disagree at a much more fundamental level.
There is nothing inherently sinister about "optimization". Humans are optimizers in a sense, manipulating the world to be more like how we want it to be. We build sophisticated technology and industries that are many steps removed from our various end goals. We dam rivers, and build roads, and convert deserts into sprawling cities. We convert the resources of the world into the things we want. That's just what humans do, that's probably what most intelligent beings do.
The definition of FAI, to me, is something that continues that process, but improves it. Takes over from us, and continues to run the world for human ends. Makes our technologies better and our industries more efficient, and solves our various conflicts. The best FAI is one that constructs a utopia for humans.
I don't know why you believe a slave-owning race is impossible. Humans, of course, practiced slavery in many different cultures. It's very easy for even humans not to care about the suffering of other groups. And even if you do believe most humans could be convinced it's wrong (I'm not so sure), there are actual sociopaths who don't experience empathy at all.
Humans also have plenty of sinister values, and I can easily believe aliens could exist that are far worse. Evolution tended to evolve humans that cooperate and have empathy. But under different conditions, we could have evolved completely differently. There is no law of the universe that says beings have to have values like us.
I think optimizing anything is always immoral, exactly because it means imposing things that you should not be imposing. It is also the behavior of a fanatic, not a normal human being; that is the whole reason for the belief that AIs would destroy the world, namely because of the belief that they would behave like fanatics instead of like intelligent beings.
In the case of the slave-owning race, I am quite sure that slavery is not consistent with their fundamental values, even if they practice it for a certain time. I don't admit that values are arbitrary, and consequently you cannot assume (at least without first proving me wrong about this) that any arbitrary value could be a fundamental value for something.
The CEV process might well be immoral for everyone concerned, since by definition it is compromising a person's fundamental values.
The world we live in is "immoral" in that it's not optimized towards anyone's values. Taking a single person's values would be "immoral" to everyone else. CEV, finding the best possible compromise of values, would be the least immoral option, on average. Optimize the world in a way that dissatisfies the least people the least amount.
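To make "dissatisfies the least people the least amount" concrete, here is a toy sketch (my own formalization, not anything from the CEV paper): treat each person's values as a point in a value-space, dissatisfaction as the distance from the chosen world-state to that point, and the compromise as the point minimizing aggregate dissatisfaction. Note that the answer depends on how dissatisfaction is aggregated: squared distance yields the mean, absolute distance the coordinate-wise median, which caters less to outliers.

```python
import numpy as np

# Toy model: each row is one person's ideal point in a 2-D value-space.
ideal_points = np.array([[0.9, 0.1],   # person A
                         [0.8, 0.3],   # person B
                         [0.1, 0.9]])  # person C, an outlier

# Minimizing total squared distance gives the mean...
print("squared-loss compromise:", ideal_points.mean(axis=0))         # [0.6   0.433]
# ...minimizing total absolute distance gives the coordinate-wise median.
print("absolute-loss compromise:", np.median(ideal_points, axis=0))  # [0.8 0.3]
```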
That does not necessarily mean "living separately".
Right. I said that's the realistic worst case, when no compromise is possible. I think most people have similar enough values that this would be rare.
"You want to eliminate ones with values that you really dislike." I think that is basically racist.
I don't necessarily want to kill them, but I would definitely stop them from hurting other beings. Imagine you came upon a race of aliens that practiced a very cruel form of slavery. Say 90% of their population were slaves, and the slave-owning class regularly tortured and overworked them. Would you stop them, if you could? Is that racist? What about the values of the slaves?
When Mao prepared his Great Leap Forward, he thought it didn't make sense to have the factories in the cities. He thought it would be much better to move them to the countryside.
That was one of the worst economic decisions in history, because a lot of those relocated factories stopped working. It turns out that having factories near other factories is useful. Millions starved.
These days we know how to run a steel mill or a car manufacturing plant well enough to have it in a rural area, but in the beginning they had to be in cities.
North Korea is less stable now than it would be if it were the world's government, because all sorts of outside pressures contribute to its instability (technology created by freer nations, pressure from foreign governments, etc.).
The outside world also contributes to its stability. The current leader was educated in Switzerland, and he might be a less rational actor if he had simply been educated at a North Korean school.
My point wasn't the direct effect of chlorine on the skin, but that it kills the native bacteria on the skin, so that different bacteria might find it easier to colonize the skin. Those might then create problems.
I'm not sure whether showering has effects similar to swimming in the first place: even without a shower filter, the average shower won't match swimming pools, which are chlorinated specifically to kill bacteria.
musicman4534 writes an article on HDAC inhibitors: https://www.reddit.com/r/Nootropics/comments/596gbi/i_wrote_an_article_on_hdac_inhibitors_geared/
Yes, but evolutionary pressures wouldn't be shaping bioterrorism-created viruses in the short run. Also, until we can cure the common cold, what's to prevent terrorists (in 10 years, with CRISPR) from making a cold virus that's much more virulent, that stays hidden for a few months and then kills its host?
I described what it feels like from the inside to run into philosophical skepticism.
That was the content. The title was "a final solution to philosophical scepticism". The title doesn't match the content: scepticism is a set of problems about the possibility and limitations of knowledge.
Philosophical skepticism isn't a statement about the world; it's a mental feeling
It isn't either. Scepticism is a set of problems about the possibility and limitations of knowledge.
Philosophical skepticism isn't a statement about the world; it's a mental feeling
Pushing aside isn't solving; it's dissolving at best. You can't get to "everything is knowable" from "sometimes brains get overheated".
"It doesn't seem as ethical as more conciliatory approaches." I agree. That is because it is not the best strategy. It may not be the worst possible strategy, but it is not the best. And since the people engaging in that strategy, their ability to think about it, over time, will lead them to adopt better strategies, namely more conciliatory approaches.
I don't say that the good is achieved by selection alone. It is also achieved by the use of reason, by things that use reason.
I'm not sure how you're addressing what I said. What do you mean by escaping vacuity? I used "good for them" in that comment because you did, when you said that not everything people value is good for them. I agree with that, if you mean the particular values that people have, but not in regard to their fundamental values.
Saying that something is morally good means "doing this thing, after considering all the factors, is good for me," and saying that it is morally bad means "doing this thing, after considering all the factors, is bad for me." Of course something might be somewhat good, without being morally good, because it is good according to some factors, but not after considering all of them. And of course whether or not it will benefit your communities is one of the factors.
We get the idea of "good" from the fact that we are tending to do various things, and we assume that those various things must have something in common that explains why we are tending to do all of them. We call that common thing "good."
....a word which means a number of things, which are capable of conflicting with each other. Moral good refers to things that are beneficial at the group level, but which individuals tend not to do without encouragement.
I mean, he can chime in, but I think he is looking at it from the perspective of a "thing that has happened". We don't have standing to object because we are gone.
I doubt he thinks there is a duty to roll over. (Don't want to put words in your mouth tho, man. Let me know if I'm misunderstanding you here.) The vibe I get from his argument is that, once we are gone, who cares what we think?
Arguments made by humans can affect other humans, thereby affect their actions, and thereby affect the universe.
In this case, the argument is about whether humans should resist or acquiesce to their own replacement. I take Dagon's "good" to indicate support for the latter option.
The CEV process might well be immoral for everyone concerned, since by definition it is compromising a person's fundamental values.
If they find it immoral in the sense of crossing a line that should never be crossed, then they are not going to play. I don't think the morals=values theory can tell you where the bright lines are, and that is why I think rules and a few other things are involved in ethics.
There is some truth in it, however: in reality, for the reasons I have been giving, beings that have fundamental desires for others to suffer and die are very unlikely indeed, and any such desires are likely to be radically qualified. To that degree you are somewhat right: desires like that are in fact evil. But because they are evil, they cannot exist.
Consider a harder case: a society that is ruthless in crushing any society that offers any rivalry or opposition to it, but otherwise leaves people alone. Since that is a survival-promoting strategy, you can't argue that it would simply be selected out. But it doesn't seem as ethical as more conciliatory approaches.
I was just listening to an NPR piece about the problem of workplace noise, with some focus on noisy co-workers. For example, working near someone with a frequent loud cough is a miserable thing.
I don't know if there's a munchkin solution, but being a noise consultant for businesses might be a niche. Perhaps just selling white noise machines in bulk to businesses would work.
The economics of being a therapist.
http://siderea.livejournal.com/1303059.html
http://siderea.livejournal.com/1308342.html
A very short version is that insurance doesn't compensate adequately, and a therapist can't work 40 hours/week because patients typically aren't available during conventional working hours. Also, therapists are only paid when patients show up, so scheduling enough hours is even harder than it sounds.
If chlorine is the problem, there are shower filters that take out a lot of the chlorine.
http://showerfilterscompared.net/
As for money, it's a complicated question, since ChristianKl contributed (I didn't know about chlorine as a possible problem) and you'd need to gamble by getting the filter.
If a filter works and you feel like sending me $100/year, I won't turn it down.
More about coping: http://www.legalnomads.com/chronic-pain/
In Australia we currently produce enough food for 60 million people, without any intensive farming techniques at all. This could be scaled up by a factor of ten if it were really necessary, but quality of life per capita would suffer.
I think smaller nations are as a general rule governed much better, so I don't see any positives in increasing our population beyond the current 24 million people.
<shrug> it's consumption
That doesn't make it right or even tolerable.
Does it annoy you that a lot of money, time, and effort is spent on a big fancy dinner and then it just turns into poop?
In that case there's less waste, because your body extracts whatever calories it can, with the added bonus of pleasure.
But yes, it annoys me that calories are seen as a major social lubricant, just as it annoys me that grocery stores around the world waste megatons of food each day!
Well, the only means I have of advertising my "leftovers" are, say, a local market and eBay. These are the markets accessible to me; but if there's someone who would want them but is in Japan and we cannot communicate, then there's a want that cannot turn into demand (and so into value), because there's no market connecting us.
So I do not equate want and value, because in that case the Japanese collector and I have no means of translating our demand and supply into an exchange.
It just sounds like you're saying that the final authority gets decided at run-time, based on whoever happens to have the most financial power.
That's just one of the many possibilities.
Why do you think this is preferable to a system where authority is agreed upon beforehand by a majority of the people?
Democracy inevitably becomes a grandiose popularity contest in which the population votes based on social-signaling considerations that have little or nothing to do with putting into place institutions that will lead to sustainably benevolent results for the society. There are all sorts of oddities, such as the systematic redistribution of resources from the productive members of the economy to the unproductive, shortsighted policy enactment because the real problems of society usually can't be solved without initial pain for which the politician would be blamed, and so forth.
The comparison to religion makes no sense. Unlike biological organisms, human governments are designed. For example, in the case of the US, the structure and function of the court system is very explicitly laid out in the US constitution, and it was carefully designed in a committee via months/years of debate.
The court system is an absolute wreck, no matter how "carefully designed" the designers believe it to be.
Imagine a pre-industrial world with two villages on either side of a large forest. The people need to get back and forth between these villages every few days or weeks. The first person through his own self-interest simply looks for the easiest path, breaking several branches on his way. The next person does the same, probably going on a completely different route, not thinking anything of the previous person. After quite a few iterations of this, some of the people will end up going on routes that were previously made a bit easier by previous hikers. After tens of thousands of iterations of this, there will be convenient trails going through the woods in an efficient way, with all the obstacles neutralized.
If a foreigner chanced upon this creation, they would surely think to themselves, "What a great trail system! I'm glad the people of this area were kind enough to make a trail for all to use!" They would immediately jump to the idea that the trail, looking like it was created for a purpose, must have been designed by a committee of individuals or commissioned by a wise member of one of the villages. But no such thing happened; each person acted upon their own self-interest, and the byproduct was a trail system that looks like it was designed but really was an automatically emergent order.
Most of what works very well in society is like this, and most of what breaks in a disastrous way is an attempt to design systems where simply setting the initial conditions for an emergent order would have been a much better idea.
Investors simply try to buy low and sell high for their own self-interest. Many of them, even very successful ones, probably have little or no appreciation for how important the role of investors is in the emergent order of the economic system.
You are currently saying that the good is what people fundamentally value, and what people fundamentally value is good... for them. To escape vacuity, the second phrase would need to be cashed out as something like "aids survival".
But whose survival? If I fight for my tribe, I endanger my own survival; if I dodge the draft, I endanger my tribe's.
Real-world ethics has a pretty clear answer: the group wins every time. Bravery beats cowardice, generosity beats meanness; these are human universals. If you reverse-engineer that observation back into a theoretical understanding, you get the idea that morality is something programmed into individuals by communities to promote the survival and thriving of communities.
But that is a rather different claim from "the Good is the Good".
I'm not advocating the idea that morality is value, I am examining the implications of what other people have said.
You wrote an article purporting to explain the Yudkowskian theory of morality, and indeed the one true theory of morality, since the two are the same.
Hypothetically, making a few comments about value, and nothing but value, doesn't do what is advertised on the label. The reader would need to know how value relates back to morality.
And in fact you supplied the rather definitional sounding statement that Morality is Values.
If you base an argument on a definition, don't be surprised if people argue about it. The alternative, where someone can stipulate a definition but no one can challenge it, is a game that will always be won by the first to move.
I heard that some employers ask job candidates for their GitHub accounts, so they can check the quality of the code these people write in their free time. I have no idea how frequent this is; I really hope it doesn't become an industry standard, because I believe in separation between job and free time. But if it does become a standard, here is a business opportunity...
Create GitHub accounts in other people's names, and provide high-quality patches to open-source software from there.
The idea is that you would find someone who is a good programmer and loves contributing to open-source software, but wouldn't mind making some money by pretending that some of those contributions were actually done by someone else. So they could either use someone else's account to submit a few patches from there, or they could have an account with some generic name (i.e. not their own name, but something like "kingoftheinternet2000") they don't mind lending to someone during their interview.