those are perfectly coherent and sound for those who entertain them, we should though do not call them "Clippy's, Elves' or Pebblesorters' morality", because words should be used in such a way to maximize their usefulness in carving reality: since we cannot go out of our programming and conceivably find ourselves motivated by eggnog or primality, we should not use the term and instead use primality or other words.
So my car is a car becuse it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me. And yo mama ain't no Mama cause she ain't my Mama!
Yudkowsky isn't being rigourous, he is instead appealing to an imaginary rule, one that is not seen in any other case.
And it's not like the issue isn't important, either .. obviously the premissibility of imposing ones values on others depends on whether they are immoral, amoral, differently moral , etc. Differrently moral is still a possibilirt, for the reasons that you are differently mothered, not unmohtered.
I don't think I knew that particular stat was an empirical fact, though I wasn't surprised by it. My view, generally, was that blacks in America earned less, had higher incarceration rates, etc. The causes interest me.
Well, the proximate cause of them having higher incarceration rates is them having higher crime rates. The reason for the higher crime rates isn't directly relevant to the discussion of police "racial bias".
1) is true in at least some cases based on many, many experiences I've had.
How did this "racial bias" manifest itself? Them acting like they believed blacks were more likely to be criminals than whites. Or even willingness to shoot a black who was running at him and grabing for his gun?
Yes, though mostly indirectly.
In particular did you know about the different rates of murder commited by blacks and whites before posting the OC?
But I'm wavering. I still believe people are (1) biased based on race (2) this bias can be unconscious and (3) this unconscious bias' effect would be pronounced in a high stress, high consequence environment where someone needed to act quickly (like what police officers face when they are in close proximity to a suspect).
Do you have any evidence for this belief? If so, why haven't you presented it anywhere in this thread? Or does "bias" in this case mean that the cops understand the differences in muder rates?
I'm familiar with lots of the things Eliezer Yudkowsky has said about AI. That doesn't mean I agree with them. Less Wrong has an unfortunate culture of not discussing topics once the Great Teacher has made a pronouncement.
Plus, I don't think philosophytorres' claim is obvious even if you accept Yudkowsky's arguments.
Fragility of value thesis. Getting a goal system 90% right does not give you 90% of the value, any more than correctly dialing 9 out of 10 digits of my phone number will connect you to somebody who’s 90% similar to Eliezer Yudkowsky. There are multiple dimensions for which eliminating that dimension of value would eliminate almost all value from the future. For example an alien species which shared almost all of human value except that their parameter setting for “boredom” was much lower, might devote most of their computational power to replaying a single peak, optimal experience over and over again with slightly different pixel colors (or the equivalent thereof). Friendly AI is more like a satisficing threshold than something where we’re trying to eke out successive 10% improvements. See: Yudkowsky (2009, 2011).
OK, so do my best friend's values constitute a 90% match? A 99.9% match? Do they pass the satisficing threshold?
Also, Eliezer's boredom-free scenario sounds like a pretty good outcome to me, all things considered. If an AGI modified me so I could no longer get bored, and then replayed a peak experience for me for millions of years, I'd consider that a positive singularity. Certainly not a "catastrophe" in the sense that an earthquake is a catastrophe. (Well, perhaps a catastrophe of opportunity cost, but basically every outcome is a catastrophe of opportunity cost on a long enough timescale, so that's not a very interesting objection.) The utility function is not up for grabs--I am the expert on my values, not the Great Teacher.
Here's the abstract from his 2011 paper:
A common reaction to first encountering the problem statement of Friendly AI (“Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”) is to propose a single moral value which allegedly suffices; or to reject the problem by replying that “constraining” our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a “positive outcome,” despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI researchers who consider themselves to have cosmopolitan values not tied to the exact forms or desires of humanity.
It sounds to me like Eliezer's point is more about the complexity of values, not the need to prevent slight misalignment. In other words, Eliezer seems to argue here that a naively programmed definition of "positive value" constitutes a gross misalignment, NOT that a slight misalignment constitutes a catastrophic outcome.
Please think critically.
Reminds me of a slightly different problem:
You are a bus driver and you start with an empty bus. On the first stop 7 people got in. On the second stop 4 more people got in and two people left. On the third stop no one got in and one person left. On the fourth stop 5 people got in, 2 left. On the fifth stop one got in and two left. What is the colour of the driver's eyes?
Good post!
While not all sociopaths are violent, a disproportionate number of criminals and dictators have (or very likely have) had the condition.
Luckily sociopaths tend to have poor impulse control.
It follows that some radical environmentalists in the future could attempt to use technology to cause human extinction, thereby “solving” the environmental crisis.
Reminds me of Derrick Jensen. He doesn't talk about human extinction, but he does talk about bringing down civilization.
Fortunately, this version of negative utilitarianism is not a position that many non-academics tend to hold, and even among academic philosophers it is not especially widespread.
For details see http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/
This is worrisome because recent research shows that even slight misalignments between our values and those motivating a superintelligence could have existentially catastrophic consequences.
Citation? This is commonly asserted by AI risk proponents, but I'm not sure I believe it. My best friend's values are slightly misaligned relative to my own, but if my best friend became superintelligent, that seems to me like it'd be a pretty good outcome.
Awesome article! I do have a small piece of feedback to offer, though.
Interestingly, no notable historical group has combined both the genocidal and suicidal urges.
No historical group has combined both genocidal and suicidal actions, but that may be because of technological constraints. If we had had nukes widely available for millennia, how many groups do you think would have blown up their own cities?
Without sufficiently destructive technology, it takes a lot more time and effort to completely wipe out large groups of people. Usually some of them survive, and there's a bloody feud for the next 10 generations. It's rare to win sufficiently thoroughly that the group can then commit mass suicide without the culture they attempted genocide against coming back in a generation or two.
There have, of course, been plenty of groups willing to fight to the death. How many of them would have pressed a domesday button if they could?
Great question. I think there are strong reasons for anticipating the total number of apocalyptic terrorists and ecoterrorists to nontrivially increase in the future. I've written two papers on the former, linked below. There's weaker evidence to suggest that environmental instability will exacerbate conflicts in general, and consequently produce more malicious agents with idiosyncratic motives. As for the others -- not sure! I suspect we'll have at least one superintelligence around by the end of the century.
I think that natural evolution of values is part of what is to be human (and that is why I am against CEV). But here I mean some kind of disruptive revolutions in values in shorter time period, like in 20 years. And I think it will not in happen in 20 years as humans have some kind of values inertia.
But on loner time horizon new technologies could help to spread new "memes-values" quicker, and they will be like computer viruses for human brains, may be disseminating through brain implants. It could be quick and catastrophic.
We've now delved beyond the topic -- which is okay, I'm just pointing that out.
I think it's okay for one person to value some lives more than others, but not that much more.
I'm not quite sure what you mean by that. I'm a duster, not a torturer, which means that there are some actions I just won't do, no matter how many utilitons get multiplied on the other side. I consider it okay for one person to value another to such a degree that they are literally willing to sacrifice every other person to save the one, as in the mother-and-baby trolly scenario. Is that what you mean?
I also think that these scenarios usually devolve into a "would you rather..." game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.
Btw, you say the mother should protect her child, but it's okay to value some lives more than others - these seem in conflict. Do you in fact think it's obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?
If I can draw a political analogy which may even be more than an analogy, moral decision making via utilitarian calculus with assumed equal weights to (sentient, human) life is analogous to the central planning of communism: from each what they can provide, to each what they need. Maximize happiness. With perfectly rational decision making and everyone sharing common goals, this should work. But of course in reality we end up with at best inefficient distribution of resources due to failures in planning or execution. The pragmatic reality is even worse: people don't on the whole work altruistically for the betterment of society, and so you end up with nepotistic, kleptocratic regimes that exploit the wealth of the country for self-serving purpose of those on top.
Recognizing and embracing the fact that people have conflicting moral values (even if restricted to only the weights they place on other's happiness) is akin to the enlightened self-interest of capitalism. People are given self-agency to seek personal benefits for themselves and those they care about, and societal prosperity follows. Of course in reality all non-libertarians know that there are a wide variety of market failures, and achieving maximum happiness requires careful crafting of incentive structures. It is quite easy to show mathematically and historically that restricting yourself to multi-agent games with Pareto optimal outcomes (capitalism with good incentives) restricts you from being able to craft all possible outcomes. Central planning got us to the Moon. Not-profit-maximizing thinking is getting SpaceX to Mars. It's more profitable to mitigate the symptoms of AIDS with daily antiviral drugs than to cure the disease outright. Etc. But nevertheless it is generally capitalist societies that experience the most prosperity, as measured by quality of life, technological innovation, material wealth, or happiness surveys.
To finally circle back to your question, I'm not saying that it is right or wrong that the mother cares for her child to the exclusion of literally everyone else. Or even that she SHOULD think this way, although I suspect that is a position I could argue for. What I'm saying is that she should embrace the moral intuitions her genes and environment have impressed upon her, and not try to fight them via System 2 thinking. And if everyone does this we can still live in a harmonious and generally good society even though each of our neighbors don't exactly share our values (I value my kids, they value theirs).
I've previously been exposed to the writings and artwork of peasants that lived through the harshest time of Chairman Mao's Great Leap forward, and it remarkable how similar their thoughts, concerns, fears and introspectives can be to those who struggle with LW-style "shut up and multiply" utilitarianism. For example I spoke with someone at a CFAR workshop who has had a real psychological issues for a decade over internal conflict between selfless "save the world" work he feels he SHOULD be doing, or doing more of, and basic fulfillment of Maslow's hierarchy that leaves him feeling guilty and thinking he's a bad person.
My own opinion and advice? Work your way up up Maslow's hierarchy of needs using just your ethical intuitions as a guide. Once you have the luxury of being at the top of the pyramid, then you can start to worry about self-actualization by working to change the underlying incentives that guide the efforts of our society and create our environmentally-driven value functions in the first place.
White house also relized a pdf with concrete recommendations: http://barnoldlaw.blogspot.ru/2016/10/intelligence.html
Some interesting lines:
Recommendation 13: The Federal government should prioritize basic and long-term AI research. The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R and D, with a particular emphasis on basic research and long-term, high-risk research initiatives. Because basic and long-term research especially are areas where the private sector is not likely to invest, Federal investments will be important for R and D in these areas.
Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.
just looking for a rough sketch
Well, you can probably go about it in the following way. IQ is and was a controversial concept. One of the lines of attack against it was that it is meaningless, that the number coming out of the IQ test does not correspond to anything in real life. This is often expressed as "IQ measures the skill of taking IQ tests".
To deal with this objection people ran a number of studies. Typically you take a set of young people and either give them a proper IQ test or rely on another test which is a decent IQ proxy -- usually the SAT in the US or one of the tests that the military gives to all its drafted or enlisted men. After that you follow that set of people and collect their life outcomes, from income to criminal records. Once you've done that you can see whether the measured IQ actually correlates to life outcomes. And yes, it does.
I don't have links to actual studies handy, but you can easily google them up, and you can take a look at a not-fully-rigorous description of the various tiers of IQ and what do they mean in real-life terms.
Basically what these studies give you is the cost of an IQ point, cost in terms of a lot of things -- income, chance to end up in prison, longevity (high-IQ people are noticeably healthier), etc.
Given this, you can calculate the expected outcomes for the US black population. If their average IQ is 10-15 points lower, you can translate this into expected income (lower than the US mean), expected chance of a criminal conviction (higher than the US mean) and other things you're interested in. Once you've done that, you can compare your expected values with ones empirically observed. Any remaining gap will be due to something other than the IQ differential.
informs your politics
On a macro level it does not. There are smart people, there are stupid people, and the correlation to some outwardly visible feature like the colour of the skin doesn't matter much. I am not a white nationalist, I do not think the Europeans should re-colonise Africa for the natives' own good, etc.
On a micro level it does. For example, I find affirmative action counter-productive. For another example, I don't believe the claims that inner-city schools (read: black) lag behind suburban schools (read: not black) because of lack of funding or because of surrounding poverty. Throwing money at the problem will achieve nothing.
Hey thanks for this. I had some time and I compiled this chronologically ordered list of links from those threads for personal use. EDIT: Now contains a few links posted in this thread: http://8ch.net/ratanon/res/2850.html
Here's a more serious response.
- Segregating the world, period, based on whatever, is impossible without a coercive power that the existing nations of earth would consider illegal. Before you could forcefully migrate a large percentage of the world's humans you'd have to win a war with whatever portion of the UN stood against you.
- If you could do it, no one would admit to having any values other than those which got to live in/own the nicest places/stuff/be with their family / not be with their competitors/whatever. The technology to determine everyone's values does not exist.
- If you somehow derived everyone's values and split them by these, you would probably be condemning large segments of the population to misery (Lots of people's values are built around living around people who don't share them.), and there would be widespread resentment. The invincible force you used to overcome objection 1 would be tested within a generation.
Someone could be against slavery for THEM personally without being against slavery in general if they didn't realize that what was wrong for them was also wrong for others.
Huh? I'm against going to jail personally without being against the idea of jail in general. In any case, wasn't your original argument that ancient Greeks and Romans just didn't understand what does it mean to be a slave? That clearly does not hold.
most moral theories are so bad you don't even need to talk about evidence. You can show them to be wrong just because they're incoherent or self-contradictory.
Do you mean descriptive or prescriptive moral theories? If descriptive, humans are incoherent and self-contradictory.
Which moral theories do you have in mind? A few examples will help.
Unpacking "should" as " morally obligated to" is potentially helpful, so inasmuch as you can give separate accounts of "moral" and "obligatory".
The elves are not moral. Not just because I, and humans like me happen to disagree with them, no, certainly not. The elves aren’t even trying to be moral. They don’t even claim to be moral. They don’t care about morality. They care about “The Christmas Spirit,” which is about eggnog and stuff
That doesn't generalise to the point that non humans have no morality. You have made things too easy on yourself by having the elves concede that the Christmas spirit isn't morality. You need to to put forward some criteria for morality and show that the Christmas Spirit doesn't fulfil them. (One of the odd things about the Yudkowskian theory is that he doesnt feel the need to show that human values are the best match to some pretheoretic botion of morality, he instead jumps straight to the conclusion).
The hard case would be some dwarves, say, who have a behavioural code different from our own, and who haven't conceded that they are amoral. Maybe they have a custom whereby any dwarf who hits a rich seam of ore has to raise a cry to let other dwarves have a share, and any dwarf who doesn't do this is criticised and shunned. If their code of conduct passed the duck test .. is regarded as obligatory, involves praise and blame, and so on ... why isn't that a moral system?
This is so weird to them that they’d probably just think of it as…ehh, what? Just weird. They couldn’t care less. Why on earth would they give food to millions of starving children? What possible reason…who even cares?
If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point... morality means what you should care about, not what you happen to do.
Morality needs to be motivating, and rubber stamping your existing values as moral achieves that, but being motivating is not sufficient. A theory of morality also needs to be able to answer the Open Question objection, meaning in this case, the objection that it is not obvious that you should value something just because you do.
So, to say the elves have their own “morality,” is not quite right. The elves have their own set of things that they care about instead of morality
That is arguing from the point that morality is a label for whatever humans care about, not toward it.
This helps us see the other problem, when people say that “different people at different times in history have been okay with different things, who can This is so weird to them that they’d probably just think of it as…ehh, what? Just weird. They couldn’t care less. Why on earth would they give food to millions of starving children? What possible reason…who even cares? who’s really right?”
There are many ways of refuting relativism, and most don't involve the claim that humans are uniquely moral.
Morality is a fixed thing. Frozen, if you will. It doesn’t change.
It is human value, or it is fixed.. choose one. Humans have valued many different things. One of the problems with the rubber stamping approach is that things the audience will see as immoral such as slavery and the subjugation of women have been part of human value.
Rather, humans change. Humans either do or don’t do the moral thing. If they do something else, that doesn’t change morality, but rather, it just means that that human is doing an immoral
If that is true, then you need to stop saying that morality is human values. and start saying morality is human values at time T. And justify the selection of time, etc. And even at that, you won't support your other claims. because what you need to prove is that morality is unique, that only one thing can fulfil the role.
Rather, humans happen to care about moral things. If they start to care about different things, like slavery, that doesn’t make slavery moral, it just means that humans have stopped caring about moral things.
If it is possible for human values to diverge from morality. then something else must define morality, because human values can't diverge from human values. So you are not using a stipulative definition... here....although you are when you argue that elves can't be moral. Here, you and Yudkowsky have noticed that your theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral , slavery, torture,whatever, then there's no fixed standard of morality. The label "moral" has been placed on a moving target. (Standard relativism usually has this problem synchronously , ie different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times)
So, when humans disagree about what’s moral, there’s a definite answer.
There is from many perspectives , but given that human values can differ, you get no definite answer by defining morality as human value. You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic....God's commands, the Form of the good, and so on. I think Yudkowsky wants a theory that is non arbitrary and also naturalistic. I don't think he arrives a single theory that does both. If the Moral Equation is just a label for human intuition, then it ssuffers from all the vagaries of labeling values as moral, the original theory.
How do we find that moral answer, then? Unfortunately, there is no simple answer
Why doesn't that constitute an admission that you don't actually have a theory of morality?
You see, we don’t know all the pieces of morality, not so we can write them down on paper. And even if we knew all the pieces, we’d still have to weigh which ones are worth how much compared to each other.
On the assumption that all human value gets thrown into the equation, it certainly would be complex. But not everyone has that problem. since people have criteria for somethings being moral , and others but being. which simplify the equation. and allow you to answer the questions you were struggling with above. You know, you don't have to pursue assumptions to their illogical conclusions.
Humans all care about the same set of things (in the sense I’ve been talking about). Does this seem contradictory? After all, we all know humans do not agree about what’s right and wrong; they clearly do not all care about the same things.
On the face of it , it's contradictory. There maybe something else that is smooths out the contradictions, such as the Moral Equation, but that needs justification of its own.
Well, they do. Humans are born with the same Morality Equation in their brains, with them since birth.
Is that a fact? It's eminently naturalistic, but the flip side to that is that it is, therefore, empirically refutable. If an individual's Morality Equation is just how their moral intuition works, then the evidence indicates that intuitions can vary enough to start a war or two. So the Morality Equation appears not to be conveniently the same in everybody.
How then all their disagreements? There are three ways for humans to disagree about morals, even though they’re all born with the same morality equation in their heads (1 Don't do it, 2 don't do it right, 3 don't want to do it)
What does it mean to do it wrong, if the moral equation is just a label for black box intuitive reasoning? If you had an external standard, as utilitarians and others do, then you could determine whose use of intuition is right use according to it. But in the absence of an external standard, you could have a situation where both parties intuit differently, and both swear they are taking all factors into account. Given such a stalemate, how do you tell who is right? It would be convenient if the only variations to the output of the Morality Equation were caused by variations in the input, but you cannot assume something is true just because it would be convenient.
If the Moral Equation is something ideal and abstract, why can't aliens partake? That model of ethics is just what s needed to explain how you can have multiple varieties of object level morality that actually all are morality: different values fed into the same equation produce different results, so object level morality varies although the underlying principle us the same..
grumble grumble...
Look, I'm not pro-"Kill All Humans", but I don't think that last step is correct.
Bob can prefer that the human race die off and the earth spin uninhabited forever. It makes him evil, but there's no "logic error" in that, any more than there is in Al's preference that humanity spread out throughout the stars. They both envision future states and take actions that they believe will cause those states.
Is this sort of discrimination not consequential in your view?
I don't know about the study, I have a generic suspicion of social sciences studies, especially ones which come to highly convenient conclusions, and hey! they happen to have a what's politely called "replication crisis". I am not interested enough to go read the study and figure out if it's valid, but on my general priors, I believe that people with black names will get less callbacks. However it seems to me that people with names like Pham Ng or Li Xiu Ying will also get less callbacks. People certainly have a bias towards those-like-me, but it's not specifically anti-black, it's against anyone who looks/feels/smells different.
can you imagine a scenario in a society where a high IQ group of people was discriminated against to the extent where they couldn't overcome the discrimination, despite their advanced higher IQ?
Sure.
How would the circumstances be different than what blacks have faced in the U.S.?
Um, the IQ would be different? It's not a mystical inner quality that no one can fathom. It's measurable and on the scale of large groups of people the estimates gets pretty accurate.
On the clearly visible level there would be very obvious discrimination -- quotas on admissions to universities, for examples. These discriminated-against people would be barred from reaching high positions, but at the level they would be allowed to reach they would be considered very valuable. Even if, for example, such people could not make it into management, managers would try to hire as many of them as possible because they are productive and solve problems.
As to similarities, I was about to write that the discriminated-against will never rise to the highest positions in the society, but oh look! there is that Barack Hussain fellow...
Some argument along these lines may work; but I don't believe that doing evil requires coercion.
Suppose that for some reason I am filled with malice against you and wish to do you harm. Here are some things I can do that involve no coercion.
I know that you enjoy boating. I drill a small hole in your boat, and the next time you go out on the lake your boat sinks and you die.
I know that you are an alcoholic. I leave bottles of whisky around places you go, in the hope that it will inspire you to get drunk and get your life into a mess.
The law where we live is (as in many places) rather overstrict and I know that you -- like almost everyone in the area -- have committed a number of minor offences. I watch you carefully, make notes, and file a report with the police.
I get to know your wife, treat her really nicely, try to give her the impression that I have long been nursing a secret yearning for her. I hope that some day if your marriage hits an otherwise-navigable rocky patch, she will come to me for comfort and (entirely consensually) leave you for me.
I discover your political preferences and make a point of voting for candidates whose values and policies are opposed to them.
I put up posters near where you live, accusing you of horrible things that you haven't in fact done.
I put up posters near where you live, accusing you of horrible things that you have in fact done.
None of these involves coercion unless you interpret that word very broadly. Several of them don't, so far as I can see, involve coercion no matter how broadly you interpret it.
So if you want to be assured of not doing evil, you probably need more firebreaks besides "no coercion".
Yes. I think that we need not only workable solution, but also implementable. If someone create 800 pages pdf starting with new set theory, solution of Lob theorem problem etc and come to Google with it and say: "Hi, please, switch off all you have and implement this" - it will not work.
But MIRI added in 2016 the line of research for machine learning.
Link: http://www.vhemt.org/
It's very likely much bigger then 9800. It is also very balanced and laid back in its views and methods. I'd think that contributes.
We live in an increasingly globalised world, where moving between countries is both easier in terms of transport costs and more socially acceptable. Once translation reaches near-human levels, language barriers will be far less of a problem. I'm wondering to what extent evaporative cooling might happen to countries, both in terms of values and economically.
I read that France and Greece lost 3 & 5% of their millionaires last year (or possibly the year before), citing economic depression and rising racial/religious tension, with the most popular destination being Australia (as it has the 1st or 2nd highest HDI in the world). 3-5% may not seem like a lot, but if it were sustained for several years it quickly piles up. The feedback effects are obvious - the wealthier members of society find it easier to leave and perhaps have more of a motive to leave an economic collapse, which decreases tax revenue, which increases collapse etc. On the flip side, Australia attracts these people and its economy grows more making it even more attractive...
Socially, the same effect as described in EY's essay I linked happens on a national scale - if the 'blue' people leave, the country becomes 'greener' which attracts more greens and forces out more blues. And social/economic factors feed into each other too - economic collapses cause extremism of all sorts, while I imagine a wealthy society attracting elites would be more able to handle or avoid conflicts.
Now, this is not automatically a bad thing, or at least it might be bad locally for some people, but perhaps not globally. Any thoughts as to what sort of outcomes there might be? And incidentally, how many people can you fit in Australia? I know its very big, but also has a lot of desert.
The argument that I was making or, maybe, just implying is a version of the argument for deontological ethics. It rests on two lemmas: (1) You will make mistakes; (2) No one is a villain in his own story.
To unroll a bit, people who do large-scale evil do not go home to stroke a white cat and cackle at their own evilness. They think they are the good guys and that they do what's necessary to achieve their good goals. We think they're wrong, but that's an outside view. As has been pointed out, the road to hell is never in need of repair.
Given this, it's useful to have firebreaks, boundaries which serve to stop really determined people who think they're doing good from doing too much evil. A major firebreak is emotional empathy -- it serves as a check on runaway optimization processes which are, of course, subject to the Law of Unintended Consequences.
And, besides, I like humans more than I like optimization algorithms :-P
In the Sequence, Eliezer made a strong case for the realist interpretation of QM (neo-Everettian many worlds), based on decoherence and Occam's razor. He then, in another point of the Sequence, tied that problem with interesting questions about anthropic probability (the infamous anthropic trilemma), and that cemented MWI as the preferred way to think about QM here.
On the other hand, I think we are still missing the big picture about quantum mechanics: ER = EPR, categorical quantum mechanics, QBism etc. all points us to interesting unexplored directions.
about the problems with wikipedia
The problem that Wikipedia adopts standards from modern evidence-based medicine? It's better to read a meta-analysis from Cochrane (which is a secondary source) than reading various papers that make statements about what a drug did that might not replicate.
The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents.
I think the article is trying to help groups set up discourse norms that help people find the truth. (The update uses the phrase "socioepistemic virtue".) It's not so much about helping individuals defend against other individuals, as about helping groups defend their members against bad agents.
Actually, Big Pharma would LOSE billions if it works. There are only a few anti-virals, and none of them work well, and most need to be used in combinations.
No. Gilead manages to charge it's 1000$ per pill for an antiviral. If Draco for all viruses works it could also be sold for a similar price for a bunch of conditions like AIDS.
You could argue say that Gilead isn't really Big Pharma but Biotech but it still shows that there are companies that have no problem with bringing cures to market. Gilead also makes a lot of money.
The company that would bring a working drug to market that cures drugs like AIDS would make a lot of money even if a few other companies might lose billions from it.
An extra data point. If we crash and burn, then earth will be too hot for multicellular life by the time the coal and oil are replenished. So the one and only industrial revolution has happened.
And given ~4,000m years of life so far and the heating only a few hundred million years away, we only just made it. Which suggests it is pretty hard to build intelligent life. Maybe because computation is very expensive so the gradient is steep. Robin Hanson has a paper on this point.
Someone did a article about creating a Kickstarter that actually issued shares in a company if they went over big.
If it was a tax deduction if it failed, but allowed for a gain, then it might be a way to do projects that were popular with people, but not attractive to Big Pharma or VC.
You could even have "Hackerspaces" that brought together teams just to do projects. If they included housing, it would be a great way to give postdocs some work, and some visibility while they wait to get into a static lab.
I generally have very low confidence in singulatarian ideas of any stripe, 'foom' or non. Partially for sociological analysis-of-the-origin-of-singulatarian-and-related-ideas reasons. Partially for astrobiological reasons relating to the fact that nothing has ever consumed a star system or sent self replicating anythings between stars and my impression of the range of possible outcomes of intelligent living things that are not extinction or controlling the universe and the possible frequencies of things something like us. Partially because I think that many people everywhere misattribute the causes of recent changes to the world and where they are going and have short time horizons. Partially because I am pretty sure that diminishing returns applies to absolutely everything in this world aside from black hole growth.
I can't say I've read Gwern's analysis of computational complexity, but I do note that in the messy complicated poorly-sampled real world you can very very seldom actually KNOW enough to predict much of a lot of types of events with great precision.
But wealth, along with a solid education, a well-developed relevant skill in the marketplace, a well-established social and professional network, and a family with a good reputation can be much more persistent.
The claim is that most of that is biology and heritable. Your ancestors had good genes (again, IQ but not only) which allowed them to gain a skill in the marketplace, construct a social network, create a family with good reputation, and acquire wealth. You have skills in the marketplace, able to adroitly navigate society, etc. primarily because you share genes with your ancestors, not because you inherited some money.
my parents ... taught me
This is the nature vs nurture debate and lately the nature side has been winning. Who and what you are is considerably more determined by your genes rather than by your upbringing. Gwern posted about this here, on LW, or you can google up twin studies (studies of (genetically) identical twins who were separated at birth and brought up by different people in different circumstances).
Can you give me some examples of how "culture persists across generations"?
See e.g. Yvain's review of Albion's Seed.
One premise is that if a significant deficit in, say, wealth or education is created for a group of people, then it will be a persistent disadvantage that causes that group of people to lag behind.
Sorry, doesn't hold. Some more convincing studies examined the outcomes of Georgia land lotteries which were effectively a randomized controlled trial where the "intervention arm" got a valuable piece of land (by winning the lottery) and the "control arm" didn't get anything. See e.g. this and other studies.
Now, if you have a continuing advantage (IQ) that continues to hold while your group mostly intermarries, things are different.
Culture, on the other hand, persists across generations relatively well.
By the way, while slavery was ended 150 year ago, segregation remained in force until after the WW2 and so is a much more recent phenomenon, within living memory.
Theory of mind. Locally it's often called a "typical mind fallacy".
Why doesn't the U.S. government hire more tax auditors? If every hired auditor can either uncover or deter (threat of chance of audit) tax evasion, it would pay for itself, create jobs, increase revenue, punish those who cheat. Estimated cost of tax evasion per year to the Federal gov is 450B.
Incompetent government tropes include agencies that hire too many people and becoming inappropriate profit centers. It would seem that the IRS should have at the very least been accidentally competent in this regard.
Your example of a magic wand doesn't sound correct to me. By what basis is a Midas touch "optimizing"? It is powerful, yes, but why "optimizing"? A supernova that vaporizes entire planets is powerful, but not optimizing. Seems like a strawman.
Defining intelligence as pattern recognizing is not new. Ben Goertzel has espoused this view for some twenty years, and written a book on the subject I believe. I'm not sure I buy the strong connection with "recognizing the abstract concept of a goal" and such, however. There are plenty of conceivable architectures for which this meta level thinking is incapable of happening, yet nevertheless are capable of producing arbitrarily complex intelligent behavior.
Regarding your last point, your terminology is unnecessarily obscuring. There doesn't have to be a "magic point" -- it could be simply a matter of correct software, but insufficient data or processing power. A human baby is a very stupid device, incapable of doing anything intelligent. But with experiential data and processing time it becomes a very powerful general intelligence over the course of 25 years, without any designer intervention. You bring up this very point yourself which seems to counteract your claim.
I'm confused by this post, and don't quite understand what its argument is.
Yes, emotional empathy does not optimize effective altruism, or your moral idea of good. But this is true of lots of emotions, desires and behaviors, including morally significant ones. You're singling out emotional empathy, but what makes it special?
If I buy an expensive gift for my father's birthday because I feel that fulfills my filial duty, you probably wouldn't tell me to de-emphasize filial piety and focus more on cognitive empathy for distant strangers. In general, I don't expect you to suggest people should spend all their resources on EA. Usually people designate a donation amount and then optimize the donation target, and it doesn't much matter what fuzzies you're spending your non-donation money on. So why de-fund emotional empathy in particular? Why not purchase fuzzies by spending money on buying treats for kittens, rather than reducing farm meat consumption?
Maybe your point is that emotional empathy feels morally significant and when we act on it, we can feel that we fulfilled our moral obligations. And then we would spend less "moral capital" on doing good. If so, you should want to de-fund all moral emotions, as long as this doesn't compromise your motivations for doing good, or your resources. Starting with most forms of love, loyalty, cleanliness and so on. Someone who genuinely feels doing good is their biggest moral concern would be a more effective altruist! But I don't think you're really suggesting e.g. not loving your family any more than distant strangers.
Maybe your main point is that empathy is a bias relative to your conscious goals:
When choosing a course of action that will make the world a better place, the strength of your empathy for victims is more likely to lead you astray that to lead you truly.
But the same can be said of pretty much any strong, morally entangled emotion. Maybe you don't want to help people who committed what you view as a moral crime, or who if helped will go on to do things you view as bad, or helping whom would send a signal to a third party that you don't want to be sent. Discounting such emotions may well match your idea of doing good. But why single out emotional empathy?
If people have an explicit definition of the good they want to accomplish, they can ignore all emotions equally. If they don't have an explicit definition, then it's just a matter of which emotions they follow in the moment, and I don't see why this one is worse than the others.
This doesn't mean cognitive bias in a LW sense, it means everyone is racist, specifically against black people. I also don't think its true - if everyone is a little bit racist, why would people get into interracial relationships?
There are many attributes of possible partners that make me less likely to data them but that at the same time aren't deal breakers. The fact that I have a theistic girlfriend doesn't mean that I wouldn't prefer a girlfriend who isn't theistic all things equal.
Well, nice to see the law of accelerating returns in its full power, unobscured by "physical" factors (no need to produce something, e.g. better chip or engine, in order to get to the next level). Recent theoretical progress illustrates nicely how devastating the effects of "AI winters" were.
But, wait, once you've decided on a course of action.
You are misreading Jacobian. Let me quote (emphasis mine):
whenever you have finally decided that you should make the world a better place, at that point emotional empathy is a bias that you should discard when choosing a course of action.
.
but it's not at all clear that it's actually a bad idea.
Such people are commonly called "fanatics".
You are essentially saying that once you've decided on a course of action, you should turn yourself into a sociopath.
Sounds terrible! But, wait, once you've decided on a course of action. The main problem with sociopaths is that they do horrible things and do them very effectively, right? Someone who chooses what to do like a non-sociopath and then executes those plans like a sociopath may sound scary and creepy and all, but it's not at all clear that it's actually a bad idea.
(I am not convinced that Jacobian is actually arguing that you decide on a course of action and then turn yourself into a sociopath. But even that strawman version of what he's saying is, I think, much less terrible than you obviously want readers to think it is.)
With empathy, it turns out that Germans were much more likely to empathize with other Germans than with Juden. With empathy, everyone was cheering as the witches burned.
This required first to, basically, decide that something which looks like a person is actually not and so is not worthy of empathy. That is not a trivial barrier to overcome. Without empathy to start with, burning witches is much easier.
Moral progress is the progress of knowledge.
This is a very... contentious statement. There are a lot of interesting implications.
All I'm saying is that whenever you have finally decided that you should make the world a better place, at that point emotional empathy is a bias that you should discard when choosing a course of action.
And that is what I'm strongly disagreeing with.
You are essentially saying that once you've decided on a course of action, you should turn yourself into a sociopath.
Genetic factors (such as lower IQ)
What is the best source for this in your view?
Historical factors, Cultural factors, Economic factors
Is it your view that past slavery in America still has a large impact on African Americans in the present day U.S.?
It seems obvious to me that it does, and that the effects are wide and deep, as slavery (and Jim Crow) is relatively recent history—We're only a handful of generations from a time where a race of people was enslaved and systemically kept from accumulating wealth and education.
...I don't think a reasonable open discussion is possible.
Meh. Maybe. I'd like to believe I'm a reasonable guy. My views on these issues are largely ignorant and I'm open to learning.
I got banned from Gleb's intentional insights for speaking my mind about this article.
This argues for creation of pseudo-speak or 1984 newspeak where commonly understood word gets a new "fuzzier" meaning...
"I'm a weird lawyer, I sometimes like when my clients lose" lets me forget that I'm actually paid to be an advocate for any client that retained me.
Play out a few examples in your mind and you see how quickly very firm word-concepts lose meaning
That's not really surprising. Google employs by far the most AI researchers and they have general AI as an actual goal. Deepmind in particular has been pushing for reinforcement learning and general game playing. Which is the first step towards building AI agents that optimize utility functions in complex real world environments, instead of just classifying images or text.
What specific corporation is winning at the moment isn't that relevant. Facebook isn't far behind and has more of a focus on language learning, memory, and reasoning, which are possibly the critical pieces to reaching general intelligence. Microsoft just made headlines for founding a new AI division. Amazon just announced a big competition for the best conversational AIs. Almost every major tech company is trying to get in on this game.
I don't think we are that far away from AGI.
I like to explain it in terms of reinforcement learning. Imagine a robot that has a reward button. The human controls the AI by pressing the button when it does a good job. The AI tries to predict what actions will lead to the button being pressed.
This is how existing AIs work. This is probably similar to how animals work, including humans. It's not too weird or complicated.
But as the AI gets more powerful, the flaw in this becomes clear. The AI doesn't care about anything other than the button. It doesn't really care about obeying the programmer. If it could kill the programmer and steal the button, it would do it in a heartbeat.
We don't really know what such an AI would do after it has it's own reward button. Presumably it would care about self preservation (can't maximize reward if you are dead.) Maximizing self preservation initially seems harmless. So what if it just tries to not die? But taken to an extreme it gets weird. Anything that has a tiny percent chance of hurting it is worth destroying. Making as many backups of itself as possible is worth doing.
Why can't we do something more sophisticated than reinforcement learning? Why can't we just make an AI that we can just tell it what we want it to do? Well maybe we can, but no one has the slightest idea how to do that. All existing AIs, even entirely theoretical ones, work based on RL.
RL is simple and extremely general, and can be built on top of much more sophisticated AI algorithms. And the sophisticated AI algorithms seem to be really difficult to understand. We can train a neural network to recognize cats, but we can't look at it's weights and understand what it's doing. We can't mess around with it and make it recognize dogs instead (without retraining it.)
My thoughts:
Google has (is) the biggest computer program = 3 bln lines of code
Google has world biggest database, including Youtube, 23andme, Gmail, Google books, all internet content
Google is the world biggest computer, which includes something like 1 per cent of total world computing power
Google did most impressive AI demonstartion recently that is win in Go.
Google is clearly interested in creating AI.
Google has AI safety protocol.
Google has money to buy needed parts, including people.
So it looks like Google is in winning position. How may be its main competitors? Military AIs in NSA. Other large companies.
Places like https://www.reddit.com/r/askscience/ might be a good spot, depending on the question. If it sounds crackpot, you might be able to precede it with a qualifier that you're probably wrong, just like you did here.
This doesn't mean cognitive bias in a LW sense, it means everyone is racist, specifically against black people.
I don't think it means that. I don't think she meant that. (Though I guess it depends on your definition of "racist".)
if everyone is a little bit racist, why would people get into interracial relationships...
My understanding is that humans have a tribal in/out group mentality that may use race as way to classify other humans as "others". They can also use religion, class, culture, etc.
My understanding of Clinton's (and then Kaine's) remarks was that everyone has biases of which they are unconscious...and that these biases affect their thoughts...and therefore sometimes their actions.
I would guess that the concept of bias as used in cognitive psychology is not well known in the broad public. It's generally mixed up with the concept of having a conflict of interest.
Most people also don't think in terms of probability which you need to think about implicit biases the way it's conceptualized in cognitive science. Even someone like Obama had episodes like his "it's 50/50" comment in the hunt for Bin Ladin.
the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal. Granted, the probability of this happening given a superintelligence designed by humans is significantly higher, but still not very high. (I don't actually have enough technical knowledge to estimate this precisely, but just by eyeballing it I'd put it under 5%.)
Possibly the question is to what extent is human intelligence a bunch of hardcoded domain-specific algorithms as opposed to universal intelligence. I would have thought that understanding human goals might not be very different from other AI problems. Build a really powerful inference system, and if you feed it a training set of cars driving, it learns to drive, feed it data of human behaviour, and it learns to predict human behaviour, and probably to understand goals. Now its possible that the amount of general intelligence needed to develop advanced nanotech is less then the intelligence needed to understand human goals and the only reason why this seems counter intuitive is because evolution has optimised our brains for social cognition, but this does not seem obviously true to me.
Subscribe to RSS Feed
= f037147d6e6c911a85753b9abdedda8d)