Comment author: Mac 19 August 2014 02:38:04PM 0 points [-]

I agree with you that automated processes will eventually have an absolute advantage in all areas of productivity. However, humans only need a comparative advantage to be employable. The theory of comparative advantage is a "powerful yet counter-intuitive insight in economics" and I recommend checking it out. Ricardo's example is especially instructive; the link is below.

http://en.wikipedia.org/wiki/Comparative_advantage#Ricardo.27s_Example

Imagine Portugal is a robot, and England is a human.

Comment author: Viliam_Bur 20 August 2014 07:33:54AM *  0 points [-]

I am not sure this analogy will work.

As an extreme example, today a computer processor can calculate one addition in 1 nanosecond, and one multiplication also in 1 nanosecond. A human can calculate one addition in 10 seconds, and one multiplication in 100 seconds (multiple-digit integers).

Taking the law of comparative advantage too literally, if I have a computer, I should be able to trade with people, offering to multiply integers for them if they will add integers for me, at a ratio of, say, 3 additions for 1 multiplication. They should profit, because instead of spending 100 seconds doing one multiplication, they only need to spend 30 seconds doing three additions for me, and then I will do the multiplication for them. I should profit, because instead of wasting 3 nanoseconds on three additions, I only need to spend 1 nanosecond on one multiplication.

But in real life it wouldn't work, for the obvious reason (transaction costs being several orders of magnitude higher than any possible profit).

This is a silly example, but it shows that even comparative advantage is not guaranteed to save the day. If the difference between robots and humans becomes too large, the costs of having to deal with humans will outweigh the possible gains from trade.
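
To put the numbers side by side, here is a minimal back-of-the-envelope sketch (the transaction cost figure is just an illustrative guess):

    # Back-of-the-envelope check of the trade above (all numbers are illustrative).
    ADD_HUMAN = 10.0    # seconds per addition (human)
    MUL_HUMAN = 100.0   # seconds per multiplication (human)
    ADD_CPU = 1e-9      # seconds per addition (computer)
    MUL_CPU = 1e-9      # seconds per multiplication (computer)

    # The trade: the human does 3 additions for me, I do 1 multiplication for them.
    human_gain = MUL_HUMAN - 3 * ADD_HUMAN   # 100 s - 30 s = 70 s saved
    cpu_gain = 3 * ADD_CPU - MUL_CPU         # 3 ns - 1 ns = 2 ns saved

    # But merely telling a human which integers to add costs on the order of seconds,
    # vastly more than the 2 ns the computer stands to gain.
    TRANSACTION_COST = 10.0                  # seconds, and that is optimistic

    print(f"human gains {human_gain:.0f} s per trade")
    print(f"computer gains {cpu_gain:.1e} s per trade")
    print(f"computer's net gain: {cpu_gain - TRANSACTION_COST:.1f} s")  # negative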

Comment author: Slider 18 August 2014 08:23:30PM 0 points [-]

I have trouble understanding what the problem is.

Humans are not necessary for work, but we still command enormous economic power. It's not like we are going to be poor. So we might have something like 1 working day and 6 days off in a week.

Wasn't it a happy ending for the horses in the video? The majority of horses are now in life-long retirement. If we can get stuff without working, why insist on working?

Comment author: Viliam_Bur 19 August 2014 09:03:29AM *  5 points [-]

"It's not like we are going to be poor."

Unless those robots belong to you, or the money from the owners of the robots somehow gets to you (e.g. through taxation and basic income), you are going to be poor once you are no longer able to compete with the robots.

By the way, there will also be robotic police forces and armies, able to protect the system no matter how many humans become dissatisfied, or how dissatisfied they become.

Comment author: Mac 18 August 2014 10:34:13PM -1 points [-]

This video is a polished example of the Luddite fallacy.

Are things “different this time”? As long as we haven't created godlike AI, humans will still have a comparative advantage in something.

Comment author: Viliam_Bur 19 August 2014 08:56:45AM 1 point [-]

"humans will still have a comparative advantage in something"

At this moment, I think the greatest advantage of humans with low intelligence is that they are relatively flexible, "easy to program", and come with built-in heuristics for unexpected situations. By which I mean that they can easily walk across your factory or shop; and you don't have to be a computer programmer to explain to them that you want them to pick up boxes in one place and move them to other places, sorting them by color. And in case of fire (which you forgot to mention explicitly during their job training), instead of quietly continuing to move the boxes until everything burns, they will call for help.

Give me reasonably cheap robots with these skills, and I think some people will have no economic comparative advantage left. Getting from there to replacing an average programmer would probably be a shorter distance than getting from zero to there.

Comment author: Vladimir_Nesov 17 August 2014 08:41:09PM *  10 points [-]

I agree. Viliam seems to be currently the most active of the people who would fit the role, if he's willing to take it.

Comment author: Viliam_Bur 18 August 2014 06:52:08AM *  21 points [-]

Thanks for the trust. I can imagine making unpleasant decisions based on data. I am not sure how to get that data; I guess I would speak with the developers and ask them to make me some database reports, or Kaj could explain to me what he did.

I accept the nomination. I am okay with doing this either alone or with someone else -- would slightly prefer to have a second opinion, but could act without it, too.

Comment author: Viliam_Bur 17 August 2014 02:10:12PM 4 points [-]

I will post some thoughts about the document in the second link:

I believe that speaking about a decline is an exaggeration. We have MIRI and CFAR, which seem to have enough money to survive and do some activities. We have meetups around the world. We have discussion on LW; sometimes more articles, sometimes less, but usually it's a few articles a day. I don't think we had more than this in the past. Sometimes it feels like we are losing momentum (which is not the same thing as decline), and I am not sure whether data support it. I admit that a few years ago I expected "something huge", and these days it's like "I don't even know what exactly is happening", but this may be a fact about my knowledge.

Maybe a good start would be to make a page on the LW wiki describing the timeline of the rationalist movement: what happened when. Looking at this calendar would give us a better idea of our progress. (It would also provide some material for the media.)

I agree with the idea of smaller task groups. I mean, we have MIRI, we have CFAR, but most of us at this website are members of neither. Yet some of us would like to do something. We could start informal groups cooperating on the internet (or in person, in those lucky places with enough rationalists), pick some smaller task, and do it. There is a website for volunteers, but we don't have to limit ourselves to the tasks given by other people; we could also try our own ideas.

The details of motivation and organization of these groups seem overly complicated to me. If a group has half a dozen members, I guess we can just use common sense. The difficult part would be finding those half dozen people willing to spend their time on a common goal. And I think that, for a start, a sufficient motivation could be: just do your task, and then publish a bragging article on LW.

Educating those people in rationality seems like a task for one such group.

Comment author: Azathoth123 16 August 2014 04:38:34AM 2 points [-]

Where did those rules come from? What process generated them?

Where did your utility function come from? What process generated it?

Comment author: Viliam_Bur 16 August 2014 04:27:23PM 1 point [-]

Evolution, of course.

We could classify reflexes and aversions as deontological rules. Some of them would even sound moral-ish, such as "don't hit a person stronger than you" or "don't eat disgusting food". Not completely unlike what some moral systems say. I guess more convincing examples could be found.

But if the rule is more complex, if it requires some thinking and modelling of the situation and other people... then consequences are involved. Maybe imaginary consequences (if we don't give sacrifice to gods, they will be angry and harm us). Though this could be considered merely a rationalization of a rule created by memetic evolution.

Comment author: Vulture 13 August 2014 08:45:10PM 0 points [-]

Wait, what? Did considering genocide more heinous than regular mass murder only start with the end of WWII?

Comment author: Viliam_Bur 15 August 2014 08:49:55AM *  2 points [-]

Unfortunately, genocides happen all the time.

But only one of them got big media attention, which made it the canonical example of evil.

Cynically speaking: if you want the world to not pay attention to a genocide, (a) don't do it in a first-world country, and (b) don't do it during a war with an opponent who can make condemning the genocide part of their propaganda, especially if you lose the war in the end.

Comment author: polymathwannabe 13 August 2014 08:45:45PM 7 points [-]

Intelligence/muscles = a natural faculty every human is born with, in greater or lesser degree.

Rationality/gymnastics = a teachable set of techniques that help you refine what you can do with said natural faculty.

Comment author: Viliam_Bur 15 August 2014 08:18:28AM *  1 point [-]

This probably explains why the distinction between intelligence and rationality makes sense for humans (where some skills are innate and some are learned), but doesn't necessarily make sense for self-improving AIs.

Intelligence is about our biological limits, which determine how much optimizing power we can produce in the short term (on the scale of seconds), and which are more or less fixed. Rationality is about using this optimizing power over the long term. Intelligence is how much "mental energy" you can generate per second. Rationality is how you use this energy, whether you are able to accumulate it, etc.

Seems like in humans, most of this generated energy is wasted, so there can be a great difference between how much "mental energy" you can generate per second, and whether you can accumulate enough "mental energy" to reach your life goals. (Known as: "if you are so smart, why aren't you rich?") A hypothetical perfect Bayesian machine could use all the "mental energy" efficiently, so there would be some equation connecting its intelligence and rationality.

Comment author: Viliam_Bur 15 August 2014 07:58:55AM *  4 points [-]

This reminds me of a part of the Zombies sequence, specifically the Giant Lookup Table. Yes, you can approximate consequentialism by a sufficiently complex set of deontological rules, but the question is: Where did those rules come from? What process generated them?

If we somehow didn't have any consequentialist intuitions, what is the probability that we would have invented a "don't murder" deontological rule, instead of any of the possible alternatives? Actually, why would we even feel a need to have any rules?

Deontological rules seem analogous to a lookup table. They are precomputed answers to ethical questions. Yes, they may be correct. Yes, using them is probably much faster than trying to compute the answers from scratch. But the reason why we have these deontological rules instead of some other deontological rules is partly consequentialism and partly historical accident.

Comment author: Stefan_Schubert 14 August 2014 08:18:46PM 3 points [-]

Much of my current research (in philosophy, at LSE) concerns the general themes of "objectivity" and (strategies for strengthening) "co-operation", especially in politics. I didn't start doing research on these themes because of any concern with existential risk. However, it could be argued that in order to reduce X-risk, the political system needs to be improved. People need to become less tribal, more co-operative, and more inclined to accept rational arguments, both between and within nation states (though I mostly do research on the latter). In any case, here is what I'm working on/considering working on, in more precise terms:

1) Strategies for detecting tribalism. People's beliefs on independent but politically controversial questions, such as to what extent stricter gun laws would reduce the number of homicides and to what extent climate change is man-made, tend to be "suspiciously coherent" (i.e. you take either the pro-Republican position on all of these questions, or the pro-Democrat position on all of them). The best explanation of this is that most people acquire whatever empirical beliefs the majority of their fellow tribe members hold instead of considering the actual evidence. I'm developing statistical techniques intended to detect this sort of tribalism or bias. For instance, people could take tests of their degree of political bias. Alternatively, you could try to read off their degree of bias from existing data. Making these inferences sufficiently precise and reliable promises to be a tough task, however. (A deliberately crude sketch of the kind of score I have in mind is given after this list.)

2) Strategies for detecting "degrees of selfishness". This strategy is quite similar, but rather than testing the correlation between your empirical beliefs on controversial questions and those of the party mainstream, what is tested is the correlation between your opinions on policy and the policies that suit your interests. For instance, if you are male, have a high income, drive a lot, and don't smoke, and at the same time take an anti-feminist stance, oppose progressive taxes, oppose petrol taxes, and want to outlaw smoking, you would be given a high "selfishness score" (this score should probably be given another, less toxic name). This would serve to highlight selfish behaviour among voters and politicians and promote objective and altruistic behaviour.

3) Voting Advice Applications (VAAs) - i.e. tests of which party is closest to your own views - are already being used to try to increase interest in politics and make people vote more on the basis of policy issues, and less on emotional factors such as which politician they find most attractive or which party enjoys success at the moment (the bandwagon effect). However, most voting advice applications are still appallingly bad, since many important questions are typically left out. Hence it's quite rational, in my opinion, for voters to discard their advice. I'd like to try to construct a radically improved VAA, which would be more than just a toy. Instead, the goal would be to construct a test which would be better at identifying which party best satisfies the voters' considered preferences than the voters' intuitive judgments are. If people then actually used these VAAs, this would hopefully lead to the politicians whose policies correspond most closely to those of the voters getting elected, as is intended in a democracy, and to politics getting more rational in general. The downside is that this is very hard to do in practice and that the market for VAAs is big.

4) Systematically criticizing politicians' and other influential people's arguments. This could be done either by professionals (e.g. philosophers) or on a wiki-like webpage, something that is described here. What would be great would be if you could somehow gamify this; e.g. if, in election debates, referees gave and deducted points in real time, and viewers could see this (e.g. through an app) instantaneously while watching the debate.
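
To illustrate the kind of score I have in mind in (1), here is a deliberately crude sketch; the answer coding and the averaging rule are illustrative assumptions, not the actual statistical technique:

    # Answers to logically independent empirical questions, each coded so that
    # +1 is the Republican-coded answer and -1 the Democrat-coded one.
    # Someone who weighs the evidence on each question separately should land
    # near 0 on average; a perfectly tribal respondent lands near +1 or -1.
    def coherence_score(answers):
        """Crude tribalism proxy: absolute mean answer over independent questions."""
        return abs(sum(answers) / len(answers))

    tribal_respondent = [+1, +1, +1, +1, +1]  # toes the party line on everything
    mixed_respondent = [+1, -1, +1, -1, -1]   # positions don't track either tribe

    print(coherence_score(tribal_respondent))  # 1.0 -> suspiciously coherent
    print(coherence_score(mixed_respondent))   # 0.2 -> little sign of tribalism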

Any input regarding how tenable and important these ideas are (especially in relation to each other), and how relevant they are for addressing x-risk, is welcome.

Comment author: Viliam_Bur 15 August 2014 07:42:45AM *  9 points [-]

I like these ideas, but when speaking about political topics, even more attention must be paid to connotations. For example, most people would consider "cooperation" good, until you explain to them that, technically, cooperation also includes things like "trading with slave-owners without trying to liberate their slaves". Suddenly it doesn't feel so good. Similarly, your definition of "selfishness" includes "not wanting to eat rat poison and supporting a ban on adding rat poison to human food"; and a psychopath who doesn't want to eat rat poison but is perfectly okay with feeding it to other people is maximally "unselfish" on this specific topic.

Speaking about correlations (which are not always causations), it seems important to me to distinguish things like "I happen to have trait X, therefore I support law Y (which benefits X)" from things like "I honestly believe that having people with trait X benefits society, which is why I developed trait X, and why I also support law Y (which benefits X) to motivate more people to become X". Without this distinction we are putting the opinions "I am white, therefore slavery is okay" and "I am a surgeon, and I think only people who studied medicine should be allowed to practice surgery" into the same category.

I guess the lesson here is that, to avoid applause lights, for each noble-sounding definition you should also try to give an example of a horrible thing that technically matches it.
