Comment author: TheOtherDave 19 April 2012 03:09:05PM 0 points

I would say "Interesting, if true. Do you have any evidence that would tend to indicate that it's true?"

Comment author: HungryTurtle 21 April 2012 12:48:53AM 0 points

I'm trying to find a LW essay, I can't remember what it is called, but it is about maximizing your effort in areas of highest return. For example, if you are a baseball player, you might be around 80% in terms of pitching and 20% in terms of base running. To go from 80% up in pitching becomes exponentially harder, whereas learning the basic skill set to jump from dismal to average base running is not nearly as hard.
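To make the arithmetic concrete, here is a toy sketch of that diminishing-returns reasoning (the cost curve and all numbers are invented for illustration, not taken from the essay):

```python
# Toy model of diminishing returns on skill training: each extra
# point costs more the closer the skill is to mastery (100%), so
# early points in a weak skill are far cheaper than late points in
# a strong one. The cost function is made up for illustration.

def cost_to_improve(level, points=1):
    """Total cost of raising a skill from `level` percent by `points`."""
    return sum(1.0 / (1.0 - (level + i) / 100.0) for i in range(points))

print(cost_to_improve(80, 5))  # pitching, 80% -> 85%: ~27.9 units
print(cost_to_improve(20, 5))  # base running, 20% -> 25%: ~6.4 units
```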

Basically, rather than continuing to grasp at perfection in one skill set, it is more efficient to maximize basic levels in a variety of skill sets related to the target field. Do you know the essay I am talking about?

Comment author: TheOtherDave 20 April 2012 02:57:42PM 0 points

Either Swimmer or Dave, are either of you aware of a practical methodology for rationalizing the masses

For a sufficiently broad understanding of "practical" and "the masses" (and understanding "rationalizing" the way I think you mean it, which I would describe as educating), no. Way too many people on the planet for any of the educational techniques I know about to affect more than the smallest fraction of them without investing a huge amount of effort.

It's worth asking what the benefits are of better educating even a small fraction of "the masses", though.

or a reason to think why a more efficient society would be any less oppressive or war-driven

That depends, of course, on what the society values. If I value oppressing people, making me more efficient just lets me oppress people more efficiently. If I value war, making me more efficient means I conduct war more efficiently.

My best guess is that collectively we value things that war turns out to be an inefficient way of achieving. I'm not confident the same is true about oppression.

In fact, in a worst case scenario, I see a world of majorly rational people as transforming into an even more efficient war machine, and killing us all faster.

Sure. But that scenario implies that wanting to kill ourselves is the goal we're striving for, and I consider that unlikely enough to not be worth worrying about much.

What is the perceived end goal of Friendly AI? Is it that an unbiased, unfailing intelligence replaces humans as the primary organizers and arbiters of power in our society

Similar, yes. A system designed to optimize the environment for the stuff humans value will, if it's a better optimizer than humans are, get better results than humans do.

or is it that humanity itself is digitized

Almost entirely orthogonal.

Comment author: HungryTurtle 21 April 2012 12:37:55AM 0 points

That depends, of course, on what the society values. If I value oppressing people, making me more efficient just lets me oppress people more efficiently. If I value war, making me more efficient means I conduct war more efficiently.

So does rationality determine what a person or group values, or is it merely a tool to be used towards subjective values?

Sure. But that scenario implies that wanting to kill ourselves is the goal we're striving for, and I consider that unlikely enough to not be worth worrying about much.

My scenario does not assume that all of humanity views itself as one in-group, whereas what you are saying does. Killing ourselves and killing them are two very different things. I don't think many groups have the goal of killing themselves, but do you not think that the eradication of competing out-groups could be seen as increasing in-group survival?

Almost entirely orthogonal.

You are going to have to explain what you mean here.

Comment author: Swimmer963 13 April 2012 01:59:38PM 0 points

Based on our earlier discussion of exactly this topic, I would say he wants to use some way of slowing down technological progress... My main argument against this is that I don't think we have a way of slowing technological progress that a) affects all actors (it wouldn't be a better world if only those nations not obeying international law were making technological progress), and b) has no negative ideological effects. (Has there ever been a regime that was pro-moderation-of-progress without being outright anti-progress? I don't know, I haven't thoroughly researched this, so maybe I'm just pattern-matching.) Also, I'm not sure how you'd set up the economic system of that society so there weren't big incentives for people or companies to innovate and profit from it.

Of course, "no one has ever succeeded at X in the past" isn't an unstoppable argument against X at all... But I am worried than any attempt to transform our current, no-brakes-on society into a 'moderated' society would be messy in the short term, and probably fail in the long term. (At our current level of technology, it's basically possible for individuals to make progress on given problems, and that would be very hard to stop.)

Comment author: HungryTurtle 20 April 2012 03:13:56PM 0 points

I don't think we have a way of slowing technological progress that a) affects all actors (it wouldn't be a better world if only those nations not obeying international law were making technological progress), and b) has no negative ideological effects.

By "negative ideological effects" do you mean the legitimization of some body of religious knowledge? As stated in my post to Dave, if your objective is to re-condition society to have a rational majority, I can see how religious knowledge (which is often narratively rather than logically sequenced) would be seen as having "negative ideological effects. However, I would argue that there are functional benefits of religion. One of which is the limitation of power. Historically technological progress has for millennia been slowed down by religious and moral barriers. One of the main effects of the scientific revolution was to dissolve these barriers that impeded the production of power (See Mannheim, Ideology and Utopia). However, the current constitution of American society still contains tools of limitation, even non-religious ones. People don’t often look at it this way, but taxation is used in an incredibly moral way. Governments tax highly what they want to dissuade and provide exemptions, even subsidies for what they want to promote. The fact that there is a higher tax on cigarettes is a type of morally based restriction on the expansion of the tobacco industry in our society.

Stronger than taxation, there is the ability to flat-out outlaw something or stigmatize it. Compared to the status of marijuana as an illegal substance, and the stigma it carries in many communities, the limitation of the cigarette industry through taxation seems relatively minor.

Whether through social stigma, taxation, or outlawing, there are several tools at our nation's disposal to alter the development of industries according to subjective moral values, yet next to none of them are aimed at limiting the information-technology industries. There is no tax on certain types of research based on a judgment of what is right or wrong. To the contrary, the vast majority of scientific research is for the development of weapons technologies. And who are the primary funders of this research? The Department of Homeland Security and the U.S. military fund somewhere around 65-80% of academic research (this statistic might be a little off).

In regard to non-academic research, one of the primary impetuses may not be militarization, but it is without doubt entrepreneurialism. Where the primary focus of a person or group is the development of capital, the purpose of innovation becomes not fulfilling some need, but creating needs to fulfill the endless goal of cultivating more wealth. Jean Baudrillard is a very interesting sociologist whose work is built around the idea that in Western society the desires (demands) of people no longer lead to the production of a supply, but rather desires (demands) are artificially produced by capitalists to fulfill their supplies. A large part of this production is symbolic, and it ultimately distorts the motivations and actions of people to contradict the territories they live in.

Comment author: TheOtherDave 13 April 2012 01:50:32PM 0 points

Yup, implementation of technological innovation has costs as well as benefits.

What kind of moderation do you have in mind?

Comment author: HungryTurtle 20 April 2012 02:19:51PM 1 point

Honestly, I would moderate society with more positive religious elements. In my opinion modern society has preserved many dysfunctional elements of religion while abandoning the functional benefits. I can see that a community of rationalists would have a problem with this perspective, seeing that religion almost always results in an undereducated majority being enchanted by their psychological reflexes; but personally, I don’t see the existence of an irrational mass as unconditionally detrimental.

It is interesting to speculate about the potential of a majorly rational society, but I see no practical method of accomplishing this, nor any real reason to believe that such a configuration would necessarily be superior to the current model.

Either Swimmer or Dave, are either of you aware of a practical methodology for rationalizing the masses, or a reason to think why a more efficient society would be any less oppressive or war-driven? In fact, in a worst case scenario, I see a world of majorly rational people as transforming into an even more efficient war machine, and killing us all faster. As for the project of Friendly AI, I do not know that much about it. What is the perceived end goal of Friendly AI? Is it that an unbiased, unfailing intelligence replaces humans as the primary organizers and arbiters of power in our society, or is it that humanity itself is digitized? I would be very interested to know…without being told to read an entire tome of LW essays.

Comment author: TheOtherDave 13 April 2012 01:57:27PM 2 points

I think you're welcome to have whatever goals you like, and so are the soccer players. But don't be surprised if the soccer players, acknowledging that your goal does not in fact seem to be at all relevant to anything they care about, subsequently allocate their resources to things they care about more and treat you as a distraction rather than as a contributor to their soccer-playing community.

Comment author: HungryTurtle 19 April 2012 12:29:39PM -1 points

What would you say if I said caring about my goals in addition to their own goals would make them better soccer players?

Comment author: DSimon 16 April 2012 03:57:53AM 0 points

In regard to why it's possible, I'll just echo what TheOtherDave said.

The reason it's helpful to try for a single top-level utility function is that otherwise, whenever there's a conflict among the many, many things we value, we'd have no good way to consistently resolve it. If one aspect of your mind wants excitement, and another wants security, what should you do when you have to choose between the two?

Is quitting your job a good idea or not? Is going rock climbing instead of staying at home reading this weekend a good idea or not? Different parts of your mind will have different opinions on these subjects. Without a final arbiter to weigh their suggestions and consider how important excitement and security are relative to each other, how do you decide in a non-arbitrary way?
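As a toy sketch of what that final arbiter might look like (the weights and scores are made up for illustration, not a claim about anyone's actual values):

```python
# A single top-level utility function resolves conflicts between
# sub-values by putting them on one common scale: every option is
# scored per value, weighted, and summed, so any conflict reduces
# to comparing numbers.

VALUE_WEIGHTS = {"excitement": 0.4, "security": 0.6}

def utility(outcome):
    """Weighted sum of per-value scores (each in 0..1)."""
    return sum(VALUE_WEIGHTS[v] * s for v, s in outcome.items())

options = {
    "quit the job": {"excitement": 0.9, "security": 0.2},  # utility 0.48
    "keep the job": {"excitement": 0.3, "security": 0.8},  # utility 0.60
}

print(max(options, key=lambda name: utility(options[name])))
# -> "keep the job", given these particular made-up weights
```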

So I guess it comes down to: how important is it to you that your values are self-consistent?

More discussion (and a lot of controversy on whether the whole notion actually is a good idea) here.

Comment author: HungryTurtle 18 April 2012 12:44:35PM 0 points

Thanks for the link. I'll respond back when I get a chance to read it.

Comment author: Desrtopa 15 April 2012 03:42:14AM 7 points

Eliezer hasn't argued for the unquestioned rightness of rapid, continual technological innovation. On the contrary, he's argued that scientists should bear some responsibility for the potentially dangerous fruits of their work, rather than handwaving it away with the presumption that the developments can't do any harm, or if they can, it's not their responsibility.

In fact, the primary purpose of the SIAI is to try and get a particular technological development right, because they are convinced that getting it wrong could fuck up everything worse than anything has ever been fucked up.

Comment author: HungryTurtle 18 April 2012 12:20:30PM 0 points

Could you show me where he argues this?

Comment author: thomblake 13 April 2012 02:55:19PM 0 points

Definitely barking up the wrong tree there. ~~Chaos-worshippers~~ Dynamists like me are under-represented here for such a technology-loving community - note that the whole basis of FAI is that rapidly self-improving technology by default results in a Bad End.

Contrast EY's notion of AGI with Ben Goertzel's.

Comment author: HungryTurtle 14 April 2012 01:19:28PM -2 points

Definitely barking up the wrong tree there.

I am asking for Eliezer to apply the technique described in this essay to his own belief system. I don't see how that could be barking up the wrong tree, unless you are implying that he is somehow impervious to "spontaneously self-attack[ing] strong points with comforting replies to rehearse, then to spontaneously self-attack the weakest, most vulnerable points."

Comment author: HungryTurtle 13 April 2012 01:41:20PM -3 points

I would like to ask whether you have turned this idea against your own most cherished beliefs.

I would be really interested to hear what you see when you "close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts" rationality and the singularity the most.

If you would like to know what someone who partially disagrees with you would say:

In my opinion, the objective of being a rationalist contains the same lopsided view of technology's capacity to transform reality that you attribute to God in the Jewish tradition.

According to Jewish theology, God continually sustains the universe and chooses every event in it; but ordinarily, drawing logical implications from this belief is reserved for happier occasions. By saying "God did it!" only when you've been blessed with a baby girl, and just-not-thinking "God did it!" for miscarriages and stillbirths and crib deaths, you can build up quite a lopsided picture of your God's benevolent personality.

Technology cures diseases, provides a more materially comfortable lifestyle for many people, and feeds over 7 billion people. By saying "rapid innovation did it" when blessed with a baby girl who would have died in birth without modern medical equipment, and just-not-thinking "rapid implementation of innovation did it" for ecocide, the proliferation of nuclear waste, the destruction of the ocean, the increase in cancer, and the ability to wipe out an entire city thousands of miles away, you can build up quite a lopsided picture of technological development's beneficial personality.

The unquestioned rightness of rapid, continual technological innovation, which disregards any negative results as potential signs of a need for moderation, is what I see as the weakest point of your beliefs. Or at least of my understanding of them.

Comment author: Nectanebo 13 April 2012 05:08:06AM 1 point

Maybe this was a poor choice, but it was what I chose to do.

Good, now that you've realised that, perhaps you might want to abandon that name.

The idea of using your time and various other resources carefully and efficiently is a good virtue of rationality. Framing it as being irrational is inaccurate and kinda incendiary.

Comment author: HungryTurtle 13 April 2012 12:57:42PM -1 points

The idea of using your time and various other resources carefully and efficiently is a good virtue of rationality. Framing it as being irrational is inaccurate and kinda incendiary.

Here is my reasoning for choosing this title. If you don't mind, could you read it and tell me where you think I am mistaken?

I realize that saying 'rationally irrational' appears to be a contradiction. However, the idea is talking about the use of rational methodology at two different levels of analysis. Rationality at the level of goal prioritization potentially results in the adoption of an irrational methodology at the level of goal achievement.

L1 - Goal Prioritization
L2 - Goal Achievement

L1 rationality can result in a limitation of L2 rationality within low-priority goal contexts. Let's say that someone was watching me play a game of soccer (since I have been using the soccer analogy). As they watched, they might critique the fact that my strategy was poorly chosen, and that the overall effort exerted by me and my teammates was lackluster. To this observer, who considers themselves a soccer expert, it would be clear that my and my team's performance was subpar. The observer took notes of all our flaws and inefficient habits, then after the game wrote them all up to present to us. Upon hearing all these insightful critiques, I tell the observer that I am grateful for his effort, but am not going to change how I or my team plays soccer, and he is shocked. He tries to convince me that I am playing wrong, that we will never win the way I am playing. And he is correct. To any knowledgeable observer, I was playing the game of soccer poorly, even irrationally. Without knowledge of L1 (which is not observable), the execution of L2 (which is observable) cannot be deemed rational or irrational, and in my opinion, will appear irrational in many situations.
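As a toy model of the two levels (all the goal names, priorities, and numbers below are invented purely for illustration):

```python
# L1 (goal prioritization) splits a limited budget of effort across
# goals; L2 (goal achievement) then plays each goal with whatever
# effort it received. An observer who only sees L2 will judge a
# deliberately low-effort performance as irrational.

GOAL_PRIORITIES = {"win_soccer_game": 0.2, "enjoy_time_with_team": 0.8}
TOTAL_EFFORT = 10.0

def allocate_effort(priorities, total):
    """L1: divide effort in proportion to each goal's priority."""
    norm = sum(priorities.values())
    return {goal: total * p / norm for goal, p in priorities.items()}

effort = allocate_effort(GOAL_PRIORITIES, TOTAL_EFFORT)
print(effort)  # {'win_soccer_game': 2.0, 'enjoy_time_with_team': 8.0}
# The soccer game gets only 2 of 10 units of effort, so the L2 play
# looks subpar to an expert, yet it is exactly what the
# (unobservable) L1 allocation calls for.
```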

Would you say that it appears irrational to you that I have chosen to label this idea 'rationally irrational'? If that is correct, I would suggest that I have some L1 that you are unaware of, and that while my labeling is irrational in regard to L2 (receiving high karma points / recognition in publishing my essay on your blog), I have de-prioritized this L2 for the sake of my L1. What do you think?
