Comment author: RobbBB 29 November 2012 07:24:10AM *  5 points [-]

The point is very well-made. But it's not a philosophy-specific one. Mathematicians with a preferred ontology or axiomatization, theoretical physicists with a preferred nonstandard model or QM interpretation, also have to face up to the fact that neither intuitiveness nor counter-intuitiveness is a credible guide to truth — even in cases where there is no positive argument contesting the intuition. Some account is needed for why we should expect intuitions in the case in question to pick out truths.

Comment author: JaySwartz 30 November 2012 07:47:06PM *  1 point [-]

I think a semantic check is in order. Intuition can be defined as an immediate cognition of a thought that is not inferred from a previous cognition of the same thought. This definition allows for prior learning to impact intuition. Trained mathematicians will make intuitive inferences based on their training; these can be called breakthroughs when they are correct. It would be highly improbable for an untrained person to have the same intuition, or accurate intuitive thoughts, about advanced math.

Intuition can also be defined as untaught, non-inferential, pure knowledge. This would seem to invalidate the example above since the mathematician had a cognition that relied on inferences from prior teachings. Arriving at an agreement on which definition this thread is using will help clarify comments.

Comment author: DanArmak 30 November 2012 06:28:20PM 8 points [-]

epistemology has little or nothing to do with how untrained people gain confidence in their beliefs as knowledge, etc.

Epistemology is about how to acquire beliefs correctly. How untrained people actually acquire beliefs is some kind of social science. Just like rocketry is distinct from investigating how untrained people imagine rockets work.

Comment author: JaySwartz 30 November 2012 06:43:16PM 5 points [-]

More specifically, epistemology is a formal field of philosophy. Epistemologists study the interaction of knowledge with truth and belief. Basically, what we know and how we know it. They work to identify the source and scope of knowledge. An example of an epistemological statement goes something like this: I know I know how to program because professors who teach programming, authoritative figures, told me so by giving me passing grades in their classes.

In response to comment by JaySwartz on Wanting to Want
Comment author: NancyLebovitz 28 November 2012 10:45:14PM 2 points [-]

From what I've heard, the typical response to believing that blond people are dumb and observing that blond Sandy is intelligent is to believe that Sandy is an exception, but blond people are dumb.

Most people are very attached to their generalizations.

Comment author: JaySwartz 29 November 2012 03:52:23PM -1 points [-]

Quite right about attachment. It may take quite a few exceptions before they are no longer treated as exceptions, particularly if the original concept is regularly reinforced by peers or other sources. I would expect exceptions to get a bit more weight because they are novel, but not so much as to offset higher levels of reinforcement.

Comment author: Vaniver 25 October 2012 10:59:51PM 4 points [-]

I think you're looking at this discussion from the wrong angle. The question is, "how do we differentiate first-order wants that trump second-order wants from second-order wants that trump first-order wants?" Here, the order only refers to the psychological location of the desire: to use Freudian terms, the first order desires originate in the id and the second order desires originate in the superego.

In general, that is a complicated and difficult question, which needs to be answered by careful deliberation- the ego weighing the very different desires and deciding how to best satisfy their combination. (That is, I agree with PhilGoetz that there is no easy way to distinguish between them, but I think this is proper, not bothersome.)

Some cases are easier than others- in the case of Sally, who wants to commit suicide but wants to not want to commit suicide, I would generally recommend methods of effective treatment for suicidal tendencies, not the alternative. But you should be able to recognize that the decision could be difficult, at least for some alteration of the parameters, and if the alteration is significant enough it could swing the other way.

There is also another factor which clouds the analysis, which is that the ego has to weigh the costs of altering, suppressing, or foregoing one of the desires. It could be that Larry has a twin brother, Harry, who is not homosexual, and that Harry is genuinely happier than Larry is, and that Larry would genuinely prefer being Harry to being himself; he's not mistaken about his second-order want.

However, the plan to be (or pretend to be) straight is much more costly and less likely to succeed than the plan to stop wanting to be straight, and that difference in costs might be high enough to determine the ego's decision. Again, it should be possible to imagine realistic cases in which the decision would swing the other way. (Related.)

It's also worth considering how much one wants to engage in sour grapes thinking- many of our modern moral intuitions about homosexuality seem rooted in the difficulty of changing it. (Note Alicorn's response.) Given that homosexuality is immutable, plans to change homosexuals are unlikely to succeed, and they might as well make the best of their situation. But I hope it's clear that, at its root, this is a statement about engineering reality, not moral principles- if there were a pill that converted homosexuals to heterosexuals, then the question of how society treats homosexuals would actually be different, and if Larry asked you to help him decide whether or not to take the pill, I'm sure you could think of some things to write in the "pro" column and some in the "con" column.

The reason this is worth considering is that, as should be unsurprising, the two wants conflict. Often, we don't expect the engineering reality to change. Male homosexuality is likely to be immutable for the lifetimes of those currently alive, and it's more emotionally satisfying to declare that homosexual desires don't conflict with important goals than to reflect on the tradeoffs that homosexuals face and heterosexuals don't. Doing so, however, requires a sort of willful blindness, which may or may not be worth the reward gained by engaging in it.

In response to comment by Vaniver on Wanting to Want
Comment author: JaySwartz 28 November 2012 09:33:44PM -1 points [-]

While the Freudian description is accurate relative to sources, I struggle to order them. I believe it is an accumulated weighting that makes one thought dominate another. We are indeed born with a great deal of innate behavioral weighting. As we learn, we strengthen some paths and create new paths for new concepts. The original behaviors (fight or flight, etc.) remain.

Based on this known process, I conjecture that experiences have an effect on the weighting of concepts. This weighting sub-utility is a determining factor in how much impact a concept has on our actions. When we discover fire burns our skin, we don't need to repeat the experience very often to weigh fire heavily as something we don't want touching our skin.

If we constantly hear, "blonde people are dumb," each repetition increases the weight of this concept. Upon encountering an intelligent blonde named Sandy, the weighting of the concept is decreased and we create a new pattern for "Sandy is intelligent" that attaches to "Sandy is a person" and "Sandy is blonde." If we encounter Sandy frequently, or observe many intelligent blonde people, the weighting of the "blonde people are dumb" concept is continually reduced.

Coincidentally, I believe this is the motivation, even if a subconscious one, behind religious leaders urging their followers to attend services regularly. The service maintains or increases the weighting of the set of religious concepts, as well as related concepts such as peer pressure, offsetting any weighting loss between services. The depth of conviction to a religion could potentially be correlated with the frequency of religious events. But I digress.

Eventually, the impact of the concept "blonde people are dumb" on decisions becomes insignificant. During this time, each encounter strengthens the Sandy pattern or creates new patterns for blondes. At some level of weighting for the "intelligent" and "blonde" concepts associated with people, our brain economizes by creating a "blonde people are intelligent" concept. Variations of this basic model are generally how beliefs are created and the weights of beliefs are adjusted.
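The reinforcement-and-decay dynamic described above could be sketched roughly in code. Everything here (the update rule, the unit weights, the concept strings) is my own illustrative assumption, not a claim about how the brain actually implements weighting:

```python
# Sketch of the weighting model: repetition reinforces a concept's
# weight, while conflicting observations decay it toward zero.

class ConceptNet:
    def __init__(self):
        self.weights = {}  # concept -> accumulated weight

    def reinforce(self, concept, amount=1.0):
        """Each repetition of a concept increases its weight."""
        self.weights[concept] = self.weights.get(concept, 0.0) + amount

    def contradict(self, concept, amount=1.0):
        """A conflicting observation reduces the weight, never below zero."""
        self.weights[concept] = max(0.0, self.weights.get(concept, 0.0) - amount)

net = ConceptNet()
for _ in range(5):                      # "blonde people are dumb" heard 5 times
    net.reinforce("blonde people are dumb")
for _ in range(5):                      # 5 encounters with intelligent Sandy
    net.contradict("blonde people are dumb")
    net.reinforce("Sandy is intelligent")

print(net.weights["blonde people are dumb"])   # decayed back to 0.0
print(net.weights["Sandy is intelligent"])     # strengthened to 5.0
```

The symmetric unit increments are the simplest possible choice; the comment above about novelty earning "a bit more weight" would correspond to a larger `amount` for contradicting observations.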

As with fire, we are extremely averse to incongruity. We have a fundamental drive to integrate our experiences into a cohesive continuum. Something akin to adrenaline is released when we encounter incongruity, driving us to find a way to resolve the conflicting concepts. If we can't find a factual explanation, we rationalize one in order to return to balanced thoughts.

When we make a choice of something over other things, we begin to consider the most heavily weighted concepts that are invoked based on the given situation. We work down the weighting until we reach a point where a single concept outweighs all other competing concepts by an acceptable amount.

In some situations, we don't have to make many comparisons due to the invocation of very heavily weighted concepts, such as when a car is speeding towards us while we're standing in the roadway. In other situations, we make numerous comparisons that yield no clear dominant concept and can only make a decision after expanding our choice of concepts.
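The selection process just described, working down the weighting until one concept dominates by an acceptable amount, might look something like this. The margin threshold, the expansion callback, and the example weights are all illustrative assumptions:

```python
# Sketch of the decision loop: consider invoked concepts in descending
# weight order and commit only when the top concept beats its nearest
# competitor by an acceptable margin; otherwise widen the search.

def decide(invoked, margin=2.0, expand=None):
    """invoked: dict mapping concept -> weight for the current situation.
    expand: optional callable returning additional concepts to consider
    when no clearly dominant concept emerges."""
    ranked = sorted(invoked.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]            # one concept clearly dominates
    if expand is not None:             # no dominant concept: expand the set
        return decide({**invoked, **expand()}, margin)
    return None                        # decision deferred

# A speeding car invokes a very heavily weighted concept:
# no further comparison is needed.
print(decide({"jump clear": 10.0, "finish crossing": 1.0}))  # jump clear
```

When the weights are close and no `expand` source is available, the sketch returns `None`, matching the claim that some decisions can only be made after expanding the set of concepts considered.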

This model is consistent with human behavior. It helps to explain why people do what they do. It is important to realize that this model applies no division of concepts into classes. It uses a fluid ordering system. It has transient terminal goals based on perceived situational considerations. Most importantly, it bounds the recursion requirements. As the situation changes, the set of applicable concepts to consider changes, resetting the core algorithm.

Comment author: Kaj_Sotala 22 November 2012 07:48:39AM 3 points [-]

From the post:

If the community reacts positively (based on karma and comments) we'll support the potential contributors' effort to complete the paper

I don't think you should put very much weight on the reaction from LW, given that much more polished papers often get low karma. E.g. both my "Responses to Catastrophic AGI Risk: A Survey" and my and Stuart's "How We're Predicting AI — or Failing to" are currently at only 11 upvotes and rather few comments. If even finished papers get that little of a reaction, I would expect that even many drafts that genuinely deserved a great reception would get little to no response.

Comment author: JaySwartz 28 November 2012 07:51:45PM 0 points [-]

Kaj,

Thank you. I had noticed that as well. It seems the LW group is focused on a much longer time horizon.

Comment author: JaySwartz 28 November 2012 04:13:07AM 0 points [-]

In every human endeavor, humans will shape their reality, either physically or mentally. They go to schools where their type of people go and live in neighborhoods where they feel comfortable based on a variety of commonalities. When their circumstances change, either for the better or the worse, they readjust their environment to fit with their new circumstances.

The human condition is inherently vulnerable to wireheading. History is rich with examples of people who attain power and money and subsequently change their values to suit their own desires. The more influential and wealthy they become, enabling them to exist unfettered, the more they change their value system.

There are also people who simply isolate themselves and become increasingly absorbed in their own value system. Some amount of money is needed to do this, but not a great amount. The human brain is also very good at compartmentalizing value sets such that they can operate by two (or more) radically different value systems.

The challenge in AI is to create an intelligence that is not like ours and not prone to human weaknesses. We should not attempt to replicate human thinking, we need to build something better. Our direction should be to create an intelligence that includes the desirable components and leaves out the undesirable aspects.

Comment author: JaySwartz 28 November 2012 02:32:15AM *  0 points [-]

Well, I'm a sailor and raising the waterline is a bad thing. You're underwater when the waterline gets too high.

Comment author: Kaj_Sotala 22 November 2012 10:51:45AM 1 point [-]

Since the paper is basically about predicting AGI, it might be better to call it a paper about predicting AGI. The "once we have AGI, we will soon after have superintelligence" step is somewhat contentious, and it's counterproductive to introduce contentious points if you're not going to do anything with them.

Comment author: JaySwartz 23 November 2012 06:02:42PM 0 points [-]

Thanks for the feedback. I agree on the titling; I started with the title on the desired papers list, so wanted some connection with that. I wasn't sure if there was some distinction I was missing, so proceeded with this approach.

I know it is controversial to say super intelligence will appear quickly. Here again, I wanted some tie to the title. It is a very complex problem to predict AI. To theorize about anything beyond that would distract from the core of the paper.

While even more controversial, my belief is that the first AGI will be a superintelligence in its own right. An AGI will not have one pair of eyes, but as many as it needs. It will not have just one set of ears; it will immediately be able to listen to many things at once. The most significant aspect is that an AGI will immediately be able to hold thousands of concepts in the equivalent of our short-term memory, as opposed to the typical 7 or so for humans. This alone will enable it to comprehend immensely complex problems.

Clearly, we don't know how AGI will be implemented or if this type of limit can be imposed on the architecture. I believe an AGI will draw its primary power from data access and logic (i.e., the concurrent concept slots). Bounding an AGI to an approximation of human reasoning is an important step.

This is a major aspect of friendly AI because one of the likely ways to ensure a safe AI is to find a means to purposely limit the number of concurrent concept slots to 7. Refining an AGI of this power into something friendly to humans could be possible before the limit is removed, by us or it.
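One crude way to picture the proposed limit, purely as an illustration (the bounded-buffer abstraction is my assumption, not a claim about any actual AGI architecture):

```python
from collections import deque

# Illustrative only: modeling the proposed safety limit as a bounded
# buffer of concurrently held concepts, capped at the human
# short-term-memory span of roughly 7.

working_memory = deque(maxlen=7)
for concept in range(10):        # attempt to hold 10 concepts at once
    working_memory.append(concept)

print(len(working_memory))       # the cap is enforced: only 7 remain
print(list(working_memory))      # the oldest concepts were displaced
```

The point of the sketch is simply that the cap is structural: the oldest concepts fall out rather than the buffer growing, which is one possible reading of "limiting the number of concurrent concept slots."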

I just wanted to express some thoughts here. I do not intend to cover this in the paper, as it is a topic for several focused papers to explore.

In response to comment by [deleted] on Wanting to Want
Comment author: Multiheaded 21 November 2012 06:57:36PM *  1 point [-]

There are many, many things in the similarly exploitative franchise known as "Real Life" that also appear to be crafted as "torture & horror porn". So I don't see the problem with linking to a fictionalized version.

I dare say that any story without elements that induce horror and revulsion in a reader would be an inadequate source of intuition for considering the most shocking aspects of our own world... or the ethics of knowingly creating a system which offers absolute security indiscriminately to those who would create such nightmares and those who'd seek to prevent them.

Example of a victim testimony 1: - trigger warnings for extreme child abuse, rape, pedophilia and psychological damage.

...Jura V jnf gjb lrnef byq zl zbgure zneevrq zl fgrcsngure. Jung sbyybjrq jnf fvkgrra lrnef bs frkhny nffnhyg...

Example of a victim testimony 2: - all of the above, except even more outspoken descriptions of the author's mental anguish. (NSFanywhere. The main blog has... images... that are more gore than extreme porn; don't look unless you're massively desensitized.)

...Gurfr cubgbf nyy rkcerff gur fvqr bs zlfrys V fgehttyr jvgu rirel qnl. Guvf vf gur fvqr bs zr gung yrnearq jung frk vf guebhtu encr. Guvf vf gur fvqr bs zr gung gevrq gb pbzzvg fhvpvqr sbe gur svefg gvzr jura V jnf frira lrnef byq. Guvf vf gur cneg bs zr gung V srne jvyy arire urny. Vgf orra fb znal lrnef ohg abg n qnl tbrf ol jurer V qba’g guvax nobhg jung jnf qbar gb zr...

...Nsgre lbh’ir orra encrq naq orngra jvguva na vapu bs lbhe yvsr rabhtu gvzrf, rirelguvat ryfr ybbfrf pbybe, gur jbeyq orpbzrf funqrf bs terl. Lbh orpbzr ahzo. V pna’g pel nalzber hayrff V’z orvat encrq. Abg sebz cnva, abg sebz fnqarff, abg sebz bavbaf, abg sebz nalguvat ryfr bgure guna fgehttyvat juvyr zl obql vf gbegherq naq hfrq. Yngryl V’ir orra nfxvat zl OQFZ cnegaref gb cynl-encr zr erthyneyl fb V pna pel. Vgf ernyyl dhvgr greevoyr npghnyyl; vg srryf yvxr V’z eryvivat gur uryy V penjyrq bhg bs nyy bire ntnva. Lrg V unir gb qb vg, whfg gb or noyr gb srry uhzna ntnva, gb srry nalguvat ntnva rabhtu gb pel...

Want some amnesiacs yet? You might be able to forget those stories faster if you don't think about the fact that something similar must be happening somewhere in your country, probably in your city, at this very moment. Oops, too late!

(Again, sorry for the confrontational tone and such - I wanted to hammer home the point that sometimes it's the violently emotional reaction to an objectively terrible problem that would be true to your desires, and trying to stay "detached" and "reasonable" would be self-deception. See: deathism.)

Comment author: JaySwartz 22 November 2012 09:47:29AM 1 point [-]

I struggle with conceiving of wanting to want, or decision making in general, as a tiered model. There are a great many factors that modify the ordering and intensity of utility functions. When human neurons fire, they trigger multiple concurrent paths leading to a set of utility functions. Not all of the utilities are logic-related.

I posit that our ability to process and reason is due to this patterning ability, and that any model that will approximate human intelligence will need to be more complex than a simple linear layered model. The balance of numerous interacting utilities combines to inform decision making. A multiobjective optimization model, such as PIBEA, is required.
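A minimal illustration of why multiple interacting utilities resist being collapsed into a single comparison is Pareto dominance, the core relation multiobjective optimizers like PIBEA build on. This is a generic sketch of that relation, not the PIBEA algorithm itself, and the objective vectors are made-up examples:

```python
# Pareto dominance: with several utilities scored at once, many option
# pairs are simply incomparable, so no single-number compare suffices.

def dominates(a, b):
    """Option a dominates b if a is at least as good on every objective
    and strictly better on at least one (maximizing)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(options):
    """Keep every option that no other option dominates."""
    return [o for o in options
            if not any(dominates(other, o) for other in options if other is not o)]

# Each tuple scores an option on two utilities, e.g. (comfort, safety).
options = [(3, 1), (2, 2), (1, 3), (1, 1)]
print(pareto_front(options))   # (1, 1) is dominated; the other three survive
```

The surviving set is exactly the point being made: three mutually incomparable options remain, and choosing among them requires the kind of shading and weighting discussed below, not a simple two-value compare.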

I'm new to LW, so I can't open threads just yet. I'm hoping to find some discussions around evolutionary models and solution sets relative to rational decision processing.

Comment author: JoshuaZ 22 November 2012 12:03:59AM 0 points [-]

For a site promoting rationality this entire thread is amazing for a variety of reasons (can you tell I'm new here?). The basic question is irrational. The decision for one situation over another is influenced by a large number of interconnected utilities.

So in most forms of utilitarianism, there's still an overall utility function. Having multiple different functions amounts to the same thing as having a single function when one needs to figure out how to balance the competing interests.

In response to comment by JoshuaZ on Circular Altruism
Comment author: JaySwartz 22 November 2012 12:45:58AM 1 point [-]

Granted. My point is that the function needs to comprehend these factors to come to a more informed decision. Simply comparing two values is inadequate. Some shading and weighting of the values is required, however subjective that may be. Devising a method to assess the amount of subjectivity would be an interesting discussion. Considering the composition of the value is the enlightening bit.

I also posit that a suite of algorithms should be comprehended with some trigger function in the overall algorithm. One of our skills is to change modes to suit a given situation. How sub-utilities impact the value(s) served up to the overall utility will vary with situational inputs.

The overall utility function needs to work with a collection of values and project each value combination forward in time, and/or back through history, to determine the best selection. The complexity of the process demands more sophisticated means. Holding the discussion at the current level feels to me like discussing multiplication when faced with a calculus problem.
