In response to Crazy Ideas Thread
Comment author: lyghtcrye 19 July 2015 01:48:23AM -1 points

I like to imagine that eventually we will be able to boil the counter-intuitive parts of quantum physics away into something more elegant. I keep coming back to the idea that every known interaction could theoretically be modeled as the interaction of variously polarized electromagnetic waves: mass being caused by the rotational acceleration of light, for instance, and charge emerging from the cross-interactions of polarized photons. I doubt the idea really carves reality at the joints, but I think it's probably closer to accurate than the Standard Model, which is functional but patchworked, much like the predictive models used by astrologers prior to the acceptance of heliocentrism.

Comment author: [deleted] 28 September 2013 08:47:59AM *  2 points

"Superhuman AI" as the term is generally used is a fixed reference standard, i.e. your average rationalist computer scientist circa 2013. This particular definition has meaning because if we posit that human beings are able to create an AGI, then a first generation superhuman AGI would be able to understand and modify its own source code, thereby starting the FOOM process. If human beings are not smart enough to write an AGI then this is a moot point. But if we are, then we can be sure that once that self-modifying AGI also reaches human-level capability, it will quickly surpass us in a singularity event.

So whether IA advances humans faster or slower than AGI is rather uninteresting. All that matters is whether a self-modifying AGI becomes more capable than its creators at the time of its inception.

As to your very last point, it is probably because the timescales for AI are much closer than those for IA. AI is basically a solvable software problem, and there are many supercompute clusters in the world that are probably capable of running a superhuman AGI at real-time speeds, if such software existed. Significant IA, on the other hand, requires fundamental breakthroughs in hardware...

Comment author: lyghtcrye 28 September 2013 10:10:50AM 0 points

I seem to have explained myself poorly. You are effectively restating the commonly held (on LessWrong) views that I was originally attempting to address, so I will try to be more clear.

I don't understand why you would use a particular fixed standard for "human level". It seems arbitrary; it would be more sensible to use the level of humans at the time a given AGI is developed. You yourself say as much in your second paragraph ("more capable than its creators at the time of its inception"). Since the rate of IA determines the capabilities of the AI's creators, a faster rate of IA than AI would mean that the event of an AGI more capable than its creators would never occur.

If a self-modifying AGI is less capable than its creators at the time of its inception, then it will be unable to FOOM from its creators' perspective, for two reasons: they could develop a better AI in less time than the AI could improve itself, and if they were developing IA at a greater pace, they would advance faster than the AGI they had developed. Given the same intelligence and rate of work, an easier problem will see more progress. Therefore, if IA receives an equal or greater rate of work than AI, and it happens to be the easier problem, then humans would FOOM before AI did. A FOOM doesn't feel like a FOOM from the perspective of the one experiencing it, though.
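The race described above can be put in toy-model form. This is only a sketch of the argument's logic; the growth rates, the exponential growth assumption, and the starting handicap are all invented for illustration:

```python
# Toy model: does a self-modifying AGI ever overtake its creators?
# All numbers here are invented for illustration only.

def capability(initial, rate, t):
    """Exponential capability growth: initial * (1 + rate) ** t."""
    return initial * (1 + rate) ** t

def agi_overtakes(human_rate, ai_rate, ai_handicap=0.9, horizon=1000):
    """True if an AGI that starts slightly behind its creators
    (ai_handicap < 1) ever surpasses them within the horizon."""
    for t in range(horizon):
        humans = capability(1.0, human_rate, t)
        agi = capability(ai_handicap, ai_rate, t)
        if agi > humans:
            return True
    return False

# If IA improves humans at least as fast as the AGI improves itself,
# the AGI never closes the initial gap; if the AGI improves faster, it does.
print(agi_overtakes(human_rate=0.05, ai_rate=0.05))  # False
print(agi_overtakes(human_rate=0.05, ai_rate=0.10))  # True
```

Under these assumptions the conclusion in the paragraph falls out directly: the relative rates, not the absolute capability levels, decide whether an overtaking event ever happens.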

Your final point makes sense, in that it addresses the possibility that the first fast takeoff is more likely to occur in the AI field than in the IA field, or that AI is an easier problem. I fail to see why a software problem is inherently easier than a biology or engineering problem, though. A fundamental breakthrough in software is just as unlikely as one in hardware, and there are more paths to success currently being pursued for IA than for AI, only one of which is a man-machine interface.

I considered being a bit snarky and restating each of your claims as its direct opposite (i.e., all that matters is whether a self-modifying human becomes more capable than an AI at the time of its augmentation), but I feel that would convey the wrong message. The dismissive response genuinely confuses me, but I'm assuming that my poor organization made my point too vague.

Comment author: lyghtcrye 28 September 2013 08:25:29AM 2 points

I have been mulling over a rough and mostly unformed idea about AI-first vs. IA-first strategies, but I was loath to try to put it into words until I saw this post and noticed that one of the scenarios I consider highly probable was completely absent.

Given that subhuman AGI poses minimal risk to humanity, and that IA raises the level of optimization ability an AI needs to be considered human-level or above, there is a substantial probability that an IA-first strategy leads to a scenario in which no superhuman AGI is ever developed, because researching that field is economically infeasible compared to harvesting the accelerating returns from IA creation and implementation. Development of AI, friendly or not, would certainly proceed at a faster pace, but if IA proves to simply be easier than AI (which, given our poor ability to estimate the difficulty of both approaches, may be true), development in that field would continue to outpace it. It could certainly instigate either a fast or slow takeoff event from our current perspective, but from the perspective of enhanced humans it would simply be an extension of existing trends.

A similar argument could be made about Hanson's WBEM-based scenarios. Given the ability to store a mind on some hardware system, it would be more economically efficient to emulate that mind at a faster pace than to run multiple copies of it in parallel on the same hardware; likewise, hardware design would trend toward rapid emulation of single workers rather than multiple instances, to reduce the costs of redundancy and to capture the efficiency gains that come with experience. This implies that mind enhancement of a few high-efficiency minds would occur much earlier, and that exceptional numbers of emulated workers would be unlikely to be created; rather, a few high-value workers would occupy a large majority of the relevant hardware very soon after the creation of such technology.
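The economic intuition here, that one fast emulation beats many parallel copies once experience compounds, can be checked with a toy calculation. The linear learning curve and every constant below are my own invention for illustration, not anything from Hanson's analysis:

```python
# Toy comparison: N parallel 1x-speed emulations vs one Nx-speed emulation
# on the same hardware, when a mind's per-hour output grows with its own
# accumulated experience. All constants are invented for illustration.

def total_output(minds, speedup, wall_hours, learn=0.01):
    """Each mind works `speedup` subjective hours per wall-clock hour;
    its output per subjective hour is 1 + learn * (experience so far)."""
    output = 0.0
    for _ in range(minds):
        experience = 0.0
        for _ in range(int(wall_hours * speedup)):
            output += 1 + learn * experience
            experience += 1
    return output

hardware = 8  # enough to run 8 copies at 1x, or 1 copy at 8x
parallel = total_output(minds=hardware, speedup=1, wall_hours=100)
serial = total_output(minds=1, speedup=hardware, wall_hours=100)
print(serial > parallel)  # True: the single fast mind accumulates
                          # more experience per unit of hardware
```

Both configurations buy the same number of subjective work-hours; the serial one wins only because experience concentrates in one mind rather than being split eight ways, which is exactly the redundancy cost the paragraph describes.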

An IA field advancing at a greater pace than AI does of course present its own problems, and I'm not trying to endorse an IA-first approach with my ramblings. I suppose I'm simply trying to express the belief that discussion of IA as an alternative to AI, rather than as an instrument toward AI, is rather lacking in this forum, and I find myself confused as to why.

Comment author: Vladimir_Nesov 12 September 2013 11:54:36PM *  6 points

you should consider reading more about the issue so that you don't come off as naive

(Not being mistaken is a better purpose than appearing sophisticated.)

Comment author: lyghtcrye 13 September 2013 12:01:43AM 3 points

I'm not sure why you phrased your comment as a parenthetical; could you explain that? Also, while I agree with your statement, appearing competent enough to engage in discussion is quite important for enabling one to take part in it. I don't like seeing someone who is genuinely curious get downvoted into oblivion.

Comment author: djm 12 September 2013 01:43:44AM 0 points

Thank you both for the feedback - it is always useful. Yes, I realise this is a hard job with no likely consensus, but what would the alternative be?

At some stage we need the AI to understand human values so it knows when it is being unfriendly, and at the very least, if we have no measurable way of identifying friendliness, how will progress be tracked?

Comment author: lyghtcrye 12 September 2013 11:41:32PM 0 points

That question is basically the hard question at the root of the difficulty of friendly AI. Building an AI that optimizes to increase or decrease some value through its actions is comparatively easy, but determining how to evaluate its actions on a scale that measures results against human values is incredibly difficult. Determining and evaluating AI friendliness is a very hard problem, and you should consider reading more about the issue so that you don't come off as naive.

Comment author: lyghtcrye 22 August 2013 08:32:22AM 5 points

While personal identification with a label can be constraining, I find that labels are tremendously useful for signalling. Not only does a label work in the same way as jargon, expressing a complex data set with a simple phrase, but because most labels carry tribal consequences, a label also acts as a somewhat costly signal for identifying alliances. Admittedly, one could develop a habit of using labels that becomes a personal identification, but being aware of that risk is the best way to combat its effects.

Comment author: Viliam_Bur 05 May 2013 10:14:00AM 1 point

Taboo "altruism" and "egoism". Those words in their original meaning are merely strawmen. Everyone cares about other people (except for psychopaths, but psychopaths are also bad at optimizing for their own long-term utility). Everyone cares about their own utility (even Mother Theresa was happy to get a lot of prestige for herself, and to promote her favorite religion). In real life, speaking about altruists and egoists is probably just speaking about signalling... who spends a lot of time announcing that they care about other people (regardless of what they really do for them), and who neglects this part of signalling (regardless of what they really do). Or sometimes it is merely about whom we like and whom we don't.

Comment author: lyghtcrye 05 May 2013 11:56:09PM 0 points

I had no intention of implying extreme altruism or egoism. To be clear: by altruism I mean the case in which an agent applies a smaller discount rate to the values of some other entity or group than to its own values, while egoism is the opposite scenario. I describe myself as an egoist, but this does not mean I am completely indifferent to others. In the real world, one would not describe a person who engages in altruist signalling as an altruist; rather, that person would choose the label of altruist as a form of signalling.
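One way to make that definition concrete is a toy utility function. The linear form and the specific weights below are my own illustration, not a standard formalism, and I've substituted simple weights for the discount rates described above:

```python
# Illustrative sketch: an agent whose total utility weights its own
# payoff against others' payoffs. weight_others < weight_self is the
# "egoist" case described above; the reverse is the "altruist" case.

def total_utility(own_payoff, others_payoff, weight_self, weight_others):
    return weight_self * own_payoff + weight_others * others_payoff

# An egoist in this sense still cares about others, just less than
# about itself; neither weight is zero.
egoist = total_utility(own_payoff=10, others_payoff=10,
                       weight_self=1.0, weight_others=0.3)
altruist = total_utility(own_payoff=10, others_payoff=10,
                         weight_self=0.3, weight_others=1.0)
print(egoist, altruist)  # 13.0 13.0
```

When own and others' payoffs are equal the two agents are indistinguishable, which is the point of the comment: the labels only bite when the agent must trade one payoff against the other.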

Either way, returning to the topic at hand with the taboo in effect: those who value the continuation of their society above personal survival will accept greater risks to their own lives to improve the chances of victory in war. Likewise, those who weight their own survival more heavily will choose actions that are less risky to themselves, even when they expect that losing the war may endanger their lives and even when those actions are less advantageous for the group. By modifying others' values to place greater weight on society over the individual, and by providing evidence that makes pro-social actions seem more appealing, such as by making them appear less risky to the self, one can improve both the probability of victory and the chance of personal survival. Of course, if everyone engaged in this behavior, and we assume equal skills and resources among all parties, there would either be no net effect on the utility values of agents within the group or a general trend toward greater pro-social behavior, depending on what level of skill and susceptibility we assume. Either way this is a positive outcome, as the very act of researching and distributing the necessary information would create greater net resources, in knowledge and strategy, for effectively contributing to the war effort.

Comment author: lyghtcrye 05 May 2013 02:23:10AM -1 points

While it may not be the point of the exercise, from examining the situation, it appears that the best course of action would be to convert as many people in Rationalistland as possible to altruism. I find this interesting because it mirrors the real-world behavior of many rationalists. There is a bevy of resources for effective altruism, discussions on optimizing the world from an altruistic view, and numerous discussions that simply assume altruism in their construction. Very little exists in the way of openly discussing optimal egoism. As an egoist myself, I find this to be a perfectly acceptable situation, as it benefits me if more people who aren't me, or who are in groups that do not include me, become altruistic or become more effective at being altruistic.

In response to Optimal rudeness
Comment author: lyghtcrye 13 April 2013 04:13:07AM *  3 points

I haven't compiled any data relating rudeness to karma, and thus only have my imperfect recollection of prior comments to draw on, but I can certainly see your point here. I doubt, however, that an unpopular opinion or argument would benefit from rudeness if the post is initially well formed. I would expect rudeness to amplify polarization, thereby benefiting popular arguments and high status posters, and politeness to mitigate it. Would you be willing to provide me with some examples for or against this expectation from your observations?

Comment author: lyghtcrye 11 April 2013 08:41:49AM 5 points

Yet if it is about "10 good ways to prepare for the job interview" I usually don't read this kind of objections. On the contrary it is assumed that when going for an interview candidates will dress as well as they can, have polished their CVs and often waded through lists of common questions/problems and their solutions(speaking as a computer programmer here). Not doing so would be considered sloppy. It is rare to hear: "People, just go to the interview and present yourself as you are, if the company likes you it will take you."

While most of this post seems weakly argued and poorly edited (I will assume purely due to excessive haste), this statement brings up a point worth discussing. In truth, misrepresenting oneself in a job interview is a poor choice for anyone who desires stable and fruitful employment. Certainly one should strive to display one's positive qualities while minimizing the negative, but such a tactic is not deceitful, since your interviewer assumes you will be performing exactly this optimization of your facade and adjusts their expectations accordingly. Likewise, I believe a critical difference between the "PUA" culture being discussed here and the central project of optimizing one's ability to attract a mate lies in the level of misrepresentation applied to an altered goal set.

A person not interested in keeping a job for any significant duration has no motivation to be honest during an interview, as actually being effective is no longer a concern. A person attempting to attract a mate with no intention of producing offspring or maintaining an emotionally invested relationship likewise lacks the motivation for honesty. One need not be in any way sexist for such a duplicitous mode of operation to be effective; it is merely the circumstance that our current culture expects males to initiate courtship rituals toward females. Refining the technique of initiating and succeeding in such social interactions is in itself a neutral goal, like any tool or technique, but in applying such an "art" there can certainly exist distasteful methods. The difference between a shrewd businessman and a con man often lies primarily in the level of respect for the other party to their transactions, and the same can be said of this mating technique.
