rwallace has been arguing the position that AI researchers are too concerned (or will become too concerned) about the existential risk from UFAI. He writes that
we need software tools smart enough to help us deal with complexity.
rwallace: can we deal with complexity sufficiently well without new software that engages in strongly recursive self-improvement?
Without new AGI software?
One part of the risk that rwallace says outweighs the risk of UFAI is that
we remain confined to one little planet . . . with everyone in weapon range of everyone else
The only response rwallace suggests to that risk is
we need more advanced technology, for which we need software tools smart enough to help us deal with complexity
rwallace: please give your reasoning for how more advanced technology decreases the existential risk posed by weapons more than it increases it.
Another part of the risk that rwallace says outweighs the risk of UFAI is that
we remain confined to one little planet running off a dwindling resource base
Please explain how dwindling resources present a significant existential risk. I can come up with several arguments, but I'd like to see the one or two you consider strongest.
I agree that introspection certainly can be a valid tool.
I have a strong pain signal from lost money and from lost time. To the extent that I can introspect on the workings of my insula, I think this is a single impulse for me, rather than two (one for time and one for money) as Yvain describes.
The most parsimonious explanation of what you observe is that it is human nature to be overconfident of the results of introspection.
When I wrote that "it is never in the financial self-interest of any [self-help] practitioner to do the hard long work to collect evidence that would sway a non-gullible client," I referred to work many orders of magnitude longer and harder than posting a link to a web page. Consequently, your pointing out that you post links to web pages even when it is not in your financial self-interest to do so does not refute my point. I do not maintain that you should do the hard long work to collect evidence that would sway a non-gullible client: you probably cannot afford to spend the necessary time, attention and money. But I do wish you would stop submitting to this site weak evidence that would sway only a gullible client or a client very desperate for help.
And with that I have exceeded the time I have budgeted for participation on this site for the day, so my response to your other points will have to wait for another day. If I may make a practical suggestion to those readers wanting to follow this thread: subscribe to the feed for my user page till you see my response to pjeby's other points, then unsubscribe.
Previously in this thread I opined as follows on the state of the art in self-help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the hard long work to collect evidence that would sway a non-gullible client.
PJ Eby took exception as follows:
you ignored the part where I just gave somebody a pointer to somebody else's work that they could download for free
Lots of people offer pointers to somebody else's writings. Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader's while. IMHO almost all the writings on the net about producing lasting useful psychological change are not worth the reader's while.
In the future, I will write "lasting change" when I mean "lasting useful psychological change".
you indirectly accused me of being more interested in financial incentives than results
The mere fact that you are human makes it much more probable than not that you are more skilled at self-deception and deception than at perceiving correctly the intrapersonal and interpersonal truths necessary to produce lasting change in another human being. Let us call the probability I just referred to "probability D". (The D stands for deception.)
You have written (in a response to Eliezer) that you usually charge clients a couple of hundred dollars an hour.
The financial success of your self-help practice is not significant evidence that you can produce lasting change in clients because again there is a plentiful supply of gullible self-help clients with money.
The fact that you use hypnotic techniques on clients and write a lot about hypnosis raises probability D significantly, because hypnotic techniques rely on the natural human machinery for negotiating who is dominant and who is submissive, or for deciding who will lead the hunting party. Putting the client into a submissive or compliant state of mind probably helps a practitioner quite a bit to persuade the client to believe falsely that lasting change has been produced. You have presented no evidence or argument (nor am I aware of any) that putting the client into a submissive or compliant state helps a practitioner produce lasting change. Consequently, your reliance on and interest in hypnotic techniques significantly raises probability D.
Parenthetically, I do not claim to know for sure that you are producing false beliefs rather than lasting change. It is just that you have not raised the probability I assign to your being able to produce lasting change high enough to justify my following a pointer you have given into the literature, or high enough for me to stop wishing that you would stop writing on this site about how to produce lasting change in another human being.
Parenthetically, I do not claim that your deception, if indeed that is what it is, is conscious or intentional. Most self-help and mental-health practitioners deceive because they are self-deceived on the same point.
You believe and are fond of repeating that a major reason for the failure of some of the techniques you use is a refusal by the client to believe that the technique can work. Exhorting the client to refrain from scepticism or pessimism is like hypnosis in that it strongly tends to put the client in a submissive or compliant state of mind, which again significantly raises probability D.
To the best of my knowledge (maybe you can correct me here) you have never described on this site an instance where you used a reliable means to verify that you had produced a lasting change. When you believe, for example, that you have produced a lasting improvement in a male client's ability to pick up women in bars, have you ever actually accompanied the client to a bar and observed how long it takes the client to achieve some objectively valid sign of success (such as getting the woman's phone number or getting the woman to follow the client out to his car)?
In your extensive writings on this site, I can recall no instance where you describe your verifying your impression that you have created a lasting change in a client using reliable means. Rather, you have described only unreliable means, namely, your perceptions of the mental and the social environment and reports from clients about their perceptions of the mental and the social environment. That drastically raises probability D. Of course, you can bring probability D right back down again, and more, by describing instances where you have used reliable means to verify your impression that you have created a lasting change.
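To make the updating I keep appealing to explicit (this formalization is mine, of course, not anything pjeby has endorsed): let E stand for the observation that only unreliable means of verification have been described. If that observation is more likely when D holds than when it does not, Bayes' theorem forces the posterior above the prior:

$$P(D \mid E) = \frac{P(E \mid D)\,P(D)}{P(E \mid D)\,P(D) + P(E \mid \neg D)\,\bigl(1 - P(D)\bigr)} > P(D) \quad \text{whenever} \quad P(E \mid D) > P(E \mid \neg D).$$

By the same token, an observation that is less likely under D (such as a described instance of reliable verification) would bring probability D back down.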
For readers who want to read more, here are two of Eliezer's sceptical responses to PJ Eby: 001, 002
If it makes you feel any better, I do not judge you any more harshly than I judge any other self-help, life-coach or mental-health practitioner, including those with PhDs in psychology and MDs in psychiatry and those with prestigious academic appointments. In my book, until I see very strong evidence to the contrary, every mental-health practitioner and self-help practitioner is with high probability deluded, except those who constantly remind themselves of how little they know.
Actually, there is one way in which I resent you more than I resent other self-help, life-coach or mental-health practitioners: the others do not bring their false beliefs, or rather their most-probably-false, insufficiently-verified beliefs, to my favorite place to read about the mental environment and the social environment. I worry that your copious writings on this site will discourage contributions from those who have constructed their causal models of mental and social reality more carefully.
Previously in this thread: PJ Eby asserts that the inability to refrain from conveying contempt is a common and severe interpersonal handicap. Nazgulnarsil replies, "This is my problem. . . . I can't hide the fact that I feel contempt for the vast majority of the people around me (including desirable partners)."
I probably have the problem too. Although it is rare that I am aware of feeling contempt for my interlocutor, there is a lot of circumstantial evidence that messages (mostly nonverbal) conveying contempt are present in my face-to-face communication with non-friends (even if I would like the non-friend to become a friend).
I expect that PJ Eby will assure me that he has seen himself and his clients learn to transcend this problem. Maybe he can even produce written testimonials from clients assuring me that he has cured them of it. But I fear that PJ Eby has nothing that a strong Bayesian with long experience of self-help practitioners would consider sufficient evidence that he can help me transcend this problem. Such is the state of the art in self-help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the hard long work to collect evidence that would sway a non-gullible client.
I changed the title of my post from "Mate selection for the male rationalist" to "Mate selection for the men here".
We differ in that respect, perhaps because I have had more time to shape my emotional responses to women gradually.
BTW it would be great to have all my writings examined by the community to determine whether they use probability distributions, utility functions and the language of causality correctly and sensibly.
Let us briefly review the discussion up to now, since many readers use the comments page, which does not provide much context. rwallace has been arguing that AI researchers are too concerned (or will become too concerned) about the existential risk from reimplementing EURISKO and things like that.
You have mentioned two or three times, rwallace, that without more advanced technology, humans will eventually go extinct. (I quote one of those mentions below.) You mention that to create and to manage that future advanced technology, civilization will need better tools for managing complexity. Well, I see one possible objection to your argument right there: better science and better technology might well decrease the complexity of the cultural information humans are required to keep on top of. Consider that once Newton gave our civilization a correct theory of dynamics, almost all of the books on dynamics written before Newton could safely be thrown away (the exceptions being books by Descartes and Galileo that help people understand Newton and put him in historical context). That constitutes a net reduction in the complexity of the cultural information our civilization has to keep on top of. (If it does not seem like a reduction, that is because possession of Newtonian dynamical theory made our civilization more ambitious about what goals to try for.)
But please explain to me what your argument has to do with EURISKO and things like that: is it your position that the complexity of future human culture can be managed only with better AGI software?
And do you maintain that that software cannot be developed fast enough by AGI researchers such as Eliezer who are being very careful about existential risks?
In general, the dangers you describe are slow dangers. You yourself refer to "geological evidence", which suggests that they operate on geological timescales.
In contrast, certain areas of AI research seem to me genuinely fast dangers: things with a high probability of wiping out our civilization in the next 30, 50 or 100 years. It seems unwise to increase fast dangers in order to decrease slow dangers. But I suppose you disagree that AGI research, if not done very carefully, is a fast danger. (I'm still studying your arguments on that.)