Comment author: whpearson 02 November 2017 05:08:11PM *  0 points

A brief reply.

Strategy is nothing without knowledge of the terrain.

Knowledge of the terrain might be hard to get reliably.

Therefore there might be some time between AGI being developed and its being able to reliably acquire that knowledge. If the people who develop it are friendly, they might decide to distribute it to others, to make it harder for any one project to take off.

Comment author: Mitchell_Porter 06 November 2017 12:54:48PM 0 points

"Knowledge of the terrain might be hard to get reliably."

Knowing that the world is made of atoms should take an AI a long way.

"If the people who develop [AGI] are friendly, they might decide to distribute it to others, to make it harder for any one project to take off."

I hold to the classic definition of friendly AI: AI with friendly values, which retains them (or even improves on them) as it surpasses human intelligence and otherwise self-modifies. As far as I'm concerned, AlphaGo Zero demonstrates that raw problem-solving ability has crossed a dangerous threshold. We need to know what sort of "values" and "laws" should govern the choices of intelligent agents with such power.

Comment author: IlyaShpitser 01 November 2017 09:43:20PM 1 point

Comment author: Mitchell_Porter 01 November 2017 10:28:48PM 0 points

And you're the tax collector? Answer the question.

Comment author: IlyaShpitser 31 October 2017 04:19:46AM *  1 point

So, a concrete bet then? What specifically are you worried about? In the form of a falsifiable claim, please.

edit: I am trying to make you feel better, the real way. The empiricist way.

Comment author: Mitchell_Porter 01 November 2017 09:19:15PM 0 points

Just answer the question.

Comment author: IlyaShpitser 30 October 2017 02:42:38PM *  1 point

Let's say 100 dollars, but the amount is largely symbolic. The function of the bet is to try to clarify what specifically you are worried about. I am happy to do less -- whatever is comfortable.

Comment author: Mitchell_Porter 31 October 2017 03:50:11AM 0 points

Wake up! In three days, that AI evolved from knowing nothing to comprehensively beating an earlier AI that had been trained on a distillation of the best human experience. Do you think there's a force in the world that can stand against that kind of strategic intelligence?

Comment author: IlyaShpitser 26 October 2017 02:34:18PM *  3 points

You should probably stop listening to random voices.

More seriously, do you want to make a concrete bet on something?

Comment author: Mitchell_Porter 29 October 2017 08:28:11PM 0 points

How much are you willing to lose?

Comment author: Mitchell_Porter 19 October 2017 09:31:45PM 3 points

A voice tells me that we're out of time. The future of the world will now be decided at DeepMind, or by some other group at their level.

Comment author: cousin_it 03 October 2017 11:11:50AM *  3 points

Yeah, classical computers might need a lot of resources to simulate quantum mechanics. Quantum computers have no such limitation though, so it's probably not relevant to the simulation argument. Note that the paper doesn't mention the simulation argument; it was added by journalists working under evil incentives.
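
(For rough intuition on why the classical cost blows up, a standard back-of-the-envelope estimate, not taken from the paper: the state of n qubits is described by 2^n complex amplitudes, so a direct state-vector simulation of just n = 50 qubits already needs about 2^50, roughly 10^15, amplitudes, on the order of ten petabytes of memory at 16 bytes per amplitude, whereas a quantum computer holds the same state in 50 physical qubits.)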

Comment author: Mitchell_Porter 03 October 2017 12:45:51PM 0 points

EurekAlert mentions the simulation argument, and the page implies that this was a press release from Oxford - even providing a media contact - though I have not found the document on Oxford's own website.

I am also skeptical of what the paper (arXiv) is actually saying, on a technical level. It reminds me of another paper a few months ago, which was hyped as exhibiting a "gravitational anomaly" in a condensed-matter system. From all that I could make out, there was no actual gravitational effect involved, only a formal analogy.

This paper seems to engage in exactly the same equivocation, now with the objective of proving something about computational complexity. But I'll have to study it in more detail to be sure.

Comment author: RobQuesting 05 September 2017 09:58:13AM 0 points

Regarding politics, and the frowning upon it: is it acceptable to focus on measurable results rather than ideologies (or political "teams" - re: cerulean vs blue vs green)? Whilst I understand the tribalism you refer to, it is a bias this group and website seem to be inherently about combating; as such, falsely dichotomous thinking is irrational.

For example: no matter which party is in power, across most of the world's countries, economic systems have remained largely unaltered over recent decades. The social and psychological effects on cultural norms, born of the structural economic framework, ought not to be discussed, despite their effect on trends of perceived rationality (the bias of culturally normal rational thought), because this topic bleeds into "politics". I don't see how economic debate can be considered separate from political or cultural debate. I don't see how rationality can be separated from politics.

Is that too political for the scope of this forum? Interdependent causation?

If so, that's okay; it just negates about half of my reasons for engaging here.

I don't know how it is possible to separate rational discourse and political discourse. I don't see how there can be a firewall between them. The social is the political, which defines what is considered rational, which is in turn influenced by cultural normalcy in the form of bias. Art, culture, community, education, and social and even civilisational outcomes seem inextricable from the organisational structure we call the political sphere.

I could be wrong about all of the above.

It may be better to let me know now if political discourse, about theory and measurable socio-cultural results, is beyond the scope of this forum, because then I won't waste anyone's time.

I opened by saying: "I have unfortunately come to the conclusion that socioeconomic revolt, by any means necessary, is a moral and ethical imperative for all people, to maximise the chances of the survival of the human species."

This is my present primary concern. If I am not allowed to discuss this, I am in the wrong place. Thanks.

Comment author: Mitchell_Porter 07 September 2017 12:08:33AM *  0 points

It sounds like you want some second opinions and rational evaluation regarding your political conclusion - the necessity of revolt. OK.

I can think of reasons for and reasons against such a conclusion, but probably you should spell out more of your reasoning first. For example, why would revolt help humanity survive?

Comment author: BeleagueredPotential 18 June 2017 08:10:22AM *  0 points

I find myself at a potentially critical crossroads at the moment, one that could affect my ability to become a productive researcher for friendly AI in the future. I'll do my best to summarize the situation.

I had very strong mental capabilities 7 years ago, but a series of unfortunate health-related problems, including a nearly life-threatening infection, led to me developing a case of myalgic encephalomyelitis (chronic fatigue syndrome). This disease is characterized by extreme fatigue that usually worsens with physical or mental exertion and is not significantly improved by rest. There are numerous other symptoms that are common to ME; I luckily escaped a great many of them. However, I developed the concentration and memory problems that are common to ME to a very large degree.

I had somewhat bad ME until a few years ago, when, in conjunction with a mind/body specialist, I was able to put it into partial remission. I am now able to do physically demanding activities without fatigue, but I still have severe cognitive constraints; my intelligence seems to be almost as sharp as it ever was, despite deficits in mental energy, concentration, and memory (especially working memory). However, efficacious mental throughput relies heavily on these attributes that support intelligence, and as it stands I am hardly useful at all. Therefore my primary concern these past few years has been to resolve my medical issues to a large enough degree to enable real productivity.

I am still in this state despite putting all of my effort towards remedying it. I have stuck to safer treatments (like bacteriotherapy or sublingual methylcobalamin) in order to avoid worsening my condition (although I have had some repercussions even from following this philosophy). I am wondering, though, whether I can reasonably expect to get better using this methodology. It could be that I need to take more extreme risks, because I won't do any good as I am, and time continues to tick away. Looking at the big picture with a properly pessimistic outlook gives me the impression that friendly AI research does not have a lot of time to spare as it is.

There is a doctor with exceptionally aggressive treatment protocols who is recommended by a large number of people on an ME forum I frequent. His name is Dr. Kenny de Meirleir, and while I have misgivings about some of the stuff I've read about him, I've pretty much given up on trying to find someone who is both good and doesn't have a long wait list. I've gotten on the wait list of one practitioner who is local, but I do not have much confidence in them. Dr. de Meirleir wasn't too difficult to get an appointment with, because he travels to the USA for a few days every couple of months and these appointments are not widely known about.

However, even the cost of the initial tests and evaluation could be an unrecoverable loss for me if they don't pan out as I hope. It will cost thousands of dollars to pay for travel to the States, a hotel, the consultation, and the comprehensive tests he is likely to run, even considering how much of the lab work my own country will probably cover. At least then I could finally resolve a lot of unknowns about my health, such as whether there are infectious agents still affecting me. Despite all the testing I've gone through over the years, he does a lot of tests I haven't gotten yet.

It really depends on the results of the tests, but I'm reading plenty of anecdotal reports that suggest a high likelihood of him putting me on multiple antibiotics. Plenty of people whose stories I have read have reported worsening conditions and relapses of ME due to antibiotics, and I know from my research that ME treatments in general often carry these risks.

The number of symptoms I have has always been small, which might indicate that more of my physiology is working the way it should, compared to the average ME patient. My condition is also in partial remission already and I am still under 30, so I consider my odds of major recovery to be better than the low rates of total remission usually predicted for this disease.

The question, then, is: as rationalists, what path do you think I should take here? If I choose to go to the appointment next weekend, I lose a large chunk of my limited capital but gain knowledge and possibilities for treatment. If I then proceed with treatment of the type he often prescribes, I probably lose most or all of my remaining money on something that could stand the best chance of making me functional again, but that could also do nothing or make me irrecoverably worse (or anything else between those extremes). This is not money I can recover easily; work is still difficult, and it could take me a long time to save that much again given normal essential expenses. If I choose to do nothing, cancel the appointment, and continue on my safe but so far ineffective path, then I keep the status quo and avoid risking my health, although I then waste precious time either waiting for one of my less risky solutions to work or waiting for the unlikely possibility of researchers developing a cure anytime soon. The years it will take for me to finish developing and expanding my skills and knowledge after recovery have to be factored in as well; I cannot just jump into FAI research right away. There are no doubt other options and variables here, but I haven't been able to see them as of yet.

Due to the aforementioned cognitive constraints, I know that my ideas and the research I have done on my condition are probably riddled with biases, errors, and gaps in knowledge. If anyone can offer suggestions or comments about this situation, it would be appreciated. It's safe to assume that the personal outcomes I face from this choice only matter in the context of whether they increase or decrease the probability of me being useful to friendly AI development in the future. Even if I achieve only a further partial recovery and can contribute in other ways (like financially), I'll consider that worth the effort.

I might not get the chance to answer any responses in a timely manner because of how much strain writing causes me (and if I decide not to cancel the appointment, I will have to prepare for travel this coming weekend). However, reading and thinking both cost me less energy, so know that I will consider any responses posted as carefully as I can, and they will give me more perspective to help decide what to do in this situation.

Comment author: Mitchell_Porter 18 June 2017 11:33:53AM 2 points

I'm going to take a wild guess and suggest that your attitude towards FAI research and your experience of CFS are actually related. I have no idea if this is a standard theory, but in some ways CFS sounds like depression minus the emotion - and that is a characteristic symptom in people who have a purpose they regard as supremely important, who find absolutely no support for their attempt to pursue it, but who continue to regard it as supremely important.

The point being that when something is that important, it's easy to devalue certain aspects of your own difficulties. Yes, running into a blank wall of collective incomprehension and indifference may have been personally shattering; you may be in agony over the way that what you have to do in order to stay alive interferes with your ability to preserve even the most basic insights that motivate your position ... but it's an indulgence to think about these feelings, because there is an invisible crisis happening that's much more important.

So you just keep grinding away, or you keep crawling through the desert of your life, or you give up completely and are left only with a philosophical perspective that you can talk about but can't act on... I don't know all the permutations. And then at some point it affects your health. I don't want to say that this is solely about emotion; we are chemical beings, affected by genetics, nutrition, and pathogens too. But the planes intersect, e.g. through autoimmune disorders or weakened disease resistance.

The core psychological and practical problem is that there's a difficult task - the great purpose, whatever it is - being made more difficult in ways that have no intrinsic connection to the problem, but are solely about lack of support, or even outright interference. And then on top of that, you may also have doubts and meta-doubts to deal with - coming from others and from yourself (and some of those doubts may be justified!). Finally, health problems round out the picture.

The one positive in this situation is that while all those negatives can reinforce each other, positive developments in one area can also carry across to another.

OK, so that's my attempt to reflect back to you how you sound to me. As for practical matters, I have only one suggestion. You say

"he travels to the USA for a few days every couple of months"

so I suggest that you at least wait until his next visit, and use that extra time to understand better how all these aspects of your life intersect.

Comment author: sad_dolphin 06 June 2017 01:35:32PM *  2 points

I am considering ending my life because of fears related to AI risk. I am posting here because I want other people to review my reasoning process and help ensure I make the right decision.

First, this is not an emergency situation. I do not currently intend to commit suicide, nor have I made any plan for doing so. No matter what I decide, I will wait several years to be sure of my preference. I am not at all an impulsive person, and I know that ASI is very unlikely to be invented in less than a few decades.

I am not sure if it would be appropriate to talk about this here, and I prefer private conversations anyway, so the purpose of this post is to find people willing to talk with me through PMs. To summarize my issue: I only desire to live because of the possibility of utopia, but I have recently realized that ASI-provided immortal life is significantly likely to be bad rather than good. If you are very familiar with the topics of AI risk, mind uploading, and utilitarianism, please consider sending me a message with a brief explanation of your beliefs and your intent to help me. I especially urge you to contact me if you already have similar fears of AI, even if you are a lurker and are not sure if you should. Because of the sensitive nature of this topic, I may not respond unless you provide an appropriately genuine introduction and/or have a legitimate posting history.

Please do not reply/PM if you just want to tell me to call a suicide prevention hotline, tell me the standard objections to suicide, or give me depression treatment advice. I might take a long time to respond to PMs, especially if several people end up contacting me. If nobody contacts me I will repost this in the next discussion thread or on another website.

Edit: The word limit on LW messages is problematic, so please email me at sad_dolphin@protonmail.com instead.

Comment author: Mitchell_Porter 08 June 2017 07:10:19PM 0 points

If ASI-provided immortal life were possible, you would already be living it.

... because if you're somewhere in an infinite sequence, you're more likely to be in the middle than at the beginning.
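
(Spelling out the sampling step, as a heuristic sketch under a self-sampling assumption rather than a rigorous argument: if a life lasts N years and your current moment is treated as drawn uniformly at random from it, then P(you are within the first k years) = k/N, which tends to 0 as N grows without bound. On that assumption, an observer whose life will be extended indefinitely by ASI should almost never find themselves in the brief pre-ASI stretch at its very beginning.)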
