
Comment author: Gunnar_Zarncke 04 November 2017 12:33:51PM *  2 points [-]

Abstract

We lay out the first general model of the interplay between intercellular competition, aging, and cancer. Our model shows that aging is a fundamental feature of multicellular life. Current understanding of the evolution of aging holds that aging is due to the weakness of selection to remove alleles that increase mortality only late in life. Our model, while fully compatible with current theory, makes a stronger statement: Multicellular organisms would age even if selection were perfect. These results inform how we think about the evolution of aging and the role of intercellular competition in senescence and cancer.

Full text: http://www.pnas.org/content/early/2017/10/25/1618854114.full.pdf

Note: I came across it via this link, which doesn't really describe what they actually model: https://science.slashdot.org/story/17/11/01/2324217/scientists-have-mathematical-proof-that-its-impossible-to-stop-aging
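
For intuition only, here is a toy simulation of the dilemma the abstract gestures at; it is my own sketch, not the paper's actual model, and every parameter name and dynamic below is an assumption. The idea: damage erodes the "vigor" of cooperating cells, and a tissue can either allow competition (vigorous cells replace damaged ones, but non-cooperating "cheater" lineages also spread) or suppress it (damaged cells simply accumulate).

```python
import random

def simulate_tissue(competition, n_cells=200, steps=200,
                    damage_rate=0.05, cheater_rate=0.0005, seed=0):
    """Toy dynamics (my own, not the paper's): each cell has a vigor
    (healthy = 1.0) and a cooperation flag. Damage erodes the vigor of
    cooperating cells; 'cheater' mutants keep their vigor but no longer
    contribute to tissue function."""
    rng = random.Random(seed)
    cells = [{"vigor": 1.0, "cooperate": True} for _ in range(n_cells)]
    for _ in range(steps):
        for cell in cells:
            if cell["cooperate"]:
                if rng.random() < damage_rate:    # somatic damage accumulates
                    cell["vigor"] = max(0.0, cell["vigor"] - rng.random() * 0.2)
                if rng.random() < cheater_rate:   # mutation to a non-cooperator
                    cell["cooperate"] = False
                    cell["vigor"] = 1.1           # cooperation cost redirected to itself
        if competition:
            # Competition culls the least vigorous 10% of cells and replaces
            # them with copies of the most vigorous -- cooperators or cheaters alike.
            cells.sort(key=lambda c: c["vigor"])
            k = n_cells // 10
            for i in range(k):
                cells[i] = dict(cells[-(i + 1)])
    function = sum(c["vigor"] for c in cells if c["cooperate"]) / n_cells
    cheaters = sum(not c["cooperate"] for c in cells) / n_cells
    return function, cheaters

for competition in (False, True):
    function, cheaters = simulate_tissue(competition)
    print(f"competition={competition}: tissue function {function:.2f}, "
          f"cheater fraction {cheaters:.2f}")
```

Either branch ends badly in this sketch: suppressing competition lets damage accumulate (senescence-like decline), while allowing it lets cheater lineages take over (cancer-like takeover). That is one way to read the claim that multicellular organisms would age even if selection were perfect.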

Comment author: whpearson 04 November 2017 02:09:58PM 1 point [-]

Thanks. At some point I'll have to dig into this and see whether the same math would apply to AI/robots or not. It might have implications for whether to try for singletons or not.

Comment author: Mitchell_Porter 31 October 2017 03:50:11AM 0 points [-]

Wake up! In three days, that AI evolved from knowing nothing, to comprehensively beating an earlier AI which had been trained on a distillation of the best human experience. Do you think there's a force in the world that can stand against that kind of strategic intelligence?

Comment author: whpearson 02 November 2017 05:08:11PM *  0 points [-]

A brief reply.

Strategy is nothing without knowledge of the terrain.

Knowledge of the terrain might be hard to get reliably.

Therefore there might be some time between AGI being developed and it being able to reliably acquire that knowledge. If the people who develop it are friendly, they might decide to distribute it to others, to make it harder for any one project to take off.

Comment author: whpearson 01 November 2017 12:21:21PM 0 points [-]

Has anyone here put much thought into parenting/educating AGIs?

I'm interested in General Intelligence Augmentation: what it would be like to try to build/train an artificial brain lobe and make it part of a normal human intelligence.

I wrote a bit on my current thoughts on how I expect to align it using training/education here, but watching this presentation is necessary for context.

Comment author: whpearson 07 October 2017 10:53:36PM *  2 points [-]

I'm curious about take off, when a computer learns to program itself. I think there are a number of general skills/capabilities involved in the act of programming, so I am interested in how human programmers think they program.

Rank the general skills you think are important for programming

The skills I have identified are the following. I think each of them might be useful in different ways.

Ability to read stack overflow and manuals to get information about programming
Ability to read research papers to get information about previous algorithms
Ability to learn specific facts about the domain (e.g. if you )
Ability to keep a complex model of the domain in your head (e.g. what happens in a factory when a certain actuator is activated)
Ability to keep a complex model of the state of the computer system (e.g. the state of the database or the sorted-ness of an array)
The ability to use trial and error while programming (e.g. trying out the system with a certain parameter and getting feedback about that)

What type of programming do you mainly do?

Comment author: Thomas 18 September 2017 06:58:17PM 5 points [-]

tl;dr

But I saw this:

Time to put humans before business.

Time to put humans before oxygen, too? Silly stuff.

Comment author: whpearson 26 September 2017 12:05:38PM 0 points [-]

Humans existed before business, though not at the tech level we have today. Humans might exist after businesses go extinct; that is the dream of the singularity and of post-scarcity economies.

But with the tech we have today, yep this is not going to fly.

Comment author: turchin 28 August 2017 03:35:25PM *  1 point [-]

The problem with this consensus position is that it fails to imagine that several deadly pandemics could run simultaneously, and that existential terrorists could deliberately organize this by manipulating several viruses. A rather simple AI may help to engineer deadly plagues in droves; it would not need to be superintelligent to do so.

Personally, I see it as a big failure of the whole x-risk community that such risks are ignored and not even discussed.

Comment author: whpearson 29 August 2017 05:54:40PM 0 points [-]

Is there anything we can realistically do about it? Without crippling the whole of biotech?

Comment author: RowanE 28 August 2017 10:00:20PM 1 point [-]

Serves her right for making self-improvement a foremost terminal value even when she knows it's going to be rendered irrelevant; meanwhile, the loop I'm stuck in is the first six hours spent in my catgirl volcano lair.

Comment author: whpearson 29 August 2017 05:48:34PM 0 points [-]

Self-improvement wasn't her terminal value; it was only derived from her utilitarianism. She liked to improve herself and see new vistas because it allowed her to be more efficient in carrying out her goals.

I could have had her spend some time exploring her hedonistic side before looking at what she was becoming (orgasmium) and not liking it from her previous perspective. But the ASI decided that this would scar her mentally and that the two jumps, framed as dreams, were the best way to get her out of the situation (or I didn't want to have to try to write highly optimised bliss, one of the two).

Comment author: whpearson 28 August 2017 01:11:03PM 0 points [-]

A short story, titled "The end of meaning"

It is propaganda for my work on improving autonomy. I'm not sure it is actually useful in that regard, but it was fun to write, and other people here might get a kick out of it.

Tamara blinked her eyes open. The fact that she could blink, had eyes and was not in eternal torment filled her with elation. They'd done it! Against all the odds, the singularity had gone well. They'd defeated death, suffering, pain and torment with a single stroke. It was the start of a new age for mankind, one ruled not by cruel nature but by a benevolent AI.

Tamara was a bit giddy about the possibilities. She could go paragliding in Jupiter's clouds, see supernovae explode and finally finish reading Infinite Jest. But what should she do first? Being a good rationalist, Tamara decided to look at the expected utility of each action. No possible action she could take would reduce anyone's suffering or increase their happiness, because by definition the AI was already maximising those with its superintelligence and human-aligned utility maximisation. She would have to look inside herself for which actions to take.

She had long been a believer in self-perfection and self-improvement. There were many different ways she might self-improve: would she improve her piano playing, become an astronomy expert, or plumb the depths of understanding her own brain so that she could choose to safely improve her inner algorithms? Try as she might, she couldn't decide between these options. Any of these changes to herself looked as valuable as any other. None of them would improve her lot in life. She should let the AI decide what she should experience to maximise her eudaimonia.

blip

Tamara struggled awake. That was some nightmare she had had about the singularity. Luckily it hadn't occurred yet; she could still fix it and make the most meaningful contribution in the human race's history by stopping death, suffering and pain.

As she went about her day's business solving decision theory problems, she was niggled by a possibility. What if the singularity had already happened and she was just in a simulation? It would make sense that the greatest feeling for people would be to solve the world's greatest problems. If the AI was trying to maximise Tamara's utility, ver might put her in the situation where she could be the most agenty and useful, which would be just before the singularity. There would have to be enough pain and suffering in the world to motivate Tamara to fix it, and enough in her life to make it feel consistent. If so, none of her actions here were meaningful; she was not actually saving humanity.

She should probably continue to try and save humanity, because of indexical uncertainty.

Although, if she had this thought, her life would be plagued by doubts about whether it was meaningful or not, so she was probably not in a simulation, as her utility was not being maximised. Probably...

Another thought gripped her, what if she couldn't solve the meaningfulness problem from her nightmare? She would be trapped in a loop.

blip

A nightmare within a nightmare; that was the first time this had happened to Tamara in a very long time. Luckily she had solved the meaningfulness problem a long time ago, or else the thoughts and worries would have plagued her. We just need to keep humans as capable agents and work on intelligence augmentation. It might seem like a longer shot than a singleton AI, requiring people to work together to build a better world, but humans would have a meaningful existence. They would be able to solve their own problems, make their own decisions about what to do based upon their goals, and also help other people; they would still be agents of their own destiny.

Comment author: whpearson 25 August 2017 12:17:07PM 1 point [-]

While biotech risks are existential at the current time, they lessen as we get more technology. If we can have hermetically sealable living quarters and bioscanners that sequence and look for novel viruses and bacteria, we should be able to detect and lock down infected areas, without requiring brain scanners and red goo.

I think we can do similar interventions for most other classes of existential risk. The only one you really need invasive surveillance for is AI. How dangerous tool AI is depends on what intelligence actually is, which is an open question. So I don't think red goo and brain scanners will become a necessity, conditional on my view of intelligence being correct.

Comment author: whpearson 21 August 2017 07:50:01PM 0 points [-]

Is there any appetite for trying to create a collective fox view of the future?

Model the world under various assumptions (energy consumption predictions + economic growth + limits to the earth's energy dissipation + intelligence growth etc.) and try to wrangle it into models that are combined together and updated collectively?
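
As a very rough sketch of what "models that are combined together and updated collectively" might look like, assuming nothing beyond the comment itself: each contributor supplies a simple model of some quantity (here an invented world-energy-use trajectory), the models are combined as a weighted ensemble, and the weights are recomputed as observations come in. All model forms and numbers below are placeholders.

```python
from typing import Callable, Dict, List, Tuple

Model = Callable[[int], float]  # maps a year to a predicted value

def exponential_growth(base: float, rate: float, base_year: int) -> Model:
    return lambda year: base * (1 + rate) ** (year - base_year)

def logistic_ceiling(base: float, rate: float, ceiling: float, base_year: int) -> Model:
    # growth that flattens out as it approaches a hard limit
    def predict(year: int) -> float:
        t = year - base_year
        return ceiling / (1 + ((ceiling - base) / base) * (1 + rate) ** (-t))
    return predict

def update_weights(weights: Dict[str, float], models: Dict[str, Model],
                   observations: List[Tuple[int, float]]) -> Dict[str, float]:
    """Reweight models by how well they fit the observations (a crude
    likelihood-style update: smaller squared error -> larger weight)."""
    scores = {}
    for name, model in models.items():
        err = sum((model(year) - value) ** 2 for year, value in observations)
        scores[name] = weights[name] / (1.0 + err)
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

def ensemble_forecast(year: int, weights: Dict[str, float],
                      models: Dict[str, Model]) -> float:
    return sum(w * models[name](year) for name, w in weights.items())

# Placeholder models of world energy use (arbitrary units, base year 2017).
models: Dict[str, Model] = {
    "exponential": exponential_growth(base=100.0, rate=0.02, base_year=2017),
    "limits": logistic_ceiling(base=100.0, rate=0.02, ceiling=400.0, base_year=2017),
}
weights = {name: 1.0 / len(models) for name in models}

# Invented "observations" for later years; each new batch shifts the weights.
weights = update_weights(weights, models, [(2020, 106.0), (2025, 117.0)])
print(weights)
print("2050 ensemble forecast:", round(ensemble_forecast(2050, weights, models), 1))
```

The specifics don't matter; the point is that disagreements get encoded as explicit models whose weights anyone can recompute when new data arrive.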
