
Comment author: whpearson 07 October 2017 10:53:36PM *  2 points [-]

I'm curious about takeoff, when a computer learns to program itself. I think there are a number of general skills/capabilities involved in the act of programming, so I am interested in how human programmers think they program.

Rank the general skills you think are important for programming

The skills I have identified are the following. I think each of them might be useful in different ways.

  • Ability to read Stack Overflow and manuals to get information about programming

  • Ability to read research papers to get information about previous algorithms

  • Ability to learn specific facts about the domain (e.g. if you )

  • Ability to keep a complex model of the domain in your head (e.g. what happens in a factory when a certain actuator is activated)

  • Ability to keep a complex model of the state of the computer system (e.g. the state of the database or the sorted-ness of an array)

  • The ability to use trial and error while programming (e.g. trying out the system with a certain parameter and getting feedback about that)

What type of programming do you mainly do?

Comment author: Thomas 18 September 2017 06:58:17PM 5 points [-]

tl;dr

But I saw this:

Time to put humans before business.

Time to put humans before oxygen, too? Silly stuff.

Comment author: whpearson 26 September 2017 12:05:38PM 0 points [-]

Humans existed before business, though not at the tech level we have today. Humans might exist after businesses go extinct; that is the dream of the singularity and post-scarcity economies.

But with the tech we have today, yep this is not going to fly.

Comment author: turchin 28 August 2017 03:35:25PM *  1 point [-]

The problem with this consensus position is that it fails to imagine that several deadly pandemics could run simultaneously, and that existential terrorists could deliberately organize this by manipulating several viruses. A rather simple AI could help engineer deadly plagues in droves; it would not need to be superintelligent to do so.

Personally, I see it as a big failure of the whole x-risk community that such risks are ignored and not even discussed.

Comment author: whpearson 29 August 2017 05:54:40PM 0 points [-]

Is there anything we can realistically do about it? Without crippling the whole of biotech?

Comment author: RowanE 28 August 2017 10:00:20PM 1 point [-]

Serves her right for making self-improvement a foremost terminal value even when she knows that's going to be rendered irrelevant; meanwhile, the loop I'm stuck in is the first six hours spent in my catgirl volcano lair.

Comment author: whpearson 29 August 2017 05:48:34PM 0 points [-]

Self-improvement wasn't her terminal value; it was only derived from her utilitarianism. She liked to improve herself and see new vistas because it allowed her to be more efficient in carrying out her goals.

I could have had her spend some time exploring her hedonistic side before looking at what she was becoming (orgasmium) and not liking it from her previous perspective. But the ASI decided that this would scar her mentally and that presenting the two jumps as dreams was the best way to get her out of the situation (or I didn't want to have to try to write highly optimised bliss, one of the two).

Comment author: whpearson 28 August 2017 01:11:03PM 0 points [-]

A short story, titled "The end of meaning".

It is propaganda for my work on improving autonomy. Not sure it is actually useful in that regard. But it was fun to write and other people here might get a kick out of it.

Tamara blinked her eyes open. The fact that she could blink, had eyes, and was not in eternal torment filled her with elation. They'd done it! Against all the odds, the singularity had gone well. They'd defeated death, suffering, pain and torment with a single stroke. It was the start of a new age for mankind, one ruled not by a cruel nature but by a benevolent AI.

Tamara was a bit giddy about the possibilities. She could go paragliding in Jupiter's clouds, see supernovae explode, and finally finish reading Infinite Jest. But what should she do first? Being a good rationalist, Tamara decided to look at the expected utility of each action. No possible action she could take would reduce the suffering of anyone or increase their happiness, because by definition the AI was already maximising those with its superintelligence and human-aligned utility maximisation. She must look inside herself for which actions to take.

She had long been a believer in self-perfection and self-improvement. There were many different ways she might self-improve: would she improve her piano playing, become an astronomy expert, or plumb the depths of understanding her brain so that she could choose to safely improve her inner algorithms? Try as she might, she couldn't make a decision between these options. Any of these changes to herself looked as valuable as any other. None of them would improve her lot in life. She should let the AI decide what she should experience to maximise her eudaimonia.

blip

Tamara struggled awake. That was some nightmare she'd had about the singularity. Luckily it hadn't occurred yet; she could still fix things and make the most meaningful contribution to the human race's history by stopping death, suffering and pain.

As she went about her day's business solving decision theory problems, she was niggled by a possibility. What if the singularity had already happened and she was just in a simulation? It would make sense that the greatest feeling for people would be to solve the world's greatest problems. If the AI was trying to maximise Tamara's utility, ver might put her in the situation where she could be the most agenty and useful, which would be just before the singularity. There would have to be enough pain and suffering in the world to motivate Tamara to fix it, and enough in her life to make it feel consistent. If so, none of her actions here were meaningful; she was not actually saving humanity.

She should probably continue to try and save humanity, because of indexical uncertainty.

Although, if she had this thought, her life would be plagued by doubts about whether it was meaningful or not, so she was probably not in a simulation, as her utility was not being maximised. Probably...

Another thought gripped her: what if she couldn't solve the meaningfulness problem from her nightmare? She would be trapped in a loop.

blip

A nightmare within a nightmare; that was the first time that had happened to Tamara in a very long time. Luckily she had solved the meaningfulness problem long ago, or else the thoughts and worries would have plagued her. We just need to keep humans as capable agents and work on intelligence augmentation. It might seem like a longer shot than a singleton AI, requiring people to work together to build a better world, but humans would have a meaningful existence. They would be able to solve their own problems, make their own decisions about what to do based upon their goals, and also help other people; they would still be agents of their own destiny.

Comment author: whpearson 25 August 2017 12:17:07PM 1 point [-]

While biotech risks are existential at the current time, they lessen as we get more technology. If we can have hermetically sealable living quarters and bioscanners that sequence and look for novel viruses and bacteria, we should be able to detect and lock down infected areas, without requiring brain scanners and red goo.
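
A toy sketch of that detection step, in Python. This is only an illustration: the reference sequence, the k-mer size and the 0.5 threshold are all made up, and a real bioscanner would align reads against full sequence databases rather than do naive k-mer matching.

    def kmers(seq, k=8):
        """All length-k substrings of a sequence."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    # Hypothetical reference set built from known genomes (toy data here).
    KNOWN = kmers("ACGTACGTGGCCTTAAACGTTGCAGGTCCATG")

    def looks_novel(sample_seq, known=KNOWN, k=8, threshold=0.5):
        """True when most of the sample's k-mers match nothing known:
        the signal to lock down and take a closer look."""
        sample = kmers(sample_seq, k)
        if not sample:
            return False
        overlap = len(sample & known) / len(sample)
        return overlap < threshold

    # looks_novel("ACGTACGTGGCCTTAA")      -> False (fragment of a known genome)
    # looks_novel("TTTTGGGGCCCCAAAATTTT")  -> True  (almost no overlap with known life)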

I think we can make similar interventions for most other classes of existential risk. The only one you need really invasive surveillance for is AI. How dangerous tool AI is depends on what intelligence actually is, which is an open question. So I don't think red goo and brain scanners will become a necessity, conditional on my view of intelligence being correct.

Comment author: whpearson 21 August 2017 07:50:01PM 0 points [-]

Is there any appetite for trying to create a collective fox view of the future?

Model the world under various assumptions (energy consumption predictions + economic growth + limits to the earth's energy dissipation + intelligence growth, etc.) and try to wrangle them into models that are combined together and updated collectively?
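
Something like the following toy sketch: the energy models, the observation and the noise parameter are all invented for illustration, and re-weighting models by their past predictions is just crude Bayesian model averaging. A real version would need proper likelihoods and many more variables.

    import math

    # Toy component models: relative energy consumption vs. a 2017 baseline.
    models = {
        "exponential": lambda year: 1.02 ** (year - 2017),  # steady 2%/yr growth
        "logistic": lambda year: 2.0 / (1.0 + math.exp(-0.02 * (year - 2017))),  # growth that saturates
        "flat": lambda year: 1.0,  # limits already bind
    }

    weights = {name: 1.0 / len(models) for name in models}  # start with no favourite

    def update(year, observed, noise=0.1):
        """Collective update: re-weight each model by how well it predicted."""
        like = {n: math.exp(-((m(year) - observed) / noise) ** 2) for n, m in models.items()}
        total = sum(weights[n] * like[n] for n in models)
        for n in models:
            weights[n] *= like[n] / total

    def forecast(year):
        """The combined 'fox' view: a weighted average over all the models."""
        return sum(weights[n] * models[n](year) for n in models)

    update(2020, observed=1.05)  # feed in one (made-up) observation
    print(forecast(2030))        # combined prediction under the new weights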

Comment author: whpearson 19 August 2017 01:22:52PM 0 points [-]

I find it interesting to think about metamodernism and metarationalism. I find myself at a similar intersection to you, communities-wise, so I'm also trying to find my place.

I think I am a constrained rationalist. I think there are resource/informational/logical constraints on forming one coherent world view, so I am comfortable with multiple views of reality. However, we should still push up against those constraints and try to unify things as much as possible, as the views should impinge on each other. This can be seen as a metamodern viewpoint, trying to embrace both construction and deconstruction.

There is a lot about metamodernism that I am still not sure of, though.

Comment author: Dagon 31 July 2017 10:57:07PM 1 point [-]

mainly non-violent competition

Heh. If you think there's any such thing as "non-violent competition", you're not seeing through some levels of abstraction. All resource allocation is violent or has the threat of violence behind it.

Poor competitors fail to reproduce, and that is the ultimate violence.

Comment author: whpearson 01 August 2017 07:58:01PM 0 points [-]

If the competition stops a person reproducing, then sure, it is a little violent. If it stops an idea reproducing, then I am not so sure I care about stopping all violence.

Poor competitors fail to reproduce, and that is the ultimate violence.

Failure to reproduce is not the ultimate violence. Killing someone, killing everyone vaguely related to them (including the bacteria that share their genetic code), and destroying their culture and all its traces is far more violent.

Comment author: Lumifer 31 July 2017 08:48:06PM 0 points [-]

A few issues immediately come to mind.

  • What's "pro-social" and "anti-social"? In particular, what if you're pro-social, but pro-a-different-social? Consider your standard revolutionaries of different kinds.

  • Pro- and anti-social are not immutable characteristics. People change.

  • If access to technology/power is going to be gated by conformity, the whole autonomy premise goes out of the window right away.

Comment author: whpearson 31 July 2017 09:30:52PM 0 points [-]

Pro-social is not trying to take over the entire world or threatening . It is agreeing to mainly non-violent competition. Anti-social is genocide/pogroms, biocide, mind crimes, bio/nano warfare.

I'd rather no gating, but some gating might be required at different times.
