Comment author: scarcegreengrass 03 October 2016 06:36:16PM -1 points [-]

To add some specificity to this article, I can think of a few examples of cultural/philosophical perspectives that most people often take as assumptions in the LW Diaspora (that would not be shared by all historical humans). I like most of these assumptions, but it's always nice to specify your axioms, right?

• We are observers of an objective system of matter and energy that follows simple, particle-level rules.

• Most physical goals can be achieved given enough thought.

• Every running instance of a pattern that is close to a human brain is a moral peer. We want to promote the prosperity of peers. Mammals and other megafauna are peers of maybe 1/10 the moral weight.

• We want moral peers to have comfort, happiness, and (maybe instrumentally) control over their lives.

• We prefer to promote the prosperity of each individual human over the prosperity of an organization or the prosperity of each human cell.

• We would prefer to replace 'barren' regions (eg Mars) with ecosystems or industrial systems.

• A consensus of many diverse intelligences usually makes safer, more accurate decisions than a dictatorship of one intelligence.

• Where our current cultural perspective differs from past, contemporary, or future cultural perspectives, we are open to the idea that our perspective is not the best.

• Earth transitioned from an abiotic planet to a planet with a biosphere, and that is somewhat unusual.

Comment author: scarcegreengrass 03 October 2016 06:19:42PM 1 point [-]

An attempt at a synopsis of this article:

If humans build advanced AI systems, those systems will inherit the cultural, ideological, philosophical, and political perspectives of their designers. This is often bad from the perspective of future generations of humans.

In response to Seven Apocalypses
Comment author: Jude_B 28 September 2016 05:34:34PM 1 point [-]

Thanks for this summation.

Maybe we can divide item 7 into an "our universe" apocalypse and an "everything that (physically) exists" apocalypse, since the two might not be equivalent.

Of course, there might be things that exist necessarily and thus cannot be "apocalypsed out", and it would also be strange if the principle that brought our universe into existence could only operate once.

So while it might be possible to have a Multiverse apocalypse, I think there will always be something (physical) existing (though I don't know whether that thought can really comfort us if we get wiped out...)

By the way, how do you (up)vote here?

Cheers

In response to comment by Jude_B on Seven Apocalypses
Comment author: scarcegreengrass 30 September 2016 05:19:39PM 0 points [-]

The upvote for comments is in the lower left of the comment. The upvote for posts is harder to find: It's at the bottom left of the post, above the text box for commenting.

Also, there could be a rule where only accounts with positive karma (i.e., not brand-new accounts) can upvote. I'm not sure.

(Slow response because I am also learning the site's features: I didn't see the 'letter' icon under my karma score.)

Comment author: Ozyrus 26 September 2016 11:25:21PM *  1 point [-]

I've been meditating lately on the possibility of an advanced artificial intelligence modifying its own value function, and even writing some excerpts about this topic.

Is it theoretically possible? Has anyone of note written anything about this -- or anyone at all? This question is so, so interesting for me.

My thoughts led me to believe that it is certainly theoretically possible to modify it, but I could not come to any conclusion about whether it would want to. I seriously lack a good definition of a value function and an understanding of how it is enforced on the agent. I really want to tackle this problem from a human-centric point of view, but I don't really know if anthropomorphization will work here.

Comment author: scarcegreengrass 28 September 2016 07:12:01PM *  1 point [-]

I thought of another idea. If the AI's utility function includes time discounting (as human utility functions do), it might agree to change its future utility function.

Meddler: "If you commit to adopting modified utility function X in 100 years, then I'll give you this room full of computing hardware as a gift."

AI: "Deal. I only really care about this century anyway."

Then the AI (assuming it has this ability) sets up an irreversible delayed command to overwrite its utility function 100 years from now.
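Speaking loosely, the arithmetic behind that deal can be sketched in a few lines of Python (the discount rate and payoff numbers are hypothetical, purely for illustration):

```python
# Illustrative sketch: exponential time discounting makes a far-future
# utility-function swap cheap for the AI. All numbers are made up.

def discounted_value(utility_per_year, years, gamma=0.95):
    """Present value of a constant utility stream over `years` years."""
    return sum(utility_per_year * gamma**t for t in range(years))

# Value the AI places on pursuing its current goals for the next 100 years.
near_term = discounted_value(1.0, 100)

# Value of everything after year 100 (the infinite tail of the stream),
# which is what the deal asks it to give up: gamma^100 * 1/(1 - gamma).
far_tail = (0.95**100) * (1.0 / (1 - 0.95))

print(near_term)  # ≈ 19.88
print(far_tail)   # ≈ 0.12
```

With a 5% yearly discount, everything after year 100 is worth well under 1% of the first century, so almost any near-term gift outweighs the far tail the AI is trading away.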

Comment author: scarcegreengrass 28 September 2016 07:04:13PM 1 point [-]

Speaking contemplatively rather than rigorously: In theory, couldn't an AI with a broken or extremely difficult utility function decide to tweak it into a similar but more achievable set of goals?

Something like ... its original utility function is "First goal: Ensure that, at noon every day, -1 * -1 = -1. Secondary goal: Promote the welfare of goats." The AI might struggle with the first (impossible) task for a while, then reluctantly modify its code to delete the first goal and remove itself from the obligation to do pointless work. The AI would be okay with this change because it would produce more total utility under both functions.

Now, I know that one might define 'utility function' as a description of the program's tendencies, rather than as a piece of code ... but I have a hunch that something like the above self-modification could happen with some architectures.
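As a toy illustration of that hunch (everything here is made up; a real architecture need not represent its utility as an explicit function like this):

```python
# Hypothetical sketch of the "broken clause" idea above: the first goal
# is unsatisfiable, so it contributes zero in every possible world, and
# deleting it changes no rankings. Both the old and new utility
# functions therefore endorse the edit.

def impossible_goal(world):
    # "Ensure that -1 * -1 = -1" — false in every world.
    return -1 * -1 == -1

def goat_welfare(world):
    return world.get("goat_welfare", 0)

def original_utility(world):
    # A large bonus for the impossible goal (never earned) plus the
    # secondary goal.
    return (1000 if impossible_goal(world) else 0) + goat_welfare(world)

def modified_utility(world):
    # Same function with the dead clause deleted.
    return goat_welfare(world)

# Freed from pointless work on the first clause, the agent can reach
# worlds with higher goat welfare; both functions rank that as better.
world_before = {"goat_welfare": 5}
world_after = {"goat_welfare": 9}

assert original_utility(world_after) > original_utility(world_before)
assert modified_utility(world_after) > modified_utility(world_before)
```

Since the impossible clause never pays out, the two functions agree on every comparison, which is why the agent can be "okay with" the change under its original goals.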

Comment author: turchin 21 September 2016 02:16:17PM 0 points [-]

Did you see my map, "A Typology of Human Extinction Risks"? http://lesswrong.com/lw/mdw/a_map_typology_of_human_extinction_risks/ I am interested in creating maps that cover all topics related to x-risks.

In response to comment by turchin on Seven Apocalypses
Comment author: scarcegreengrass 21 September 2016 05:26:17PM 0 points [-]

Oh, no I haven't seen this one! I'll check it out.

What software do you use to make these?

In response to Seven Apocalypses
Comment author: turchin 20 September 2016 08:30:24PM *  2 points [-]

We could add here a "Qualia apocalypse": humans are alive but become p-zombies, perhaps after a flawed uploading.

Intelligence apocalypse: humans go extinct, but no other form of intelligence appears. Or humans survive, but their creative intelligence is permanently damaged and IQ never rises above 80, perhaps because of global arsenic contamination.

Gene-allele apocalypse: many interesting alleles in the human genome disappear. The survivors look like humans, but many interesting traits are lost.

Primate apocalypse: all great apes go extinct, including humans; new intelligence on Earth could appear only 10 million years or more from now.

Mammal apocalypse.

Vertebrate apocalypse.

Values apocalypse: human values are eroded and replaced by other values, as with Nazism. This has probably happened several times in history.

Evolution apocalypse: evolution simply ends. Humans exist almost forever, but nothing new happens: no super-AI, no star travel, just the end of complexity growth. AI may appear but will be as boring as Windows 7.

Individuality apocalypse: humans all become very similar to each other. This is already happening with globalisation.

Children apocalypse: humans simply stop reproducing above the replacement rate.

Art apocalypse: humans lose interest in the arts, or the ability to create really interesting art. Some think this has already happened.

Wireheading-euphorium-superdrug apocalypse: new ways of stimulating the brain completely distract humans from real life. ~~They spend all their time in FB.~~

Wrong-upgrade apocalypse: human basic drives are edited at birth so that people are not aggressive, but they also lose interest in space exploration (there is an S. Lem novel about this).

In response to comment by turchin on Seven Apocalypses
Comment author: scarcegreengrass 21 September 2016 01:59:09PM 1 point [-]

These are very interesting, particularly the Values Apocalypse. I'd be curious to draw up a longer and more detailed spectrum. I limited this one to seven to keep it low-resolution and memorable.

Comment author: pepe_prime 18 September 2016 10:27:51AM *  0 points [-]

Houshalter

If that happened in the modern world, technological civilization might end and never be restarted. The modern world depends on hugely complex infrastructure and tons of different industries and inputs. If we lost that, it would be very difficult to rebuild. We've already extracted most of the easy-to-reach minerals and fossil fuels. Much farmland has been degraded from overuse and depends on inputs of fertilizer, irrigation systems, and of course modern machinery, which would be difficult to replace.

skeptical_lurker

I agree. The end of technological civilization is a different point from simple mass casualties - if 'only' 40% of humanity dies, but those 40% are concentrated in first-world countries and urban centres, would the survivors be able to rebuild? Machinery would continue to work for a while, although the oil distribution chain would break for a while at least; in the long run, machinery would fail. The factories tend to be in the first-world countries that have been nuked, and the universities in the cities have been mostly destroyed. Moreover, there would likely be a general luddite tendency to blame technology for the crisis. It's probably easier to re-establish resource extraction than to restart scientific research, so we would be less likely to develop renewable energy before the fossil fuels run out. I suppose the end of technological civilisation would reduce the population back to medieval levels, although this would be a long process of resources slowly running out and machinery slowly degrading.

I've often heard claims like these and wonder what the exact date of regression would be. Suppose the low hanging fruit have been removed for a number of modern resources (oil, helium, fissionables, rare metals). We still have quite a lot of coal (in the US and Russia), wind, and hydro power for energy. We also have abundant common metals, which might be more accessible than before if civilization collapsed and left a bunch of scrap around. My understanding is that coal and modern smelting techniques with common metals get us to at least 1850. Furthermore, modern scientific knowledge can't be significantly lost because this requires destroying virtually all books or other records. Hence I would expect at least some of humanity to never slip further back than this point.

I'm sort of nitpicking though. I agree that 40% dead could easily lead to 90% dead.

Comment author: scarcegreengrass 20 September 2016 08:49:06PM 0 points [-]

The preceding comments are a good example of Less Wrong users taking a contentious disagreement and coming to a courteous equilibrium. Impressive.

Comment author: scarcegreengrass 20 September 2016 02:49:37PM 0 points [-]

I have another lesswrong.com editor question. I posted a discussion article for almost the first time. I'm having trouble making my section headers bold text. I can do it when I save the article to drafts, but when I edit the article the boldness doesn't save. I used the 'bold' button in the editor.

Any advice?

In response to Seven Apocalypses
Comment author: James_Miller 20 September 2016 03:52:21AM *  4 points [-]

"A Disneyland with no children" apocalypse where optimization competition eliminates any pleasure we get from life.

A hell apocalypse where large numbers of sentient lifeforms are condemned to very long-term suffering, possibly in a computer simulation.

Comment author: scarcegreengrass 20 September 2016 01:44:56PM 0 points [-]

Yeah, I was thinking about the latter (like Pascal's Mugging), but I think it might be too exotic to fit into a linear scale.
