AnnaLeptikon comments on Meetup Report Thread: September 2014 - Less Wrong

Post author: Viliam_Bur 30 August 2014 12:32PM


Comment author: AnnaLeptikon 28 September 2014 07:01:29AM

Review of the Rationality Meetup in Vienna on 27.09.2014

Superintelligence Summary by Marko Thiel

People who were there: Andreas, Matthias Brandner, Manuel K., Monika, Marko, Viliam Bur, Alex, Philip, Anna, Philipp, Tino/Sandy, Austin, Milica, Luka, Ivan, Axel, Lea, Lio, Andreas V., Manuel M. (20 people - awesome! And so many were new!)

Superintelligence by Nick Bostrom (presented by Marko Thiel)

First announcement: ethics would not be discussed but rather set aside for the day, because otherwise he would never finish

Quote "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

definition:

(I sadly didn't take notes here)

paths:

  • AI -> strong AI

  • whole brain emulation

  • biological cognition (improve humans, chemical substances/nootropics, embryo selection -> lots of generations in vitro)

  • interface with machine

  • networks and organizations

forms:

  • speed superintelligence (like a human, but faster)

  • collective superintelligence (smarter together)

  • quality superintelligence (qualitatively different from human intelligence)

intelligence explosion:

  • there will be a crossover point after which the machine itself will continue improving the system

  • a graph illustrating how close an idiot and Einstein are to each other, compared to a mouse or a superintelligence

rate of intelligence change

AI takeover:

  1. pre-criticality

  2. intelligence amplification

  3. covert preparation

  4. overt implementation

-> this raises the question of where the motivation to do so comes from!

what the superintelligence will want:

  • the orthogonality thesis: any level of intelligence can be paired with any motivation (Eliezer would probably call it an "anti-prediction")

Castes:

  • oracles

  • genies

  • sovereigns

  • tools

predictability:

  • design

  • convergent instrumental goals (for example: self-preservation, goal-content integrity, cognitive enhancement, technology, resource acquisition -> infrastructure profusion)

other topics: failure modes, control problem, acquire values, choosing who to choose, do what I mean, the strategic picture

Comments I wrote down

by Andreas: "Does SI include self-awareness?" answer: "No"

by Ivan: Has anyone tried to breed a dog with human-level intelligence?

by Lio: Is it possible to have something that's good at all the things? -> Viliam: Yes, with a network structure

by Austin: Is there a difference between biological and programmed happiness? -> Manuel: the utility function of a thermostat is to keep the room at 20 degrees, but it doesn't WANT to be there

in the evening:

  • We talked about everyday things members of the group do that they consider rational, daily effective rituals, and personal sleep durations

for the next meetup:

  • We will vote on the date in the Facebook group

  • We will have three topics; the third will be to collect personal goals we want to achieve with/in the group and ideas we have for the group