[Link] The Leverhulme Centre for the Future of Intelligence officially launches.

1 ignoranceprior 21 October 2016 01:22AM
Comment author: teddy-ak17 19 July 2016 04:16:39AM 0 points [-]

Will the conference be available online?

Comment author: ignoranceprior 16 October 2016 04:35:10AM 1 point [-]

You can watch the archived videos here: http://livestream.com/nyu-tv/ethicsofAI

Comment author: Mac 15 October 2016 01:41:14PM *  1 point [-]

Is a unit of suffering less complex than a unit of happiness, and, therefore, more likely to occur in the universe, all else equal? I realize this is an insanely difficult question, but would be interested in current opinions and any related evidence.

Comment author: ignoranceprior 15 October 2016 10:41:02PM 1 point [-]

A similar question is whether happiness and suffering are equally energy-efficient.

[Link] Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority

6 ignoranceprior 14 October 2016 07:58PM
Comment author: ignoranceprior 05 September 2016 02:05:10AM *  2 points [-]

Has anyone here had success with the method of loci (memory palace)? I've seen it mentioned a few times on LW but I'm not sure where to start, or whether it's worth investing time into.

UC Berkeley launches Center for Human-Compatible Artificial Intelligence

10 ignoranceprior 29 August 2016 10:43PM

Source article: http://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/

UC Berkeley artificial intelligence (AI) expert Stuart Russell will lead a new Center for Human-Compatible Artificial Intelligence, launched this week.

Russell, a UC Berkeley professor of electrical engineering and computer sciences and the Smith-Zadeh Professor in Engineering, is co-author of Artificial Intelligence: A Modern Approach, which is considered the standard text in the field of artificial intelligence, and has been an advocate for incorporating human values into the design of AI.

The primary focus of the new center is to ensure that AI systems are beneficial to humans, he said.

The co-principal investigators for the new center include computer scientists Pieter Abbeel and Anca Dragan and cognitive scientist Tom Griffiths, all from UC Berkeley; computer scientists Bart Selman and Joseph Halpern, from Cornell University; and AI experts Michael Wellman and Satinder Singh Baveja, from the University of Michigan. Russell said the center expects to add collaborators with related expertise in economics, philosophy and other social sciences.

The center is being launched with a grant of $5.5 million from the Open Philanthropy Project, with additional grants for the center’s research from the Leverhulme Trust and the Future of Life Institute.

Russell is quick to dismiss the imaginary threat from the sentient, evil robots of science fiction. The issue, he said, is that machines as we currently design them in fields like AI, robotics, control theory and operations research take the objectives that we humans give them very literally. Told to clean the bath, a domestic robot might, like the Cat in the Hat, use mother’s white dress, not understanding that the value of a clean dress is greater than the value of a clean bath.

The center will work on ways to guarantee that the most sophisticated AI systems of the future, which may be entrusted with control of critical infrastructure and may provide essential services to billions of people, will act in a manner that is aligned with human values.

“AI systems must remain under human control, with suitable constraints on behavior, despite capabilities that may eventually exceed our own,” Russell said. “This means we need cast-iron formal proofs, not just good intentions.”

One approach Russell and others are exploring is called inverse reinforcement learning, through which a robot can learn about human values by observing human behavior. By watching people dragging themselves out of bed in the morning and going through the grinding, hissing and steaming motions of making a caffè latte, for example, the robot learns something about the value of coffee to humans at that time of day.

“Rather than have robot designers specify the values, which would probably be a disaster,” said Russell, “instead the robots will observe and learn from people. Not just by watching, but also by reading. Almost everything ever written down is about people doing things, and other people having opinions about it. All of that is useful evidence.”
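The idea behind inverse reinforcement learning can be illustrated with a deliberately tiny sketch (not from the article, and far simpler than anything the center would build): a robot watches a person repeatedly choose between two morning drinks and infers, by maximum likelihood under an assumed Boltzmann-rational choice model, how much more the person values one over the other. The function name, the learning rate, and the choice model are all illustrative assumptions.

```python
import math

def infer_reward_gap(observations, lr=0.1, steps=2000):
    """Maximum-likelihood estimate of the reward difference r(A) - r(B),
    assuming the observed human picks option A with probability
    sigmoid(r(A) - r(B)) (a simple Boltzmann-rational choice model).
    `observations` is a list of booleans: True when A was chosen."""
    gap = 0.0
    n = len(observations)
    k = sum(observations)  # number of times A was chosen
    for _ in range(steps):
        p = 1.0 / (1.0 + math.exp(-gap))  # model's predicted P(choose A)
        # gradient ascent on the mean log-likelihood: d/dgap = (k - n*p) / n
        gap += lr * (k - n * p) / n
    return gap

# The "robot" watches 100 mornings: coffee chosen 90 times, tea 10 times.
choices = [True] * 90 + [False] * 10
gap = infer_reward_gap(choices)  # converges near log(0.9/0.1) ≈ 2.20
```

The inferred gap recovers the log-odds of the observed choices, i.e. the robot concludes coffee carries substantially more reward than tea at that time of day. Real IRL must additionally cope with sequential behavior, many states, and the imperfect, conflicting preferences Russell describes below.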

Russell and his colleagues don’t expect this to be an easy task.

“People are highly varied in their values and far from perfect in putting them into practice,” he acknowledged. “These aspects cause problems for a robot trying to learn what it is that we want and to navigate the often conflicting desires of different individuals.”

Russell, who recently wrote an optimistic article titled “Will They Make Us Better People?,” summed it up this way: “In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think.”

Comment author: Dagon 25 August 2016 08:45:00PM -1 points [-]

I was around back in the day, and can confirm that this is nonsense. NRx evolved separately. There was a period when it was of interest to, and explored by, a number of LW contributors, but I don't think any of the thought leaders of either group were significantly influential on the other.

There is some philosophical overlap in terms of truth-seeking and attempted distinction between universal truths and current social equilibria, but neither one caused nor grew from the other.

Comment author: ignoranceprior 26 August 2016 04:38:33AM *  2 points [-]

I don't know whether you've heard of it, but someone wrote an ebook called "Neoreaction a Basilisk" that claims Eliezer Yudkowsky was an important influence on Mencius Moldbug and Nick Land. There was a lot of talk about it on the tumblr LW diaspora a few months back.

Comment author: Arielgenesis 24 July 2016 07:05:16PM 0 points [-]

Hi, I have silly question. How do I vote? It seems obvious but I cannot see any upvote or downvote button anywhere in this page. I have tried:

  1. looking at the top of the comment. Next to OP/TS is the date, then the time, then the points. At the far right is the 'minimize' button.
  2. looking at the bottom of the comment. I see Parent, Edit, Permalink, get notification.
  3. The FAQ says "you can vote submissions and comments up or down just like you can on Reddit", but I cannot find the vote buttons anywhere near comments or posts.

Comment author: ignoranceprior 24 July 2016 07:27:10PM *  2 points [-]

You need at least 10 karma points to vote (you currently have 2 points, according to your profile). Once you have 10 points you should be able to see the voting buttons. Incidentally, after a troll downvoted me from 12 to 4, I lost the ability to vote, and now I can no longer see the buttons.

[Link] NYU conference: Ethics of Artificial Intelligence (October 14-15)

4 ignoranceprior 16 July 2016 09:07PM

FYI: https://wp.nyu.edu/consciousness/ethics-of-artificial-intelligence/

This conference will explore questions about the ethics of artificial intelligence, including:

What ethical principles should AI researchers follow?
Are there restrictions on the ethical use of AI?
What is the best way to design morally beneficial AI?
Is it possible or desirable to build moral principles into AI systems?
When AI systems cause benefits or harm, who is morally responsible?
Are AI systems themselves potential objects of moral concern?
What moral framework is best used to assess questions about the ethics of AI?

Speakers and panelists will include:

Nick Bostrom (Future of Humanity Institute), Meia Chita-Tegmark (Future of Life Institute), Mara Garza (UC Riverside, Philosophy), Sam Harris (Project Reason), Demis Hassabis (DeepMind/Google), Yann LeCun (Facebook, NYU Data Science), Peter Railton (University of Michigan, Philosophy), Francesca Rossi (University of Padova, Computer Science), Stuart Russell (UC Berkeley, Computer Science), Susan Schneider (University of Connecticut, Philosophy), Eric Schwitzgebel (UC Riverside, Philosophy), Max Tegmark (Future of Life Institute), Wendell Wallach (Yale, Bioethics), Eliezer Yudkowsky (Machine Intelligence Research Institute), and others.

Organizers: Ned Block (NYU, Philosophy), David Chalmers (NYU, Philosophy), S. Matthew Liao (NYU, Bioethics)

A full schedule will be circulated closer to the conference date.

Registration is free but required. REGISTER HERE. Please note that admission is limited, and is first-come first-served: it is not guaranteed by registration.

Comment author: ArthurRainbow 14 July 2016 08:17:50AM 0 points [-]

Broken link and no copy on archive.org

Comment author: ignoranceprior 14 July 2016 01:09:55PM *  1 point [-]

Archive.org copy (takes a few seconds to load)

Archive.is copy
