
In response to comment by RyanCarey on AI arms race
Comment author: woodchopper 06 May 2017 05:04:59PM 0 points

Developing an AGI (and then an ASI) will likely involve a series of steps involving lower intelligences. There's already an AI arms race between several large technology companies, and keeping your nose in front is already standard practice, because there's a lot of utility in having the best AI so far.

So it isn't true to say that it's simply a race without important intermediate steps. You don't just want to get to the destination first; you want to make sure your AI is the best for most of the race, for a whole heap of reasons.

In response to comment by woodchopper on AI arms race
Comment author: RyanCarey 09 May 2017 07:08:16AM 0 points

If your path to superintelligence spends a lot of time in regions with intermediate-grade AIs that can generate power or intelligence, then that's true, and the phrase "arms race" aptly describes such situations.

It's in the case where people are designing a superintelligence "from scratch" that the term "arms race" seems inappropriate.

In response to AI arms race
Comment author: RyanCarey 06 May 2017 09:18:59AM 1 point

Someone pointed out to me that we should probably stop calling superintelligence development a possible "arms race". In an "arms race", you're competing to have a stronger force than the other side. You want to keep your nose in front in case of a fight.

Developing superintelligence, on the other hand, is just a plain old race. A technology race. You simply want to get to the destination first.

(Likewise with developing the first nuke, which also involved arms but was not an arms race.)

[Link] Call for Special Issue on Superintelligence - Informatica

RyanCarey 03 May 2017 05:08AM 2 points
Comment author: RyanCarey 18 April 2017 04:39:23AM 6 points

Similarly to what others have said, it seems pretty unhelpful to start this kind of project without consulting the founders and administrators of LessWrong, given:

  • the risk of desensitizing people to announcements about the LW community
  • the increased splintering of the LW community
  • the good ideas you'd miss out on from LW's founders and administrators
  • the value of maintaining a good working relationship with LW administrators
  • the need for consent to use the LW brand at all

etc etc etc

Comment author: RyanCarey 13 April 2017 03:00:24AM 2 points

It would be kind of surprising if the capabilities to create pleasure and suffering were very asymmetrical. Carl has written a little around this general topic: "Are pain and pleasure equally energy-efficient?"

Comment author: RyanCarey 17 March 2017 11:23:02AM 1 point

Two thoughts:

1 - Why buy? Can't you rent? Personally, I'd get most of the value by living with friends across two floors of a large house (Event Horizon) or in two nearby houses on a street (The Bailey). A few stable families could buy a big house later, per Romeo.

2 - Suppose you actually buy a small dormitory or an old tiny hotel. Call this the hard-mode version of the project. Such a building would accommodate at least the 20 people you're looking for, but it would require commensurate investment. If I imagine pitching this project, my story for some rationalist investor is that it's a socially responsible investment that will pay itself back with some risk and low ROI, but that nonetheless delivers social value by growing the rationalist community. But what projects would be run from such a venue, and what is my case for them?

I could imagine mitigating the downside risk by arranging a Free State Project-like signup, with some deposits. I could increase the upside by promising to make a chain of such houses. I could buffer the EV by already being a proven, competent, impressive startup founder. The hard-mode version of the project does seem valuable, but not necessarily that valuable compared to how hard and expensive it is. It would take a serious leader to actually drive it.

Comment author: Elo 13 March 2017 12:13:15PM 3 points

That meme is poor and should die. How are we actually supposed to construct future converging standards if that meme gets in the way of real progress?

Also, as someone in charge of one of the chat groups, I have no problem with another one, and I'm already in this Discord.

So, in response to your complaint: no.

In response to comment by Elo on LessWrong Discord
Comment author: RyanCarey 13 March 2017 05:21:56PM 3 points

"How are we actually supposed to construct future converging standards if that meme gets in the way of real progress?"

By deleting existing standards. By doing actual work to redistribute people toward better existing standards from worse ones. By having people migrate from at least two deleted standards every time you make a new one.

In response to LessWrong Discord
Comment author: RyanCarey 13 March 2017 09:35:54AM 4 points
Comment author: RyanCarey 04 January 2017 03:05:48PM 5 points

Nitpick:

MIRI recently announced a new research agenda focused on "agent foundations". Yet even the Open Philanthropy Project, made up of people who at least share MIRI's broad worldview, can't decide whether that research direction is promising or useless. The Berkeley Center for Human-Compatible AI doesn't seem to have a specific research agenda beyond Stuart Russell. The AI100 Center at Stanford is just kicking off. That's it.

There's also:

  • MIRI's Alignment for Advanced Machine Learning Systems agenda
  • The Concrete Problems agenda by Amodei, Olah and others
  • Russell's Research Priorities doc, written with Dewey and Tegmark, which probably covers more than his CHCAI Centre
  • Owain Evans, Stuart Armstrong and Eric Drexler at FHI
  • Paul Christiano's thinking on AI Control
  • OpenAI's safety team in formation
  • DeepMind's safety team
  • (?) Wei Dai's thinking on metaphilosophy and AI... He occasionally comments e.g. on AgentFoundations
  • Other machine learning researchers' work, e.g. on safe exploration in deep RL and on transparency.
Comment author: biker19 09 December 2016 09:57:05AM 2 points
Comment author: RyanCarey 09 December 2016 01:14:35PM 0 points

Thank you!!
