Comment author: torekp 17 March 2013 03:16:48PM 1 point [-]

It's a leap of faith to suppose that even our universe, never mind levels I-III, is exhausted by its mathematical properties, as opposed to simply mathematically describable. And I don't really see what it buys you. I suppose it's equally a leap of faith to suppose that our universe has more properties than that, but I just prefer not to leap at all.

Comment author: ESRogs 14 May 2016 08:11:05AM 1 point [-]

What would it mean for our universe not to be exhausted by its mathematical properties? Isn't whether a property seems mathematical just a function of how precisely you've described it?

Comment author: paulfchristiano 19 March 2016 09:04:51PM 2 points [-]

In that case, there would be severe principal-agent problems, given the disparity between power/intelligence of the trainer/AI systems and the users. If I were someone who couldn't directly control an AI using your scheme, I'd be very concerned about getting uneven trades or having my property expropriated outright by individual AIs or AI conspiracies, or just ignored and left behind in the race to capture the cosmic commons. I would be really tempted to try another AI design that does purport to have the AI serve my interests directly, even if that scheme is not as "safe".

Are these worse than the principal-agent problems that exist in any industrialized society? Most humans lack effective control over many important technologies, both in terms of economic productivity and especially military might. (They can't understand the design of a car they use, they can't understand the programs they use, they don't understand what is actually going on with their investments...) It seems like the situation is quite analogous.

Moreover, even if we could build AI in a different way, it doesn't seem to do anything to address the problem, since it is equally opaque to an end user who isn't involved in the AI development process. In any case, they are in some sense at the mercy of the AI developer. I guess this is probably the key point---I don't understand the qualitative difference between being at the mercy of the software developer on the one hand, and being at the mercy of the software developer + the engineers who help the software run day-to-day on the other. There is a slightly different set of issues for monitoring/law enforcement/compliance/etc., but it doesn't seem like a huge change.

(Probably the rest of this comment is irrelevant.)

To talk more concretely about mechanisms in a simple example, you might imagine a handful of companies who provide AI software. The people who use this software are essentially at the mercy of the software providers (since for all they know the software they are using will subvert their interests in arbitrary ways, whether or not there is a human involved in the process). In the most extreme case an AI provider could effectively steal all of their users' wealth. They would presumably then face legal consequences, which are not qualitatively changed by the development of AI if the AI control problem is solved. If anything we expect the legal system and government to better serve human interests.

We could talk about monitoring/enforcement/etc., but again I don't see these issues as interestingly different from the current set of issues, or as interestingly dependent on the nature of our AI control techniques. The most interesting change is probably the irrelevance of human labor, which I think is a very interesting issue economically/politically/legally/etc.

I agree with the general point that as technology improves a singleton becomes more likely. I'm agnostic on whether the control mechanisms I describe would be used by a singleton or by a bunch of actors, and as far as I can tell the character of the control problem is essentially the same in either case.

I do think that a singleton is likely eventually. From the perspective of human observers, a singleton will probably be established relatively shortly after wages fall below subsistence (at the latest). This prediction is mostly based on my expectation that political change will accelerate alongside technological change.

Comment author: ESRogs 15 April 2016 03:51:52AM 0 points [-]

I agree with the general point that as technology improves a singleton becomes more likely. I'm agnostic on whether the control mechanisms I describe would be used by a singleton or by a bunch of actors, and as far as I can tell the character of the control problem is essentially the same in either case.

I wonder -- are you also relatively indifferent between a hard and a slow takeoff, given sufficient time before the takeoff to develop AI control theory?

(One of the reasons a hard takeoff seems scarier to me is that it is more likely to lead to a singleton, with a higher probability of locking in bad values.)

Request for help with economic analysis related to AI forecasting

6 ESRogs 06 February 2016 01:27AM

[Cross-posted from FB]

I've got an economic question that I'm not sure how to answer.

I've been thinking about trends in AI development, and trying to get a better idea of what we should expect progress to look like going forward.

One important question is: how much do existing AI systems help with research and the development of new, more capable AI systems?

The obvious answer is, "not much." But I think of AI systems as being on a continuum from calculators on up. Surely AI researchers sometimes have to do arithmetic and other tasks that they already outsource to computers. I expect that going forward, the share of tasks that AI researchers outsource to computers will (gradually) increase. And I'd like to be able to draw a trend line. (If there's some point in the future when we can expect most of the work of AI R&D to be automated, that would be very interesting to know about!)

So I'd like to be able to measure the share of AI R&D done by computers vs humans. I'm not sure of the best way to measure this. You could try to come up with a list of tasks that AI researchers perform and just count, but you might run into trouble as the list of tasks changes over time (e.g. suppose at some point designing an AI system requires solving a bunch of integrals, and that with some later AI architecture this is no longer necessary).

What seems more promising is to abstract over the specific tasks that computers vs human researchers perform and use some aggregate measure, such as the total amount of energy consumed by the computers or the human brains, or the share of an R&D budget spent on computing infrastructure and operation vs human labor. Intuitively, if most of the resources are going towards computation, one might conclude that computers are doing most of the work.

Unfortunately I don't think that intuition is correct. Suppose AI researchers use computers to perform task X at cost C_x1, and some technological improvement enables X to be performed more cheaply at cost C_x2. Then, all else equal, the share of resources going towards computers will decrease, even though their share of tasks has stayed the same.

On the other hand, suppose there's some task Y that the researchers themselves perform at cost H_y, and some technological improvement enables task Y to be performed more cheaply at cost C_y. After the team outsources Y to computers the share of resources going towards computers has gone up. So it seems like it could go either way -- in some cases technological improvements will lead to the share of resources spent on computers going down and in some cases it will lead to the share of resources spent on computers going up.
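The two cases above can be made concrete with a toy calculation. This is just an illustrative sketch with made-up cost numbers; the specific values and the helper function are my own, not from the post:

```python
# Toy illustration: a technological improvement can move the compute
# share of an R&D budget in either direction, even as the share of
# tasks done by computers stays the same or rises.

def compute_share(compute_costs, human_costs):
    """Fraction of total spending that goes to computers."""
    c = sum(compute_costs)
    return c / (c + sum(human_costs))

# Baseline: computers perform task X, humans perform task Y,
# at equal cost.
C_x1, H_y = 10.0, 10.0
baseline = compute_share([C_x1], [H_y])  # 0.5

# Case 1: task X gets cheaper to compute (C_x1 -> C_x2).
# Computers still do the same tasks, but their budget share falls.
C_x2 = 5.0
case1 = compute_share([C_x2], [H_y])  # 1/3, down from 1/2

# Case 2: task Y is automated at cost C_y < H_y.
# Computers take over a task, and their budget share rises.
C_y = 5.0
case2 = compute_share([C_x1, C_y], [])  # 1.0, up from 1/2

print(baseline, case1, case2)
```

So the same kind of event (computation getting cheaper) pushes the compute share down in one case and up in the other, which is why the budget-share measure alone is ambiguous.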

So here's the econ part -- is there some standard economic analysis I can use here? If both machines and human labor are used in some process, and the machines are becoming both more cost effective and more capable, is there anything I can say about how the expected share of resources going to pay for the machines changes over time?

Comment author: Lumifer 29 January 2016 05:39:06PM 2 points [-]

An interesting comment:

The European champion of Go is not the world champion, or even close. The BBC, for example, reported that “Google achieves AI ‘breakthrough’ by beating Go champion,” and hundreds of other news outlets picked up essentially the same headline. But Go is scarcely a sport in Europe; and the champion in question is ranked only #633 in the world. A robot that beat the 633rd-ranked tennis pro would be impressive, but it still wouldn’t be fair to say that it had “mastered” the game. DeepMind made major progress, but the Go journey is still not over; a fascinating thread at YCombinator suggests that the program — a work in progress — would currently be ranked #279.

Comment author: ESRogs 31 January 2016 06:24:38AM 0 points [-]

It will be interesting to see how much progress they've made since October.

My guess is that they think they're going to win (see for example David Silver's "quiet confidence" here: https://www.youtube.com/watch?v=g-dKXOlsf98&t=5m9s).

[Link] AlphaGo: Mastering the ancient game of Go with Machine Learning

14 ESRogs 27 January 2016 09:04PM

DeepMind's Go AI, called AlphaGo, has beaten the European champion with a score of 5-0. A match against the top-ranked human, Lee Se-dol, is scheduled for March.

 

Games are a great testing ground for developing smarter, more flexible algorithms that have the ability to tackle problems in ways similar to humans. Creating programs that are able to play games better than the best humans has a long history

[...]

But one game has thwarted A.I. research thus far: the ancient game of Go.


Comment author: ESRogs 09 December 2015 10:15:10AM 10 points [-]

Gwern has written an article for Wired, allegedly revealing the true identity of Satoshi Nakamoto:

http://www.wired.com/2015/12/bitcoins-creator-satoshi-nakamoto-is-probably-this-unknown-australian-genius/

Comment author: ESRogs 11 December 2015 08:26:42AM 1 point [-]

Follow-up -- after we've all had some time to think about it, I think this is the best explanation for who this would-be SN is:

https://www.reddit.com/r/Bitcoin/comments/3w9xec/just_think_we_deserve_an_explanation_of_how_craig/cxuo6ac

Comment author: moridinamael 04 December 2015 03:40:54PM *  6 points [-]

This advice may go against other advice, but it's a tactic that has served me well: in making early-career decisions, such as your choice of major, always ask yourself which choice preserves future options.

For example, let's say you are considering a major, and you are equally interested in Architecture, Literature, and Engineering as careers.

Under my analysis, I would ask, which of these choices preserves the most options?

If you choose to pursue a degree in Literature, it is unlikely that you will be able to parlay those skills into any kind of job in Architecture or in Engineering.

If you choose Architecture, you will find it very difficult (though not entirely impossible) to switch into Engineering for a graduate degree. However, you may find that you can try to pivot into some kind of Literary existence more easily.

If you choose Engineering, you'll find that Architectural schools will be eager to accept you for a graduate program, and the difficulty of switching from Engineering to a Literature program will probably be equal to the difficulty of switching from Architecture.

So, under this analysis, Engineering is the choice that preserves the most future options. At the point of choosing your college major, you're too young to be screening off future possibilities. Unless you're completely gung-ho about Literature, and feel a real certainty about what you want, it's best to keep more cards in your hand and let yourself make that exclusionary choice when you're older and wiser.

You may find that reading the classics in your spare time and writing a little bit of fiction now and then more than satisfies your Literary impulses, in which case, you'll be glad you didn't commit yourself to it as a career.

Conversely, if you commit to Engineering and find that you hate it, it's always easier to pivot to the other options.

As a general rule, things that are perceived as more difficult are easier to pivot away from, because the admissions gatekeepers for the perceived-as-less-difficult options will find you impressive due to where you're coming from. This heuristic is valid at all levels. For example, if you decide to go the Engineering route, choose the subdiscipline of Engineering that everybody else says is the hardest, scariest one. You can always punt to one of the easier ones if you don't find it to be a good fit, but it's much harder to go uphill from where you start.

Comment author: ESRogs 07 December 2015 10:42:56PM 1 point [-]

As a complement to this advice (which I think is good), it's important to make sure you still explore. Don't be so worried about making sure you do the thing that maximizes optionality that you're afraid to fail and don't try things.

So if you think you should study math rather than econ (as per Kaj's comment), then start with math as your default, but make sure to also take an econ class to see if you're so much more interested in it / better at it that it's worth it to specialize.

Comment author: Fluttershy 02 December 2015 10:48:36AM 7 points [-]

Four years ago, I asked three members of my close family who were likely to give me something for Christmas to make a donation to GiveWell/AMF (GiveWell's top charity at that point) instead of getting something for me. This wasn't burdensome at all for me, because I didn't have many unmet material needs at the time.

Anyways, in my case, it turned out that my upper middle class American relatives, who were culturally "normal", rather than being culturally close to any EA/Silicon Valley/rationality circles, were quite offended by this suggestion. This may have had something to do with my presentation--I don't remember myself being particularly good at politics or speaking back then, and I tried to be nice, but perhaps I was too bold, or too culturally insensitive. Still, I was quite surprised at how poorly my request was received.

Of the three family members I talked to, two told me that I was being unrealistic, and that I needed to realize how things worked in the real world, or something like that. They got me something comparable to what they'd gotten me the previous year. The third one actually made a donation to the Carter Center, but only after bemoaning how I didn't appreciate how hard making money was in the real world--I think they had liked the Carter Center because a friend had worked there, or something.

A couple of family members I hadn't talked to heard about my request, and I later heard that one had been talking with other members of my family about how she had become "very worried" that I was going to become "too altruistic". Another actually bought a chicken (or goat?) in my name through Heifer International. That was interesting, since I had thought that I had been clear that I preferred GiveWell's top charities over other charities.

I guess that I completely stopped being vocal about EA after that point. Still, I've often wondered if the type of EAs who hold birthday and Christmas fundraisers are more, or less culturally normal-feeling to the average first-worlder than my family is.

(Also, since this comment is about how I'm terrible at understanding how to get the tone right on EA things, I apologize if the tone of this comment itself is somewhat off.)

Comment author: ESRogs 07 December 2015 10:31:07PM 2 points [-]

Four years ago, I asked three members of my close family who were likely to give me something for Christmas to make a donation to GiveWell/AMF (GiveWell's top charity at that point) instead of getting something for me.

Do you remember what you said? Was it written (like a facebook post) or spoken?

Comment author: ESRogs 07 December 2015 10:05:00PM 0 points [-]

Is there a deadline for when the survey will close?
