Comment author: BarbaraB 21 December 2016 05:57:40AM 0 points [-]

Are the uniforms at US schools reasonably practical and comfortable, and do they have a reasonable colour, e.g. not green? As a girl who grew up under socialism, I experienced pioneer uniforms, which were not well designed. They forced short skirts on girls, which are impractical in some weather. The upper part, the shirt, needed to be ironed. There was no sweater or coat to unify the kids in winter. My mother once had to stand coatless in winter in a welcome row for some event. I can also imagine some girls having aesthetic issues with the exposed legs or an unflattering colour. But what are the uniforms in the US usually like?

Comment author: ESRogs 11 January 2017 07:24:32AM 0 points [-]

What's wrong with green?

Comment author: ESRogs 18 December 2016 09:18:37AM *  0 points [-]

Rather than relying on the moderator to actually moderate, use the model to predict what the moderator would do. I’ll tentatively call this arrangement “virtual moderation.”

...

Note that if the community can’t do the work of moderating, i.e. if the moderator was the only source of signal about what content is worth showing, then this can’t work.

Does the "this" in "this can't work" refer to something other than the virtual moderation proposal, or are you saying that even virtual moderation can't work w/o the community doing work? If so, I'm confused, because I thought I was supposed to understand virtual moderation as moderation-by-machine.

Comment author: ESRogs 18 December 2016 09:41:24AM 0 points [-]

Oh, did you mean that the community has to interact with a post/comment (by e.g. upvoting it) enough for the ML system to have some data to base its judgments on?

I had been imagining that the system could form an opinion w/o the benefit of any reader responses, just from some analysis of the content (character count, words used, or even NLP), as well as who wrote it and in what context.
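
(To make that concrete, here is a rough sketch of the sort of content-only model I had in mind. It is purely illustrative; the library choice, the features, and the training data are all hypothetical, and a real system would presumably also fold in author and context features.)

```python
# Hypothetical sketch: predict the moderator's verdict on a comment from its
# text alone, with no reader votes. All data below is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Imaginary training set: past comments and whether the moderator removed them.
comments = [
    "Thoughtful critique, with sources and a concrete counterexample.",
    "you're all idiots",
    "Interesting point, though I think the second premise is doing the work.",
    "buy cheap watches now!!!",
]
removed = [0, 1, 0, 1]

# Bag-of-words features stand in for "character count, words used, or even NLP";
# author and context features could be appended to the same feature vector.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, removed)

# Score a new comment before any reader has voted on it.
p_removed = model.predict_proba(["new comment to be scored"])[0, 1]
print(p_removed)
```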

Comment author: SatvikBeri 27 November 2016 05:18:43PM 26 points [-]

On the idea of a vision for a future, if I were starting a site from scratch, I would love to see it focus on something like "discussions on any topic, but with extremely high intellectual standards". Some ideas:

  • In addition to allowing self-posts, a major type of post would be a link to a piece of content with an initial seed for discussion
  • Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone".
  • A fairly strict and clearly stated set of site norms, with regular updates, and a process for proposing changes
  • Site erring on the side of being over-opinionated. It doesn't necessarily need to be the community hub
  • Votes from highly-voted users count for more.
  • Integration with predictionbook or something similar, to show a user's track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
  • A very strong bent on applications of rationality/clear thought, as opposed to a focus on rationality itself. I would love to see more posts on "here is how I solved a problem I or other people were struggling with"
  • No main/discussion split. There are probably other divisions that make sense (e.g. by topic), but this mostly causes a lot of confusion
  • Better notifications around new posts, or new comments in a thread. E.g. I usually want to see all replies to a comment I've made, not just the top level
  • Built-in argument mapping tools for comments
  • Shadowbanning, a la Hacker News
  • Initially restricted growth, e.g. by invitation only

Comment author: ESRogs 28 November 2016 10:19:44PM 1 point [-]

Built-in argument mapping tools for comments

Could you say more about what you have in mind here?

Comment author: torekp 17 March 2013 03:16:48PM 1 point [-]

It's a leap of faith to suppose that even our universe, never mind levels I-III, is exhausted by its mathematical properties, as opposed to simply mathematically describable. And I don't really see what it buys you. I suppose it's equally a leap of faith to suppose that our universe has more properties than that, but I just prefer not to leap at all.

Comment author: ESRogs 14 May 2016 08:11:05AM 1 point [-]

What would it mean for our universe not to be exhausted by its mathematical properties? Isn't whether a property seems mathematical just a function of how precisely you've described it?

Comment author: paulfchristiano 19 March 2016 09:04:51PM 2 points [-]

In that case, there would be severe principal-agent problems, given the disparity between power/intelligence of the trainer/AI systems and the users. If I was someone who couldn't directly control an AI using your scheme, I'd be very concerned about getting uneven trades or having my property expropriated outright by individual AIs or AI conspiracies, or just ignored and left behind in the race to capture the cosmic commons. I would be really tempted to try another AI design that does purport to have the AI serve my interests directly, even if that scheme is not as "safe".

Are these worse than the principal-agent problems that exist in any industrialized society? Most humans lack effective control over many important technologies, both in terms of economic productivity and especially military might. (They can't understand the design of a car they use, they can't understand the programs they use, they don't understand what is actually going on with their investments...) It seems like the situation is quite analogous.

Moreover, even if we could build AI in a different way, it doesn't seem to do anything to address the problem, since it is equally opaque to an end user who isn't involved in the AI development process. In any case, they are in some sense at the mercy of the AI developer. I guess this is probably the key point---I don't understand the qualitative difference between being at the mercy of the software developer on the one hand, and being at the mercy of the software developer + the engineers who help the software run day-to-day on the other. There is a slightly different set of issues for monitoring/law enforcement/compliance/etc., but it doesn't seem like a huge change.

(Probably the rest of this comment is irrelevant.)

To talk more concretely about mechanisms in a simple example, you might imagine a handful of companies who provide AI software. The people who use this software are essentially at the mercy of the software providers (since for all they know the software they are using will subvert their interests in arbitrary ways, whether or not there is a human involved in the process). In the most extreme case an AI provider could effectively steal all of their users' wealth. They would presumably then face legal consequences, which are not qualitatively changed by the development of AI if the AI control problem is solved. If anything we expect the legal system and government to better serve human interests.

We could talk about monitoring/enforcement/etc., but again I don't see these issues as interestingly different from the current set of issues, or as interestingly dependent on the nature of our AI control techniques. The most interesting change is probably the irrelevance of human labor, which I think is a very interesting issue economically/politically/legally/etc.

I agree with the general point that as technology improves a singleton becomes more likely. I'm agnostic on whether the control mechanisms I describe would be used by a singleton or by a bunch of actors, and as far as I can tell the character of the control problem is essentially the same in either case.

I do think that a singleton is likely eventually. From the perspective of human observers, a singleton will probably be established relatively shortly after wages fall below subsistence (at the latest). This prediction is mostly based on my expectation that political change will accelerate alongside technological change.

Comment author: ESRogs 15 April 2016 03:51:52AM 0 points [-]

I agree with the general point that as technology improves a singleton becomes more likely. I'm agnostic on whether the control mechanisms I describe would be used by a singleton or by a bunch of actors, and as far as I can tell the character of the control problem is essentially the same in either case.

I wonder -- are you also relatively indifferent between a hard and a slow takeoff, given sufficient time before the takeoff to develop AI control theory?

(One of the reasons a hard takeoff seems scarier to me is that it is more likely to lead to a singleton, with a higher probability of locking in bad values.)

Request for help with economic analysis related to AI forecasting

6 ESRogs 06 February 2016 01:27AM

[Cross-posted from FB]

I've got an economic question that I'm not sure how to answer.

I've been thinking about trends in AI development, and trying to get a better idea of what we should expect progress to look like going forward.

One important question is: how much do existing AI systems help with research and the development of new, more capable AI systems?

The obvious answer is, "not much." But I think of AI systems as being on a continuum from calculators on up. Surely AI researchers sometimes have to do arithmetic and other tasks that they already outsource to computers. I expect that going forward, the share of tasks that AI researchers outsource to computers will (gradually) increase. And I'd like to be able to draw a trend line. (If there's some point in the future when we can expect most of the work of AI R&D to be automated, that would be very interesting to know about!)

So I'd like to be able to measure the share of AI R&D done by computers vs humans. I'm not sure of the best way to measure this. You could try to come up with a list of tasks that AI researchers perform and just count, but you might run into trouble as the list of tasks changes over time (e.g. suppose at some point designing an AI system requires solving a bunch of integrals, and that with some later AI architecture this is no longer necessary).

What seems more promising is to abstract over the specific tasks that computers vs human researchers perform and use some aggregate measure, such as the total amount of energy consumed by the computers or the human brains, or the share of an R&D budget spent on computing infrastructure and operation vs human labor. Intuitively, if most of the resources are going towards computation, one might conclude that computers are doing most of the work.

Unfortunately I don't think that intuition is correct. Suppose AI researchers use computers to perform task X at cost C_x1, and some technological improvement enables X to be performed more cheaply at cost C_x2. Then, all else equal, the share of resources going towards computers will decrease, even though their share of tasks has stayed the same.

On the other hand, suppose there's some task Y that the researchers themselves perform at cost H_y, and some technological improvement enables task Y to be performed more cheaply at cost C_y. After the team outsources Y to computers the share of resources going towards computers has gone up. So it seems like it could go either way -- in some cases technological improvements will lead to the share of resources spent on computers going down and in some cases it will lead to the share of resources spent on computers going up.
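
To make those two cases concrete, here's a toy calculation. All the numbers are made up, and I've added an extra task Z that stays with the human researchers so the second case isn't degenerate:

```python
# Toy illustration of the two cases above; all costs are invented for the example.

def machine_share(machine_costs, human_costs):
    """Fraction of total R&D spending that goes to computers."""
    return sum(machine_costs) / (sum(machine_costs) + sum(human_costs))

# Baseline: computers perform task X; researchers perform tasks Y and Z.
C_x1, H_y, H_z = 40.0, 30.0, 30.0
print(machine_share([C_x1], [H_y, H_z]))   # 0.40

# Case 1: task X gets cheaper to compute (C_x1 -> C_x2).
# Computers do the same tasks as before, yet their budget share falls.
C_x2 = 20.0
print(machine_share([C_x2], [H_y, H_z]))   # 0.25

# Case 2: task Y becomes cheap enough to outsource to computers (H_y -> C_y).
# Computers now do more of the tasks, and their budget share rises.
C_y = 10.0
print(machine_share([C_x1, C_y], [H_z]))   # 0.625
```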

So here's the econ part -- is there some standard economic analysis I can use here? If both machines and human labor are used in some process, and the machines are becoming both more cost effective and more capable, is there anything I can say about how the expected share of resources going to pay for the machines changes over time?

Comment author: Lumifer 29 January 2016 05:39:06PM 2 points [-]

An interesting comment:

The European champion of Go is not the world champion, or even close. The BBC, for example, reported that “Google achieves AI ‘breakthrough’ by beating Go champion,” and hundreds of other news outlets picked up essentially the same headline. But Go is scarcely a sport in Europe; and the champion in question is ranked only #633 in the world. A robot that beat the 633rd-ranked tennis pro would be impressive, but it still wouldn’t be fair to say that it had “mastered” the game. DeepMind made major progress, but the Go journey is still not over; a fascinating thread at YCombinator suggests that the program — a work in progress — would currently be ranked #279.

Comment author: ESRogs 31 January 2016 06:24:38AM 0 points [-]

It will be interesting to see how much progress they've made since October.

My guess is that they think they're going to win (see for example David Silver's "quiet confidence" here: https://www.youtube.com/watch?v=g-dKXOlsf98&t=5m9s).

[Link] AlphaGo: Mastering the ancient game of Go with Machine Learning

14 ESRogs 27 January 2016 09:04PM

DeepMind's Go AI, called AlphaGo, has beaten the European champion with a score of 5-0. A match against the top-ranked human, Lee Se-dol, is scheduled for March.

 

Games are a great testing ground for developing smarter, more flexible algorithms that have the ability to tackle problems in ways similar to humans. Creating programs that are able to play games better than the best humans has a long history

[...]

But one game has thwarted A.I. research thus far: the ancient game of Go.


Comment author: ESRogs 09 December 2015 10:15:10AM 10 points [-]

Gwern has written an article for Wired, allegedly revealing the true identity of Satoshi Nakamoto:

http://www.wired.com/2015/12/bitcoins-creator-satoshi-nakamoto-is-probably-this-unknown-australian-genius/

Comment author: ESRogs 11 December 2015 08:26:42AM 1 point [-]

Follow-up -- after we've all had some time to think about it, I think this is the best explanation for who this would-be SN is:

https://www.reddit.com/r/Bitcoin/comments/3w9xec/just_think_we_deserve_an_explanation_of_how_craig/cxuo6ac
