Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: sbenthall 18 September 2014 02:39:39AM 8 points

So there are some big problems with picking the right audience here. I've tried to make some headway with the community that complains about newsfeed algorithm curation (which interests me a lot, but may be more "political" than would interest you) here:


which is currently under review. It's a lot softer than would be ideal, but since I'm trying to convince these people to go from "algorithms, how complicated! Must be evil" to "oh, they could be designed to be constructive", it's a first step. More or less it's just opening up the idea that Twitter is an interesting testbed for ethically motivated algorithmic curation.

I've been concerned more generally with the problem of computational asymmetry in economic situations. I've written up something that's an attempt at a modeling framework here. It's been accepted only as a poster, because its results are very slim. It was like a quarter of a semester's work. I'd be interested in following through on it.


The main problem I ran into was not knowing a good way to model relative computational capacity; the best tools I had were big-O notation and other basic computational theory stuff. I did a little sort of remote apprenticeship with David Wolpert at Los Alamos; he's got some really interesting stuff on level-K reasoning and what he calls predictive game theory.


(That's not his most recent version.) It's really great work, but hard math to tackle on one's own. In general my problem is that there isn't much of a community around this at Berkeley, as far as I can tell. Tell me if you know differently. There's some demand from some of the policy people--the lawyers are quite open-minded and rigorous about this sort of thing. And there's currently a ton of formal work on privacy, which is important but not quite as interesting to me personally.
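Wolpert's predictive game theory goes well beyond this, but the basic level-K idea mentioned above is easy to sketch with the classic "guess 2/3 of the average" game. A minimal sketch, under the standard textbook assumption (mine, not anything from Wolpert's papers) that level-0 players guess uniformly at random on [0, 100]:

```python
# Level-K reasoning in the "guess 2/3 of the average" game.
# A level-0 player guesses uniformly at random on [0, 100] (mean 50);
# a level-k player best-responds to a population of level-(k-1)
# players by guessing 2/3 of their expected guess.

def level_k_guess(k, level0_mean=50.0, factor=2 / 3):
    """Expected guess of a level-k player."""
    guess = level0_mean
    for _ in range(k):
        guess *= factor  # best response to the level below
    return guess

if __name__ == "__main__":
    for k in range(5):
        print(f"level-{k} guess: {level_k_guess(k):.2f}")
```

As k grows the guesses shrink toward 0, the Nash equilibrium; experimentally most people play like level-1 or level-2, which is exactly the kind of bounded computational capacity the comment above is trying to model.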

My blog is a mess and doesn't get into formal stuff at all, at least not recently.

Comment author: lukeprog 18 September 2014 02:42:17AM 2 points


Comment author: lukeprog 17 September 2014 07:12:57PM 4 points

Seb, what kind of work do you "try to do" in this area? Do you have some blog posts somewhere or anything?

Comment author: John_Maxwell_IV 17 September 2014 04:19:24AM 3 points

It's an interesting story, but I think in practice the best way to learn to control owls would be to precommit to killing the young owl before it got too large, experiment with it, and, through experimenting with and killing many young owls, learn how to tame and control owls reliably. Doing owl-control research in the absence of a young owl to experiment on seems unlikely to yield much of use--imagine trying to study zoology without having any animals, or botany without having any plants.

Comment author: lukeprog 17 September 2014 02:47:22PM 3 points

But will all the sparrows be so cautious?

Yes, it's hard, but we do quantum computing research without any quantum computers. Lampson launched work on covert channel communication decades before the vulnerability was exploited in the wild. Turing learned a lot about computers before any existed. NASA does a ton of analysis before they launch something like a Mars rover, without the ability to test it in its final environment.

Comment author: kgalias 16 September 2014 06:18:21AM 5 points

I was under the impression (after reading the sections) that the argument hinges a lot less on (economic) growth than one might glean from the summary here.

Comment author: lukeprog 16 September 2014 08:24:58PM 1 point


Comment author: lukeprog 16 September 2014 05:14:25PM 1 point
Comment author: KatjaGrace 16 September 2014 01:36:53AM 4 points

The 'optimistic' quote from the Dartmouth Conference seems ambiguous in its optimism to me. They say 'a significant advance can be made in one or more of these problems', rather than that any of them can be solved (as they are often quoted as saying). What constitutes a 'significant advance' varies with optimism, so their statement seems consistent with them believing they can make an arbitrarily small step. The whole proposal is here, if anyone is curious about the rest.

Comment author: lukeprog 16 September 2014 06:21:29AM 2 points

Off the top of my head I don't recall, but I bet Machines Who Think has detailed coverage of those early years and can probably shed some light on how much of an advance the Dartmouth participants expected.

Comment author: VonBrownie 16 September 2014 01:42:47AM 1 point

Are there any ongoing efforts to model the intelligent behaviour of organisms other than humans?

Comment author: lukeprog 16 September 2014 03:50:42AM 1 point

Definitely! See Wikipedia and e.g. this book.

Comment author: jallen 16 September 2014 02:46:41AM 2 points

I'm curious whether any of you feel that future widespread use of commercial-scale quantum computing (here I am thinking of at least thousands of quantum computers in the private domain, with a multitude of programs already written, tested, available, economical, and functionally useful) will have any impact on the development of strong A.I. Has anyone read or written any literature regarding the potential windfalls this could bring to A.I.'s advancement (or lack thereof)?

I'm also curious whether other paradigm-shifting computing technologies could rapidly accelerate the path toward superintelligence?

Comment author: lukeprog 16 September 2014 03:48:48AM 1 point

I've seen several papers like "Quantum speedup for unsupervised learning" but I don't know enough about quantum algorithms to have an opinion on the question, really.

Comment author: lukeprog 16 September 2014 01:16:04AM * 13 points

I really liked Bostrom's unfinished fable of the sparrows. And endnote #1 from the Preface is cute.

Comment author: RomeoStevens 15 September 2014 04:02:58PM * 6 points

In this particular instance I believe "rational" is intended with the finance connotation rather than the Less Wrong connotation. This was originally published for the OP's investment firm, then crossposted here.

Comment author: lukeprog 15 September 2014 11:27:09PM 2 points

Ah, good point.
