Comment author: fubarobfusco 16 September 2014 07:20:39AM *  0 points [-]

[Please read the OP before voting. Special voting rules apply.]

Improving the typical human's emotional state — e.g. increasing compassion and reducing anxiety — is at least as significant to mitigating existential risks as improving the typical human's rationality.

The same is true for unusually intelligent and capable humans.

For that matter, unusually intelligent and capable humans who hate or fear most of humanity, or simply don't care about others, are unusually likely to break the world.

(Of course, there are cases where failures of rationality and failures of compassion coincide — the fundamental attribution error, for instance. It seems to me that attacking these problems from both System 1 and System 2 will be more effective than either approach alone.)

Comment author: Florian_Dietz 16 September 2014 07:16:38AM 0 points [-]

I took a few university courses, but ultimately I found it more efficient to just browse Wikipedia's lists of heuristics and biases. Then of course there is the book 'Thinking, Fast and Slow', which is just great.

What other sources can you recommend?

Comment author: Florian_Dietz 16 September 2014 07:14:11AM 0 points [-]

Psychology

Comment author: Florian_Dietz 16 September 2014 07:10:05AM 0 points [-]

I know, but writing is hard :-( Also, I have made it way too hard for myself. It's easy to write notes about the personality of a completely non-human character, as long as you can intellectually understand its reasoning. But once I am forced to actually write its dialog, my head just hits a brick wall. The being is very intelligent and I want this to be rationalist fiction, so I have to think for a very long time just to find out in what exact way it would phrase its requests to maximize the probability of compliance. Writing the voices of the narrators/the administrator AIs of the simulation as they are slowly going insane is not easy, either.

Maybe I'm too perfectionist here. Do you think it's better to write something trashy first and rewrite it later, or is it more efficient to do it right the first time?

Comment author: Elo 16 September 2014 06:59:49AM 0 points [-]

Been contemplating this recently. Not sure if I agree or disagree, but I'm going to come to my conclusions soon. Just need to find some time to sit down and think about it...

Comment author: fubarobfusco 16 September 2014 06:54:11AM 0 points [-]

What's your reasoning? I expect serious attempts at an answer to have to cope with questions such as —

  • How many degrees of pain might a human be capable of? Is the scale linear? logarithmic?
  • How does the 'badness' (or 'natural evil', classically) of pain vary with its intensity and its duration? (Is having a nasty headache for seven days exactly seven times worse than having that headache for one day, or is it more or less than seven times worse?)
  • How does the 'badness' of some pain happening to N people scale with N? (If 100 people stub their toes, is that 100 times worse than one person stubbing his or her toe and 99 going safely unstubbed?)

Even if questions such as these can't be given precise answers, it should be possible to give some sort of bounds for them, and it's possible that those bounds are narrow enough to make the answer obvious.

Comment author: michaelkeenan 16 September 2014 06:47:44AM *  0 points [-]

If you liked Scott Alexander's essay, Meditations on Moloch, you might like this typographic poster-meme I made. I posted it on Facebook and it was shared six times and Liked about fifty times.

(If you haven't read Scott Alexander's essay, Meditations on Moloch, then you might want to check it out. As Stuart Armstrong said, it's a beautiful, disturbing, poetical look at the future.)

Comment author: Florian_Dietz 16 September 2014 06:43:19AM 0 points [-]

There is a levelling system. Every minute of work gives one experience point, with a bonus if it was done with the pomodoro technique. The program also contains a Todo list, which I use for everything. In this list, there is a section on habits. This section is filled with repeating tasks. Each evening, I tick off all the habits I kept that day. For each habit I don't tick off, I get a small experience drain the next morning. This encourages me to keep every habit, so that I can keep the daily experience drain to a minimum. Avoiding this negative reinforcement works very well as a motivator, and seeing the number for tomorrow's experience drain go down whenever I tick off a task serves as positive reinforcement as well.
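In case the mechanics are unclear, here is a minimal sketch of the bookkeeping in Python (the specific numbers are hypothetical, not the program's actual values):

    # Sketch of the XP bookkeeping described above. POMODORO_BONUS and
    # DRAIN_PER_HABIT are hypothetical values, not the program's real ones.
    POMODORO_BONUS = 0.25   # assumed bonus multiplier for pomodoro work
    DRAIN_PER_HABIT = 10    # assumed XP lost per unticked habit

    def xp_for_work(minutes, used_pomodoro=False):
        """One experience point per minute of work, plus the assumed bonus."""
        return minutes * (1 + POMODORO_BONUS) if used_pomodoro else minutes

    def morning_drain(habits):
        """XP drained the next morning, one unit per habit left unticked."""
        return DRAIN_PER_HABIT * sum(1 for ticked in habits.values() if not ticked)

    habits = {"exercise": True, "flashcards": False, "journal": True}
    print(xp_for_work(50, used_pomodoro=True))  # 62.5
    print(morning_drain(habits))                # 10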

Comment author: kgalias 16 September 2014 06:41:18AM 0 points [-]

The terms that I singled out while reading were: Backpropagation, Bayesian network, Maximum likelihood, Reinforcement learning.

Comment author: kgalias 16 September 2014 06:33:57AM 0 points [-]

You could start at a time better suited for Europe.

Comment author: pragmatist 16 September 2014 06:27:55AM *  0 points [-]

The apparent mystery in particle-wave dualism is simply an artifact of using bad categories. It is a misleading historical accident that we hear things like "light is both a particle and a wave" in quantum physics lectures. Really what teachers should be saying is that 'particle' and 'wave' are both bad ways of conceptualizing the nature of microscopic entities. It turns out that the correct representation of these entities is neither as particles nor as waves, but as quantum states (which I think can be understood reasonably well, although there are of course huge questions regarding the probabilistic nature of observed outcomes). It turns out that in certain experiments quantum states produce outcomes similar to what we would expect from particles, and in other experiments they produce outcomes similar to what we would expect from waves, but that is surely not enough to declare that they are both particles and waves.

I do agree with you that entanglement is a bigger conceptual hurdle.

Comment author: Thomas 16 September 2014 06:26:07AM 0 points [-]

With a tiny force of one micronewton per kilogram of mass, applied over several million years.

That was the accelerating force.

The centrifugal force is much smaller.

Comment author: lukeprog 16 September 2014 06:21:29AM 0 points [-]

Off the top of my head I don't recall, but I bet Machines Who Think has detailed coverage of those early years and can probably shed some light on how much progress the Dartmouth participants expected.

Comment author: kgalias 16 September 2014 06:18:21AM 0 points [-]

I was under the impression (after reading the sections) that the argument hinges a lot less on (economic) growth than what might be gleaned from the summary here.

Comment author: Liso 16 September 2014 06:13:34AM *  0 points [-]

First of all, thanks for your work on this discussion! :)

My proposals:

  • a wiki page for collaborative work

There are some points in the book which could be analysed or described better, and some which are probably wrong. We could find them and help improve them; a wiki would make that easy.

  • a better time for Europe and the rest of the world?

But this is probably not a problem. And if it is a problem, then it is probably not solvable. We will see :)

Comment author: pragmatist 16 September 2014 06:09:50AM *  1 point [-]

That is not in the rules for the thread. Given the karma toll (and the fact that sufficiently downvoted comment threads get collapsed), it would be a bad idea to make it one of the rules. I think you should simply not vote on comments you disagree with, and I suggest reversing any downvotes you've made for this reason.

I do agree that without downvoting it is hard to differentiate between views the community agrees with and views the community has no real opinion about, but I don't think adding this information is worth the disadvantages of downvoting for agreement.

Comment author: pragmatist 16 September 2014 06:07:08AM 0 points [-]

I don't understand what you mean. Could you explain? I'm familiar with QM, so you don't need to avoid technicality in your explanation.

Comment author: mvp9 16 September 2014 05:37:40AM 0 points [-]

A different (non-technical) way to argue for their reducibility is through analysis of the role of language in human thought. The logic being that language by its very nature extends into all aspects of cognition (little human thought of interest takes place outside its reach), and so one cannot be solved without the other. I believe that's the rationale behind the Turing test.

It's interesting that you mention machine translation though. I wouldn't equate that with language understanding. Modern translation programs are getting very good, and may in time be "perfect" (indistinguishable from competent native speakers), but they do this through pattern recognition and leveraging a massive corpus of translation data - not through understanding the language.

Comment author: DanielLC 16 September 2014 05:34:37AM 0 points [-]

It still doesn't work. It could be an extension, but I was guessing it was just the browser. I'm using Chrome. javascript:alert("test") seems to work if I type it directly or use a bookmark. It doesn't work if I copy and paste.

Comment author: Aleksander 16 September 2014 05:33:09AM 0 points [-]

Freud's psychoanalysis has often been put in the same category of "Copernican" things as heliocentrism and evolution.

Comment author: DanielLC 16 September 2014 05:30:07AM 0 points [-]

The OP was claiming that special relativity was incoherent, not just that it wasn't absolutely exact.

If you want absolutely exact results, you'll need a theory of everything. There are quantum effects messing with spacetime.

Comment author: paulfchristiano 16 September 2014 05:21:37AM 0 points [-]

I am generally quite hesitant about using the differences between humans as evidence about the difficulty of AI progress (see here for some explanation).

But I think this comparison is a fair one in this case, because we are talking about what is possible rather than what will be achieved soon. The exponentially improbable tails of the human intelligence distribution are a lower bound for what is possible in the long run, even without using any more resources than humans use. I do expect the gap between the smartest machines and the smartest humans to eventually be much larger than the gap between the smartest human and the average human (on most sensible measures).

Comment author: mvp9 16 September 2014 05:19:23AM 0 points [-]

I think the best bets as of today would be truly cheap energy (whether through fusion, ubiquitous solar, etc.) and nano-fabrication. Though it may not happen, we could see these play out over a 20-30 year term.

The bumps from this, however, would be akin to the steam engine's: dwarfed by (or possibly a result of) the AI.

Comment author: paulfchristiano 16 September 2014 05:13:24AM *  1 point [-]

I grant that there is a sense in which we "understand" intuitive physics but will never understand quantum mechanics.

But in a similar sense, I would say that we don't "understand" almost any of modern mathematics or computer science (or even calculus, or how to play the game of go). We reason about them using a new edifice of intuitions that we have built up over the years to deal with the situation at hand. These intuitions bear some relationship to what has come before, but not one as overt as applying intuitions about "waves" to light.

As a computer scientist, I would be quick to characterize this as understanding! Moreover, even if a machine's understanding of quantum mechanics is closer to our idea of intuitive physics (in that they were built to reason about quantum mechanics in the same way we were built to reason about intuitive physics) I'm not sure this gives them more than a quantitative advantage in the efficiency with which they can think about the topic.

I do expect them to have such advantages, but I don't expect them to be limited to topics that are at the edge of humans' conceptual grasp!

Comment author: mvp9 16 September 2014 05:13:05AM 1 point [-]

Oh, I completely agree with the prediction of explosive growth (or at least its strong likelihood); I just think (1) or something like it is a much better argument than (2) or (3).

Comment author: Mark_Friedenbach 16 September 2014 05:11:15AM *  0 points [-]

I don't disagree. This discussion was philosophical in the pejorative sense, being about absolutely exact results, not reasonable approximations.

Comment author: John_Maxwell_IV 16 September 2014 05:06:46AM *  0 points [-]

This has the problem that beliefs with a large inferential distance won't get stated.

Is it useful to have beliefs with a large inferential distance stated without supporting evidence? Given that the inferential distance is large, I'm not going to be able to figure it out on my own, am I? At least having a sketch of an argument would be useful. The more you fill in the argument, the more minds you change and the more upvotes you get.

The rest of your points seem to boil down to the old irrationality game rule of downvote if you agree, upvote if you disagree.

"Upvote if the comment caused you to change your mind" is not the same thing as "upvote if you disagree".

Another idea, which kinda seems to be getting adopted in this thread already: have a short note at the bottom of every comment right above the vote buttons reminding people of the voting behavior for the thread, to counteract instinctive voting.

Comment author: NancyLebovitz 16 September 2014 05:06:16AM 0 points [-]

Or possibly that if the majority of people got what they want, most people at LW would be incidentally made unhappy.

Comment author: Azathoth123 16 September 2014 04:57:26AM 1 point [-]

Taboo "racist".

Comment author: DanielLC 16 September 2014 04:55:55AM 0 points [-]

That's really more a personal taste than a view. The SF Bay Area is not inherently a good or bad place to live. Since you're the only person qualified to judge whether you like living there, your opinion on the matter can hardly be considered contrarian. Not unless the majority of people on LessWrong think you're wrong about not liking living there.

Comment author: TylerJay 16 September 2014 04:54:44AM 0 points [-]

Hmm... Yeah, that's not right. Maybe there was a problem when I pasted it? Here it is again.

javascript:window.location.replace("http://justread.mpgarate.com/read?url=" + escape(document.URL))

Only other thing I can think of is you may have a browser extension interfering.

Comment author: TylerJay 16 September 2014 04:52:29AM 0 points [-]

I started at 350 and that's still what I use most of the time. For light, non-technical articles, I can do 450, but it's a bit uncomfortable to focus that hard and I do miss things occasionally. I can usually tell whether it was important or not though, so I know if I need to pause and rewind. After playing with speedreading off and on for a few years, I've come to the conclusion that it's definitely possible to read faster than I normally do with equal comprehension, but that there really is a limit and the claims you see from speedreading courses are hyperbolic. The thing I like about Squirt is that it eliminates the need to use a pacer.

Comment author: DanielLC 16 September 2014 04:48:18AM *  3 points [-]

Why do Africans deserve so much less than Americans? Why did people in the past deserve so much less than current people? Why do people with poor parents deserve less than people with rich parents?

Comment author: passive_fist 16 September 2014 04:48:03AM 0 points [-]

I've worked on the D-Wave machine (in that I've run algorithms on it - I haven't actually contributed to the design of the hardware). I have no idea whether that machine will eventually be a huge deal faster than conventional hardware; it's an open question. But if it is, that would be huge, as a lot of ML algorithms can be directly mapped to D-Wave hardware. It seems like a perfect fit for the sort of stuff machine learning researchers are doing at the moment.

About other kinds of quantum hardware, their feasibility remains to be demonstrated. I think we can say with fair certainty that there will be nothing like a 512-qubit fully-entangled quantum computer (what you'd need to, say, crack the basic RSA algorithm) within the next 20 years at least. Personally I'd put my money on >50 years in the future. The problems just seem too hard; all progress has stalled; and every time someone comes up with a way to try to solve them, it just results in a host of new problems. For instance, topological quantum computers were hot a few years ago since people thought they would be immune to some types of decoherence. As it turned out, though, they just introduce sensitivity to new types of decoherence (thermal fluctuations). When you do the math, it turns out that you haven't actually gained much by using a topological framework, and further you can simulate a topological quantum computer on a normal one, so really a TQC should be considered as just another quantum error correction algorithm, of which we already know many.

All indications seem to be that by 2064 we're likely to have a human-level AI. So I doubt that quantum computing will have any effect on AI development (or at least development of a seed AI). It could have a huge effect on the progression of AI though.

Comment author: drethelin 16 September 2014 04:47:52AM 0 points [-]

get a twitter

Comment author: DanielLC 16 September 2014 04:45:11AM 1 point [-]

For a rotating object of sufficiently small mass, the mass can be ignored, and reasonably accurate results can be found with special relativity.

Comment author: mvp9 16 September 2014 04:44:05AM 0 points [-]

I'll take a stab at it.

We are now used to saying that light is both a particle and a wave. We can use that proposition to make all sorts of useful predictions and calculations. But if you stop and really ponder that for a second, you'll see that it is so far out of the realm of human experience that one cannot "understand" that dual nature in the sense that one "understands" the motion of planets around the sun. "Understanding" in the way I mean is the basis for making accurate analogies and gaining insight. Thus I would argue Kepler was able to use light as an analogy to 'gravity' because he understood both (even though he didn't yet have the math for planetary motion).

Perhaps an even better example is the idea of quantum entanglement: theory may predict, and we may observe, quarks "communicating" at a distance faster than light, but (for now at least) I don't think we have really incorporated it into our (pre-symbolic) conception of the world.

Comment author: Azathoth123 16 September 2014 04:42:12AM 0 points [-]

I downvoted because I agree.

Comment author: DanielLC 16 September 2014 04:41:56AM *  0 points [-]
Comment author: Azathoth123 16 September 2014 04:40:54AM 0 points [-]

Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone's position flatly stated doesn't usually improve the accuracy of my beliefs.

This has the problem that beliefs with a large inferential distance won't get stated.

The rest of your points seem to boil down to the old irrationality game rule of downvote if you agree, upvote if you disagree.

Comment author: VonBrownie 16 September 2014 04:28:11AM 0 points [-]

Thanks... I will check it out further!

Comment author: paulfchristiano 16 September 2014 04:23:55AM *  1 point [-]

I think this is a very good question that should be asked more. I find it particularly important because of the example of automating research, which is probably the task I care most about.

My own best guess is that the computational work that humans are doing while they do the "thinking" tasks is probably very minimal (compared to the computation involved in perception, or to the computation currently available). However, the task of understanding which computation to do in these contexts seems quite similar to the task of understanding which computation to do in order to play a good game of chess, and automating this still seems out of reach for now. So I guess I disagree somewhat with Knuth's characterization.

I would be really curious to get the perspectives of AI researchers involved with work in the "thinking" domains.

Comment author: paulfchristiano 16 September 2014 04:19:39AM *  0 points [-]

Do you know of a partially observable game for which AI lags behind humans substantially? These examples are of particular interest to me because they would significantly revise my understanding of what problems are hard and easy.

The most prominent games of partial information that I know of are Bridge and Poker, and AIs can now win at both of these (which in fact proved to be much easier than the classic deterministic games). Backgammon is random, and also turned out to be relatively easy--in fact the randomness itself is widely considered to have made the game easy for computers! Scrabble is the other example that comes to mind, where the situation is the same.

For Civilization in particular, it seems very likely that AI would be wildly superhuman if it were subject to the same kind of attention as other games, simply because the techniques used in Go and Backgammon, together with a bunch of ad hoc logic for navigating the tech tree, should be able to get so much traction.

Comment author: MaximumLiberty 16 September 2014 04:17:27AM 0 points [-]

I like the notion of the Superintelligence reading group: http://lesswrong.com/lw/kw4/superintelligence_reading_group/. But the topic of AI doesn't really interest me much.

A reading group on some other topic, one more along CFAR's lines than MIRI's, would. For example, reading recent studies of cognitive bias would interest me. Discussion of how to combat them in practice might evolve from discussing the studies.

Max L.

Comment author: paulfchristiano 16 September 2014 04:13:25AM 1 point [-]

Based on the current understanding of quantum algorithms, I think the smart money is on a quadratic (or sub-quadratic) speedup from quantum computers on most tasks of interest for machine learning. That is, rather than taking N^2 time to solve a problem, it can be done in N time. This is true for unstructured search and now for an increasing range of problems that will quite possibly include the kind of local search that is the computational bottleneck in much modern machine learning. Much of the work of serious quantum algorithms people is spreading this quadratic speedup to more problems.

In the very long run quantum computers will also be able to go slightly further than classical computers before they run into fundamental hardware limits (this is beyond the quadratic speedup). I think they should not be considered as fundamentally different than other speculative technologies that could allow much faster computing; their main significance is increasing our confidence that the future will have much cheaper computation.

I think what you should expect to see is a long period of dominance by classical computers, followed eventually by a switching point where quantum computers pass their classical analogs. In principle you might see faster progress after this switching point (if you double the size of your quantum computer, you can do a brute force search that is 4 times as large, as opposed to twice as large with a classical computer), but more likely this would be dwarfed by other differences which can have much more than a factor of 2 effect on the rate of progress. This looks likely to happen long after growth has slowed for the current approaches to building cheaper classical computers.
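To put toy numbers on that arithmetic (this assumes the full Grover-style quadratic speedup applies, which is the optimistic case):

    # Toy comparison: Grover-style search over N items needs on the order of
    # sqrt(N) queries, so a budget of q quantum queries covers N = q**2 items,
    # and doubling the budget quadruples the searchable space.
    import math

    for n_bits in (20, 40, 60):
        N = 2 ** n_bits
        print(f"{n_bits}-bit search space: ~{N:.1e} classical queries, "
              f"~{math.isqrt(N):.1e} quantum")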

For domains that experience the full quadratic speedup, I think this would allow us to do brute force searches something like 10-20 orders of magnitude larger before hitting fundamental physical limits.

Note that D-wave and its ilk are unlikely to be relevant to this story; we are a good ways off yet. I would even go further and bet on essentially universal quantum computing before such machines become useful in AI research, though I am less confident about that one.

Comment author: KatjaGrace 16 September 2014 04:11:52AM 0 points [-]

Which arguments do you think are especially strong in this week's reading?

Comment author: KatjaGrace 16 September 2014 04:11:40AM 0 points [-]

Did you change your mind about anything as a result of this week's reading?

Comment author: KatjaGrace 16 September 2014 04:11:26AM 0 points [-]

Was there anything in particular in this week's reading that you would like to learn more about, or think more about?

Comment author: KatjaGrace 16 September 2014 04:10:06AM 0 points [-]

Are there foreseeable developments other than human-level AI which might produce much faster economic growth? (p2)

Comment author: hg00 16 September 2014 04:09:30AM 0 points [-]

Let's hear it!

Comment author: KatjaGrace 16 September 2014 04:08:58AM 0 points [-]

Without the benefit of hindsight, which technologies would you expect to make a big difference to human productivity?

Comment author: hg00 16 September 2014 04:08:05AM 0 points [-]

Thanks!

Comment author: KatjaGrace 16 September 2014 04:08:04AM 0 points [-]

Bostrom says that it is hard to imagine the world economy having a doubling time as short as weeks, without minds being created that are much faster and more efficient than those of humans (p2-3). Do you think humans could maintain control of an economy that grew so fast? How fast could it grow while humans maintained control?

Comment author: KatjaGrace 16 September 2014 04:05:13AM 0 points [-]

If you don't know what causes growth mode shifts, but there have been two or three of them and they seem kind of regular (see Hanson 2000, p14), how likely do you think another one is? (p2) How much evidence do you think history gives us about the timing and new growth rate of a new growth mode?

Comment author: MaximumLiberty 16 September 2014 04:05:05AM 1 point [-]

It is a fine question, since the word "deserve" is the link between an observation and a judgment about the person. I don't think I need an answer to it to make the observation that most people here don't hold that view. Which is a good thing, because I don't think I have a satisfactory answer beyond rough moral intuition.

Max L.

Comment author: Micaiah_Chang 16 September 2014 04:00:08AM *  0 points [-]

Along the lines of Remembering the Kanji, but significantly more entertaining, is KanjiDamage, which features more yo momma jokes than are strictly necessary for learning Japanese, and also provides example compound words and usage.

It also has a premade deck for Anki, if you wish to overcome the initial overwhelming barrier of having to make them. Inferior to making them yourself, as the cards tend to be too dense, but better than loafing around.

Incidentally, even if you do not end up using it, check out the Dupes Appendix, which disambiguates homonyms that are also synonyms.

If you plan to practice by reading web pages, I highly recommend Rikaisama for Firefox and Rikaikun for Chrome.

These extensions automatically give definitions upon mousing over Japanese text. Highly useful as a way of eliminating the trivial inconvenience of lookup. I will warn you that EDICT translations (the default back end to rikai) tend to give a very incomplete and sometimes misleading definition of a word (seldom-used meanings are presented alongside the common ones without differentiation), but it's still better than nothing. I would advise moving on to a Japanese-Japanese dictionary as soon as possible (probably a year or so down the line, depending on commitment).

Comment author: paulfchristiano 16 September 2014 03:59:07AM 1 point [-]

Here is another way of looking at things:

  1. From the inside it looks like automating the process of automation could lead to explosive growth.
  2. Many simple endogenous growth models, if taken seriously, tend to predict explosive growth at finite time. (Including the simplest ones.)
  3. A straightforward extrapolation of historical growth suggests explosive growth in the 21st century (depending on whether you read the great stagnation as a permanent change or a temporary fluctuation).

You might object to any one of those lines of arguments on their own, but taken together the story seems compelling to me (at least if one wants to argue "We should take seriously the possibility of explosive growth.")
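To make (2) concrete, here is the simplest such model, sketched (my illustration, not a model from the book). Suppose output feeds back into its own growth slightly superlinearly:

$$\dot{Y} = a\,Y^{1+\epsilon}, \qquad \epsilon > 0.$$

Separating variables and integrating from $Y(0) = Y_0$ gives

$$Y(t) = Y_0\,\left(1 - t/T\right)^{-1/\epsilon}, \qquad T = \frac{1}{a\,\epsilon\,Y_0^{\epsilon}},$$

so output diverges as $t \to T$: explosive growth at a finite time. With $\epsilon = 0$ you recover ordinary exponential growth, which never blows up; any positive feedback exponent, however small, changes the qualitative behavior.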

Comment author: MaximumLiberty 16 September 2014 03:58:28AM 0 points [-]

Now, now. The rule of the game is to upvote if you disagree and don't vote otherwise. I lived there for four years, so I think I'm qualified to have an opinion.

Max L.

Comment author: KatjaGrace 16 September 2014 03:58:19AM 0 points [-]

Whatever the nature, cause, and robustness of growth modes, the important observation seems to me to be that the past behavior of the economy suggests very much faster growth is plausible.

Comment author: MaximumLiberty 16 September 2014 03:55:17AM 0 points [-]

I am re-learning negotiating by teaching it.

Max L.

Comment author: paulfchristiano 16 September 2014 03:53:35AM *  1 point [-]

I object (mildly) to this characterization of quantum mechanics. What notion of "understand" do we mean? I can use quantum mechanics to make predictions, I can use it to design quantum mechanical machines and protocols, I can talk philosophically about what is "going on" in quantum mechanics to more or less the same extent that I can talk about what is going on in a classical theory.

I grant there are senses in which I don't understand this concept, but I think the argument would be more compelling if you could make the same point with a clearer operationalization of "understand."

Comment author: John_Maxwell_IV 16 September 2014 03:50:53AM *  0 points [-]

This seems pretty similar to the irrationality game. That's not necessarily a bad thing, but personally I would try the following formula next time (perhaps this should be a regular thread?):

  • Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone's position flatly stated doesn't usually improve the accuracy of my beliefs.

  • Ask people to avoid upvoting views they already agree with. This is to prevent the thread from becoming an echo chamber of edgy "contrarian" views that are in fact pretty widespread already.

  • Ask people to vote up only those comments that cause them to update or change their mind on some topic. Increased belief accuracy is what we want; let's reward that.

  • Ask people to downvote spam and trolling only. Through this restriction on the use of downvotes, we lessen the anticipated social punishment for sharing an unpopular view that turns out to be incorrect (which is important counterfactually).

  • Encourage people to make contrarian factual statements rather than contrarian value statements. If we believe different things about the world, we have a better chance of having a productive discussion than if we value different things in the world.

Not sure if these rules should apply to top-level comments only or to every comment in the thread. Another interesting question: should playing devil's advocate be allowed, i.e. presenting novel arguments for unpopular positions you don't actually agree with, and under what circumstances (are disclaimers required, etc.)?

You could think of my proposed rules as being about halfway between the irrationality game and a normal LW open thread. Perhaps by doing binary search, we can figure out the optimal degree of contrarianism to facilitate, and even make every Nth open thread a "contrarian open thread" that operates under those rules.

Another interesting way to do contrarian threads might be to pick particular views that seem popular on Less Wrong and try to think of the best arguments we can for why they might be incorrect. Kind of like a collective hypothetical apostasy. The advantage of this is that we generate potentially valuable contrarian positions no one is holding yet.

Comment author: paulfchristiano 16 September 2014 03:50:47AM 1 point [-]

Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and only allowed to acquire limited resources actually is not all that dangerous.

While I broadly agree with this sentiment, I would like to disagree with this point.

I would consider even the creation of a single very smart human, with all human resourcefulness but completely alien values, to be a significant net loss to the world. If they represent 0.001% of the world's aggregate productive capacity, I would expect this to make the world something like 0.001% worse (according to humane values) and 0.001% better (according to their alien values).

The situation is not quite so dire, if nothing else because of gains from trade (if our values aren't in perfect tension) and the ability of the majority to stomp out the values of a minority if it is so inclined. But it's in the right ballpark.

So while I would agree that broadly human capabilities are not a necessary condition for concern, I do consider them a sufficient condition for concern.

Comment author: lukeprog 16 September 2014 03:50:42AM 0 points [-]

Definitely! See Wikipedia and e.g. this book.

Comment author: fubarobfusco 16 September 2014 03:50:40AM -1 points [-]

My contrarian idea: Roko's basilisk is no big deal, but intolerance of making, admitting, or accepting mistakes is cultish as hell.

Comment author: lukeprog 16 September 2014 03:48:48AM 0 points [-]

I've seen several papers like "Quantum speedup for unsupervised learning" but I don't know enough about quantum algorithms to have an opinion on the question, really.

Comment author: KatjaGrace 16 September 2014 03:46:22AM 0 points [-]

A somewhat limited effort to reduce tasks to one another in this vein: http://www.academia.edu/1419272/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial

Comment author: AlexSchell 16 September 2014 03:36:15AM 0 points [-]

I just recently started to learn basic multivariable calculus (using this book + Khan Academy etc.) to make progress on my economics self-study (using this magnificent book). This turned out to involve some relearning of single-variable calculus as well, because much of what I learned in college and high school didn't quite stick. What's the book you're using? Are you aware of the best textbooks thread? Why do you want to study calculus?

Comment author: fubarobfusco 16 September 2014 03:36:00AM -1 points [-]

The Old Left of labor unionism? The New Left of student activism?

Comment author: SteveG 16 September 2014 03:35:17AM 0 points [-]

The D-Wave quantum computer solves a general class of optimization problems very quickly. It cannot speed up any arbitrary computing task, but the class of computing problems that include an optimization task it can speed up appears to be large.

Many "AI Planning" tasks will be a lot faster with quantum computers. It would be interesting to learn what the impact of quantum computing will be on other specific AI domains like NLP and object recognition.

We also have:

  • Reversible computing
  • Analog computing
  • Memristors
  • Optical computing
  • Superconductors
  • Self-assembling materials

And lithography, or printing, just keeps getting faster on smaller and smaller objects and is going from 2d to 3d.

When Bostrom starts to talk about it, I would like to hear people's opinions about untangling the importance of hardware vs. software in the future development of AI.

Comment author: KatjaGrace 16 September 2014 03:34:34AM 0 points [-]

In what sense do you think of an autonomous laborer as being under 'our control'? How would you tell if it escaped our control?

Comment author: KatjaGrace 16 September 2014 03:30:14AM 0 points [-]

Would you care to elaborate?

Comment author: KatjaGrace 16 September 2014 03:29:19AM 0 points [-]

If you knew AI to be radically more transformative than other technologies, I agree that predictions based straightforwardly on history would be inaccurate. If you are unsure how transformative AI will be, though, it seems to me to be helpful to look at how often other technologies have made a big difference, and how much of a difference they have made. I suspect many technologies would have seemed transformative ahead of time - e.g. writing - but seem to have made little difference to economic growth.

Comment author: devi 16 September 2014 03:23:38AM 1 point [-]

I think AI-completeness is a quite seductive notion. Borrowing the concept of reduction from complexity/computability theory makes it sound technical, but unlike in those fields, I haven't seen anyone actually describe, e.g., how to use an AI with perfect language understanding to produce another one that proves theorems or philosophizes.

Off the cuff, it feels like everyone here should in principle be able to sketch the outlines of such a program (at least when reducing to a base AI that has perfect language comprehension), probably by some version of trying to teach the AI as we teach a child, in natural language. I suspect that the details of some of these reductions might still be useful, especially the parts that don't quite seem to work. For while I don't think that we'll see perfect machine translation before AGI, I'm much less convinced that there is a reduction from AGI to perfect translation AI. This illustrates what I suspect might be an interesting difference between two problem classes that we might both want to call AI-complete: the problems human programmers will likely not be able to solve before we create superintelligence, and the problems whose solutions we could (somewhat) easily re-purpose to solve the general problem of human-level AI. These classes look the same in that we shouldn't expect to see problems from either of them solved without an imminent singularity, but they differ in that problems in the latter class could prove to be motivating examples and test cases for AI work aimed at producing superintelligence.
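To be clear about what I mean by "sketch the outlines": something as thin as the following is roughly the level of detail I have in mind (a schematic stub; every name in it is made up, and the hard part, arguing that it would actually work, is exactly what's missing):

    # Schematic stub of a "reduction": given a hypothetical oracle with
    # perfect language understanding, wrap it to get a theorem prover.
    # Nothing here is a real design; the point is only the shape.
    from typing import Callable

    LanguageOracle = Callable[[str], str]  # natural language in, answer out

    def theorem_prover_from(oracle: LanguageOracle) -> Callable[[str], str]:
        def prove(statement: str) -> str:
            task = ("Here is a mathematical statement: " + statement +
                    " Please produce a step-by-step proof or a counterexample.")
            return oracle(task)
        return prove

    # A stub oracle so the sketch runs; a real reduction would have to argue
    # that any oracle meeting the spec makes prove() actually work.
    stub = lambda task: "(whatever a perfect comprehender would answer)"
    print(theorem_prover_from(stub)("There are infinitely many primes."))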

I guess the core of what I'm trying to say is that arguments about AI-completeness have so far sounded like: "This problem is very very hard; we don't really know how to solve it. AI in general is also very very hard, and we don't know how to solve it. So they should be the same." Heuristically there's nothing wrong with this, except we should keep in mind that we could be very mistaken about what is actually hard. I'm just missing the part that goes: "This is very very hard. But if we knew it, this other thing would be really easy."

Comment author: gallabytes 16 September 2014 03:22:48AM 1 point [-]

I would say one of the key strong points about the fable of the sparrows is that it provides a very clean intro to the idea of AI risk. Even someone who's never read a word on the subject, when given the title of the book and the story, gets a good idea of where the book is going to go. It doesn't communicate all the important insights, but it points in the right direction.

Comment author: NancyLebovitz 16 September 2014 03:22:25AM 0 points [-]

I'm not sure whether I agree or disagree. What's your line of thought?

Comment author: NancyLebovitz 16 September 2014 03:22:03AM 0 points [-]

I agree with this, so I'm telling you instead of upvoting.

Comment author: Mark_Friedenbach 16 September 2014 03:19:15AM 0 points [-]

Only if the rotating object has any mass at all.

Comment author: gallabytes 16 September 2014 03:16:46AM 0 points [-]

Do you have any examples of approaches that are indefinitely extendable?

Comment author: polymathwannabe 16 September 2014 03:14:15AM 0 points [-]

What ethical theory are you using for your definition of "deserve"?
