Comment author: Arenamontanus 16 September 2015 03:10:31PM 2 points [-]

It would be neat to actually make an implementation of this to show sceptics. It seems to be within the reach of an MSc project or so. The hard part is representing 2-5.

Comment author: [deleted] 30 June 2015 08:32:03AM *  0 points [-]

Argument for a powerful AI being unlikely - was this considered before?

One problem I see here is the implicit "lone hero inventor" assumption, namely that there are people optimizing things for their goals on their own, and that an AI could be extremely powerful at this. I would like to propose a different model.

This model would be that intelligence is primarily a social, communicative skill: the skill to disassemble (understand, lat. intelligo), play with, and reassemble ideas acquired from other people. Literally what we are doing on this forum. It is conversational. The whole standing-on-the-shoulders-of-giants thing, not the lone-hero thing.

In this model, inventions are made by the whole of humankind, a network, where each brain is a node communicating slightly modified ideas to each other.

In such a network, one 10,000-IQ node does not become very powerful, and it doesn't even make the network very powerful; i.e. a friendly AI does not quickly solve mortality even with human help.

The primary reason I think such a model is correct is that intelligence means thinking; we think in concepts, and concepts are not really nailed down but are constantly modified through a social communication process. Atoms used to mean indivisible units; then they became divisible into little ping-pong balls; and then the model was updated into something entirely different by quantum physics. But is the quantum-physics-based atomic theory about the same atoms that were once thought indivisible, or is it a different thing now? Is modern atomic theory still about atoms? What are we even mapping here, and where does the map end and the territory begin?

So the point is that human knowledge is increased by a social communication process in which we keep throwing bleggs at each other, keep redefining what bleggs and rubes mean, keep juggling these concepts, keep asking what you really mean by bleggs, and so on. Intelligence is this communicative ability: to disassemble Joe's concept of bleggs, understand how it differs from Jane's concept of bleggs, and maybe assemble a new concept that describes both.

Without this communication, what would intelligence even be? What would lone intelligence be? The term is almost self-contradictory. What would a brain alone in a universe intelligere, i.e. understand, if nothing talked to it? Just tinker with matter somehow, without any communication whatsoever? But even if we imagine such an "idiot inventor genius" - a kind of mega-plumber on steroids rather than an intellectual or academic - it needs goals for that kind of tinkering with material stuff, and for goals it needs concepts, and concepts come from and evolve through a constant social ping-pong.

An AI would be yet another node in our network, and participate in this process of throwing blegg-concepts at each other probably far better than any human can, but still just a node.

In response to comment by [deleted] on Top 9+2 myths about AI risk
Comment author: Arenamontanus 01 July 2015 09:17:49AM 2 points [-]

I think you will find this discussed in the Hanson-Yudkowsky foom debate. Robin thinks that distributed networks of intelligence (also known as economies) are indeed a more likely outcome than a single node bootstrapping itself to extreme intelligence. He has some evidence from the study of firms, which are a real-world example of how economies of scale can produce chunky but networked smart entities. As a bonus, they tend to benefit from playing somewhat nicely with the other entities.

The problem is that while this is a nice argument, would we want to bet the house on it? A lot of safety engineering is not about preventing the most likely malfunctions, but the worst malfunctions. Occasional paper jams in printers are acceptable, fires are not. So even if we thought this kind of softer distributed intelligence explosion was likely (I do) we could be wrong about the possibility of sharp intelligence explosions, and hence it is rational to investigate them and build safeguards.

Comment author: Andy_McKenzie 30 June 2015 01:28:04PM 4 points [-]

That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and abilities. And it’s risk aware AI researchers that are most likely to figure out how to make safe AI.

Is it really the case that nobody interested in AI risk/safety wants to stop or slow down progress in AI research? It seemed to me there was perhaps at least a substantial minority that wanted to do this, to buy time.

Comment author: Arenamontanus 01 July 2015 09:10:19AM 3 points [-]

I remember that we were joking at the NYC Singularity Summit workshop a few years back that maybe we should provide AI researchers with heroin and philosophers to slow them down.

As far as I have noticed, there are few if any voices in the academic/nearby AI safety community that promote slowing AI research as the best (or even a good) option. People talking about relinquishment or slowing seem to be far outside the main discourse, typically people who have only a passing acquaintance with the topic or a broader technology scepticism.

The best antidote is to start thinking about the details of how one would actually go about it: that generally shows why differential development is sensible.

Comment author: Arenamontanus 30 June 2015 08:14:43AM 1 point [-]

I recently gave a talk at an academic science fiction conference about whether sf is useful for thinking about the ethics of cognitive enhancement. I think some of the conclusions are applicable to point 9 too:

(1) Bioethics can work in a "prophetic" and a "regulatory" mode. The first is big-picture, proactive, and open-ended, dealing with the overall aims we ought to have, possibilities, and values. It is open to speculation. The regulatory mode is about the ethical governance of current or near-term practices. Ethicists formulate guidelines, point out problems, and suggest reforms, but their purpose is generally not to rethink these practices from the ground up or to question the wisdom of the whole enterprise. As the debate about the role of speculative bioethics has shown, mixing the modes can be problematic. Guyer and Moreno (2004) really take bioethics to task for using science fiction instead of science to motivate arguments: they point out that this can actually be good if one does it inside the prophetic mode, but a lot of bioethics (like the President's Council on Bioethics at the time) cannot decide which kind of consideration it is.

(2) Is it possible to find out things about the world by making stuff up? Elgin (2014) argues that fictions and thought experiments do exemplify patterns or properties that they share with phenomena in the real world, and hence that we can learn something about the actual world from considering fictional worlds (i.e. there is a homeomorphism between them in some domain). It does require the fiction to be imaginative but not lawless: not every fiction or thought experiment has value in telling us something about the real or moral world. This is of course why just picking a good or famous piece of fiction as a source of ideas is risky: it was selected not for how well it reflects patterns in the real world, but for other reasons.

Considering Eliezer's levels of intelligence in fictional characters is a nice illustration of this: level 1 characters show some patterns that matter (being goal-directed agents), and level 3 characters actually give examples of rational, skilled cognition.

(3) Putting this together: if you want to use fiction in your argument, the argument had better be in the more prophetic, open-ended mode (e.g. arguing that there are AI risks of various kinds, what values are at stake, etc.), and the fiction needs to meet pretty high standards not just of internal consistency but of actual mappability to the real world. If the discussion is on the more regulatory side (e.g. thinking of actual safeguards, risk assessment, institutional strategies), then fiction is unlikely to be helpful, and very likely to introduce bias or noise (due to good-story bias, easily inserted political agendas, or different interpretations of worldview).

There are of course some exceptions. Hannu Rajaniemi provides a neat technical trick to the AI boxing problem in the second novel of his Quantum Thief trilogy (turn a classical computation into a quantum one that will decohere if it interacts with the outside world). But the fictions most people mention in AI safety discussions are unlikely to be helpful - mostly because very few stories succeed with point (2) (and if they are well written, they hide this fact convincingly!)

Comment author: Arenamontanus 26 May 2015 03:23:03PM 2 points [-]

Well, 70 years of 1/37 annual risk still has about a 15% chance of showing zero wars. Could happen. (Since we are talking about smaller wars rather than WWIII, anthropics doesn't distort the probabilities measurably.)
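A quick back-of-the-envelope check in Python (whether the 1/37 is best read as a per-year Bernoulli probability or as a Poisson rate is my assumption; both give roughly the same answer):

```python
import math

# Chance of seeing zero wars in 70 years at a 1/37 annual risk.
p = 1 / 37

# Treating each year as an independent Bernoulli trial:
p_zero_bernoulli = (1 - p) ** 70

# Treating wars as a Poisson process with rate 1/37 per year:
p_zero_poisson = math.exp(-70 * p)

print(round(p_zero_bernoulli, 3))  # 0.147
print(round(p_zero_poisson, 3))    # 0.151
```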

One could buy a Pinker improvement scenario and yet be concerned about a heavy tail due to nuclear or biological warfare of existential importance. The median case might decline and the rate of events go down, yet the tail could get nastier.

Comment author: Stuart_Armstrong 21 October 2014 07:34:25PM 0 points [-]

Similar in that one quadrant is empty, otherwise a distinct effect.

Comment author: Arenamontanus 23 October 2014 12:50:01AM 2 points [-]

This is incidentally another way of explaining the effect. Consider the standard diagram of the joint probability density and how it relates to correlation. Now take a bite out of the upper right corner of big X and big Y events: unless the joint density started in a really strange shape this will tend to make the correlation negative.
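A quick way to see this numerically (my own sketch, not the Matlab runs mentioned elsewhere in the thread): sample two independent hazard severities, delete the corner where both are large, and check the correlation among the survivors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independent hazard severities: correlation starts near zero.
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# The anthropic "bite": worlds where both hazards were big are never observed.
survived = ~((x > 1.0) & (y > 1.0))

corr_before = np.corrcoef(x, y)[0, 1]
corr_after = np.corrcoef(x[survived], y[survived])[0, 1]

print(f"before the bite: {corr_before:+.3f}")  # close to zero
print(f"after the bite:  {corr_after:+.3f}")   # clearly negative (around -0.06)
```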

Comment author: Stuart_Armstrong 21 October 2014 07:49:11PM 3 points [-]

When I read this, my first reaction was "I have to show this comment to Anders" ^_^

Comment author: Arenamontanus 21 October 2014 11:53:43PM 7 points [-]

It is pretty cute. I did a few Matlab runs with power-law distributed hazards, and the effect holds up well: http://aleph.se/andart2/uncategorized/anthropic-negatives/

Comment author: Arenamontanus 21 October 2014 07:44:00PM 11 points [-]

Neat. The minimal example would be if each risk had 50% chance of happening: then the observable correlation coefficient would be -0.5 (not -1, since there is 1/3 chance to get neither risk). If the chance of no disaster happening is N/(N+2), then the correlation will be -1/(N+1).
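That arithmetic can be checked by direct enumeration. A sketch (my parametrization: each risk is an independent Bernoulli(p), and we condition on the surviving observer's view, i.e. on the two risks not both occurring):

```python
from fractions import Fraction

def survivor_correlation(p):
    """Correlation of two independent Bernoulli(p) risks, conditioned
    on survival (i.e. on the two risks not both occurring)."""
    z = 1 - p * p                  # P(not both)
    ex = p * (1 - p) / z           # E[X] = P(X=1, Y=0 | survived); same for Y
    cov = 0 - ex * ex              # E[XY] = 0: the (1,1) corner was removed
    var = ex - ex * ex             # Bernoulli variance given survival
    return cov / var

# The minimal 50/50 example: exactly -1/2, with P(neither) = 1/3.
print(survivor_correlation(Fraction(1, 2)))  # -1/2

# If P(no disaster) = N/(N+2), then p = 1/(N+1) and the correlation is -1/(N+1).
N = 4
p = Fraction(1, N + 1)
assert (1 - p) ** 2 / (1 - p * p) == Fraction(N, N + 2)  # P(neither | survived)
print(survivor_correlation(p))               # -1/5
```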

It is interesting to note that copula methods in insurance are often used to model size-dependent correlations, but these nearly always involve stronger positive correlations in the tail. This suggests - unsurprisingly - that insurance does not encounter much anthropic risk.

Comment author: Stefan_Schubert 15 October 2014 01:27:08PM 2 points [-]

This is excellent. I've had some vague ideas along these lines, but nothing this comprehensive and precise. Very helpful.

In a sense, the paper consists of three parts - title, abstract, and text - whereas there are five types of readers, according to your classificatory schema (though how to delineate these types is of course a bit arbitrary). One question is whether one should have even more layers, to clarify exactly what a skimmer and a full reader should read. (This does exist to some extent - e.g. footnotes and appendices presumably are not for skimmers - but one could develop this further.) For instance, each section of the text could start off with a "mini-abstract" which skimmers could focus on.

I get the sense that today's article formats are intended to satisfy deep readers (aside from the title and abstract readers) and that more could be done to help, e.g., skimmers. This is just a hunch, though, and I'd be interested in hearing whether people agree with this.

Comment author: Arenamontanus 15 October 2014 02:57:48PM 2 points [-]

In some journals there is a text box with up to four take-home-message sentences summarizing what the paper gives us. It is even easier to skim than the abstract, and is typically stated in easy (for the discipline) language. I quite like it, although one should recognize that many papers have official conclusions that are a bit at variance with the actual content (or just a biased glass-half-full/half-empty interpretation).

Comment author: Arenamontanus 15 October 2014 01:02:21PM 11 points [-]

The standard formula you are typically taught in science is IMRaD: Introduction, Methods, Results, and Discussion. This of course mainly works for papers that are experimental, but I have always found it a useful zeroth iteration for structure when writing reviews and more philosophical papers: (1) explain what it is about, why it is important, and what others have done. (2) explain how the problem is or can be studied/solved. (3) explain what this tells us. (4) explain what this new knowledge means in the large, the limitations of what we have done and learned, as well as where we ought to go next.

Experienced academics also scan the reference section to see who is cited. This is a surface level analysis of whether the author has done their homework, and where in the literature the paper is situated. It is a crude trick, but fairly effective in saving time. It also leads to a whole host of biases, of course.

Different disciplines work in different ways. In medicine everybody loves to overcite ("The brain [1] is an organ commonly found in the head [2,3], believed to be important for cognition [4-18,23].") Computer science is lighter on citations and more forgiving of self-cites (the typical paper cites Babbage/Turing, a competing algorithm, and two tech reports and a conference poster by the author about the earlier version of the algorithm). Philosophy tends to either be very low on citations (when dealing with ideas), or have nitpicky page and paragraph citations (when dealing with what someone really argued).
