With what software was this done?
I used ClickCharts to make the diagrams.
My guess is that the over 200 points you have received recently for writing the article have not yet been added to the "total karma" and an older cached value is displayed instead. If I am correct, then this problem should fix itself in a few days.
(My first suspicion was that someone was mass-downvoting your old comments, but the numbers don't support that interpretation. At this moment your total karma is 101 at 92% positive, which means you lost at most a dozen points to downvotes; that couldn't explain a drop from 245 to 101.)
I figured out what was going on after posting my first comment, and waited a bit to check whether I was right.
I originally posted my article in Discussion. Before getting moved, it acquired 16 points. Now (at the time of this comment) there are 28 points. So it got 12 points after being moved to Main.
I have two comments, totaling 19 points. The "regular" points I got are [19 from comments] + [16 from Discussion] = 35. The points from Main are 12 * 10 = 120. Adding these up, the total is [35 regular points] + [120 points from Main] = 155.
The Total Karma is indeed 155, whereas the Karma for the last 30 days is 299. This comes from a simple [19 from comments] + [28 * 10 from the post] = 299.
So it seems that the Total Karma and the Karma for the last 30 days are handling the case of a moved post differently from one another. Is there a moderator or someone with experience who can confirm or disconfirm?
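The arithmetic above can be reproduced in a short sketch. The 10x multiplier for Main posts is my assumption from the numbers, not something the site documents:

```python
# Karma arithmetic sketch. Assumption (inferred, not confirmed by the site):
# posts in Main earn 10 karma per vote; Discussion posts and comments earn 1.
MAIN_MULTIPLIER = 10

comment_points = 19        # points on my two comments
discussion_points = 16     # points the post earned while in Discussion
post_points_total = 28     # points on the post now, after the move to Main
main_points = post_points_total - discussion_points  # 12 points earned in Main

# "Total Karma" appears to multiply only the points earned after the move:
total_karma = comment_points + discussion_points + main_points * MAIN_MULTIPLIER
print(total_karma)   # 155

# "Karma for the last 30 days" appears to multiply all 28 post points:
karma_30_days = comment_points + post_points_total * MAIN_MULTIPLIER
print(karma_30_days)  # 299
```

If the multiplier assumption is right, the discrepancy is exactly the two scoring rules disagreeing about the 16 Discussion-era points.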
Quick question on karma: why is my score for the past 30 days greater than my total score?
According to Bostrom, brain-computer interfaces are unlikely to yield superintelligence.
This strikes me as slightly surprising. From a technological standpoint, I would have thought that a hybrid of biological and machine intelligence would be likely to have the best aspects of each, by which I mean the best aspects at the time at which the BCI is created, rather than trying to posit that biological intelligence has any fundamental advantage that sufficiently advanced computers can never overcome. A fairly close analogy is how a team of a competent chess player and a laptop chess program can beat both the best unaided humans and chess computers with far more processing power.
Admittedly I don't know much about BCI technology, but I have heard promising things about optogenetics. Having to undergo brain surgery is a problem, but the severity of that problem seems to depend on how deeply the interface needs to penetrate the brain, rather than just overlying its surface. If bootstrapping to greater levels of intelligence required repeated surgery to install better BCIs then this might be problematic, but intelligence gains could also be realised by working on the software, or adding more hardware, or adding more people to a swarm intelligence.
Of course, in the end it would transition to a fully or mostly machine intelligence, either through offloading ever more cognition to the machine components until the organic brain was only a tiny fraction of the mind, or through using the increased intelligence to develop FAI/WBE. But that doesn't make BCI a dead end so much as a transitional stage.
Finally, in the last few years, Moore's law has started to show signs of slowing, and this should cause one to update in favour of BCI coming first, as it is probably the path least dependent upon raw computing power (unless de novo AI turns out to be far more computationally efficient than the brain).
As far as social constraints go, I don't think it would be all that hard to find volunteers, and in fact there is a natural progression from treatment of blindness, mental illnesses and so forth through to transhumanism. Legal challenges are perhaps a more likely problem, but as previously mentioned, medical use will likely provide the precedent to grandfather it in.
Note I'm not saying that this is necessarily a desirable path (FAI is preferable); I'm arguing that it seems at least somewhat plausible for it to come first. Having said that, in the event that progress on FAI is slower and other existential threats loom, then BCI could perhaps be a sensible backup plan.
Here are some relevant blockquotes of Bostrom's reasoning on brain-computer interfaces, from Superintelligence chapter 2:
It is sometimes proposed that direct brain–computer interfaces, particularly implants, could enable humans to exploit the fortes of digital computing—perfect recall, speedy and accurate arithmetic calculation, and high-bandwidth data transmission—enabling the resulting hybrid system to radically outperform the unaugmented brain.64 But although the possibility of direct connections between human brains and computers has been demonstrated, it seems unlikely that such interfaces will be widely used as enhancements any time soon.65
To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain. ... One study of Parkinson patients who had received deep brain implants showed reductions in verbal fluency, selective attention, color naming, and verbal memory compared with controls. Treated subjects also reported more cognitive complaints.66 Such risks and side effects might be tolerable if the procedure is used to alleviate severe disability. But in order for healthy subjects to volunteer themselves for neurosurgery, there would have to be some very substantial enhancement of normal functionality to be gained.
Furthermore:
enhancement is likely to be far more difficult than therapy. Patients who suffer from paralysis might benefit from an implant that replaces their severed nerves or activates spinal motion pattern generators.67 Patients who are deaf or blind might benefit from artificial cochleae and retinas.68 Patients with Parkinson’s disease or chronic pain might benefit from deep brain stimulation that excites or inhibits activity in a particular area of the brain.69 What seems far more difficult to achieve is a high-bandwidth direct interaction between brain and computer to provide substantial increases in intelligence of a form that could not be more readily attained by other means. Most of the potential benefits that brain implants could provide in healthy subjects could be obtained at far less risk, expense, and inconvenience by using our regular motor and sensory organs to interact with computers located outside of our bodies. We do not need to plug a fiber optic cable into our brains in order to access the Internet.
Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.70 Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis”—which is just another way of saying artificial general intelligence. Yet if one had a human-level AI, one could dispense with neurosurgery: a computer might as well have a metal casing as one of bone. So this limiting case just takes us back to the AI path, which we have already examined.
A Visualization of Nick Bostrom’s Superintelligence
Through a series of diagrams, this article will walk through key concepts in Nick Bostrom’s Superintelligence. The book is full of heavy content, and though well written, its scope and depth can make it difficult to grasp the concepts and mentally hold them together. The motivation behind making these diagrams is not to repeat an explanation of the content, but rather to present the content in such a way that the connections become clear. Thus, this article is best read and used as a supplement to Superintelligence.
Note: Superintelligence is now available in the UK. The hardcover is coming out in the US on September 3. The Kindle version is already available in the US as well as the UK.
Roadmap: there are two diagrams, both presented with an accompanying description. The two diagrams are combined into one mega-diagram at the end.

Figure 1: Pathways to Superintelligence
Figure 1 displays the five pathways toward superintelligence that Bostrom describes in chapter 2 and returns to in chapter 14 of the text. According to Bostrom, brain-computer interfaces are unlikely to yield superintelligence. Biological cognition, i.e., the enhancement of human intelligence, may yield a weak form of superintelligence on its own. Additionally, improvements to biological cognition could feed back into driving the progress of artificial intelligence or whole brain emulation. The arrows from networks and organizations likewise indicate technologies feeding back into AI and whole brain emulation development.
Artificial intelligence and whole brain emulation are two pathways that can lead to fully realized superintelligence. Note that neuromorphic is listed under artificial intelligence, but an arrow connects from whole brain emulation to neuromorphic. In chapter 14, Bostrom suggests that neuromorphic is a potential outcome of incomplete or improper whole brain emulation. Synthetic AI includes all the approaches to AI that are not neuromorphic; other terms that have been used are algorithmic or de novo AI.
Hi, I'm Amanda. I'm interning at MIRI right now. I found HP:MoR 3 years ago, and started reading the Sequences shortly after. After 2 years of high school, I dropped out, and started at the University of Kansas. Reading the Sequences probably contributed a lot to this; I was tired of feeling like I wasn't doing anything important. Likewise, after a year at a state school, and now experiencing 5 weeks in the Bay Area, I'm motivated to get out of Kansas and back here.
I'm studying computer science, and I just finished my freshman year. I also do computer science research during the year. My advisor had me work with genetic algorithms, which, looking back now, was mainly to get me programming. My only experience was one high school class, which was predictably bad.
Anyway, I programmed a web project, and realized that I actually enjoy programming! My parents are both software engineers, so I had initially seen it as a boring 9-5 cubicle job. Later, I viewed it as a tool, useful enough to devote my studies to, but not particularly enjoyable. After working on the web app, I remember thinking, "Why didn't anyone tell me how cool coding could be?"
I decided to intern at MIRI to help narrow down what I want to do; either working directly on FAI research, or going into startups, in order to tackle another problem, while earning to give. (I'm leaning toward the startup route now.) I've had a great time so far. I have a few days left at MIRI, then I'll go to the other end of the office to volunteer with CFAR for a week, and finally I'll end my stay in Berkeley by attending a CFAR workshop.
I decided to end my lurking in order to post some of the things I've been working on for MIRI. More on that to come.