Comment author: AmandaEHouse 23 July 2014 05:18:37PM 10 points [-]

Here are some relevant blockquotes of Bostrom's reasoning on brain-computer interfaces, from Superintelligence chapter 2:

It is sometimes proposed that direct brain–computer interfaces, particularly implants, could enable humans to exploit the fortes of digital computing—perfect recall, speedy and accurate arithmetic calculation, and high-bandwidth data transmission—enabling the resulting hybrid system to radically outperform the unaugmented brain.64 But although the possibility of direct connections between human brains and computers has been demonstrated, it seems unlikely that such interfaces will be widely used as enhancements any time soon.65

To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain. ... One study of Parkinson patients who had received deep brain implants showed reductions in verbal fluency, selective attention, color naming, and verbal memory compared with controls. Treated subjects also reported more cognitive complaints.66 Such risks and side effects might be tolerable if the procedure is used to alleviate severe disability. But in order for healthy subjects to volunteer themselves for neurosurgery, there would have to be some very substantial enhancement of normal functionality to be gained.

Furthermore:

enhancement is likely to be far more difficult than therapy. Patients who suffer from paralysis might benefit from an implant that replaces their severed nerves or activates spinal motion pattern generators.67 Patients who are deaf or blind might benefit from artificial cochleae and retinas.68 Patients with Parkinson’s disease or chronic pain might benefit from deep brain stimulation that excites or inhibits activity in a particular area of the brain.69 What seems far more difficult to achieve is a high-bandwidth direct interaction between brain and computer to provide substantial increases in intelligence of a form that could not be more readily attained by other means. Most of the potential benefits that brain implants could provide in healthy subjects could be obtained at far less risk, expense, and inconvenience by using our regular motor and sensory organs to interact with computers located outside of our bodies. We do not need to plug a fiber optic cable into our brains in order to access the Internet.

Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.70 Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis”—which is just another way of saying artificial general intelligence. Yet if one had a human-level AI, one could dispense with neurosurgery: a computer might as well have a metal casing as one of bone. So this limiting case just takes us back to the AI path, which we have already examined.

Comment author: ChrisHibbert 26 July 2014 04:35:42PM 2 points [-]

To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain.

This is all going to change over time. (I don't know how quickly, but work on trans-cranial methods is already showing promise.) Even if we can't get the bandwidth quickly enough, we can control infections, and electrodes will get smaller and more adaptive.

enhancement is likely to be far more difficult than therapy.

Admittedly, therapy will come first. That also means that therapy will drive development of techniques that will also be helpful for enhancement. The boundary between the two is blurry, and therapies that shade into enhancement will definitely be developed before pure enhancement, and will be easier to sell to end users. For example, for some people, treatment of ADHD-spectrum disorders will be genuinely therapeutic, while for others it will be seen as an attractive enhancement.

Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.70 Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded.

The visual pathway is impressive, but it's very limited in the kinds of information it transmits. It's a poor way of encoding bulk text, for instance. Even questions and answers can be sent far more densely over a much narrower channel. A tool like Google Now that tries to anticipate areas of interest and pre-fetch data before questions arise to consciousness could provide a valuable backchannel, and it wouldn't need nearly the bandwidth, so it ought to be doable with non-invasive trans-cranial techniques.
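The gap here is easy to put numbers on. A back-of-envelope sketch (my own rough figures for reading speed and character encoding, not values from the thread) comparing the retina's cited raw data rate against the effective bandwidth needed to deliver text to a reader:

```python
# Compare the retina's raw data rate (the ~10 Mbit/s figure Bostrom
# cites) with a rough estimate of the bandwidth text reading uses.

RETINA_BITS_PER_SEC = 10_000_000   # ~10 Mbit/s

# Assumed reading rate: ~250 words/min, ~5 characters/word, 8 bits/char.
words_per_min = 250
chars_per_word = 5
bits_per_char = 8

text_bits_per_sec = words_per_min / 60 * chars_per_word * bits_per_char

ratio = RETINA_BITS_PER_SEC / text_bits_per_sec
print(f"Text needs ~{text_bits_per_sec:.0f} bit/s; "
      f"the retina carries ~{ratio:,.0f}x that.")
```

Under these assumptions, reading consumes on the order of 170 bit/s, tens of thousands of times less than the retina's raw rate; a backchannel for questions and answers plausibly sits in the same low range.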

Comment author: ChrisHibbert 08 March 2014 06:26:32PM 0 points [-]

I'm confused by the framing of the Anvil problem. For humans, a lot of learning is learning from observing others, seeing their mistakes and their consequences. We can predict various events that will result in others' deaths based on previous observation of what happened to yet other people. If we're above a certain level of solipsism, we can extrapolate to ourselves.

Does the AIXI not have the ability to observe other agents? Is it correct to be a solipsist? Seems like a tough learning environment if you have to discover all consequences yourself.

It's still possible to extrapolate from stubbing your toe, burning your fingers on the stove, and mashing your thumb with a hammer. Is there some reason to expect that AIXI will start out its interactions with the world by picking up an anvil rather than playing with rocks and eggs?

Comment author: ChrisHibbert 30 November 2013 07:56:01PM 11 points [-]

I don't answer survey questions that ask about race, but if you met me you'd think of me as white male.

I'm more strongly libertarian (but less party affiliated) than the survey allowed me to express.

I have reasonably strong views about morality, but had to look up the terms "Deontology", "Consequentialism", and "Virtue Ethics" in order to decide that of these "consequentialism" probably matches my views better than the others.

Probabilities: 50,30,20,5,0,0,0,10,2,1,20,95.

On "What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions?", I had to parse several words very carefully, and ended up deciding to read "significant" as "measurable" rather than "consequential". For consequential, I would have given a smaller value.

I answered all the way to the end of the super bonus questions, and cooperated on the prize question.

Comment author: Error 25 November 2013 04:19:14PM 16 points [-]

I've always found interrupt mode incredibly frustrating to deal with, because it takes me longer to come up with appropriate responses than most people's Maximum Wait Time, and by the time a monologue is finished, whatever I thought of is no longer on-topic. I do find it interesting that you associate interruption with nerddom, though. As a tech geek, interruption is a Very Bad Thing because it disrupts mental state. If I want someone's attention, I stand near them, look attentive, and wait for them to swap out. I expect others to do the same. Sometimes I have to ask. But I don't think I've had anyone competent refuse to wait or express confusion about why.

It's possible nerd communities make a distinction between work-interruption and conversation-interruption. Most of my communication is textual so I may not have noticed. One of the things I've always loved about communication on the Internet is that interruption is effectively impossible; both parties can get their say without being dependent on the other party to shut up for a moment.

Comment author: ChrisHibbert 30 November 2013 06:46:30PM 1 point [-]

In my group at work, it's relatively common to chat "interruptible?" to someone who's sitting right next to you. You can keep working until they're free to take the interrupt, and they don't need to take the interrupt until they're ready.

In f2f conversations, it's mostly an interrupt culture, but with some conventions about not breaking in when groups are larger than 4 or so.

Comment author: ChrisHibbert 14 September 2013 06:39:30PM 7 points [-]

I believe that emotions play a big part in thinking clearly, and understanding our emotions would be a helpful step. Would you mind saying more about the time you spend focused on emotions? Are you paying attention to your concrete current or past emotions (i.e. "this is how I'm feeling now", or "this is how I felt when he said X"), or more theoretical discussions "when someone is in fight-or-flight mode, they're more likely to Y than when they're feeling curiosity"?

You also mentioned exercises about exploiting emotional states; would you say more about what CFAR has learned about mindfully getting oneself in particular emotional states?

Comment author: wedrifid 07 August 2013 07:46:10AM 5 points [-]

Progress is reduction of expected work remaining.

No it isn't. Those things are often correlated but not equivalent. New information can be gained that increases the expected work remaining despite additional valuable work having been done.

Comment author: ChrisHibbert 10 August 2013 05:15:39PM 1 point [-]

New information can be gained that increases the expected work remaining despite additional valuable work having been done.

That's progress.

Comment author: Desrtopa 28 February 2013 02:08:59AM 1 point [-]

I like the phrase "precedent utilitarianism". It sounds to utilitarians like you're joining their camp, while actually pointing out that you're taking a long-term view of utility, which they usually refuse to do.

On what basis would you say it's the case that utilitarians usually refuse to take a long-term view of utility?

Comment author: ChrisHibbert 03 March 2013 06:32:36AM 0 points [-]

When I've argued with people who called themselves utilitarian, they seemed to want to make trade-offs among immediately visible options. I'm not going to try to argue that I have population statistics, or know what the "proper" definition of a utilitarian is. Do you believe that some other terminology or behavior better characterizes those called "utilitarians"?

Comment author: [deleted] 01 October 2012 08:15:19PM 23 points [-]

… if anyone asks, I did not tell you it was ok to do math like this.

In response to comment by [deleted] on Rationality Quotes October 2012
Comment author: ChrisHibbert 06 October 2012 06:06:16PM 1 point [-]

Did Munroe add that? It's incorrect. There are lots of situations in which it's reasonable to calculate while throwing away an occasional factor of 2.2.
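A toy illustration (my own example, not from the thread) of why a stray factor of 2.2 often doesn't matter: estimate the mass of water in a large swimming pool, once carefully and once after "losing" the kilograms-to-pounds factor somewhere, and check that both land in the same order of magnitude.

```python
# Fermi estimate: mass of water in a 50 m x 25 m x 2 m pool,
# computed carefully and with a sloppy factor-of-2.2 error.
import math

volume_m3 = 50 * 25 * 2            # 2,500 cubic metres
careful_kg = volume_m3 * 1000      # 1000 kg of water per cubic metre
sloppy = careful_kg / 2.2          # conflated kg and lbs somewhere

# Both estimates fall in the same order of magnitude (10^6),
# which is all a Fermi estimate promises.
print(math.floor(math.log10(careful_kg)),
      math.floor(math.log10(sloppy)))
```

When the question is "millions or billions?", a factor of 2.2 is noise; it only bites when you need the answer to better than a factor of a few.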

Comment author: wedrifid 03 October 2012 12:28:16PM 1 point [-]

Agreed. Though of course, I don't really see Faramir as disagreeing -- it was, after all, the Rangers of Ithilien who ambushed the Haradrim and killed the soldier they're talking about.

I'm a little bit proud that I don't know who all these people are.

Comment author: ChrisHibbert 06 October 2012 06:04:06PM 3 points [-]

Downvoted. You're saying you don't know anything about the context provided by a story that is apparently of interest to (at least) several readers here, and you're proud of not sharing the context. That doesn't seem like something to crow about without first finding out whether the content is frivolous.

Comment author: Eliezer_Yudkowsky 14 August 2012 11:33:31PM 17 points [-]

Some non-nitwit (actual-economic-value-generating) startups I've heard proposed lately by people in this or related communities:

  • Kevin Fischer is interested in identifying useful sub-chemicals in certain legal psychoactive plants. Anyone with biotech, chemical-identifying training would be useful to him.
  • Mike Darwin (not LW-style rationalist, but cryonicist) says that his research and numerous other papers show that melatonin, among some other chemicals, is very effective at preventing cerebral ischemia-reperfusion injury, which is the real killer in heart attacks and strokes, and for which there are apparently no currently approved medications.
  • Zvi Mowshowitz is now trying to refound a startup to provide evidence-based, rationalist-filtered medical care - evidence-based doctors as opposed to just evidence-based medical research that often gets ignored by actual doctors.
  • John Schloendorn is the most competent biotech guy I know. He was literally trying to cure cancer - by trying to duplicate the abilities of a 100%-cancer-immune strain of mice, in humans - when his startup ran out of money; and he has a lot of other low-hanging fruits on his list as well.
Comment author: ChrisHibbert 18 August 2012 09:00:47PM 1 point [-]

Atul Gawande has a new article on how the medical industry can learn from other businesses that use production methods to achieve consistent results. He mentions a couple of national start-ups that are trying to use consistent evidence-based practices, and continuous review of outcomes to make health care more reliable and consistent and do it at a profit.
