
Comment author: ArisKatsaris 01 January 2018 02:11:56AM 0 points [-]

Meta Thread

Comment author: gwern 01 January 2018 03:49:54AM 0 points [-]
Comment author: gwern 08 December 2017 03:31:56AM 0 points [-]

On an intermediate class of anesthetics: "Surgical Patients May Be Feeling Pain—and (Mostly) Forgetting It: Amnesic anesthetics are convenient and help patients make a faster recovery, but they don't necessarily prevent suffering during surgery", Kate Cole-Adams:

In 1993, Russell, a little-known anesthesiologist from Hull, England, published a startling study. Using a technique almost primitive in its simplicity, he monitored 32 women undergoing major gynecological surgery at the Hull Royal Infirmary to assess their levels of consciousness. The results convinced him to stop the trial halfway through.

The women were put to sleep with a low-dose anesthetic cocktail that had been recently lauded as providing protection against awareness. The main ingredients were the (then) relatively new drug midazolam, along with a painkiller and muscle relaxant to paralyze them throughout the surgery. Before the women were anesthetized, however, Russell attached what was essentially a blood-pressure cuff around each woman’s forearm. The cuff was then tightened to act as a tourniquet that prevented the flow of blood, and therefore muscle relaxant, to the right hand. Russell hoped to leave open a simple but ingenious channel of communication—like a priority phone line—on the off chance that anyone was there to answer him. Once the women were unconscious, Russell put headphones over their ears through which, throughout all but the final minutes of the operation, he played a prerecorded one-minute continuous-loop cassette. Each message would begin with Russell’s voice repeating the patient’s name twice. Then each woman would hear an identical message. “This is Dr. Russell speaking. If you can hear me, I would like you to open and close the fingers of your right hand, open and close the fingers of your right hand.”

Under the study design, if a patient appeared to move her hand in response to the taped command, Russell was to hold her hand, raise one of the earpieces and say her name, then deliver this instruction: “If you can hear me, squeeze my fingers.” If the woman responded, Russell would ask her to let him know, by squeezing again, if she was feeling any pain. In either of these scenarios, he would then administer a hypnotic drug to put her back to sleep. By the time he had tested 32 women, 23 had squeezed his hand when asked if they could hear. Twenty of them indicated they were in pain. At this point he stopped the study. When interviewed in the recovery room, none of the women claimed to remember anything, though three days later several showed some signs of recall. Two agreed after prompting that they had been asked to do something with their right hand. Neither of them could remember what it was, but while they were thinking about it, said Russell, both involuntarily opened and closed that hand. Fourteen of the patients in the study (including one who was later excluded) showed some signs of light anesthesia (increased heart rate, blood-pressure changes, sweating, tears), but this was true of fewer than half of the hand-squeezers.* Overall, said Russell, such physical signs “seemed of little value” in predicting intraoperative consciousness.

He concluded thus:

If the aim of general anesthesia is to ensure that a patient has no recognizable conscious recall of surgery, and views the perioperative period [during the surgery] as a “positive” experience, then ... [this regimen] may fulfill that requirement. However, the definition of general anesthesia would normally include unconsciousness and freedom from pain during surgery—factors not guaranteed by this technique.

For most of the women in his study, he continued, the state of mind produced by the anesthetic could not be viewed as general anesthesia. Rather, he said, “it should be regarded as general amnesia.”...Twenty years after that discontinued study, Russell staged similar experiments using the isolated-forearm technique alongside a bispectral-index monitor (BIS), which tracks depth of anesthesia. While the number of women who responded dropped to one-third when staff used an inhalation anesthetic, another study using the intravenous drug propofol showed that during BIS-guided surgery, nearly three-quarters of patients still responded to command—half those responses within the manufacturer’s recommended surgical range.

...(This post is adapted from Cole-Adams’s new 2017 book, Anesthesia: The Gift of Oblivion and the Mystery of Consciousness.)

Comment author: ArisKatsaris 01 December 2017 09:02:38AM 0 points [-]

Meta Thread

Comment author: gwern 01 December 2017 06:26:19PM 1 point [-]
Comment author: Costanza 31 December 2010 01:09:26PM 10 points [-]

For those who haven't read it, take a look at Richard Feynman on cargo cult science if you want a good lecture on experimental design.

I loved it. I have a question for anyone who might know: In that 1974 speech, Richard Feynman made a very specific criticism of experimental psychology. He mentioned an "a-number-one experiment" on lab rats running through a maze by a "Mr. Young" in 1937, which corrected for a hugely non-intuitive experimental design error. But then, according to Feynman:

The next experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn't discover anything about the rats.

Who was "Mr. Young?" Did Richard Feynman succeed in drawing attention to this problem within the field of experimental psychology? Has Mr. Young been cited in any papers since 1974?

Comment author: gwern 22 November 2017 04:31:18PM *  1 point [-]

One interesting lead showed up on Twitter: Marvin Minsky on Usenet 10 April 1993 (<code>sci.bio</code> "Pulling Habits out of Rats") in response to someone asking 'who was Mr Young and whatever happened with the mouse studies':

What happened around 1937 was that

  • [possibility #] 5. B. F. Skinner developed ways to control all those external variables by enclosing the experiment in a sealed, soundproof, lightproof, etc., box. The results were reliably reproducible, and a great deal was learned. The boxes were soon named "Skinner Boxes" and became the new paradigm for studying animal learning. Skinner and many others switched to pigeons, for various reasons, but others continued to use rats.

When I was an undergraduate in the late '40s, I hung around that lab and helped with some switching and sequencing stuff to make the experiments more convenient. I don't remember the name of Young, but it was folklore that the change was because someone had found that rats appeared to be able to navigate by distant cues, e.g., the appearance of the ceiling, so that the traditional open-topped maze experiments might be flawed.

Comment author: gwern 29 April 2014 03:16:34AM *  2 points [-]

I noticed something interesting: in Google Scholar, when you punch in Young as author and the reasonable search terms 'rat' 'maze' 'sand' restricted to before Feynman's lecture, only 3 items pop up.
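For anyone wanting to reproduce the search, a minimal sketch of the query URL is below. The `as_q`/`as_sauthors`/`as_yhi` parameter names match Google Scholar's advanced-search form but are undocumented, so treat the exact names as an assumption:

```python
from urllib.parse import urlencode

# Restrict to author "Young", keywords rat/maze/sand, published no later
# than 1974 (the year of Feynman's lecture). Parameter names taken from
# Scholar's advanced-search form; they are not officially documented.
params = {
    "as_q": "rat maze sand",   # all of these words
    "as_sauthors": "Young",    # author name
    "as_yhi": "1974",          # upper bound on publication year
}
url = "https://scholar.google.com/scholar?" + urlencode(params)
print(url)
```

Pasting the resulting URL into a browser reproduces the 3-item result described above (modulo whatever Scholar has indexed since).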

I don't have access to the 3, so I've requested them: http://lesswrong.com/lw/ji3/lesswrong_help_desk_free_paper_downloads_and_more/auye

(Frustratingly, Young wrote a whole textbook on rats/mice available on the Internet Archive - the year before Feynman says he did the experiment! Another textbook, Emotion in man and animal: its nature and dynamic basis, isn't on IA but is in Google Books; checking it with a few keywords like 'sand' and 'smell' and 'third', doesn't seem to throw up any particularly good hits.)

Comment author: gwern 22 November 2017 03:10:29AM *  1 point [-]

Emotion in man and animal can now be read on Hathitrust: https://catalog.hathitrust.org/Record/000426365 Checking the ToC doesn't turn up anything relevant, and an additional search for 'maze' shows some maze-running mice experiments but not the one in question.

I wonder if it's possible this is the wrong Young? It is not that rare a US surname (far from it, #28 in 1990). Thinking about it, isn't calling him "Mr. Young" a little odd? P.T. Young definitely had a PhD and was a tenured professor, so it's a bit disrespectful to not refer to him as 'Dr Young' or 'Professor Young'. (And some quick skimming doesn't turn up any obvious connections between Young's University of Illinois and Feynman, so how did he hear of it?)

Comment author: ArisKatsaris 02 November 2017 12:35:34AM 0 points [-]

Meta Thread

Comment author: gwern 04 November 2017 12:13:16AM 1 point [-]
Comment author: VipulNaik 31 October 2017 05:35:43AM *  0 points [-]

I tried looking in the IRS Form 990 dataset on Amazon S3, specifically searching the text files for forms published in 2017 and 2016.

I found no match for (case-insensitive) openai (other than one organization that was clearly different, its name had openair in it). Searching (case-insensitive) "open ai" gave matches that all had "open air" or "open aid" in them. So, it seems like either they have a really weird legal name or their Form 990 has not yet been released. Googling didn't reveal any articles of incorporation or legal name.
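The matching-and-filtering step can be sketched like so. The filer names below are hypothetical stand-ins for text pulled from the public `irs-form-990` S3 bucket; only the case-insensitive matching and false-positive filtering logic is the point:

```python
import re

# Hypothetical filer names standing in for text from the IRS Form 990
# dataset on Amazon S3 (s3://irs-form-990); real use would scan the
# downloaded filing text files instead.
filers = [
    "OPENAIR MINISTRIES INC",
    "OPEN AIR CAMPAIGNERS",
    "OPEN AID ALLIANCE",
    "OPENAI",
]

# Match "openai" or "open ai" case-insensitively, then drop the known
# false positives ("open air", "open aid") that a bare substring hits.
pattern = re.compile(r"open\s?ai", re.IGNORECASE)
false_positive = re.compile(r"open\s?ai[rd]", re.IGNORECASE)

hits = [name for name in filers
        if pattern.search(name) and not false_positive.search(name)]
print(hits)  # → ['OPENAI']
```

As the comment describes, running the equivalent search over the actual 2016/2017 filing text produced only the "open air"/"open aid" style false positives and no genuine match.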

Comment author: gwern 02 November 2017 12:33:08AM 1 point [-]

As I said, their 2016 Form 990 is not yet available (so their 2017 one definitely isn't), and I have already asked them, so there can be no confusion on the matter.

Comment author: Tenoke 15 October 2017 09:21:08AM *  1 point [-]

Yeah, this survey was pretty disappointing - I had to stop myself from making a negative comment after I took it (though someone else had). I am glad you realized it too, I guess. Even things like starting with a bunch of questions about the new lesswrong-inspired site and the spacing between words were off, let alone the things you mention.

I am honestly a little sad that someone more competent in matters like these, like gwern, didn't take over (as I always assumed would happen if yvain gave up on doing it), because half-hearted attempts like this probably hurt a lot more than they help - e.g. someone coming back in 4 months and seeing how we've gone down to only 300 (!) responders in the annual survey is going to assume LW is even more dead than it really is. This reasoning goes beyond the survey.

Comment author: gwern 30 October 2017 07:55:41PM 2 points [-]

I did intend to take over the survey if Yvain stopped, although I didn't tell him in the hopes he would keep doing it rather than turn it over immediately. I'm not sure I would take it over now: the results seem increasingly irrelevant, as I'm not sure the people taking the survey overlap much anymore with those who took the original LW surveys in 2009.

Comment author: gwern 30 October 2017 07:53:25PM 1 point [-]

Just going to make a minor point that OpenAI does not have $1b (and anyway, they spend most of their money not on AI risk but on generic AI research); they have only a pledge for $1b from Musk. I've asked them several times for their Form 990, which would show how much money they actually have, but their 2016 one is still unavailable.

Comment author: gwern 20 October 2017 01:45:08AM 3 points [-]

If anyone wants more details, I have extensive discussion & excerpts from the paper & DM QAs at https://www.reddit.com/r/reinforcementlearning/comments/778vbk/mastering_the_game_of_go_without_human_knowledge/
