Comment author: Lumifer 30 October 2014 08:05:34PM 1 point [-]

I am not saying this narrow AI should be given direct control of IV drips :-/

I am saying that a doctor, when looking at a patient's chart, should be able to see what the expert system considers to be the most likely diagnoses and then the doctor can accept one, or ignore them all, or order more tests, or do whatever she wants.

A system which automates almost all diagnoses would do that.

No, I don't think so because even if you rely on an automated diagnosis you still have to treat the patient.

Comment author: RobinZ 30 October 2014 08:34:23PM 1 point [-]

Even assuming that the machine would not be modified to give treatment recommendations, that wouldn't change the effect I'm concerned about. If the doctor is accustomed to the machine giving the correct diagnosis for every patient, they'll stop remembering how to diagnose disease and instead remember how to use the machine. It's called "transactive memory".

I'm not arguing against a machine with a button on it that says, "Search for conditions matching recorded symptoms". I'm not arguing against a machine that has automated alerts about certain low-probability risks - if there had been a box that noted the conjunction of "from Liberia" and "temperature spiking to 103 Fahrenheit" in Thomas Eric Duncan during his first hospital visit, there'd probably only be one confirmed case of Ebola in the US instead of three, and Duncan might be alive today. But no automated system can be perfectly reliable, and I want doctors who are accustomed to doing the job themselves on the case whenever the system spits out, "No diagnosis found".

Comment author: Lumifer 30 October 2014 07:33:22PM 3 points [-]

the medical records should automatically include Bayesian probability data on symptoms to help nurses recognize when the diagnosis doesn't fit

Medical expert systems are getting pretty good, I don't see why you wouldn't just jump straight to an auto-updated list of most likely diagnoses (generated by a narrow AI) given the current list of symptoms and test results.
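
A minimal sketch of the kind of auto-updated ranking being described, under a naive-Bayes simplification (all diagnoses, symptoms, and numbers below are hypothetical toy values, not real medical data):

```python
def rank_diagnoses(priors, likelihoods, observed_symptoms):
    """Return diagnoses sorted by posterior probability.

    priors: {diagnosis: P(diagnosis)}
    likelihoods: {diagnosis: {symptom: P(symptom | diagnosis)}}
    Symptoms are treated as conditionally independent -- a naive-Bayes
    simplification, not how a production expert system would model them.
    """
    posteriors = {}
    for d, prior in priors.items():
        p = prior
        for s in observed_symptoms:
            p *= likelihoods[d].get(s, 0.01)  # small default for unmodeled symptoms
        posteriors[d] = p
    total = sum(posteriors.values())
    return sorted(((d, p / total) for d, p in posteriors.items()),
                  key=lambda x: x[1], reverse=True)

# Hypothetical toy numbers:
priors = {"flu": 0.05, "malaria": 0.001}
likelihoods = {
    "flu": {"fever": 0.9, "recent travel to Liberia": 0.02},
    "malaria": {"fever": 0.95, "recent travel to Liberia": 0.5},
}
print(rank_diagnoses(priors, likelihoods, ["fever", "recent travel to Liberia"]))
```

The point is only that the posterior re-ranks as each new symptom or test result is entered; a rare diagnosis with a strongly matching symptom profile can climb past a common one.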

Comment author: RobinZ 30 October 2014 07:57:53PM 1 point [-]

Largely for the same reasons that weather forecasting still involves human meteorologists and the draft in baseball still includes human scouts: a system that integrates both human and automated reasoning produces better outcomes, because human beings can see patterns a lot better than computers can.

Also, we would be well-advised to avoid repeating the mistake made by the commercial-aviation industry, which seems to have fostered such extreme dependence on the automated system that many 'pilots' don't know how to fly a plane. A system which automates almost all diagnoses would do that.

Comment author: RobinZ 30 October 2014 06:50:24PM 11 points [-]

True story: when I first heard the phrase 'heroic responsibility', it took me about five seconds and the question, "On TV Tropes, what definition fits this title?" to generate every detail of EY's definition save one. That detail was that this was supposed to be a good idea. As you point out - and eli-sennesh points out, and the trope that most closely resembles the concept points out - 'heroic responsibility' assumes that everyone other than the heroes cannot be trusted to do their jobs. And, as you point out, that's a recipe for everyone getting in everyone else's way and burning out within a year. And, as you point out, you don't actually know the doctor's job better than the doctors do.

In my opinion, what we should be advocating is the concept of 'subsidiarity' that Fred Clark blogs about on Slacktivist:

Responsibility — ethical obligation — is boundless and universal. All are responsible for all. No one is exempt.

Now, if that were all we had to say or all that we could know, we would likely be paralyzed, overwhelmed by an amorphous, undifferentiated ocean of need. We would be unable to respond effectively, specifically or appropriately to any particular dilemma. And we would come to feel powerless and incapable, thus becoming less likely to even try.

But that’s not all that we can know or all that we have to say.

We are all responsible, but we are not all responsible in the same way. We each and all have roles to play, but we do not all have the same role to play, and we do not each play the same role all the time.

Relationship, proximity, office, ability, means, calling and many other factors all shape our particular individual and differentiated responsibilities in any given case. In every given case. Circumstance and pure chance also play a role, sometimes a very large role, as when you alone are walking by the pond where the drowning stranger calls for help, or when you alone are walking on the road to Jericho when you encounter the stranger who has fallen among thieves.

Different circumstances and different relationships and different proximities entail different responsibilities, but no matter what those differences may be, all are always responsible. Sometimes we may be responsible to act or to give, to lift or to carry directly. Sometimes indirectly. Sometimes our responsibility may be extremely indirect — helping to create the context for the proper functioning of those institutions that, in turn, create the context that allows those most directly and immediately responsible to respond effectively. (Sometimes our indirect responsibility involves giving what we can to the Red Cross or other such organizations to help the victims of a disaster.)

The idea of heroic responsibility suggests that you should make an extraordinary effort to coerce the doctor into re-examining the diagnosis whenever you think an error has been made. The idea of subsidiarity - bearing in mind that I have no relevant expertise - suggests to me that you, being in a better position to monitor a patient's symptoms than the doctor, should have the power to set wheels in motion when those symptoms do not fit the diagnosis ... which suggests a number of approaches to the situation, such as asking the doctor, "Can you give me more information on what I should expect to see or not see based on this diagnosis?"

(My first thought regarding your anecdote was that the medical records should automatically include Bayesian probability data on symptoms to help nurses recognize when the diagnosis doesn't fit, but this article about the misdiagnosis of Ebola suggests revising the system to make it more likely for doctors to see the nurses' observations that would let them catch a misdiagnosis. You're in a better position to examine the policy question than I am.)

I have to admit, I haven't been following the website for a long while - these days, I don't get a lot of value out of it - so what I'm saying that Fred Clark is saying might be what a lot of people already see as the meaning of the concept. But I think that it is valuable to emphasize that responsibility is shared, and sometimes the best thing you can do is help other people do the job. And that's not what Harry Potter-Evans-Verres does in the fanfic.

Comment author: RobinZ 29 October 2014 06:16:05AM 25 points [-]

Completed survey, less the annoying question that required using an annoying scanner that makes annoying noises (I am feeling annoyed). Almost skipped it, but realized that the attitudes of ex-website-regulars might be of interest.

Comment author: Screwtape 10 June 2014 04:50:41PM 7 points [-]

Skyler here, a 21-year-old technology student. Born and raised in the backwoods of Vermont to, ahem, philosophically diverse parents, and encouraged to read pretty much every philosophical book the library had except for Ayn Rand. So naturally I gravitated towards that as soon as I became enough of a teenager, but apparently completely missed the antagonism towards non-geniuses and couldn't for the life of me figure out why I seriously disliked every Objectivist I met.

About two years ago, I had a professor who introduced me to HPMoR, which I enjoyed immensely. It took me around a month to move on to the Sequences. They had the curious property of seeming perfectly obvious, like someone simply expressing what I already knew in better words, and while a lot of them do fall close in broad subject to things I'd written about before, the only use I'd had for Bayesian statistics prior to reading them was spam filters. (And then the author's notes pointed me to Worm, which consumed a month or two.)

A couple of weeks ago, however, I encountered a post on SlateStarCodex (which I'd been reading after stumbling upon it through unrelated browsing) about trans people, and around the same time got linked to Alicorn's Polyhacking article. My positions were previously similar to the authors' (I thought of both being transgender and polyamory as mildly wrong and not understandable), and both made a solid argument that actually changed my mind. This was not the "Oh, of course I knew that" of the Sequences, but a "Huh. I thought that was wrong, but they have good points. Let me think for five minutes and see if there are any more arguments for or against I can think of now." By the end of the respective days, I held a different opinion than I had before, and was beginning to change how I conducted myself because of one of them. In addition, they both seemed like interesting people I could relate to, and a community of such people could be really fun. (As opposed to Eliezer Y. - that is, I can imagine having a conversation with these people, whereas if I were in a conversation with Eliezer Y. I would feel compelled to take notes.)

So yeah. I'm here to see how many other topics require me to change my mind, and to hopefully have cool conversations with interesting people. Any recommendations on where to start?

Comment author: RobinZ 10 June 2014 11:39:12PM 0 points [-]

Also, I don't know if "Typical mind and gender identity" is the blog post that you stumbled across, but I am very glad to have read it, and especially to have read many of the comments. I think I had run into related ideas before (thank you, Internet subcultures!), but that made the idea that gender identity has a strength as well as a direction much clearer.

Comment author: Nornagest 10 June 2014 10:06:30PM 3 points [-]

Besides, substitute "blog host" for "government" and I think it becomes a bit clearer

Speaking for myself, I've got a fair bit of sympathy for the concept with that substitution and a fair bit of antipathy without it. It's a lot easier to find a blog you like and that likes you than to find a government with the same qualities.

Comment author: RobinZ 10 June 2014 10:25:24PM 2 points [-]

Hence the substitution. :)

Comment author: RobinZ 10 June 2014 10:05:24PM 0 points [-]

I'm afraid I haven't been active online recently, but if you live in an area with a regular in-person meetup, those can be seriously awesome. :)

Comment author: Lumifer 04 June 2014 03:35:32PM 9 points [-]

Consider also that "don't argue with idiots" has much of the same superficial appeal as "allow the government to censor idiots".

The former has a fair amount of appeal for me, while the latter I would find appalling and consider a descent into totalitarianism. I don't think this comparison works.

Comment author: RobinZ 10 June 2014 10:03:36PM 4 points [-]

Jiro didn't say appeal to you. Besides, substitute "blog host" for "government" and I think it becomes a bit clearer: both are much easier ways to deal with the problem of someone who persistently disagrees with you than talking to them. Obviously that doesn't make "don't argue with idiots" wrong, but given how much power trivial inconveniences have to shape your behavior, I think an admonition to hold the proposed heuristic to a higher standard of evidence is appropriate.

Comment author: wobster109 10 June 2014 05:36:16PM 3 points [-]

It's a little funny that in our quest for a believably human conversation bot, we've ended up with conversations that are distinctly inhuman.

In no conversation would I meet someone and say, "oh hey, how many legs on a millipede?" They'd say to me "haha that's funny, so are you from around here?" and I'd reply with "how many legs on an ant in Chernobyl?" And if they said to me, "sit here with your arms folded for 4 minutes then repeat this sentence back to me," I wouldn't do it. I'd say "why?" and fail right there.

Comment author: RobinZ 10 June 2014 06:18:02PM 2 points [-]

Hmm ... that, plus shminux's xkcd link, gives me an idea for a test protocol: instead of having the judges interrogate subjects, the judges give each pair of subjects a discussion topic a la Omegle's "spy" mode:

Spy mode gives you and a stranger a random question to discuss. The question is submitted by a third stranger who can watch the conversation, but can't join in.

...and the subjects have a set period of time they are permitted to talk about it. At the end of that time, the judge rates the interestingness of each subject's contribution, and each subject rates their partner. The ratings of confirmed-human subjects would be a basis for evaluating the judges, I presume (although you would probably want a trusted panel of experts to confirm this by inspection of live results), and any subjects who get high ratings out of the unconfirmed pool would be selected for further consideration.
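
The scoring step of that protocol might look something like this sketch (the names, ratings, and the selection threshold are all mine, invented for illustration): confirmed humans establish a baseline, and unconfirmed subjects rated near that baseline get flagged for further consideration.

```python
from statistics import mean

def select_candidates(ratings, confirmed_humans, margin=0.5):
    """ratings: {subject: [numeric ratings from judges and partners]}.
    Returns unconfirmed subjects whose mean rating is within `margin`
    of the confirmed-human baseline."""
    baseline = mean(mean(ratings[s]) for s in confirmed_humans)
    return [s for s in ratings
            if s not in confirmed_humans
            and mean(ratings[s]) >= baseline - margin]

ratings = {
    "human_A": [4, 5, 4],    # confirmed human
    "human_B": [3, 4, 4],    # confirmed human
    "subject_X": [4, 4, 5],  # unconfirmed -- rated an interesting conversationalist
    "subject_Y": [1, 2, 1],  # unconfirmed -- rated dull, likely a bot
}
print(select_candidates(ratings, {"human_A", "human_B"}))
```

The appeal of the design is that no one has to play interrogator: a bot fails simply by being a less interesting conversation partner than the humans around it.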

Comment author: [deleted] 10 June 2014 04:11:36PM 1 point [-]

Suggestion: ask questions which are easy to execute for persons with evolved physical-world intuitions, but hard[er] to calculate otherwise. For example:

Suppose I have a yardstick which is blank on one side and marked in inches on the other. First, I take an unopened 12-oz beverage can and lay it lengthwise on one end of the yardstick so that half the height of the can is touching the yardstick and half is not, and duct-tape it to the yardstick in that position. Second, I take a one-liter plastic water bottle, filled with water, and duct-tape it to the other end in a similar sort of position. If I lay a deck of playing cards in the middle of the open floor and place the yardstick so that the 18-inch mark is centered on top of the deck of cards, when I let go, what will happen?

Familiarity with imperial units is hardly something I would call an evolved physical-world intuition...

In response to comment by [deleted] on Come up with better Turing Tests
Comment author: RobinZ 10 June 2014 05:57:04PM 0 points [-]

Were I using that test case, I would be prepared with statements like "A fluid ounce is just under 30 cubic centimeters" and "A yardstick is three feet long, and each foot is twelve inches" if necessary. Likewise "A liter is slightly more than one quarter of a gallon".

But Stuart_Armstrong was right - it's much too complicated an example.
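
For what it's worth, the intended answer falls out of a rough torque comparison. The masses and lever arms below are my own assumptions (the puzzle doesn't state them), using the conversions I mentioned above:

```python
CM_PER_INCH = 2.54       # 1 inch = 2.54 cm exactly
ML_PER_FL_OZ = 29.5735   # a fluid ounce is just under 30 cubic centimeters

# Assumed masses: ~355 g of drink plus ~15 g of can; 1 kg of water plus ~40 g of bottle.
can_mass_kg = 12 * ML_PER_FL_OZ / 1000 + 0.015
bottle_mass_kg = 1.0 + 0.04

# Treat each load as acting roughly 16 inches from the fulcrum
# (taped near the ends of a 36-inch yardstick pivoting at the 18-inch mark).
arm_m = 16 * CM_PER_INCH / 100
can_torque = can_mass_kg * 9.81 * arm_m
bottle_torque = bottle_mass_kg * 9.81 * arm_m

print(f"can side: {can_torque:.2f} N*m, bottle side: {bottle_torque:.2f} N*m")
```

Since both loads sit at comparable distances from the fulcrum, the comparison is decided almost entirely by mass: the liter of water outweighs the 12-oz can by nearly a factor of three, so the bottle end tips down.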
