bokov


bokov10

I appreciate your feedback and take it in the spirit it is intended. You are in no danger of shitting on my idea because it's not my idea. It's happening with or without me.

My idea is to cast a broad net looking for strategies for harm reduction and risk mitigation within these constraints.

I'm with you that machines practising medicine autonomously is a bad idea, and so do doctors. Idealistically, because they got into this work in order to help people; cynically, because they don't want to be rendered redundant.

The primary focus looks like workflow management, not diagnoses. E.g. how to reduce the amount of time various requests sit in a queue by figuring out which humans are most likely the ones who should be reading them.

Also, predictive modelling, e.g. which patients are at elevated risk for bad outcomes, or how many nurses to schedule for a particular shift. Though these don't necessarily need AI/ML, and they long predate it.

Then there are auto-suggestor/auto-reminder use-cases: "You coded this patient as having diabetes without complications, but the text notes suggest diabetes with nephropathy, are you sure you didn't mean to use that more specific code?"

So, at least in the short term, AI apps will not have the opportunity to screw up in the immediately obvious ways like incorrect diagnoses or incorrect orders. It's the more subtle screw-ups that I'm worried about at the moment.

Answer by bokov147

The first step is to see a psychiatrist and take the medication they recommend. For me it was an immediate night-and-day difference. I don't know why the hell I wasted so much of my life before I finally went and got treatment. Don't repeat my mistake.

bokov10

I actually tried running your essay through ChatGPT to make it more readable but it's way too long. Can you at least break it into non-redundant sections not more than 3000 words each? Then we can do the rest.

bokov10

I second that. I actually tried to read your other posts because I was curious to find out why you are getting downvoted-- maybe I can learn something outside the LW party-line from you.

But unfortunately, you don't explain your position in clear, easy-to-understand terms, so I'm going to have to put off sorting through your stuff until I have more time.

bokov10

I meant prepping metaphorically, in the sense of being willing to delve into the specifics of a scenario most other people would dismiss as unwinnable. The reason I posted this is that, though it's obvious the bunker approach isn't really the right one, I'm drawing a blank on what the right approach would even look like.

That being said, I figured one class of scenario might look identical to nuclear or biological war, only facilitated by AI. Are you saying scenarios where many but not all people die due to political/economic/environmental consequences of AI emergence are unlikely enough to disregard?

So let's talk about dystopias/weirdtopias. Do you see any categories into which these can be grouped? The question then becomes: who will lose the most and who will lose the least under various types of scenarios?

bokov10

I'm sad to see him go. I don't know enough about LW's history, and have too little experience with forum moderation, to agree or disagree with your decision. Though LW has been around for a very long time without imploding, so that's evidence you guys know what you're doing.

Please don't take down his post, though. I believe somewhere in there is a good-faith opinion at odds with my own. I want to read and understand it; I'm just not ready for this much reading tonight.

I wish I could write so prolifically! Or maybe it's a curse rather than a blessing because then it becomes an obstacle to people understanding your point of view.

bokov30

Are there any links we can read about non-appeasing de-escalation strategies?

Either theoretical ones or ones that have been tried in the past are fine.
