steven0461 comments on The Strangest Thing An AI Could Tell You - Less Wrong
Do we have any sort of data at all on what happens when decent rationalists are afflicted with things like anosognosia and Capgras?
Not that I know of offhand. I'm vastly curious as to whether I could beat it, of course - but wouldn't dare try to find out, even if there were a drug simulating it that was supposedly strictly temporary, any more than I'd dare ride a motorcycle or go skydiving.
We can temporarily disrupt language processing through magnetically induced electric currents in the brain (transcranial magnetic stimulation). As far as anyone can tell, the study subjects suffer no permanent impairment of any kind. Would you be willing to try an anosognosia version of the experiment?
Perhaps such a test would become part of an objective method to measure rationality.
What!? I'm not rational if I rely on my right brain to do its job? True rationalists act rationally when you take out a big chunk of their circuitry? When you remove a component of your negative feedback loop (I assume nature uses them often), you should act normal? I'd suspect a person who could would be paranoid that everyone was lying once the right brain was put back online!
From the little I understand, for people both unprepared for the experience (everyone who's had it) and not thinking of it as a test of rationality (again, everyone), the left brain confabulates elaborate scenarios to justify retaining the beliefs, and the (damaged) right brain fails to adequately consider new hypotheses.
It seems people with stronger left brains, roughly higher IQ, should be more prone to being stupid in this way, and failing the test, than people with less of an ability to justify their beliefs.
This would still be a way to test rationality. If it makes you stupid, you're probably rational.
My point is that it would give a rationality/intelligence ratio, so its ability to measure rationality depends on our separate ability to measure intelligence, which is currently pretty crude. If we can induce measured degrees of artificial anosognosia, report at what level each subject can no longer save him or herself with rationality, and measure intelligence, then we could nail down rationality more precisely.
My hypothesis is that the smarter someone is, the more impressed we will be with the extent he or she remained rational while being magnetically stultified.
A better test would be to remove the brain's left hemisphere and then test their confidence calibration.
I've heard an account of cortisone withdrawal from a generally rational person-- she said her hallucinations became more and more bizarre (iirc, a CIA center appeared in her hospital room), and she had no ability to check it for plausibility.
I wonder whether practicing lucid dreaming would give people more ability to remain reflective during non-dream hallucinations.
There are plenty of drugs that simulate temporary psychosis, and some of them, like LSD, are quite safe, physically. What makes you so wary?
(I haven't tried LSD myself, due in part to unpleasant experiences with Ritalin as a child.)
My own experience with LSD was very pleasant, and didn't simulate any sort of psychosis or unusual beliefs; it just made everything look big and beautiful and deep, and made me pay closer attention to small details.
Marijuana, on the other hand, has almost always made me temporarily psychotic, or at least paranoid. It's also very safe physically. I'd be curious to know about any decent rationalists' attempts to "beat" this or other drugs.
By 'safe' it should be clear that marijuana can be expected to cause a predictable minor amount of permanent damage to the brain without, say, killing you.
Psyclobin should be preferred in most cases. It actually gives long term benefits in controlled circumstances.
More info on both of these statements, please! They both seem unlikely to me.
Is that an alternative spelling for the substance known as Psilocybin?
I suppose making blatant spelling errors could be considered an 'alternative'. ;)
I've used Bayesianism to stop myself from being paranoid after smoking marijuana. I don't get it too badly, but I tend to think random events are related to me, e.g. that police car driving down the street with its sirens on is coming for me, or the runner in the park is here to mug me. Besides being able to understand that I've deliberately altered my mental state and can make reference to how I would feel in an unaltered state, I've also taken a moment to pause and say something along the lines of "OK, given that it's highly implausible that the police know I'm high, I have a fairly low prior for 'random police car is coming for me'. Do I have any evidence that would have caused me to update my beliefs? No. So no reason to believe them." It works pretty well, but marijuana is a pretty soft drug imho. In my limited experience it's harder to reason yourself out of adverse mental states that can come from psilocybin and (sufficient quantities of) LSD.
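The update described above can be sketched with Bayes' rule. All the numbers here are made up for illustration: the point is just that a siren is about equally likely whether or not the police are after you, so the likelihood ratio is ~1 and the posterior barely moves off the tiny prior.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical prior: the police almost never know or care that I'm high.
prior = 0.001

# Evidence: a nearby siren. Sirens happen at roughly the same rate whether
# the hypothesis is true or false, so the evidence is uninformative.
p = posterior(prior, p_e_given_h=0.05, p_e_given_not_h=0.05)
print(p)  # essentially unchanged from the 0.001 prior
```

With an uninformative likelihood ratio the posterior equals the prior, which is the "no evidence, so no reason to update" move the comment describes.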
I can easily "beat" alcohol (i.e., think and act the way I would if I were sober -- modulo motor impairments) if I want to (unless it is so much as to make me sick). I no longer smoke marijuana that often these days, but the only time I did after finding Less Wrong, I felt like I didn't want to beat it. I'd have to resolve to try and beat it before smoking it, I think.
You should know better. Of course "you" can't beat it, if the experimenting mad scientist is allowed to delete arbitrary subsystems. You won't be the same you. What you might have achieved is to force them to shut down more subsystems.
I suspect that by Eliezer's standards, "beat it" would be defined as "they would be forced to shut down enough subsystems to no longer have any semblance of a functioning intelligence".
Of course, I doubt that this is possible on the human cognitive architecture, but it would be a nice property of a fault-tolerant AI.
Highly unlikely in a human. I don't know, but I'd guess that self-checking is one subsystem. Lose that, and the plainest contradictions pass as uninteresting.
On the other hand, the engineering challenge catches my interest. Might there be any way to train other parts of your mind, parts that normally don't do checking, to sync up if everything is working OK or intervene if the normal part is out of action? Get the right brain in on the act, perhaps. That might give you something like Dune "truth sense" but turned inward. It would certainly feel very different from normal reason.
Conscious-mind rationality is good - might unconscious-mind rationality be better? You could self-monitor even on autopilot.
How many subsystems can be made rational?
There are many reasons to expect that the non-conscious part of the mind is largely arational, in the LW senses of rationality. My impression is that it seems to operate mostly on trained responses and associative connections/pattern matching, mediated by emotional responses. In practice this means it can often actually be more rational in certain ways than the conscious mind, because it seems to be better at collecting and correlating information, cf. people who have a non-rational aversion to certain foods for reasons they don't consciously understand, then years later discover they're actually allergic to them.
I expect the better approach would be to deliberately train the non-conscious mind to use associations and heuristics derived by the rational conscious mind, and I mean "train" in the sense of "training a dog".
Any sort of high-level self-monitoring is probably beyond its capabilities, though perhaps recognizing warning signs and alerting the conscious mind would work. Some sort of "panic on unexpected input" type heuristic, I guess.
But there's another, safe way to find out: beat one you already have.
Not exactly the same, but there's the famous case of John Nash's paranoid schizophrenia.