All of mu_(negative)'s Comments + Replies

Hey, I remember your medical miracle post. I enjoyed it!

"Objectively" for me would translate to "biomarker" i.e., a bio-physical signal that predicts a clinical outcome. Note that for depression and many psychological issues this means that we find the biomarkers by asking people how they feel...but maybe this is ok because we do huge studies with good controls, and the biomarkers may take on a life of their own after they are identified.

I'm assuming you mean biomarkers for psychological / mental health outcomes specifically. This is spiritually pretty clo...

1rpglover64
Not my medical miracle post, just my comment on it.

Yes, though I wouldn't restrict it to "clinical" because I care about non-medical outcomes, and "bio-physical" seems restrictive, though based on your example, that seems to be just my interpretation of the term. These are legitimate biomarkers, but they're not what I want, and I'm struggling to explain specifically why; the two things that come up are that they have low statistical power and they're a particularly lagging indicator (imagine for contrast, e.g., being able to tell whether an antidepressant would work for you after taking it for a week, even if it takes two months to feel the effects). They're fine and useful for statistics, and even for measuring the effectiveness of a treatment in an individual, but a lot less useful for experimenting.

That sounds really cool. I'm assuming there's nothing actionable available right now for patients?

Yep. This is basically what I'm hoping to monitor in myself. For example, better vigilance might translate to better focus on work tasks, or better selective attention might imply better impulse control.

QM doesn't work so well on the phone, hasn't been updated in years, and has major UX issues for my use case that make it too hard to work with. It also doesn't expose the raw statistics. Cognifit (the only app I've found that does assessment and not just "brain training") reports even less. Do you have a specific app that you know of?

I don't think this is true. My alternative hypothesis (which I think is also compatible with the data) is that it's not hard, but there's no money in it, so there's not much commercial "free energy" making it happen, and that it's tedious, so there's not much hobbyist "free energy", and academia is slow at things like this.

Took me a while to get back to this question. I didn't know the answer, so I looked up some papers. The short answer is, knowing this requires long follow-up periods, which studies are generally not good at, so we don't have great answers. Definitely a significant number of people don't stay better.

The longer answer is, probably about half of people need some form of maintenance treatment to stay non-depressed for more than a year, but our view of this is very confounded. Some studies have used normal antidepressant medications for maintenance, and some studi...

Hi Sable, I'm a TMS (+EEG) researcher. I'm happy to see some TMS discussed here, and this is a nice introductory writeup. If you had any specific questions about TMS or the therapy, I'd be happy to answer them or point you in the right direction. Depression is not my personal area of study or expertise, but it's hard not to know a lot about depression treatment if you study TMS for a living, because it's the most successful application of the technique.

Two specific things you mentioned - first, that TMS depression therapy does not require or use an MRI. It's ...

2Sable
That's awesome that you're doing that research! My biggest question is probably what the distribution looks like for people who get TMS for depression - how many of them are "cured" in the sense that they never need TMS again? How many need it again after a year? Two years? And so on.
1rpglover64
This isn't directly related to TMS, but I've been trying to get an answer to this question for years, and maybe you have one. When doing TMS, or any depression treatment, or any supplementation experiment, etc., it would make sense to track the effects objectively (in addition to, not as a replacement for, subjective monitoring). I haven't found any particularly good option for this, especially if I want to self-administer it most days. Quantified Mind comes close, but it's really hard to use their interface to construct a custom battery and an indefinite experiment. Do you know of anything?

Wanted to say that I enjoyed this and found it much more enlightening than I expected to, given that I have no intrinsic interest in dentistry. I would value a large cross-discipline sample of this question set and think it would have been very useful to my younger self. I think the advice millennials were given when considering college degrees and careers was generally unhelpful magical thinking. These practical questions are helpful. I'd be interested in slightly longer-form answers. Are these edited, or was this interviewee laconic?

Yes, the notion of being superseded does disturb me. Not in principle, but pragmatically. I read your point, broadly, to be that there are a lot of interesting potential non-depressing outcomes to AI, up to advocating for a level of comfort with the idea of getting replaced by something "better" and bigger than ourselves. I generally agree with this! However, I'm less sanguine than you that AI will "replicate" to evolve consciousness that leads to one of these non-depressing outcomes. There's no guarantee we get to be subsumed, cyborged, or even superseded...

1Alex Beyman
Fair point. But then, our most distant ancestor was a mindless maximizer of sorts whose only value function was making copies of itself. It did indeed saturate the oceans with those copies. But the story didn't end there, or there would be nobody to write this.

"For example, if I were making replicators, I'd ensure they were faithful replicators "

Isn't this the whole danger of unaligned AI? It's intelligent, it "replicates" and it doesn't do what you want.

Besides physics-breaking 6, I think the only tenuous link in the chain is 5: that AI ("replicators") will want to convert everything to computronium. But that seems like at least a plausible value function, right? That's basically what we are trying to do. It's either that or paperclips, I'd expect.

(Note: I applaud your commenting to explain the downvote.)

1Alex Beyman
Well put! While you're of course right in your implication that conventional "AI as we know it" would not necessarily "desire" anything, an evolved machine species would. Evolution would select for a survival instinct in them as it did in us. All of our activities you observe falling along those same lines are driven by instincts programmed into us by evolution, which we should expect to be common to all products of evolution. I speculate a strong AI trained on human connectomes would also have this quality, for the same reasons.

While I may or may not agree with your more fantastical conclusions, I don't understand the downvotes. The analogy between biological, neural, and AI systems is not new, but it is well presented. I particularly enjoyed the analogy that computronium is "habitable space" to AI. Minus the physics-as-we-know-it-breaking steps, which are polemical and not crucial to the argument's point, I'd call on downvoters to be explicit about what they disagree with or find unhelpful.

Speculatively, perhaps at least some find the presentation of AI as the "next stage of evolution" inf...

1Alex Beyman
I appreciate your insightful post. We seem similar in our thinking up to a point. Where we diverge is that I am not prejudiced about what form intelligence takes. I care that it is conscious, insofar as we can test for such a thing. I care that it lacks none of our capacities, so that what we offer the universe does not perish along with us. But I do not care that it be humans, specifically, and feel there are carriers of intelligence far more suited to the vacuum of space than we are, or even cyborgs. Does the notion of being superseded disturb you?

Netcentrica, in this letter your explicit opinion is that fiction with a deep treatment of the alignment problem will not be palatable to a wider audience. I think this is not necessarily true. I think that compelling fiction is perhaps the prime vector for engaging a wider, naive audience. Even the Hollywood treatment of I, Robot touched on it and was popular. Not deep or nuanced, sure. But it was there. Maybe more intelligent treatments could succeed if produced with talent.

I mostly stopped reading sci-fi after the era of Asimov and Bradbury. I'd be inter...

3Netcentrica
Reading your response, I have to agree with you. I painted with too broad a brush there. Just because I don’t use elements the general public enjoys in my stories about benevolent AI doesn’t mean that’s the only way it can or has to be done. Thinking about it now, I’m sure stories could be written where there is plenty of action, conflict and romance, while also showing what getting alignment right would look like. Thanks for raising this point. I think it’s an important clarification regarding the larger issue.

Hmm, yeah, I guess that's a good point. I was thinking myopically at a systems level. The post is useful advice for a patient who is willing to do their own research, confident they can do it thoroughly, and is not afraid to "stare into the abyss", i.e., risk getting freaked out or overwhelmed.

Although, I also wonder if insurance companies might try to exploit a patient's prior decision to decline recommended treatment/tests as a reason to not cover future costs...

.

2Kenny
Yes, it's a thorny problem, along many dimensions :) But this is the kind of 'impossible' task I like to throw myself at (at times)!

I don't disagree with you exactly, but I think the focus on rational decision-making misses the context the decisions are being made in. Isn't this just an unaligned incentives problem? When a patient complains of an issue, doctors face exposure to liability if they do not recommend tests to clarify the issue. If the tests indicate something, doctors face liability for not recommending corrective procedures. They generally face less liability for positively recommending tests and procedures because the risk is quantifiable beforehand and the patient makes ...

2Kenny
I would agree that, in some sense, it is 'just' an "unaligned incentives problem". But those are thorny problems! The insight I found valuable from the post was 'just' the idea that 'going along with unaligned incentives' wasn't inevitable. That, in fact, if we know or expect that the 'incentive system' is 'unaligned', we could try to find a way to 'just not do that'. I now think that 'just not making this mistake' is something that's worth trying.

Doctor: "Let's do Diagnostic."
Me: "Okay"
[Diagnostic is done.]
Doctor: "Bad news. Diagnostic returned X. The standard treatment is Y."
Me: "Y given X is stupid because of, e.g. base rates."

And then either:

Doctor: "But Y is the standard treatment!"
Me: "No; goodbye."

or:

Doctor: "Oh yeah; good point. Let's not do Y then."
Me: "Hurray!"
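(To make the base-rate point in that dialogue concrete, here is a minimal sketch with entirely hypothetical numbers; none of these figures come from the exchange above. It just shows how a positive result from a reasonably accurate diagnostic can still leave the condition unlikely when it is rare.)

```python
# Hedged illustration only: Bayes' rule with made-up numbers.
def posterior_given_positive(base_rate, sensitivity, specificity):
    """P(condition | positive test)."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Suppose (hypothetically) the condition affects 1% of people and the
# diagnostic has 90% sensitivity and 90% specificity.
p = posterior_given_positive(base_rate=0.01, sensitivity=0.9, specificity=0.9)
print(round(p, 3))  # ~0.083: a positive result still leaves ~92% chance of no condition
```

Whether the standard treatment Y is then worthwhile depends on weighing that posterior against Y's costs and risks, which is the comparison the dialogue compresses into "stupid because of base rates."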

Thanks for that link! I agree that there is a danger this pitch doesn't get people all the way to X-risk. I think that risk might be worth it, especially if EA notices popular support failing to grow fast enough - i.e., beyond people with obviously related background and interests. Gathering more popular support for taking small AI-related dangers seriously might move the bigger x-risk problems into the Overton window, whereas right now I think they are very much not. Actually, I just realized that this is a great summary of my entire idea, basically, "move...

Whoops, apologies, none of the above. I meant to use the adage "you can't wake someone who is pretending to sleep" similarly to the old "It is difficult to make a man understand a thing when his salary depends on not understanding it." A person with vested interests is like a person pretending to sleep. They are predisposed not to acknowledge arguments misaligned with their vested interests, even if they do in reality understand and agree with the logic of those arguments. The most classic form of bias.

I was trying to express that in order to make any impr...

Although I do like ACC, I haven't read any of the Rama series. It sounds like you're asking if I am advocating for a top-down authoritarian society. It's hard to tell what triggered this impression without more detail from you, but possibly it was my mention of creating an "always-good-actor" bot that guards against other unaligned AGIs.

If that's right, please see my update to my post: I strongly disclaim having good ideas about alignment, and I should have flagged that better. The AGA bot is my best understanding of what Eliezer advocates, but that understanding is very weak and vague, and doesn't suggest more than extremely general policy ideas.

If you meant something else, please elaborate!

Thanks for your reply! I like your compressed version. That feels to me like it would land with a fair number of people. I like to think about trying to explain these concepts to my parents. My dad is a healthcare professional, very competent with machines, can do math, can fix a computer. If I told him superintelligent AI would make nanomachine weapons, he would glaze over. But I think he could imagine having our missile systems taken over by a "next-generation virus."

My mom has no technical background or interests, so she represents my harder test. If I re...

Thanks for your replies! I'm really glad my thoughts were valuable. I did see your post promoting the contest before it was over, but my thoughts on this hadn't coalesced yet.

At this time, I don't know how much sense it makes to risk posing as someone you're not (or, at least, accidentally making a disinterested policymaker incorrectly think that's what you're doing).

Thanks especially for this comment. I noticed I was uncomfortable while writing that part of my post, and I should have paid more attention to that signal. I think I didn't want to water down...

3trevor
I understand that vagueness is really appropriate under some circumstances. But you flipped a lot of switches in my brain when you wrote that, regarding things that you might potentially have been referencing. Was that a reference to things like sensor fusion or sleep tracking, or was that referring to policymakers who choose to be vague, or was it about skeptical policymakers being turned off by off-putting phrases like "doom soon" or "cosmic endowment", or was it something else that I didn't understand? Whatever you're comfortable with divulging is fine with me.

Cool, I just wrote a post with an orthogonal take on the same issue. Seems like Eliezer's nanotech comment was pretty polarizing. Self-promotion: Pitching an Alignment Softball

I worry that the global response would be impotent even if the AGI was sandboxed to twitter. Having been through the pandemic, I perceive at least the United States' political and social system to be deeply vulnerable to the kind of attacks that would be easiest for an AGI - those requiring no physical infrastructure.

This does not directly conflict with or even really address your a...

Hi Moderators, as this is my first post, I'd appreciate any help in giving it appropriate tags. Thanks