All of benjamin.j.campbell's Comments + Replies

And it goes deeper. Because what if Mickey never actually woke up, and the brooms had been keeping him asleep the whole time? The Sleeping Beauty problem is actually present in quite a lot of Disney media where the MC goes to sleep. It's also a theme in Mulan. Maybe she never went to war and made her parents proud. It may well have been a dream she just didn't wake up from.

Thank you! This is such an important crux post, and it really gets to the bottom of why the world is still so far from perfect, even though it feels like we've been improving it FOREVER. My only critique is that it could have been longer.

It's worse than that. I've been aware of this since I was a teenager, but apparently there's no amount of correction that's enough. These days I try to avoid making decisions that will be affected in either direction by limerence, or I pre-commit firmly to a course of action and then trust that even if I later want to update the plan, I'm going to regret not doing what I pre-committed to earlier.

Seconded. The perfect level of detail: un-put-down-able, while still explaining everything thoroughly enough to be gripping and well understood.

Those are some extreme outliers for age. Was that self-reported, or some kind of automated information gathering related to their Positly profiles?

This is targeted at all 3 groups:

  • Every year, our models of consciousness and machine learning grow more powerful, and better at performing the same forms of reasoning as humans.
  • Every year, the amount of computing power we can throw at these models ratchets ever higher.
  • Every year, each human's baseline capacity for thinking and reasoning remains exactly the same.

There is a time coming in the next decade or so when we will have released a veritable swarm of different genies that are able to understand and improve themselves better than we can. At that point, the genies will not be going back in the bottle, so we can only pray they like us.

By this stage of their careers, they already have those bits of paper. MIRI are asking people who don't a priori highly value alignment research to jump through extra hoops they haven't already cleared, for what they probably perceive as a slim chance of a job outside their wheelhouse. I know a reasonable number of hard-science academics, and I don't know any who would put that amount of effort into an application for a job they expect to attract plenty of more qualified applicants. The very phrasing makes it sound like MIRI expect hundreds of applicants and are trying to be exclusive. If nothing else is changed, that should be.

Daniel Kokotajlo
Maybe they do in fact receive hundreds of applicants and must exclude most of them? It's not MIRI's fault that there isn't a pre-existing academic discipline of AI alignment research. Imagine SpaceX had a branch office in some very poor country that literally didn't have any engineering education whatsoever. Should they then lower their standards and invite applicants who never studied engineering? No, they should just deal with the fact that they won't have very many qualified people, and/or they should do things like host workshops and stuff to help people learn engineering.

I gave this an upvote because it runs directly counter to my current belief about how relative and absolute pitch work and interact with each other. I agree that if someone's internalised absolute pitch can constantly identify out-of-tune notes, even after minutes of repetition, this is a strong argument against my position. On the other hand, maybe they do produce one internal reference note of set frequency, and when comparing known intervals against this, it returns "out of tune" every time. I can see either story being true, but I would like to hunt down some more information on which of these models is more accurate.
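To make the second story concrete, here's a toy sketch of the "single drifted internal reference" model. The 0.4-semitone drift and the 20-cent tolerance are numbers I've made up purely for illustration: every interval comparison comes out perfect, yet every note gets flagged as out of tune because the one reference has moved.

```python
import math

# Toy model of one story for absolute pitch: a single fixed internal reference
# note plus interval comparison. All numbers here are made-up assumptions.

TRUE_A4 = 440.0                         # actual concert pitch
INTERNAL_A4 = 440.0 * 2 ** (0.4 / 12)   # internal reference drifted sharp by 0.4 semitones (assumed)
TOLERANCE_CENTS = 20                    # assumed threshold before a note "sounds wrong"

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def judge(freq_hz, reference_a4):
    """Name the nearest note relative to the internal reference, and its error in cents."""
    semitones = 12 * math.log2(freq_hz / reference_a4)
    nearest = round(semitones)
    error_cents = (semitones - nearest) * 100
    return NOTE_NAMES[nearest % 12], error_cents

# A perfectly in-tune C major triad at concert pitch:
for name, semis_above_a4 in [("C5", 3), ("E5", 7), ("G5", 10)]:
    freq = TRUE_A4 * 2 ** (semis_above_a4 / 12)
    heard_as, cents = judge(freq, INTERNAL_A4)
    verdict = "in tune" if abs(cents) <= TOLERANCE_CENTS else "out of tune"
    print(f"{name}: heard as {heard_as} ({cents:+.0f} cents) -> {verdict}")
```

Running it, a perfectly in-tune C major triad is named correctly but every note comes back about 40 cents flat and gets flagged as out of tune, which is at least consistent with the "all music sounds wrong" reports below.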

paragonal
Rick Beato has a video about people losing their absolute pitch with age (it seems to happen to everyone eventually). There is a lot of anecdata from people who have experienced this, both in the video and in the comments. Some report that after experiencing a shift in their absolute pitch, all music sounds wrong. Some of them adapted somehow (it's unclear to me how much development of relative abilities was involved) and others report not having noticed that their absolute pitch had shifted. Some report that only after they had lost their absolute pitch completely were they able to develop certain relative pitch abilities. Overall, people's reported experiences in the comments vary a lot, so I wouldn't draw strong conclusions from them. In any case, I find it fascinating to read about these perceptions.

I think your suggestion is effectively what everyone with absolute pitch is actually doing, if the reports from the inside I've heard are accurate. It's definitely how I would start converting my relative pitch proficiency into absolute pitch.

I know what you mean, and I think that, similar to what Richard Kennaway says below, we need to teach people who are new to the sequences and to exotic decision theories not to drive off a cliff because of a thread they couldn't resist pulling.

I think we really need something in the sequences about how to tell whether your wild-seeming idea is remotely likely, i.e. a "How to Trust Your SatNav" post. The basic content of the post: remember to stay grounded, and ask how likely this wild new framework really is. Ask others who can understand and assess your theory, and if they... (read more)

It's great that you have that satnav. I worry about people like me. I worry about being incapable of leaving those thoughts alone until I've pulled the thread enough to be sure I should ignore it. In other words, if I think there's a chance something like that is true, I do want to trust the satnav, but I also want to be sure my "big if true" discovery genuinely isn't true.

Of course, a good inoculation against this has been reading some intense blogs by people who've adopted alternative decision theories that led them down paths which are really scary to watch.

I wor... (read more)

Yitz
Thanks for your excellent input! It’s not really the potential accuracy of such dark philosophies that I’m worried about here (though that is also an area of some concern, of course, since I am human and do have those anxieties on occasion), but rather how easy it seems to be to fall prey to and subsequently act on those infohazards for a certain subclass of extremely intelligent people. We’ve sadly had multiple cases in this community of smart people succumbing to thought-patterns which arguably (probably?) led to real-world deaths, but as far as I can tell, the damage has mostly been contained to individuals or small groups of people so far. The same cannot be said of some religious groups and cults, who have a history of falling prey to such ideologies (“everyone in outgroup x deserves death,” is a popular one). How concerned should we be about, say, philosophical infohazards leading to x-risk level conclusions [example removed]? I suspect natural human satnav/moral intuition leads to very few people being convinced by such arguments, but due to the tendency of people in rationalist (and religious!) spaces to deliberately rethink their intuition, there seems to be a higher risk in those subgroups for perverse eschatological ideologies. Is that risk high enough that active preventative measures should be taken, or is this concern itself of the 1+1=3, wrong-side-of-the-abyss type?
Answer by benjamin.j.campbell

Within reason, I can see how it might be wise for you. I think the largest uncertainty this question hinges upon is whether hospitals in your area have the capacity to treat you if your case is unexpectedly bad. You can get a good sense of this by monitoring available ICU beds in the immediate/short term, but beyond a week it's hard to know.

And here's maybe a more important question, though far harder to model: will hospitals in my area have more/less capacity to treat me later, if I just catch it at the naturally occurring rate?

I'm in NSW, Australia, so e... (read more)

My use of MicroCovid.org so far has probably been very different to most LWers', as I'm based in Sydney, Australia. I've mostly been content to follow public health guidelines and have used MicroCovid.org about 4 times a year for the last 2 years. Each time I used it, I found it very useful for thinking about risk and improving my implicit understanding of how risky different activities were. My usage pattern looks set to change pretty dramatically though, as Australia is in the early stages of Omicron going exponential.

Of all the current features,... (read more)

There's an easier solution that doesn't run the risk of being, or appearing, manipulative. You can contract external and independent counsellors and make them available to your staff anonymously. I don't know if there's anything comparable in the US, but in Australia these are referred to as Employee Assistance Programs (EAPs). Nothing you discuss with the counsellor can be disclosed to your workplace, although in rare circumstances there may be mandatory reporting to the police (e.g. if abuse of, or ongoing risk to, a minor is involved).

This also goes a long way toward creating a place where employees can talk about things they're worried will seem crazy in work contexts.

Solutions like that might work, but it's worth noting that just having an average therapist likely won't be enough.

If you actually care about a level of security that protects secrets against intelligence agencies, the operational security of the therapist's office is a concern.

Governments that use security clearances don't want their employees discussing classified information with therapists who don't hold those clearances.

Talking nonjudgmentally with someone who has reasonable fears that humanity won't survive the next ten years because of fast AI timelines is not easy.

You and I are quite similar, in that we both default to taking responsibility for the painful outcomes that come our way. I do think, though, that there is a flaw in one little bit of this mindset (or maybe it's just the way you expressed it, so tell me if I didn't get what you were pointing at). It sounds like your formulation is basically "if an outcome is my responsibility, then I refuse to give myself compassion." It is possible to accept a high degree of responsibility while still being compassionate to yourself. I know, because I have seen others do it.

... (read more)
[anonymous]
I wouldn't phrase it in terms quite this absolutist, but I would say that when something is my responsibility, I feel like I deserve much less compassion. That's pretty much what I say in the second paragraph: "we tend to feel less compassion for someone if we think that they're responsible for their own misfortune". This is where my mind has been wandering since writing this essay as well. Is there a way to continue learning and reminding ourselves to improve our vigilance without the self-destructive self-flagellation aspect? There might be, which would be great.

Wow! Thank you so, so much for all of this!

I tend to think of myself as very engaged with LW2.0 (despite not commenting or posting often) but I didn't have a very good idea of how much work you were putting in to add features and fix bugs. I'd love to see more frequent posts like this, so we can all get excited together, and show appreciation for everything you and the team are doing. I'm sure that you have a very clear idea of all the features that are being implemented, and how much effort you've been pouring into it, but that isn't... (read more)