Ruby

LessWrong Team


I have signed no contracts or agreements whose existence I cannot mention.

Sequences

LW Team Updates & Announcements
Novum Organum

Comments

Ruby · 123

Here is a fine place if you're just making a shortform.

I think it's good for the soul to study, learn, and grow, and the time current society gives you to do that at university is pretty great if you make use of it, but it's also possible to do that outside of uni. This is putting aside the value for careers, because indeed, with AI it's hard to say.

But being 19 (or whatever age, really), the frame I'd give is: think about where you'll develop most. From a practical standpoint, I'd spend a lot of time trying to do valuable things together with AI. Eventually AI won't need us, but in the meantime symbiosis seems like a good guess as to how to still generate economic value.

Ruby · 61

For me, explicitly (S2) I can't justify being quite that confident, maybe 90-95%, but emotionally 9:1 odds feels very much like "that's what's happening".
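(Converting between the two framings is just arithmetic: odds of $o\!:\!1$ correspond to probability

$$p = \frac{o}{o+1}, \qquad 9\!:\!1 \Rightarrow \tfrac{9}{10} = 90\%, \qquad 19\!:\!1 \Rightarrow \tfrac{19}{20} = 95\%.)$$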

Ruby · 20

I'm just wondering if we were ever sufficiently positively justified to anticipate a good future, or if we were just uncertain about the future and then projected our hopes and dreams onto this uncertainty, regardless of how realistic that was.

I think that's a very reasonable question to be asking. My answer is I think it was justified, but not obvious.

My understanding is that it wasn't taken for granted that simply adding more compute would yield more progress until the deep learning revolution, and even then people updated on specific additional data points for transformers, and even then people sometimes say "we've hit a wall!"

Maybe with more time the US system would have collapsed and been replaced with something fresh and equal to the challenges. To the extent the US was founded and set in motion by a small group of capable, motivated people, it seems not crazy to think a small-to-large group of such people could enact effective plans within a few decades.

Ruby · 20

So gotta keep in mind that probabilities are in your head (I flip a coin; it's already heads or tails in reality, but your credence should still be 50-50). I think it can be the case that we were always doomed even if we weren't yet justified in believing that.
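A minimal sketch of the coin point, in Python (hypothetical code, just to make "probabilities are in your head" concrete):

```python
import random

# Reality: the coin has already been flipped; the outcome is settled.
outcome = random.choice(["heads", "tails"])

# The observer hasn't looked yet, so their credence is still 50-50,
# even though in reality the matter is already decided.
credence = {"heads": 0.5, "tails": 0.5}

# Looking at the coin updates the credence, not the reality.
credence = {side: 1.0 if side == outcome else 0.0 for side in credence}

print(outcome, credence)
```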

Alternatively, it feels like this pushes up against philosophies of determinism and free will. The whole "well, the algorithm is a written program and it'll choose what it chooses deterministically", but also from the inside there are choices.

I think a reason to have been uncertain before and to update more now is just that timelines seem short. I used to have more hope because I thought we had a lot more time to solve both the technical and coordination problems, and then there was the DL/transformers surprise. You make a good case that maybe 50 more years wouldn't make a difference, but I don't know, I wouldn't have as high a p-doom if we had that long.

Ruby · 20

But since the number is subjective living your life like you know you are right is certainly wrong


I don't think this makes sense. Suppose you have a 90% subjective belief that a vial of tasty fluid is lethal poison; you're going to act in accordance with that belief. Now if other people think differently from you, and you think they might be right, maybe you adjust your final subjective probability to something else, but at the end of the day it's yours. That it's subjective doesn't rule out its being pretty extreme.
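A toy expected-value version of the vial example (the utilities are entirely made up, just to show the subjective probability doing real work):

```python
p_poison = 0.9  # subjective belief that the vial is lethal

u_drink_if_poison = -1_000_000  # dying is very bad
u_drink_if_safe = 10            # tasty fluid is mildly good
u_abstain = 0                   # nothing happens

ev_drink = p_poison * u_drink_if_poison + (1 - p_poison) * u_drink_if_safe
print("drink" if ev_drink > u_abstain else "abstain")  # abstain
```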

If what you mean is you can't be that confident given disagreement, I dunno, I wish I could have that much faith in people.

Ruby · 20

"Serendipity" is a term I've been seen used for this, possibly was Venkatesh Rao.

Ruby · 42

Curated. The wiki pages collected here, despite being written in 2015-2017, remain excellent resources on the concepts and arguments behind key AI alignment ideas (both those still widely used and the lesser known). I found that even for concepts/arguments like the orthogonality thesis and corrigibility, I felt a gain in crispness from reading these pages. The concept of epistemic and instrumental efficiency, for example, I didn't previously have, yet it feels useful in thinking about the rise of increasingly powerful AI.

Of course, there's also non-AI content that got imported. The Bayes guide likely remains the best resource for building Bayes intuition, and the same goes for the extremely thorough guide on logarithms.

Ruby · 42

I think the guide should be 10x more prominent in this post.
