All of bigbird's Comments + Replies

bigbird4-3

This is just "are you a good person" with few or no subtle twists, right?

3Lone Pine
Really, it's a meaningless question unless you provide a definition of 'aligned'. If your definition is 'a good person' then the question is really just "do you see yourself as moral?"
bigbird*10

Just FYI, TT, please keep telling people about value sharding! Telling people about working solutions to alignment subproblems is a really good thing!!

Ah, that wasn't my intention at all!

bigbird*10

A side-lecture Keltham gives in Eliezer's story reminds me of some interactions I'd have with my dad as a kid. We'd be playing baseball, and he'd try to teach me some mechanical motion, and if I didn't get it or seemed bored he'd say "C'mon ${name}, it's physics! F=ma!"

Different AIs built and run by different organizations would have different utility functions and may face equal competition from one another; that's fine. My problem is the part after that where he implies (says?) that the Google StockMaxx AI supercluster would face stiff competition from the humans at FBI & co.

bigbird*00

[Removed, was meant to be nice but I can see how it could be taken the other way]

2Ben Pace
FTR I didn't really like this meme, in particular because I think it has an implicit element of "this person is unattractive", which feels like an irrelevant personal attack. I didn't want to just downvote because I don't want to implicitly signal "all memes bad", but this one does leave a bad taste in my mouth.

I think it'd be good to get these people who dismiss deep learning to explicitly state whether or not the only thing keeping us from imploding is an inability by their field to solve a core problem it's explicitly trying to solve. In particular, it seems weird to answer a question like "why isn't AI X-risk a problem" with "because the ML industry is failing to barrel towards that target fast enough".

8Michaël Trazzi
I think it makes sense (for him) to not believe AI X-risk is an important problem to solve (right now) if he believes that "fast enough" means "not in his lifetime", and he also puts a lot of moral weight on near-term issues. For completeness' sake, here are some claims more relevant to "not being able to solve the core problem": 1) From the part about compositionality, I believe he is making a point about the inability, within the current deep learning paradigm, to generate an image that contradicts the training-set distribution. 2) From the part about generalization, he is saying that there is some inability to build truly general systems. I do not agree with his claim, but if I were to steelman the argument it would be something like "even if it seems deep learning is making progress, Boston Dynamics is not using deep learning and there is no progress in the kind of generalization needed for the Wozniak test".
bigbird180

I am slightly baffled that someone who has lucidly examined all of the ways in which corporations are horribly misaligned and principal-agent problems are everywhere, does not see the irony in saying that managing/regulating/policing those corporations will be similar to managing an AI supercluster totally united by the same utility function.

He says the "totally united by the same utility function" part is implausible:

he claims (quite implausibly I think) that all AGIs naturally coordinate to merge into a single system to defeat competition-based checks.

4oge
(I think) he thinks that managing/regulating/policing those corporations is the best that humans are willing to do.
bigbird3-4

Why not also have author names at the bottom, while you're at it?

The craziest part of being a rationalist is regularly reading completely unrelated technical content, thinking "this person seems lucid", then going to their blog and seeing that they are Martin Sustrik.

peaceful protest of the acceleration of agi technology without an actually specific written & coherent plan for what we will do when we get there

2[anonymous]
Do you suppose that peaceful protest would have stopped the Manhattan Project?

Update: what I am saying is that the humans working on the Manhattan Project anticipated possessing a basically unstoppable weapon allowing them to vaporize cities at will. They wouldn't care if some people disagreed, so long as they had the power to prevent those people from causing any significant slowdown of progress.

For AGI technology, humans anticipate the power to basically control local space at will, being able to order AGIs to successfully overcome the barriers in the way of nanotechnology, automated construction and mining, and our individual lifespan limits. As long as the peaceful protestors are not physically able to interfere or get a court to interfere, it's not going to dissuade anyone who believes they are going to succeed in their personal future. (Note that the court is generally unable to interfere if the AGI builders are protected or are themselves a government entity.)

Seriously this is the funniest shit

Nothing Yudkowsky has ever done has impressed me as much as noticing the timestamps on the Mad Investor Chaos glowfic. My peewee brain is in shock. 

How much coordination went on behind the scenes to get the background understanding of the world? Do they list out plot points and story beats before each session? What proportion of what I'm seeing is railroaded vs. made up on the spot? I really wish I had these superpowers, damnit.

2gjm
"noticing the timestamps" not because there's anything impressive in the timestamps themselves, but just because it indicates Eliezer is cranking out writing very quickly? I think there is at least sometimes substantial offline planning before a bunch of new stuff comes along. (And sometimes, when they wing it and it doesn't work out, material gets reworked or just deleted, though neither of those happens very often.)

You went from saying that telling the general public about the problem is net negative to saying that it has an opportunity cost, and there are probably unspecified better things to do with your time. I don't disagree with the latter.

6Rob Bensinger
If it were (sufficiently) net positive rather than net negative, then it would be worth the opportunity cost.

One reason you might be in favor of telling the larger public about AI risk absent a clear path to victory is that it's the truth, and even regular people who don't have anything to immediately contribute to the problem deserve to know if they're gonna die in 10-25 years.

8Rob Bensinger
Time spent doing outreach to the general public is time not spent on other tasks. If there's something else you could do to reduce the risk of everyone dying, I think most people would reflectively endorse you prioritizing that instead, if 'spend your time warning us' is either neutral or actively harmful to people's survival odds. I do think this is a compelling reason not to lie to people, if you need more reasons. But "don't lie" is different from "go out of your way to choose a priority list that will increase people's odds of dying, in order to warn them that they're likely to die".