Comments

bigbird

This is just "are you a good person" with few or no subtle twists, right?

Just FYI, TT, please keep telling people about value sharding! Telling people about working solutions to alignment subproblems is a really good thing!!

Ah, that wasn't my intention at all!

A side-lecture Keltham gives in Eliezer's story reminds me of some interactions I'd have with my dad as a kid. We'd be playing baseball, and he'd try to teach me some mechanical motion, and if I didn't get it or seemed bored he'd say "C'mon ${name}, it's physics! F=ma!"

Different AIs built and run by different organizations would have different utility functions and might face equal competition from one another; that's fine. My problem is the part after that, where he implies (says?) that the Google StockMaxx AI supercluster would face stiff competition from the humans at the FBI & co.

[Removed, was meant to be nice but I can see how it could be taken the other way]

I think it'd be good to get these people who dismiss deep learning to explicitly state whether or not the only thing keeping us from imploding is their field's inability to solve a core problem it's explicitly trying to solve. In particular, it seems weird to answer a question like "why isn't AI X-risk a problem?" with "because the ML industry is failing to barrel towards that target fast enough".

bigbird

I am slightly baffled that someone who has lucidly examined all of the ways in which corporations are horribly misaligned and principal-agent problems are everywhere does not see the irony in saying that managing/regulating/policing those corporations will be similar to managing an AI supercluster totally united by the same utility function.

bigbird

Why not also have author names at the bottom, while you're at it?