Furcas

This comment has gotten lots of upvotes, but has anyone here tried Vicuna-13B?

Furcas

Well, this is insanely disappointing. Yes, the OP shouldn't have directly replied to the Bankless podcast like that, but it's not like he didn't read your List of Lethalities or your other writing on AGI risk. You really have no excuse for brushing off criticism as thorough and honest as this, particularly the sections that talk about alignment.

And as others have noted, Eliezer Yudkowsky, of all people, complaining about a blog post being long is the height of irony.

This is coming from someone who's mostly agreed with you on AGI risk since reading the Sequences years ago, and who's donated to MIRI, by the way.

On the bright side, this does make me (slightly) update my probability of doom downwards.

Furcas

You may be right about DeepMind's intentions in general, but I'm certain that the reason they didn't brag about AlphaStar is that it didn't quite succeed. There never was an official series between the best SC2 player in the world and AlphaStar. And once Grandmaster-level players got a bit used to playing against AlphaStar, even they could beat it, to say nothing of pros. AlphaStar had excellent micromanagement and decent tactics, but zero strategic ability. It only had the appearance of strategic thinking because there were in fact multiple AlphaStar agents, each of which had learned a different build during training; each instance would then always execute that same build. We never saw AlphaStar do something as elementary as scouting the enemy's army composition and building the units that would best counter it.

So DeepMind saw they had only partially succeeded, but for some reason, instead of continuing their work on AlphaStar, they decided to declare victory and quietly move on to another project.
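
The missing behavior described above, scouting the opponent's composition and then producing whatever trades best against it, is easy to state as a decision rule. Purely as a hypothetical illustration (the unit names and counter scores below are made up for the example; this is not AlphaStar's actual policy or real SC2 balance data), a minimal sketch might look like this:

```python
# Hypothetical sketch of reactive counter-picking: scout the enemy's army
# composition, then produce whichever of our units counters it best.
# Unit names and counter scores are illustrative placeholders only.

from collections import Counter

# COUNTER_TABLE[our_unit][enemy_unit] = how well our_unit trades (made-up values)
COUNTER_TABLE = {
    "stalker":  {"marine": 0.4, "marauder": 0.3, "viking": 0.8},
    "zealot":   {"marine": 0.7, "marauder": 0.5, "viking": 0.1},
    "immortal": {"marine": 0.3, "marauder": 0.9, "viking": 0.2},
}

def pick_counter(scouted_enemy: Counter) -> str:
    """Return the unit whose counter scores best match the scouted composition."""
    total = sum(scouted_enemy.values()) or 1
    def score(our_unit: str) -> float:
        return sum(COUNTER_TABLE[our_unit].get(enemy, 0.0) * n
                   for enemy, n in scouted_enemy.items()) / total
    return max(COUNTER_TABLE, key=score)

# Example: the scout sees a marauder-heavy army, so immortals score highest.
print(pick_counter(Counter({"marine": 5, "marauder": 12})))  # -> "immortal"
```

The point of the sketch is just that this kind of scout-then-adapt loop is what a strategic agent would do, whereas (per the comment) each AlphaStar instance executed its one trained build regardless of what it saw.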

Furcas

I'd guess 1%. The small minority of AI researchers working on FAI will have to find the right solutions to a set of extremely difficult problems on the first try, before the (much better funded!) majority of AI researchers solve the vastly easier problem of building an Unfriendly AGI.

Furcas

Huh. Is it possible that the corpus callosum has (at least partially) healed since the original studies? Or that some other connection has formed between the hemispheres in the years since the operation?

Furcas

Yes, it was a video. As Brillyant mentioned, the official version will be released on the 29th of September. It's possible someone will upload it before then (again), but AFAIK nobody has since the video I linked was taken down.

Furcas

I changed the link to the audio, should work now.

Furcas

Sam Harris' TED talk on AGI existential risk: https://www.youtube.com/watch?v=IZhGkKFH1x0&feature=youtu.be

ETA: It's been taken down, probably so TED can upload it on their own channel. Here's the audio in the meantime: https://drive.google.com/open?id=0B5xcnhOBS2UhZXpyaW9YR3hHU1k

Furcas

If you don't like it now, you never will.
