TinkerBird

Comments

I imagine it's a sales tactic. Ask for $7 trillion and people assume you believe you're worth that much, and if you've got such a high opinion of yourself, maybe you're right...

In other news, I'm looking to sell a painting of mine for £2 million ;)

This looks fantastic. Hopefully it leads to some great things, as I've always found the collective intelligence of the masses to be a terribly underused resource. It reminds me of the game Foldit (and hopefully in the future it will remind me of the wild success that game had in the field of protein folding).

This sounds like it would only work on a machine too dumb to be useful, and if it's that dumb, you can switch it off yourself. 

It doesn't help with the convergent instrumental goal of neutralizing threats, because leaving a copy of yourself behind to kill all the humans allows you to be really sure that you're switched off and won't be switched on again. 

I really appreciate these. 

  1. Why do some people think that alignment will be easy/easy enough? 
  2. Is there such a thing as 'aligned enough to help solve alignment research'?

I think there's a lot we could learn from climate change activists. Having a tangible 'bad guy' would really help, so maybe we should be framing it more that way. 

  • "The greedy corporations are gambling with our lives to line their pockets." 
  • "The governments are racing towards AI to win world domination, and Russia might win."
  • "AI will put 99% of the population out of work forever and we'll all starve."

And a better way to frame the issue might be "Bad people using AI" as opposed to "AI will kill us".

If anyone knows of any groups working towards a major public awareness campaign, please let the rest of us know about it. Or maybe we should start our own. 

I'm with you on this. I think Yudkowsky came across a lot better in this one with his more serious tone, but even so, we need to keep looking for something better.

Popular science educators would be a place to start, and I've thought about sending out a million emails to scientifically minded educators on YouTube, but even that doesn't feel like the best solution to me.

The sort of people who get listened to are the more political types, so I think they're the people to reach out to. You might say they need to understand the science to talk about it, but I'd still put more weight on charisma than on scientific authority.

Anyone have any ideas on how to get people like this on board? 

As a note for Yudkowsky, if he ever sees this and cares about the random gut feelings of strangers: after seeing this, I suspect that an authoritative, stern, 'strong leader' tone of speaking will be much more effective than current approaches.

EDIT: missed a word

For ages I've wanted something for AI alignment like what the Foldit researchers created: they turned protein folding into a puzzle game, and the ordinary people online who played it wildly outperformed the researchers and their algorithms, purely by working together in vast numbers and combining their creative thinking.

I know it's a lot to ask for with AI alignment, but still, if it's possible, I'd put a lot of hope on it. 

As someone who's been pinning his hopes on a 'survivable disaster' to wake people up to the dangers, I'd call this good news.

I doubt anything capable of destroying the world will come along significantly sooner than superintelligent AGI, and a world in which there are visible disasters due to AI feels much more likely to survive than one in which the whirling razorblades are invisible.

EDIT: "no fire alarm for AGI." Oh I beg to differ, Mr. Yudkowsky. I beg to differ. 

This confuses me too. I think Musk must be either smarter or a lot dumber than I thought he was yesterday, and sadly, dumber seems to be the way it usually goes. 

That said, if this makes OpenAI go away to be replaced by a company run by someone who respects the dangers of AI, I'll take it.
