Yes. I've witnessed how John Searle turns undergrad Cognitive Science majors against reductionism at UC Berkeley. Searle's "emergence" and Chinese Room argument would be very fertile topics for a diavlog.
It would be interesting to see Searle debate someone who didn't defer to his high status and common-sense-sounding arguments, and who pressed him to the wall on what exactly would happen if you, say, simulated a human brain at high resolution. His intuition pumps are powerful ("thought is just like digestion; you don't really believe a computer will digest food if you simulate gastric enzymes, do you?"), but he never really presents an argument for his views on consciousness or AI, at least in anything I've seen.
No specific use cases or examples, just throwing out ideas. On the one hand it would be cool if the notes one jots down could self-organize somehow, even a little bit. OpenCog is supposed by its creators to be a fully general knowledge representation system, so maybe it could be used as a sort of notation (like a probabilistic-logic version of Mathematica? Or maybe with a natural language front end of some kind? I think Ben Goertzel likes Lojban, so maybe an intermediate language like that).
Anyway, it's not really a product spec just one possible sort of way someday to use machines to make people smarter.
(but that was before I realized we were talking about pills to make people stop liking their favorite tv shows, heh)
On the one hand it would be cool if the notes one jots down could self-organize somehow, even a little bit.
While I agree that it would be cool, anything that doesn't keep your notes exactly as you left them is likely to be more annoying than productive unless it is very cleverly done. (Remember Microsoft Clippy?) You'd probably need to tag at least some things, like persons and places.
We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time.
-- T.S. Eliot
Once again, we are saddled with a Stone Age moral psychology that is appropriate to life in small, homogeneous communities in which all members share roughly the same moral outlook. Our minds trick us into thinking that we are absolutely right and that they are absolutely wrong because, once upon a time, this was a useful way to think. It is no more, though it remains natural as ever. We love our respective moral senses. They are as much a part of us as anything. But if we are to live together in the world we have created for ourselves, so unlike the one in which our ancestors evolved, we must know when to trust our moral senses and when to ignore them.
--Joshua Greene
As long as you have a communications channel to the AI it would not be secure, since you are not a secure system and could be compromised by a sufficiently intelligent agent.
Intelligence is no help if you need to open a safe that only opens to one of the 10^10 possible combinations; you also need enough information about the correct combination to have any chance of guessing it. Humans likely have different compromising combinations, if any, so you'd also need to know a lot about a specific person, or even about their state of mind at the moment; knowledge of human psychology in general might not be enough.
(But apparently what would look to a human like almost no information about the correct combination might be more than enough to a sufficiently clever AI, so it's unsafe, but it's not magically unsafe.)
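A quick back-of-the-envelope sketch of the point above (a minimal illustration, not anything from the original discussion): specifying one of 10^10 combinations takes about 33 bits of information, and a blind guess succeeds one time in ten billion, which is why "almost no information" by human standards could still close much of that gap for a clever enough guesser.

```python
import math

# Number of possible safe combinations from the example above.
combinations = 10**10

# Bits needed to uniquely specify one combination: log2(10^10).
bits_needed = math.log2(combinations)

# Probability that a single uninformed guess opens the safe.
blind_guess = 1 / combinations

print(f"{bits_needed:.1f} bits needed")        # ≈ 33.2
print(f"blind guess odds: {blind_guess:.0e}")  # 1e-10
```

Each bit of side information about the combination roughly halves the remaining search space, so the question is only how many of those ~33 bits leak through the communications channel.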
If you had a program that might or might not be on a track to self-improve and initiate an intelligence explosion, you'd better be sure enough that it would remain friendly to, at the very least, give it a robot body and a scalpel and stand before it with your throat exposed.
Surrounding it with a sandboxed environment couldn't be guaranteed to add any meaningful amount of security. Maybe the few bits of information you provide through your communications channel would be enough for this particular agent to reverse-engineer your psychology and find that correct combination to unlock you, maybe not. Maybe the extra layer(s) between the agent and the physical world would be enough to delay it slightly or stall it completely, maybe not. The point is you shouldn't rely on it.
This is one of my all-time favourite posts of yours, Eliezer. I can recognize elements of what you're describing here in my own thinking over the last year or so, but you've made the processes so much clearer.
As I'm writing this, just a few minutes after finishing the post, it's increasingly difficult not to think of it as "obvious all along," and it's getting harder to pin down exactly what in the post caused me to smile in recognition more than once.
Much of it may have been obvious to me before reading this post as well, but now the verbal imagery needed to clearly explain these things to myself (and hopefully to others) is available. Thank you for these new tools.
You probably meant to write "alcohol" here.
All data, even anecdotal, on how to beat akrasia is great, and this sounds like a method that might work well in many cases. If you wanted to raise your odds of succeeding even more you could probably make your oath in front of a group of friends or family members, or even include a rule about donating your money or time if you failed, preferably to a cause you hated for bonus motivation.
I'd like to give a public oath myself, but I'm going away shortly and will be busy with various things, so I don't know how much time I will have for self-improvement. In somewhat of a coincidence, I just received "Breakdown of Will" in the mail yesterday. How about this: I proudly and publicly swear to read the entire book "Breakdown of Will" by George Ainslie and write an interesting post on LW based on the book before July 17th, 2009, so help me Bayes.