In case it was not obvious, the correct takeaway from this article is that you should go get a flu shot, if you haven't already this year. If you have, reply to this comment saying so and I'd be more than happy to upvote you.
Meetup : MIRIxAtlanta - MIRI Research Guide + Corrigibility
Discussion article for the meetup : MIRIxAtlanta - MIRI Research Guide + Corrigibility
We'll go over the new research guide (http://intelligence.org/research-guide/), which describes the mathematical background needed for FAI research and surveys the major lines of research at MIRI.
We will also look at a new line of research called corrigibility. From the research guide: "As artificially intelligent systems grow in intelligence and capability, some of their available options may allow them to resist intervention by their programmers. We call an AI system “corrigible” if it cooperates with what its creators regard as a corrective intervention, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences."
There will also be snacks and cats! Hope to see you there!
I got one this year! I didn't get one last year, and someone else ended up getting very sick as a direct consequence... :(
I have a hypothesis based on systems theory, but I don't know how much sense it makes.
A system can only simulate a system less complex than itself, not one at least as complex. Therefore, human neurologists will never come up with a complete theory of the human mind, because they won't be able to think it through: the human brain cannot contain a complete model of itself. Even if collectively they come to understand all the parts, no single brain will be able to see the complete picture.
Am I missing some crucial detail?
Seems unlikely, given the existence of things like quines and the fact that self-reference comes pretty easily. I recommend reading Gödel, Escher, Bach; it discusses your original question in the context of this sort of self-referential mathematics, and it's also very entertaining.
Already has been, see Reddit.
What was the string that generated the hash, then?
ETA: See Lumifer's link above.
Your poll is somewhat broken (the last option is missing). Note that the ability to rotate objects in the mind varies widely: some people do it effortlessly, some even with multiple elements (Tesla was reportedly able to animate whole machines in his mind). I'd therefore recommend a scaled or indexed poll ("not at all", "partial/limited", "single element, single rotation", "single element, multiple motions/changes", "multiple interacting elements (gears)", "whole machines"). Since only 4 people (me included) have voted, I recommend reposting the poll with these extensions.
Thanks for catching the error, and I think the rest of your suggestion is good, but unfortunately 32 people have taken it now (wow!) and I don't think I can change it without breaking it.
Most comments show exactly one downvote, without a clear pattern as to why. My guess is that a single person downvoted all these short comments. Could it be that this user doesn't know the custom of upvoting survey-takers?
ADDED 2014-10-25T16:20 UTC: The single downvotes disappeared.
ADDED 2014-10-26T21:10 UTC: The single downvotes have reappeared (at least on many of the high-scoring comments).
Almost everyone has a downvote again. What's more interesting is the short list of people who don't...
It has been reported here that largest volume, longest length, and largest mass all give the same result.
That still doesn't help for the purposes of calibration, when you have uncertainty over whether these are all the same.
I agree about all of that except for contrarianism (and yes, I'm aware of the irony). You want to have some amount of contrarianism in your ecosystem, because people sometimes aren't satisfied with the hivemind and they need a place to go when that happens. Sometimes they need solutions that work where the mainstream answers wouldn't, because they fall into a weird corner case or because they're invisible to the mainstream for some other reason. Sometimes they just want emotional support. And sometimes they want an argument, and there's a place for that too.
What you don't want is for the community's default response to be "find the soft bits of this statement, and then go after them like a pack of starving hyenas tearing into a piñata made entirely of ham". There need to be safe topics and safe stances, or people will just stop engaging -- no one's always in the mood for an argument.
On the other hand, too much agreeableness leads to another kind of failure mode -- and IMO a more sinister one.
The article talked about endless contrarianism, where people disagree as a default reaction, instead of because of a pre-existing difference in models. I think that is a problem in the LW community.
Quines don't say anything about human working-memory limitations or about how long a human would need to learn to understand the whole system. Furthermore, a quine only prints its source code; it doesn't understand it. So I'm not sure how they're relevant here.
I wouldn't be too surprised if the hypothesis is true for unmodified humans, but for systems in general I expect it to be untrue. Whatever 'understanding' is, the diagonal lemma should be able to find a fixed point for it (or at the very least, an arbitrarily close approximation) - it would be very surprising if it didn't hold. Quines are just an instance of this general principle that you can actually play with and poke around and see how they work - which helps demystify the core idea and gives you a picture of how this could be possible.
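For anyone who wants to poke around as suggested, here is a minimal quine sketch in Python (not from the discussion, just an illustration): a program whose output, when run, is exactly its own source (excluding the comment lines).

```python
# A minimal Python quine. The string `s` is a template containing the
# whole program; {!r} is filled with the repr of `s` itself, so printing
# s.format(s) reproduces the two code lines exactly.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

The trick is the same fixed-point idea the comment gestures at: the program contains a description of itself, plus instructions for turning that description back into the program.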