If I were a strong moral realist, I'd also believe that an AI should be able to just "figure it out". I wonder instead whether exposure to the field of AI research, where cost functions and solution methods are pretty much orthogonal, would help alleviate the moral realism?
"I want to make papperclips, if I think a bit about morality I will realize the error of my ways and cease wanting to make paperclips, thus less pappercips will be made, hence I must not think about morality."
"If I had more time, I would have written a shorter letter." - Blaise Pascal (and probably some other people too)
A noteworthy counter-point is that not all important ideas are simple.
Or simple ideas have all sorts of implications that naturally follow, but which most readers need to have teased out for them.
Be suspicious of overly bold claims in evolutionary psychology - check and mostly agreed, though see Dennett's Darwin's Dangerous Idea for something that might slightly re-inflate the idea of good ev-psych.
But I don't see how your suggestion to think about minds as "lined slates" follows. If anything, it follows even less after reading your argument. What reason do we have to think that our minds have evolved to be very flexible general learning processes? Your argument makes me imagine my mind as the opposite - after reading it, our brains look even more like they should be a bunch of wires, randomly attached.
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP6. "What are you doing?", asked Minsky. "I am training a randomly wired neural network to play Tic Tac Toe". "Why is the net wired randomly?" asked Minsky. "I do not want it to have any preconceptions of how to play." Minsky shut his eyes: "Why do you close your eyes?" Sussman asked his teacher. "So the room will be empty." At that moment, Sussman was enlightened.
Actually we have theory-of-mind shaped holes in the brain. I don't have an iPhone, and I haven't seen a live demonstration of the Siri app yet, but the commercials and videos about Siri I've seen on YouTube show that it doesn't take much to trick the theory of mind into treating Siri as a person.
Gods make me think of Siri-like apps. People apply their theory of mind to their "god apps," and they try to communicate with these apps through worship, prayer, the study of obscure scriptures, and the infliction of self-harm, as David Hume describes in my post below.
I believe that and agree that it's gotta be a major factor in driving god-belief and other types of animism (it's one of the brain-holes I'm talking about). Yet, religion seems to be a superset -- and sometimes a large one -- of god-belief. There's seemingly more to explain. There are likely several other brain-holes involved here.
There was so much talk of "religion-shaped holes" in the brain in those comments! Shouldn't it be pretty obvious to people who are aware of the "meme" concept that religions are brain-hole shaped and not the other way around?
Of course it's ok if a rocket-ship fills a certain brain-hole in a similar way that religion does - rocket ships are benign. It's naming one or several of those holes "religion-shaped" that seems to have a dark-artsy kind of effect and turns us all stupid.
I agree, this class of problems is enormous, and has a hand in practically all mismanagement and misdirection of human effort. The problem is so aggravating, though, because humans seem not to expect it to happen. Why do these feel like "problems", when the underlying behavior is exactly what we'd expect given our knowledge of stable strategy?
In the case of human endeavor, I suspect this is a problem because we do not try hard enough to defend against it. We seem surprised and indignant when systems that purport to do good are "gamed." If more people were more aware that this was the consequence of strategic agents, they might watch harder for signs of destructive strategy, and more carefully design the systems that they build and manage against such strategy.
(Hmm. I notice that I'm posing a purportedly-fully-general strategy to a fully-general problem, without evidence or examples. And I'm claiming that better global understanding is a solution, when it is probably just applause lights to my imagined audience. And I still think that what I'm saying is right! Wow. You should probably ignore this comment's content, but I'll still leave it here, as a counterexample.)
To add some credence to your recommendations: since actually understanding the logic of stable strategies, I feel much less frustrated by the examples cited by Bakkot than I used to when I assumed they were the result of evil. I also view them as problems to be solved, not enemies to scorn. This really truly seems like an improved disposition being caused by understanding.
Though to be fair, my actions have been changed much less than my dispositions have. Such understanding has, at most, impacted the behaviours I associate with far-mode: how I vote, argue, and make life decisions. My leisure activities, smoking habits, and purchasing habits haven't changed.
Good point.
Scientific papers are usually written with a structure where the abstract and actual text are independent of each other, i.e. the paper doesn't presume that you've read the abstract and often ends up repeating some of its content in the introduction. I imitated that structure out of habit, but I'm not sure whether it's a good structure to use for blog posts.
It didn't bother me. Though this may just be because I'm already habituated to ignoring it after having read many journal articles.
Where are you from that the school system is sane enough to assume calculus for undergrads?
My first-year courses in engineering (in Canada) made basic use of calculus without assuming any real understanding of it. By second year, calculus was assumed, and we moved on to linear ODEs and similar. Third year, we moved to Laplace and Fourier transforms, and the final year finally started to get into applications and standards and "real" things.
I've always wondered how different other engineering school curricula are.
I see your point, but I'll argue that yes, crowdsourcing is the appropriate term.
Google may be the collective brain of the entire planet, but it will give you only those results you search for. The entire idea here is that you utilize things you can't possibly think of yourself - which includes "which terms should I put into the Google search."
Yes. The art of Googling can be pretty difficult, and a few brains are still smarter (though perhaps less broadly knowledgeable) than Google, at this point in time.