Or, to put it another way: to require knowledge beyond "merely" storing a number — knowledge that includes "knowing you know" and "knowing you know you know" and so on — is to make the same mistake as those who postulated a homunculus inside our heads, doing the looking whenever we look at things.
On the other hand, given that humans (especially on LW) do analyze things on several meta levels, it seems possible to program an AI to do the same, and in fact many discussions of AI assume this (e.g. discussing whether the AI will suspect it's trapped in some simulation). It's an interesting question how intelligent an AI can get without having the need (or the ability) to go meta.
Given that a parliament of humans (where they vote on values) is not accepted as a (final) solution to the interpersonal value / well-being comparison problem, why would a parliament be acceptable for intrapersonal comparisons?
Why is an opinion on Gaza more likely to become a part of someone's (from Europe or America) identity than e.g. an opinion on Darfur?
For me it's mostly the network effect. I care more, because people around me care more. I also care more because I have more information, but that again is because people around me care more. If people around me stopped talking about Gaza, it would be just as easy to forget as Darfur.
What keeps this topic alive is the memetic chain: Gaza is linked to Israel, which is linked to Jews, which is linked to Nazis, which is linked to WW2 and its aftermath, which is linked to our contemporary politics. Also, Jews are linked to the Old Testament, which is linked to Christianity; in the USA, Israel is linked to the Religious Right; and religion is again linked to politics. All of this together gives Gaza a high "Page Rank".
Darfur could get some "Page Rank" through the former colonies of European countries, but that link is much weaker and more dated.
The mindkilling emotions are not caused by the human suffering itself, but by pattern-matching it to the political situation around us. This triggers the feeling of "it could happen to me, too" and switches the brain into battle mode.
It seems like people sort of turn into utility monsters: if the people around you have a strong opinion on a certain topic, you'd better have a strong opinion too, or else yours won't carry as much "force".
I'm bothered by the apparent assumption that morality is something that can be "solved".
What about "decided on"?
With regard to the singularity, and given that we haven't solved 'morality' yet, one might just value "human well-being" or "human flourishing" without referring to a long-term self-concept. I.e. you might just care about a future 'you', even if that person is actually a different person. As a side effect, you might also care equally about everyone else in the future too.
They get it correct when it's in an appropriate social context, not simply because it's happening in real life. If people never failed at it in real life, confirmation bias wouldn't be a real thing.
Right, but I want to use a real-life situation or example that reduces to the Wason selection task (and that people fail at), and use that as the demonstration, so that people can see themselves fail in a real-life situation rather than in a logical puzzle. People already realize they might not be very good at generalized logic/math; I'm trying to demonstrate that the general logic applies to real life as well.
The Wason selection task is a good go-to example of confirmation bias.
Well, the thing is that people actually get this right in real life (e.g. with the rule "to drink you must be over 18"). I need something that occurs in real life and that people fail at.
I'm planning on doing a presentation on cognitive biases and/or behavioral economics (Kahneman et al.) in front of a group of university students (20-30 people). I want to start with a short experiment or demonstration (or two) that will show the students that they are, in fact, subject to some bias or failure in decision making. I'm looking for suggestions for an experiment I can perform within 30 minutes (it can be longer if it's an interesting and engaging task, e.g. a game); the important thing is that whatever is being demonstrated has to be relevant to most people's everyday lives. Any ideas?
I also want to mention that I can get assistants for the experiment if needed.
Edit: Has anyone at CFAR or at rationality minicamps done something similar? Who can I contact to inquire about this?
I don't think this deserves its own top level discussion post and I suspect most of the downvotes are for this reason. Maybe use the open thread next time?
Shouldn't you be applying this logic to your own motivations for being a rationalist as well? "Oh, so you've found this blog on the internet and now you know the real truth? Now you can think better than other people?" You can see how it can look from the outside. What would the implication be for yourself?