An idea that I think would be very helpful to people - and relatively simple to grasp - is the idea of tribalism, and how much it really motivates us, even to this day. Not just that politics is the mindkiller, but why. I think if more people were able to take a step back every once in a while and think, "Hey, I don't even care about or like this idea...why am I defending it? Because it's an idea that I think a group I consider myself a part of holds, and by attacking one idea of my tribe, it seems like you're attacking every idea of my tribe? Does this make sense?" then the world would be a much more friendly place, at least.
Recommended Reading for Evolution?
I'll make this short and sweet.
I've been reading Dawkins's The Selfish Gene, and it's been really helpful in filling some of the gaps in my understanding of how evolution actually works.
The last biology class I took was in high school, and I don't think the mechanics of evolution are covered particularly well in American high schools.
I'm looking for recommendations - has anyone read any books that accurately describe the process of evolution for someone without specialized knowledge of biology? I've already checked LessWrong's recommended textbooks, and while it recommends some books on evolutionary psychology and on animal behavior from an evolutionary perspective, it doesn't appear to have anything that describes evolution itself in sufficient detail to model it.
I'm toying with the idea of trying to program an evolution simulator, and so I need a fairly detailed, accessible account.
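To make the idea concrete, here is a minimal sketch of what such a simulator might look like. Everything in it is an assumption for illustration: bitstring genomes, a toy "one-max" fitness function (count the 1s), fitness-proportional selection, and point mutation, with no recombination. A real simulator would need a more biologically motivated genome and fitness landscape.

```python
import random

def evolve(pop_size=100, genome_len=20, generations=50, mutation_rate=0.01, seed=0):
    """Toy evolution: bitstring genomes, fitness = number of 1s (one-max)."""
    rng = random.Random(seed)
    # Initial population: random bitstrings.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: fitter genomes are proportionally more likely to reproduce.
        weights = [sum(g) + 1 for g in pop]  # +1 so zero-fitness genomes can still be picked
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Reproduction with point mutations (each bit flips with small probability).
        pop = [[(1 - bit) if rng.random() < mutation_rate else bit for bit in g]
               for g in parents]
    return pop

final = evolve()
avg_fitness = sum(sum(g) for g in final) / len(final)
```

Under these assumptions the average fitness climbs well above the random-start expectation of 10, which is really the whole phenomenon in miniature: heritable variation plus differential reproduction.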
Thanks for the help!
A Challenge: Maps We Take For Granted
Imagine that you were instantly transported into (roughly) the 13th century. I'm not great at history, but I'm picturing sometime around the Crusades. You're sitting there, reading this post on your computer, and BAM! Some guy in chain mail is asking you if thou art the spawn of a demon.
Given this situation, I present to you a challenge:
You are stranded in the past. You have no modern technology except your everyday clothes. The only thing you do have is your knowledge from the future.
What do you do?
I'll make this a little more structured for the sake of clarity.
1) You appear in Great Britain (or the appropriate analogue for your native culture).
2) Assume the language barrier is surmountable - in other words, it may not be easy, but you can communicate effectively (by learning the language, or simply adapting to an older version of your native tongue).
3) Further assume that you manage to gain the ear of a ruling lord (how is not important, just say you're a wizard or something) and that he provides you with enough money, labor, and expertise (carpenters, smiths, etc.) to build something *so long as you can describe it in enough detail*.
4) You are only allowed to pull from general, scientifically literate knowledge - high school/bachelor's level only.
5) You can't use your knowledge of future events to your advantage, as it requires too expert a grasp of history. Only your knowledge of the way the world actually works is available.
The reason for 4) has to do with the point of the question. I'm trying to figure out the kind of maps that we have today that are considered "general knowledge" - the kinds of things that are so obvious to us we tend to not realize that people in the past didn't know them.
I'll go first.
The germ theory of disease didn't achieve widespread acceptance until the 19th century. In other words, I'm the only person in the past who is quite confident about how diseases are spread. This means that I can offer practical advice about sanitation when dealing with injuries and plagues. I can make sure that people wash their hands before cutting other people up, and after dealing with corpses. I can make sure that cutting instruments are sanitized (they did have alcohol) before use. And so on. This should reduce the number of deaths from disease in the kingdom, and prove my worth to the king.
I'm trying to build a list of things like this - maps of the way the world really is that we take for granted.
Have fun!
I'm relatively new here, so I have trouble seeing the same kinds of problems you do.
However, I can say that LessWrong does help me remember to apply the principles of rationality I've been trying to learn.
I'd also like to add that - much like the first draft of a novel - the first attempt rarely addresses all of the possible faults. LessWrong is one of the first (if not the first) community blogs devoted to "refining the art of human rationality." Of course we're going to get some things wrong.
What I really admire about this site, though, is that contrarian viewpoints end up being some of the most highly upvoted - people admire and discourse with dissenters here. So if you truly believe that LessWrong isn't the best use of your time, then I wish you the best with whatever efforts you pursue. But I think if you wrote a bit more on this subject and found a way to add it to the sequences, everyone would only thank you.
This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe—that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just like string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all.
This may be retreating to the motte, so to speak, but I don't think anyone seriously thinks that a superintelligence would be literally impossible to understand. The worry is that there will be such a huge gulf between how superintelligences reason versus how we reason that it would take prohibitively long to understand them.
I think a laptop is a good example. There probably isn't any single human on earth who knows how to build a modern laptop from scratch. There are computer scientists who know how the operating system is put together - how it is programmed, how memory is written to and read from the various buses; there are other computer scientists and electrical engineers who designed the chips themselves, who arrayed circuits to dissipate heat and optimize signal latency. Further still, there are materials scientists and physicists who designed the transistors and chip fabrication processes, and so on.
So, as an individual human, I don't know what it's like to know everything about a laptop all at once in my head, at a glance. I can zoom in on an individual piece and learn about it, but I don't know all the nuances for each piece--just a sort of executive summary. The fundamental objects with which I can reason have a sort of characteristic size in mindspace--I can imagine 5, maybe 6 balls moving around with distinct trajectories (even then, I tend to group them into smaller subgroups). But I can't individually imagine a hundred (I could sit down and trace out the paths of a hundred balls individually, of course, but not all at once).
This is the sense in which a superintelligence could be "dangerously" unpredictable. If the fundamental structures it uses for reasoning greatly exceed a human's characteristic size of mindspace, it would be difficult to tease out its chain of logic. And this only gets worse the more intelligent it gets.
Now, I'll grant you that the LessWrong community likes to sweep under the rug the great competition of timescales and "size" scales going on here. It might be prohibitively difficult, for fundamental reasons, to move from a working-mind-RAM of size 5 to size 10. It may be that artificial intelligence research progresses so slowly that we never even see an intelligence explosion - just a gently sloped intelligence rise over the next few millennia. But I do think it's maybe not a mistake, but certainly naive, to just proclaim, "Of course we'll be able to understand them; we are generalized reasoners!"
Edit: I should add that this is already a problem for, ironically, computer-assisted theorem proving. If a computer produces a 10,000,000 page "proof" of a mathematical theorem (i.e., something far longer than any human could check by hand), you're putting a huge amount of trust in the correctness of the theorem-proving-software itself.
Isn't using a laptop as a metaphor exactly an example of reasoning by analogy?
I think one of the points trying to be made was that because we have this uncertainty about how a superintelligence would work, we can't accurately predict anything without more data.
So maybe the next step in AI should be to create an "Aquarium," a self-contained network with no actuators and no way to access the internet, but enough processing power to support a superintelligence. We then observe what that superintelligence does in the aquarium before deciding how to resolve further uncertainties.
Being able to "feel" electric/magnetic fields with your hands would be great. Not dissimilar to wifi sensing, but enough to be able to intuit what a circuit is doing just by observing/feeling it.
I also don't think anyone's mentioned having a true internal clock. Some people can already wake up at a specific time of day just by wanting to - that'd be useful, as would being able to time things precisely.
Lastly, while being able to detect neurotransmitter levels in your own brain would be great, being able to detect them in the brains of others would be even better. Kind of a toned-down empathic ability - you could tell who was stressed, who was happy, and so on by the amount of cortisol or dopamine in their brain.
I'm not limiting myself to "high-risk activities that pay well"; I'm limiting myself to "legally feasible, high-risk, helpful services that also pay really well" ;)
The "helpful" is the goal, the rest are instrumental. I think most stuff leading to morally good outcomes is legal. Even illegal stuff which might be good if only it were legal turns out bad simply due to the practical realities of illegal operations.
Out of curiosity, can you name any such activities? The first thing I thought of was donating your organs (whichever ones were healthy enough to donate). Especially if you could arrange to have them all taken at once when you die, and then put the money into a college fund for your kids or whatever.
To be honest, if I'd known one of my parents' kidneys had gone into paying for my chemistry class, I probably would have attended more.
Write a book for my child, basically trying to put all my wisdom and experience into it. My father passed away a year ago, and it still bothers me that we did not discuss anything serious in the last 10 or so years. I need his brain, his experience; it is far too hard to deal with life without his advice, and yet all I have is photos. We really owe it to our children to write down everything we could teach them.
I would write a book anyway, even if I had no child, the best service I can give to the world is helping others figure out certain things quicker than I did.
Apparently, retiring professors traditionally give a lecture entitled, "The Last Lecture," during which they talk about what wisdom they want to leave behind. This particular book is the lecture Randy Pausch gave after being diagnosed with terminal cancer.
Maybe you can get some insight that way, but of course there are important differences. If I really had only 6 months to live, I'd stop working and dieting right now and go buy some beer. If I still had 70 years to live, I'd want to go back to school and learn another profession.
All utilitarian calculations, to my knowledge, have to start with an examination of one's goals. If your primary goal is to enjoy life (nothing wrong with that), then that approach is fine. If your goal is to help the world, then I'm arguing there are things you can do in your six months that others can't or won't because the behaviors are too dangerous.
Understanding the distinction between the map and the territory. And understanding that there are different levels of maps.
If you go to CFAR's webpage and (I think) look at one of Michael Smith's interviews, he says that's the one thing he wants people to take away from CFAR.