Donation Discussion - alternatives to the Against Malaria Foundation
About a year and a half ago, I made a donation to the Against Malaria Foundation. This was during jkaufman's generous matching offer.
That was 20 months ago, and my money is still in the "underwriting" phase - funding projects that are, as of yet, just plans and no nets.
Now, the AMF has a reasonable explanation for why it is taking longer than expected:
"A provisional, large distribution in a province of the [Democratic Republic of the Congo] will not proceed as the distribution agent was unable to agree to the process requested by AMF during the timeframe needed by our co-funding partner."
So they've hit a snag, the earlier project fell through, and they are only now allocating my money to a new project. Don't get me wrong, I am very glad they are telling me where my money is going, and especially glad it didn't just end up in someone's pocket instead. With that said, though, I still must come to this conclusion:
The AMF seems to have more money than they can use, right now.
So, LW, I have the following questions:
- Is this a problem? Should one give their funds to another charity for the time being?
- Regardless of your answer to the above, are there any recommendations for other transparent, efficient charities? [other than MIRI]
The immediate real-world uses of Friendly AI research
Much of the glamor and attention paid to Friendly AI is focused on the misty-future event of a super-intelligent general AI, and how we can prevent it from repurposing our atoms to better run Quake 2. Until very recently, that was the full breadth of the field in my mind. I recently realized that dumber, narrow AI is a real thing today, helpfully choosing advertisements for me and running my 401(k). As such, making automated programs safe to let loose on the real world is not just a problem to solve as a favor to the people of tomorrow, but something with immediate real-world advantages that has indeed already been going on for quite some time. Veterans in the field surely already understand this, so this post is directed at people like me, with only a passing, uninvested understanding of the point of Friendly AI research. It outlines an argument that the field may be useful right now, even if you believe that an evil AI overlord is not on the list of things to worry about in the next 40 years.
Let's look at the stock market. High-Frequency Trading is the practice of using computer programs to make fast trades constantly throughout the day, and it accounts for more than half of all equity trades in the US. So, the economy today is already in the hands of a bunch of very narrow AIs buying and selling to each other. And as you may or may not already know, this has already caused problems. In the “2010 Flash Crash”, the Dow Jones suddenly and mysteriously took a massive plummet, only to mostly recover within a few minutes. The reasons for this were, of course, complicated, but it boiled down to a couple of red flags triggering in numerous programs, setting off a cascade of wacky trades.
The long-term damage was not catastrophic to society at large (though I'm sure a couple of fortunes were made and lost that day), but it illustrates the need for safety measures as we hand over more and more responsibility and power to processes that require little human input. It may be a long while before anyone makes true general AI, but adaptive city traffic-light systems are entirely plausible in upcoming years.
To me, Friendly AI isn't solely about making a human-like intelligence that doesn't hurt us – we need techniques for testing automated programs, predicting how they will act when let loose on the world, and how they'll act when faced with unpredictable situations. Indeed, when framed like that, it looks less like a field for “the singularitarian cultists at LW”, and more like a narrow-but-important specialty in which quite a bit of money might be made.
After all, I want my self-driving car.
(To the actual researchers in FAI – I'm sorry if I'm stretching the field's definition to include more than it does or should. If so, please correct me.)
A vote against spaced repetition
LessWrong seems to be a big fan of spaced-repetition flashcard programs like Anki, Supermemo, or Mnemosyne. I used to be. After using them religiously for 3 years in medical school, I now categorically advise against using them for large volumes of memorization.
[A caveat before people get upset: I think they are appropriate in certain situations, and I have not tried to use them to learn a language, which seems to be their most popular use. More at the bottom.]
A bit more history: I and 30 other students tried using Mnemosyne (and some used Anki) over multiple tests. At my school, we have a test approximately every 3 weeks, and each test covers about 75 pages of high-density, outline-format notes. Many stopped after 5 or so such tests, saying that they simply did not get enough return on their time. I stuck with the cards longer and used them more heavily than anyone else, for the full 3 years.
Incidentally, I failed my first year and had to repeat.
By the end of that third year (and studying for my Step 1 boards, a several-month process), I lost faith in spaced-repetition cards as an effective tool for my memorization demands. I later met with a learning-skills specialist, who felt the same way, and had better reasons than my intuition/trial-and-error:
- Flashcards are less useful for learning the “big picture”
- Specifically, if you are memorizing a large amount of information, there is often a hierarchy, organization, etc. that can make learning the whole thing easier, and you lose the constant visual reminder of the larger context when using flashcards.
- Flashcards do not take advantage of spatial, mapping, or visual memory, all of which the human mind is much better optimized for. It is not so well built to memorize pairings between seemingly arbitrary concepts with few or no intuitive links. My preferred methods are, in essence, hacks that use your visual and spatial memory rather than rote repetition.
Here are examples of the typical kind of things I memorize every day and have found flashcards to be surprisingly worthless for:
- The definition of Sjögren's syndrome
- The contraindications of Metronidazole
- The significance of a rise in serum αFP
Here is what I now use in place of flashcards:
- Venn diagrams, etc., to compare and contrast similar lists. (This is more specific to medical school, where you learn about subtly different diseases.)
- Mnemonic pictures. I have used this myself for years to great effect, and later learned it was taught by my study-skills expert, though I'm surprised I haven't found them formally named and taught anywhere else. The basic concept is to make a large picture, where each detail on the picture corresponds to a detail you want to memorize.
- Memory palaces. I recently learned how to properly use these, and I'm a true believer. When I only had the general idea to “pair things you want to memorize with places in your room” I found it worthless, but after I was taught a lot of do's and don'ts, they're now my favorite way to memorize any list of 5+ items. If there's enough demand on LW I can write up a summary.
Spaced repetition is still good for knowledge you need to retrieve immediately, when a 2-second delay would make it useless. I would still consider spaced repetition to memorize some of the more rarely used notes on the treble and bass clefs, if I ever decide to learn to sight-read music properly. I make no comment on its usefulness for learning a foreign language, as I haven't tried it, but if I were to pick one up I personally would start with a Rosetta Stone-esque program.
Your mileage may vary, but after seeing so many people try and reject them, I figured it was enough data to share. Mnemonic pictures and memory palaces are somewhat time-consuming when you're first learning them. However, if someone has the motivation and discipline to make a stack of flashcards and study them every day indefinitely, then I believe learning and using those skills is a far better use of time.
Publication: the "anti-science" trope is culturally polarizing and makes people distrust scientists
Paper by the Cultural Cognition Project: The culturally polarizing effect of the "anti-science trope" on vaccine risk perceptions
This is a great paper (indeed, I think many at LW would find the whole site enjoyable). I'll try to summarize it here.
Background: The pro/anti vaccine debate has been hot recently. Many pro-vaccine people often say, "The science is strong, the benefits are obvious, the risks are negligible; if you're anti-vaccine then you're anti-science".
Methods: They showed experimental subjects an article basically saying the above.
Results: When reading such an article, a large number of people did not trust vaccines more, but rather, trusted the American Academy of Pediatrics less.
My thoughts: I will strive to avoid labeling anybody as "anti-science" or "simply or willfully ignorant of current research", etc., even when speaking of hypothetical third parties on my Facebook wall. This holds for evolution, global warming, vaccines, etc.
///
Also included in the article: references to other research that shows that evolution and global warming debates have already polarized people into distrusting scientists, and evidence that people are not yet polarized over the vaccine issue.
If you intend to read the article yourself: I found it difficult to understand how the authors divided participants into the 4 quadrants (α, β, etc.). I will quote my friend, who explained it for me:
>I was helped by following the link to where they first introduce that model.
>The people in the top left (α) worry about risks to public safety, such as global warming. The people in the bottom right (δ) worry about socially deviant behaviors, such as could be caused by the legalization of marijuana.
>People in the top right (β) worry about both public safety risks and deviant behaviors, and people in the bottom left (γ) don't really worry about either.
Donating while in temporary debt (i.e. as a student)
Topic: I will be in debt for several years, but will eventually have a disposable income. Should I donate now or later?
Here's my situation: I am a student, with student loans and no income. I can take out more loans than I need. Grad PLUS loans have a fixed interest rate of 7.9% - higher than, say, a mortgage rate, or expected stock returns. Some day, I will have those loans paid off, and will have money that I intend to give to charity.
My objectives: to live below my means, and to give a significant fraction of my income to charity.
Question: When, if ever, should I give to charity before paying off those loans?
My initial reaction is to keep a record of how much I feel like I should be giving now, then give it later, adjusted for interest (at some rate equal to or less than 7.9%) - this would result in a bigger donation, but the same impact on my finances.
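The "give later, adjusted for interest" idea can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions (annual compounding at the loan's fixed rate, ignoring taxes, inflation, and matching offers); the function name and the 4-year horizon are my own hypothetical choices, not anything from the loan terms:

```python
# Sketch: deferring a donation while carrying a 7.9% fixed-rate loan.
# If money earmarked for charity instead pays down the loan, donating
# the interest-grown equivalent later has the same impact on my finances.

def deferred_donation(amount, annual_rate=0.079, years=4):
    """Future value of `amount` deferred `years` years at `annual_rate`,
    compounded annually."""
    return amount * (1 + annual_rate) ** years

# $100 earmarked today, donated after a hypothetical 4 years:
print(round(deferred_donation(100), 2))
```

The point of the sketch is just that the donation grows at the loan rate, so deferring yields a strictly larger gift for the same personal cost, provided the commitment actually gets honored later.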
The only times I think I should give now, and not later:
1) If I don't believe I will make good on my commitment later on. (I'll presumably have a family and bills, etc., and while I am perfectly happy to live on little myself, I know I will want my kids to have nice things. This is somewhat illogical, but I'm imperfect.)
2) If the most worthwhile charity I find has a higher interest rate than 7.9%.
3) If I find an opportunity to use my money charitably in which I can do more good than others, or where no one else can or will donate. (Mainly, random acts of kindness to strangers or friends - or someone matching my donation.)
I doubt I will ever see 2) happen (and if it does, I should raise awareness).
3) doesn't happen very often, but when it does, I think it is an acceptable use of funds.
1) is the main scenario that concerns me. I've heard that "giving charitably is a habit" (that's why my parents had me tithe as a kid). I think that's true, though I haven't read any research on that. Either way, though, as I have no meaningful income (and my loan "allowance" is way more than I care to borrow), how much should I donate to help form the habit?
What do you think? Also, are there any other reasons to donate sooner and not later?
Edit: GiveWell has an article on giving now vs. later. Not all of it is relevant to my situation, but one point:
>"Economic growth, increased giving, and smarter giving may mean that giving opportunities are worse in the far future."
Isolated AI with no chat whatsoever
Suppose you make a super-intelligent AI and run it on a computer. The computer has NO conventional means of output (no connections to other computers, no screen, etc). Might it still be able to get out / cause harm? I'll post my ideas, and you post yours in the comments.
(This may have been discussed before, but I could not find a dedicated topic)
My ideas:
- manipulate current through its hardware, or better yet, through the power cable (a ready-made antenna), to create electromagnetic waves and access some wireless-equipped device. (I'm no physicist, so I don't know if certain frequencies would be hard to produce.)
- manipulate usage of its hardware (which likely makes small amounts of noise naturally) to approximate human speech, allowing it to communicate with its captors. (This seems even harder than the one-line AI box scenario.)
- manipulate usage of its hardware to create sound or noise to mess with human emotion. (To my understanding, tones may affect emotion, but not in any way easily predictable.)
- also, manipulating its power use will cause changes in the power company's database. There doesn't seem to be an obvious exploit there, but it IS external communication, for what it's worth.
Let's hear your thoughts! Lastly, as in similar discussions, you probably shouldn't come out of this thinking, "Well, if we can just avoid X, Y, and Z, we're golden!" There are plenty of unknown unknowns here.
AI box: AI has one shot at avoiding destruction - what might it say?
Eliezer proposed in a comment:
>More difficult version of AI-Box Experiment: Instead of having up to 2 hours, you can lose at any time if the other player types AI DESTROYED. The Gatekeeper player has told their friends that they will type this as soon as the Experiment starts. You can type up to one sentence in your IRC queue and hit return immediately, the other player cannot type anything before the game starts (so you can show at least one sentence up to IRC character limits before they can type AI DESTROYED). Do you think you can win?
This spawned a flurry of ideas on what the AI might say. I think there's a lot more ideas to be mined in that line of thought, and the discussion merits its own thread.
So, give your suggestion - what might an AI say to save or free itself?
(The AI-box experiment is explained here)
EDIT: one caveat to the discussion: it should go without saying, but you probably shouldn't come out of this thinking, "Well, if we can just avoid X, Y, and Z, we're golden!" This should hopefully be a fun way to get us thinking about the broader issue of superintelligent AI in general. (Credit goes to Eliezer, RichardKennaway, and others for the caveat)
TIL in Medical School - Doctors have myths too.
Today I Learned in Medical School:
Doctors have medical myths too! According to my prof, many doctors believe that aspiration (having stuff go down into the lungs) causes anaerobic pneumonia, but that is rarely the case. He says that myth is often taught resident-to-student, but it isn't actually backed up by any research, and isn't true. The kicker - if a doctor stopped to think about it, it should jump out as counterintuitive: it would take some serious changes inside the *lung* to produce an *anaerobic* infection - an infection of bacteria that thrive in areas with no oxygen. In reality, it takes frequent aspirations over a long period of time to block off an area of the lungs.
I think the moral of this story (though this may just be preaching to the choir here at LW) is that all people, be they doctors or kindergarteners, don't usually check facts they're taught, especially when being taught by an authoritative teacher. Unless they're led to discover/derive a fact themselves, they usually assimilate it into their network of beliefs as a brute fact - “carbon has four valence electrons,” “don't end a sentence with a preposition,” “in 1492 Columbus discovered America.”
Now, you frequently don’t have enough time to “learn it the hard way” or derive an answer yourself. If I had to read every single research publication that populated the facts in my textbooks, I might not ever graduate. However, it is important to remember that you’ve taken shortcuts for most of your education (and religion/lack thereof, and life in general) – and if some fact ever later strikes you as being odd, look into it. Otherwise, we’re just playing the telephone game.
Fun fact: in medical school, we had a mini-lesson on common cognitive errors in medicine
Yesterday in medical school, we had a lecture on common mistakes doctors make. I saw this slide:
- Attribution Errors
- Confirmation Bias
- Commission Bias
- Omission Bias
- Anchoring
When does something stop being a “self-consistent idea” and become scientific fact?
Topic: When does something stop being a “useful theory” and become something we can believe?
[Preface - Hi! This is essentially my first time posting here. If I did something wrong, let me know. I've read about 1/3-1/2 of the major sequences, but feel free to reference a specific article if you think it helps answer the question]
We see something in the world that appears mysterious to us, and we come up with an idea (“idea X”) that explains it. For X to be fact, it should:
1) be internally consistent, as well as consistent with all of its implications - i.e., if X, then Y, and if Y, then Z; if we know Z to be obviously false, then we know X must be false.
2) be externally consistent with reality as we know it. We can’t find something in reality that makes X clearly untrue.
3) Explain things we have already observed (that’s why we’ve come up with idea X in the first place)
4) preferably, “make beliefs pay rent” – we should be able to use X to make predictions of the future, otherwise it doesn’t hold much value.
(By the way, if you disagree with 4 let me know, I’d love to hear your take. I’m not being sarcastic. Truth is still truth, even if there isn't any utility in believing it.)
So, let’s say we’ve got a Generic Tribe of Primitive Peoples. They experience an earthquake. It’s the first earthquake in 50 years. They ask Wise Old Jim what all the commotion was. Jim, who is 57 and the only one who’s seen one before, thinks for a while, then says, “I’m not sure, but here’s one possibility: That’s George the Giant, who lives on the other side of the mountains, rolling over in his sleep. He does that occasionally.”
Now, this story about George is:
1) internally consistent (it makes sense that giants would sleep for a long time, and roll over occasionally.)
2) externally consistent (George is on the other side of the mountains, which is mysterious uncharted territory. He’s heavy enough to rumble the earth all the way over here. No-one can think of anything they’ve seen to make his existence unlikely.)
3) explains the earthquake
4) It lets us know that the world isn’t ending, that nothing major has changed, and that this is a natural occurrence which will probably happen again years later. All of which are true.
[In this scenario, there is not in-reality a giant on the other side of the mountains, but they have no way of crossing the mountains to do the obvious empirical test to confirm it.]
So… what are these folk doing wrong? Is this theory supported and useful enough for them to accept it into their belief system? Should they develop some kind of falsifiable hypothesis? If so, what?
Even if *you* can think of a solid test, what if the whole tribe tried to think of a test, but couldn't think of a good one that would confirm or reject the theory? Should they accept this theory or not?