Does anyone know if something urgent has been going on at MIRI, other than the Effective Altruism Summit? I'm a job candidate -- I have no idea about my status as one. Days ago, I was promised a chat today, but nothing was arranged regarding time or medium, and now it's the end of the day. I sent my application weeks ago and have been in contact with three of the employees who seem to work on the management side of things. This is a bit frustrating. Ironically, I applied as Office Manager, and if hired I would hope to be taking care of exactly these things -- putting events on a calendar, helping create a protocol for 'rejecting', 'accepting', or 'deferring' applications, etc. Have other people had similarly disorganized correspondence with MIRI? Or has it mostly been organized, suggesting I should take this experience as a sure sign of rejection?
Apparently, in the days leading up to the Effective Altruism Summit, there was a conference on artificial intelligence keeping the research associates out of town. My source is a friend of mine currently interning at MIRI. So, anyway, they might have been even busier than you thought. I hope this has cleared up by now.
Did you hear the one about Luke Muehlhauser deconverting from evangelical Christianity?
Of course you did. He never stops bringing it up for some reason.
The whole subculture that is the new 'rationality movement' has some nodes and subcultures of its own which are not included in this map of the Bay Area memespace. I'm sitting here at home with my friend Kytael, and we're brainstorming the following:
- What nodes are part of the rationalist movement that aren't typical of the Bay Area memespace.
- What nodes aren't part of the rationalist movement that are still part of the Bay Area memespace.
- What nodes we as a community might want to add to the rationalist memespace.
- What nodes might enter the rationalist memespace that some parts of the community might consider undesirable.
Nodes Unique to the Rationalist Community
- Neoreaction
- Men's Rights Activists/Pick-Up Artists
- Secular Solstices, Spiritual Naturalism
- Self-Reflection
- Hansonian Contrarianism
- Generalization of Science and Economics to Everyday Life
- Nerd/Geek Culture
Nodes From the Bay Area Separate From the Rationalist Community
- Whole Earth community
- New Age Culture
- Back-to-the-land movement
- Kink Culture
Controversial Nodes Within the Rationalist Community
- Neoreaction
- Men's Rights Activism, Pick-Up Artists
- Social Justice
Emerging Subcultures and Memes in the Rationalist Community
- Post-rationality/Post-rationalism
- Partnered Dancing
- (Whatever Is Trending On) Slate Star Codex
- Applied Rationality=???
- Psychtropic/Nootropic Use
- Bitcoin/Cryptocurrency Enthusiasm
New Memes and Groups The Rationalist Community May Want to Explore More
- Open Borders
- ...
This list isn't exhaustive, and it could be controversial, so please question or criticize it below. I will update this list by editing this comment in response to replies. This was more of a brainstorming exercise than anything, but one I thought other Less Wrong users might find interesting. If a great discussion results, I, or someone else, could turn this into a fuller post in its own right.
Is there an update on this issue? Representatives from nearly all the relevant organizations have stepped in, but what's been reported has done little to resolve my confusion, and I remain as divided on it as Mr. Hallquist originally was. Dr. MacAskill, Mr. O'Haigeartaigh, and Ms. Salamon have all explained why they believe the organizations they're attached to are the most deserving of funding. The problem is that this has done little to assuage my concern about which organization is in the most need of funds, and which would have the greatest impact from a donation made now, relative to each of the others.
Thinking about it as I write this comment, it strikes me as an unfortunate state of affairs when organizations that genuinely want to cooperate toward the same ends are put in the awkward position of making competing(?) appeals to the same base of philanthropists. This might have been mentioned elsewhere in the comments, but: donations to which organization do any of you believe would yield the biggest return on investment in terms of attracting more donors, and talent, toward existential risk reduction as a whole? Which organization would most grow the base of effective altruists, and like-minded individuals, who would support this cause?
Don't apologize! This is great info!
If anything, I could use more information from the CEA, the FHI, and the GPP. Within effective altruism there's something of a standard of expecting transparency from the purportedly effective organizations being supported. In terms of financial support, this would mean openly publishing budgets. Based upon Mr. O'Haigeartaigh's report above, though, the FHI itself might be too strapped for available time, among all its other core activities, to provide this sort of insight.
I recently started my career as an effective altruist earning to give, making my first big splash with a $1,000 USD unrestricted donation to GiveWell last month.
Man! Last month I posted that I had learned some HTML/CSS/JS and made a really basic website. This month, I learned that I made an A in my CS101 class, am currently making an A in my CS102 class, and picked up a part-time internship doing web/mobile (PhoneGap) development for a startup in my town. I've also started designing a website I want to build, and have set up a dev VM with Ruby on Rails installed and configured.
I've got all my financial stuff together to start going back to school full time in the spring, and I'll graduate with my BS Computer Science in Spring 2016.
I've used Pomodoro time management to balance my two partners, full-time job, school, internship, and powerlifting.
I also realized that I really should go to a psychiatrist about potentially having bipolar II, as this is a pretty classic hypomanic phase immediately following a depressive phase.
Uh, I've trawled through Wikipedia for the causes and symptoms of mental illnesses, and, according to my doctors (general practitioner and psychiatrist), I've been good at identifying what I'm experiencing before I go to see them about it. The default case is that patients just go to the doctor, report their symptoms, answer questions about their recent lifestyle, and let the doctors take care of diagnosis and/or treatment. I believe I have this clarity about my own mental processes because my doctors tell me how impressed they are when I come to them already seeming to know what I'm experiencing. I don't know why this is, but my lazy hypothesis chalks it up to me being smart (people I know tell me this more often than I would expect), and to having become more self-reflective after attending a CFAR workshop.
Of course, both my doctors and I could be prone to confirmation bias, which would be a scary result. Anyway, I've had a similar experience of observing my own behavior, realizing it's abnormal, and being proactive about seeking medical attention. Still, for everyone, diagnosing yourself by trawling Wikipedia or WebMD seems a classic exercise prone to confirmation bias (e.g., experiencing something like medical student's disease). This post is a signal that I've qualified my concerns through past experience: I encourage you to seek out a psychiatrist, as I don't expect that to result in a false negative diagnosis, but also to still be careful as you think about this stuff.
I sort of side with Mitchel on this.
A mentor of mine once told me that replication is useful, but not the most useful thing you could be doing, because it's often better to run a follow-up experiment that rests on the premises established by the initial one. If the first experiment was wrong, the second will come out wrong too. Science shouldn't go even slower than it already does -- just update and move on; don't obsess.
It's kind of like how some of the landmark studies on priming failed to replicate, yet there are so many follow-up studies that priming explains really well that it seems a bit silly to throw out the notion of priming just because of that.
Keep in mind that while you are unlikely to hit statistical significance where there is no real effect, it is not at all unlikely for a real effect to miss significance the next time you run the study. Significance tests are tuned to produce false negatives more often than false positives.
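To make that concrete, here's a quick simulation sketch. The effect size, sample size, and trial count below are my own illustrative assumptions, not figures from any study discussed here; the point is only that a real but modest effect can easily miss the p < 0.05 cutoff on a given attempt.

```python
import random
import statistics

def one_study(effect=0.4, n=50, rng=random):
    """Simulate one two-group study of a real effect (Cohen's d = effect,
    n subjects per group); return True iff it reaches significance,
    approximated here as |z| > 1.96 on the difference of means."""
    control = [rng.gauss(0.0, 1.0) for _ in range(n)]
    treated = [rng.gauss(effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / n + statistics.variance(treated) / n) ** 0.5
    return abs(diff / se) > 1.96

random.seed(0)
trials = 2000
hits = sum(one_study() for _ in range(trials))
print(f"Fraction of studies reaching significance: {hits / trials:.2f}")
```

With these assumed numbers the simulated power comes out around one half, so a "failed" replication of a perfectly real effect is roughly a coin flip -- a false negative, exactly as described above.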
Emotionally, though... when you get a positive result in a breast cancer screening even when you're not at risk, you don't just shrug and say "probably a false positive," even though it probably is. Instead, you irrationally do more screenings and possibly get a needless operation. Similarly, when an experiment fails to replicate, people don't shrug and say "probably a false negative," even though that is, in fact, quite likely. Instead, they start questioning the reputation of the experimenter. Understandably, this whole process is nerve-wracking for the original experimenter. Which, I think, is what Mitchel was -- admittedly clumsily -- groping towards with the talk of "impugning scientific integrity".
Scientists, as a community of humans, should expect their research to return false positives sometimes, because that is what is going to happen, and they should publish those results. They should also expect experiments to demonstrate that some of their hypotheses are just plain wrong. It seems to me replication is only not very useful if the replications are likely prone to all the same problems that currently make original experiments in social psychology not all that reliable. I don't have experience or practical knowledge of the field, though, so I wouldn't know.
Insofar as it's appropriate to post about a well-defined problem without having its complete solution, I consider this post to be of sufficient quality to deserve a place in Main.
I'm only 22, and I don't have a lot of life experience, so I don't know how pleasing the rewards of such hardships would be, nor do I have a model of how much pain would go into them. However, reading through the scenarios seemed awful, so I rated my willingness to go through with them very low relative to the median response.
I'd be more interested in the same poll restricted to people over the age of at least forty, asking along the lines of whether the rewards of hardship were so great that they'd be willing to go through the pain again.