Also discovered bone conduction headphones and I am impressed with the quality.
Do you have a recommendation? I'm constantly on the lookout for new headphone styles - I have weird ear holes that nothing fits in.
Taking my place in history - one of my first tasks as an intern at MIRI was to write some Ruby scripts that dealt with some aspects of that donation.
Not only did that experience land me my first programming job, but I'm just now realizing it was also the impetus that led me to grab more bitcoin (I had sold mine at the first peak in 2013) AND to look into Stellar. Probably the most lucrative internship ever.
(Shoutout to Malo/Alex if you guys are still lurking LW)
I'm feeling nostalgic.
Is there any interest in having a monthly thread where we re-post links to old posts/comments from LW? Possibly scoped to that month in previous years - i.e., each comment would look like
(2013) link
brief description / thoughts
or something.
It's pretty easy to go back and look through some of the older, more popular posts - but I think there are many open thread comments and frontpage posts not by Yvain / Eliezer that are starting to slip through the cracks of time. It would be nice to see what we all remember.
This is the kind of content I've missed from LW in the past couple of years. It reminded me of something from old LW a while back that is a nice object-level complement to this post. I saved it and look at it occasionally for inspiration (I don't really think it's a definitive list of 'things to do as a superhuman', or even a good list of things to do at all - just a nice reminder that ambitious people are interesting and fun):
(Not sure who the author is - if anyone finds the original post, please link to it! I'll try to find it when I get the time)
For anyone interested in vipassana meditation, I would recommend checking out Shinzen Young. He takes a much more technical approach to the practice. This PDF by him is pretty good.
Oh my god, if we can get this working with org-mode and habitrpg it will be the ultimate trifecta. And I've already got the first two (here).
Seriously, this could be amazing. Org-mode and habitrpg are great, but they don't really solve the problem of what to do next. With this, you get the data collection power of org-mode plus the motivational power of habitrpg - then Familiar comes in, looks at your history (clock data, tags, agendas - all of the org-mode stuff is a huge pool of information it can interact with easily, because emacs) and does its thing.
It could tell habitrpg to give you more or less experience for things that are correlated with some emotion you've tagged an org-mode item with, or for habits that are correlated with less clocked time on certain tasks. If you can tag it in org-mode, you can track it with Familiar, and Familiar then controls how habitrpg calculates your experience.

Eventually you won't have that nagging feeling in the back of your head that says "Wow, I'm really just defining my own rewards and difficulty levels, how is this going to actually help me if I can just cheat at any moment?" Maybe you can still cheat yourself, but Familiar will tell you exactly the extent of your bullshit. It basically solves the biggest problem of gamification! You'll have to actually fight for your rewards, since Familiar won't let you get away with getting tons of experience for tasks that aren't correlated with anything useful. Sure, it won't be perfectly automated, but it will be close enough.
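To make that concrete, here's a minimal elisp sketch of the kind of raw data involved - this is my own guess at a starting point, not anything Familiar actually does, and my/clocked-minutes-by-tag is just a name I made up:

    (require 'org)
    (require 'org-clock)

    ;; Sum clocked minutes per tag across one .org file - the sort of
    ;; table a tool like Familiar could correlate against mood tags or
    ;; habitrpg rewards. Note that a headline's clock sum includes its
    ;; children, so nested entries also count toward their parents' tags.
    (defun my/clocked-minutes-by-tag (file)
      "Return an alist of (TAG . MINUTES) over all headlines in FILE."
      (with-current-buffer (find-file-noselect file)
        (let (totals)
          (org-map-entries
           (lambda ()
             (let ((minutes (org-clock-sum-current-item)))
               (dolist (tag (org-get-tags))
                 (let ((cell (assoc tag totals)))
                   (if cell
                       (setcdr cell (+ (cdr cell) minutes))
                     (push (cons tag minutes) totals)))))))
          totals)))

Everything past that point - correlating those totals with moods and feeding the result back into habitrpg's experience calculation - is the part Familiar would own.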
It could sort your agenda by what you might actually get done vs. the shit you keep there because you feel bad about not doing it - and org-mode already has a priority system. It could tell you which habits (org-mode has these too) are useful and which you should get rid of.
It could work with magit to get detailed statistics about your commit history and programming patterns.
Or make it work with org-drill to analyze your spaced repetition activity! Imagine: you could have an org-drill file associated with a class you're taking and use it to compare test grades, homework scores, and the clocking data from homework tasks. Maybe there's a correlation between certain failing flashcards and your recent test score. Maybe you're spending too much time on SRS review when it's not really helping. These are things we usually suspect but won't act on, and I think seeing some hard numbers, even if they aren't completely right, would be incredibly liberating. You don't have to waste cognitive resources worrying about your study habits or wondering if you're actually stupid, because Familiar will tell you! Maybe it could even suggest flashcards at some point, based on commit history or wikipedia reading or google searches.
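As a taste of how accessible the raw data already is, here's a hedged sketch that pulls per-card failure counts out of a drill file (the DRILL_FAILURE_COUNT property name is from org-drill's documentation as I remember it - verify against your own files):

    (require 'org)

    ;; org-drill tags cards with :drill: and records stats as properties
    ;; on each card; the failure count is one of several it keeps.
    (defun my/drill-failure-report (file)
      "Return a list of (HEADLINE . FAILURE-COUNT) for drill cards in FILE."
      (with-current-buffer (find-file-noselect file)
        (org-map-entries
         (lambda ()
           (cons (org-get-heading t t)
                 (string-to-number
                  (or (org-entry-get (point) "DRILL_FAILURE_COUNT") "0"))))
         "+drill")))

Sort that by count and line it up against your test dates, and you already have a crude version of the analysis above.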
Maybe some of this is a little far-fetched, but god would it be fun to dig into.
I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.
Why would a good AI policy take as its model a universe where world-destroying weapons in the hands of incredibly unstable governments, controlled by glorified tribal chieftains, count as "not that bad of a situation"? Almost-but-not-quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh.
AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don't care about the far future will be motivated to prevent it.
This assumes that people understand what makes an AI so dangerous - calling an AI a global catastrophic risk isn't going to motivate anyone who thinks you can just unplug the thing (and it's even worse if it does motivate them, since then you have someone running around thinking the AI problem is trivial).
The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time (barring disruptive events such as economic collapses, supervolcanos, climate change tail risk, etc). The most rational people are the people who are most likely to be aware of and to work to avert AI risk. Here I'm blurring "near mode instrumental rationality" and "far mode instrumental rationality," but I think there's a fair amount of overlap between the two things. e.g. China is pushing hard on nuclear energy and on renewable energies, even though they won't be needed for years.
I think you're just blurring "rationality" here. The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don't see how it is evidence for rationality as such (and if we're redefining it to include dictators and crony politicians, I don't know what to say), and especially not of the kind needed to properly handle AI - claiming evidence for future good decisions about AI risk on the basis of domain expertise in entirely different fields is quite a stretch.

Believe it or not, most people are not mathematicians or computer scientists. Most powerful people are not mathematicians or computer scientists. And most mathematicians and computer scientists don't give two shits about AI risk - if they don't think it worthy of attention, why would someone with no experience with these kinds of issues suddenly grab it out of the space of all possible ideas he could be thinking about? Obviously they aren't thinking about it now - why are you confident this won't be the case in the future? Thinking about AI requires a rather large conceptual leap - "rationality" is necessary but not sufficient, so even if all powerful people were "rational" it doesn't follow that they could deal with these issues properly, or even single them out as something to meditate on, unless we have a genius orator I'm not aware of. It's hard enough explaining recursion to people who are actually interested in computers. And it's not like we can drop a UFAI on a country to get people to pay attention.
Availability of information is increasing over time. At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it's more salient, and in the future it will be still more salient.
In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.
It seems like you're claiming that AI safety does not require a substantial shift in perspective (I'm taking this as the reason you're optimistic, since my cynicism tells me that expecting a drastic shift is rather improbable) - rather, we can just keep chugging along because nice things can be "expected to increase over time", and this will somehow result in the kind of society we need. These statements always confuse me. One usually expects to be in a better position to solve a problem five years down the road, but trying to describe that advantage with out-of-thin-air claims about incremental changes in human behavior seems like a waste of space unless there is some substance behind it. Such claims only seem useful once one has reached that five-year checkpoint and can reflect on the current context in detail. For example, it's not clear to me that the increasing availability of information is always a net positive for AI risk: potential dangers might be more salient precisely as a result of unsafe AI research, and the dangers uncovered could even act as an incentive for more unsafe research, depending on the magnitude of the positive results and the kind of press received (but of course the researchers will make the right decision, since people are never overconfident...). So it comes off (to me) as a kind of sleight of hand, where it feels like a point for optimism - a "Yay, Open Access Knowledge is Good!" applause light - but it could really go either way.
Also, I really don't know where you got that last idea - I can't imagine that most people would find AI safety more glamorous than, you know, actually building a robot. There's a reason it's hard to get people to write unit tests, and a reason software projects get bloated and abandoned. Something that does for safety what Haskell does for software - building the checking into the structure rather than leaving it as an act of discipline - would be optimal. I don't think it's a great idea to rely on the conscientiousness of people in this case.
focus@will is pretty useful for me - I've never been into movie music, but the cinematic option is very inspiring. There is some science behind the project too.
For the GTD stuff, I use emacs + org-mode + a .emacs based on this configuration + MobileOrg.
Since I try to work exclusively in emacs, I can quickly capture notes and "things that need to get done" in their proper context, all of which gets aggregated into an Agenda view. The Agenda is built from a collection of .org files which store the specific details of everything, and MobileOrg syncs those .org files to my phone. Combined with the GTD philosophy of never having anything uncategorized bouncing around in my mind, this system works very well for me.
Example workflow (a better and more complete example is in the configuration I linked above):
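Roughly, and with everything here as a placeholder (paths, the capture template, the key choices - my real setup follows the linked configuration):

    (require 'org)
    ;; Placeholder paths - point these at wherever your .org files live.
    (setq org-directory "~/org"
          org-agenda-files '("~/org")
          org-default-notes-file "~/org/inbox.org")

    ;; C-c c t captures a TODO into the inbox from anywhere in emacs.
    (setq org-capture-templates
          '(("t" "Todo" entry (file+headline "~/org/inbox.org" "Tasks")
             "* TODO %?\n  %U\n  %a")))
    (global-set-key (kbd "C-c c") 'org-capture)
    (global-set-key (kbd "C-c a") 'org-agenda)

From there, C-c C-w refiles the captured entry into the right .org file, C-c a builds the Agenda across every agenda file, C-c C-x C-i / C-c C-x C-o clock in and out of the task at point, and C-c C-t cycles it to DONE.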
It all seems very complicated, but all of this is literally a couple of keystrokes. And this barely scratches the surface (take a look at the aforementioned configuration to see what I mean).
Pros:
Cons:
A spaced repetition package (org-drill) is also available for org-mode, which really ties the whole thing together for me.
EDIT: You can also overlay latex fragments directly in org-mode, which is really nice for note-taking. Whole .org files can be exported to latex as well.
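If anyone wants to try the spaced repetition piece, the setup is tiny - a hedged sketch, assuming org-drill is still shipped in org's contrib directory (the load path below is a placeholder):

    ;; Placeholder path - wherever org's contrib/lisp directory lives.
    (add-to-list 'load-path "~/src/org-mode/contrib/lisp")
    (require 'org-drill)
    ;; Tag a heading :drill:, then M-x org-drill starts a review session.

For the latex overlays, C-c C-x C-l toggles the inline previews at point, and C-c C-e opens the export dispatcher for the latex/PDF export.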
I re-read Atlas Shrugged once or twice a year. One of my first posts on LW was this (and you even commented on it!):
https://www.lesswrong.com/posts/7s5gYi7EagfkzvLp8/in-defense-of-ayn-rand
Not necessarily proud of it, but it's interesting to re-read after fully reconciling the book with my own internal principles. I can see how much I struggled with simultaneously resonating with the idea of hero-worship while feeling so fragile in my own judgments. It really is a wonderful book, and I no longer feel the need to defend anything about it - I just get a little sad when it gets brushed off (the Lord of the Rings comparison joke really gets me), since an honest reading will always reveal something fundamental, even in criticism.