ryjm · 40

I re-read Atlas Shrugged once or twice a year. One of my first posts on LW was this (and you even commented on it!):

https://www.lesswrong.com/posts/7s5gYi7EagfkzvLp8/in-defense-of-ayn-rand

Not necessarily proud of it, but it's interesting to re-read it after fully reconciling the book with my own internal principles. I can see how much I struggled with resonating so strongly with the idea of hero-worship while simultaneously feeling so fragile in my own judgments. It really is a wonderful book, and I no longer feel the need to defend anything about it - I just get a little sad when it gets brushed off (the Lord of the Rings comparison joke really gets me), since an honest reading will always reveal something fundamental, even in criticism.

ryjm · 20
Also discovered bone conduction headphones and I am impressed with the quality.

Do you have a recommendation? I'm constantly on the lookout for new headphone styles, since I have weird ear holes that nothing fits in.

ryjm · 170

Taking my place in history - one of my first tasks as an intern at MIRI was to write some Ruby scripts that dealt with some aspects of that donation.

Not only did that experience land me my first programming job, but I'm just realizing now that it was also the impetus that led me to grab more bitcoin (I had sold mine at the first peak in 2013) AND to look into Stellar. Probably the most lucrative internship ever.

(Shoutout to Malo/Alex if you guys are still lurking LW)

ryjm · 80

I'm feeling nostalgic.

Is there any interest in having a monthly thread where we re-post links to old posts/comments from LW? Possibly scoped to that month in previous years? I.e., each comment would look like

(2013) link
brief description / thoughts

or something.

It's pretty easy to go back and look through some of the older, more popular posts - but I think there were many open thread comments or frontpage posts not by Yvain / Eliezer that are starting to slip through the cracks of time. Would be nice to see what we all remember.

ryjm · 160

This is the kind of content I've missed from LW in the past couple of years. It reminded me of something on old LW a while back that is a nice object-level complement to this post. I saved it and look at it occasionally for inspiration (I don't really think it's a definitive list of 'things to do as a superhuman', or even a good list of things to do at all, but it's a nice reminder that ambitious people are interesting and fun):

  • Become awesome at mental math
  • Learn mnemonics. Practise by memorizing and rehearsing something, like the periodic table or the capitals of all nations or your multiplication tables up to 30x30.
  • Practise visualization, i.e. seeing things that aren't there. Try inventing massive palaces mentally and walking through them mentally when bored. This can be used for memorization (method of loci).
  • Research n-back and start doing it regularly.
  • Learn to do lucid dreaming
  • Learn symbolic shorthand. I recommend Gregg.
  • Look at the structure of conlangs like Esperanto, Lojban, and Ilaksh. I feel like this is mind-expanding, like I have a better sense of how language, communication, and thought work after being exposed to this.
  • Learn to stay absolutely still for extended periods of time; convince onlookers that you are dead.
  • Learn to teach yourself stuff.
  • Live out of your car for a while, or go homeless by choice
  • Can you learn to be pitch-perfect? Anyway, generally learn more about music.
  • Exercise. Consider 'cheating' with creatine or something. Creatine is also good for mental function for vegetarians. If you want to jump over cars, try plyometrics.
  • Eat healthily. This has become a habit for me. Forbid yourself from eating anything for which a more healthy alternative exists (e.g., no more white rice (wild rice is better), no more white bread, no more soda, etc.). Look into alternative diets; learn to fast.
  • Self-discipline in general. Apparently this is practisable. Eliminate comforting lies like that giving in just this once will make it easier to carry on working. Tell yourself that you never 'deserve' a long-term-destructive reward for doing what you must, that doing what you must is just business as usual. Realize that the part of your brain that wants you to fall to temptation can't think long-term - so use the disciplined part of your brain to keep a temporal distance between yourself and short-term-gain-long-term-loss things. In other words, set stuff up so you're not easy prey to hyperbolic discounting.
  • Learn not just to cope socially, but to be the life of the party. Maybe learn the PUA stuff.
  • That said, learn to not care what other people think when it's not for your long-term benefit. Much of social interaction is mental masturbation: it feels nice and conforming, so you do it. From HP and the MoR:
    • For now I'll just note that it's dangerous to worry about what other people think on instinct, because you actually care, not as a matter of cold-blooded calculation. Remember, I was beaten and bullied by older Slytherins for fifteen minutes, and afterward I stood up and graciously forgave them. Just like the good and virtuous Boy-Who-Lived ought to do. But my cold-blooded calculations, Draco, tell me that I have no use for the dumbest idiots in Slytherin, since I don't own a pet snake. So I have no reason to care what they think about how I conduct my duel with Hermione Granger.
  • Learn to pick locks. If you want to seem awesome, bring padlocks with you and practise this in public.
  • Learn how to walk without making a sound
  • Learn to control your voice. Learn to project like an actress. PUAs have also written on this.
  • Do you know what a wombat looks like, or where your pancreas is? Learn basic biology, chemistry, physics, programming, etc. There's so much low-hanging fruit.
  • Learn to count cards, like for blackjack. Because what-would-James-Bond-do, that's why! (Actually, in the books Bond is stupidly superstitious about, for example, roulette rolls.)
  • Learn to play lots of games (well?). There are lots of interesting things out there, including modern inventions like Y and Hive that you can play online.
  • Learn magic. There are lots of books about this.
  • Learn to write well, as someone else here said.
  • Get interesting quotes, pictures etc. and expose yourself to them with spaced repetition. After a while, will you start to see the patterns, to become more 'used to reality'?
  • Learn to type faster. Try alternate keyboard layouts, like Dvorak.
  • Try to make your senses funky. Wear a blindfold for a week straight, or wear goggles that turn everything a shade of red, or that turn everything upside-down, or an eye patch that takes away your depth-sense. Do this for six months, or however long it takes to get used to them. Then, of course, take them off. Then, when you're used to not having your goggles on, put them on again. You can also do this on a smaller scale, by flipping your screen orientation or putting your mouse on the other side or whatnot.
  • Become ambidextrous. Commit to tying your dominant hand to your back for a week.
  • Humans have magnetite deposits in the ethmoid bone of their noses. Other animals use this for sensing direction; can humans learn it?
  • Some blind people have learned to echolocate. [Seriously](http://en.wikipedia.org/wiki/Human_echolocation)
  • Learn how to tie various knots. This is useless but awesome.
  • Wear one of those belts that tells you which way north is. Keep it on until you are a homing pigeon.
  • Learn self-defence.
  • Learn wilderness survival. Plenty of books on the net about this.
  • Learn first aid. This is one of those things that's best not self-taught from a textbook.
  • Learn more computer stuff. Learn to program, then learn more programming languages and how to use e.g. the Linux coreutils. Use dwm. Learn to hack. Learn some weird programming languages. If you're actually using programming in your job, though, make sure you're scarily awesome at at least one language.
  • Learn basic physical feats like handstands, somersaults, etc.
  • Polyphasic sleep?
  • Use all the dead time you have lying around. Constantly do mental math in your head, or flex all your muscles all the time, or whatever.
  • All that limits you is your own weakness of will.

(Not sure who the author is; if anyone finds the original post, please link to it! I'll try to find it when I get the time.)

ryjm · 30

For anyone interested in vipassana meditation, I would recommend checking out Shinzen Young. He takes a much more technical approach to the practice. This PDF by him is pretty good.

ryjm · 60

Oh my god, if we can get this working with org-mode and HabitRPG it will be the ultimate trifecta. And I've already got the first two (here).

Seriously, this could be amazing. Org-mode and HabitRPG are great, but they don't really solve the problem of what to do next. But with this, you get the data collection power of org-mode combined with the motivational power of HabitRPG - then Familiar comes in, looks at your history (clock data, tags, agendas - all of the org-mode stuff will be a huge pool of information that it can interact with easily, because emacs) and does its thing.

It could tell HabitRPG to give you more or less experience for things that are correlated with some emotion you've tagged an org-mode item with, or for habits that are correlated with less clocked time on certain tasks. If you can tag it in org-mode, you can track it with Familiar, and Familiar will then control how HabitRPG calculates your experience. Eventually you won't have that nagging feeling in the back of your head that says "Wow, I'm really just defining my own rewards and difficulty levels - how is this going to actually help me if I can just cheat at any moment?" Maybe you can still cheat yourself, but Familiar will tell you exactly the extent of your bullshit. It basically solves the biggest problem of gamification! You'll have to actually fight for your rewards, since Familiar won't let you get away with getting tons of experience for tasks that aren't correlated with anything useful. Sure, it won't be perfectly automated, but it will be close enough.
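
To make that concrete, here's a minimal sketch of the org-mode side, assuming a hypothetical `habitrpg-api-score` wrapper around HabitRPG's task-scoring endpoint (that wrapper is my invention; the clock-summing parts are standard org-mode):

```elisp
;; Minimal sketch: sum clocked minutes per tag across the agenda files,
;; then nudge a HabitRPG habit up or down accordingly.
;; `habitrpg-api-score' is a hypothetical wrapper, not a real package.
(require 'org)
(require 'org-clock)

(defun my/clocked-minutes-for-tag (tag)
  "Sum clocked minutes for all entries tagged TAG across the agenda files."
  (let ((total 0))
    (dolist (file (org-agenda-files))
      (with-current-buffer (find-file-noselect file)
        (org-map-entries
         (lambda () (setq total (+ total (org-clock-sum-current-item))))
         tag)))
    total))

(defun my/sync-tag-to-habitrpg (tag habit-id)
  "Score HABIT-ID up only when TAG shows real clocked work."
  (if (> (my/clocked-minutes-for-tag tag) 0)
      (habitrpg-api-score habit-id "up")
    (habitrpg-api-score habit-id "down")))
```

The point being: experience comes from what the clock data actually shows, not from whatever reward you felt like assigning yourself.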

It could sort your agenda by what you might actually get done vs. the stuff you keep there because you feel bad about not doing it - and org-mode already has a priority system. It could tell you which habits (org-mode has these too) are useful and which you should get rid of.

It could work with magit to get detailed statistics about your commit history and programming patterns.

Or make it work with org-drill to analyze your spaced repetition activity! Imagine, you could have an org-drill file associated with a class you are taking and use it to compare test grades and homework scores and the clocking data from homework tasks. Maybe there is a correlation between certain failing flashcards and your recent test score. Maybe you are spending too much time on SRS review when it's not really helping. These are things that we usually suspect but won't act on, and I think seeing some hard numbers, even if they aren't completely right, will be incredibly liberating. You don't have to waste cognitive resources worrying about your studying habits or wondering if you are actually stupid, because familiar will tell you! Maybe it could even suggest flashcards at some point, based on commit history or wikipedia reading or google searches.
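
Pulling the raw numbers for that kind of analysis is straightforward, since org-drill already stores per-card statistics as entry properties (things like DRILL_FAILURE_COUNT and DRILL_AVERAGE_QUALITY). A sketch:

```elisp
;; Sketch: collect (heading . failure-count) pairs for every card in an
;; org-drill file, as raw material for correlating against test scores.
(require 'org)

(defun my/drill-failure-counts (file)
  "Return an alist of (HEADING . failure count) for drill cards in FILE."
  (with-current-buffer (find-file-noselect file)
    (org-map-entries
     (lambda ()
       (cons (org-get-heading t t)
             (string-to-number
              (or (org-entry-get (point) "DRILL_FAILURE_COUNT") "0"))))
     "drill")))  ; org-drill cards are tagged :drill: by default
```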

Maybe some of this is a little far-fetched, but god would it be fun to dig into.

ryjm · 30

> I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.

Why would a good AI policy be one which takes as its model a universe where having world-destroying weapons in the hands of incredibly unstable governments controlled by glorified tribal chieftains is not that bad of a situation? Almost but not quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh.

> AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don't care about the far future will be motivated to prevent it.

This is assuming that people understand what makes an AI so dangerous - calling an AI a global catastrophic risk isn't going to motivate anyone who thinks you can just unplug the thing (and it's even worse if it does motivate them, since then you have someone running around thinking the AI problem is trivial).

> The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time (barring disruptive events such as economic collapses, supervolcanos, climate change tail risk, etc). The most rational people are the people who are most likely to be aware of and to work to avert AI risk. Here I'm blurring "near mode instrumental rationality" and "far mode instrumental rationality," but I think there's a fair amount of overlap between the two things. e.g. China is pushing hard on nuclear energy and on renewable energies, even though they won't be needed for years.

I think you're just blurring "rationality" here. The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don't see how this is evidence for rationality as such (and if we are redefining it to include dictators and crony politicians, I don't know what to say), especially of the kind needed to properly handle AI - and claiming evidence for future good decisions related to AI risk on the basis of domain expertise in entirely different fields is quite a stretch. Believe it or not, most people are not mathematicians or computer scientists. Most powerful people are not mathematicians or computer scientists. And most mathematicians and computer scientists don't give two shits about AI risk - if they don't think it worthy of attention, why would someone who has no experience with these kinds of issues suddenly grab it out of the space of all possible ideas he could be thinking about? Obviously they aren't thinking about it now - why are you confident this won't be the case in the future? Thinking about AI requires a rather large conceptual leap - "rationality" is necessary but not sufficient, so even if all powerful people were "rational", it wouldn't follow that they can deal with these issues properly, or even single them out as something to meditate on, unless we have a genius orator I'm not aware of. It's hard enough explaining recursion to people who are actually interested in computers. And it's not like we can drop a UFAI on a country to get people to pay attention.

> Availability of information is increasing over time. At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it's more salient, and in the future it will be still more salient.

> In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.

It seems like you are claiming that AI safety does not require a substantial shift in perspective (I'm taking this as the reason why you are optimistic, since my cynicism tells me that expecting a drastic shift is a rather improbable event) - rather, that we can just keep chugging along because nice things can be "expected to increase over time", and this somehow will result in the kind of society we need. These statements always confuse me; one usually expects to be in a better position to solve a problem 5 years down the road, but trying to describe that advantage in terms of out-of-thin-air claims about incremental changes in human behavior seems like a waste of space unless there is some substance behind it. Such claims only seem useful once one has reached that 5-year checkpoint and can reflect on the current context in detail. For example, it's not clear to me that the increasing availability of information is always a net positive for AI risk, since potential dangers could become more salient as a result of unsafe AI research - and the more dangers uncovered, the more incentive for further unsafe research, depending on the magnitude of positive results and the kind of press received (but of course the researchers will make the right decision, since people are never overconfident...). So it comes off (to me) as a kind of sleight of hand, where it feels like a point for optimism - a kind of "Yay, Open Access Knowledge is Good!" applause light - but it could really go either way.

Also, I really don't know where you got that last idea - I can't imagine that most people would find AI safety more glamorous than, you know, actually building a robot. There's a reason why it's hard to get people to write unit tests, and why software projects get bloated and abandoned. Something like what Haskell is to software would be optimal. I don't think it's a great idea to rely on the conscientiousness of people in this case.

ryjm · 00

focus@will is pretty useful for me - I've never been into movie music, but the cinematic option was very inspiring. There is some science behind the project, too.

ryjm · 50

For the GTD stuff, I use emacs + org-mode + a .emacs based on this configuration + MobileOrg.

Since I try to work exclusively in emacs, I can quickly capture notes and "things that need to get done" in their proper context, all of which is aggregated under an Agenda window. The Agenda window manages a collection of ".org" files which store the specific details of everything. MobileOrg syncs all these .org files to my phone. Combined with the GTD philosophy of never having anything uncategorized bouncing around in my mind, this system works very well for me.
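
For what it's worth, the sync side is just a couple of variables, assuming the usual Dropbox arrangement (the paths here are hypothetical - adjust them to wherever your phone app looks):

```elisp
;; MobileOrg sync: push agenda files to a staging directory the phone app
;; reads, and pull captured items back into an inbox file.
(require 'org-mobile)
(setq org-mobile-directory "~/Dropbox/Apps/MobileOrg"    ; staging area
      org-mobile-inbox-for-pull "~/org/from-mobile.org") ; where pulls land
;; M-x org-mobile-push to stage files; M-x org-mobile-pull to fetch captures.
```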

Example workflow (a better and more complete example is in the configuration I linked above):

  1. At the end of class, the professor assigns a programming project due in a week. I pull out my phone and quickly capture a TODO item with a deadline in MobileOrg. MobileOrg syncs this to Google Calendar.
  2. I get home and pull up the agenda in emacs. The item referencing the programming project shows up in my "Tasks to refile" category (equivalent to "Inbox" in GTD terms), along with any other TODOs I captured while I was at school.
  3. I refile the project to an org file that contains all the information about my classes and define a NEXT item under it, which represents the next action I need to take on the project. When I start working on the project, I can attach any related files directly to the TODO item identifying the project.
  4. The NEXT item shows up on a list of NEXT items on the agenda. I can filter these by project (defined in the GTD way) or by the tag system.

It all seems very complicated, but all of this is literally a couple of keystrokes. And this barely scratches the surface (take a look at the aforementioned configuration to see what I mean).
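
For concreteness, here's a stripped-down sketch of the capture/refile plumbing behind that workflow (the paths and keywords are illustrative choices of mine; the linked configuration does all of this far more thoroughly):

```elisp
;; Capture into an inbox file, refile into project files, and treat NEXT
;; as a first-class TODO state so next actions can be filtered in the agenda.
(require 'org)
(require 'org-capture)

(setq org-directory "~/org/"
      org-agenda-files (list org-directory))

;; "C-c c t" captures a TODO (with a timestamp) into the refile inbox.
(setq org-capture-templates
      '(("t" "todo" entry (file "~/org/refile.org")
         "* TODO %?\n%U\n")))
(global-set-key (kbd "C-c c") 'org-capture)

;; Refile targets: any agenda file, up to three levels deep.
(setq org-refile-targets '((org-agenda-files :maxlevel . 3)))

;; TODO -> NEXT -> DONE, so the agenda can list NEXT actions per project.
(setq org-todo-keywords
      '((sequence "TODO(t)" "NEXT(n)" "|" "DONE(d)")))
```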

Pros:

  • Forces you to learn emacs.
  • Easily configurable and incredibly robust.
  • Optimized for functionality rather than prettiness (i.e., if you end up liking it, you'll know it wasn't because of the nice UI, which is usually the main selling point of any computer-based organizational system).

Cons:

  • Forces you to learn emacs.
  • Takes a huge amount of effort to set up. I would compare it to setting up an Arch Linux system.
  • Can get messy if you don't know what you're doing.
  • Getting the syncing functionality isn't easy.

A spaced repetition package is also available for org-mode, which really ties the whole thing together for me.
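
The package is org-drill (from org contrib / MELPA); entries tagged :drill: become flashcards. A minimal setup looks something like this (the scope setting is a matter of taste):

```elisp
(require 'org-drill)
(setq org-drill-scope 'agenda)  ; draw cards from all agenda files
;; M-x org-drill starts a review session.
```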

EDIT: You can also overlay LaTeX fragments directly in org-mode, which is really nice for note-taking. Whole .org files can be exported to LaTeX as well.
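
The relevant bindings, plus an optional tweak (the scale value is just my preference, not part of the linked setup):

```elisp
;; C-c C-x C-l toggles inline previews of LaTeX fragments.
;; C-c C-e opens the export dispatcher; `l l' exports to a .tex file.
(setq org-format-latex-options
      (plist-put org-format-latex-options :scale 1.5))  ; bigger previews
```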
