In the transparent-box Newcomb's problem, in order to get the $1M, do you have to (precommit to) one-box even if you see that there is nothing in box A?
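To make the question concrete, here's a minimal sketch assuming one common formulation: box B always holds $1K, and the predictor fills box A with $1M iff it predicts I would one-box upon seeing box A full. Whether the predictor also simulates what I'd do on seeing an empty box is exactly the part I'm unsure about, so treat the rule below as an assumption, not the canonical setup:

```haskell
-- Illustrative sketch only.  Assumed variant: box B always holds $1K,
-- and the predictor fills box A with $1M iff it predicts you would
-- one-box upon seeing box A full.  Variants where the predictor also
-- simulates the empty-box case are what the question turns on.

data Obs    = BoxAFull | BoxAEmpty deriving (Eq, Show)
data Action = OneBox   | TwoBox    deriving (Eq, Show)

-- A policy: what you do, given what you see in transparent box A.
type Policy = Obs -> Action

-- Under the assumed variant, box A gets filled iff the policy
-- one-boxes on seeing it full.
boxAFilled :: Policy -> Bool
boxAFilled p = p BoxAFull == OneBox

-- Total payoff in dollars for following a given policy.
payoff :: Policy -> Int
payoff p
  | boxAFilled p = case p BoxAFull  of OneBox -> 1000000; TwoBox -> 1001000
  | otherwise    = case p BoxAEmpty of OneBox -> 0;       TwoBox -> 1000

-- Enumerate the four deterministic policies and their payoffs.
main :: IO ()
main = mapM_ report
  [ ("one-box no matter what",            const OneBox)
  , ("two-box no matter what",            const TwoBox)
  , ("one-box if full, two-box if empty", \o -> if o == BoxAFull then OneBox else TwoBox)
  , ("two-box if full, one-box if empty", \o -> if o == BoxAFull then TwoBox else OneBox)
  ]
  where
    report (name, p) = putStrLn (name ++ ": $" ++ show (payoff p))
```

Under this particular assumption, only your response to a full box determines whether it gets filled, so what you'd do on seeing an empty box doesn't change the payoff; if the predictor instead conditions on the empty-box case too, the answer presumably changes, which is what I'm asking.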
Why can't we implement subreddits here? It seems like it would be super useful, both for this and for other problems, like the fact that philosophy, AGI, life extension/transhumanism, and rationality all get mixed into the same Discussion section.
I have absolutely no interest whatsoever in athletic ability, save for the strictly functional ability to sustain going to school on a bicycle every day (thus saving a crapload of money), and to carry myself in a balanced and acceptably graceful way when walking down streets and hallways (which is to say, fit enough that I don't trip over myself or get winded after three flights of stairs).
The abilities to run fast, climb high, swim deep, row fast, or fight hard are of absolutely no use to me whatsoever in the environment I live in. If I lived in, say, South Africa, where having a fit body in fight-or-flight condition is a matter of life and death, that would be a different matter entirely, but I don't, and it isn't.
And nootropics get me into the tense-afraid type of state very quickly: I inevitably crash and burn, and it takes me a few days to get back to normal. They also very much hamper my sense of empathy: I become a mini-Ridcully, a mental locomotive engine which will only go forward, cannot steer, and which completely disregards the feelings of those around him in his enthusiasm to perform and achieve. They just aren't a sustainable option for me.
Have you looked at rhodiola and L-theanine? They tend to counter some of the negative effects of more intense nootropics.
Rationality is about how you think, not how you got there. There have been many rational people throughout history who have read approximately none of that.
I am mostly talking about epistemic rationality, not instrumental rationality. With that in mind, I wouldn't consider anyone from a hundred years ago or earlier to be up to my epistemic standards, because they simply did not have access to the requisite information, i.e. cognitive science and Bayesian epistemology. There are people who figured it out in certain domains (like figuring out that the labels in your mind are not the actual things they represent), but those people are very exceptional, and I doubt that I will meet anyone capable of the pioneering, original work that they did.
What I want are people who know about cognitive biases, understand why they are very important, and have actively tried to reduce the effects of those biases on themselves. I want people who explicitly understand the map and territory distinction. I want people who are aware of the difference between truth-seeking and status arguments. I want people who don't step on philosophical landmines and don't get mindkilled. I would not expect someone to have all of these without having at least read some of LessWrong or the above material. They might have collected some of these beliefs and mental algorithms on their own, but it is highly unlikely that they came across all of them.
Is that too much to ask? Are my standards too high? I hope not.
This might be a more enjoyable test (warning, game and time sink): http://armorgames.com/play/6061/light-bot-20
This sounds like you think of them as mooks you want to show the light of enlightenment to. The sort of clever mathy people you want probably don't like to think of themselves as mooks who need to be shown the light of enlightenment. (This might also be sort of how I feel about the whole rationalism-as-a-thing thing that's going on around here.)
That said, actually being awesome for your target audience's values of awesome is always a good idea to make them more receptive to looking into whatever you are doing. If you can use your rationalism powers to achieve stuff mathy university people appreciate, like top test scores or academic publications while you're still an undergraduate, your soapbox might be a lot bigger all of a sudden.
Then again, it might be that rationalism powers don't actually help enough in achieving this, and you'll just give yourself a mental breakdown while going for them. The math-inclined folk, who would like publication writing superpowers, probably also see this as the expected result, so why should they buy into rationality without some evidence that it seems to be making people win more?
To be honest, unless they have exceptional mathematical ability or are already rationalists, I will consider them to be mooks. Of course, I won't make that apparent; it is rather hard to make friends that way. Acknowledging that you are smart is a very negative signal, so I try to be humble, which can be awkward in situations like when only two out of 13 people pass a math course you are in, and you got an A- and the other guy got a C-.
And by the way, rationality, not rationalism.
Good point. I know some nice Haskell tutorials and haven't looked around to see if there are comparably nice Coq tutorials, but I guess it's worth looking.
Tutorials/texts that I know of are Software Foundations, Andrej Bauer's tutorial, and this HoTT-Coq tutorial. It looks like installing the HoTT library is a huge pain in the arse though, so I think I'll stick with vanilla Coq until either I get one of my CS friends to install it for me, or they make a more user-friendly installer.
Edit: also this
Working on a new TDT writeup for MIRI.
Working on my classes for SPARC.
Writing a long series of math blog posts around the interface of logic, type theory, and category theory. I may not be able to summon the willpower to get through every topic I want to cover, but if I do then the light at the end of the tunnel is homotopy type theory, and I may also attempt to learn Haskell as a side effect.
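For a concrete taste of the logic/type theory interface I mean, here is the standard Curry-Howard reading, written in Haskell rather than Coq (the names below are my own, just for illustration): propositions are types, and proofs are programs.

```haskell
-- Curry-Howard in miniature: "A and B" is a pair type, "A implies B"
-- is a function type, and a proof of a proposition is a term of the
-- corresponding type.

data And a b = And a b
type Implies a b = a -> b

-- Commutativity of conjunction: (A and B) implies (B and A).
andComm :: Implies (And a b) (And b a)
andComm (And a b) = And b a

-- Modus ponens: from A and (A implies B), conclude B.
modusPonens :: And a (Implies a b) -> b
modusPonens (And a f) = f a

main :: IO ()
main = putStrLn "If this compiles, the 'proofs' above went through."
```

Getting the file to type-check is the whole proof; this is the same propositions-as-types idea that Coq, Agda, and homotopy type theory push much further.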
Why Haskell and not Coq or Agda? That's where all the HoTT stuff is being done anyway.
How do you upgrade people into rationalists? In particular, I want to upgrade some younger math-inclined people into rationalists (peers at university). My current strategy is:
incidentally name-drop my local rationalist meetup group (e.g. "I am going to a rationalists' meetup on Sunday")
link to LessWrong articles whenever relevant (rarely)
be awesome and claim that I am awesome because I am a rationalist (which neglects a bunch of other factors for why I am so awesome)
when asked, motivate rationality by pointing to a whole bunch of cognitive biases, and to how we don't naturally have principles of correct reasoning; we just do what intuitively seems right
This is quite passive (other than the name-dropping and article-linking) and mostly requires them to ask me about it first. I want something more proactive that is not straight-up linking to LessWrong, because the first thing they go to is The Simple Truth and they immediately get turned off by it (The Simple Truth shouldn't be the first post in the first sequence that you are recommended to read on LessWrong). This has happened a number of times.
I prefer your style (rather, I really dislike Eliezer's style). Possible data points: I read a lot of math (math blogs, math texts, math papers), and I have poor reading comprehension and reading speed. I don't have a particularly short or long attention span, and I don't really read much science or philosophy. I didn't get a whole lot of epiphanies from the sequences, though they did have a strong influence on how I think (i.e. my updates weren't felt as epiphanies).
I like the structure of your writing. I like to build my mental categories from the top down, and structured writing helps me put things in mental buckets. For quite a while after reading the sequences, the whole idea of rationality was a big muddle of concepts and I had a hard time thinking about it as a whole. I had to think it over and do all the categorization by myself, which was a lot of work, and I don't think I benefited enough from having to do that to justify the exercise.