Excuse me if I'm misunderstanding your ideas here, but isn't this, almost bullet for bullet, exactly what Khan Academy is doing?
Biophysics grad student dropout/work at a startup now; I personally was sort of sick of mathematical modelling, so I decided not to go the machine learning route. But as I was based in Boston, there were machine learning jobs everywhere. I enjoyed working through SICP, and its functional style has been pretty useful in quickly understanding new programming concepts (Javascript callbacks and promises, for example).
I got my programming start building a website for a tournament that I ran - it taught me my way around a large framework (Django), and since Django is what people call a 'highly opinionated and bloated' framework, it taught me one version of how experts think a large-scale project 'should' be organized.
I did this sort of tracking for several months. My generalizable experiences are:
- Everyone has their own situation. Following the guide of some other person exactly is bound to fail. Instead, you should start simple, and let your system evolve as you decide that you need to track different things.
- That being said, it seems like some things tend to turn up repeatedly as "good things to keep track of", like "today I was happy for X", or "how much sleep did I get last night?"
- Since no two people really have the same routine, instead of using specialized software, the flexibility of a raw text file with a template probably works best.
Downvoted for not keeping MOR stuff in the MOR threads.
I agree. I would rather not see the discussion section turned into an HPMOR fan forum. There is reddit.com/r/hpmor for that.
Your comments seem like they answer a slightly different question: "What would it feel like for a person who has free will to not have free will?". The right question is, "What would it feel like for a person who doesn't have free will to not have free will?". (brushing all concerns about what 'free will' is under the carpet for now)
"But as the Arnolds' profile grows, of course, not everyone is a fan of this science of giving, especially since it comes at a cost to the many individuals and local organizations who need direct help now and could benefit from their billions. The answer to the most asked question may not be known for years: Will their plan work?"
I chuckled at this. All of a sudden, people are asking "will it work?", but they never asked the same question of the charities they regularly donate to.
[LINK] Evidence-based giving by Laura and John Arnold Foundation
http://online.wsj.com/article/SB10001424127887323372504578466992305986654.html
Apparently a hedge fundie made 4 billion and is giving most of it away via what the WSJ describes as a "moneyball" approach to giving.
I stopped reading the article after I got to "Templeton Foundation". Don't think this is quite what you're thinking it is.
I'm really curious to know how many people connect to that domain having confused "m" and "rn".
...I'm reading every "pomo-" word in this comment section as "porno-" now. Thanks a lot.
I couldn't agree more with that - to a first approximation.
Now of course, the first problem is with people who think a person is either rational in general or not, right in general or not. Being right or rational is conflated with intelligence, for people can't seem to imagine that a cognitive engine which has output so many right ideas in the past could be anything but a cognitive engine which outputs right ideas in general.
For instance and in practice, I'm pretty sure I strongly disagree with some of your opinions. Yet I agree with this bit over there, and other bits as well. Isn't it baffling how some people can be so clever, so right about a huge bundle of things (read: how they have opinions so very much like yours), and then suddenly you find they believe X, where X seems incredibly stupid and wrong for reasons obvious to you.
I posit that people want to find others like them (in a continuum with finding a community of people like them, some place where they can belong), and it stings to realize that even people who hold many similar opinions still aren't carbon copies of you, that their cognitive engine doesn't work exactly the same way as yours, and that you'll have to either change yourself, or change others (both of which can be hard, unpleasant work), if you want there to be less friction between you (unless you agree to disagree, of course).
Problem number two is simply that thinking yourself right about a certain problem, having thought about it for a long time before coming to your own conclusion, doesn't preclude new, original information or intelligent arguments from swaying your opinion. I'm often pretty darn certain about my beliefs (those I care about, anyway - usually the instrumental beliefs and methods I need to attain my goals), but I know better than to refuse to change my opinion or belief on a topic I care about if I'm conclusively shown to be wrong (though that should go without saying in a rationalist community).
"I posit that people want to find others like them (in a continuum with finding a community of people like them, some place where they can belong), and it stings to realize that even people who hold many similar opinions still aren't carbon copies of you, that their cognitive engine doesn't work exactly the same way as yours, and that you'll have to either change yourself, or change others (both of which can be hard, unpleasant work), if you want there to be less friction between you (unless you agree to disagree, of course)."
Well said.