Awesome, thanks! I'll be there :)

I live about 2 hours away and was thinking I might like to come, but I'm not really familiar with how LW meetups work. Are they open to pretty much anyone (i.e. is it a problem that I'm just a lurker and don't really know anyone)? Is there anything in particular I have to do to sign up? How long do the meetups usually last?

As Eliezer says, on short time scales (days, weeks, months) we change our minds less often than we expect to. However, it's worth noting that, on larger time scales (years, decades), the opposite seems to be true. Also, our emotional state changes more frequently than we expect it to, even on short time scales. I can't seem to recall my exact source on this second point at the moment (I think it was some video we watched in my high school psychology class), though, anecdotally, I've observed it to be true in my own life. For example, when I'm feeling good, I may think thoughts like "I'm a generally happy person" or "my current lifestyle is working very well, and I should not change it", which are falsifiable claims/predictions based on the highly questionable assumption that my current emotional state will persist into both the near and distant future. Similarly, I may think the negations of such thoughts when I'm feeling bad. As a result, I have to remind myself to be extra skeptical/critical of falsifiable claims/predictions that agree too strongly with my current emotional state.

One thing I think would be cool would be some sort of audio-generating device/software/thing that allows arbitrary levels of specificity. So, on one extreme, you could completely specify a fully deterministic stream of sound, and, on the other extreme, you could specify nothing and just say "make some sound". Or you could go somewhere in between and specify something along the lines of "play music for X minutes, in a manner evoking emotion Y, using melody Z as the main theme of the piece".
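One way to picture that spectrum is as a generator where every aspect of the output is an optional constraint, and anything left unspecified gets filled in by the generator itself. Here's a minimal Python sketch of that idea; all names and parameters are invented for illustration and don't correspond to any real audio library:

```python
import random

def generate_spec(duration_minutes=None, emotion=None, melody=None, seed=None):
    """Return a fully specified 'performance spec', filling any
    unconstrained fields with generated choices.

    Fully constrained call  -> deterministic output (one extreme).
    Fully unconstrained call -> 'just make some sound' (other extreme).
    """
    rng = random.Random(seed)
    return {
        "duration_minutes": duration_minutes if duration_minutes is not None
                            else rng.randint(1, 10),
        "emotion": emotion if emotion is not None
                   else rng.choice(["calm", "tense", "joyful", "melancholy"]),
        "melody": melody if melody is not None else "generated",
    }

# Fully specified: the caller pins down every field.
print(generate_spec(duration_minutes=3, emotion="melancholy", melody="theme Z"))

# Nothing specified: the generator decides everything (seeded for repeatability).
print(generate_spec(seed=42))
```

The in-between cases fall out naturally: constrain the melody and duration but leave the emotion to the generator, and so on.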

Thanks! As for "confusing questions", some things I've had long-term interests in are: ethics, consciousness, and trying to wrap my mind around some of the less intuitive concepts in math/physics. Apart from that, it varies quite a bit. Recently, I've become rather interested in personality modeling. The Big-5 model has great empirically tested descriptive power, but is rather lacking in explanatory power (i.e. it can't, afaik, answer questions like "what's going on in person X's mind that causes them to behave in manner Y?" or "how could person X be made more ethical/rational/happy/whatever without fundamentally changing their personality?"). At the same time, the Myers-Briggs model (and, more importantly, the underlying Jungian cognitive function theory) has the potential to answer such questions more effectively, but also has rather limited/sketchy empirical support. So I've been thinking mainly about how M-B might be tweaked so that the theory matches reality better.

I did this and I might try doing a few more pieces like it. You have to click somewhere on the screen to start/stop it.

I'm a college student studying music composition and computer science. You can hear some of my compositions on my SoundCloud page (it's only a small subset of my music, but I made sure to put a few that I consider my best at the top of the page). In the computer science realm, I'm into game development, so I'm participating in this thing called One Game A Month whose name should be fairly self-explanatory (my February submission is the one that's most worth checking out - the other 2 are kind of lame...).

For pretty much as long as I can remember, I've enjoyed pondering difficult/philosophical/confusing questions and not running away from them, which, along with having parents well-versed in math and science, led me to gradually hone my rationality skills over a long period of time without really having a particular moment of "Aha, now I'm a rationalist!". I suppose the closest thing to such a moment would be about a year ago when I discovered HPMoR (and, shortly thereafter, this site). I've found LW to be pretty much the only place where I am consistently less confused after reading articles about difficult/philosophical/confusing questions than I am before.

Could you expand on this? It's not clear to me how this is the case.
