When I dream about being underwater, I can breathe in the dream, but I also am under the impression that I'm holding my breath somehow, even though I'm breathing. Like, I'll "hold my breath" only, I've just made the mental note to do it and not actually done it. But it won't be clear to me in the dream whether or not I'm holding my breath, even though I'm aware that I'm still breathing. It's weird and contradictory, but dreams are capable of being like that. It's like how in a dream, you can see someone and know who they're supposed to be, even though they may look and act nothing like that person they supposedly are. Or how you can be in both the first and third person perspective at the same time.
Heh, I've recently had a few weird half-lucid dreams, where on some level I seem to know that I'm dreaming, but don't follow this to its logical conclusions and don't gain much intentionality from it... In one of them, I ran into a friend I hadn't seen in a long time and later found he'd left something of his with me, and I wanted to return it to him. So I thought I'd look him up on Facebook and message him there; but, I reasoned, this is a dream, so what if that wasn't who I thought it was, but just someone else who looked exactly like him? So I felt I'd rather avoid going that route lest I message him and then feel foolish if it did turn out to be someone else, somehow accounting for this aspect of dreams but not noticing that this being a dream meant there was no real social risk to me and no pressing need to return his property in the first place. (Also kind of amusing that in retrospect he actually didn't look that much like the person he was supposed to be, yet in the dream I was able to know who he was while wondering "what if that was someone else who just looked like him?".)
Last night I had a dream which for some time rendered reality in aerial view as a sprite grid resembling old Gameboy RPGs, including a little pixel character who I knew was me.
The largest number is about 45,000,000,000, although mathematicians suspect that there may be even larger numbers. (45,000,000,001?)
I'm Aaron Swartz. I used to work in software (including as a cofounder of Reddit, whose software powers this site) and now I work in politics. I'm interested in maximizing positive impact, so I follow GiveWell carefully. I've always enjoyed the rationality improvement stuff here, but I tend to find the lukeprog-style self-improvement stuff much more valuable. I've been following Eliezer's writing since before even the OvercomingBias days, I believe, but have recently started following LW much more carefully after a couple of friends mentioned it to me in close succession.
I found myself wanting to post but don't have any karma, so I thought I'd start by introducing myself.
I've been thinking on-and-off about starting a LessWrong spinoff around the self-improvement stuff (current name proposal: LessWeak). Is anyone else interested in that sort of thing? It'd be a bit like the Akrasia Tactics Review, but applied to more topics.
Yay, it is you!
(I've followed your blog and your various other deeds on-and-off since 2002-2003ish and have always been a fan; good to have you here.)
- Shut Up and Multiply (SUM)
- Society for Methodical Rationality Training (SMRT) (ok, not really)
- Rationality Praxis Institute
Shut Up and Multiply (SUM)
Unfortunately that's not even a very good phrase to begin with, let alone as a name for an organization. People hearing it for the first time without context mostly seem to assume that it refers to reproduction, presumably by association with the phrase "be fruitful and multiply", or at least have that come to mind and are confused about what it has to do with rationality.
I think even a perfect implementation of Bayes would not in and of itself be an AI. By itself, the math doesn't have anything to work on, or any direction to do so. Agency is hard to build, I think.
As always, of course, I could be wrong.
Would a "perfect implementation of Bayes", in the sense you meant here, be a Solomonoff inductor (or similar, perhaps modified to work better with anthropic problems), or something perfect at following Bayesian probability theory but with no prior specified (or a less universal one)? If the former, you are in fact most of the way to an agent, at least some types of agents, e.g. AIXI.
As an aside: the use of "Org" (i.e. Rationality Org) seems really unusual and immediately makes me think of Scientology (Sea Org); am I unusual in having this reaction?
I thought the same and wondered if it might have been intentional and meant ironically (since IIRC that is not meant to be the actual eventual name of the organization anyway). Either way, not the best association.
Last year I formatted the TDT paper in LaTeX to teach myself LaTeX. (It's done, aside from a diagram that was missing from the original and possibly a citation or two that were underspecified.) Would this be useful to you, if I reformatted it for the new template?
Did you literally mean send an email to volunteers+subscribe@intelligence.org or does that mean something else?
It's a Google Group; sending any email to that address will indeed subscribe you to the list.
I am a known magus, so even an Imperius curse is not out of the question.
Or you've been neglecting to treat your Spontaneous Duplication.
If he was having completely full-blown auditory, visual, and tactile hallucinations (note that this is fairly unusual, for example schizophrenia apparently usually only manifests hallucinations in one modality), then what exactly could he do about it or even how would he test it?
Yes, me[2010-05] did not think of that :) I agree now.