Open Thread, August 2010-- part 2
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Comments (369)
Reasonable Doubt: Innocence Project Co-Founder Peter Neufeld on Being Wrong
Excellent link. A particularly noteworthy excerpt:
This is the same phenomenon that is responsible for most scientific scandals: people cheat when they think they have the right answer.
It illustrates why proper methods really ought to be sacrosanct even when you're sure.
If I commit quantum suicide 10 times and live, does my estimate of MWI being true change? It seems like it should, but on the other hand it doesn't for an external observer with exactly the same data...
It makes no difference. You're either throwing away Everett branches or having a chance of throwing away everything. This experiment doesn't tell you which. You could, however, conclude that you're a damn fool. ;)
The anthropic principle gets in the way. If you play classical (i.e. non-quantum) Russian Roulette 10 times and live, you might conclude that there is some force protecting you from death. If you play classical Russian Roulette 10 times and die, you're not in a position to conclude anything much.
Good point, I missed that. So MWI seems to be even subjectively unconfirmable...
Yep. Until/unless our understanding of physics improves, we can't get any evidence for or against MWI. Our only reason for preferring it is that it sounds simple and thus should have higher prior. But it's a weird kind of "narrative simplicity", not mathematical (Kolmogorov) simplicity, because mathematically there's only one quantum mechanics and no interpretations. So I wonder why people care about MWI as anything more than an (admittedly very nice) intuition pump for studying QM.
It should be called the MWH (Hypothesis) - not the MWI (Interpretation).
See: Q16 Is many-worlds (just) an interpretation?
MWI says (in part) that we don't have to make wave function collapse an integral part of the mathematical formulation of quantum mechanics. Since, historically, wave function collapse has been a part of the mathematical formulation of quantum mechanics, that seems sufficient reason to care about MWI.
I think you're equivocating on "mathematical formulation". We want theories to predict the future. The algorithm that assigns probabilities to your future observations is the same, and equally mysterious, across all interpretations. MWI does raise the tantalizing possibility that the Born rule might not be part of basic physics - that it might somehow emerge from a universe without it - but AFAIK this isn't settled yet.
Assuming MWI is true, I have doubts about the idea that repeated quantum suicide would prove to you that MWI is true, as many people seem to assume. It seems to me that we need to take into account the probability measure of observer moments, and at any time you should be surprised if you happen to find yourself experiencing a low-probability observer moment - just as surprised as if you had got into the observer moment in the "conventional" way of being lucky. I am not saying here that MWI is false, or that quantum suicide wouldn't "work" (in terms of you being able to be sure of continuity) - merely that it seems to me to present an issue of putting you into observer moments which have very low measure indeed.
If you ever find yourself in an extremely low-measure observer moment, rather than having MWI or the validity of the quantum suicide idea proved to you, it may be that it gives you reason to think that you are being tricked in some way - that you are not really in such a low-measure situation. This might mean that repeated quantum suicide, if it were valid, could be a threat to your mental health - by putting you into a situation which you can't rationally believe you are in!
Quantum suicide would only have a low probability of causing you to observe an unlikely outcome, as should any event. The overwhelmingly likely outcome is that you just die.
A quick probability math question.
Consider a population of blobs, initially comprising N individual blobs. Each individual blob independently has a probability p of reproducing, just once, spawning exactly one new blob. The next generation (an expected N*p individuals) has the same probability for each individual to spawn one new blob, and so on. Eventually the process will stop, with a total blob population of P.
The question is about the probability distribution for P, given N and p. Is this a well-known probability distribution? If so, which? Even if not, are there things that can be said about it which are mathematically obvious? (Not obvious to me, obviously. I'd be interested in which gaps in my math education I'm revealing by even asking these questions.)
After G generations, each blob has a probability q=p^G of having a descendant. So, it seems to me that P will be distributed as a binomial with q and N as parameters.
The blobs don't reproduce with probability p in any given generation, they reproduce with probability p ever. The scenario doesn't require generations in the sense you seem to be thinking of, it could all happen within 1 second, or a first generation blob might reproduce after the highest generation blob that reproduces has already done so.
Oh, ok. I thought the blobs died each generation. A shrinking population. Instead they go into nursing homes. A growing population which stabilizes once everyone is geriatric.
Got it. Wei pretty clearly has the solution. Negative Binomial distribution
Pretty damned obvious, actually, that (P-N) is distributed as a negative binomial where r is set to N; failure = failure to reproduce; success = birth.
Here's my solution. The descendants of each initial blob spawn independently of descendants of other initial blobs, so this is a sum of N independent distributions. The number of descendants of one initial blob is obviously the geometric distribution. Googling "sum of independent geometric distributions" gives Negative binomial distribution as the answer.
Thanks for answering several questions at once. :)
Agreed - there are never more than N breeding blobs, each success increases P by one, and each failure reduces the remaining number of breeding blobs by one. Essentially, if r = N, X = P-N.
I don't think that's right. I don't have the math to show why yet, but my current working hunch says to make explicit your assumptions about whether the initial number of blobs, and the number of generations, are continuous or discrete, because the geometric distribution may not actually be right.
Offhand, I think you would also need to know the number of generations. I'll have to do some pen-and-paper work to figure out what the distribution looks like.
Huh? Why? The expected number of blobs is given by N/(1-p), the number of actually realized generations is not a variable, it's determined by N, p and chance. I have no idea how the distribution looks, but the number of actual generations should be one of the things you have a distribution across, not an input.
Morendil said:
Under your model, P=0 with frequency 1, so that doesn't make sense. I think the idea is to stop after a predetermined number of generations and see how many blobs are left.
Edit: No, wait, I see what's going on. You're right.
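For anyone who wants to check the negative-binomial answer numerically, here is a minimal Monte Carlo sketch (the parameter values, and the use of numpy/scipy, are my own illustrative choices, not from the thread):

```python
# Monte Carlo check that P - N is negative binomial with r = N, success = birth.
import numpy as np
from scipy import stats

N, p, trials = 10, 0.6, 50_000
rng = np.random.default_rng(0)

totals = np.empty(trials, dtype=int)
for t in range(trials):
    pending = N       # blobs that still have their one reproduction chance left
    total = N
    while pending:
        births = rng.binomial(pending, p)  # each pending blob spawns with prob. p
        total += births
        pending = births                   # only newborns still have a chance left
    totals[t] = total

k = 5  # compare P(P - N = k) against the negative binomial pmf
empirical = np.mean(totals - N == k)
# In scipy's convention: k of our "births" occur before the N-th "no-birth",
# where a no-birth happens with probability 1 - p.
theoretical = stats.nbinom.pmf(k, N, 1 - p)
print(f"empirical {empirical:.4f} vs theoretical {theoretical:.4f}")
```

With N = 10 and p = 0.6 the two numbers agree to within sampling noise, supporting the identification of P - N with a negative binomial with r = N.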
My experiments with nootropics continue. A few days ago, I started taking sulbutiamine (350mg/day), a synthetic analog of thiamine which differs in that it crosses the blood-brain barrier more readily. The effects were immediate, positive, and extremely dramatic - on an entirely different order of magnitude than I expected, and probably the largest single improvement to my subjective well being I have ever experienced. A feeling of mental fatigue and not wanting to do stuff - a feeling that leads to spending lots of time on blogs, playing video games and otherwise killing time suboptimally (though not necessarily the only such feeling) - just up and vanished overnight. This was something that I had identified as a major problem, and believed to be purely psychological in nature, but was, in fact, entirely biochemical. On the first day I took sulbutiamine, I felt significantly better, worked three hours longer than normal, and went to the gym (which would previously have been entirely out of character for me).
That said, I do have a concrete reason to believe that this effect is atypical. Specifically, I believe I was deficient in thiamine; I believe this because I'm a type 1 diabetic, and according to the research reported in this article, that means my body uses up thiamine at a greatly increased rate; I was only getting the RDA of thiamine from a standard multivitamin; and the problems I had seem to match the symptoms of minor thiamine deficiency pretty well.
That said, searching the internet finds people without thiamine deficiencies who also benefited from sulbutiamine, albeit to a lesser degree. And trying sulbutiamine is safe (no credible reports of adverse effects ever) and cheap ($17 for an 85-day supply as bulk powder), so I recommend it.
While re-reading the reports here for summary in my personal drugs file, it suddenly occurred to me that your experience with sulbutiamine might be on the level of pica & iron deficiency, and so worth mentioning or linking as a comment in http://lesswrong.com/lw/15w/experiential_pica/ .
Thanks for the reminder regarding sulbutiamine. That stuff is fantastic!
I'll add that sulbutiamine is one of the nootropics that taste absolutely foul!
Oh, in other news, the FDA is apparently going after piracetam; smartpowders.com reports that it's been ordered to cease selling piracetam and is frantically trying to get rid of its stock. See
That is infuriating! The fools!
This puts Robin Hanson's criticism of the FDA in a new perspective for me.
They blew it up! They blew it all up! Goddamn them to hell!
(Wrong allusion?)
Yikes! Hits close to home for me! I had actually ordered bulk piracetam about a week ago, in an order with two other supplements. When the shipment arrived, the piracetam wasn't in it, and it had a note saying it was out of stock and I wouldn't be charged for it, but I'd be informed when it was available again.
I thought it was strange at first, since they wouldn't have taken the order if they weren't able to reserve a unit for my order. (This isn't fractional reserve banking, folks!) But that explanation makes a lot more sense. If only I had placed the order a few days earlier...
Just-in-time techniques always struck me as being very close to fractional reserve banking, actually...
Anyway, elsewhere in that Reddit page, users mention that other nootropics, like choline and huperzine-a, seem to be getting harder to find lately. (I tried huperzine-a and wasn't impressed, but I kind of need the choline to go with any piracetam.)
Strange. My local grocery store with a health food/supplement aisle just started stocking a choline/inositol blend (saw it for the first time two days ago). I previously got the choline in a different supplement section, from a product called LipoTrim.
That's quite interesting. I recently finished up my own 30g supply of sulbutiamine, and while I thought that it does work roughly on the level of piracetam without choline supplementation, I wasn't hugely impressed. But I am not diabetic nor do I match any of the descriptions of beriberi in Wikipedia.
(Didn't last me 85 days, however. 200mg strikes me as a pretty small dose.)
Very interesting — thanks for the information. I'm trying piracetam right now, but this also sounds like something I'd like to try. I have similar problems with mental fatigue and low motivation... unfortunately, I don't yet have even a vague sense of the biochemical basis for my issues (my symptoms match chronic fatigue, but it seems like its causal structure is not well-understood anyway). But it's worth a try, I suppose.
Are you taking this and the piracetam at the same time, or did you stop the piracetam to try this?
Both at the same time. (I have no particular reason to think they interact, I'm just following the strategy of changing only one thing at a time.) I hope sulbutiamine works for you; but if it doesn't, don't give up, it just means the biochemical issue is somewhere else, and there are many more safe things to try.
I tried both ways. They didn't seem to interfere or interact.
I noticed that a lot of the reviews were complaining about the taste - were you using it in its raw form, or putting it into capsules?
I put it in capsules. Besides getting around the taste, it's also much more convenient that way; rather than having to measure and prepare some every day, I can sit down and prepare a month's worth of capsules in 30 minutes. The more different supplements you take, the more important it is to do it this way.
Have you considered buying or selling capsules? It seems unlikely that this is something you should do yourself, but only for yourself.
Also, before you said that you filled 10 capsules per minute. Do you take 10 capsules per day? Do you mix piracetam and choline in a single capsule?
I've considered buying capsules, but decided to get powder instead because it's cheaper and allows more flexibility if I change dosage or decide to pre-mix stuff. I couldn't sell the capsules I make because I don't measure them precisely enough (they vary by +/-10% or so). I currently take 5 capsules a day - two of piracetam, two of choline citrate, and one of sulbutiamine.
Putting together capsules sounds hard, but it's actually quite easy. You get empty gel caps, which come as two unequally sized pieces that fit together tightly enough to stay in place but loosely enough to pull apart. Take the pieces of the capsule apart, pack some powder into the larger piece, put them together, and drop it on a scale. If it's within acceptable range, drop it in the 'done' container, otherwise open it back up and add some or remove some. After a dozen or so, you get the hang of it and can hit a 10% tolerance pretty consistently on the first time. Wear latex gloves so the gel caps won't stick to your fingers and you don't get hair and sweat in the powder tub.
(Edit: the discrepancy between my saying a month's worth of capsules in 30 minutes, and a rate of 10/minute, is due to setup and cleanup time; and neither of these numbers was precise to more than a factor of 2.)
If it's good enough for you, it may be good enough for customers; it's just a different niche. It may also be an illegal niche.
ETA: flexibility is a good reason.
I'm not speaking for Jim but I note that I find mixing the racetams with the choline source convenient. It allows for simply adjusting the dose while keeping the same ratio.
With nootropics, everything tastes bad. I dissolve my stuff in hot water when I make tea, and wash it down with the latter. It didn't taste worse than just piracetam+choline, FWIW - that's foul enough to mask pretty much any taste.
What fosters a sense of camaraderie or hatred for a machine? Or: How users learned to stop worrying and love Clippy
http://online.wsj.com/article/SB10001424052748703959704575453411132636080.html
I recommend reading the article-- I didn't realize people could be recruited that easily.
-- John Baez on saving the planet
ETA: Ag, just before posting this I realized Hal Finney had already basically raised this same point on the original thread! Still, I think this expands on it a little...
You know, if Wei Dai's alien black box halting decider scenario were to occur, I'm not sure there is any level of black-box evidence that could convince me they were telling the truth. (Note: To make later things make sense, I'm going to assume the claim is that if the program halts, it actually tells you the output, not just that it halts.)
It's not so much that I'm committed to the Turing notion of computability - presumably opening the box should, if they're telling us the truth, allow us to learn this new Turing-uncomputable physics; the problem is that - without hypercomputers ourselves - we don't really have any way of verifying their claim in the first place. Of course the set of halting programs is semicomputable, so we can certainly verify its yes answers (if not quickly), but no answers can only be verified in the cases where we ourselves have precomputed the answer by proving it for that particular case (or family of cases). In short, we can verify that it's correct on the easy cases, but it's not clear why we should believe it's correct on the hard cases that we actually care about. In other words, we can only verify it by checking it against a precomputed list of our own, and ISTM that if we precomputed it, they could have done the same.
If you're not being careful, you could just justify the claim with "induction", but even without the precomputed list idea, induction also supports the hypothesis that it simply runs programs for a fixed but really long time and then reports whether they've halted, which doesn't require anything uncomputable and so is more probable a priori. (The fact that Wei Dai said nothing about the computation time makes this a bit trickier, but presumably they may have computation far faster than us.)
Now if it just claimed, say, that it was only necessarily correct for programs using polynomial space, then we'd be in better shape, due to IP=PSPACE; even if we couldn't replicate its results very fast, we could at least verify them quickly. We could actually give it hard cases that we can't do (quickly) by hand, and then verify that it got them right. (Except actually I'm brushing over some problems here - IIRC it's uncomputable to determine whether a program will use polynomial space in the first place, so while it presumably doesn't have a precomputed list of inputs for the different programs, it might well have a precomputed list of which programs below a certain length are polynomial-space in the first place! And then just run those much faster than we can. We could just make it an oracle for a single PSPACE-complete problem, but then of course there's nothing uncomputable going on, so there's no real problem in the first place; it could just be a really fast brute-force solver. This would allow us to verify quickly that they have much more advanced computers than us, that can solve PSPACE-complete problems in an instant, but that's not nearly as surprising. Not sure if there's any way to make this example really work as intended.)
When we test our own programs, we have some idea of what's in the black box - we have some idea how they work, and we are just verifying that we haven't screwed it up. And on those, since we have some idea of the algorithm, we can construct tricky test cases to check the parts that are most likely to screw it up. And even if we're verifying a program from someone untrustworthy, SFAICT this is based on inferring what ways the program probably works, or ways that look right but don't work on hard cases someone may have come up with, or key steps it will probably rely on, or ways it might cheat, and writing test cases for those. Of course you can't rule out arbitrarily advanced cheats this way, but we have other evidence against those - they'd take up too much space, or they'd be even harder than doing it correctly. In the case of a halting oracle, the problem is that there is no point where such a ridiculous cheat would seem even harder than doing it correctly.
So until the black box is opened, I'm not sure that this is a good argument against the universal prior, though I suppose it does still argue that the universal prior + Bayes doesn't quite capture our intuitive notion of induction.
I'm afraid I can't do much better at this point than to cite Harvey Friedman's position on this. (He posted his alien crystal ball scenario before I posted my alien black box, and obviously knows a lot more about this stuff than I do.)
Here are the relevant discussion threads on the Foundations of Mathematics mailing list:
ETA:
Assuming the laws of physics actually does allow a halting oracle to be implemented, then at some point it would be easier to just implement it than to do these ridiculous cheats, right? As we rule out various possible cheats, that intuitively raises our credence that a halting oracle can be physically implemented, which contradicts the universal prior.
...having actually read those now, those threads didn't seem very helpful. :-/
Hm, indeed. Actually, it occurred to me after writing this that one thing to look at might be the size of the device, since there are, as far as we know, limits on how small you can make your computational units. No idea how you'd put that into action, though.
Huh?
(15 minutes later)
Now I know what I'll be reading for the rest of the week. Thanks!
Have advocates of the simulation argument actually argued for the possibility of ancestor simulations? It is a very counterintuitive idea, yet it seems to be invoked as though it is obviously possible. Aside from whatever probability we want to assign to the possibility that the future human race will discover strange previously-unknown laws of physics that make it more feasible, doesn't the idea of an ancestor simulation (a simulation of "the entire mental history of humankind") depend on having access to a huge amount of information that has presumably been permanently lost to entropy? Where is the future civilization expected to get all the mental structures needed to simulate the entire mental history of humankind (or a model of the early Earth implausibly precise enough that simulating it causes things to play out exactly as they really did)?
If things don't play out exactly as they really did, does the simulation argument lose any force?
It does appear to depend on ancestor simulations being of the world's history as it actually happened, on the basis that if we end up making simulations of our own history, then we are probably in such a simulation run by someone in an outer future version of our own world.
You could argue for the same conclusion without requiring that, but it seems to me that it would end up being a completely different argument; at the very least, you'd have to figure out the general probability of some advanced civilization creating a simulation containing you, which is a lot harder when you aren't assuming that the civilization running the simulation used to actually contain you (and can somehow extrapolate backwards far enough to recover the information in your mind).
OK I buy it. To be fair, Bostrom's conclusion is either we're in a simulation, we're going to go extinct, or "(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)." You're saying that (2) is so plausible that the other alternatives are not interesting.
Sort of. I was really only intending to ask what the claimed justification is for believing in the possibility of ancestor simulations, not to argue that they are not possible; Bostrom is a careful enough philosopher that I would be surprised if he didn't explicitly justify this somewhere. But in the absence of any particular argument against my prior judgment of the feasibility of ancestor simulations (i.e. they'd require us to be able to extrapolate backwards in much greater detail than seems possible), then yes, I'd argue that (2) is the most likely if we do eventually reach posthumanity.
Maybe they are simulating me by mistake. Back in the "real world" I never existed. It is still the case that they are simulating me.
Edit: Actually, this response wasn't particularly responsive. Consider it withdrawn unless it contains virtues I don't currently see.
Second the question. It's been a long time since I read Tipler, but as I recall, he claimed Omega would simulate all possible humans, not just all historically real ones.
Is Tipler / the Omega Point relevant to the simulation argument? I haven't seen him invoked in discussions thereof, and that idea (whatever its probability) seems to have a whole different set of implications, more along the lines of the confusing anthropic problems we have with Very Big Worlds and Boltzmann brains.
Relevant only to the extent that large scale simulation of the hypothetical past of the human species is a large enough (and/or pointless enough) task that it will require an Omega Point quantity of resources.
Last open thread I linked a Wired article on argument-diagram software called ACH being open-sourced.
It's now available: http://competinghypotheses.org/
(PHP, apparently. Blech!)
I came here to post this actually - seems like it would be extremely useful for countering confirmation bias.
I just read and liked "Pascal's mugging." It was written a few years ago, and the wiki is pretty spare. What's the state of the art on this problem?
While raking, I think I finally thought of a proof that the before-offer-probability can't be known.
The question is basically 'what fraction of all Turing machines making an offer (which is accepted) will then output a certain result?'
We could rewrite this as 'what is the probability that a random Turing machine will output a certain result?'
We could then devise a rewriting of all those Turing machines into Turing machines that halt or not when their offer is accepted (eg. halting might = delivering, not halting = welshing on the deal. This is like Rice's theorem).
Now we are asking 'what fraction of all these Turing machines will halt?'
However, this is asking 'what is Chaitin's constant for this rewritten set of Turing machines?' and that is uncomputable!
Since Turing machine-based agents are a subset of all agents that might try to employ Pascal's Mugging (even if we won't grant that agents must be Turing machines), the probability is at least partially uncomputable. A decision procedure which entails uncomputability is unacceptable, so we reject giving the probability in advance, and so our probability must be contingent on the offer's details (like its payoff).
Thoughts?
I think Nesov is right, you've basically (re)discovered that the universal prior is uncomputable and thought that this result is related to Pascal's Mugging because you made the discovery while thinking about Pascal's Mugging. Pascal's Mugging seems to be more about the utility function having to be bounded in some way.
You might be interested in this thread, where I talked about how a computable decision process might be able to use an uncomputable prior:
http://groups.google.com/group/one-logic/browse_frm/thread/b499a90ef9e5fd84/2193ca2c204a55d8?#2193ca2c204a55d8
It seems to be an argument against possibility of making any decision, and hence not a valid argument about this particular decision. Under the same assumptions, you could in principle formalize any situation in this way. (The problem boils down to uncomputability of universal prior itself.)
Besides, not making the decision is not an option, so you have to fall back on some default decision when you don't know how to choose - but where does this default come from?
I take it as an argument against making perfect decisions. If perfection is uncomputable, then any computable agent is not perfect in some way.
The question is what imperfection do we want our agent to have? This might be the deep justification for choosing to scale probability by utility that I was looking for. Scaling linearly corresponds to being willing to lose a fixed amount to mugging, scaling superlinearly corresponds to not being willing to lose any genuine offer, and scaling sublinearly corresponds to not being willing to ever be fooled. Or something like that. The details need some work.
In order to make a decision, we do not always need an exact probability: sometimes just knowing that a probability is less than, say, 0.5 is enough to determine the correct decision. So, even though an exact probability p may be incomputable, that doesn't mean that the truth value of the statement "p<0.1" can not be computed (for some particular case). And that computation may be all we need.
That said, I'm not sure exactly how to interpret "A decision procedure which entails uncomputability is unacceptable." Unacceptable to whom? Do decision procedures have to be deterministic? To be algorithms? To be recursive? To be guaranteed to terminate in a finite time? To be guaranteed to terminate in a bounded time? To be guaranteed to terminate by the deadline for making a decision?
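A toy sketch of that first point - that a bound can settle a decision without the exact probability. The function name, threshold, and numbers here are hypothetical illustrations, not from the discussion:

```python
# Deciding from an upper bound on p rather than from p itself.
def decide(p_upper_bound: float, harm: float, ransom: float) -> str:
    """Refuse to pay if even the upper bound on expected harm is below the ransom."""
    if p_upper_bound * harm < ransom:
        return "refuse"       # the bound alone settles it; exact p is never needed
    return "investigate"      # the bound is too loose to decide either way

print(decide(p_upper_bound=0.01, harm=50.0, ransom=5.0))    # refuse
print(decide(p_upper_bound=0.01, harm=5000.0, ransom=5.0))  # investigate
```

Note that scaling up the threatened harm (the second call) flips the answer even though the bound on p is unchanged, which is exactly the move the next reply describes.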
Alright, so you compute away and determine that the upper bound on Chaitin's constant for your needed formalism is 0.01. The mugger then multiplies his offering by 100, and proceeds to mug you, no? (After all, you don't know whether the right probability is 0.01 or actually some smaller number.)
This is pretty intuitive to me - a decision procedure which cannot be computed cannot make decisions, and a decision procedure which cannot make decisions cannot do anything. I mean, do you have any reason to think that the optimal, correct, decision theory is uncomputable?
I have no idea whether we are even talking about the same problem. (Probably not, since my thinking did not arise from raking). But you do seem to be suggesting that the multiplication by 100 does not alter the upper bound on the probability. As I read the wiki article on "Pascal's Mugging", Robin Hanson suggests that it does. Assuming, of course, that by "his offering" you mean the amount of disutility he threatens. And the multiplication by 100 also affects the threshold (in this example 0.01) that I need to compare p against. Which strikes me as the real point.
This whole subject seems bizarre to me. Are we assuming that this mugger has Omega-like psy powers? Why? If not, how does my upper bound calculation and its timing have an effect on his "offer"? I seem to have walked into the middle of a conversation with no way from the context to guess what went before.
I haven't seen much response to it. There's a reply in Analysis by Baumann who takes a cheap out by saying simply that one cannot provide the probability in advance, that it's 'extremely implausible'.
I have an unfinished essay where I argue that as presented the problem is asking for a uniform distribution over an infinity, so you cannot give the probability in advance, but I haven't yet come up with a convincing argument why you would want your probability to scale down in proportion as the mugger's offer scales up.
That is: it's easy to show that scaling disproportionately leads to another mugging. If you scale superlinearly, then the mugging can be broken up into an ensemble of offers that add to a mugging. If you scale sublinearly, you will refuse sensible offers that are broken up.
But I haven't come up with any deeper justification for linearly scaling other than 'this apparently arbitrary numeric procedure avoids 3 problems'. I've sort of given up on it, as you can see from the parlous state of my essay.
Thanks. Here's my fresh and uneducated opinion.
I see three kinds of answers to the mugging:
Here's my analysis in the sense of 4., tell me if I'm making a common mistake. We are worried that P(agent can do H amount of harm | agent threatens to do H amount of harm) times H can be arbitrarily large. As Tarleton pointed out in the 2007 post, any details beyond H about the scenario we're being threatened with are a distraction (right? That actually doesn't seem to be the implicit assumption of your draft, or of Hanson's comment, etc.)
By Bayes the quantity in question is the same as
P(threat | ability)/P(threat) x P(ability) x H
Our hope is that we can prove this quantity is actually bounded independent of H (but of course not independent of the agent making the threat). I'll leave aside the fact that the probability that such a proof contains a mistake is certainly bounded below.
P(threaten H) is the probability that a certain computer program (the agent making the threat) will give a certain output (the threat). My feeling about this number is that it is medium sized if H has low complexity (such as 3^^^3) and tiny if H has high complexity (such as some of the numbers within 10% of 3^^^3). That is, complex threats have more credibility. I'm comforted by the fact that, by the definition of complexity, it would take a long time for an agent to articulate his complex threat. So let's assume P(threaten H) is medium-sized, as in the original version where H = 3^^^3 x value of human not being tortured.
It seems like wishful thinking that P(threat | ability) should shrink with H. Let's assume this is also medium sized and does not depend on H.
So I think the question boils down to how fast P(agent can do H amount of harm) shrinks with H. If it's O(1/H) we're OK, and if it's larger we're boned.
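A quick numeric illustration of that last claim (a sketch; the constant c and the sample values of H are arbitrary choices of mine):

```python
# Expected harm P(ability) * H under two scaling laws for P(ability).
c = 1e-3
for H in (1e3, 1e6, 1e9, 1e12):
    bounded   = (c / H) * H        # P ~ c/H: expected harm stays fixed at c
    unbounded = (c / H**0.9) * H   # P shrinks slower than 1/H: harm grows with H
    print(f"H = {H:.0e}:  O(1/H) harm = {bounded:.1e},  O(1/H^0.9) harm = {unbounded:.1e}")
```

Under the O(1/H) law the product stays at c no matter how large the threat; under anything that shrinks more slowly, it grows without bound as H does.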
As long as we're all chipping in, here's my take:
(1) Even if the correct answer is to hand over the money, we should expect to feel an intuitive sense that doing so is the wrong answer. A credible threat to inflict that much disutility would never have happened in the ancestral environment, but false threats to do so have happened rather often. That being the case, the following is probably rationalization rather than rationality:
(2) Consider the proposition that, at some point in my life, someone will try to Pascal's-mug me and actually back their threats up. In this case, I would still expect to receive a much larger number of false threats over the course of my lifetime. If I hand over all my money to the first mugger without proper verification, I won't be able to pay up when the real threat comes around.
I think that your (2) is a proof that handing over the money is the wrong answer. My understanding is that the problem is whether this means that any AI that runs on the basic package that we sometimes envision hazily -- prior, (unbounded) utility function, algorithm for choosing based somehow on multiplying the former by the latter -- is boned.
I thought that my (2) was a proof that a prior-and-utility system will correctly decide to investigate the claim to see whether it's credible.
But what a prior-and-utility system means by "credible" is that the expected disutility is large. If a blackmailer can, at finite cost to itself, put our AI in a situation with arbitrarily high expected disutility, then our AI is boned.
Ah, you're worried about a blackmailer that can actually follow up on that threat. I would point out that humans usually pay ransoms, so it's not exactly making a different decision than we would in the same situation.
Or, the AI might anticipate the problem and self-modify in advance to never submit to threats.
I'm worried about a blackmailer that can with positive probability follow up on that threat.
Yes humans behave in the same way, at least according to economists. We pay ransoms when the probability of the threat being carried out, times the disutility that would result from the threat being carried out, exceeds the ransom. The difference is that for human-scale threats, this expected disutility does seem to be bounded.
That could mean one of at least two things: either the AI starts to work according to the rules of a (hitherto not conceived?) non-prior-and-utility system. Or the AI calibrates its prior and its utility function so that it doesn't submit to (some) threats. I think the question is whether something like the second idea can work.
No, see, that's different.
If you're dealing with a blackmailer that might be able to carry out their threats, then you investigate whether they can or not. The blackmailer themselves might assist you with this, since it's in their interest to show that their threat is credible.
Allow me to demonstrate: Give $100 to the EFF or I'll blow up the sun. Do you now assign a higher expected-value utility to giving $100 to the EFF, or to giving the same $100 instead to SIAI? If I blew up the moon as a warning shot, would that change your mind?
I don't quite follow this. Assuming we're using one of the universal priors based on Turing machine enumerations, then an agent which consists of [threat of 3^^^3 + no ability] is much shorter and much more likely than an agent which consists of [threat of a number within ~10% of 3^^^3 + ability]. The more complex the threat, the less space there is for executing it.
If I disagree, it's for a very minor reason, and with only a little confidence. (P(threat) is short for P(threat|no information about ability).) But you're saying the case for P(threaten H) being bounded below (and its reciprocal being bounded above) is even stronger than I thought, right?
Another way to argue that P(threaten H) should be medium-sized: at least in real life, muggings have a time-limit. There are finitely many threats of a hundred words or less, and so our prior probability that we will one day receive such a threat is bounded below.
Another way to argue that the real issue is P(ability H): our AI might single you out and compute P(gwern will do H harm) = P(gwern will do H harm | gwern can do H harm) x P(gwern can do H harm). It seems like you have an interest in convincing the AI that P(gwern can do H harm) x H is bounded above.
Light entertainment: this hyperboleandahalf comic reminded me of some of the FAI discussions that go on in these parts.
http://hyperboleandahalf.blogspot.com/2010/08/this-comic-was-inspired-by-experience-i.html
Apparently AGI, transhumanism, and the Singularity are a massive statist/corporate conspiracy, and there exists a vast "AGI Manhattan Project". Neat.
Looks like the Illuminati have deleted page 8, which I assume is where all the juiciest stuff is!
I have written a critique of the position that one-boxing wins on Newcomb's problem but have had difficulty posting it here on Less Wrong. I have temporarily posted it here
I don't understand what the part about "fallible" and "infallible" agents is supposed to mean. If there is an "infallible" agent that makes the correct prediction 60% of the time and a "fallible" agent that makes the correct prediction 60% of the time, in what way should one anticipate them to behave differently?
It is intended to illustrate that for a given level of certainty one-boxing has greater expected utility with an infallible agent than it does with a fallible agent.
As for different behaviors, I suppose one might suspect the fallible agent of using statistical methods and lumping you into a reference class to make its prediction. One could be much more certain that the infallible agent’s prediction is based on what you specifically would choose.
http://wiki.lesswrong.com/wiki/Omega
Omega is assumed to be a "smart predictor".
You may have misunderstood what is meant by "smart predictor".
The wiki entry does not say how Omega makes the prediction. Omega may be intelligent enough to be a smart predictor but Omega is also intelligent enough to be a dumb predictor. What matters is the method that Omega uses to generate the prediction. And whether the method of prediction causally connects Omega’s prediction back to the initial conditions that causally determine your choice.
Furthermore a significant part of the essay explains in detail why many of the assumptions associated with Omega are problematic.
Edited to add that on rereading I can see how the bit where I say, "It doesn’t state whether Omega is sufficiently smart." is a bit misleading. It should be read as a statement about the method of making the prediction not about Omega's intelligence.
I've seen statements of Newcomb-like problems saying things like "Omega gets it right 90% of the time". In that case it seems like it should matter whether it's because of cosmic rays that affect all predictions equally, or whether it's because he can only usefully predict the 90% of people who are easiest to predict, in which case if I'm not mistaken you can two-box if you're confident you're in the other 10%. I'm sure this would have been thought through somewhere before.
Has anyone tried, or does anyone use, the Zeo Personal Sleep Coach (press coverage)?
It's a sleep tracker - measuring light, REM, and deep sleep - which sounds useful for improving sleep, which, as we all know, is extremely important to mental performance, learning, and health. I'm thinking of getting one, but the $200 price point is a little daunting.
There is a $.99 iPhone app called Sleep Cycle that does essentially the same thing using the phone's accelerometers: http://www.mdlabs.se/sleepcycle/ It definitely seems to have had a positive impact on my mornings. Less biometrics than the Zeo, probably, but certainly more economical if you have an iPhone.
I got a Zeo recently, but mainly to try to get answers to a question that isn't generally applicable (specifically, how blood sugar interacts with sleep). I don't really buy the validity of using an accelerometer as a proxy for sleep stage, but if your goal is just to get woken from light sleep rather than deep sleep, there's an Android app called Gentle Alarm that does that using a pre-alarm: a soft alarm sound played 30 minutes before your scheduled wake-up time which, in principle, will only wake you if you were close to awake already.
Has the scheduled wakeup time worked? Of the Zeo functionalities, that sounds the most dubious to me.
Thanks. I found SleepSense, a similar sleep tracker application for Windows Mobile. And Smart Alarm for Android.
IMO, the quality of comments on Overcoming Bias has diminished significantly since Less Wrong started up. This was true almost from the beginning, but the situation has really spiraled out of control more recently.
I gave up reading the comments regularly last year, but once a week or so, I peek at the comments and they are atrociously bad (and almost uniformly negative). The great majority seem unwilling to even engage with Robin Hanson's arguments and instead rely on shaming techniques.
So what gives? Why is the comment quality so much higher on LW than on OB? My first thought is karma, but OB didn't have karma when Eliezer Yudkowsky was posting, and the comments were pretty good back then. My best guess is that the good commenters were mostly Yudkowsky fans, and they left when EY left.
However, I don't know if anyone else shares my impression about OB commenter quality, so I may be completely misguided here.
I don't follow OB, but your comment sent me over there to look around. What I saw was a lot of criticism from feminists regarding posts by Robin that had a strong anti-feminist odor to them. I also saw some posts on less controversial subjects that drew almost no comments at all. So the natural presumption is that those feminist commenters are not regulars, but rather were attracted to OB when a post relevant to their core interests got syndicated somewhere. If that is what you are talking about then ...
Well, sure, a registration system might have repelled some of the commenters. If Robin really wants to insulate himself from feedback in this way, it might work. But I rather doubt that he is the kind of person to exclaim "OMG, we have wymin commenting here! Who let them in?". I hope his regulars aren't either.
Some comments on topics like this are emotive. Admittedly, you can't really engage with them. But that doesn't mean you shouldn't at least read them, count them, and try to learn something from their sheer existence, if not from their logic.
That isn't what I was talking about. I was talking about a general impression I've gotten over the last year. Robin's recent posts have received an Instalanche, so they're hardly representative.
I remember there being lots of bad comments on the old OB, and I think that putting a karma system in place, and requiring registration, helped an awful lot.
I wasn't reading OB before LW existed, but if you look there now, it's immediately apparent that the topics represented on the front page are much, much, much more interesting to the average casual reader than the ones on LW's front page. I wouldn't be surprised if the commenters tended to be less invested and less focused as a result.
(EDIT: I shouldn't say the "average casual reader," since that must mean something different to everyone. I clarified what I meant below in response to katydee; I think OB appeals to a large audience of interested laymen who like accessible, smart writing on a variety of topics, but who aren't very interested in a lot of LW's denser and more academic discussion.)
I suppose I'm not the average casual reader, but here's my comparison--
Less Wrong front page:
-Occam efficiency/rationality games-- low interest
-Strategies for confronting existential risk-- high interest
-Potential biases in evolutionary psychology-- mid-level interest
-Taking ideas seriously-- extremely high interest
-Various community threads-- low/mid interest
-Quick explanations of rationality techniques-- extremely high interest
-Conflicts within the mind-- mid/high interest
Overcoming Bias front page:
-Personality trait effects on romantic relationships-- minimal interest
-Status and reproduction-- minimal interest
-Flaws with medicine- mid-level interest
-False virginity-- no interest beyond "it exists"
-(In)efficiency of free parking-- minimal interest
-Strategies for influencing the future-- high interest
-Reproductive ethics-- minimal interest
-Economic debate-- minimal interest
Only two of the Overcoming Bias articles were interesting to me at all; only one was strongly interesting, and it was also short. Less Wrong seemed, at least to me, to have better/more interesting topics than Overcoming Bias, which might be why it has better/more interesting discussions.
Hmm, you seem to be seeing a totally different OB "front page" to me. Where are you seeing those articles?
edit: nevermind, I thought this was the current open thread. I didn't see that it was from 2010.
I totally agree with you; that's why I'm here!
But personally, I know a lot of fairly smart, moderately well-educated people who just aren't very interested in a life of the mind. They don't get a lot out of studying philosophy and math, they read a little but not a lot, they don't seek intellectual self-improvement, and they aren't terribly introspective. However, they all have a passing interest in current events, technology, economics, and social issues; the stuff you'd find in the New Yorker or Harper's, or on news aggregators. Hanson's writing on these topics is exactly the sort of thing that appeals to that demographic, whereas Less Wrong is just not.
It seems unusual that people would have a passing interest in technical issues in economics but not psychology.
I certainly find Hanson's anecdotes far more useful when socialising with people who are interested in hearing surprising stories about human behaviour (i.e. most of the people I bother socialising with). The ability to drop sound bites is, after all, the primary purpose of keeping 'informed' in general.
I expect that was the biggest reason. When I started following OB it basically was Eliezer's blog. Sure, occasionally Robin would post a quote and an interpretation but that was really just 'intermission break' entertainment.
I do note that comments here have been said to have reduced in quality. That is probably true; it is somewhat related to the lack of a stream of EY posts, and also to the fact that there aren't many other prominent posters (like Yvain, Roko, Wei, etc.) posting on the more fascinating topics. (At least, fascinating to me.)
lol, yeah, that's the impression I got in the OB days. When there was discussion about renaming the site, I half-seriously thought it should be called "Eliezer Yudkowsky and the backup squad" :-P
Oh man, just you wait! I'm almost done with one. Here's the title and summary:
Title: Morality as Parfitian-filtered Decision Theory? (Alternate title: Morality as Anthropic Acausal Optimization?)
Summary: Situations like the Parfit's Hitchhiker problem select for a certain kind of mind: specifically, one that recognizes that an action can be optimal, in a self-interested sense, even if it can no longer cause any future benefit. A mind that can identify such actions might place them in a different category which enables it to perform them, in defiance of the (futureward) consequentialist concerns that normally need to motivate it. Our evolutionary history has put us through such "Parfitian filters", and the corresponding actions, viewed from the inside, feel like "the right thing to do"; we are unconvinced by arguments that point out the lack of a future benefit -- or our estimates of the magnitude of what future benefits do exist are skewed upward. Therein lies the origin of our moral intuitions, as well as the basis for creating the category "morality" in the first place.
That sounds like a follow-up or generalization of your blog post on Parfit's Hitchhiker and intellectual property. I look forward to it!
I know I've been mentioning Good and Real constantly since I read it, but this sounds a bit like the account of human decision theory (morality) in G&R...
Anyone know how to do linked footnotes? Where you have a link that jumps to the bottom and one at the bottom that jumps back to the point in the text? I suppose I could just do [1], [2], etc., but I figure that would annoy people.
I've made about six different 3-paragraph posts about G&R in the past three weeks, so I think you're safe ;-)
And yes, it does draw heavily on Drescher's account of the overlap between "morality" and "acting as if recognizing subjunctive acausal means-end links" (which I hope to abbreviate to SAMEL without provoking a riot).
I'm looking forward to that one! I can't guarantee that I'll agree with all of it (it will depend on how strong you make some of the claims in the middle) but I can tell I'll be engaged either way.
My first impression from the titles was that the 'Alternative' one was far better. But on reflection it sounds like the first title would be more accurate.
Who, in particular, said the comments have reduced in quality? Your post seems weasel-wordy to me as it currently stands.
rhollerithdotcom is one. I don't think I am being excessively controversial here. Ebbs and flows of post content are inevitable, and evident even just looking at the voting trends in the post list. There is little shame in such a variation... it is a good sign that people are busy doing real work!
Perhaps, but it was also polite. I did not (and still am not) providing the explicit link because I don't see it necessary to direct people to the surrounding context. The context represents a situation that was later shown to be a success story in personal development but at that time reflected negatively.
In the OB days, I mainly read it because of EY. Maybe others did too. I'm surprised that OB still wins in the usage stats.
Absolutely. OB posts are worth a read occasionally but the comments are not. And here I include even comments (not posts) by Robin himself. The way Robin replies does, I suggest, contribute to who is willing or interested in commenting there. Status interferes rather drastically with the ability of prominent figures to engage usefully in a public forum. By public forum I refer to the generic meaning, not electronic adoption. It is often the case that hecklers wishing to shame and express outrage are the only ones who consider it worthwhile to show up.
For my part if I was particularly keen on discussing a topic from OB I would consider bringing it up on the open thread on LW.
I’m not sure whether the satanic ritual abuse and similar prosecutions of the 80s/90s have ever been discussed on LW in any detail (I couldn’t find anything with a few google searches), but some of the failures of rationality in those cases seem to fit into the subject matter here.
For those unfamiliar with these cases, a sort of panic swept through many parts of the United States (and later other countries) resulting in a number of prosecutions of alleged satanic ritual abuse or other extensive conspiracies involving sexual abuse, despite, in almost all cases, virtually no physical evidence that such abuse occurred. Lack of physical evidence, of course, does not always mean that a crime has not occurred, but given the particular types of allegations made, it was not credible in most cases that no physical evidence would exist. It is hard to choose the most outrageous example, but this one is pretty remarkable:
Moreover, there were all sorts of serious problems with the highly suggestive techniques used in questioning the children to elicit the accusations of abuse. If one is inclined to be charitable to the investigators, some of the problems with the interviews could be chalked up to lack of understanding at the time of how problematic these techniques were, but the stories are pretty damning. A short description of the sorts of techniques can be found in this Wiki entry on one of the most prominent prosecutions, that involving the McMartin preschool.
Many, although by no means all, of the defendants in these sorts of cases have since been exonerated. I am posting this comment because a defendant in one of the cases, Jesse Friedman (one of the subjects of the documentary film Capturing the Friedmans), is in the news because of a recent federal appellate court decision (pdf), which denied relief to Friedman, but noted:
For anyone who would like a brief overview of the problems with these sorts of prosecutions, the court’s opinion linked above has a relatively concise but informative discussion at pp. 18-23. For anyone interested in a book length treatment, I also recommend No Crueler Tyrannies by Dorothy Rabinowitz. Tons of info on the Internet as well, of course.
My father was a forensic psychiatrist heavily involved in some of these cases, testifying for the defense of the accused. The moral panic phenomenon is real and complex, but there's a more basic failure of rationality underlying the whole movement which was the false belief in the inherent veracity of children.
Apparently juries (and judges alike) took the testimony of children at face value. The problem was that investigative techniques of the social workers invariably elicited the desired reactions in the children. In law you have the concept of leading the witness, but that doesn't apply for investigations of child abuse. The children are taken away from their parents and basically locked up with the investigators until they tell them what they want to hear. It wasn't even necessarily deliberate - from what I understand in many cases the social workers just had a complete lack of understanding of how they were conditioning the children to fabricate complex and in many cases outright ridiculous stories. It's amazing how similar the whole scare was to historical accounts of the witch trials. Although as far as I know, in the recent scare nobody was put to death (but I could even be wrong about that, and certainly incalculable damage was done nonetheless).
I'd pay attention to this but note that the second source isn't reliable. Anyone who is talking about D-Wave seriously doesn't know much about quantum computing. Unfortunately, D-wave is a massively hyped project which has in practice done close to zero actual work. Scott Aaronson's writing on the subject.
~Oops I did it again.~
~I trolled on Slashdot ~
~Got modded to 5~
Well, hey, at least this time they universally criticized me. The topic was about a species being discovered that was present earlier than they thought, and I said that this refutes evolution.
Indeed.
Since they universally argued with you, I assume that they assumed that you were joking. People who understand evolution often find it hard to believe that anybody could be as nuts as creationists actually are, so the default assumption tends to be that any sufficiently dumb comment is trolling.
Having a dog in the room made subjects 30% less likely to defect in Prisoner's Dilemma (article; sample size 52 people in groups of 4).
This changes my views on pet ownership completely.
The Probability Chip
A few posts down.
Quick question — I know that Eliezer considers all of his pre-2002 writings to be obsolete. GISAI and CFAI were last updated in 2001 but are still available on the SIAI website and are not accompanied by any kind of obsolescence notice (and are referred to by some later publications, if I recall correctly). Are they an exception, or are they considered completely obsolete as well? (And does "obsolete" mean "not even worth reading", or merely "outdated and probably wrong in many instances"?)
I just came across this article called "Thank God for the New Atheists," written by Michael Dowd, and I can't tell if his views are just twisted or if he is very subtly trying to convert religious folks into epistemic rationalists. Sample quotes include:
...
...
...
Ah, yes. The only way to true religious understanding is through science and realizing our anthropomorphic biases...uh, wait. What? This guy seems to be calling for a religion grounded in science and rationality, but then he says things like:
So I'm confused. It makes me think that he's a crypto-rationalist trying to convert religious believers into rationalists. If that's true, it does seem like a really effective strategy.
The first sentence warns against taking the Bible literally, but the next sentence insinuates that we don't even need it...
He's also written a book called "Thank God for Evolution," in which he sprays God all over science to make it more palatable to Christians.
If he really is trying to deconvert people, I suspect it won't work. They won't take the final step from his pleasant, featureless god to no god, because the featureless one gives them a warm glow without any intellectual conflict.
An HN post mocks Kurzweil for claiming that the length of the brain's "program" is mostly due to the part of the genome that affects it. This was discussed here lately. How much more information is in the ontogenic environment, then?
The top rated comment makes extravagant unsupported claims about the brain being a quantum computer. This drives home what I already knew: many highly rated HN comments are of negligible quality.
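For anyone who wants the raw numbers behind Kurzweil's size claim, here's a back-of-envelope sketch. The figures are rough public estimates, and the compression ratio and brain-relevant fraction are my own assumptions for illustration, not anything from the linked post:

```python
# Back-of-envelope arithmetic for the "brain program fits in the genome" claim.
# All figures are rough estimates; the compression ratio and brain fraction
# are assumptions for illustration only.

base_pairs = 3.2e9                    # approximate human genome length
raw_bytes = base_pairs * 2 / 8        # 2 bits per base pair (A/C/G/T)
print(f"raw genome:      ~{raw_bytes / 1e6:.0f} MB")   # ~800 MB

compression_ratio = 0.06              # assumed: genomes are highly redundant
compressed = raw_bytes * compression_ratio
print(f"compressed:      ~{compressed / 1e6:.0f} MB")  # ~50 MB

brain_fraction = 0.5                  # assumed fraction relevant to the brain
print(f"brain 'program': ~{compressed * brain_fraction / 1e6:.0f} MB")
```

Whatever the exact figures, this only bounds the design information carried by the genome; the open question above - how much the ontogenic environment adds - is exactly what this arithmetic leaves out.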
PZ Myers:
(PZ Myers wrongly accuses Kurzweil of claiming that he or others will, by 2020, simulate a human brain aided in large part by the sequenced genome.)
Kurzweil's denial - thanks Furcas - answers my question this way: only a small portion of the information in the brain's initial layout is due to the epigenetic pre-birth environment (although the evidence behind this belief wasn't detailed).
No he doesn't.
Cool - were it not for your comment, I wouldn't have ever heard the correction.
Off the top of my head:
The laws of physics
9 months in the womb
The rest of your organs (maybe)
Your entire childhood...
These are barriers to developing Kurzweil's simulator in the first place, NOT to implementing it in as few lines of code as possible. A brain-simulating machine might easily fit in a million lines of code, and it could be written by 2020 if the singularity happens first - but not by involving actual proteins. That's idiotic.
If this press release isn't overstating its case, AIXItl or some other unFriendly Bayesian superintelligence just got a lot closer.
Can't decide? With the Universe Splitter iPhone app, you can do both! The app queries a random number generator in Switzerland which releases a single photon into a half-silvered mirror, meaning that according to MWI each outcome is seen in one branch of the forking Universe. I particularly love the chart of your forking decisions so far.
If all you want is single bits from a quantum random number generator, you can use this script.
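For illustration, here is a minimal sketch of what such a script might look like. It assumes the ANU quantum RNG's public JSON API; the endpoint and response format are my assumptions, not taken from the script linked above:

```python
# Minimal sketch: fetch one quantum-random bit over HTTP.
# Assumes the ANU QRNG JSON API; the endpoint and response format are
# my assumption, not necessarily what the linked script uses.
import json
import urllib.request

def quantum_bit() -> int:
    url = "https://qrng.anu.edu.au/API/jsonI.php?length=1&type=uint8"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["data"][0] & 1  # keep only the lowest bit

print("Do it!" if quantum_bit() else "Don't.")
```

Under MWI, the appeal is the same as the app's: the bit's value differs across branches, so "both" outcomes get acted on somewhere.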
With an endorsement from E8 surfer dude!
[Originally posted this in the first August 2010 Open Thread instead of this Part 2; oops]
I've been wanting to change my username for a while, and have heard from a few other people who do too, but I can see how this could be a bit confusing if someone with a well-established identity changes their username. (Furthermore, at LW meetups, when I've told people my username, a couple of people have said that they didn't remember specific things I've posted here, but had some generally positive affect associated with the name "ata". I would not want to lose that affect!) So I propose the following: Add a "Display name" field to the Preferences page on LW; if you put something in there, then this name would be shown on your user page and your posts and comments, next to your username. (Perhaps something like "ata (Adam Atlas)" — or the other way around? Comments and suggestions are welcome.)
I'm willing to code this if there's support for it and if the administrators deem it acceptable.
Your username reminds me of a scene from one of my favorite South Park episodes (slightly NSFW).
Is there a post dealing with the conflict between the common LW belief that there are no moral absolutes, and that it's okay to make current values permanent; and the belief that we have made moral progress by giving up stoning adulterers, slavery, recreational torture, and so on?
I'm not sure that both of those are common LW beliefs (at least common in the same people at the same time), but I don't see any conflict there. If there are no moral absolutes, then making current values permanent is just as good as letting them evolve as they usually do.
Who here advocates making current values permanent?
Replace "making current values permanent" with CEV jargon on extrapolating volition into future minds within a trajectory determined by current human values. (The CEV program still needs to demonstrate that means something different from "making current values permanent". Details depend, among other things, on how or whether you split values up into terminal and instrumental values.)
It certainly explicitly claims not to be doing that - see:
"2. Encapsulate moral growth." - http://singinst.org/upload/CEV.html
I'm thinking of signing up for cryonics. However, one point that is strongly holding me back is that cryonics seems to require signing a DNR (Do Not Resuscitate) order. If there's a chance at resuscitation, I'd like all attempts to be made, and cryonics used only when it is clear that the other attempts to keep me alive will fail. I'm not sure that this is easily specifiable under current law and the way cryonics is currently set up. I'd appreciate input on this matter.
What did you read that makes it seem this way? I haven't run into this before.
A variety of places mention it. Alcor mentions it here. Cryonics.org discusses the need for some form of DNR although the details don't seem to be very clear there. Another one that discusses it is this article which makes the point that repeated attempts at resuscitation can lead to additional brain damage although at least from the material I've read I get the impression that as long as it doesn't delay cryopreservation by more than an hour or two that shouldn't be an issue.
You don't have to sign a DNR or objection to autopsy to get cryonics. The autopsy objection is recommended, but not required. It looks like Alcor wants terminally ill people to sign a DNR, not typical healthy people.
I've signed a religious objection to autopsy (California doesn't seem to allow an atheistic objection to autopsy), but never has a DNR been mentioned to me by anyone at Alcor.
Thanks. That helps clarify things.
Which is just a tad ironic. Atheists are people who consider the physical state of their brain to be all that is 'them'. Most religious people assume their immortal soul has traipsed off some place - a paradise, or at the very least a brand spanking new (possibly animalian) body.
File under "Less Wrong will rot your brain":
At my day job, I had to come up with and code an algorithm which assigned numbers to a list of items according to a list of sometimes-conflicting rules. For example, I'd have a list of 24 things that would have to be given the numbers 1-3 (to split them up into groups) according to some crazy rules.
The first algorithm I came up with was:
Of course, I did not try to implement this algorithm. Rather, I ended up solving the problem (mostly) using about 100 lines of perl and no AI.
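For flavor, here's a hypothetical Python reconstruction of the non-AI approach (the actual solution was perl, and I'm guessing at its shape): treat each rule as a predicate that can veto an (item, group) pair, then greedily fill the least-full admissible group and flag conflicts:

```python
# Hypothetical sketch of the "100 lines, no AI" approach, reconstructed
# in Python. Items get a group number 1..n; rules are predicates that
# veto an (item, group) assignment given what's been assigned so far.
from typing import Callable

Rule = Callable[[str, int, dict], bool]  # returns True if the assignment is OK

def assign_groups(items: list[str], n_groups: int, rules: list[Rule]) -> dict:
    assignment: dict[str, int] = {}
    for item in items:
        # try groups from least-full to most-full
        candidates = sorted(range(1, n_groups + 1),
                            key=lambda g: sum(v == g for v in assignment.values()))
        for group in candidates:
            if all(rule(item, group, assignment) for rule in rules):
                assignment[item] = group
                break
        else:
            # conflicting rules: force the least-full group and flag it
            assignment[item] = candidates[0]
            print(f"warning: rules conflict for {item}, forced into {candidates[0]}")
    return assignment

# Example rule: two named items must not share a group.
def apart(a: str, b: str) -> Rule:
    return lambda item, g, asg: not (item in (a, b)
                                     and asg.get(b if item == a else a) == g)

print(assign_groups([f"item{i}" for i in range(24)], 3,
                    [apart("item0", "item1")]))
```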
What was the application?
This is what happens to me whenever I start to write a difficult program in C++: I start by building an innovative system which solves the problem with minimal intervention on my part, and then eventually set up a kludge using heuristics which gets the same thing done in a fraction of the time.
I recently started taking piracetam, a safe and unregulated (in the US) nootropic drug that improves memory. The effect (at a dose of 1.5g/day) was much stronger than I anticipated; I expected the difference to be small enough to leave me wondering whether it was mere placebo effect, but it has actually made a very noticeable difference in the amount of detail that gets committed to my long-term memory.
It is also very cheap, especially if you buy it as a bulk powder. Note that when taking piracetam, you also need to take something with choline in it. I bought piracetam and choline citrate as bulk powders, along with a bag of empty gelatin capsules and a scale. (Both piracetam and choline citrate taste extremely vile, so the gel caps are necessary. Assembling your own capsules is not hard, and can be done at a rate of approximately 10/minute with a tolerance of +/- 10% once you get the hang of it.)
I strongly recommend that anyone who has not tried piracetam stop procrastinating and order some. Yes, people have done placebo-controlled studies. No, there are not any rare but dangerous side effects. Taking piracetam is an unambiguous win if you want to learn and remember things.
Question: did you find that it leads to faster grokking (beyond the effects of improvement of raw recall ability)?
I don't know, but I think it's just memory. This is almost impossible to self-test, since there's a wide variance in problem difficulty and no way to estimate difficulty except by speed of grokking itself.
Two questions:
-How much does it cost?
-How soon do you start becoming desensitized to it, if at all?
I ordered from here at a price of $46 for 500g each of piracetam and choline citrate, plus $10 for gel caps and $20 for a scale (which is independently useful).
I could not find any reported instances of desensitization to piracetam, so I don't think it's an issue.
I'm trying out nootropics, adding them one at a time. Next on my list to try is sulbutiamine; I've seen claims that it prevents mental fatigue, and it too has basically zero side-effect risks. Also on my list to try are lion's mane, aniracetam, l-tyrosine and fish oil. All of these are unregulated in the US.
I also use adrafinil, which greatly improves my focus. However, it's more expensive and it can't be used continuously without extra health risks, so I only use it occasionally rather than as part of my daily regimen. (There's an expensive and prescription-only related drug, modafinil, which can be used continuously.)
Sounds good. Be sure to report back once you test out the others-- nootropics are very interesting to me, and I think generally useful to the community as well.
First, the results of a Wikipedia check: "There is very little data on piracetam's effect on healthy people, with most studies focusing on those with seizures, dementia, concussions, or other neurological problems." That seems to weaken the assurance of safety for everyday use. Otherwise, most of the sources appear to agree with your advertising. I too would like to see memory tests for these drugs, but preferably in a large and random sample of people, with one control group given a placebo and another taking the tests with no aid of any kind - as well as a long-term test to check for diminishing effectiveness or side effects. With my memory, I would pay a considerable amount to improve it, but first I want to see a wide-scale efficacy test.
Working on it. Give me a few years.
Why? Given the low cost and risk of trying it out, the high possible benefits, and the high probability that results will depend on individual genetic or other variations and so will not reach significance in any study, wouldn't the reasonable thing be to try it yourself, even if the wide-scale test had already concluded it had no effect?
Using your logic, I would be forced to try a large proportion of all drugs ever made. My motivation to buy this drug is close to my motivation to buy every other miracle drug out there, I want more third party tests of each one so I can make a more informed decision of where to spend my money, instead of experimenting on hundreds per month. Also, it does not have a DIN number in Canada, so I would need to import it.
A large proportion of drugs ever made have been claimed to improve memory and have a long history of null-results for side-effects and positive results for mental improvement?
Indeed.
I'm looking for something that I hope exists:
Some kind of internet forum that caters to the same crowd as LW (scientifically literate, interested in technology, roughly atheist or rationalist) but is just a place to chat about a variety of topics. I like the crowd here but sometimes it would be nice to talk more casually about stuff other than the stated purpose of this blog.
Any options?
I honestly keep hoping that subreddits will be implemented here sometime soon. Yes, "off-topic" discussion technically doesn't fit the stated purpose of the site, but the alternative - LWers who want off-topic discussion having to migrate to some other forum - seems ridiculous to me.
Stardestroyer.net fits that description somewhat, for values of "casually" that allow for copious swearing punctuating most disagreements. I haven't posted there, but Kaj Sotala posts as Xuenay (~~apologies~~ no apologies for stalking).
Examples of threads on LW-related topics:
(Edited after first upvote; later edited again to add a link.)
Pharyngula. More atheist than rational, and more biology than technology, but it is definitely a community. It is a blog, but has an interesting feature called the endless thread which is kind of "collective stream of consciousness". Check it out. And also look at other offerings in the science blogosphere.
[Edit: supplied link.]
I would not suggest Pharyngula for this purpose. The endless thread is fun, but the rationality level there is not very high. It is higher than that of a random internet forum, but I suspect that many LWians would quickly become annoyed at the degree to which arguments are treated as soldiers.
Agreed. Myers himself is way too political, and basically not nice. If anybody ever calls him on that, they get an insane level of vitriol and accusations stopping just short of being in league with the Vatican.
Truth, bro. It is pretty rowdy.
This is something I'd very much like as well. If you find anything, let me know. xkcd forums can be pretty good, though I haven't been on there in a while.
If all else fails, we can make one of our own.
I was thinking the same thing.
Pattern matching, signalling:
Link from The Agitator.
I'm considering starting a Math QA Thread at the toplevel, due to recent discussions about the lack of widespread math understanding on LW. What do you say?
I have been wanting something like this on LW for quite a while, but wasn't sure it was on topic. With your linked post in mind, however, I think this is a good idea, and I, for one, would be an active participant.
Here is all the math you need to know to understand most of LW (correct me if I'm wrong):
I'm working through all of it right now. Not very far yet though.
You might want to add computer science and basic programming knowledge too.
Some people, including me, can get away with knowing much less and just figuring stuff out as we go along. I'm not sure if anyone can learn this ability, but for me personally it wasn't inborn and I know exactly how I acquired it. Working through one math topic properly at school over a couple years taught me all the skills needed to fill any gaps I encountered afterwards. University was a breeze after that.
The method of study was this: we built one topic (real analysis) up from the ground floor (axiomatization of the reals), receiving only the axioms and proving all theorems by working through carefully constructed problem sets. An adult could probably condense this process into several months. It doesn't sound like much fun - it's extremely grueling intellectual work of the sort most people never even attempt - but when you're done, you'll never be afraid of math again.
I had to figure ALL of it out myself, without the help of anyone in meatspace. I lack any formal education worth mentioning. The very language I'm writing in right now is almost completely self-taught. It took me half a decade to get here, irrespective of my other problems. That is, most of the time I wasn't learning anything, but merely pondering what the right thing to do was in the first place. Only now have I gathered enough material, intention, and basic tools to tackle my lack of formal education.
Ok, you might add some logic and set theory as well if you want to grasp the comments. Although some comment threads go much further than that.
I'm not sure that people necessarily know what questions they need to ask, or even that they need to ask.
A math Q&A seems like a good idea, but it would be a better idea if there were some "the math you need for LW" posts first.
There was a very nice piece here (possibly a quote) on how to think about math problems - no more than a few paragraphs long. It was about how to break things down and the sorts of persistence needed. Anyone remember it?
I just found a new blog that I'm going to follow: http://neuroanthropology.net/
This post is particularly interesting: http://neuroanthropology.net/2009/02/01/throwing-like-a-girls-brain/
Possible new barriers to Moore's Law: small chips may not have enough power to use the maximum transistor density they have available. The article also discusses how other apparent barriers (such as leaky gates) have been overcome in the past, including this amusing line:
Problems with high-stakes, low-quality testing
What does Less Wrong know about the Myers-Briggs personality type indicator? My sense is that it's a useful model for some things, but I'm most interested in how useful it is for relationships. This site suggests that each personality type pair has a specific type of relationship, while this site only comments on what the ideal pair is for any given type. But the two sites disagree about what the ideal pairings are.
Personality Page is not mainstream Jungian; they seem to be of the opinion that sharing a dominant trait of opposite attitude is most beneficial. More mainstream MBTI sites will tend to agree with Socionics that completely opposite traits are the most complementary (for example Fe and Ti) but disagree on which of these traits correlates to a J or P.
So if you go by the theory that J/P correlates to extroverted conscious traits (the MBTI position), INTP and ESFJ are complementary. If you go by the theory that J/P correlates to the dominant trait, INTJ is ESFJ's dual. Socionics sites tend to take this position.
Note that while these letters should be completely exclusive for introverts, many of the introvert profiles seem to be the same (or suspiciously similar) between the systems, particularly with sensing types. So an (alleged) ISFP MBTI may actually be ISFP in Socionics.
That would imply that someone is wrong/confused. Either the profiles are uselessly vague (Forer effect - no better than astrology charts for identifying this particular feature), the traits aren't actually real empirical phenomena (Si1 is indistinguishable from Se2), or the traits are being defined differently (such that Si1 in system A is actually Se2 in system B).
To confuse/complicate matters more, all the traits have various features in common with each other: S+T are pragmatic and "hard", T+N are theoretical/consequence-based, F+N are abstract and ideal, F+S are aesthetic and social, just as T+F are judging and S+N are perceiving. So profiles could have varying accuracy while describing surface aspects of real traits, yet not distinguishing them from each other well enough to be useful.
Now, if you just want to use this to find a prospective spouse or best friend who is your dual type, and don't care so much about the theoretical correctness of who is what type, there's a work-around: find someone who appears opposite on the first three letters, then see if they make you comfortable or not. If they have shared values and a compatible sense of humor, chances are relatively high that they are a dual type rather than a conflict type.
But which view (if any) makes good predictions in the relationship department?
EDIT: A quick survey of abstracts on google scholar suggests that marital satisfaction is not related to the MB personality types of the couples.
That is interesting. I would expect there to be some significant differences in relationship quality among MB types even if the types are only somewhat correlated (under the assumption that socionics is correct).
One of the better sites on the topic is Rick DeLong's Socionics.us. He says there is only roughly a 30% correlation between MBTI types and Socionics types. Boulakov is also skeptical of the validity of MBTI typings. Perhaps the correlation is not high enough to obtain meaningful results here. I will be updating my beliefs on the matter, as this implies most MBTI types are mistaken if socionics is valid.
Honestly though, it really does look a lot like motivated cognition on the part of socionists. I mean, they do have a coherently self-consistent theory, but references to external data are suspiciously scarce. They seemingly start with the assumption (based on the anecdotal observations of Augusta, socionics' founder, and others after her) that these relationship preferences between distinct types exist, find subjective validation, and then go from there to assert that the MBTI is just not accurate enough at determining the traits socionics is based on. So, for example, if two people who are claimed to be ISFp and ENTp (where lowercase p is "irrational") do not get along, socionists will say the typing is invalid rather than that the theory is wrong. But if relationships are the only acid test of a typing, and relationships are the only thing predicted by the typing, it's turned into a vague "if you like these kinds of people you will like these kinds of people".
However, it's not entirely hopeless, because there are more specific predictions to validate. As an example, given a valid ISFp/ENTp pair, socionics also predicts that the ISFp will be a supervisor ("supervision transmitter" in DeLong's terms) for the INFj type, whereas the ENTp will be the "request transmitter" or beneficiary for the INFj. So if you could design a set of experimental test situations where supervision and request are distinguishable from other types of interaction (perhaps a game of some sort), you could set up a series of meetings between test subjects and see if it checks out. You could verify a given dual pair by their interactions with a given supervision/request receiver first, then arrange a meeting between them and see if they have more compatibility than the control group. The same thing could be verified with the dyad's supervision/request transmitter type.
I think there are better models to use when considering relationships. I note that such models are often useful inasmuch as they provide a language for describing intuitive associations that we pick up through observation. The model is not terrible, being a formalisation of the 'opposites attract' conventional wisdom with consideration given to how different people relate on intellectual and emotional levels.
As for MBTI, I have found it useful in some regards. I know, for example, that I can basically rule out relationships with anyone who comes in as a "J". I just find "J"s annoying ('judgemental' of me, I know!)
Edit: The links you provide are... interesting. I must admit I have rather strong doubts about just how accurate those physical descriptions of various personality types are!
Like what? (I'm intrigued)
Me too. There does seem to be some correlation between physical appearance and personality, but those details are rather burdensome.
A little fiction about specialized AI.
There was a question recently about whether neurons were like computers or something like that. I cannot find the comment although I replied at the time. Today I came across an article that may interest that questioner. http://www.sciencedaily.com/releases/2010/08/100812151632.htm
I thought of Robin Hanson and ems as I read this:
--David Foster Wallace, "The String Theory", July 1996 Esquire
Informal poll to ensure I'm not generalizing from one example:
How frequently do you find yourself able to remember how you feel about something, but unable to remember what produced the feeling in the first place (i.e., you remember you hate Steve but can't remember why)?
It seems like this is a cognitive shortcut, giving us access only to the "answer" that's already been computed (how to act vis-a-vis Steve) instead of wasting energy and working memory re-accessing all the data and re-performing the calculation.
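In programming terms, this is memoization that keeps the verdict and discards the inputs. A toy illustration (my analogy, nothing from the poll):

```python
# Toy analogy: a cache that stores only the verdict, not the evidence.
import functools

evidence_log = {"steve": ["lied about the rent", "kicked my dog"]}

@functools.lru_cache(maxsize=None)
def verdict(person: str) -> str:
    # Expensive "computation" over all remembered evidence...
    return "avoid" if evidence_log.get(person) else "fine"

print(verdict("steve"))   # computed once from the evidence
evidence_log.clear()      # the underlying reasons are forgotten...
print(verdict("steve"))   # ...but the cached verdict survives: "avoid"
```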
All the time.
Generally I only remember the cause of my dispositions if I feel it's important in itself. In the case of something like political beliefs I have a commitment to throwing out anything I can't justify through verbal reasoning and reference to fundamental values. With likes and dislikes - from those fundamental values to flavors of ice cream - I don't consider the path that got me there particularly important.
Sometimes I need the justification for ulterior or social reasons. It doesn't particularly matter to me why I like the people I happen to like, but I try to examine them at least enough that I can give them substantive compliments when appropriate, even if this examination doesn't come naturally to me. Cynically, most "debate topics" are like this - you need the justificatory reasoning in order to engage in cocktail conversations.
I would say that most people I know easily fit this heuristic, but I almost never employ it, based on the way I remember people. When I have been in a conflict with someone, I can quite easily recall a categorized list of everything I dislike about them and a few fights we have had, and vice versa for people I like. What this means, essentially, is that I have a very hard time remaining angry or happy with people, because it requires constant use of resources, and it also seems to affect my ability to remember meeting people at all. Since I store memories of other people using events instead of descriptions, if I have never had a particularly eventful interaction with someone, remembering their name or any other info is almost impossible.
Occasional but rare. I have more of a problem where I have some feeling for some reason, then find out I was wrong about that reason, and then need to make an effort to adjust my feelings to fit the data. But I generally remember the cause of my feelings. The only exception is that occasionally I'll vaguely remember that some approach to a problem doesn't work at all but won't remember why (it generally turns out that I spent a few days at some point in the past trying to use that method to solve something and got negative results showing that the method wasn't very useful).
Extremely rare, but I suspect I'm in the same boat as wedrifid in that I seem to have above-average memory.
Yes -- and I agree that it's probably a cognitive shortcut, because it's also something that happens with purely conceptual ideas. I'll forget the definition of a word, but remember whether it's basically a positive or negative notion. Yay/Boo is surprisingly efficient shorthand for describing anything.
This has happened to me but not often.
There is a reason for it. Current thinking is that memories of events are retrieved by a process in the hippocampus (until the memories become substantially re-consolidated). Memories of strong emotional experiences are also retrieved by a process in the amygdala; these are not memories of events, but just a link between the emotion and the object that caused it. In recent memories the two are usually connected - if the amygdala retrieves, it prompts the hippocampus to do so also, and if the hippocampus does the retrieving, it triggers the amygdala. But the two processes can become disconnected for a particular memory pair. You see X and feel the memory of fear or anger, but not the episodic memory of when and where you felt that emotion towards X. The amygdala retrieves but the hippocampus fails to.