I can't say I disagree.
Of course experimental design is very important in general. But VAuroch and I agree that when two designs give rise to the same likelihood function, the information that comes in from the data is equivalent. We disagree about the weight to give to the information that comes in from what the choice of experimental design tells us about the experimenter's prior state of knowledge.
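To make that concrete, here's the textbook illustration in code -- a quick sketch of my own, with made-up numbers (n = 12 trials, k = 3 successes), not anything from the thread. A fixed-n binomial design and a sample-until-k-successes negative binomial design give likelihood functions for p that differ only by a constant factor, so they carry the same information about p:

```python
import numpy as np
from scipy.stats import binom, nbinom

# Design A: fix n = 12 trials, observe k = 3 successes           (binomial)
# Design B: sample until k = 3 successes; it takes n = 12 trials (negative binomial:
#           9 failures before the 3rd success)
k, n = 3, 12
p_grid = np.linspace(0.01, 0.99, 99)

lik_binomial = binom.pmf(k, n, p_grid)       # C(12,3) * p^3 * (1-p)^9
lik_negbinom = nbinom.pmf(n - k, k, p_grid)  # C(11,2) * p^3 * (1-p)^9

ratio = lik_binomial / lik_negbinom
print(ratio.min(), ratio.max())  # constant (= 4.0): the likelihoods are proportional
```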
you're ignoring critical information
No, in practical terms it's negligible. There's a reason that double-blind trials are the gold standard -- it's because doctors are as prone to cognitive biases as anyone else.
Let me put it this way: recently a pair of doctors looked at the available evidence and concluded (foolishly!) that putting fecal bacteria in the brains of brain cancer patients was such a promising experimental treatment that they did an end-run around the ethics review process -- and after leaving that job under a cloud, one of them was still ...
Thanks for the sci-hub link. So awesome!
You're going to have a hard time convincing me that... vectors are a necessary precursor for regression analysis...
So you're fitting a straight line. Parameter estimates don't require linear algebra (that is, vectors and matrices). Super. But the immediate next step in any worthwhile analysis of data is calculating a confidence set (or credible set, if you're a Bayesian) for the parameter estimates; good luck teaching that if your students don't know basic linear algebra. In fact, all of regression analysis, from the most basic least squares estimator ...
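To put a point on it, here's a sketch of my own (simulated data, nothing from the thread): even the 95% confidence interval for the slope of a straight-line fit already runs through the design matrix and (X^T X)^{-1}.

```python
import numpy as np
from scipy import stats

# Straight-line fit y = b0 + b1*x, plus a 95% confidence interval for the slope.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

X = np.column_stack([np.ones_like(x), x])   # design matrix
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                # least squares estimate

resid = y - X @ beta_hat
dof = x.size - 2
sigma2_hat = resid @ resid / dof            # residual variance estimate
se_slope = np.sqrt(sigma2_hat * XtX_inv[1, 1])

t_crit = stats.t.ppf(0.975, dof)
ci = (beta_hat[1] - t_crit * se_slope, beta_hat[1] + t_crit * se_slope)
print(beta_hat, ci)
```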
Consciousness is the most recent module, and that does mean [that drawing causal arrows from consciousness to other modules of human mind design is ruled out, evolutionarily speaking.]
The causes of the fixation of a genotype in a population are distinct from the causal structures of the resulting phenotype instantiated in actual organisms.
Sure, I agree with all of that. I was just trying to get at the root of why "nobody asked [you] to take either vow".
I also hadn't heard anybody speak about taking those kinds of vows to oneself before.
It's not literal. It's an attempt at poetic language, like The Twelve Virtues of Rationality.
I don't disagree with this. A lot of the kind of math Scott lacks is just rather complicated bookkeeping.
(Apropos of nothing, the word "bookkeeping" has the unusual property of containing three consecutive sets of doubled letters: oo, kk, ee.)
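(It's also a one-liner to check -- my own throwaway snippet:)

```python
# Verify that "bookkeeping" contains three consecutive doubled letters.
word = "bookkeeping"
hits = [
    word[i:i + 6]
    for i in range(len(word) - 5)
    if word[i] == word[i + 1] and word[i + 2] == word[i + 3] and word[i + 4] == word[i + 5]
]
print(hits)  # ['ookkee']
```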
I have the sort of math skills that Scott claims to lack. I lack his skill at writing, and I stand in awe (and envy) at how far Scott's variety of intelligence takes him down the path of rationality. I currently believe that the sort of reasoning he does (which does require careful thinking) does not cluster with mathy things in intelligence-space.
Scott's technique for shredding papers' conclusions seems to me to consist mostly of finding alternative stories that account for the data and that the authors have overlooked or downplayed. That's not really a math thing, and it plays right to his strengths.
Causal stories in particular.
I actually disagree that having a good intuitive grasp of "stories" of this type is not a math thing, or a part of the descriptive statistics magisterium (unless you think graphical models are descriptive statistics). "Oh but maybe there is confounder X" quickly becomes a maze of twisty passages where it is easy to get lost.
"Math things" is thinking carefully.
I think equating lots of derivation mistakes or whatever with poor math ability is: (a) toxic and (b) wrong. I think the innate ability...
Maybe for the bit about signalling in the last paragraph...? Just guessing here; perhaps Kawoomba will fill us in.
I like it when I can just point folks to something I've already written.
The upshot is that there are two things going on here that interact to produce the shattering phenomenon. First, the notion of closeness permits some very pathological models to be considered close to sensible models. Second, the optimization to find the worst-case model close to the assumed model is done in a post-data way, not in prior expectation. So what you get is this: for any possible observed data and any model, there is a model "close" to the assumed one that predict...
It's a rather confusing way of referring to a "biased point of view". Saying that "Person A has privilege" wrt. some issue is a claim that A's overall observations and experiences are unrepresentative, and so she should rely on others' experiences as much as on her own.
That's not quite correct; I think it's best to start with the concept of systematic oppression. Suppose for the sake of argument that some group of people is systematically oppressed, that is, on account of their group identity, the system in which they find themselves...
I'm a SSC fan and highly sympathetic to SJ goals and ideals. One of the core LW meetup members in my city can't stand to read SSC on account of what he perceives to be constant bashing of SJ. (I've already checked and verified that his perception of the proportion of SJ bashing in SSC posts is a massive overestimate, probably caused by selection bias.) As a specific example of verbiage that he considers typical of SSC he cited:
...And the people who talk about “Nice Guys” – and the people who enable them, praise them, and link to them – are blurring the alre
Embarrassingly, I didn't have the "who feeds Paris" realization until last year -- well after I thought I had achieved a correct understanding of and appreciation for basic microeconomic thought.
Nice choice of username. :-)
Same special-snowflake level credible limits, but for different reasons. Swimmer963 has an innate drive to seek out and destroy (whatever she judges to be) her personal inadequacies. She wasn't very strategic about it in teenager-hood, but now she has the tools to wield it like a scalpel in the hands of a skilled surgeon. Since she seems to have decided that a standard NPC job is not for her, I predict she'll become a PC shortly.
You're already a PC; your strengths are a refusal to tolerate mediocrity in the long-term (or let us say, in the "indefinite" term, in multiple senses) and your vision for controlling and eradicating disease.
FWIW, in my estimation your special-snowflake-nature is somewhere between "more than slightly, less than somewhat" and "potential world-beater". Those are wide limits, but they exclude zero.
Hikikomori no more? If so (as seems likely what with the girlfriend and all), it gladdens me to hear it.
In the biz we call this selection bias. The most fun example of this is the tale of Abraham Wald and the Surviving Bombers.
I was working in protein structure prediction.
I confess to being a bit envious of this. My academic path after undergrad biochemistry took me elsewhere, alas.
Try it -- the first three chapters are available online here. The first one is discursive and easy; the math of the second chapter is among the most difficult in the book and can be safely skimmed; if you can follow the third chapter (which is the first one to present extensive probability calculations per se) and you understand probability densities for continuous random variables, then you'll be able to understand the rest of the book without formal training.
The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regard to that goal, the sequences are advertising.
Kinda... more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses but who never finds out about FAI research at all.
Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.
But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.
Since we're just bouncing short comments off each other at this point, I'm going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:
...Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. ...If th
Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) already expands our capabilities -- being able to run those computations in places pink goo can't go and at speeds pink goo can't manage is already a huge leap.
I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.
Mental processes inside someone's mind actually happen in physical reality.
Just kidding; I know that's not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances...
Because the solution has an immediate impact on the exercise of intelligence, I guess? I'm a little unclear on what other problems you have in mind.
That's because we live in a world where... it's not great, but better than speculating on other people's psychological states.
I wanted to put something like this idea into my own response to Lumifer, but I couldn't find the words. Thanks for expressing the idea so clearly and concisely.
I wasn't talking about faster progress as such, just about a predictable single large discontinuity in our capabilities at the point in time when the em approach first bears fruit. It's not a continual feedback, just an application of intelligence to the problem of making biological computations (including those that implement intelligence) run on simulated physics instead of the real thing.
I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of the actual state of the world. I'll make a weaker claim -- when I'm engaging conscious effort in trying to figure out how the world is and I notice myself doing it, I try to stop. Less Wrong, not Absolute Perfection.
Pretty much everyone does that almost all the time. So, is everyone blameworthy? Of course, if everyone is blameworthy then no one is.
That's a pretty good example of the Fallacy of Gray right there.
Hmm... let me think...
The materialist thesis implies that a biological computation can be split into two parts: (i) a specification of a brain-state; (ii) a set of rules for brain-state time evolution, i.e., physics. When biological computations run in base reality, brain-state maps to program state and physics is the interpreter, pushing brain-states through the abstract computation. Creating an em then becomes analogous to using Futamura's first projection to build in the static part of the computation -- physics -- thereby making the resulting program s...
It won't have source code per se, but one can posit the existence of a halting oracle without generating an inconsistency.
My intuition -- and it's a Good one -- is that the discontinuity is produced by intelligence acting to increase itself. It's built into the structure of the thing acted upon that it will feed back to the thing doing the acting. (Not that unique an insight around these parts, eh?)
Okay, here's a metaphor(?) to put some meat on the bones of this comment. Suppose you have an interpreter for some computer language and you have a program written in that language that implements partial evaluation. With just these tools, you can make the partial evaluator (i) act...
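Here's a deliberately tiny sketch of my own to give the flavour of that first step -- a toy interpreter for arithmetic expressions plus a "specializer" that folds a fixed program into a residual function, so that only the dynamic input remains:

```python
# Toy illustration of specializing an interpreter to a fixed program
# (the flavour of Futamura's first projection, in miniature).
# Expressions: ("const", c) | ("var", name) | ("add", e1, e2) | ("mul", e1, e2)

def interp(expr, env):
    """The interpreter: re-walks the program on every call."""
    tag = expr[0]
    if tag == "const":
        return expr[1]
    if tag == "var":
        return env[expr[1]]
    if tag == "add":
        return interp(expr[1], env) + interp(expr[2], env)
    if tag == "mul":
        return interp(expr[1], env) * interp(expr[2], env)
    raise ValueError(f"unknown tag {tag!r}")

def specialize(expr):
    """'First projection': fold the static program into a residual function
    that only needs the dynamic input (the environment)."""
    tag = expr[0]
    if tag == "const":
        c = expr[1]
        return lambda env: c
    if tag == "var":
        name = expr[1]
        return lambda env: env[name]
    if tag == "add":
        f, g = specialize(expr[1]), specialize(expr[2])
        return lambda env: f(env) + g(env)
    if tag == "mul":
        f, g = specialize(expr[1]), specialize(expr[2])
        return lambda env: f(env) * g(env)
    raise ValueError(f"unknown tag {tag!r}")

program = ("add", ("mul", ("const", 3), ("var", "x")), ("const", 1))  # 3*x + 1
compiled = specialize(program)  # the program is now "built in"; only x is dynamic
print(interp(program, {"x": 4}), compiled({"x": 4}))  # 13 13
```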
Fungible. The term is still current within economics, I believe. If something is fungible, it stands to reason that one can funge it, nu?
As Vaniver mentioned, it relates to exploring trade-offs among the various goals one has / things one values. A certain amount of it arises naturally in the planning of any complex project, but it seems like the deliberate practice of introspecting on how one's goals decompose into subgoals and on how they might be traded off against one another to achieve a more satisfactory state of things is an idea that is novel, distinct, and conceptually intricate enough to deserve its own label.
Yeesh. These people shouldn't let feelings or appearances influence their opinions of EY's trustworthiness -- or "morally repulsive" ideas like justifications for genocide. That's why I feel it's perfectly rational to dismiss their criticisms -- that and the fact that there's no evidence backing up their claims. How can there be? After all, as I explain here, Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, the...
He had doubts, he extinguished them, and that's what makes him guilty.
This is not the whole story. In the quote
He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.
you're paying too much heed to the final clause and not enough to the clause that precedes it. The shipowner had doubts that, we are to understand, were reasonable on the available information. The key to the shipowner's... I prefer not to use the word "guilt", with its connotations of legal or celestial judgment -- let us s...
tl;dr: No, the subject of the site is wider than that.
Long version: IIRC, EY originally conceived of rationality as comprising two relatively distinct domains: epistemic rationality, the art and science of ensuring the map reflects the territory, and instrumental rationality, the art and science of making decisions and taking actions that constrain the future state of the universe according to one's goals. Around the time of the fork of CFAR off of SIAI-that-was, EY had expanded his conception of rationality to include a third domain: human rationality, th...
clearly advertising propaganda
It's not clear to me -- I'm not even sure what you think it's advertising!
(ETA: I wrote a bunch of irrelevant stuff, but then I scrolled up and saw (again, but it somehow slipped my mind even though I friggin' quoted it in the grandparent, I'm going senile at the tender age of 36) that you specifically think it's advertising for CFAR, so I've deleted the irrelevant stuff.)
Advertising for CFAR seems like a stretch, because -- although very nice things are said about Anna Salamon -- the actual product CFAR sells isn't mentioned at all.
My conclusion: there might be an interesting and useful post to be written about how epistemic rationality and techniques for coping with ape-brain intersect, and ShannonFriedman might be capable of writing it. Not there yet, though.
...a long advertisement for CFAR...
...containing an immediately useful (or at least, immediately practicable) suggestion, as, er, advertised.
Awesome, thanks!
Meh. That's only a problem in practice, not in principle. In principle, all prediction problems can be reduced to binary sequence prediction. (By which I mean, in principle there's only one "area".)
I invite you to spell out the prediction that you drew about the evolution of human intelligence from your theory of humor and how the recently published neurology research verified it.
What if it was very hard to produce an intelligence that was of high performance across many domains?... There are a few strong counters to this - for instance, you could construct good generalists by networking together specialists...
In fact, we already know the minimax optimal algorithm for combining "expert" predictions (here "expert" denotes an online sequence prediction algorithm of any variety); it's the weighted majority algorithm.
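For concreteness, here's a sketch of my own of the deterministic version (the penalty factor beta and the tiny example data are made up for illustration):

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Deterministic weighted majority over binary experts.

    expert_preds: list of rounds, each a list with one 0/1 prediction per expert.
    outcomes: list of observed 0/1 outcomes, one per round.
    Returns (number of mistakes made, final expert weights)."""
    weights = [1.0] * len(expert_preds[0])
    mistakes = 0
    for preds, outcome in zip(expert_preds, outcomes):
        # Predict by weighted vote.
        vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_zero = sum(w for w, p in zip(weights, preds) if p == 0)
        prediction = 1 if vote_one >= vote_zero else 0
        if prediction != outcome:
            mistakes += 1
        # Penalize every expert that was wrong this round.
        weights = [w * beta if p != outcome else w for w, p in zip(weights, preds)]
    return mistakes, weights

# Tiny usage example: three experts, the third of which is always right.
preds = [[1, 0, 1], [0, 0, 1], [1, 1, 1], [0, 1, 0]]
truth = [1, 1, 1, 0]
print(weighted_majority(preds, truth))  # (1, [0.5, 0.125, 1.0])
```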
This is a field in which the discoverer of the theorem that rational agents cannot disagree was given the highest possible honours...