The catch-22 I would expect with CFAR's efforts is that anyone buying their services is already demonstrating a willingness to actually improve his/her rationality/epistemology, and is looking for effective tools to do so.
The bottleneck, however, is probably not the unavailability of such tools, but rather the capacity for introspection (or lack thereof) that produces a desire to actually pursue change, rather than simply virtue-signal the typical "I always try to learn from my mistakes and improve my thinking".
The latter mindset is the one most urgently...
The scarier thought is how often we're manipulated that way when people don't bungle their jobs. The few heuristics we use to identify such mischief are trivially fooled: a shill can establish plausibility by posting on other, inconsequential topics (on LW that at least incurs a measurable cognitive footprint, which is not the case on, say, Reddit), and then there's always Poe's law to consider. Shills man, shills everywhere!
As the dictum goes, just cuz you're paranoid ...
Reminds me of Ernest Hemingway's apparent paranoid delusions of being un...
Disclaimer: Only spent 20 minutes on this, so it might be incomplete, or you may already have addressed some of the following points:
At first glance, John Lowe authored two PubMed-listed papers on the topic.
The first appeared in an open journal with no peer review (Medical Hypotheses), which has also published material on, e.g., AIDS denialism. From his paper: "We propose that molecular biological methods can provide confirmatory or contradictory evidence of a genetic basis of euthyroid FS [Fibromyalgia Syndrome]." That's it. Proposing a hypothesis, not pro...
Hey! Hey. He. Careful there, apropos word inflation. It strikes with a force of no more than one thousand atom bombs.
Are you really arguing for keeping ideologically incorrect people barefoot and pregnant, lest they harm themselves with any tools they might acquire?
Sounds as good a reason as any!
maybe we should shut down LW
I'm not sure how much it counts, but I bet Chef Ramsay would've shut it down long ago. Betting is good, I've learned.
As seen in the first episode of the series Caprica, quoth Zoe Graystone:
"(...) the information being held in our heads is available in other databases. People leave more than footprints as they travel through life; medical scans, dna profiles, psych evaluations, school records, emails, recording, video, audio, cat scans, genetic typing, synaptic records, security cameras, test results, shopping records, talent shows, ball games, traffic tickets, restaurant bills, phone records, music lists, movie tickets, tv shows... even prescriptions for birth control.&quo...
"Mind" is a high level concept, on a base level it is just a subset of specific physical structures. The precise arrangement of water molecules in a waterfall, over time, matches if not dwarves the KC of a mind.
That is, if you wanted to recreate precisely this or that waterfall as it precisely happened (with the orientation of each water molecule preserved with high fidelity), the strict computational complexity would be way higher than for a comparatively more ordered and static mind.
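To make that concrete, here's a toy sketch (Python, using compressed size as a crude, imperfect stand-in for Kolmogorov complexity, which is itself uncomputable; the byte strings are purely illustrative, not actual waterfall or mind data):

```python
import os
import zlib

# Crude proxy: compressed length as a rough upper bound on description length.
# All data here is illustrative; "waterfall-like" just means maximally disordered.
n = 256 * 4000  # same number of bytes in both snapshots

waterfall_like = os.urandom(n)            # disordered: every byte independent
ordered_like = bytes(range(256)) * 4000   # regular, repeating structure

print("disordered snapshot compresses to:", len(zlib.compress(waterfall_like, 9)), "bytes")
print("ordered snapshot compresses to:   ", len(zlib.compress(ordered_like, 9)), "bytes")
```

The disordered snapshot barely compresses at all, while the regular one shrinks to a tiny fraction of its size; that's the sense in which pinning down a waterfall molecule by molecule can dwarf a comparatively more ordered and static structure.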
The data doesn't care what importance you ascribe to it. It's ...
LessWrong has now descended to actually arguing over the Kolmogorov complexity of the Christian God, as if this was a serious question.
Well, there is a lot of motivated cognition on that topic (relevant disclaimer: I'm an atheist in the conventional sense of the word) and it seems deceptively straightforward to answer (mostly by KC-dabblers), but it is in fact anything but. The non-triviality arises from technical considerations, not some philosophical obscurantism.
This may be the wrong comment chain to get into it, and your grandstanding doesn't exactly signal an immediate willingness to engage in medias res, so I won't elaborate for the moment (unless you want me to).
Good content, however I'd have preferred "You Are A Mind" or similar. You are an emergent system centered on the brain and influences upon it, or somesuch. It's just that "brain" has come to refer to 2 distinct entities -- the anatomical brain, and then the physical system generating your self. The two are not identical.
Well, I must say my comment's belligerence-to-subject-matter ratio is lower than yours. "Stamped out"? Such martial language, I can barely focus on the informational content.
The infantile nature of my name-calling actually makes it easier to take the holier-than-thou position (which my interlocutor did, incidentally). There's a counter-intuitive psychological layer to it which actually encourages dissent, and with it increases engagement on the subject matter (your own comment notwithstanding). With certain individuals at least, which I (correctl...
Certainly, within what's Good (tm) and Acceptable (tm), funding better education in the third world is the most effective method.
However, if you go far enough outside the Overton window, you don't need credibility, as long as the power asymmetry is big enough. You want food? It only comes with a chemical agent which sterilizes you, similar to Golden Rice. You don't need to accept it; you're free to starve. The failures of colonialism as well as the most recent forays into the Middle East stem from the constraints of also having to placate the court of publ...
I too have the impression that for the most part the scope of the "effective" in EA refers to "... within the Overton window". There's the occasional stray 'radical solution', but usually not much beyond "let's judge which of these existing charities (all of which are perfectly societally acceptable) are the most effective".
Now there are two broad categories to explain that:
a) Effective altruists want immediate or at least intermediate results / being associated with "crazy" initiatives could mean collateral damage t...
it massively restricts EA's maximum effectiveness
It's not that simple. Implementation of radical outside-of-Overton policies requires not only the willingness to do so. You need to have sufficient power to say "We will do this and the rest of the world can go jump into the lake".
EA is very very VERY far away from such an amount of power.
(That is a good thing)
I'm not sure LW is a good entry point for people who are turned away by a few technical terms. Responding to unfamiliar scientific concepts with an immediate surge of curiosity is probably a trait I share with the majority of LW'ers. While it's not strictly a prerequisite for learning rationality, it certainly is for starting in medias res.
The current approach is a good selector for dividing the chaff (well educated because that's what was expected, but no true intellectual curiosity) from the wheat (whom Deleuze would call thinkers-qua-thinkers).
HPMOR instead, maybe?
That's a good argument if you were to construct the world from first principles. You wouldn't get the current world order, certainly. But just as arguments against, say, nation-states, or multi-national corporations, or what have you, do little to dissuade believers, the same applies to let-the-natural-order-of-things-proceed advocates. Inertia is what it's all about. The normative power of the present state, if you will. Never mind that "natural" includes antibiotics, but not gene modification.
This may seem self-evident, but what I'm pointing ou...
Disclosing one's sexual orientation won't be (mis)construed as a status grab in the same way as disclosing one's (real or imagined) intellectual superiority. Perceived arguments from authority must be handled with supreme care, otherwise they invariably set the stage for a primate hierarchy contest. Minute details in phrasing can make all the difference: "I could engage with people much smarter than you, yet I choose to help you, since you probably need my help and my advice" versus "I have had the following experiences, hopefully someone [imper...
I dislike the trend to cuddlify everything, to make approving noises no matter what, and to frame criticisms as merely some avenue for potential further advances, or somesuch.
On the one hand, I do recognize that works better for the social animals that we are. On the other hand, aren't we (mostly) adults here? Do we really need our hands held constantly? It's similar to the constant stream of "I LOVE YOU SO MUCH" in everyday interactions: it's a race to the bottom in terms of deteriorating signal/noise ratios. How are we supposed to convey actual a...
I don't think that it carves reality at its joints to call that "mathematical ability."
... and we're down to definitional quibbles, which are rarely worth the effort, other than simply stating "I define x as such and such, in contrast to your defining x as such and such". Reality has no intrinsic, objective dictionary with an entry for "mathematical ability", so such discussions can mostly be solved by using some derivative terms x1 and x2, instead of an overloaded concept x.
Of course, the discussion often reduces to who ha...
I amended the grandparent. Suppose for the sake of argument you agreed with my estimate of this being the proverbial "last, best hope". Then giving away the one potentially game-changing advantage to barter for a globally insignificant "victory" would be the epitome of an overly greedy algorithm. Losing sight of the actual goal because an authority figure told you so, in a way not thinking for yourself beyond the task as stated.
Making that point sounds, on reflection, like exactly the type of thing I'd expect Eliezer to do. Do what I me...
Personally, I feel that case 1 ("doesn't work at all") is much more probable
I've come to the opposite conclusion. Should we drag out quotes to compare evidence? Is your estimate predicated on just one or two strong arguments, and if so, could I bother you to state them? Most of the probability mass in my estimate comes from Voldemort's former reluctance to test the horcrux system and his prior blind spots as a rationalist when designing the system, and the oft-reinforced notion of Harry actually being a version of Tom Riddle, indistinguisha...
"No action" is an action, same as any other (for a grokkable reference, see consequentialists and the Trolley experiment). Also, obviously it wouldn't be "no action" it would be selling Voldemort the idea that there's nothing left, maybe revealing the secret deemed most insignificant and then begging for that to apply to both parents.
1) (Harry tells Voldemort his death could hijack the horcrux network) doesn't seem unlikely at all. Both hints from within the story (the Marauder map) and on the meta level ("Riddles and Answers") suggest an unprecedented congruence of identity, at least in the sense of magical artifacts (the map) being unable to tell the difference.
I did not post it since, strictly speaking, Harry should keep quiet about it. Losing the challenge of not dying (learned to lose), but increasing his chances of winning the war. Immediately even: since the new horcrux sys...
Skimming over (part of) the proposed solutions on FF.net has thoroughly destroyed any sense of kinship with the larger HPMoR readership. Darn, it was such a fuzzily warm illusion. In concordance with Yvain's latest blog post, there may be few if any general shortcuts to raising the sanity waterline.
Harry's commitment is quite weaksauce, and it was surprising that he wasn't called on it:
I sshall help you obtain the Sstone (...) sshall not do anything I think will annoy you to no good end. Sshall call no help if I expect them to be killed by you or for hosstagess to die.
So he's free to call help as long as he expects to win the ensuing encounter. After which he could hand the Stone to a subdued Quirrell for just a moment, technically fulfilling that clause as well. Also, the "to no good end" qualifier? "Winning against Voldemort" certainly would count as a good end, cancelling that part as well.
Parseltongue doesn't produce binding promises. There's no need to technically fulfill clauses. It doesn't function as a commitment device. It just makes someone talk frankly about his own intentions.
Harry can't provide any stronger commitment and is indeed sorry about his inability to provide a stronger commitment.
Well, depends on how much you discount the expected utility of cryonics due to Pascal's Wager concerns. The variance of the payoff for freezing tissue certainly is much smaller, and freezing tissue really isn't a big deal from a technical or even societal point of view, as evidenced by, say, female egg freezing for fertility preservation.
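To make the variance point concrete, here's a toy sketch with entirely made-up probabilities and utilities (none of these numbers come from anywhere; they only illustrate the shape of the comparison):

```python
# Hypothetical, illustrative numbers only: (probability, utility) pairs per outcome.
cryonics = [(0.05, 1000.0), (0.95, -1.0)]  # small chance of a huge payoff, likely a modest loss
tissue   = [(0.60, 5.0),    (0.40, -1.0)]  # modest payoff, fairly likely

def expected_value(outcomes):
    return sum(p * u for p, u in outcomes)

def variance(outcomes):
    ev = expected_value(outcomes)
    return sum(p * (u - ev) ** 2 for p, u in outcomes)

for name, outcomes in [("cryonics", cryonics), ("tissue freezing", tissue)]:
    print(f"{name}: E[U] = {expected_value(outcomes):.2f}, Var = {variance(outcomes):.1f}")
```

Scale the big payoff (or the probability you assign to it) down for Pascal's-Wager reasons and the cryonics expected utility swings wildly, while the tissue-freezing option stays low-variance either way.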
Seems like there's some feminists or some 6'5 men with a superiority complex around.
Well, I am 6'7, without a superiority complex of course. That's not why I downvoted you, though, and since you asked for an explanation:
I'm reading the comments and looking for some new ammo for my next fight in the gender wars.
That's not the kind of approach (arguments as soldiers) we're looking for in a rationality forum. One of the prerequisites is a willingness to change your mind, which seems to be setting the bar too high for some people.
I'd call it an a-rationality quote, in the sense that it's just an observation; one backed up by evidence but with no immediate relevancy to the topic of rationality.
On second thought, it does show a kind of bias, namely the "compete-for-limited-resources" evolutionary imperative which introduced the "bias" of treating most social phenomena as zero-sum games. Bias in quotes because there is no correct baseline to compare against; "tendency" would probably be a better term.
Strong statement from Bill Gates on machine superintelligence as an x-risk, on today's Reddit AMA:
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.
The whole AI box experiment is a fun pastime, and educational insofar as it teaches us to take artificial intellects seriously, but as real-world long-term "solutions" go, it is utterly useless. Like trying to contain nuclear weapons indefinitely, except you can build one just by having the blueprints and a couple of leased hours on a supercomputer, no scarce natural elements necessary, and having one means you win at whatever you desire (or that's what you'd think). All the while under the increasing pressure of improving technology, ever lowering th...
I truly am torn on the matter. LW has caused a good amount of self-modification away from that position, not in the sense of diminishing the arguments' credence, but in the sense of "so what, that's not the belief I want to hold" (which, while generally quite dangerous, may be necessary with a few select "holy belief cows")*.
That personal information notwithstanding, I don't think we should only present arguments supporting positions we are convinced of. That -- given a somewhat homogeneous group composition -- would amount to an echo c...
There are analogues of the classic biases in our own utility functions; it is a blind spot to hold our preferences, as we perceive them, to be sacrosanct. Just as we can be mistaken about the correct solution to Monty Hall, so can we be mistaken about our own values. It's a treasure trove for rational self-analysis.
We have an easy enough time of figuring out how a religious belief is blatantly ridiculous because we find some claim it makes that's contrary to the evidence. But say someone takes out all such obviously false claims, or take a patriot, someone w...
I don't recommend it, but I'll have to see individual cases to know whether I'd bring down the banhammer.
As a general thing, I don't recommend using insults which might stabilize bad behavior by making it part of a person's identity. Also, I have a gut level belief that people are less likely to think clearly when they're angry.
It is simply unfathomable to me how you come to the logical conclusion that an UFAI will automatically and instantly and undetectably work to bypass and subvert its operators. Maybe that’s true of a hypothetical unbounded universal inference engine, like AIXI. But real AIs behave in ways quite different from that extreme, alien hypothetical intelligence.
Well, it follows pretty straightforwardly from point 6 ("AIs will want to acquire resources and use them efficiently") of Omohundro's The Basic AI Drives, given that the AI would prefer to act...
... and there is only one choice I'd expect them to make, in other words, no actual decision at all.