All of Kawoomba's Comments + Replies

... and there is only one choice I'd expect them to make -- in other words, no actual decision at all.

If this were my introduction to LW, I'd snort and go away. Or maybe stop to troll for a bit -- this intro is soooo easy to make fun of.

Well, glad you didn't choose the first option, then.

The catch-22 I would expect with CFAR's efforts is that anyone buying their services is already demonstrating a willingness to actually improve his/her rationality/epistemology, and is looking for effective tools to do so.

The bottleneck, however, is probably not the unavailability of such tools, but rather the introspectiveness (or lack thereof) that results in a desire to actually pursue change, rather than to simply virtue-signal the typical "I always try to learn from my mistakes and improve my thinking".

The latter mindset is the one most urgently... (read more)

0Lumifer
The self-help industry (as well as, say, gyms or fat farms) mostly sells what I'd call "willpower assists" -- motivation and/or structure which will push you to do what you want to do but lack sufficient willpower for.

Climate change, while potentially catastrophic, is not an x-risk. Nuclear war is only an x-risk for a subset of scenarios.

0siIver
I disagree.

The scarier thought is how often we're manipulated that way when people don't bungle their jobs. The few heuristics we use to identify such mischief are trivially misled -- for example, establishing plausibility by posting on other, inconsequential topics (at least on LW that incurs a measurable cognitive footprint, which is not the case on, say, Reddit) -- and then there's always Poe's law to consider. Shills, man, shills everywhere!

As the dictum goes, just cuz you're paranoid ...

Reminds me of Ernest Hemingway's apparent paranoid delusions of being un... (read more)

-3Gleb_Tsipursky
Interesting that your perspective is that I'm manipulating LWs, as opposed to genuinely trying to get InIn participants engaged in LW. Let's imagine two worlds: one where I was, and one where I wasn't. You have evidence that I have accomplished quite a bit in my activities, which should tell you something about my abilities. If I was trying to manipulate LWs, I would not do it so obviously. There's plenty of subtle maneuvers that could be done. But that's not where my interest lies. If I was trying to get InIn participants genuinely engaged in LW and getting them trained up in rationality, I would do exactly what I'm doing. Consider where the evidence points carefully. Don't go from your desired conclusion.
4Lumifer
In the post-Snowden era... X-/ He is promoting a charity he's trying to get off the ground, so his options are limited. Some would say it's a spice. Or a herb :-)

Disclaimer: Only spent 20 minutes on this, so it might be incomplete, or you may already have addressed some of the following points:

At first glance, John Lowe authored two PubMed-listed papers on the topic.

The first of these appeared in an open journal with no peer review (Med. Hypotheses), which has also published stuff on e.g. AIDS denialism. From his paper: "We propose that molecular biological methods can provide confirmatory or contradictory evidence of a genetic basis of euthyroid FS [Fibromyalgia Syndrome]." That's it. Proposing a hypothesis, not pro... (read more)

0johnlawrenceaspden
I find plenty wrong with it, I assure you..... I mean crank-a-rama, signs-wise. But actually, what I've read of Lowe reminds me of me. He's obviously trying to prove something that he's convinced is true, and I can well believe that he's self-deceived. But I have trouble believing that he was a liar, or a serial murderer of his patients. And if he wasn't that, then he must have been right about the peripheral resistance. A lot of what he's writing looks like spectacular pedantry, which can only come from a really motivated thinker trying to force his hypothesis through the sieve of the inconvenient facts. And it doesn't half remind me of a mathematician trying to prove a tricky theorem that he knows is likely true because he's checked a million cases by computer and it's either true or false for subtle reasons. So you keep trying to force the proof through, and the places you fail give you ideas for why it might be false in some unexpected way. So you look in those places, trying to find why it's false, and when you can't prove it's false either, you go back to trying to show that it's true in spite of those failures. And eventually you either show it's false, or you show it's true. And you're allowed to use any method you like to work on it, as long as at the end, you've either got proof or counter-example. It turns out Lowe's hobby was mathematical logic. I keep thinking that if it had been probability theory, he might have nailed it before he died. I really really wish I could talk to him. Bad death. That's another thing we should sort out, once we have sorted this. I think it's typical of Lowe to have gone looking for something, and found the opposite, and then published anyway, and then tried to force his ideas through in spite of what he found. It's just that my professional training, such as it was, tells me that that's how you find out the truth. ---------------------------------------- And of course, I'm not depending on Lowe. He told me where to

I wonder if / how that win will affect estimates on the advent of AGI within the AI community.

2Vaniver
I've already seen some goalpost-moving at Hacker News. I do hope this convinces some people, though.

Please don't spam the same comment to different threads.

Hey! Hey. He. Careful there, apropos of word inflation. It strikes with a force of no more than one thousand atom bombs.

Are you really arguing for keeping ideologically incorrect people barefoot and pregnant, lest they harm themselves with any tools they might acquire?

Sounds as good a reason as any!

maybe we should shut down LW

I'm not sure how much it counts, but I bet Chief Ramsay would've shut it down long ago. Betting is good, I've learned.

2Lumifer
Extremely dangerous stuff, that... But if betting is good, pre-commitment and co-operation are the best! X-)

As seen in the first episode of the series Caprica, quoth Zoe Graystone:

"(...) the information being held in our heads is available in other databases. People leave more than footprints as they travel through life; medical scans, dna profiles, psych evaluations, school records, emails, recording, video, audio, cat scans, genetic typing, synaptic records, security cameras, test results, shopping records, talent shows, ball games, traffic tickets, restaurant bills, phone records, music lists, movie tickets, tv shows... even prescriptions for birth control.&quo... (read more)

1TheWakalix
But as it is said, do not generalize from fictional evidence.
0turchin
The main question is: should we consciously write down our secret thoughts and childhood memories, hoping for a better reconstruction in the future? If some kind of reconstruction is inevitable, if an AI will run some kind of simulation anyway, maybe it is better to provide it with as much correct information as possible?

"Mind" is a high level concept, on a base level it is just a subset of specific physical structures. The precise arrangement of water molecules in a waterfall, over time, matches if not dwarves the KC of a mind.

That is, if you wanted to recreate this or that waterfall exactly as it happened (with the orientation of each water molecule preserved with high fidelity), the strict computational complexity would be way higher than for a comparatively more ordered and static mind.

The data doesn't care what importance you ascribe to it. It's ... (read more)
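The molecule-counting claim above lends itself to a quick back-of-the-envelope check. The following is a minimal Fermi-style sketch in Python, not anything from the original discussion; every figure (tonne of water in flight, time resolution, bits per molecule, synapse count) is a rough assumption chosen purely for illustration.

```python
# A rough, order-of-magnitude Fermi sketch of the comparison above.
# Every figure here is a loose illustrative assumption, not a measurement.

AVOGADRO = 6.022e23        # molecules per mole
WATER_MOLAR_MASS_G = 18.0  # grams per mole of H2O

# Assumption: ~1 tonne (1e6 g) of water in flight at any moment,
# tracked over one minute at 1,000 snapshots per second, with ~100 bits
# to pin down each molecule's position and orientation per snapshot.
molecules_in_flight = (1e6 / WATER_MOLAR_MASS_G) * AVOGADRO
snapshots = 60 * 1000
waterfall_bits = molecules_in_flight * snapshots * 100

# Assumption: a static, synapse-level brain description -- ~1e14 synapses,
# ~100 bits each for connectivity and weight.
brain_bits = 1e14 * 100

print(f"waterfall description: ~{waterfall_bits:.1e} bits")
print(f"brain description:     ~{brain_bits:.1e} bits")
print(f"ratio:                 ~{waterfall_bits / brain_bits:.1e}")
```

Even with far stingier assumptions for the waterfall and far more generous ones for the brain, a gap of many orders of magnitude survives, which is all the comparison above requires.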

LessWrong has now descended to actually arguing over the Kolmogorov complexity of the Christian God, as if this was a serious question.

Well, there is a lot of motivated cognition on that topic (relevant disclaimer: I'm an atheist in the conventional sense of the word) and it seems deceptively straightforward to answer (mostly by KC-dabblers), but it is in fact anything but. The non-triviality arises from technical considerations, not some philosophical obscurantism.

This may be the wrong comment chain to get into it, and your grandstanding doesn't exactly signal an immediate willingness to engage in medias res, so I won't elaborate for the moment (unless you want me to).

3Transfuturist
The laws of physics as we know them are very simple, and we believe that they may actually be even simpler. Meanwhile, a mind existing outside of physics is somehow a more consistent and simple explanation than humans having hardware in the brain that promotes hypotheses involving human-like agents behind everything, which explains away every religion ever? Minds are not simpler than physics. This is not a technical controversy.
2[anonymous]
Go on and elaborate, but unless you can show some very thorough technical considerations, I just don't see how you're able to claim a mind has low Kolmogorov complexity.

If you're looking for gullible recruits, you've come to the wrong place.

Don't lease the Ferrari just yet.

8gjm
My impression is "naive" rather than "cynically looking for gullible recruits", for what it's worth.

What are you talking about?

History can be all things to all people; like the shape of a cloud, it's a canvas on which one can project nearly any narrative one fancies.

Their approach reduces to an anti-epistemic affect-heuristic, using the ugh-field they self-generate in a reverse affective death spiral (loosely based on our memeplex) as a semantic stopsign, when in fact the Kolmogorov distance to bridge the terminological inferential gap is but an epsilon.

4Good_Burning_Plastic
You know you've been reading Less Wrong too long when you only have to read that comment twice to understand it.
0nyralech
I'm afraid I don't know what you mean by Kolmogorov distance.
2XFrequentist
I got waaay too far into this before I realized what you were doing... so well done!

Good content, however I'd have preferred "You Are A Mind" or similar. You are an emergent system centered on the brain and influences upon it, or somesuch. It's just that "brain" has come to refer to 2 distinct entities -- the anatomical brain, and then the physical system generating your self. The two are not identical.

Well, I must say my comment's belligerence-to-subject-matter ratio is lower than yours. "Stamped out"? Such martial language, I can barely focus on the informational content.

The infantile nature of my name-calling actually makes it easier to take the holier-than-thou position (which my interlocutor did, incidentally). There's a counter-intuitive psychological layer to it which actually encourages dissent, and with it increases engagement on the subject matter (your own comment notwithstanding). With certain individuals at least, which I (correctl... (read more)

Certainly, within what's Good (tm) and Acceptable (tm), funding better education in the third world is the most effective method.

However, if you go far enough outside the Overton window, you don't need credibility, as long as the power asymmetry is big enough. You want food? It only comes with a chemical agent which sterilizes you, similar to Golden Rice. You don't need to accept it; you're free to starve. The failures of colonialism, as well as the most recent forays into the Middle East, stem from the constraints of also having to placate the court of publ... (read more)

2ChristianKl
Again, I think the likely result of your project is lost influence, because you provide ammunition to various people who don't want to have Western doctors in their country. I think you have a bad model of political realities. Even if individual citizens of a third-world country would accept that, the powers that be in that society don't. Additionally, you will be able to distribute fewer condoms. Almost by definition, the mere act of publicly discussing ideas outside of the Overton window comes with a cost, even if you just discuss them and don't do anything further. To the extent that you discuss them, you don't do so publicly. Almost by definition, the person who moves outside of the Overton window doesn't have a great deal of power.
2[anonymous]
If you were really intent on extending the Overton window in general, you would include Communist solutions as well as fascist ones ;-).

I too have the impression that for the most part the scope of the "effective" in EA refers to "... within the Overton window". There's the occasional stray 'radical solution', but usually not much beyond "let's judge which of these existing charities (all of which are perfectly societally acceptable) are the most effective".

Now there are two broad categories to explain that:

a) Effective altruists want immediate or at least intermediate results / being associated with "crazy" initiatives could mean collateral damage t... (read more)

0[anonymous]
You're assuming that system 1 sensibilities aren't a useful heuristic for evaluating what's effective given finite evaluation resources.

it massively restricts EA's maximum effectiveness

It's not that simple. Implementation of radical outside-of-Overton policies requires not only the willingness to do so. You need to have sufficient power to say "We will do this and the rest of the world can go jump into the lake".

EA is very very VERY far away from such an amount of power.

(That is a good thing)

0[anonymous]
No, the question is why you're employing algorithms that routinely tell you to drink 500 gallons of vinegar per day, sterilize the poor, or take other obviously ridiculous actions. It is probably usually better to just use probabilistic constraint methods, in which solutions that meet your constraints better are more likely - but all other variables are allowed to vary randomly subject to the minimum necessary causal constraints - and sample until you find a satisfactory solution.
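For what it's worth, the sampling procedure described in the comment above can be illustrated in a few lines. This is only a toy sketch -- the proposal function, the constraint scores, and the acceptance rule below are assumptions made up for illustration, not anything a particular project actually uses: propose candidates with the free variables varying randomly, prefer candidates that satisfy the soft constraints better, and stop at the first satisfactory one.

```python
import math
import random

def sample_until_satisfactory(propose, constraint_scores, threshold, max_iters=100_000):
    """Toy constraint-weighted sampling: candidates that meet the (soft)
    constraints better are more likely to be kept; stop once one is good enough."""
    best, best_score = None, float("-inf")
    for _ in range(max_iters):
        candidate = propose()  # all free variables vary randomly
        total = sum(score(candidate) for score in constraint_scores)
        # Soft preference rather than a hard filter: acceptance probability
        # rises with the combined constraint score.
        if random.random() < 1.0 / (1.0 + math.exp(-total)):
            if total > best_score:
                best, best_score = candidate, total
            if total >= threshold:  # satisfactory solution found
                return candidate
    return best

# Hypothetical toy usage: find a positive number close to 3.
if __name__ == "__main__":
    propose = lambda: random.uniform(-10.0, 10.0)
    constraints = [lambda x: -abs(x - 3.0),              # "be near 3" (soft)
                   lambda x: 0.0 if x > 0 else -100.0]   # "stay positive" (heavier penalty)
    print(sample_until_satisfactory(propose, constraints, threshold=-0.1))
```

The point of the soft acceptance step is that obviously ridiculous candidates (the 500-gallons-of-vinegar kind) get sampled but almost never survive the constraint scoring.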
3ChristianKl
I think the likely result of any attempt at a mass sterilisation project is increased population, because you don't get it to work but Western doctors in the third world lose credibility. We actually have good data that better education decreases birth rates.

MIRI continues to be in good hands!

I'm not sure LW is a good entry point for people who are turned away by a few technical terms. Responding to unfamiliar scientific concepts with an immediate surge of curiosity is probably a trait I share with the majority of LW'ers. While it's not strictly a prerequisite for learning rationality, it certainly is for starting in medias res.

The current approach is a good selector for dividing the chaff (well educated because that's what was expected, but no true intellectual curiosity) from the wheat (whom Deleuze would call thinkers-qua-thinkers).

HPMOR instead, maybe?

6John_Maxwell
Agree. NancyLebovitz's post points at something true: there should be more resources like HPMOR for "regular people" to increase their level of rationality. That's part of the reason I'm excited about groups like Intentional Insights that are working on this. But I think "dumbing down" Less Wrong is a mistake. You want to segment your audience. Less Wrong is for the segment that gets curious when they see unfamiliar technical terms.

That's a good argument if you were to construct the world from first principles. You wouldn't get the current world order, certainly. But just as arguments against, say, nation-states, or multi-national corporations, or what have you, do little to dissuade believers, the same applies to let-the-natural-order-of-things-proceed advocates. Inertia is what it's all about. The normative power of the present state, if you will. Never mind that "natural" includes antibiotics, but not gene modification.

This may seem self-evident, but what I'm pointing ou... (read more)

8Lumifer
That's fine as long as you understand it and are not deluding yourself with a collection of reasons why this cozy local minimum is actually the best ever. The considerable power wielded by inertia should be explicit.

Disclosing one's sexual orientation won't be (mis)construed as a status grab in the same way as disclosing one's (real or imagined) intellectual superiority. Perceived arguments from authority must be handled with supreme care; otherwise they invariably set the stage for a primate hierarchy contest. Minute details in phrasing can make all the difference: "I could engage with people much smarter than you, yet I choose to help you, since you probably need my help and my advice" versus "I had the following experiences, hopefully someone [imper... (read more)

7JonahS
That's the thing – it's not zero sum! Other LWers can become thousands of times more intellectually sophisticated than they are. Some of them may have substantially more potential in principle than I have. Similarly for idealism. We should have a culture of positive-sum cooperation, where people are happy to have someone around who's much more knowledgeable, because they can benefit from it, rather than thinking in terms of "if they're around then they have more status, so I have lower status, so it's bad for me if they signal intellectual superiority." If people had consistently adopted such attitudes throughout history, we would still be in the dark ages.

I dislike the trend to cuddlify everything, to make approving noises no matter what, then framing criticisms as merely some avenue for potential further advances, or somesuch.

On the one hand, I do recognize that works better for the social animals that we are. On the other hand, aren't we (mostly) adults here? Do we really need our hand held constantly? It's similar to the constant stream of "I LOVE YOU SO MUCH" in everyday interactions; it's a race to the bottom in terms of deteriorating signal/noise ratios. How are we supposed to convey actual a... (read more)

6dxu
Congratulatory comments, even of the empty sort like "Great job!", serve as positive Pavlovian reinforcement, which helps to motivate/encourage people to post. In addition, they signal appreciation and gratefulness at the fact that someone was willing to make a top-level post in the first place. The fact that the people on LessWrong are at times so damn unfriendly is in my opinion a non-trivial part of the cause of LW's too often insular atmosphere. Furthermore, studies consistently show that humans respond better to positive reinforcement than to negative reinforcement, regardless of age. This isn't about whether we're "adults who don't need our hands held". It's about how to motivate people to post more. If Jonah gets a torrent of criticisms every time he posts something, that's going to create an ugh field around the idea of posting. If he then points this out in a comment, and people respond by saying what effectively amounts to "Well, it's your own fault for not being clear enough," well, you can imagine how it might feel. This is an issue entirely separate from that of whether the criticisms are right. The bottom line is that transmission of useful information isn't the only kind of transmission that occurs in human communication. "This post is so messy and obfuscated as to be nearly unreadable" and "I think your point may benefit from some clarification" are denotationally similar, but connotationally they are very different. If you insist on ignoring this distinction or dismissing it as unimportant (as it seems so many LWers are wont to do), you run the risk of generating an unpleasant social atmosphere. Seriously. This isn't rocket science. (See what I did there?)
5JonahS
I don't want approval, I want to help people. If people think that they can offer helpful feedback (as Vaniver did), they should do so. Empty praise is just as useless as empty criticism. Vaniver's feedback had substantive information value – that's why I'm glad that he made his comment. If I fail to help people because I'm not receptive enough to critical feedback, it's my own fault. I accept responsibility for the consequences of my actions.
7IlyaShpitser
http://en.wikipedia.org/wiki/Phatic_expression This is a thing because we have complex brains, with only a part devoted to processing information of the kind you mean, and others worried about contingent social facts: dominance/submission/status/etc. I think the broadly right response is to make peace-via-compromise between those parts, and that involves speaking on multiple bandwidths, as it were. This, to me, is a type of instrumental rationality in interpersonal communication.

I don't think that it carves reality at its joints to call that "mathematical ability."

... and we're down to definitional quibbles, which are rarely worth the effort, other than simply stating "I define x as such and such, in contrast to your defining x as such and such". Reality has no intrinsic, objective dictionary with an entry for "mathematical ability", so such discussions can mostly be solved by using some derivative terms x1 and x2, instead of an overloaded concept x.

Of course, the discussion often reduces to who ha... (read more)

Teaching happiness can be -- and often is -- at odds with teaching epistemic rationality.

0ChristianKl
It can also help with epistemic rationality if you teach people to identify distorted thinking via the CBT framework as laid out in David Burns' "The Feeling Good Handbook". Effectively dealing with one's own emotions helps with clear thinking. On the other hand, when you try to get school teachers to do something right, it's quite possible that they mess up and at the end you have less epistemic rationality.

I amended the grandparent. Suppose for the sake of argument you agreed with my estimate of this being the proverbial "last, best hope". Then giving away the one potentially game-changing advantage to barter for a globally insignificant "victory" would be the epitome of an overly greedy algorithm. Losing sight of the actual goal because an authority figure told you so, in a way not thinking for yourself beyond the task as stated.

Making that point sounds, on reflection, like exactly the type of thing I'd expect Eliezer to do. Do what I me... (read more)

Personally, I feel that case 1 ("doesn't work at all") is much more probable

I've come to the opposite conclusion. Should we drag out quotes to compare evidence? Is your estimate predicated on just one or two strong arguments, and if so, could I bother you to state them? Most of the probability mass in my estimate is contributed by Voldemort's former reluctance to test the horcrux system and his prior blind spots as a rationalist when designing the system, and the oft-reinforced notion of Harry actually being a version of Tom Riddle, indistinguisha... (read more)

0TobyBartels
‘If you cling to your life, you will lose it, and if you let your life go, you will save it.’ —Luke 17:33 (NLT, which seemed the nicest phrasing of those that I found on one list) But this sort of sentiment is more in line with canon than with MoR. Of course, this particular instance gives it a twist that neither Rowling nor Luke intended.
2cousin_it
Yeah, I was trying to help Harry survive the next minute with high probability, not win the war with high probability. The latter is a harder problem, and it's not enough to have a plan that's based on horcrux hijacking only. If I felt that horcrux hijacking might give me an actual easy win (as opposed to, say, Voldemort killing himself immediately and fighting me within the horcrux system), then I wouldn't mention it, and say something else instead.

"No action" is an action, same as any other (for a grokkable reference, see consequentialists and the Trolley experiment). Also, obviously it wouldn't be "no action" it would be selling Voldemort the idea that there's nothing left, maybe revealing the secret deemed most insignificant and then begging for that to apply to both parents.

1) (Harry tells Voldemort his death could hijack the horcrux network) doesn't seem unlikely at all. Both hints from within the story (the Marauder map) and on the meta level ("Riddles and Answers") suggest an unprecedented congruence of identity, at least in the sense of magical artifacts (the map) being unable to tell the difference.

I did not post it since, strictly speaking, Harry should keep quiet about it. Losing the challenge of not dying (learned to lose), but increasing his chances of winning the war. Immediately even: Since the new horcrux sys... (read more)

3cousin_it
I'm not sure that Harry should keep quiet. There are three cases: 1) Horcrux hijacking doesn't work at all. Speaking up prolongs Harry's life until Voldemort does an experimental test. 2) Horcrux hijacking works, but Voldemort can devise a workaround. Speaking up gives up an easy win, but also prolongs Harry's life until Voldemort does an experimental test and devises a workaround. 3) Horcrux hijacking works, and there's no workaround. It doesn't matter if Harry speaks up or not. I feel that case 1 is much more probable than case 2, so speaking up is a good idea. If we had strong arguments for case 2, I'd recommend keeping quiet instead.
2Izeinwinter
There is no point in adopting it as a plan, because it is what will happen if he does nothing at all. It's a reason not to do certain things, such as pointing this possibility out, but it is not in and of itself any kind of plan.

Skimming over (part of) the proposed solutions on FF.net has thoroughly destroyed any sense of kinship with the larger HPMoR readership. Darn, it was such a fuzzily warm illusion. In concordance with Yvain's latest blog post, there may be few if any general shortcuts to raising the sanity waterline.

Harry's commitment is quite weaksauce, and it was surprising that he wasn't called on it:

I sshall help you obtain the Sstone (...) sshall not do anything I think will annoy you to no good end. Sshall call no help if I expect them to be killed by you or for hosstagess to die.

So he's free to call help as long as he expects to win the ensuing encounter. After which he could hand the Stone to a subdued Quirrell for just a moment, technically fulfilling that clause as well. Also, the "to no good end" qualifier? "Winning against Voldemort" certainly would count as a good end, cancelling that part as well.

Parseltongue doesn't produce binding promises. There's no need to technically fulfill clauses. It doesn't function as a commitment device. It just makes someone talk frankly about his own intentions.

Harry can't provide any stronger commitment and is indeed sorry about his inability to provide a stronger commitment.

Well, depends on how much you discount the expected utility of cryonics due to Pascal's Wager concerns. The variance of the payoff for freezing tissue certainly is much smaller, and freezing tissue really isn't a big deal from a technical or even societal point of view, as evidenced by, say, female egg freezing for fertility preservation.

The (?) proves you right about the philosophy part.

4IlyaShpitser
The (?) was meant to apply to the conjunction, not the latter term alone.

Seems like there's some feminists or some 6'5 men with a superiority complex around.

Well, I am 6'7, without a superiority complex of course. That's not why I downvoted you, though, and since you asked for an explanation:

I'm reading the comments and looking for some new ammo for my next fight in the gender wars.

That's not the kind of approach (arguments as soldiers) we're looking for in a rationality forum. One of the prerequisites is a willingness to change your mind, which seems to be setting the bar too high for some people.

-6[anonymous]

I'd call it an a-rationality quote, in the sense that it's just an observation; one backed up by evidence but with no immediate relevancy to the topic of rationality.

On second thought, it does show a kind of bias, namely the "compete-for-limited-resources" evolutionary imperative which introduced the "bias" of treating most social phenomena as zero-sum games. Bias in quotes because there is no correct baseline to compare against, tendency would probably be a better term.

2Davidmanheim
But it is descriptive of how we are actually wired; perhaps it would be better if happiness were not relative, but it is.

Strong statement from Bill Gates on machine superintelligence as an x-risk, on today's Reddit AMA:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

5savedpass
"It seems pretty egocentric while we still have malaria and TB for rich people to fund things so they can live longer. It would be nice to live longer though I admit."

The whole AI box experiment is a fun pastime, and educational insofar as it teaches us to take artificial intellects seriously, but as real-world long-term "solutions" go, it is utterly useless. Like trying to contain nuclear weapons indefinitely, except you can build one just by having the blueprints and a couple of leased hours on a supercomputer, no limited natural elements necessary, and having one means you win at whatever you desire (or that's what you'd think). All the while under the increasing pressure of improving technology, ever lowering th... (read more)

0passive_fist
We all know that AI is a risk but personally I wouldn't worry too much. I doubt anything remotely similar to the AI box situation will ever happen. If AI happens via human enhancement many of the fears will be completely invalid.

The quote was dashed by the poster.

3Gondolinian
It looks like RPMcMurphy has replaced all of their recent comments with that character. RPMcMurphy, if you want to delete your account, you can do so by going to preferences and clicking on DELETE in the top right. You can also retract posts to protect them from being downvoted (will also keep them from being upvoted) by clicking on the button that looks kind of like Ѳ at the bottom right of your comments.

Contains a lot of guesstimates though, which it freely admits throughout the text (in the abstract, not so much). It's a bit like a very tentative Fermi estimate.

I truly am torn on the matter. LW has caused a good amount of self-modification away from that position, not in the sense of diminishing the arguments' credence, but in the sense of "so what, that's not the belief I want to hold" (which, while generally quite dangerous, may be necessary with a few select "holy belief cows")*.

That personal information notwithstanding, I don't think we should only present arguments supporting positions we are convinced of. That -- given a somewhat homogeneous group composition -- would amount to an echo c... (read more)

There are analogues of the classic biases in our own utility functions; it is a blind spot to hold our preferences, as we perceive them, to be sacrosanct. Just as we can be mistaken about the correct solution to Monty Hall, so can we be mistaken about our own values. It's a treasure trove for rational self-analysis.

We have an easy enough time of figuring out how a religious belief is blatantly ridiculous because we find some claim it makes that's contrary to the evidence. But say someone takes out all such obviously false claims, or take a patriot, someone w... (read more)

2Richard_Kennaway
Whose? You seem reluctant to stand by the nihilism you are preaching.
4AndHisHorse
Why is this a rationality quote?

I don't recommend it, but I'll have to see individual cases to know whether I'd bring down the banhammer.

As a general thing, I don't recommend using insults which might stabilize bad behavior by making it part of a person's identity. Also, I have a gut level belief that people are less likely to think clearly when they're angry.

Don't knock it 'til you try it.

Truth had never been a priority. If believing a lie kept the genes proliferating, the system would believe that lie with all its heart.

Peter Watts, Echopraxia, on altruism. Well ok, I admit, not on altruism per se.

It is simply unfathomable to me how you come to the logical conclusion that an UFAI will automatically and instantly and undetectably work to bypass and subvert its operators. Maybe that’s true of a hypothetical unbounded universal inference engine, like AIXI. But real AIs behave in ways quite different from that extreme, alien hypothetical intelligence.

Well, it follows pretty straightforwardly from point 6 ("AIs will want to acquire resources and use them efficiently") of Omohundro's The Basic AI Drives, given that the AI would prefer to act... (read more)

2[anonymous]
Well this argument I can understand, although Omohundro’s point 6 is tenuous. Boxing setups could prevent the AI from acquiring resources, and non-agents won’t be taking actions in the first place, to acquire resources or otherwise. And as you notice the ‘undetectable’ qualifier is important. Imagine you were locked in a box guarded by a gatekeeper of completely unknown and alien psychology. What procedure would you use for learning the gatekeeper’s motives well enough to manipulate it, all the while escaping detection? It’s not at all obvious to me that with proper operational security the AI would even be able to infer the gatekeeper’s motivational structure enough to deceive, no matter how much time it is given. MIRI is currently taking actions that only really make sense as priorities in a hard-takeoff future. There are also possible actions which align with a soft-takeoff scenario, or double-dip for both (e.g. Kaj’s proposed research[1]), but MIRI does not seem to be involving itself with this work. This is a shame. [1] http://intelligence.org/files/ConceptLearning.pdf