
Comment author: Alexei 10 December 2016 08:09:23AM 0 points

We can be careful to include all information that they, from their vantage point, would want to know -- even if on our judgment, some of the information is misleading or irrelevant, or might pull them to the “wrong” conclusions.

I did not understand this part.

Comment author: gjm 10 December 2016 04:18:01PM 6 points

I don't know how it plays out in the CFAR context specifically, but the sort of situation being described is this:

Alice is a social democrat and believes in redistributive taxation, a strong social safety net, and heavy government regulation. Bob is a libertarian and believes taxes should be as low as possible and "flat", safety-nets should be provided by the community, and regulation should be light or entirely absent. Bob asks Alice[1] what she knows about some topic related to government policy. Should Alice (1) provide Bob with all the evidence she can favouring the position she holds to be correct, or (2) provide Bob with absolutely all the relevant information she knows of, or (3) provide Bob with all the information she has that someone with Bob's existing preconceptions will find credible?

It's tempting to do #1. Anna is saying that CFAR will do (the equivalent of) #2 or even #3.

[1] I flipped a coin to decide who would ask whom.

Comment author: gjm 08 December 2016 12:55:48AM 2 points

The criticisms of "general functionalism" in the post seem to me to be aimed at a different sort of functionalism from the sort widely espoused around here.

The LW community is (I think) mostly functionalist in the sense of believing, e.g., that if you're conscious then something that behaves exactly as you do is also conscious. They'd say that implementation details don't matter for answering various philosophical questions: Is this thing me? Is it a person? Do I need to care about its interests? Is it intelligent? Etc. But that's a long way from saying that implementation details don't matter at all; indeed, I think it's "LW orthodoxy" that they do: e.g., something that thinks just like me but 1000x faster would be hugely more capable than me in all sorts of important ways.

(The advantages of the humans over the aliens in Eliezer's "That Alien Message" have a lot to do with speed, though that wasn't quite Eliezer's point and he makes the humans smarter in other ways and more numerous too.)

If formal AI-safety work neglects speed, power consumption, side-channel attacks, etc., I think it's only for the sake of beginning with simpler more tractable versions of the problems you care about, not because anyone seriously believes that those things are unimportant.

(And, just to be explicit, I believe those things are important, and I think it's unlikely that any approach to AI safety that ignores them can rightly be said to deliver safety. But an approach that begins by ignoring them might be reasonable.)

Comment author: gjm 08 December 2016 12:46:46AM 2 points

I worry that the last paragraph of this post is too optimistic. If "formal proof is insufficient", that might mean that proceeding formally can't produce superintelligent AI, in which case indeed we don't need to worry so much about AI risks -- but it might instead mean that proceeding formally produces flaky superintelligent AI. That is, AI that's just about as smart as we'd have hoped or feared, except that it's extra-vulnerable to malicious hackers, or it has weird cognitive biases a bit like ours but orders of magnitude more subtle, or it basically operates like a "normal" superintelligence except that every now and then a cosmic ray flips a bit and it decides to destroy the sun.

That would not be reassuring.

Comment author: Lumifer 02 December 2016 06:42:28PM 0 points

If they're arguing for (alleged) anti-terrorist measures like the TSA

I think this is the main context in which the question of whether you should or should not be afraid of terrorists arises. Relatively few people (in the West) are personally afraid of terrorists to the extent of significantly changing their behaviour -- with the likely exception of situations where terrorism becomes widespread, see e.g. The Troubles. But a lot of people do make the argument that one should present one's underwear for examination on demand because otherwise the terrorists win/kill us all/conquer the world/think of the children/etc. As a timely example, didn't the UK just pass the Snoopers' Charter?

So the right point of comparison

It's a different question. We started by asking, basically, to what degree you should be afraid of terrorism, but here you are asking how many resources society should allocate to fighting/preventing terrorism.

Comment author: gjm 03 December 2016 12:02:08AM 0 points

It's a different question

I'm not sure it is. I think there's always a how-much-resources subtext. People stressing how scary and dangerous terrorism is are (I think) usually doing so to justify expending resources, or trampling on civil liberties, or something of the kind. People stressing how little harm it actually does are (I think) usually doing so in opposition to that, implicitly or explicitly saying "this is not the sort of threat that justifies the huge expense and inconvenience and indignity of airport security theatre".

In which case, the relevant question is not "how much harm does terrorism do?" but something more like "what would the tradeoffs be if we did more or less of this allegedly-anti-terrorist stuff?".

Comment author: Lumifer 02 December 2016 03:33:38PM 1 point

Harms arising from hysterical overreaction are harms of terrorism.

Only if you consider the hysterical overreaction inevitable.

The first wave of airplane hijackings and general terrorism in the West (in recent times) came in the 1970s, driven mostly by Palestinians and radical-left groups. Strangely enough, it did not lead to no-fly lists, to nail clippers being treated as dangerous weapons, or to having to dump your water before going into the airport lounge to buy more...

Generally speaking, if you have some control over whether reaction X to event Y will take place, you can't say that harms/benefits of X are harms/benefits of Y.

Comment author: gjm 02 December 2016 06:16:40PM 0 points

Sure. So let's go back to the earlier question: When people say "more people die from having trees fall on them than from terrorism[1], so you shouldn't be bothered by terrorism any more than you are by falling trees", is that a reasonable argument or does it fail to engage with less-obvious harms caused by terrorism?

[1] I have not in fact checked whether this is true. It will certainly be true with all sorts of things in place of "falling trees" that most of us are mostly not very scared of.

I think the answer depends on what sort of botherment we're looking at. If someone feels visceral fear of violent death when they think about terrorism, it's really only actual deaths in terrorist attacks that are relevant, and the fact that they're rare compared with <whatever> is good reason not to be so afraid. If they're arguing for (alleged) anti-terrorist measures like the TSA, then again it obviously doesn't make sense for them to say "we need to overreact to terrorism, because terrorism is bad on account of overreaction". I think these probably are what those "more people are killed by their own toothpaste[2] than by terrorism" memes are aiming at, so I agree that sarahconstantin's version of NatashaRostova's argument doesn't seem like it works well.

[2] I have not checked whether this is true, and it probably isn't. See [1] above.

But it's not all wrong. I will gladly agree that most of what the TSA does seems to be security theatre, but it would be quite surprising if literally everything we do in the name of preventing and obstructing terrorism were completely useless. So the right point of comparison is either with the harm terrorists would be doing if we didn't do anything to stop them, or with the harm they are actually doing plus the harm of whatever measures we could be adopting that would be equally effective with less impact on civil liberties, less time spent waiting, less groping of our genitalia by security agents, etc. Unfortunately, I've no idea how to estimate either of those with any accuracy.

Comment author: Lumifer 01 December 2016 06:20:10PM 6 points

And how do you distinguish harms of terrorism from harms of a hysterical overreaction to terrorism? The TSA is almost entirely security theater; it's a self-inflicted disaster (those sufficiently paranoid can speculate on the advantages of stoking fear to promote surveillance-and-control systems: "never let a crisis go to waste").

Comment author: gjm 02 December 2016 03:22:35PM 0 points

Harms arising from hysterical overreaction are harms of terrorism. Such harms of overreaction are (I bet) among the reasons why terrorists do what they do.

(Suppose you have an infection like a cold or influenza. Many of the unpleasant symptoms you notice have their proximate causes in your body's immune response to the infection. That doesn't stop those symptoms being classified as "harm caused by the infection".)

Comment author: gjm 01 December 2016 10:39:31PM 16 points

(This is in the same general area as casebash's two suggestions, but I think it's different enough to be worth calling out separately.)

Most of the material on LW is about individual rationality: How can I think more clearly, approximate the truth better, achieve my goals? But an awful lot of what happens in the world is done not by individuals but by groups. Sometimes a single person is solely responsible for the group's aims and decision-making, in which case their individual rationality is what matters, but often not. How can we get better at group rationality?

(Some aspects of this will likely be better explored for commercial gain than for individual rationality, since many businesses have ample resources and strong motivation to spend them if the ROI is good; I bet there are any number of groups out there offering training in brainstorming and project planning, for instance. But I bet there's plenty of underexplored group-rationality memespace.)

Comment author: Lumifer 30 November 2016 05:48:04PM 0 points

some other people contemplating using the same technique might be less so

Feel free to point out to those some other people their shortcomings, then. I hope you don't think I'm a role model, do you now? X-)

Comment author: gjm 30 November 2016 05:58:59PM 0 points

I don't really believe in role models. Anyway, I wasn't intending to point out any person's shortcomings; I was agreeing with VipulNaik's misgivings about the technique.

(To be more concrete, "doing X may get you ignored as a blowhard" is a criticism of doing-X, not a criticism of someone who either does X or contemplates doing X.)

In response to Epistemic Effort
Comment author: Lumifer 29 November 2016 09:12:59PM *  0 points

I think this is conflating two different things: how much effort did you spend (e.g. "Made a 5 minute timer and thought seriously about possible flaws or refinements") and what did you do to empirically test the idea (e.g. "Ran a Randomized Control Trial"). These two are somewhat correlated, hopefully, but it's possible both to engage in very complex and effortful flights of fancy without any connection to the empirics, and to start with simple and basic actual tests without thinking about the problem too hard or too long.

I think I'd rather see people state the Falsifiability Status of their idea, say, on a scale ranging from trivial to never (a rough code sketch of such a scale follows the list). For example:

  • Trivial: most anyone could do it in a few minutes at most
  • Easy: many people could do it with a modest investment of time
  • Moderate: amateurs could do it but it would require effort
  • Difficult: doable by professionals with a budget; amateurs will have huge difficulties
  • Very hard: probably doable, but requires large teams and a lot of money and effort
  • Potentially possible: probably doable in the future, but not at the current technology level
  • Never: not falsifiable
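
A purely illustrative sketch in Python (my own; the Falsifiability class and its member names are hypothetical, not from any existing library): an ordered scale like this maps naturally onto an ordered enumeration, so a question like "could an amateur test this?" becomes a mechanical comparison rather than a judgment about free-form labels.

    from enum import IntEnum

    class Falsifiability(IntEnum):
        """How hard would it be to empirically test this idea?"""
        TRIVIAL = 0               # most anyone could do it in a few minutes at most
        EASY = 1                  # many people, with a modest investment of time
        MODERATE = 2              # amateurs could do it, but it would require effort
        DIFFICULT = 3             # doable by professionals with a budget
        VERY_HARD = 4             # large teams, a lot of money and effort
        POTENTIALLY_POSSIBLE = 5  # not at the current technology level
        NEVER = 6                 # not falsifiable

    # IntEnum members compare like integers, so claims can be screened:
    claim = Falsifiability.MODERATE
    if claim <= Falsifiability.MODERATE:
        print("An amateur could, in principle, test this claim.")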
In response to comment by Lumifer on Epistemic Effort
Comment author: gjm 30 November 2016 05:40:36PM 0 points

I'd split it up a bit differently. "How much effort" versus "What actual reason does anyone else have to agree with this?". The latter isn't quite the same as "what empirical testing has it had?" but I think it's the more important question.

However, "Epistemic effort" as proposed here (1) probably does correlate pretty well with "how much reason to agree?", (2) also gives information about the separate question "how seriously is this person taking this discussion?" and (3) is probably easier to give an accurate account of than "what actual reason ...".

In response to Articles in Main
Comment author: gjm 30 November 2016 05:35:53PM 1 point

Is there a way to put a linkpost in Discussion and disable comments on it?
