
Comment author: ChristianKl 31 August 2016 06:24:50PM 2 points [-]
Comment author: kpreid 01 September 2016 02:56:41PM 2 points [-]

Thanks for doing that!

Comment author: ChristianKl 17 February 2016 06:28:30PM 0 points [-]

Part of what the acceptance of Ohm’s Law demanded was a redefinition of both ‘current’ and ‘resistance’; if those terms had continued to mean what they had meant before, Ohm’s Law could not have been right; that is why it was so strenuously opposed as, say, the Joule-Lenz Law was not.

Thomas Kuhn in The Structure of Scientific Revolutions

Comment author: kpreid 22 August 2016 06:38:07PM 1 point [-]

Is there something not-paywalled which describes what the relevant old definitions were?

In response to Test Driven Thinking
Comment author: kpreid 05 September 2015 08:05:19PM 1 point [-]

Your description of TDD is slightly incomplete: the steps include, after writing the test, running the test when you expect it to fail. The idea is that if it doesn't fail, you have either written an ineffective test (this is more likely than one might think) or the code under test actually already handles that case.

Then you write the code (as little code as needed) and confirm that the test passes where it didn't before to validate that work.
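
A minimal sketch of that red/green loop in Python with unittest (the function and test names here are hypothetical, purely to illustrate the ordering):

    import unittest

    def slugify(title):
        # Written second, and kept as small as possible: just enough to
        # make the previously failing test below pass.
        return title.strip().lower().replace(" ", "-")

    class TestSlugify(unittest.TestCase):
        # Written first. Run the suite before implementing slugify() and
        # watch this test fail; if it passes at that point, the test is
        # ineffective or the behaviour already exists.
        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify("Test Driven Thinking"),
                             "test-driven-thinking")

    if __name__ == "__main__":
        unittest.main()  # Final run: confirm the test now passes.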

Comment author: solipsist 23 July 2015 04:00:34AM *  9 points [-]

I am not close to an expert in security, but my reading of one is that yes, the NSA et al. can get into any system they want to, even if it is air-gapped.

Dilettanting:

  • It is really really hard to produce code without bugs. (I don't know a good analogy for writing code without bugs -- writing laws without any loopholes, where all conceivable case law had to be thought of in advance?)
  • The market doesn't support secure software. The expensive part isn't writing the software -- it's meticulously inspecting for defects until you are confident that any remaining defects are sufficiently rare. If a firm were to go through the expense of producing highly secure software, how could it credibly demonstrate to customers the absence of bugs? It's a market for lemons.
  • Computer systems comprise hundreds of software components and are only as secure as the weakest one. The marginal returns from securing any individual software component fall sharply -- there isn't much reason to make any component of the system much more secure than the average component. The security of most consumer components is very weak. So unless there's an entire secret ecosystem of secured software out there, "secure" systems are using a stack with insecure, consumer-grade components.
  • Security in the real world is helped enormously by the fact that criminals must move physically near their target with their unique human bodies. Criminals thus put themselves at great risk when committing crimes, both of leaking personally identifying information (their face, their fingerprints) and of being physically apprehended. On the internet, nobody knows you're a dog, and if your victim recognizes your thievery in progress, you just disconnect. It is thus easier for a hacker to make multiple incursion attempts and hone his craft.
  • Edward Snowden was, like, just some guy. He wasn't trained by the KGB. He didn't have spying advisors to guide him. Yet he stole who-knows-how-many thousands of top-secret documents in what is claimed to be (but I doubt was) the biggest security breach in US history. But Snowden was trying to get it in the news. He stole thousands of secret documents, and then yelled through a megaphone "hey everyone, I just stole thousands of secret documents". Most thieves do not work that way.
  • Intelligence organizations have budgets larger than, for example, the gross box office receipts of the entire movie industry. You can buy a lot for that kind of money.
Comment author: kpreid 01 September 2015 01:30:10AM *  0 points [-]

Computer systems comprise hundreds of software components and are only as secure as the weakest one.

This is not a fundamental fact about computation. Rather it arises from operating system architectures (isolation per "user") that made some sense back when people mostly ran programs they wrote or could reasonably trust, on data they supplied, but don't fit today's world of networked computers.

If interactions between components are limited to the interfaces those components deliberately expose to each other, then the attacker's problem is no longer to find one broken component and win, but to find a path of exploitability through the graph of components that reaches the valuable one.

This limiting can, with proper design, be done in a way which does not require the tedious design and maintenance of allow/deny policies as some approaches (firewalls, SELinux, etc.) do.
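
A hedged sketch of what that can look like in code (Python, with entirely hypothetical component names), in the spirit of object-capability designs: a component can only act on the interfaces it was explicitly handed, so the wiring diagram itself bounds what a compromised component can reach.

    # Illustrative sketch only (hypothetical names): components interact
    # solely through interfaces they are explicitly handed, rather than
    # through ambient OS-level authority.

    class SecretStore:
        def __init__(self):
            self._secrets = {"api_key": "hunter2"}

        def read(self, name):
            return self._secrets[name]

    class AuditLog:
        # A deliberately narrow, append-only interface: holders of this
        # object can record events, but cannot read secrets or reach
        # other components' state.
        def __init__(self):
            self._entries = []

        def append(self, event):
            self._entries.append(event)

    class NetworkFrontend:
        # The frontend is wired with only the audit log, not the secret
        # store. An attacker who fully controls it can spam the log, but
        # holds no reference through which to reach the secrets.
        def __init__(self, audit_log):
            self._audit = audit_log

        def handle_request(self, request):
            self._audit.append(("request", request))
            return "ok"

    # Wiring: the graph of reachable interfaces *is* the security policy,
    # so no separate allow/deny rule set has to be maintained.
    store = SecretStore()
    log = AuditLog()
    frontend = NetworkFrontend(log)
    frontend.handle_request("GET /")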

Comment author: ScottL 23 August 2015 03:21:22AM *  0 points [-]

I find myself bothered by the way the examples assume one uses exactly one particular approach to thinking — but in a different aspect.

I am not sure how to fix this. Plus, the examples (except the first) are all from the literature on mental models.

To solve this second problem you need to use multiple models.

is false. I only need one model, which leaves some facts unspecified.

I changed the post. I shouldn't have used the word solved. I meant that you need to generate all of the models if you are going to ensure that the model with the conclusion is valid or, as you say, not 'inconsistent'. So, you not only have to reach the conclusion. You need to also check if it's valid. That's why you go through all three models. In the last example the police arrived before the reporter in one model and the reporter arrived before the police in another of the models. Therefore, the example is invalid.

Comment author: kpreid 25 August 2015 02:52:03AM 0 points [-]

Plus, the examples (except the first) are all from the literature on mental models.

Then my criticism is of the literature, not your post.

I meant that you need to generate all of the models if you are going to ensure that the model with the conclusion is valid or, as you say, not 'inconsistent'. So, you not only have to reach the conclusion. You need to also check if it's valid.

Reality is never inconsistent (in that sense). Therefore, the only reason to check is to guard against errors in my own reasoning or in the information I was given; neither check is strictly necessary.

That's why you go through all three models. In the last example the police arrived before the reporter in one model and the reporter arrived before the police in another of the models. Therefore, the example is invalid.

In the last example, the type of reasoning I described above would find no answer, not multiple ones.

(And, to clarify my terminology, the last example is not an instance of "the premises are inconsistent"; rather, there is insufficient information.)

Comment author: kpreid 23 August 2015 01:37:32AM 2 points [-]

I appreciate this article for introducing research I was not previously aware of.

However, as other commenters did, I find myself bothered by the way the examples assume one uses exactly one particular approach to thinking — but in a different aspect. Specifically, I made the effort to work through the example problems myself, and

To solve this second problem you need to use multiple models.

is false. I only need one model, which leaves some facts unspecified. I reasoned as follows:

  1. What I need to know is the relation between “police” and “reporter”.
  2. Everything we know about “police” is that it is simultaneous with “alarm”.
  3. Everything we know about “reporter” is that it is simultaneous with “stabbed”.
  4. What do we know about the two newly mentioned events? That “alarm” is before “stabbed”.
  5. Therefore “police” is before “reporter” (or, if we do not check further, the premises could be inconsistent).

This is building up exactly as much model as we need to reach the conclusion.
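
A small sketch of those steps as code (Python; the event names come from the example, everything else is made up for illustration):

    # Chase the "simultaneous with" links from the two events of interest,
    # then compare the events they land on using the known "before" facts.

    simultaneous = {"police": "alarm", "reporter": "stabbed"}
    before = {("alarm", "stabbed")}

    def relation(a, b):
        # Map each event onto the event it is simultaneous with (if any),
        # then look the pair up in the "before" facts.
        a2 = simultaneous.get(a, a)
        b2 = simultaneous.get(b, b)
        if (a2, b2) in before:
            return "before"
        if (b2, a2) in before:
            return "after"
        return "unknown"  # insufficient information, as in the last example

    print(relation("police", "reporter"))  # -> "before"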

I will claim that this is a more realistic mode of reasoning — that is, more applicable to real-world problems — than the one you assume, because it does not assume that all of the information available is relevant, or that there even is a well-defined boundary of “all of the information”.

Comment author: Dreaded_Anomaly 25 June 2015 06:30:00PM 10 points [-]

A few brief supplements to your introduction:

The source of the generated image is no longer mysterious: Inceptionism: Going Deeper into Neural Networks

ANN-generated images

But though the above is quite fascinating and impressive, we should also keep in mind the bizarre false positives that a person can generate: Images that fool computer vision raise security concerns

ANN false positives

Comment author: kpreid 27 June 2015 03:17:34PM 4 points [-]

I look at the bizarre false positives and I wonder if (warning: wild speculation) the problem is that the networks were not trained to recognize the lack of objects. For example, in most cases you have some noise in the image, so if every training image is something, or rather something-plus-noise, then the system could learn that the noise is 100% irrelevant and pick out the something.

(The noisy images look to me like they have small patches in one spot faintly resembling what they're identified as — if my vision had a rule that deemphasized the non-matching noise and I had a much smaller database of the world than I do, then I think I'd agree with those neural networks.)

If the above theory is true, then a possible fix would be to include in training data a variety of images for which the expected answers are like “empty scene”, “too noisy”, “simple geometric pattern”, etc. But maybe this is already done — I'm not familiar with the field.
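
The change would amount to something like the following sketch, assuming a plain labelled-image training set (all names and shapes here are hypothetical, not any particular system's actual pipeline):

    # Hypothetical sketch of the proposed fix: extend the label set with
    # "non-object" categories and add matching examples to the training
    # data, so the network can learn that noise alone is not evidence of
    # an object.

    import numpy as np

    LABELS = ["cat", "dog", "car",          # ordinary object classes
              "empty scene", "too noisy", "simple geometric pattern"]

    def make_negative_examples(n, size=(64, 64)):
        """Generate images whose correct answer is a non-object label."""
        examples = []
        for _ in range(n):
            noise = np.random.rand(*size)             # label: "too noisy"
            examples.append((noise, LABELS.index("too noisy")))
            blank = np.zeros(size)                    # label: "empty scene"
            examples.append((blank, LABELS.index("empty scene")))
        return examples

    # These pairs would simply be appended to the usual labelled training
    # set before training the classifier as normal.
    training_data = make_negative_examples(1000)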

Comment author: TezlaKoil 21 May 2015 12:32:43PM *  19 points [-]

I have tried:

  • Wearing a vibrating compass anklet for a week. It improved my navigational skills tremendously. I have low income, but I would definitely buy one if I could afford it.

  • Listening to a 60 bpm metronome on a Bluetooth earpiece for a week (excluding showers). I got used to the sound relatively quickly, but I most definitely did not acquire an absolute sense of time. However, I noticed that during boring activities such as filling out paperwork, the ticking itself seems to slow down.

I will try:

  • Wearing an Oculus Rift that shows the Fourier Transform of what I would normally see. I'd like to know if I can get used to it, and if it improves my mathematical intuition.
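
The per-frame transform itself is simple to state; a rough numpy sketch, assuming a greyscale frame as input and that the log-magnitude spectrum is what gets displayed:

    import numpy as np

    def fourier_view(frame):
        """Return a displayable log-magnitude spectrum of a greyscale frame."""
        spectrum = np.fft.fftshift(np.fft.fft2(frame))  # centre low frequencies
        magnitude = np.log1p(np.abs(spectrum))          # compress dynamic range
        return magnitude / magnitude.max()              # normalise to [0, 1]

    # Example with a synthetic 480x640 frame standing in for a camera image:
    frame = np.random.rand(480, 640)
    display = fourier_view(frame)
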
Comment author: kpreid 06 June 2015 10:46:03PM 1 point [-]

I wonder: after sufficient adaptation to a rate-of-time sense, could useful mental effects be produced by adjusting the scale?

Comment author: Lumifer 22 May 2015 03:43:57PM 0 points [-]

Hm. A solid rocket burns from one end; opening up the nose will do nothing to the thrust. Splitting a side, I would guess, will lead to uncontrolled acceleration with a chaotic flight path, but not zero acceleration.

Comment author: kpreid 23 May 2015 03:02:35PM 2 points [-]

Apparently that's true of some model rocket motors, but the SRBs have a hollow core running the entire length of the propellant, so that it burns from the center out to the casing along the entire length at the same time.

Comment author: CBHacking 22 May 2015 04:23:18AM 0 points [-]

Whoops, you're right. I thought the gimbaling was just on the SSMEs (attached to the orbiter) but in retrospect it's obvious that the SRBs had to have some control of their flight path. I'm now actually rather curious about the range safety stuff for the SRBs - one of the dangers of an SRB is that there's basically no way to shut it down, and indeed they kept going for some time after Challenger blew up - but the gimbaling is indeed an obvious sign that I should have checked my memory/assumptions. Thanks.

Comment author: kpreid 22 May 2015 02:27:50PM 1 point [-]

I'm now actually rather curious about the range safety stuff for the SRBs - one of the dangers of an SRB is that there's basically no way to shut it down, and indeed they kept going for some time after Challenger blew up

What I've heard (no research) is that thrust termination for a solid rocket works by charges opening the top end, so that the exhaust exits from both ends and the thrust mostly cancels itself out, or perhaps by splitting along the length of the side (destroying all integrity). In any case, the fuel still burns, but you can stop it from accelerating further.
