kpreid31

> What distinguishes this from how my brain works?

Your brain stores memories of input and also of previous thoughts you had and the experience of taking actions. Within the “replaced with a new version” view of the time evolution of your brain (which is also the pure-functional-programming view of a process communicating with the outside world), we can say that the input it receives next iteration contains lots of information from outputs it made in the preceding iteration.

But with the reinforcement learning algorithm, the previous outputs are not given as input. Rather, the previous outputs are fed to the reward function, and the reward function's output is fed to the gradient descent process, and that determines the future weights. It seems like a much noisier channel.
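To make the contrast concrete, here is a minimal sketch of that kind of loop (a toy REINFORCE-style policy-gradient setup of my own invention, not any particular system): the output at one step never appears in the next step's input; it reaches the future only through the reward and the resulting weight update.

```python
import numpy as np

# Toy policy-gradient loop, purely illustrative.
# The point: the action at step t never appears in the input at step t+1;
# it influences the future only via reward -> gradient -> weights.

rng = np.random.default_rng(0)
weights = np.zeros(3)
learning_rate = 0.1

def policy(weights, x):
    # Probability of emitting action "1" for input x (a simple logistic policy).
    return 1.0 / (1.0 + np.exp(-weights @ x))

def reward(x, action):
    # Hypothetical task: the action should match the sign of the first feature.
    return 1.0 if action == (x[0] > 0) else 0.0

for step in range(1000):
    x = rng.normal(size=3)        # fresh input, carrying nothing from past outputs
    p = policy(weights, x)
    action = rng.random() < p     # sample an output
    r = reward(x, action)         # the output is seen only by the reward function
    # REINFORCE-style gradient of log pi(action | x) for the logistic policy:
    grad_log_p = (1.0 - p) * x if action else -p * x
    weights += learning_rate * r * grad_log_p
```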

Also, individual parts of a brain (or ordinary computer program with random access memory) can straightforwardly carry state forward that is mostly orthogonal to state in other parts (thus allowing semi-independent modules to carry out particular algorithms); it seems to me that the model cannot do that — cannot increase the bandwidth of its “train of thought while being trained” — without inventing an encoding scheme to embed that information into its performance on the desired task such that the best performers are also the ones that will think the next thought. It seems fairly implausible to me that a model would learn to execute such an internal communication system, while still outcompeting models “merely” performing the task being trained.

(Disclaimer: I'm not familiar with the details of ML techniques; this is just loose abstract thinking about that particular question of whether there's actually any difference.)

kpreid51

> Next, high and low settings are chosen for each X factor, and all possible combinations of settings are arranged in a hypercube. Instead of experimenting on one factor at a time with enough repetitions to build up statistical significance, you can perform just a few repetitions at each corner of the hypercube.

This concept reminds me of the problem of planning software tests: I want to exercise all behaviors of the code under test, but actually testing the cartesian product of input conditions often means writing a test that is so generic it duplicates the code under test (unless there is a more naïve algorithm that the test can use), and that is hard to evaluate for its own correctness. Instead, I end up writing a selected set of cases intended to cover interesting combinations of inputs — but then the problem is thinking of which inputs are worth testing. When bugs are discovered, they may involve combinations of inputs that were not thought of (or parameters we didn't think of testing at all, i.e. implicitly put in the “control” category, or specific edge-case values of parameters we did test).
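As a concrete (and entirely made-up) illustration of the exhaustive end of that trade-off, here is what parametrizing a pytest test over the cartesian product of input conditions might look like, in a case where a more naïve algorithm happens to exist to compute the expected answer:

```python
import itertools
import pytest

# A deliberately trivial function under test (an illustrative stand-in).
def clamp(value, low, high):
    return max(low, min(high, value))

# Chosen "levels" for each input, combined exhaustively.
VALUES = [-10, 0, 5, 10, 99]
LOWS = [-1, 0, 3]
HIGHS = [3, 7, 100]

@pytest.mark.parametrize(
    "value,low,high",
    [c for c in itertools.product(VALUES, LOWS, HIGHS) if c[1] <= c[2]],
)
def test_clamp_matches_naive_algorithm(value, low, high):
    # The more naïve algorithm the test can use: the clamped value is the
    # median of (low, value, high).
    assert clamp(value, low, high) == sorted([low, value, high])[1]
```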

An alternative to hand-written testing of specific cases is to write a property test, like “is input A + input B always ≤ output C, under a wide-ranging selection of inputs”. This feels analogous to measuring correlations in that hypercube — and the part of the actual output that you're not checking precisely (in my example, the value A + B − C) is the part of the test that is “noise” rather than “control” because we've decided it is more practical to ignore that information than to control it (write a test that contains or computes the exact answer to expect).
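For example (assuming the Hypothesis library for Python; `allocate_capacity` is a made-up function), the property above might be written roughly like this:

```python
from hypothesis import given, strategies as st

# Made-up function under test: it returns a capacity at least large enough to
# hold both inputs, plus some headroom whose exact size we don't pin down.
def allocate_capacity(a, b):
    needed = a + b
    return needed + needed // 8

@given(st.integers(min_value=0, max_value=10**9),
       st.integers(min_value=0, max_value=10**9))
def test_capacity_covers_both_inputs(a, b):
    c = allocate_capacity(a, b)
    # The checked property: A + B ≤ C, over a wide-ranging selection of inputs.
    # The deliberately ignored "noise" is the exact value of A + B − C.
    assert a + b <= c
```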

kpreid220

I like this post and am not intending to argue against its point by the following:

I read the paragraph about orders of magnitude and immediately started thinking about whether there are good counterexamples. Here are two: wires are used in lengths from nanometers to kilometers, and computer programs as a category run for times from milliseconds to weeks (even considering only those which are intended to have a finite task and not to continue running until cancelled).

Common characteristics of these two examples are that they are one-dimensional (no “square-cube law” limits scaling) and that they are arguably in some sense the most extensible solutions to their problem domains (a wire is the form that arbitrary length electrical conductors take, and most computer programs are written in Turing-complete languages).

Perhaps the caveat is merely that “some things scale freely such that the order of magnitude is no new information and you need to look at different properties of the thing”.

kpreid30

For what it's worth, https://en.wikipedia.org/wiki/Evaporative_cooler takes the perspective (in one paragraph) that “Vapor-compression refrigeration uses evaporative cooling, but the evaporated vapor is within a sealed system, and is then compressed ready to evaporate again, using energy to do so.” So, in this perspective, evaporative cooling is a part of the system and forced recirculation (requiring the energy source mentioned in the question) is another.

> heat pumps not refrigerators

Note that what is colloquially called a heat pump is the same fundamental thing as a refrigerator — equipment is referred to as a “heat pump” when it is used for heating rather than, or in addition to, cooling, but the processes and principles are the same (with the addition of a “reversing valve” so that the direction of operation may be changed, when both heating and cooling are wanted).

kpreid30

Isolation is not about surges, but about preventing current from flowing in a particular path at all. In a transformer, there is no conductive (only magnetic) path from the input side to the output side. So, if you touch one or more of the low-voltage output terminals of a transformer, you can't thereby end up part of a high-voltage circuit no matter what else you're also touching; you can only experience the low voltage. This is how wall-plug low-voltage power supplies work. Even the ones using electronic switching converters (nearly all of them today) use a transformer to provide the isolation: the line-voltage AC is converted to higher-frequency AC, run through a small transformer (the higher the frequency, the smaller the transformer needed for the same power), and then rectified to DC.
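The frequency/size relationship comes out of the standard transformer EMF equation (a rough sketch that ignores losses, winding resistance, and thermal limits):

```latex
% For sinusoidal excitation:
%   E_rms = 4.44 f N A_c B_max
% where f is frequency, N the number of turns, A_c the core cross-section,
% and B_max the peak flux density (limited by the core material).
E_{\mathrm{rms}} = 4.44\, f\, N\, A_c\, B_{\max}
\qquad\Rightarrow\qquad
N A_c = \frac{E_{\mathrm{rms}}}{4.44\, f\, B_{\max}} \;\propto\; \frac{1}{f}
% So for a fixed voltage and flux density, raising the frequency lets the
% turns count and/or core area shrink proportionally.
```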

kpreid20

Is there something not-paywalled which describes what the relevant old definitions were?

kpreid20

Your description of TDD is slightly incomplete: the steps include, after writing the test, running the test when you expect it to fail. The idea is that if it doesn't fail, you have either written an ineffective test (which is more likely than one might think) or the code under test actually already handles that case.

Then you write the code (as little code as needed) and confirm that the test now passes where it previously failed, to validate that work.
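A minimal sketch of that red/green loop, assuming pytest (the `slugify` function is a made-up example):

```python
# Step 1 (red): write the test first and run it, expecting it to fail.
# If it passes immediately, the test is probably not exercising anything new,
# or the behavior already exists.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write only as much code as the failing test demands,
# then re-run and confirm the test now passes where it previously failed.
def slugify(text):
    return text.lower().replace(" ", "-")
```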

kpreid00

> Computer systems comprise hundreds of software components and are only as secure as the weakest one.

This is not a fundamental fact about computation. Rather, it arises from operating-system architectures (isolation per “user”) that made some sense back when people mostly ran programs they wrote or could reasonably trust, on data they supplied, but that don't fit today's world of networked computers.

If interactions between components are limited to the interfaces those components deliberately expose to each other, then the attacker's problem is no longer to find one broken component and win, but to find a path of exploitability through the graph of components that reaches the valuable one.
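As a toy illustration of what that change in the attacker's problem looks like (the component names and exploitability labels are entirely hypothetical), finding an attack becomes a reachability search over the graph of deliberately exposed interfaces, rather than a hunt for any single weak component:

```python
from collections import deque

# Hypothetical system: edges are interfaces a component deliberately exposes
# to another, annotated with whether that path is (known to be) exploitable.
exposes = {
    "network":        {"web_frontend": True},
    "web_frontend":   {"session_store": False, "render_service": True},
    "render_service": {"font_parser": True},
    "font_parser":    {},
    "session_store":  {"secrets_db": False},
}

def exploit_path(start, target):
    """BFS over exploitable edges only; returns a path or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for nxt, exploitable in exposes.get(node, {}).items():
            if exploitable and nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A single broken component (font_parser) doesn't matter unless it lies on an
# exploitable path to the valuable one (secrets_db).
print(exploit_path("network", "secrets_db"))   # -> None in this toy graph
```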

This limiting can, with proper design, be done in a way which does not require the tedious design and maintenance of allow/deny policies as some approaches (firewalls, SELinux, etc.) do.

kpreid00

> Plus, the examples (except the first) are all from the literature on mental models.

Then my criticism is of the literature, not your post.

> I meant that you need to generate all of the models if you are going to ensure that the model with the conclusion is valid or as you say not 'inconsistent'. So, you not only have [to] reach the conclusion. You need to also check if it's valid.

Reality is never inconsistent (in that sense). Therefore, I only need to check in order to guard against errors in my own reasoning or in the information I have been given; neither of these checks is necessary.

> That's why you go through all three models. In the last example the police arrived before the reporter in one model and the reporter arrived after the police in another of the models. Therefore, the example is invalid.

In the last example, the type of reasoning I described above would find no answer, not multiple ones.

(And, to clarify my terminology, the last example is not an instance of "the premises are inconsistent"; rather, there is insufficient information.)
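For what it's worth, here is a rough sketch of the distinction, using placeholder premises rather than the post's actual example: enumerating every ordering consistent with the premises leaves more than one “model” standing, which is the insufficient-information case rather than an inconsistency.

```python
from itertools import permutations

# Placeholder premises about arrival order (NOT the post's actual example):
# "the ambulance arrived before the police" and
# "the ambulance arrived before the reporter".
people = ["police", "reporter", "ambulance"]
premises = [("ambulance", "police"), ("ambulance", "reporter")]

def satisfies(order, before_pairs):
    pos = {p: i for i, p in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in before_pairs)

# Mental-models style: generate every ordering consistent with the premises.
models = [order for order in permutations(people) if satisfies(order, premises)]
print(models)
# Two consistent orderings remain, so "did the police arrive before the
# reporter?" has no determined answer: insufficient information, not an
# inconsistency.
```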
