Steve Keen's Debunking Economics blames debt, not automation.
Essentially, many people currently feel that they are deep in debt, and work to get out of debt. Keen has an ODE model of the macroeconomy that shows various behaviors, including debt-driven crashes.
Felix Martin's Money goes further and argues that strong anti-inflation stances by central bank regulators strengthen the hold of creditors over debtors, which has made these recent crashes bigger and more painful.
The two comments, though contradictory, refer to two different thought experiments.
Is it reasonable to take this as evidence that we shouldn't use expected utility computations, or at least not only expected utility computations, to guide our decisions?
If I understand the context, the reason we believed an entity, either a human or an AI, ought to use expected utility as a practical decision-making strategy is that it would yield good results (a simple, general architecture for decision making). If there are fully general attacks (muggings) on all entities that use expected utility as a practical decision-making strategy, then perhaps we shou...
Magic Haskeller and Augustsson's Djinn are provers (or, to say it another way, comprehensible as provers; or, isomorphic to provers). Given a type, they attempt to prove the corresponding proposition, and if they succeed they output the term corresponding (via the Curry-Howard isomorphism) to the proof.
I believe they cannot output a term t :: a -> b because there is no such term, because 'anything implies anything else' is false.
The type constructors that you're thinking of are Arrow and Int. Forall is another type constructor, for constructing generic polymorphic types. Some types such as "Forall A, Forall B, A -> B" are uninhabited. You cannot produce an output of type B in a generic way, even if you are given access to an element of type A.
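To make the inhabited/uninhabited distinction concrete, here is a small Haskell sketch (the function names are mine; the caveat about looping terms is the standard one about Haskell's nontermination):

```haskell
{-# LANGUAGE ExplicitForAll #-}

-- Inhabited: "A implies A" has a proof, namely the identity function.
-- Given only the type, Djinn finds exactly this term.
proofIdentity :: forall a. a -> a
proofIdentity x = x

-- Inhabited: "A implies (B implies A)": discard the second premise.
proofConst :: forall a b. a -> b -> a
proofConst x _ = x

-- Uninhabited: no total term has the type  forall a b. a -> b.
-- GHC will happily typecheck a looping definition like
--   noProof x = noProof x
-- but a nonterminating term is not a proof, which is why total-term
-- provers such as Djinn fail on this type.
```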
The type corresponding to a proposition like "all computable functions from the reals to the reals are continuous" looks like a function type consuming some representation of "a computable function" and produc...
I think you may be sincerely confused. Would you please reword your question?
If your question is whether someone (either me or the OP) has committed a multiplication error - yes, it's entirely possible, but multiplication is not the point - the point is anthropic reasoning and whether "I am a Boltzmann brain" is a simple hypothesis.
The arithmetical hierarchy presumes a background of predicate logic; I was not presuming that. Yes, the type theory that I was gesturing towards would have some similarity to the arithmetical hierarchy.
I was trying to suggest that the answer to "what is a prediction" might look like a type theory of different variants of a prediction. Perhaps a linear hierarchy like the arithmetical hierarchy, yes, perhaps something more complicated. There could be a single starting type "concrete prediction" and a type constructor that, given sour...
Perhaps there is a type theory for predictions, with concrete predictions like "The bus will come at 3 o'clock", and functions that output concrete predictions like "Every Monday, Wednesday and Friday, the bus will come at 3 o'clock" (consider the statement as a function taking a time and returning a concrete yes-or-no prediction).
An ultrafinitist would probably not argue with the existence of such a function, even though to someone other than an ultrafinitist, the function looks like it is quantifying over all time. From the ultrafinitist's point of view, you're going to apply it to some concrete time, and at that point it's going to output some concrete prediction.
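Here is a minimal sketch of what such prediction types might look like in Haskell; the type names and the day-numbering convention are mine, purely for illustration:

```haskell
-- A concrete prediction: one specific, checkable claim.
data ConcretePrediction = BusComesAt { dayOfWeek :: Int, hour :: Int }
  deriving (Eq, Show)

-- "Every Monday, Wednesday and Friday, the bus will come at 3 o'clock",
-- read as a function: feed it a concrete day, get a concrete yes/no
-- prediction for that day. (Days numbered Monday = 1 .. Sunday = 7.)
schedulePrediction :: Int -> Maybe ConcretePrediction
schedulePrediction day
  | day `elem` [1, 3, 5] = Just (BusComesAt day 15)
  | otherwise            = Nothing
```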
The prevalence of encodings means that we might not be able to "build machines with one or the other". That is, given that there are basic alternatives A and B and A can encode B and B can encode A, it would take a technologist specializing in hair-splitting to say whether a machine that purportedly is using A is "really" using A at its lowest level or whether it is "actually" using B and only seems to use A via an encoding.
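To make "A can encode B" concrete, here is the classic lambda-calculus encoding of pairs as bare functions, written in Haskell (a sketch of the general phenomenon, not anything from this thread):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Pairs encoded as functions: a "pair" is whatever hands its two
-- components to a chosen selector. Nothing but lambdas underneath.
type ChurchPair a b = forall r. (a -> b -> r) -> r

pair :: a -> b -> ChurchPair a b
pair x y = \select -> select x y

first :: ChurchPair a b -> a
first p = p (\x _ -> x)

second :: ChurchPair a b -> b
second p = p (\_ y -> y)

-- first (pair 1 2) beta-reduces to 1: a machine "using pairs" here
-- is, at the lowest level, only ever using functions.
```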
If in the immediate term you want to work with many-sorted first order logic, a reasonable first step wo...
It seems to me like this discussion has gotten too far into either/or "winning". The prevalence of encodings in mathematics and logic (e.g. encoding the concept of pairing in set theory by defining (a, b) to be the set {{a}, {a, b}}, the double-negation encoding of classical logic inside intuitionistic logic, et cetera) means that the things we point to as "foundations" such as the ZF axioms for set theory are not actually foundational, in the sense of necessary and supporting. ZF is one possible system which is sufficiently strong to enc...
Feeling our way into a new formal system is part of our (messy, informal) pebblecraft. Sometimes people propose formal systems starting with their intended semantics (roughly, model theory). Sometimes people propose formal systems starting with introduction and elimination rules (roughly, proof theory). If the latter, people sometimes look for a semantics to go along with the syntax (and vice versa, of course).
For example, lambda calculus started with rules for performing beta reduction. In talking about lambda calculus, people refer to it as "functio...
This and previous articles in this series emphasize attaching meaning to sequences of symbols via discussion and gesturing toward models. That strategy doesn't seem compatible with your article regarding sheep and pebbles. Isn't there a way to connect sequences of symbols to sheep (and food and similar real-world consequences) directly via a discipline of "symbolcraft"?
I thought pebblecraft was an excellent description of how finitists and formalists think about confusing concepts like uncountability or actual infinity: Writing down "... is ...
Mathematics and logic are part of a strategy that I'll call "formalization". Informal speech leans on (human) biological capabilities. We communicate ideas, including ideas like "natural number" and "set" using informal speech, which does not depend on definitions. Informal speech is not quite pointing and grunting, but pointing and grunting is perhaps a useful cartoon of it. If I point and grunt to a dead leaf, that does not necessarily pin down any particular concept such as "dead leaves". It could just as well ind...
I'm concerned that you're pushing second-order logic too hard, using a false fork - such-and-such cannot be done in first-order logic, therefore second-order logic. "Second-order" logic is a particular thing - for example, it is a logic based on model theory. http://en.wikipedia.org/wiki/Second-order_logic#History_and_disputed_value
There are lots of alternative directions to go when you go beyond the general consensus of first-order logic. Freek Wiedijk's paper "Is ZF a hack?" is a great tour of alternative foundations of mathematics - firs...
If you taboo the one-place predicate 'matters', please specialize the two-place predicate (X matters to Y) to Y = "the OP's subsequent use of this article", and use the resulting one-place predicate.
I am not worried about apparent circularity. Once I internalized the Löwenheim-Skolem argument that first-order theories have countable "non-standard" models, model theory dissolved for me. The syntactical / formalist view of semantics, that what mathematicians are doing is manipulating finite strings of symbols, is always a perfectly good model, ...
Does it matter if you don't have formal rules for what you're doing with models?
Do you expect what you're doing with models to be formalizable in ZFC?
Does it matter if ZFC is a first-order theory?
It may not be possible to draw a sharp line between things that exist from the things that do not exist. Surely there are problematic referents ("the smallest triple of numbers in lexicographic order such that a^3+b^3=c^3", "the historical Jesus", "the smallest pair of numbers in lexicographic order such that a^3+24=c^2", "Shakespeare's firstborn child") that need considerable working with before ascertaining that they exist or do not exist. Given that difficulty, it seems like we work with existence explicitly, as a...
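For what it's worth, the third referent above yields to an explicit, bounded search; a throwaway Haskell sketch (mine), assuming we only care about positive integers:

```haskell
-- Search small values for pairs (a, c) with a^3 + 24 = c^2.
-- The first hit, in lexicographic order, is (1, 5): 1 + 24 = 25.
candidatePairs :: Integer -> [(Integer, Integer)]
candidatePairs bound =
  [ (a, c) | a <- [1 .. bound], c <- [1 .. bound], a ^ 3 + 24 == c ^ 2 ]
```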
One model for time travel might be a two-dimensional piece of paper with a path or paths drawn wiggling around on it. If you scan a "current moment" line across the plane, then you see points dancing. If a line and its wiggles are approximately perpendicular to the line of the current moment, then the dancing is local and perhaps physical. Time travel would be a sigmoid line, first a "spontaneous" creation of a pair of points, then the cancellation of one ("reversed") point with the original point.
An alternative story is of a li...
I understand your point - it's akin to the Box quote "all models are wrong but some are useful" - when choosing among (false) models, choose the most useful one. However, it is not the case that stronger assumptions are more useful - of course stronger assumptions make the task of proving easier, but the task as a whole includes both proving and also building a system based on the theorems proven.
My primary point is that EY is implying that second-order logic is necessary to work with the integers. People work with the integers without using seco...
If you were writing software for something intended to traverse the Interplanetary Transport Network, then you would probably use charts and atlases and transition functions, and you would study (symplectic) manifolds and homeomorphisms in order to understand those more-applied concepts.
If an otherwise useful theorem assumes that the manifold is orientable, then you need to show that your practical manifold is orientable before you can use it - and if it turns out not to be orientable, then you can't use it at all. If instead you had an analogous theorem that applied to all manifolds, then you could use it immediately.
If you assume A and derive B, you have not proven B but rather A implies B. If you can instead assume a weaker axiom A', and still derive B, then you have proven A' implies B, which is stronger because it will be applicable in more circumstances.
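The same tradeoff is familiar from typed programming, where weakening the assumption on the input widens the circumstances in which the definition applies; a tiny Haskell analogy (the names are mine):

```haskell
-- Strong assumption: the inputs are Ints. The "theorem" (definition)
-- applies only to lists of Int.
totalInts :: [Int] -> Int
totalInts = foldr (+) 0

-- Weaker assumption: any Num instance. Same derivation, strictly
-- more applicable - to Int, Integer, Double, Rational, ...
totalNums :: Num a => [a] -> a
totalNums = foldr (+) 0
```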
I agree with this statement - and yet you did not contradict my statement that second order logic is also not part of mainstream mathematics.
A topologist might care about manifolds or homeomorphisms - they do not care about foundations of mathematics - and it is not the case that only one foundation of mathematics can support topology. The weaker foundation is preferable.
Second-order logic is not part of standard, mainstream mathematics. It is part of a field that you might call mathematical logic or "foundations of mathematics". Foundations of a building are relevant to the strength of a building, so the name implies that foundations of mathematics are relevant to the strength of mainstream mathematics. A more accurate analogy would be the relationship between physics and philosophy of physics - discoveries in epistemology and philosophy of science are more often driven by physics than the other way around, and ...
My understanding is that the essay's effect is via the horror a reader feels at the alternate world presented in the essay. It opens the reader's eyes somewhat to the degree that sexism is embedded in everyday grammar and idiom. It is not, as I understand it, a persuasive essay in the usual sense.
Please elaborate.
I agree that if you don't look at the numbers, but at the surrounding text, you get the sense that the numbers could be paraphrased in that way.
So does h, labeled "I hear universe" mean "I hear the universe tell me something at all", or "I hear the universe tell me that they love me" or "I hear the universe tell me what it knows, which (tacitly according to the meaning of knows) is accurate"?
I thought it meant "I have a sensation as if the universe were telling me that they love me", but the highest probabi...
Twice in this article, there are tables of numbers. They're clearly made-up, not measured from experiment, but I don't really understand exactly how made-up they are - are they carefully or casually cooked?
Could people instead use letters (variables), with relations like 'a > b', 'a >> b', 'a/b > c' and so on in the future? Then I could understand better what properties of the table are intentional.
I think it would be valuable if someone pointed out that a third party watching, without controlling, a scientist's controlled study is in pretty much the same situation as the three-column exercise/weight/internet use situation - they have instead exercise/weight/control group.
This "observe the results of a scientist's controlled study" thought experiment motivates and provides hope that one can sometimes derive causation from observation, where the current story arc makes a sortof magical leap.
There are some aspects of maps - for example, edges, blank spots, and so on - that seem, if not necessary, extremely convenient to keep as part of the map. However, if you use these features of a map in the same way that you use most features of a map - to guide your actions - then you will not be guided well. There's something in the sequences like "the world is not mysterious" about people falling into the error of moving from blank/cloudy spots on the map to "inherently blank/cloudy" parts of the world.
The slogan "the map is not ...
You might enjoy Crutchfield's epsilon machines, and Shalizi's CSSR algorithm for learning them:
http://masi.cscs.lsa.umich.edu/~crshalizi/notabene/computational-mechanics.html
There are cognitive strategies that (heuristically) take advantage of the usually-persistent world. Should I be embarrassed, after working and practicing with pencil and paper to solve arithmetic problems, that I do something stupid when someone changes the properties of pencil and paper from persistent to volatile?
What I'd like to see is more aboveboard stuff. Suppose that you notify someone that you're showing them possibly-altered versions of their responses. Can we identify which things were changed when explicitly alerted? Do we still confabulate (probably)? Are the questions that we still confabulate on questions that we're more uncertain about - more ambiguous wording, more judgement required?
Yes, I am (and Stross is) taking auditors, internal and external, as a model. Why do you comment specifically on internal auditors?
There's a lot of similarity between the statistical tests that a scientist does and the statistical tests that auditors do. The scientist is interested in testing that the effect is real, and the auditor is testing that the company really is making that much money, and that all its operations are getting aggregated correctly into the summary documents.
Charlie Stross has a character in his 'Rule 34', Dorothy Straight, who is an organization-auditor, auditing organizations for signs of antisocial behavior. As I understood it, she was asking whether the organi...
As I understand it, you're dividing the agent from the world; once you introduce a reward signal, you'll be able to call it reinforcement learning. However, until you introduce a reward signal, you're not doing specifically reinforcement learning - everything applies just as well to any other kind of agent, such as a classical planner.
The arrows all mean the same thing, which is roughly 'causes'.
Chess is a perfect-information game, so you could build the board entirely from the player's memory of the board, but in general, the state of the world at time t-1, together with the player, causes the state of the world at time t.
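A minimal way to write down that reading of the arrows, with types and names that are mine rather than the article's:

```haskell
-- The state of the world at time t is caused by the state at t-1
-- together with the player's (agent's) action.
newtype World  = World  String deriving Show
newtype Action = Action String

step :: World -> Action -> World
step (World w) (Action a) = World (w ++ " | " ++ a)

-- Folding step over a sequence of actions threads causation
-- forward through time: world_t = step world_(t-1) action_t.
history :: World -> [Action] -> World
history = foldl step
```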
It might be valuable to point out that nothing about this is reinforcement learning yet.
Those are interesting reviews but I didn't know they were speeches in SIAI's voice.
Thanks for posting this!
I am also grateful to Holden for provoking this - as far as I can tell, the only substantial public speech from SIAI on LessWrong. SIAI often seems to be far more concerned with internal projects than communicating with its supporters, such as most of us on LessWrong.
I don't think Strange7 is arguing Strange7's point strongly; let me attempt to strengthen it.
A button that does something dangerous, such as exploding bolts that separate one thing from another thing, might be protected from casual, accidental changes by covering it with a lid, so that when someone actually wants to explode those bolts, they first open the lid and then press the button. This increases reliability when any given hand motion has some chance of being an error but the errors of separate hand motions are independent. Similarly 'are you sure'...
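To spell out the reliability arithmetic under the independence assumption (the numbers are mine, purely illustrative):

```haskell
-- If each independent hand motion misfires with probability p,
-- an unguarded button misfires with probability p, while a
-- lid-then-button sequence misfires only when both motions do.
unguarded, guarded :: Double -> Double
unguarded p = p
guarded   p = p * p

-- guarded 0.01 is about 1e-4: two independent 1% errors compound
-- to roughly a 1-in-10,000 accidental firing.
```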
The distinction between hardwiring and softwiring is, above the most physical, electronic aspects of computer design, a matter of policy - something in the programmer's mind and habits, not something out in the world that the programmer is manipulating. From any particular version of the software's perspective, all of the program it is running is equally hard (or equally soft).
It may not be impossible to handicap an entity in some way analogous to your suggestion, but holding fiercely to the concept of hardwiring will not help you find it. Thinking abo...
The thing that is most like an agent in the Tool AI scenario is not the computer and the software it is running. The agent is the combination of the human (which is of course very much like an agent) together with the computer-and-software that constitutes the tool. Holden's argument is that this combination agent is safer somehow. (Perhaps it is more familiar; we can judge the intention of the human component from facial expressions, for example.)
The claim that Tool AI is an obvious answer to the Friendly AI problem is a paper tiger that Eliezer demolished. H...
Minor text correction:
"dedicated committee of human-level AIs dedicated" repeats the same adjective in a small span.
More wide-ranging:
Perhaps the paper would be stronger if it explained why philosophers might feel that convergence is probable. For example, in their experience, human philosophers / philosophies converge.
In a society, where the members are similar to one another, and much less powerful than the society as a whole, the morality endorsed by the society might be based on the memes that can spread successfully. That is, a meme like '...
There was an incident of censorship by EY relating to acausal trading - the community's confused response (chilling effects? agreement?) to that incident explains why there is no overall account.
There's two uses of 'utility function'. One is analogous to Daniel Dennett's "intentional stance" in that you can choose to interpret an entity as having a utility function - this is always possible but not necessarily a perspicuous way of understanding an entity - because you might end up with utility functions like "enjoys running in circles but is equally happy being prevented from running in circles".
The second form is as an explicit component within an AI design. Tool-AIs do not contain such a component - they might have a relevance or accuracy function for evaluating answers, but it's not a utility function over the world.
You're right, it's infeasible to care about individual memes (or for that matter, the vast majority of individual animals) the way we care about other humans. I don't have an answer to your question, I'm trying to break a logjam of humancentric ethical thinking.
Forgive me for passing on my confusion here, but I'm not certain that consciousness/sentience is anything more than 'recognizably human'. You and I have a common brain architecture, and one of our faculties is picking that out from trees and rocks. Perhaps there are plenty of evolved, competent alie...
No. This is regarding the 'possibly irrelevant rant' which I marked explicitly as a 'possibly irrelevant rant'. The concepts in the rant seemed nearby and inspirational to the main article in my mind when I wrote it, which is why I included it, but I cannot articulate a direct connection.
Analogous in that people once discriminated against other races, other sexes, but over time with better ethical arguments, we decided it was better to treat other races, other sexes as worthy members of the "circle of compassion". I predict that if and when we interact with another species with fairly similar might (for example if and when humans speciate) then humancentrism will be considered as terrible as racism or sexism is now.
Moral realism (if I understand it correctly) is the position that moral truths like 'eating babies is wrong' are o...
If humans are bad at mental arithmetic, but good at, say, not dying - doesn't that suggest that, as a practical matter, humans should try to rephrase mathematical questions into questions about danger?
E.g., imagine stepping into a field crisscrossed by dangerous laser beams in a prime-numbers pattern to get something valuable. I think someone who had a realistic fear of the laser beams, and a realistic understanding of the benefit of that valuable thing, would slow down and/or stop stepping into suspicious spots.
Quantifying is ONE technique, and it's bee...