I'm going to go out on a limb and state that the chosen example of "middle school students should wear uniforms" fails the prerequisite of "Confidence in the existence of objective truth", as do many (most?) "should" statements.
I strongly believe that there is no objectively true answer to the statement "middle school students should wear uniforms", as its truth depends not so much on one's understanding of the world or one's opinion about student uniforms as on the interpretation of what the "should" m...
I think you're basically making correct points, but that your conclusion doesn't really follow from them.
Remember that double crux isn't meant to be a "laboratory technique" that only works under perfect conditions—it's meant to work in the wild, and has to accommodate the way real humans actually talk, think, and behave.
You're completely correct to point out that "middle school students should wear uniforms" isn't a well-defined question yet, and that someone wanting to look closely at it and double crux about it would need to boil dow...
The most important decisions come before starting a war, and there the mistakes have very different costs. Overestimating your enemy results in peace (or cold war), which basically means that you just lose out on some opportunistic conquests; underestimating your enemy results in a bloody, unexpectedly long war that can disrupt you for a decade or more - there are many good examples of that in 20th-century history.
"Are we political allies, or enemies?" is rather orthogonal to that - your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.
For example, a powerful and popular extreme radical member of the "opposite" camp who holds conclusions you disagree with, uses methods you disagree with, and is generally toxic and spewing hate - that's often a prime example of your political ally, whose actions incite the moderate members of society to start supporting you and focusi...
your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.
! That's not how other humans interpret "alliance," and using language like that is a recipe for social disaster. This is a description of convenience. Allies are people that you will sacrifice for and they will sacrifice for you. The NAACP may benefit from the existence of Stormfront, but imagine the fallout from a fundraising letter that called them the NAACP's allies!
Whether or not someone is an ally or an enem...
The difference is that there are many actions that help other people but don't give an appropriate altruistic high (because your brain doesn't see or relate to those people much) and there are actions that produce a net zero or net negative effect but do produce an altruistic high.
The body's built-in care-o-meter has known faults and biases, and what it measures is often related to actually caring about other people (at least in the classic hunter-gatherer society model), but is generally different from it.
An interesting followup that came to mind regarding your example of an oiled bird deserving 3 minutes of care:
Let's assume that there are 150 million suffering people right now, which is a completely wrong random number but a somewhat reasonable order-of-magnitude assumption. A quick calculation estimates that if I dedicate every single waking moment of my remaining life to caring about them and fixing the situation, then I've got a total of about 15 million care-minutes.
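A minimal sketch of that arithmetic, with made-up round numbers for the remaining lifespan and waking hours (only the orders of magnitude matter):

```python
# Back-of-the-envelope care-minutes budget; all inputs are assumptions.
remaining_years = 50            # assumed remaining lifespan
waking_hours_per_day = 16       # assumed waking time per day
suffering_people = 150_000_000  # the made-up number from above

care_minutes = remaining_years * 365 * waking_hours_per_day * 60
per_person = care_minutes / suffering_people

print(f"{care_minutes:,} care-minutes total")        # ~17.5 million
print(f"{per_person * 60:.0f} seconds per person")   # ~7 seconds each
```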
According to even the best possible care-o-meter that I could have, all the problems in th...
I'd read it as an acknowledgement that any intelligence has a cost, and if your food is passive instead of antagonistic, then it's inefficient (and thus very unlikely) to put such resources into outsmarting it.
If an animal-complexity CNS is your criterion, then humans + octopuses would be a counterexample, as the urbilaterian wouldn't be expected to have such a system, and octopus intelligence evolved separately.
A gold-ingot-manufacturing-maximizer can easily manufacture more gold than exists in its star system by using arbitrary amounts of energy to create gold, starting with simple nuclear reactions to transmute bismuth or lead into gold and ending with a direct energy-to-matter-to-gold-ingots process.
Furthermore, if you plan to send copies-of-you to N other systems to manufacture gold ingots there, then as long as there is free energy, you can send N+1 copies-of-you. A gold ingot manufacturing rate that grows in proportion to time^(n+1) is much faster than one that grows as time^n, ...
Dolphins are able to herd schools of fish, cooperating to keep a 'ball' of fish together for a long time while feeding from it.
However, taming and sustained breeding is a long way from herding behavior - it requires long-term planning over multi-year time periods, and I'm not sure whether that has been observed in dolphins.
The income question needs to be explicit about whether it's pre-tax or post-tax, since the difference is huge and the "default" differs between cultures: in some places "I earn X" means pre-tax, and in some places it means post-tax.
Actually "could he, in principle, have made place for such possibilities in advance?" is very, very excellent question.
We can allocate for such possibilities in advance. For example, we can use a simple statistical model of the limitations of our own understanding of reality - I have a certain number of years of experience in making judgements and assumptions about reality; I know that I don't consider all possible explanations, and I can estimate that in x% of cases the 'true' explanation was one that I hadn't considered. So I can make a 'belief budget'...
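A minimal sketch of what such a 'belief budget' could look like; the 10% reserve and the hypothesis names are purely illustrative assumptions:

```python
def with_belief_budget(hypotheses, unconsidered_share=0.10):
    """Rescale the probabilities of the explanations I did think of so that
    a fixed share of probability mass stays reserved for explanations
    I never considered at all."""
    scale = (1.0 - unconsidered_share) / sum(hypotheses.values())
    budgeted = {name: p * scale for name, p in hypotheses.items()}
    budgeted["something-not-considered"] = unconsidered_share
    return budgeted

# Example: I thought of two explanations and split my credence 70/30.
print(with_belief_budget({"explanation A": 0.7, "explanation B": 0.3}))
# roughly {'explanation A': 0.63, 'explanation B': 0.27, 'something-not-considered': 0.1}
```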
Well, but you can (a) perform moderately extensive testing, and (b) build in redundancy.
If you write 3 programs for verifying primality (using different algorithms and possibly different programming languages/approaches), and all their results match, then you can have much higher confidence in correctness than with a single such program.
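A minimal sketch of that kind of redundancy, using two deliberately different algorithms (trial division and a probabilistic Fermat test) and trusting the verdict only when they agree:

```python
import random

def is_prime_trial_division(n):
    """Straightforward trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_prime_fermat(n, rounds=20):
    """Probabilistic Fermat test: a different algorithm, so its failure
    modes should not coincide with trial division's."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False
    return True

def check_prime_redundantly(n):
    answers = {is_prime_trial_division(n), is_prime_fermat(n)}
    if len(answers) > 1:
        raise RuntimeError(f"implementations disagree on {n}; trust neither")
    return answers.pop()

print(check_prime_redundantly(53))  # True
print(check_prime_redundantly(51))  # False (51 = 3 * 17)
```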
There's the classic economics-textbook example of two hot-dog vendors on a beach who need to choose their locations - assuming an even distribution of customers, and that customers always choose the closest vendor, the equilibrium is both of them standing right next to each other in the middle, while the "optimal" locations (from the customers' view, minimizing distance) would be at the 25% and 75% marks.
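A minimal simulation sketch of that equilibrium (positions on a [0, 1] beach; each vendor in turn moves next to the other on the busier side, an epsilon-step approximation of the best response):

```python
def best_response(opponent, epsilon=0.01):
    """Stand just beside the opponent on the side holding more customers,
    never stepping past the middle of the beach."""
    if opponent < 0.5:
        return min(opponent + epsilon, 0.5)
    if opponent > 0.5:
        return max(opponent - epsilon, 0.5)
    return 0.5

a, b = 0.25, 0.75            # start at the customer-friendly "optimal" spots
for _ in range(200):
    a = best_response(b)     # vendors adjust in turns
    b = best_response(a)

print(round(a, 3), round(b, 3))  # 0.5 0.5 -- both end up back to back in the middle
```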
This matches the median voter principle - the optimal behavior of candidates is to be as close as possible to the median but on the "right side"...
Life makes so much more sense now.
Seriously, I always wondered why I always see a Walgreens and a CVS across the street from each other. Or why I see the same with two competing chains of video stores (not that I see video stores much anymore, in this age of Netflix.)
"Tell the AI in English" is in essence an utility function "Maximize the value of X, where X is my current opinion of what some english text Y means".
The 'understanding English' module - the mapping function between X and "what you said in English" - is completely arbitrary, but it is very important to the AI, so any self-modifying AI will want to modify and improve it. Also, we don't have a good "understanding English" module, so yes, we also want the AI to be able to modify and improve that. But it can be wildly dif...
If you model X as a "rude person", then you expect him to be rude with a high[er than average] probability, period.
However, if you model X as an agent who believes that rudeness is appropriate in common situations A, B, C, then you expect that he might behave less rudely (a) if he perceives that this instance of a common 'rude' situation is nuanced and that rudeness is not appropriate there; or (b) if he can be convinced that rudeness in situations like that is contrary to his goals, whatever those may be.
In essence, it's simpler and fa...
My [unverified] intuition on AI properties is that the delta between current status and 'IQ60AI' is multiple orders of magnitude larger than the delta between 'IQ60AI' and 'IQ180AI'. In essence, there is not that much "mental horsepower" difference between the stereotypical Einstein and a below-average person; it doesn't require a much larger brain or completely different neuronal wiring or a million years of evolutionary tuning.
We don't know how to get to IQ60AI; but getting from IQ60AI to IQ180AI could (IMHO) be done with currently known method...
It's quite likely that the optimal behaviour should differ in the case where the program is functionally equivalent but not exactly identical.
If you're playing yourself, then you want to cooperate.
If you're playing someone else, then you'd want to cooperate if and only if that someone else is smart enough to check whether you'll cooperate; but if its decision doesn't depend on yours, then you should defect.
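A toy sketch of that rule, sidestepping the self-reference issues of real program equilibrium; the opponent is modelled (purely as an assumption for illustration) as a function from my move to its move, so "its decision depends on mine" can be checked by probing it with both moves:

```python
import inspect

def decide(opponent, me):
    """Cooperate when literally playing an exact copy of myself; otherwise
    cooperate only if the opponent's choice actually varies with what I do."""
    if inspect.getsource(opponent) == inspect.getsource(me):
        return "cooperate"                      # playing yourself
    if opponent("cooperate") != opponent("defect"):
        return "cooperate"                      # its choice tracks mine
    return "defect"                             # it ignores me entirely

def copycat(their_move):      # plays whatever it predicts the other will play
    return their_move

def rock(their_move):         # always defects, regardless of the other
    return "defect"

print(decide(copycat, copycat))  # cooperate -- an exact copy
print(decide(rock, copycat))     # defect    -- rock's choice ignores mine
```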
I see MOOCs as a big educational improvement because of this - sure, I could get the same educational content without the MOOC structure, just by reading the field's best textbooks and academic papers; but having a specific "course" with quizzes/homework makes me actually do the exercises, which I wouldn't have done otherwise, and the course schedule forces me to do them now, instead of postponing them for weeks/months/forever.
I feel confused. "a space I can measure distances in" is a strong property of a value, and it does not follow from your initial 5 axioms, and seems contrary to the 5th axiom.
In fact, your own examples given further on seem to provide a counterexample - i.e., if someone prefers being a whale to 400 actual orgasms, but prefers 1/400 of being a whale to 1 orgasm, then both "being a whale" and "orgasm" have some utility value, but they cannot be used as units to measure distance.
If you're in a reality where a>b and 2a<2b, then you're not allowed to use classic arithmetic simply because some of your items look like numbers, since they don't behave like numbers.
OK, for a slightly clearer example: in the USA abortion debate, the pro-life "camp" definitely considers pro-life to be moral and wants it to apply to everyone; and the pro-choice "camp" definitely considers pro-choice to be moral and wants it to apply to everyone.
This is not a symbolic point, it is a moral question that defines literally life-and-death decisions.
That's not sufficient - there can be wildly different, incompatible universalizable morality systems based on different premises and axioms; and each could reasonably claim that it is the true morality and the other is a tribal shibboleth.
As an example (but there are others), many of the major religious traditions would definitely claim to be universalizable systems of morality; and they contradict each other on some points.
What is the difference between "self-serving ideas" as you describe them, "tribal shibboleths", and "true morality"?
What if "Peterdjones-true-morality" is "PeterisP-tribal-shibboleth", and "Peterdjones-tribal-shibboleth" is "PeterisP-true-morality"?
I'm afraid that any nontrivial metaethics cannot result in concrete universal ethics - that the context would still be individual and the resulting "how RichardKennaway should live" ethics wouldn't exactly equal "how PeterisP should live".
The difference would hopefully be much smaller than the difference between "how RichardKennaway should live RichardKennaway-justly" and "How Clippy should maximize paperclips", but still.
Another situation that has some parallels and may be relevant to the discussion:
Helping starving kids is Good - that's well understood. However, my upbringing and current gut feeling say that this is not unconditional. In particular, feeding starving kids is Good if you can afford it; but feeding other starving kids if that causes your own kids to starve is not good, and would be considered evil and socially unacceptable. I.e., the goodness of resource redistribution should depend on resource scarcity, and hurting your in-group is forbidden even w...
That is so - though it depends on the actual chances; a "much higher chance of survival" is different from a "higher chance of survival".
But my point is that:
a) I might [currently thinking] rationally desire that all of my in-group would adopt such a belief mode - I would have higher chances of survival if those close to me prefer me to a random stranger. And "belief-sets that we want our neighbors to have" are correlated with what we define as "good".
b) As far as I understand, homo sapiens do generally actually have ...
OK, then I feel confused.
Regarding " if I have to choose wether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater" - I was under impression that this would be a common trait shared by [nearly] all homo sapiens. Is it not so and is generally considered sociopathic/evil ?
No, I'm not arguing that this is a bias to overcome - if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater.
I'm arguing that this is a strong counterexample to the assumption that all entities may be treated as equals in calculating "value of entity_X's suffering to me". They are clearly not equal, they differ by order(s) of magnitude.
"general value of entity_X's suffering" is a different, not identical measurement - but when ma...
What should the objective grounds for such a multiplier be? Not all suffering is valued equally. Excluding self-suffering (which is subjectively so very different) from the discussion, I would value the suffering of my child as more important than the suffering of your child. And vice versa.
So, for any valuation that would make sense to me (so that I would actually use that method to make decisions), there should be some difference between multipliers for various beings - if the average homo sapiens would be evaluated with a coefficient of 1, then some people...
The examples of corvids designing and making specialized tools after observing what they would need to solve specific problems (the placement of an otherwise inaccessible treat) seem to demonstrate such chains of thought.
Why not?
Of course, the best proportion would be 100% of people telling me that p(the_warming)=85%; but if we limit the outside opinions to simple yes/no statements, then having 85% telling 'yes' and 15% telling 'no' seems to be far more informative than 100% of people telling 'yes' - as that would lead me to very wrongly assume that p(the_warming) is the same as p(2+2=4).
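A minimal sketch of why the split is more informative, treating each yes/no answer as an independent report that is 'yes' with probability p and looking at the resulting estimate of p (uniform prior, Beta-posterior mean; the 100-person sample size is assumed for illustration):

```python
def posterior_mean(yes, no):
    """Mean of the Beta(yes + 1, no + 1) posterior over p, i.e. the estimate
    of p after seeing these answers, starting from a uniform prior."""
    return (yes + 1) / (yes + no + 2)

print(posterior_mean(85, 15))   # ~0.84 -- close to the actual p = 0.85
print(posterior_mean(100, 0))   # ~0.99 -- wrongly suggests near-certainty
```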
The participants don't know the rules, and have been given a hint that they don't know the rules - the host said that the choices would be independent/hidden, but then tells you the other contestant's choice. So they can easily assume there is a chance that the host is lying, or that he might then give the first contestant a chance to switch his choice, etc.
Actually, how should one measure one's own IQ? I wouldn't know a reasonable place to start looking, as the internet is full of advertising for IQ measurements, i.e., lots of intentional misinformation. Especially avoiding anything restricted to a single location like the USA - that makes SATs useless, well, at least for me.
Your interlocutor clearly wouldn't be behaving nicely and would clearly be pushing for some confrontation - but does that mean it is wrong or not allowed? This feels the same as if (s)he simply and directly called you a jackass to your face - it is an insult and potentially hostile, but it's clearly legal and 'allowed'; there are often quite understandable, valid reasons to (re)act in such a way against someone, and it wouldn't really be an excuse in a murder trial (and the original problem does involve murders as a reaction to perceived insults).
All of the above days seem quite fun and fine to me.
As for the original article's point - I agree that there isn't any significant difference between the hypothetical British salmon case and Mohammad's case, but this fact doesn't change anything. There isn't a right to never be offended. There is no duty to abstain from offending others. It's nice if others are nice, but you can't demand that everybody be nice - most of them will be indifferent, and some will be not nice, and you just have to live with it and deal with it without using violence - and if yo...
If I understand your 'problem' correctly - estimating potential ally capabilities and being right/wrong about that (say, when considering teammates/guildmates/raid members/whatever) - then it's not nearly a game-specific concept: it applies to any partner selection without perfect information, like mating or job interviews. As long as there is a large enough pool of potential partners, and you don't need all of the 'good' ones, then false negatives don't really matter as much as the speed or ease of the selection process and the cost of false positives, ...
As the saying goes, you can ignore politics but it doesn't mean that politics will ignore you.
It is instrumentally rational to be aware of political methodologies, both in the sense that they will interact with many issues in your daily life, and in the sense of how you may improve the chances of success of any goals that require interaction or cooperation with others.
It goes from the reasons for systems thinking through the theoretical foundations, the maths used, and the practical applications, covering pretty much all common types of issues seen in the real world.
It's about 5 times the volume (~1000 A4 pages) of Meadows' "Thinking in Systems", so not exactly textbook format, but it covers the same material quite well, and more. It does, though, spend much of the second half of the book focusing almost exclusively on the practical development of system dynamics models.
The saying actually goes 'jack of all trades and a master of none, though oft better than a master of one'.
There are quite a few insights and improvements that are obvious with cross-domain expertise, and many of the new developments nowadays are essentially mergers of two or more knowledge domains - bioinformatics being one example, but far from the only one. Computational linguistics, for example - there are quite a few treatises on semantics written by linguists that would be insightful and new for computer science people handling non-linguistic knowledge/semantics projects.
I haven't read the books you mention, but it seems that Sterman's 'Business Dynamics: Systems Thinking and Modeling for a Complex World' covers mostly the same topics, and it felt really well written; I'd recommend that one as an option as well.
In that sense, it's still futile. The whole reason for the discussion is that an AI doesn't really need the permission or consent of anyone; the expected result is that an AI - either friendly or unfriendly - will have the ability to enforce the goals of its design. Political reasons will be easily satisfied by a project that claims to attempt CEV/democracy but skips it in practice, as afterwards the political reasons will cease to have power.
Also, a 'constitution' matters only if it is within the goal system of a Friendly AI, otherwise it's not worth the paper it's written on.
I'm still up in the air regarding Eliezer's arguments about CEV.
I have all kinds of ugh-factors coming to mind about not-good, or at least not-'PeterisP-good', issues that an aggregate of 6 billion hairless-ape opinions would contain.
The 'Extrapolated' part is supposed to solve that; but in that sense I'd say it turns the whole problem from knowledge extraction into extrapolation. In my opinion, the difference between the volition of Random Joe and the volition of Random Mohammad (forgive me for stereotyping for the sake of a short example) i...
To put it in very simple terms - if you're interested in training an AI according to technique X because you think that X is the best way, then you design or adapt the AI's structure so that technique X is applicable. Saying 'some AIs may not respond to X' is moot, unless you're talking about trying to influence (hack?) an AI designed and controlled by someone else.
I've worn full-weight chain and plate reconstruction pieces while running around for a full day, and I'm not physically fit at all - I'd say a random geeky 12-year-old boy would easily be able to wear an armor suit. The main wizard-combat problems would be getting winded very, very quickly when running (so they couldn't rush the way Draco's troops did), and slightly slowed-down arm movement, which might hinder combat spellcasting. It is not said how long the battles are - if they are less than an hour, there shouldn't be any serious hindrances; if longer, then the boys would probably want to sit down and rest occasionally, or use some magic to lighten the load.
hypothesis—that it is really hard to over-ride the immediate discomfort of an unpleasant decision—is to look at whether aversions of comparable or greater magnitude are hard to override. I think the answer in general is 'no.' Consider going swimming and having to overcome the pain of entering water colder than surrounding. This pain, less momentary than the one in question and (more or less) equally discounted, doesn't produce problematic hesitation.
I can't agree with you - it most definitely does produce problematic hesitation. If you're bringing up this example, then I'd say it is evidence that the general answer is 'yes', at least for a certain subpopulation of homo sapiens.
Sorry for intruding on a very old post, but checking people's 'random' integers modulo 2 is worse than flipping a coin - when asked for a random number, people tend to choose odd numbers more often than even numbers, and prime numbers more often than non-prime numbers.
Then it should be rephrased as 'We should seek a model of reality that is accurate even at the expense of flattery.'
Ambiguous phrasings facilitate only confusion.
I'm not an expert on the relevant US legislative acts, but this is the legal definition in local laws here, and I expect that the term 'espionage' was defined a few centuries ago and would be mostly consistent throughout the world.
A quick look at current US laws (http://www.law.cornell.edu/uscode/18/usc_sec_18_00000793----000-.html) does indicate that there is a penalty for such actions with 'intent or reason to believe ... for the injury of United States or advantage of any foreign nation' - so simply acting to intentionally harm US would be punishable as...
Spies by definition are agents of foreign powers acting on your soil without proper registration - unlike, say, the many representatives in embassies who have registered as agents of their country and are allowed to operate on its behalf until/if expelled.
Since Assange (IIRC) was not in the USA while the communiques were leaked, and it is not even claimed that he is an agent of some other power, there was no act of espionage. It might be called espionage if and only if Manning was acting on behalf of some power - and even then, Manning would be the 'spy', not Assange.
I perceive that the intention of the original assertion is that even in this case you would still fail in making 10,000 independent statements of that sort - i.e., in trying to do it, you are quite likely to somehow make a mistake at least once, say, by a typo, a slip of the tongue, accidentally omitting a 'not', or whatever. All it takes to fail on a statement like '53 is prime' is for you to not notice that it actually says '51 is prime', or to make some mistake when dividing.
Any random statement of yours has a 'ceiling' of x-nines accuracy....
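A minimal back-of-the-envelope sketch of why that ceiling bites, with an assumed (purely illustrative) per-statement slip rate of 1 in 10,000:

```python
per_statement_error = 1e-4   # assumed: one typo/slip per 10,000 statements
statements = 10_000

p_all_correct = (1 - per_statement_error) ** statements
print(f"{p_all_correct:.2f}")  # ~0.37 -- more likely than not, you slip at least once
```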
An RSS feed for new posts is highly desirable - I don't generally go to websites "polling" for new information that may or may not be there (unless, e.g., I'm returning to a discussion I had yesterday), so a "push" mechanism such as RSS is essential to me.