All of PeterisP's Comments + Replies

An RSS feed for new posts is highly desirable - I don't generally go to websites to "poll" for new information that may or may not be there (unless, e.g., I'm returning to a discussion I had yesterday), so a "push" mechanism such as RSS is essential to me.

1Brendan Long
There's some sort of top-level RSS feed at https://www.lesserwrong.com/feed.xml - I don't know if there's any way to subscribe to individual people/sections.
2Chris_Leong
I believe that this is already on the roadmap.

I'm going to go out on a limb and state that the chosen example of "middle school students should wear uniforms" fails the prerequisite of "Confidence in the existence of objective truth", as do many (most?) "should" statements.

I strongly believe that there is no objectively true answer to the statement "middle school students should wear uniforms", as the truth of that statement depends mostly not on one's understanding of the world or one's opinion about student uniforms, but on the interpretation of what the "should" m... (read more)

I think you're basically making correct points, but that your conclusion doesn't really follow from them.

Remember that double crux isn't meant to be a "laboratory technique" that only works under perfect conditions—it's meant to work in the wild, and has to accommodate the way real humans actually talk, think, and behave.

You're completely correct to point out that "middle school students should wear uniforms" isn't a well-defined question yet, and that someone wanting to look closely at it and double crux about it would need to boil dow... (read more)

The most important decisions are made before starting a war, and there the mistakes have very different costs. Overestimating your enemy results in peace (or cold war), which basically means that you just lose out on some opportunistic conquests; underestimating your enemy results in a bloody, unexpectedly long war that can disrupt you for a decade or more - there are many nice examples of that in 20th-century history.

0ChristianKl
Generals are not the people who decide whether or not a war gets fought; they are the people who decide over individual battles.
2fortyeridania
Peace and cold war are not the only possible outcomes. Surrender is another. An example is the conquest of the Aztecs by Cortez, discussed here, here, and here. Surrender can (but need not) have disastrous consequences too.

"Are we political allies, or enemies?" is rather orthogonal to that - your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.

For example, consider a powerful and popular extreme radical member of the "opposite" camp who holds conclusions that you disagree with, uses methods you disagree with, and is generally toxic and spewing hate - that's often a prime example of a political ally, one whose actions incite the moderate members of society to start supporting you and focusi... (read more)

5Epictetus
Allies are those who agree to cooperate with you. An alliance may be temporary, limited in scope, and subject to conditions, but in the end it's all about cooperation. A stupid enemy who makes mistakes certainly benefits your cause and is a useful tool, but he's no ally.
Vaniver140

your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.

! That's not how other humans interpret "alliance," and using language like that is a recipe for social disaster. This is a description of convenience. Allies are people that you will sacrifice for and they will sacrifice for you. The NAACP may benefit from the existence of Stormfront, but imagine the fallout from a fundraising letter that called them the NAACP's allies!

Whether or not someone is an ally or an enem... (read more)

PeterisP100

The difference is that there are many actions that help other people but don't give an appropriate altruistic high (because your brain doesn't see or relate to those people much) and there are actions that produce a net zero or net negative effect but do produce an altruistic high.

The built-in care-o-meter of your body has known faults and biases, and it measures something that is often correlated with actually caring about other people (at least in the classic hunter-gatherer society model) but is generally different from it.

An interesting followup that came to mind to your example of an oiled bird deserving 3 minutes of care:

Let's assume that there are 150 million suffering people right now - a made-up number, but a somewhat reasonable order-of-magnitude assumption. A quick calculation shows that if I dedicated every single waking moment of my remaining life to caring about them and fixing the situation, I'd have a total of about 15 million care-minutes.

According to even the best possible care-o-meter that I could have, all the problems in th... (read more)
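
A quick sanity check of the care-minute arithmetic above, as a sketch - the waking-hours and remaining-lifespan figures are my own assumptions, not from the comment:

```python
waking_minutes_per_day = 16 * 60             # assumed 16 waking hours per day
years_left = 45                              # assumed remaining lifespan
care_minutes = waking_minutes_per_day * 365 * years_left
print(care_minutes)                          # ~15.8 million care-minutes

suffering_people = 150_000_000               # the comment's order-of-magnitude figure
print(care_minutes / suffering_people * 60)  # ~6 seconds of care per person
```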

I'd read it as an acknowledgement that any intelligence has a cost, and if your food is passive instead of antagonistic, then it's inefficient (and thus very unlikely) to put such resources into outsmarting it.

If an animal-complexity CNS is your criterion, then humans + octopuses would be a counterexample, as urbilaterians wouldn't be expected to have such a system, and octopus intelligence evolved separately.

6Nornagest
The last common ancestor of humans and octopuses probably didn't have a very complicated nervous system, but it probably did have a nervous system: most likely a simple lateral cord with ganglia, like some modern wormlike animals. That seems to meet the criteria for shminux's "dedicated organism-wide communication subsystem".

A gold-ingot-manufacturing-maximizer can easily manufacture more gold than exists in its star system by using arbitrary amounts of energy to create gold, starting with simple nuclear reactions to transmute bismuth or lead into gold and ending with a direct energy-to-matter-to-gold-ingots process.

Furthermore, if you plan to send copies-of-you to N other systems to manufacture gold ingots there, as long as there is free energy, you can send N+1 copies-of-you. A gold ingot manufacturing rate that grows proportionally to time^(n+1) is much faster than time^n, ... (read more)

2pinyaka
Are you saying that because gold can be produced, energy is always going to be the limiting factor in goal maximization? That was an example, not a proof. The point was that unless energy was the limiting factor in meeting a goal, you wouldn't expect an arbitrarily intelligent AI to try to scrape up all available energy. Earlier tonight, I had a goal of obtaining a sandwich. There is no way of obtaining a sandwich that involves harnessing all the free energy of our sun or expanding into other solar systems that would be more efficient than simply going to a sandwich shop and buying one; thus any arbitrarily intelligent AI would not do those things if it took on the efficient obtainment of my sandwich as a goal. Again, this is just an example meant to show that the mere existence of AI does not necessarily require an AI to "turn off stars", as James_Miller was saying you'd expect to see "for almost any goal or utility function than an AI had."

Dolphins are able to herd schools of fish, cooperating to keep a 'ball' of fish together for a long time while feeding from it.

However, taming and sustained breeding are a long way from herding behavior - they require long-term planning over multi-year periods, and I'm not sure that has been observed in dolphins.

PeterisP100

The income question needs to be explicit about whether it's pre-tax or post-tax, since the difference is huge and the "default measurement" differs between cultures: in some places "I earn X" means pre-tax, and in some places it means post-tax.

1eurg
Also, in many European countries it means "pre- and post- some different taxes", because one part is paid by the employer and the other by the employee. Populism, Politics and Economics. Good results guaranteed.

Actually "could he, in principle, have made place for such possibilities in advance?" is very, very excellent question.

We can allocate for such possibilities in advance. For example, we can use a simple statistical model for the limitations of our own understanding of reality - I have a certain number of years' experience in making judgements and assumptions about reality; I know that I don't consider all possible explanations, and I can estimate that in x% of cases the 'true' explanation was one that I hadn't considered. So I can make a 'belief budget'... (read more)

Well, but you can (a) perform moderately extensive testing, and (b) use redundancy.

If you write 3 programs for verifying primality (using different algorithms and possibly different programming languages/approaches), and all their results match, then you can have much higher confidence in correctness than with a single such program.
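
A minimal sketch of that redundancy idea (an illustration only; the specific algorithms are my choices, and real independence would also mean different authors, languages, and machines): three separately written primality checks, with the answer trusted only when all three agree.

```python
def is_prime_trial(n):
    # Check 1: plain trial division up to sqrt(n).
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_prime_6k(n):
    # Check 2: trial division using the 6k +/- 1 optimization.
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    k = 5
    while k * k <= n:
        if n % k == 0 or n % (k + 2) == 0:
            return False
        k += 6
    return True

def is_prime_miller_rabin(n):
    # Check 3: Miller-Rabin, deterministic for n < 3,215,031,751 with bases 2, 3, 5, 7.
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in (2, 3, 5, 7):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def check(n):
    votes = [is_prime_trial(n), is_prime_6k(n), is_prime_miller_rabin(n)]
    return votes[0] if len(set(votes)) == 1 else "checks disagree - investigate"

print(check(53))   # True
print(check(51))   # False (51 = 3 * 17)
```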

0Luke_A_Somers
I think multiple programs wouldn't help very much unless they were run on different processors and written in different languages.
0dankane
Yes. I agree. I am certainly not trying to say that 99.99% confidence in the primality status of a four digit number is not achievable.
PeterisP640

There's the classic economics-textbook example of two hot-dog vendors on a beach who need to choose their locations. Assuming an even distribution of customers, and that customers always choose the closest vendor, the equilibrium has them standing right next to each other in the middle, while the "optimal" locations (from the customers' view, minimizing distance) would be at the 25% and 75% marks.
This matches the median voter principle - the optimal behavior of candidates is to be as close as possible to the median but on the "right side"... (read more)
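
A rough simulation of that textbook result, as a sketch under assumed parameters (101 evenly spaced customers, vendors alternately moving to their best response) rather than a proof:

```python
def share(a, b):
    # Customers sit at positions 0..100 and buy from the nearest vendor; ties are split.
    s = 0.0
    for x in range(101):
        if abs(x - a) < abs(x - b):
            s += 1
        elif abs(x - a) == abs(x - b):
            s += 0.5
    return s

def best_response(other):
    # The position that captures the most customers, given the other vendor's spot.
    return max(range(101), key=lambda pos: share(pos, other))

a, b = 25, 75              # start at the customer-friendly 25% and 75% marks
for _ in range(100):
    a = best_response(b)
    b = best_response(a)

print(a, b)                # 50 50: both vendors end up side by side in the middle
```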

1Gunnar_Zarncke
That's true only if the voters/buyers have exactly these choices and no others. But in general they have more: they also have Exit, Voice, and Loyalty available ( http://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty ). That is, customers can refuse to buy at all (Exit), and voters can protest instead of silently voting (Voice). Or they can support a side actively (Loyalty). Taking this into account changes the simple economic result to one overlaid with longer-term Exit/Voice trends.

Life makes so much more sense now.

Seriously, I always wondered why I always see a Walgreens and a CVS across the street from each other. Or why I see the same with two competing chains of video stores (not that I see video stores much anymore, in this age of Netflix.)

4Nomad
Gotta agree with that. I live about 5 minutes away from 3 different supermarkets within metres of each other.

"Tell the AI in English" is in essence an utility function "Maximize the value of X, where X is my current opinion of what some english text Y means".

The 'understanding English' module - the mapping function between X and "what you told it in English" - is completely arbitrary, but it is very important to the AI, so any self-modifying AI will want to modify and improve it. Also, we don't have a good "understanding English" module, so yes, we also want the AI to be able to modify and improve it. But it can be wildly dif... (read more)

5Jiro
By this reasoning, an AI asked to do anything at all would respond by immediately modifying itself to set its utility function to MAXINT. You don't need to speak to it in English for that--if you asked the AI to maximize paperclips, that is the equivalent of "Maximize the value of X, where X is my current opinion of how many paperclips there are", and it would modify its paperclip-counting module to always return MAXINT. You are correct that telling the AI to do Y is equivalent to "maximize the value of X, where X is my current opinion about Y". However, "current" really means "current", not "new". If the AI is actually trying to obey the command to do Y, it won't change its utility function unless having a new utility function will increase its utility according to its current utility function. Neither misunderstanding nor understanding will raise its utility unless its current utility function values having a utility function that misunderstands or understands.
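
A toy illustration of Jiro's point (my sketch, not from the thread): if candidate actions - including "rewrite my own utility function to report MAXINT" - are scored by the agent's current utility function, wireheading doesn't come out ahead.

```python
MAXINT = 2**31 - 1

def current_utility(world):
    # The agent's current utility function: it cares about actual paperclips.
    return world["paperclips"]

def predict(world, action):
    # Predicted state of the world after taking the action.
    w = dict(world)
    if action == "make_paperclip":
        w["paperclips"] += 1
    elif action == "wirehead":
        # The future self would *report* MAXINT, but no extra paperclips appear.
        w["reported_utility"] = MAXINT
    return w

world = {"paperclips": 0, "reported_utility": 0}
best = max(["make_paperclip", "wirehead"],
           key=lambda a: current_utility(predict(world, a)))
print(best)   # "make_paperclip": judged by the current utility, wireheading gains nothing
```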

If you model X as a "rude person", then you expect him to be rude in a high[er than average] proportion of cases, period.

However, if you model X as an agent who believes that rudeness is appropriate in common situations A, B, C, then you expect that he might behave less rudely (a) if he perceives that this instance of a common 'rude' situation is nuanced and that rudeness is not appropriate there; or (b) if he could be convinced that rudeness in situations like that is contrary to his goals, whatever those may be.

In essence, it's simpler and fa... (read more)

PeterisP100

My [unverified] intuition on AI properties is that the delta between current status and 'IQ60AI' is multiple orders of magnitude larger than the delta between 'IQ60AI' and 'IQ180AI'. In essence, there is not that much "mental horsepower" difference between the stereotypical Einstein and a below-average person; it doesn't require a much larger brain or completely different neuronal wiring or a million years of evolutionary tuning.

We don't know how to get to IQ60AI; but getting from IQ60AI to IQ180AI could (IMHO) be done with currently known method... (read more)

It's quite likely that the optimal behaviour should be different when the other program is functionally equivalent but not exactly identical.

If you're playing yourself, then you want to cooperate.

If you're playing someone else, then you'd want to cooperate if and only if that someone else is smart enough to check whether you'll cooperate; but if its decision doesn't depend on yours, then you should defect.
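
A toy sketch of the "exact copy" case (my illustration, with made-up names): a program that cooperates precisely when the opponent's source text is identical to its own, which is why "functionally equal but not exactly equal" programs call for separate treatment.

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    # Cooperate only with byte-for-byte copies of myself; defect against everything else.
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

# Run as a script: inspect.getsource needs the function to be defined in a file.
me = inspect.getsource(clique_bot)
print(clique_bot(me))                                 # "C" against an exact copy
print(clique_bot("def other_bot(src): return 'C'"))   # "D" against anything else
```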

PeterisP100

I see MOOCs as a big educational improvement because of this - sure, I could get the same educational info without the MOOC structure, just by reading the field's best textbooks and academic papers; but having a specific "course" with quizzes/homework makes me actually do the exercises, which I wouldn't have done otherwise, and the course schedule forces me to do them now, instead of postponing them for weeks/months/forever.

I feel confused. "a space I can measure distances in" is a strong property of a value, and it does not follow from your initial 5 axioms, and seems contrary to the 5th axiom.

In fact, your own examples given further seem to provide a counterexample - i.e., if someone prefers being a whale to 400 actual orgasms, but prefers 1/400 of being a whale to 1 orgasm, then both "being a whale" and "orgasm" have some utility value, but they cannot be used as units to measure distance.

If you're in a reality where a>b and 2a<2b, then you're not allowed to use classic arithmetic simply because some of your items look like numbers, since they don't behave like numbers.

5nshepperd
"Hawaii" can't be used as a unit to measure distance, nor can "the equator", but "the distance from Hawaii to the equator" can. Similarly, "the difference between 0 orgasms and 1 orgasm" can be used as a unit to measure utilities (you could call this unit "1 orgasm", but that would be confusing and silly if you had nonlinear utility in orgasms: 501 orgasms could be less than or more than "1 orgasm" better than 500). Also, did you mean to have these the other way around?:

OK, for a slightly clearer example: in the USA abortion debate, the pro-life "camp" definitely considers pro-life to be moral and wants it to apply to everyone, and the pro-choice "camp" definitely considers pro-choice to be moral and wants it to apply to everyone.

This is not a symbolic point; it is a moral question that determines literally life-and-death decisions.

That's not sufficient - there can be wildly different, incompatible universalizable morality systems based on different premises and axioms, and each could reasonably claim that it is the true morality and the others are tribal shibboleths.

As an example (but there are others), many of the major religious traditions would definitely claim to be universalizable systems of morality; and they are contradicting each other on some points.

2Peterdjones
Maybe. But in context it is only necessary, since in context the point is to separate out the non-ethical claims which have been piggybacked onto ethics. That's not obvious. The points they most obviously contradict each other on tend to be the most symbolic ones, about diet and dress, etc.

What is the difference between "self-serving ideas" as you describe them, "tribal shibboleths", and "true morality"?

What if "Peterdjones-true-morality" is "PeterisP-tribal-shibboleth", and "Peterdjones-tribal-shibboleth" is "PeterisP-true-morality"?

-1Peterdjones
universalizability

I'm afraid that any nontrivial metaethics cannot result in concrete universal ethics - that the context would still be individual and the resulting "how RichardKennaway should live" ethics wouldn't exactly equal "how PeterisP should live".

The difference would hopefully be much smaller than the difference between "how RichardKennaway should live RichardKennaway-justly" and "How Clippy should maximize paperclips", but still.

0Richard_Kennaway
Ok, I'll settle for concrete theorems, with proofs, about how some particular individual should live. Or ways of discovering facts about how they should live. And presumably the concept of Coherent Extrapolated Volition requires some way of combining such facts about multiple individuals.

Another situation that has some parallels and may be relevant to the discussion.

Helping starving kids is Good - that's well understood. However, my upbringing and current gut feeling say that this is not unconditional. In particular, feeding starving kids is Good if you can afford it; but feeding other starving kids if that causes your own kids to starve is not good, and would be considered evil and socially unacceptable. I.e., the goodness of resource redistribution should depend on resource scarcity, and hurting your in-group is forbidden even w... (read more)

That is so - though it depends on the actual chances; "much higher chance of survival" is different from "higher chance of survival".

But my point is that:

a) I might [currently thinking] rationally desire that all of my in-group adopt such a belief mode - I would have higher chances of survival if those close to me preferred me to a random stranger. And "belief-sets that we want our neighbors to have" are correlated with what we define as "good".

b) As far as I understand, homo sapiens do generally actually have ... (read more)

-1MugaSofer
My point is that duty, while worth encouraging throughout society, is screened off by most utilitarian calculations; as such it is a bias if, rationally, the other choice is superior.

OK, then I feel confused.

Regarding " if I have to choose wether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater" - I was under impression that this would be a common trait shared by [nearly] all homo sapiens. Is it not so and is generally considered sociopathic/evil ?

-1MugaSofer
Consider: if you attach higher utility to your child's life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.

No, I'm not arguing that this is a bias to overcome - if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater.

I'm arguing that this is a strong counterexample to the assumption that all entities may be treated as equals in calculating "value of entity_X's suffering to me". They are clearly not equal; they differ by order(s) of magnitude.

"general value of entity_X's suffering" is a different, not identical measurement - but when ma... (read more)

1MugaSofer
... oh. That seems ... kind of evil, to be honest.

What would be the objective grounds for such a multiplier? Not all suffering is valued equally. Excluding self-suffering (which is subjectively so different) from the discussion, I would value the suffering of my child as more important than the suffering of your child. And vice versa.

So, for any valuation that would make sense to me (so that I would actually use that method to make decisions), there should be some difference between the multipliers for various beings - if the average homo sapiens were evaluated with a coefficient of 1, then some people... (read more)

1MTGandP
I wouldn't try to estimate the value of a particular species' suffering by intuition. Intuition is, in a lot of situations, a pretty bad moral compass. Instead, I would start from the simple assumption that if two beings suffer equally, their suffering is equally significant. I don't know how to back up this claim other than this: if two beings experience some unpleasant feeling in exactly the same way, it is unfair to say that one of their experiences carries more moral weight than the other. Then all we have to do is determine how much different beings suffer. We can't know this for certain until we solve the hard problem of consciousness, but we can make some reasonable assumptions. A lot of people assume that a chicken feels less physical pain than a human because it is stupider. But neurologically speaking, there does not appear to be any reason why intelligence would enhance the capacity to feel pain. Hence, the physical pain that a chicken feels is roughly comparable to the pain that a human feels. It should be possible to use neuroscience to provide a more precise comparison, but I don't know enough about that to say more. Top animal-welfare charities such as The Humane League probably prevent about 100 days of suffering per dollar. The suffering that animals experience in factory farms is probably far worse (by an order of magnitude or more) than the suffering of any group of humans that is targeted by a charity. If you doubt this claim, watch some footage of what goes on in factory farms. As a side note, you mentioned comparing the value of a cow versus a human. I don't think this is a very useful comparison to make. A better comparison is the suffering of a cow versus a human. A life's value depends on how much happiness and suffering it contains.
0MugaSofer
To be clear, you are arguing that this is a bias to be overcome, yes? Scope insensitivity?

The examples of corvids designing and making specialized tools after observing what they would need to solve specific problems (placement of an otherwise inaccessible treat) seem to demonstrate such chains of thought.

Why not?

Of course, the best proportion would be 100% of people telling me that p(the_warming) = 85%; but if we limit the outside opinions to simple yes/no statements, then having 85% saying 'yes' and 15% saying 'no' seems far more informative than 100% of people saying 'yes' - as the latter would lead me to very wrongly assume that p(the_warming) is the same as p(2+2=4).
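
A toy model of why the 85%/15% split carries more information (my sketch; the assumption that each advisor independently says "yes" with probability equal to the true p is mine, not the comment's):

```python
def posterior_mean(yes, total, grid=1000):
    # Uniform prior over p, binomial likelihood: E[p | `yes` out of `total` said yes].
    ps = [(i + 0.5) / grid for i in range(grid)]
    weights = [p**yes * (1 - p)**(total - yes) for p in ps]
    z = sum(weights)
    return sum(p * w for p, w in zip(ps, weights)) / z

print(posterior_mean(85, 100))    # ~0.84: close to the underlying 85%
print(posterior_mean(100, 100))   # ~0.99: unanimity would wrongly suggest near-certainty
```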

2A1987dM
Why?
PeterisP130

The participants don't know the rules, and have been given a hint that they don't know the rules - the host said that the choices would be independent/hidden, but then tells you the other contestant's choice. So they can easily assume there's a chance that the host is lying, or that he might then give the first contestant a chance to switch his choice, etc.

6drnickbone
This is a good catch, and a criticism of the "deliberately spoil the experiment" design. A better design would be to put the contestants in adjacent rooms, but to allow the second contestant to "accidentally" overhear the first (e.g. speaking loudly, through thin walls). Then the experimenter enters the second contestant's room and asks them whether they want to co-operate or defect.

Actually, how should one measure one's own IQ? I wouldn't know a reasonable place to start looking, as the internet is full of advertising for IQ measurements, i.e., lots of intentional misinformation. I'd especially want to avoid anything restricted to a single location like the USA - this makes SATs useless, well, at least for me.

0taryneast
Mensa. Or a qualified psychologist.

Your interlocutor clearly wouldn't be behaving nicely and would clearly be pushing for some confrontation - but does that mean it is wrong or not allowed? This feels the same as if (s)he simply and directly called you a jackass to your face - it is an insult and potentially hostile, but it's clearly legal and 'allowed'; there are often quite understandable, valid reasons to (re)act in such a way against someone, and it wouldn't really be an excuse in a murder trial (and the original problem does involve murders as a reaction to perceived insults).

All of the above days seem quite fun and fine to me.

As for the original article's point - I agree that there isn't any significant difference between the hypothetical British salmon case and Mohammad's case, but this fact doesn't change anything. There isn't a right to never be offended. There is no duty to abstain from offending others. It's nice if others are nice, but you can't demand that everybody be nice - most of them will be indifferent, and some will not be nice, and you just have to live with it and deal with it without using violence - and if yo... (read more)

1s8ist
Well said! It is shameful that many folks' response to this is that we need to punish those who act to offend. Those who enforce and enable the unreasonable standard of a right to not be offended are the ones to blame.

If I understand your 'problem' correctly - estimating potential allies' capabilities and being right/wrong about that (say, when considering teammates/guildmates/raid members/whatever) - then it's hardly a game-specific concept; it applies to any partner selection without perfect information, like mating or job interviews. As long as there is a large enough pool of potential partners, and you don't need all of the 'good' ones, then false negatives don't really matter as much as the speed or ease of the selection process and the cost of false positives, ... (read more)

As the saying goes, you can ignore politics, but that doesn't mean that politics will ignore you.

It is instrumentally rational to be aware of political methodologies, both in the sense that they will interact with many issues in your daily life, and in the sense of how you may improve the chances of success for any goal that needs interaction or cooperation with others.

1Nornagest
I agree, but would draw a distinction between studying political methodology and political issues. Many mind-killers aren't mind-killing if you study them through an abstraction layer.

It goes from the motivation for systems thinking through the theoretical foundations, the maths used, and the practical applications, covering pretty much all the common types of issues seen in the real world.

It's about 5 times the volume (~1000 A4 pages) of Meadows' "Thinking in Systems", so not exactly textbook format, but it covers the same material quite well and more. Though it does spend much of its second half focusing almost exclusively on the practical development of system dynamics models.

The saying actually goes 'jack of all trades and a master of none, though oft better than a master of one'.

There are quite a few insights and improvements that are obvious with cross-domain expertise, and many of the new developments nowadays are pretty much mergers of two or more knowledge domains - bioinformatics being a single, but far from the only, example. Computational linguistics, for example - there are quite a few treatises on semantics written by linguists that would be insightful and new for computer science people handling non-linguistic knowledge/semantics projects as well.

I haven't read the books you mention, but it seems that Sterman's 'Business Dynamics: Systems Thinking and Modeling for a Complex World' covers mostly the same topics, and it felt really well written; I'd recommend that one as an option as well.

0Davidmanheim
I have not read it, but the title and the reviews on Amazon seem to imply that the book isn't about systems theory; it's about applications of systems theory to business and economics - two great applications, but not the subject itself. Physics books may be great, and they may need to explain math, but they are not math books. If this is indeed a business book, I'd hesitate to recommend it as a book on systems theory.

In that sense, it's still futile. The whole reason for the discussion is that AI doesn't really need permission or consent of anyone; the expected result is that AI - either friendly or unfriendly - will have the ability to enforce the goals of its design. Political reasons will be easily satisfied by a project that claims to try CEV/democracy but skips it in practice, as afterwards the political reasons will cease to have power.

Also, a 'constitution' matters only if it is within the goal system of a Friendly AI; otherwise it's not worth the paper it's written on.

0Perplexed
Well, yes. I am assuming that the 'constitution' is part of the CEV, and we are both assuming that CEV or something like it is part of the goal system of the Friendly AI. I wouldn't say that it is the whole reason for the discussion, though that is the assumption explaining why many people consider it urgent to get the definition of Friendliness right on the first try. Personally, I think that it is a bad assumption - I believe it should be possible to avoid the all-powerful singleton scenario, and create a 'society' of slightly less powerful AIs, each of which really does need the permission and consent of its fellows to continue to exist. But a defense of that position also belongs in a separate discussion.

I'm still up in the air regarding Eliezer's arguments about CEV.

I have all kinds of ugh-factors coming to mind about the not-good, or at least not-'PeterisP-good', issues that an aggregate of 6 billion hairless-ape opinions would contain.

The 'Extrapolated' part is supposed to solve that; but in that sense I'd say it turns the whole problem from knowledge extraction into extrapolation. In my opinion, the difference between the volition of Random Joe and the volition of Random Mohammad (forgive me for stereotyping for the sake of a short example) i... (read more)

1Perplexed
As I see it, adherence to the forms of democracy is important primarily for political reasons - it is firstly a process for gaining the consent of mankind to a compromise, and only secondly a technique for locating the 'best' compromise (best by some metric). Also, as I see it, it is a temporary compromise. We don't just do a single opinion survey and then extrapolate. We institute a constitution guaranteeing that mankind is to be repeatedly consulted as people become accustomed to the Brave New World of immortality, cognitive enhancement, and fun theory.

To put it in very simple terms - if you're interested in training an AI according to technique X because you think that X is the best way, then you design or adapt the AI's structure so that technique X is applicable. Saying 'some AIs may not respond to X' is moot, unless you're talking about trying to influence (hack?) an AI designed and controlled by someone else.

PeterisP100

I've worn full-weight chain and plate reconstruction gear while running around for a full day, and I'm not physically fit at all - I'd say that a random geeky 12-year-old boy would easily be able to wear a suit of armor, the main wizard-combat problems being getting winded very, very quickly when running (so they couldn't rush in the same way Draco's troops did), and slightly slowed-down arm movement, which might hinder combat spellcasting. It is not said how long the battles are - if they are less than an hour, then there shouldn't be any serious hindrances; if longer, then the boys would probably want to sit down and rest occasionally, or use some magic to lighten the load.

2drethelin
This. I've also worn multiple layers of armor, and something that's heavy to lift with your hands becomes much easier to handle when you're supporting it with your shoulders/body. If we extrapolate from Harry, they transfigured the armor into existence, so it could be even lighter than average armor in any case.
0bigjeff5
They wouldn't have had to get the heaviest stuff either; they were trying to stop first-year sleep spells, not Auror Stupefys. Chain mail was probably more than enough, and heavy wool might have had a good effect if it were thick enough. Edit: I should have read down further - apparently chain mail is much heavier than plate. Who knew?

hypothesis—that it is really hard to over-ride the immediate discomfort of an unpleasant decision—is to look at whether aversions of comparable or greater magnitude are hard to override. I think the answer in general is 'no.' Consider going swimming and having to overcome the pain of entering water colder than surrounding. This pain, less momentary than the one in question and (more or less) equally discounted, doesn't produce problematic hesitation.

I can't agree with you - it most definitely does produce a problematic hesitation. If you're bringing up this example, then I'd say it is evidence that the general answer is 'yes', at least for a certain subpopulation of homo sapiens.

4DanielVarga
I am most definitely a member of that subpopulation. At a swimming pool, peer pressure quickly kicks in. But at a shallow beach, I can procrastinate in waist-high water for minutes.
PeterisP130

Sorry for intruding on a very old post, but taking 'people-random' integers modulo 2 is worse than flipping a coin - when asked for a random number, people tend to choose odd numbers more often than even numbers, and prime numbers more often than non-prime numbers.

1TraderJoe
[comment deleted]

Then it should be rephrased as 'We should seek a model of reality that is accurate even at the expense of flattery.'

Ambiguous phrasings facilitate only confusion.

3shokwave
I read it as declarative - We (at spaceandgames) seek a model etc etc. Peter isn't the only person on that twitter account.

I'm not an expert on the relevant US legislative acts, but this is the legal definition in local laws here, and I expect that the term 'espionage' was defined a few centuries ago and would be mostly consistent throughout the world.

A quick look at current US law (http://www.law.cornell.edu/uscode/18/usc_sec_18_00000793----000-.html) does indicate that there is a penalty for such actions with 'intent or reason to believe ... for the injury of United States or advantage of any foreign nation' - so simply acting to intentionally harm the US would be punishable as... (read more)

6[anonymous]
Whether Assange is intent on helping the US nation or damaging it depends on how you define "the US nation". Assange likes the US constitution but hates the current US government. If you try to make the government crumble, with the goal of regime change to get a regime that honors the US constitution, is that damaging the US nation? Assange wrote in one of the interviews that founding Wikileaks was a "forced move". Why is it a forced move? Because otherwise the war was lost. Which war? http://events.ccc.de/congress/2005/fahrplan/events/920.en.html gives you the talk, in the year before the founding of Wikileaks, that resembles an admission that the war is lost. It's not really an accident that the CCC congress that happened in the last week - which had a keynote by a person who's involved in Wikileaks and gave the "We lost the war" talk I mentioned above - was titled "We come in peace": http://rop.gonggri.jp/?p=438 Groups that challenge the status quo generally get misunderstood, and the mainstream media pretends that Wikileaks is simply Julian Assange, and therefore ignores the intellectual environment that produced Wikileaks. Yesterday Daniel Domscheit-Berg said that Wikileaks got 600 applications from volunteers after their talk at the CCC in 2009. A CCC foundation manages Wikileaks donations. Even though the CCC distanced itself a bit from Wikileaks in the last year, it's still the intellectual basis from which Wikileaks rose. A good soundbite from the keynote of the CCC: "People ask me “Anonymous… That is the hackers striking back, right?” And then I have to explain that unlike Anonymous, people in this community would probably not issue press release with our real names in the PDF metadata. And that if this community were to get involved, the targets would probably be offline more often."

Spies by definition are agents of foreign powers acting on your soil without proper registration - i.e., unlike the many representatives in embassies, who have registered as agents of their country and are allowed to operate on its behalf until/if expelled.

Since Assange (IIRC) was not in the USA while the communiques were leaked, and it is not even claimed that he is an agent of some other power, there was no act of espionage. It might be called espionage if and only if Manning was acting on behalf of some power - and even then, Manning would be the 'spy', not Assange.

1PhilGoetz
Do you know whether that's the definition used by the Espionage Act?

I take the intention of the original assertion to be that even in this case you would still fail in making 10,000 independent statements of that sort - i.e., in trying to do it, you are quite likely to somehow make a mistake at least once, say, by a typo, a slip of the tongue, accidentally omitting 'not', or whatever. To fail on a statement like "53 is prime", all it takes is for you to not notice that it actually says '51 is prime', or to make some mistake when dividing.

Any random statement of yours has a 'ceiling' of x-nines accuracy.... (read more)
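
A quick arithmetic sketch of that ceiling (the per-statement slip rate is an assumed figure, purely for illustration):

```python
p_slip = 1e-4                                      # assume 99.99% per-statement reliability
p_at_least_one_error = 1 - (1 - p_slip) ** 10_000  # across 10,000 independent statements
print(p_at_least_one_error)                        # ~0.63: more likely than not to slip at least once
```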
