
Comment author: Lumifer 27 May 2016 02:36:18PM 1 point [-]

Are progressives particularly enthusiastic about imperial presidency?

I think so, yes. Here is an example; they are not hard to find. Of course, the left elides the word "imperial" :-/

I have noticed people being annoyed

More than annoyed. These people want to expand the presidential powers and use the executive branch to achieve their goals, separation of powers be damned.

Is it a thing progressives do more than conservatives?

Yes, because progressives are much more comfortable with the idea of Big State (not to mention the idea of upending traditional arrangements).

Comment author: gjm 29 May 2016 01:23:10AM -2 points [-]

Here is an example

... whose authors say

> the consolidation of executive authority has led to a number of dangerous policies [see David Shipler, in this issue], and we strongly oppose the extreme manifestations of this power, such as the “kill lists” that have already defined Obama’s presidency

which doesn't seem exactly like a ringing endorsement of "imperial presidency".

So far as I can tell, the article isn't proposing that the POTUS should have any powers he doesn't already have; only that he should use some of his already-existing powers in particular ways. If that's "imperial presidency" then the US already has imperial presidency and the only thing restraining it is the limited ambition of presidents.

These people want to expand the presidential powers and use the executive branch to achieve their goals, separation of powers be damned.

Which people, exactly? Again, the article you pointed to as an example of advocacy for "imperial presidency" claims quite explicitly that the president already has the power to do all the things it says he should do. (Of course that might be wrong. But saying the president should do something that you wrongly believe he's already entitled to do is not advocating for expanding presidential power.)

Yes, because [...]

Do you have evidence that they actually do, as opposed to a bulveristic explanation of why you would expect them to?

I'm not sure how one would quantify that, but a related question would be which presidents have actually exercised more "imperial" power. A crude proxy that happens to be readily available is the number of executive orders issued. So here are the last 50 years' presidents in order by number of EOs, most to least: Reagan (R), Clinton (D), Nixon (R), Johnson (D), Carter (D), Bush Jr (R), Obama (D), Ford (R), Bush Sr (R). Seems fairly evenly matched to me.

I don't think I understand your bulveristic explanation, anyway. Issuing more executive orders (or exercising more presidential power in other ways) is about the balance between branches of government, not about the size of the government.

Here's an interesting article from 2006 about pretty much exactly this issue; it deplores the (alleged) expansion of presidential power and says both conservatives and progressives are to blame, and if you look at its source you will see that it's not likely to be making that claim in the service of a pro-progressive bias.

Comment author: Lumifer 27 May 2016 12:48:51AM *  3 points [-]

Is Sanders actually more than let's say 25% likely to get the nod?

No.

To get the nomination he needs something extraordinary to happen. Something like Hillary developing a major health problem or the FBI indicting her over her private email server.

Trump scares me almost as much as Clinton

Someone pointed out a silver lining: the notion of President Trump might make progressives slightly less enthusiastic about imperial presidency. I'm not holding my breath, though.

Comment author: gjm 27 May 2016 10:19:39AM *  -1 points [-]

Are progressives particularly enthusiastic about imperial presidency?

I haven't noticed any such enthusiasm. I have noticed people being annoyed when "their guy" was in the White House but couldn't do the things they wanted because Congress was on the other side, but that's not at all the same thing.

Is it a thing progressives do more than conservatives? I dunno. It may be a thing progressives have done more of in the last decade or so because they've spent more of that time with the president on their side and Congress against, but that doesn't tell us much about actual differences in disposition.

[EDITED for slightly less clumsy wording.]

Comment author: Ixiel 26 May 2016 10:05:30PM -1 points [-]

Ok, I have to hold my breath as I ask this, and I'm really not trying to poke any bears, but I trust this community's ability to answer objectively more than other places I can ask, including more than my weak weak Google fu, given all the noise:

Is Sanders actually more than let's say 25% likely to get the nod?

I had written him off early, but I don't get to vote in that primary so I only just started paying attention. I'm probably voting Libertarian anyway, but Trump scares me almost as much as Clinton, so I'd sleep a little better in the meantime if it turns out I was wrong.

Thanks in advance. If this violates the Politics Commandment I accept the thumbs, but I'd love to also hear an answer I can trust.

Comment author: gjm 27 May 2016 12:09:36AM -1 points [-]

He's millions of votes and many many delegates down compared to HRC. I think the only realistic way he gets the Democratic nomination is if HRC abruptly becomes obviously unelectable (e.g., if the business with her email server starts looking likely to get her into actual legal trouble, or someone discovers clear evidence of outright bribery from her Wall Street friends), in which case the "superdelegates" might all switch to Sanders. I don't see any such scenario that actually looks more than a few percent likely.

(I make no claim to be an expert; I offer this only as a fairly typical LWer's take on the matter.)

Comment author: Pimgd 26 May 2016 08:48:52AM *  0 points [-]

Pretty much this; if we adjust the numbers to "A: 20 cents or B: 25% chance for 100 cents" then I'd take option B, but scale it up to "A: $200,000 or B: 25% chance for $1,000,000", and I'd take option A. Because $0 is 0 points, $1 million is something like 4 points, and $200,000 is about 2 points.

Human perception of scale for money is not linear (but not logarithmic either... not log 10, anyway, maybe log something else). And since I'm running this flawed hardware...

Some of it was pointed out already as "prospect theory" but that seems to be more about perception of probability rather than the perception of the actual reward.

Comment author: gjm 26 May 2016 11:32:56AM -1 points [-]

Log to base 10 and log to base anything_else differ only by a scale factor, so if you find log-to-base-10 unsatisfactory you won't like any other sort of log much better.

It sounds like you're suggesting that utility falls off slower than log(wealth) or log(income). I think there's quite good evidence that it doesn't -- but if you're looking at smallish changes in wealth or income then of course you can get that smaller falloff in change in utility. E.g., if you start off with $66,666 and gain $200k then your wealth has gone up by a factor of 4; if you gain $1M instead then your wealth has gone up by a factor of 16. If your logs are to base 2, you get exactly the numbers you described.

The right figure to use for "wealth" there may well not be exactly your total net wealth; it should probably include some figure for "effective wealth" arising from your social context -- family and friends, government-funded safety net, etc. It seems like that should probably be at least a few tens of kilobucks in prosperous Western countries.
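If it helps, here's a minimal sketch of that arithmetic in Python, assuming log-base-2 utility of total effective wealth and using the purely illustrative $66,666 baseline from above:

```python
import math

# Illustrative numbers only: a baseline "effective wealth" of $66,666 and
# log-base-2 utility, as in the example above.
baseline = 66_666
for gain in (200_000, 1_000_000):
    factor = (baseline + gain) / baseline
    print(f"gain ${gain:,}: wealth x{factor:.0f}, utility gain {math.log2(factor):.2f}")
```

The two gains come out to roughly 2 and 4 "points", matching the numbers you described.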

Comment author: Viliam 26 May 2016 08:50:29AM *  1 point [-]

What if Mary is Chinese? :P

I mean, what if there is a person who doesn't understand Chinese in a room, manipulating Chinese symbols they don't understand according to formal rules that make no sense to them. The system already "knows" (on the symbol level, not the person who operates it) everything about the "red" color, but it has never perceived the red color in its input. And then, one day, it receives the red color in the input. If there is an unusual response by the system, what exactly caused it? (For an extra layer of complication, let's suppose that the inputs to the "Chinese room" are bit streams containing JPEG images, so even the person operating the room has never seen the red color.)

To add more context, what if the perceived red object is a trolley running down the railway track...

Comment author: gjm 26 May 2016 11:13:55AM *  -1 points [-]

See also: Can Bad Men Make Good Brains Do Bad Things?. That's a JSTOR article which won't be accessible for most readers, but some kind person has copied out its content here.

[EDITED to use a slightly better choice of some-kind-person.]

Comment author: etymologik 24 May 2016 09:08:23PM *  1 point [-]

Thanks much for the link to the Secretary Problem solution. That will serve perfectly. Even if I don't know the total number of houses that will be candidates for serious consideration, I do know there's an average, which is (IIRC) six houses visited before a purchase.

As for cheating ... what I mean by that is deluding myself about some aspects of the property I'm looking at so that I believe "this is the one" and make an offer just to stop the emotional turmoil of changing homes and spending a zillion dollars that I don't happen to possess. "Home sweet home" and "escape the evil debt trap" are memes at war in my head, and I will do things like hallucinate room dimensions that accommodate my furniture rather than admit to myself that an otherwise workable floor plan in a newly gutted and beautifully renovated yet affordable home is too dang small and located in a declining neighborhood. I take a tape measure and grid paper with me to balk the room size cheat. But I also refer to the table, which requires me to check for FEMA flood zone location. This particular candidate home was in a FEMA 100-year flood zone, and the then-undeveloped site had in fact been flooded in 1952. That fact was enough to snap me out of my delusion. At that point the condition of the neighboring homes became salient.

The extent to which self-delusion and trickery are entwined in everyday thought is terribly disheartening, if you want to know the truth.

On weighting my functional criteria based on dollars, the real estate market has worked out a marvelous short circuit for rationality. Houses are no longer assessed for value based on an individual home's actual functional specifications. The quantity of land matters (so yard size matters to price). Otherwise, overwhelmingly, residential properties are valued for sale and for mortgages based on recent sales of "comparable" homes. "Comparable" means "the same square footage & same number of bedrooms and baths within half a mile of your candidate home." The two homes can otherwise be completely dissimilar, but will nevertheless be considered "comparable". No amount of improvements to the house or yard will change the most recent sale price of the other homes in the neighborhood. What this means is that sales prices are just for generic shelter plus the land, where the land is most of the value and neighborhood location is most of the land value. So the price of the home you're looking at is not really very closely tied to anything you might value about the home. This makes it very difficult to come up with a reasonable market price for, say, an indoor laundry versus laundry facilities in the garage. It's certainly beyond my meager capacity to calibrate the value of home amenities based on dollars.

I'm told it wasn't this way in the 1950s, but given the history of land scams in the U.S., which go all the way back to colonial land scams in Virginia, I have my doubts that prices for real estate were ever rational.

But I'll try to find something for weights. Back to the drawing board. And thanks for your help.

Comment author: gjm 25 May 2016 12:50:38PM -1 points [-]

Secretary Problem [...] will serve perfectly

Beware! The optimal solution depends a lot on the exact problem statement. The goal in the SP is to maximize the probability that you end up with the best available option, and it assumes you're indifferent among all the other outcomes: ending up with the second-best counts no better than ending up with the worst.

That Wikipedia page discusses one variant, where each candidate has a score chosen uniformly at random between 0 and 1, and all you learn about each candidate is whether it's the best so far. Your goal is to maximize your score. With that modification, the optimal strategy turns out to be to switch from "observe" to "accept next best-so-far" much sooner than with the original SP -- after about sqrt(n) candidates.

Your actual situation when buying a house is quite different from either of these. You might want to hack up a little computer program that simulates a toy version of the house-buying process, and experiment with strategies.
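For instance, here's a minimal sketch of the kind of toy simulation I mean (Python, with made-up numbers): it plays the cardinal-payoff variant with a fixed "skip the first k, then take the next best-so-far" rule, so you can compare cutoffs for a given number of candidates.

```python
import random

def average_payoff(n, cutoff, trials=100_000):
    """Skip the first `cutoff` candidates, then take the first one that
    beats everything seen so far; if none does, you're stuck with the last."""
    total = 0.0
    for _ in range(trials):
        values = [random.random() for _ in range(n)]
        best_seen = max(values[:cutoff], default=float("-inf"))
        chosen = values[-1]  # forced to take the last one if nothing beats best_seen
        for v in values[cutoff:]:
            if v > best_seen:
                chosen = v
                break
        total += chosen
    return total / trials

n = 6  # e.g. the "about six houses visited" figure mentioned upthread
for cutoff in range(n):
    print(cutoff, round(average_payoff(n, cutoff), 3))
```

The useful part is that you can then swap in a payoff rule closer to your real situation (a penalty for ending up with nothing, diminishing returns, etc.) rather than relying on the textbook formulas.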

Comment author: Tripitaka 25 May 2016 08:00:04AM -1 points [-]

I am not sure if I read it here or on SSC, but someone tried to estimate what a "Mary's room" equivalent for the human brain would look like. A moon-sized library on which robotic crawlers run around at decent fractions of c ...

Anybody having info on that?

Comment author: gjm 25 May 2016 12:40:48PM -1 points [-]

When you say "mary's room", do you actually mean Chinese Room rather than Mary's Room?

Comment author: Viliam 25 May 2016 11:51:50AM *  5 points [-]

allistic person, let’s talk about this one place you feel locked out of and how we can make it even better for the majority, who already run so many other industries to the exclusion of people like me, first. Let’s make sure the already-privileged majority is comfortable in all places, at all times, before appreciating small pockets of minority safety and accommodation, and asking what they used to do right before they, too, were colonized by the tyranny of the narrowly-defined “default” human being in need of additional comfort while I try to survive. THAT FEELS FUCKING INCLUSIVE TO ME, HELL YEAH.

(...) Expecting autistic people to get better at small talk in order to make allistics feel more welcome is like expecting people in wheelchairs to get better at walking in order to make physically abled people feel more welcome.

(...) Mainstream feminism has some serious catching-up to do when it comes to learning about the lives of people who aren’t nice normal middle- to upper-class ladies, not to mention a lot of earned distrust. When you tell people that a skill to which they are inherently maladapted is a new requirement for participating in some culture, you are telling those people that they are no longer welcome in that culture. Bluntly, that is not your decision to make, and people are right not to trust the motivations of anyone who behaves as if they think it is. Too many of us have been burned too many times by people who told us “we want to make this a great place for everyone!”, only to find out that in practice, “everyone” actually means “all the allistics.”

reminds me of an SSC post on safe spaces:

One important feature of safe spaces is that they can’t always be safe for two groups at the same time. Jews are a discriminated-against minority who need a safe space. Muslims are a discriminated-against minority who need a safe space. But the safe space for Jews should be very far way from the safe space for Muslims, or else neither space is safe for anybody.

The rationalist community is a safe space for people who obsessively focus on reason and argument even when it is socially unacceptable to do so.

I don’t think it’s unfair to say that these people need a safe space. The number of times I’ve been called “a nerd” or “a dork” or “autistic” for saying something rational is too high to count. Just recently commenters on Marginal Revolution – not exactly known for being a haunt for intellect-hating jocks – found an old post of mine and called me among many other things “aspie”, “a pansy”, “retarded”, and an “omega” (a PUA term for a man who’s so socially inept he will never date anyone).

I also enjoy this ending (EDIT: of the article linked by Lumifer, not the SSC one):

Nor is it lost on me that I am sitting here patiently spergsplaining theory of mind to people who supposedly have it when I supposedly don’t. Allistics can get away with developing a theory of one mind — their own — because they can expect most of the people they interact with to have knowledge, perspectives, and a sensorium not all that different from theirs. Autists don’t get that option. Reaching adulthood, for us, means first learning how to function through a distorted sensorium, then learning to develop a theory of minds, plural, starting with ones different from our own.

It reminds me of my pet theory on some similarities between high-IQ people and autists; specifically, having to develop a "theory of mind unlike my own" during childhood. (But the two of us probably had a long disagreement about this in the past, if I remember correctly, so I don't want to repeat it here.)

Comment author: gjm 25 May 2016 12:33:17PM -1 points [-]

Just to forestall confusion, that ending is not the ending of the SSC post, but the (near-)ending of the post Lumifer linked to. (In particular, Scott is not calling himself autistic.)

Comment author: Good_Burning_Plastic 23 May 2016 09:43:52AM 1 point [-]

"Random" doesn't mean anything but "unpredictable", and a possibly relevant question is "unpredictable by whom?".

But yes, probably. (If you ask 1000 people for a number from 1 to 10 many more than 100 of them will say "7" etc.)

Comment author: gjm 23 May 2016 02:02:26PM -1 points [-]

I think "random" does mean something more than "unpredictable". It means something more like "independent of things you care about". More precisely, that's what it should mean in most places where it's used.

(I'm not quite satisfied with this formulation; e.g., a "random" thing that always takes the same value is independent of everything, but you wouldn't usually want to call it random. What we're really trying to get at is "statistically indistinguishable from idealized randomness" but it would be nice to find a way of saying it that doesn't appeal to an existing notion of randomness. Perhaps something like "incompressible, on average, given the accessible state of all the other things we care about".)

Imagine, if you will, a lottery that works as follows. Each lottery ticket bears a SHA-256 hash of (the ticket's lottery numbers + a further string); the further string is not revealed until the time of the draw. When drawing time comes, the winning numbers and the further string are revealed on national TV, and if you think you might have a winning ticket you can bring it to have the hash checked.
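For concreteness, a minimal sketch of that commit-and-reveal step (the ticket serialization here is made up for illustration; any fixed, unambiguous format would do):

```python
import hashlib

def ticket_hash(numbers, further_string):
    # Made-up serialization of (lottery numbers + further string).
    data = ",".join(map(str, numbers)) + "|" + further_string
    return hashlib.sha256(data.encode()).hexdigest()

# Printed on the ticket at purchase time: only the hash.
committed = ticket_hash([4, 8, 15, 16, 23], "secret string revealed at the draw")

# At draw time the numbers and the further string are published, and anyone
# holding a ticket can check whether its printed hash matches.
assert ticket_hash([4, 8, 15, 16, 23], "secret string revealed at the draw") == committed
```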

In Scenario 1, the winning numbers are chosen "at random". In Scenario 2, you (and only you) have the magical power to make the winning numbers be the ones hashed onto your ticket. You don't know the further string. You also don't know what your ticket's numbers are. You just know that after you perform your magical ritual the numbers drawn will match the numbers on your ticket.

The numbers drawn are still unpredictable. No one knows what numbers are on your ticket. (Let's suppose that the tickets are made by some process that after making each ticket erases all evidence of what numbers have been hashed onto it.) But something is predictable, namely that the numbers drawn will match yours and you will win the lottery.

The lottery numbers in Scenario 2 are still "random" in some sense, but this is exactly the kind of situation you're trying to avoid when you deliberately randomize things.

So, the answer to Romashka's question is: those results would be less random-in-my-sense because they might correlate with interesting properties of the people in the experiment, which could be a problem if e.g. you were using these coin "tosses" to control something you're doing with the people (e.g., which ones to use for the next phase of the experiment, or which of two things to ask them to do). They might well also diverge somewhat from the 1:1 ratio you'd expect from theoretical unbiased coin tosses (e.g., maybe most people prefer to put coins with their "heads" face upwards).

Comment author: etymologik 20 May 2016 06:22:54PM 0 points [-]

Thanks for that explanation of utility functions, gjm, and thanks to protostar for asking the question. I've been struggling with the same issue, and nothing I've read seems to hold up when I try to apply it to a concrete use case.

What do you think about trying to build a utility TABLE for major, point-in-time life decisions, though, like buying a home or choosing a spouse?

P.S. I'd upvote your response to protostar, but I can't seem to make that happen.

Comment author: gjm 20 May 2016 09:03:42PM -1 points [-]

build a utility table

I'm not sure exactly what you mean by a utility table, but here is an example of one LWer doing something a bit like a net utility calculation to decide between two houses.

One piece of advice I've seen in several places is that when you have a big and difficult decision to make you should write down your leading options and the major advantages and disadvantages of each. How much further gain (if any) there is from quantifying them and actually doing calculations, I don't know; my gut says probably usually not much, but my gut is often wrong.
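(If you do want to quantify, the bookkeeping itself is trivial; here's a toy sketch with made-up criteria, scores, and weights, just to show the shape of the calculation.)

```python
# Made-up options, criteria (scored -3..+3), and weights, purely illustrative.
weights = {"price": 3, "commute": 2, "indoor laundry": 1}
options = {
    "house A": {"price": -2, "commute": 2, "indoor laundry": 1},
    "house B": {"price": 1, "commute": -1, "indoor laundry": 0},
}

for name, scores in options.items():
    total = sum(weights[c] * s for c, s in scores.items())
    print(name, total)
```

The hard part, as you say, is picking the weights, not doing the sums.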
