I took the survey too. I can haz karma plz? Kthxbye.
The race question doesn't make much sense for Europeans. I could answer White (non-Hispanic) even though the Hispanic category doesn't exist here. But what should Spaniards answer?
The thing is that the proof of Gödel's theorem is constructive. We have an algorithm to construct Gödel sentences from axioms. So basically, the only way we could fail to recognize our Gödel sentences is by being unable to recognize our axioms.
"Stuart Armstrong does not believe this sentence."
Aw, I happen to have a bit of difficulty in figuring out what proposition that desugars to in the language of Peano Arithmetic, could you help me out? :-)
(The serious point being, we know that you can write self-contradictory statements in English and we don't expect to be able to assign consistent truth-values to them, but the statements of PA or the question whether a given Turing machine halts seem to us to have well-defined meaning, and if human-level intelligence is computable, it seems at least at first as if we should be able to encode "Stuart Armstrong believes proposition A" as a statement of PA. But the result won't be anywhere as easily recognizable to him as what you wrote.)
But that sentence isn't self-contradictory like "This is a lie"; it is just self-referential, like "This sentence has five words". It does have a well-defined meaning and is decidable for all hypothetical consistent people other than a hypothetically consistentified Stuart Armstrong.
That's a problem with all theories of truth, though. "Elaine is a post-utopian author" is trivially true if you interpret "post-utopian" to mean "whatever professors say is post-utopian", or "a thing that is always true of all authors" or "is made out of mass".
To do this with programs rather than philosophy doesn't make it any worse.
What I'm suggesting is that there is a correspondence between meaningful statements and universal computer programs. Obviously this theory doesn't tell you how to match the right statement to the right computer program. If you match the statement "snow is white" to a computer program that is a bunch of random characters, the program will return no result and you'll conclude that "snow is white" is meaningless. But that's just the same problem as the philosopher who refuses to accept any definition of "snow", or who claims that snow is obviously black because "snow" means that liquid fossil fuel you drill for and then turn into gasoline.
If your closest match to "post-utopian" is a program that determines whether professors think someone is post-utopian, then you can either conclude that post-utopian literally means "something people call post-utopian" - which would probably be a weird and nonstandard word use the same way using "snow" to mean "oil" would be nonstandard - or that post-utopianism isn't meaningful.
Yeah, probably all theories of truth are circular and the concept is simply non-tabooable. I agree your explanation doesn't make it worse, but it doesn't make it better either.
If a person with access to the computer simulating whichever universe (or set of universes) a belief is about could in principle write a program that takes as input the current state of the universe (as represented in the computer) and outputs whether the belief is true, then the belief is meaningful.
(if the universe in question does not run on a computer, begin by digitizing your universe, then proceed as above)
But that's only useful if you make it circular.
Taking you more strictly at your word than you mean it, the program could just return true for the majority belief on empirically non-falsifiable questions. Or it could just return false on all beliefs, including your belief that that is illogical. So with the right programs, pretty much arbitrary beliefs pass as meaningful.
You actually want it to depend on the state of the universe in the right way, but that's just another way to say it should depend on whether the belief is true.
But isn't the risk of diversifying compensated by a corresponding possibility of large reward if the sector outperforms? I wouldn't consider a strategy that produces modest losses with high probability but large gains with low probability sufficient to disprove my claim.
Let's go one step back on this, because I think our point of disagreement is earlier than I thought in that last comment.
The efficient market hypothesis does not claim that the profit on all securities has the same expectation value. EMH-believers don't deny, for example, the empirically obvious fact that this expectation value is higher for insurers than for more predictable businesses. Also, you can always increase your risk and expected profit by leverage, i.e. by investing borrowed money.
This is because markets are risk-averse, so for the same expectation value you get paid extra to accept a higher standard deviation. Out- or underperforming the market is really easy by accepting more or less risk than it does on average. The claim is not that the expectation value will be the same for every security, only that the price of every security will be consistent with the same prices for risk and expected profit.
So if the EMH is true, you cannot get a better deal on expected profit without also accepting higher risk, and you cannot get a higher risk premium than other people. But you still can get lots of different trade-offs between expected profit and risk.
Now can you do worse? Yes, because you can separate two types of risk.
Some risks are highly specific to individual companies. For example, a company may be in trouble if a key employee gets hit by a beer truck. That's uncorrelated risk. Other risks affect the whole economy, like revolutions, asteroids or the boom-bust-cycle. That's correlated risk.
Diversification can insure you against uncorrelated risk because, by definition, it's independent of the risk of other parts of your portfolio, so it's extremely unlikely for many of your diverse investments to be affected at the same time. So if everyone is properly diversified, no one actually needs to bear uncorrelated risk. In an efficient market that means it doesn't earn any compensation.
Correlated risk is not eliminated by diversification, because it is by definition the risk that affects all your diversified investments simultaneously.
So if you don't diversify you are taking on uncorrelated risk without getting paid for it. If you do that, you could get a strictly better deal by taking on a correlated risk of the same magnitude, which you would get paid for. And since that is what the market is doing on average, you can get a worse deal than it does.
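The arithmetic behind this can be sketched with a small simulation (the 5% expected return, 20% idiosyncratic standard deviation, and 25-asset portfolio are made-up illustrative numbers, not from the comment): averaging over independent assets leaves the expected return unchanged but shrinks the uncorrelated risk by roughly 1/sqrt(n).

```python
import random
import statistics

random.seed(0)

def portfolio_returns(n_assets: int, n_trials: int = 10000) -> list[float]:
    # Each asset: 5% expected return, 20% purely idiosyncratic
    # (uncorrelated) standard deviation. A diversified portfolio
    # is the average of n independent draws.
    return [statistics.mean(random.gauss(0.05, 0.20) for _ in range(n_assets))
            for _ in range(n_trials)]

single = portfolio_returns(1)
diversified = portfolio_returns(25)

# Same expected return, but idiosyncratic risk shrinks ~1/sqrt(n):
print(f"single-asset std: {statistics.stdev(single):.3f}")       # near 0.20
print(f"diversified std:  {statistics.stdev(diversified):.3f}")  # near 0.20/5 = 0.04
```

Correlated risk would not shrink this way: if all 25 draws shared one common shock, averaging them would leave that shock's contribution untouched.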
An interesting corollary of the efficient market hypothesis is that, neglecting overhead due to things like brokerage fees and assuming trades are not large enough to move the market, it should be just as difficult to lose money trading securities as it is to make money.
No, not really. In an efficient market, risks uncorrelated with those of other securities shouldn't be compensated, so you should easily be able to screw yourself over by not diversifying.
I'm thinking about a fantasy setting that I expect to set stories in in the future, and I have a cryptography problem.
Specifically, there are no computers in this setting (ruling out things like supercomplicated RSA). And all the adults share bodies (generally, one body has two people in it). One's asleep (insensate, not forming memories about what's going on, and not in any sort of control over the body) and one's awake (in control, forming memories, experiencing what's going on) at any given time. There is not necessarily any visible sign when one party falls asleep and the other wakes, although there are fakeable correlates (basically, acting like you just appeared wherever you are). It does not follow a rigid schedule, although there is an approximate maximum period of time someone can stay awake for, and there are (also fakeable) symptoms of tiredness. Persons who share bodies still have distinct legal and social existences, so if one commits a crime, the other is entitled to walk free while awake as long as they come back before sleeping - but how do they prove it?
There are likely to be three levels of security, with one being "asking", the second being a sort of "oh yeah? prove it" ("tell me something only my wife would know / exhibit a skill your cohabitor hasn't mastered / etc."), and the third being... something. Because you don't want to turn loose someone who could be a dangerous criminal just because they were collaborating with a third party to learn information, or broke into the National Database of Secret Person-Distinguishing Passphrases, or didn't disclose all their skills to some central skill registry - but you don't want to lock up innocent people who made bad choices about who to move in with when they were eight, either.
Is there something that doesn't require computers, or human-atypical levels of memorization/computation, or rely critically on a potentially-break-into-able National Database of Secret Person-Distinguishing Passphrases, which will let someone have a permanently private bit of information they can use to verify to arbitrary others who they are? (There is magic, but it is not math-doing magic.)
Can they use quill and parchment?
If so, the usual public key algorithms could be encoded into something like a tax form, i.e. something like "...51. Subtract the number on line 50 from the number on line 49 and write the result in here:__ ...500. The warden should also have calculated the number on line 499. Burn this parchment."
Of course there would have to be lots of error checks. ("If line 60 doesn't match line 50 you screwed up. If so, redo everything from line 50 on.")
To make it practical, each warden/non-prisoner-pair would do a Diffie-Hellman exchange only once. That part would take a day or two. After establishing a shared secret the daily authentication would be done by a hash, which probably could be done in half an hour or less.
Of course most people would have no clue why those forms work, they would just blindly follow the instructions, which for each line would be doable with primary school math.
The wardens would probably spend large parts of their shifts precalculating hashes for prisoners still asleep, so that several prisoners could do their get-out work at the same time. Or maybe they would do the crypto only once a month or so and normally just tell the non-prisoners their passwords for the next day every time they come in.
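The scheme described above, one Diffie-Hellman exchange to establish a shared secret, then a daily hash-based token, can be sketched like this (the prime, generator, secrets, and token format are all made-up toy values; a real pen-and-paper version would need far larger numbers):

```python
import hashlib

# Toy Diffie-Hellman parameters (illustrative assumptions only;
# these are far too small to be secure).
p, g = 2147483647, 5          # public prime and generator

a = 123456                    # warden's private number
b = 654321                    # non-prisoner's private number

A = pow(g, a, p)              # warden works through the "form" and sends A
B = pow(g, b, p)              # non-prisoner does the same and sends B

shared_warden = pow(B, a, p)  # each side combines the other's number
shared_person = pow(A, b, p)  # with their own secret...
assert shared_warden == shared_person  # ...and gets the same shared secret.

# Daily authentication: hash the shared secret together with the date,
# so each day's pass-phrase is fresh and can be precalculated.
def daily_token(secret: int, date: str) -> str:
    return hashlib.sha256(f"{secret}:{date}".encode()).hexdigest()[:8]
```

The point of the one-time exchange is exactly what the comment says: the slow exponentiation "form" is done once, while the daily check only needs the (much cheaper) hash step.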
This weekend I staged a Les Mis "One Day More" flashmob in our train station with about 20 people. (I'm Javert).
I realized I had been wishing that a flashmob would happen, and finally wised up and realized I should just do it myself. I think most of us underestimate how willing people are to go along with wacky plans, as long as they don't have to have any logistical responsibilities. It also helped to set point people for a couple of different social circles to recruit (alums, work colleagues, an improv group, church friends). This has definitely lowered my reluctance to stage other public spectacles.
Ha! That's a delightful little project, no?