It always amuses me to watch the optimism of gun fanatics who believe that, with their shotguns, they're prepared to resist a state that has drones and nukes at its disposal.
The Syrians and Libyans seem to have done OK for themselves. Iraq and likely Afghanistan were technically wins for our nuclear and drone-armed state, but both were only marginal victories, Iraq was a fairly near run thing, and in neither case were significant defections from the US military a plausible scenario.
SlateStarCodex.
An interesting natural experiment occurred in the Pacific theater of WWII. American and Canadian forces attacked an island that had been secretly abandoned by the Japanese weeks prior. Their unopposed landing resulted in dozens of casualties from friendly fire, and dozens more men lost in the jungle. Presumably a similar rate of attrition occurred in every other landing, on top of the casualties inflicted by the deliberate efforts of enemy troops.
Thanks for your thoughtful response. I'm glad that I've been more comprehensible this time. Let me see if I can address the problems you raise:
1) Point taken that human freedom is important. In the background of my argument is a theory that human freedom has to do with the endogeneity of our own computational process. So, my intuitions about the role of efficiency and freedom are different from yours. One way of describing what I'm doing is trying to come up with a function that a supercontroller would use if it were to try to maximize human freedom. The idea is that choices humans make are some of the most computationally complex things they do, and so the representations created by choices are deeper than others. I realize now I haven't said any of that explicitly let alone argued for it. Perhaps that's something I should try to bring up in another post.
2) I also disagree with the morality of this outcome. But I suppose that would be taken as beside the point. Let me see if I understand the argument correctly: if the most ethical outcome is in fact something very simple or low-depth, then this supercontroller wouldn't be able to hit that mark? I think this is a problem whenever morality (CEV, say) is a process that halts.
I wonder if there is a way to modify what I've proposed to select for moral processes as opposed to other generic computational processes.
3) A couple responses:
Oh, if you can just program in "keep humanity alive" then that's pretty simple and maybe this whole derivation is unnecessary. But I'm concerned about the feasibility of formally specifying what is essential about humanity. VAuroch has commented that he thinks that coming up with the specification is the hard part. I'm trying to defer the problem to a simpler one of just describing everything we can think of that might be relevant. So, it's meant to be an improvement over programming in "keep humanity alive" in terms of its feasibility, since it doesn't require solving perhaps impossible problems of understanding human essence.
Is it the consensus of this community that finding an objective function in E is an easy problem? I got the sense from Bostrom's book talk that existential catastrophe was on the table as a real possibility.
I encourage you to read the original Bennett paper if this interests you. I think your intuitions are on point and appreciate your feedback.
Thanks for your response!
1) Hmmm. OK, this is pretty counter-intuitive to me.
2) I'm not totally sure what you mean here. But, to give a concrete example, suppose that the most moral thing to do would be to tile the universe with very happy kittens (or something). CEV, as I understand, would create as many of these as possible, with its finite resources; whereas g/g* would try to create much more complicated structures than kittens.
3) Sorry, I don't think I was very clear. To clarify: once you've specified h, a superset of human essence, why would you apply the particular functions g/g* to h? Why not just directly program in 'do not let h cease to exist'? g/g* do get around the problem of specifying 'cease to exist', but this seems pretty insignificant compared to the difficulty of specifying h. And unlike with programming a supercontroller to preserve an entire superset of human essence, g/g* might wind up with the supercontroller focused on some parts of h that are not part of the human essence, so it doesn't completely solve the definition of 'cease to exist'.
(You said above that h is an improvement because it is a superset of human essence. But we can equally program a supercontroller not to let a superset of human essence cease to exist, once we've specified said superset.)
Note: I may have badly misunderstood this, as I am not familiar with the notion of logical depth. Sorry if I have!
I found this post's arguments to be much more comprehensible than your previous ones; thanks so much for taking the time to rewrite them. With that said, I see three problems:
1) '-D(u/h)' optimizes for human understanding of (or, more precisely, human information about) the universe, such that given humans you can efficiently get out a description of the rest of the universe. This also ensures that whatever h is defined as continues to exist. But many (indeed, almost all) human values aren't about entanglement with the universe. Because h isn't defined explicitly, it's tough for me to state a concrete scenario where this goes wrong. (This isn't a criticism of the definition of h; I agree with your decision not to try to tightly specify it.) But, e.g., it's easy to imagine that humans having any degree of freedom would be inefficient, so people would end up drug-addled, in pods, with video and audio playing continuously to put lots of carefully selected information into the humans. This strikes me as a poor outcome.
2) Some people (e.g. David Pearce (?) or MTGandP) argue that the best possible outcome is essentially tiled: that rather than have large and complicated beings, human-scale or larger, it would be better to have huge numbers of micro-scale happy beings. I disagree, but I'm not absolutely certain, and I don't think we can rule out this scenario without explicitly or implicitly engaging with it.
3) As I understand it, in 3.1 you state that you aren't claiming that g is an optimal objective function, just that it leaves humans alive. But in this case 'h', which was never explicitly defined, is doing almost all of the work: g is guaranteed to preserve 'h', which you verbally identified with the physical state of humanity. But because you haven't offered a completely precise definition of humanity here, what the function as described above would preserve is 'a representation of the physical state of humanity including its biological makeup--DNA and neural architecture--as well as its cultural and technological accomplishments'. This doesn't strike me as a significant improvement over simply directly programming in that humans should survive, for whatever definition of humans/humanity is selected; while it leaves the supercontroller with different incentives, in neither scenario are those incentives aligned with human morality.
(My intuition regarding g* is even less reliable than my intuition regarding g; but I think all 3 points above still apply.)
Can you direct me to some of those papers? Where should I start?
They are posted here.
IMHO, good starting points are 'Definability of Truth in Probabilistic Logic' and 'Robust Cooperation in the Prisoner's Dilemma'.
I like your posts and comments a lot more when you refrain from the unfortunate rhetoric.
Our estimate of Putin's estimate of Obama's view on the U.S. empire is critical to calibrating our beliefs. Lots of left-wing intellectuals really, really do think that the U.S. empire is an evil, imperialist force (do you doubt that they believe this?). To calibrate our beliefs we need to figure out with what probability Putin thinks Obama holds this view.
I, and presumably shminux as well, thought that you were claiming that there's a good chance Obama actually does want to see the American 'empire' collapse, not that Putin thought he would.
Most people say that 90% of startups fail, but they don't mention how many startups entrepreneurs attempt on average. If: 0. most founders attempt only one startup (and first-time startups have a 90% chance of failing), but 1. founders who attempt multiple startups have a better chance of eventually succeeding, then the inside view (you should be able to succeed at a startup if you do a lot of them in your 20s) and the outside view (90% of startups fail) can actually be compatible.
You could make your model a bit more precise by noting:
P(all your startups fail, over a lifetime) = P(1st startup fails) * P(2nd startup fails) * ...
If: 0. each startup is independent of the others (a pretty big assumption; I would expect people to get better over time, and you can gauge how much), and 1. each startup takes 2 years (you can obviously change this number around).
Then, if somebody just did 6 startups consecutively, their probability of success would be 1 - .9^6 = .47. There's certainly room for a healthy amount of optimism in this model. If you model it more like 1 - (.9 * .7 * .6 * .6) = .77, then those seem like pretty good odds to do lots of startups.
I'd like to see people play around with numbers like these. It's an outside view model. You can use your inside view to predict more specific things (how long a start-up takes, how much you learn from your failures, etc).
This is better than having a model like:
P(devise an idea for a product that creates demand) = .90
P(build it) = .90
P(market and sell it) = .90
P(things run smoothly (some might call this luck)) = ?
P(success) = .9 * .9 * .9 * ? = .729 * ?, which is higher than what's actually true. (Though you could contend that plenty of people can't devise an idea for a product that creates demand, and that if you can, then you have a much better chance than the average startup.)
Paul Graham said something like "startup founders are / (have to be) optimists". I'm wondering how accurate people think P(2nd failure | 1st failure) = .7 is. It seems pretty skeptical (closer to the outside view's .9 chance of failure than to the purely inside-view model's .1), but still probably optimistic compared to YC's ~80% failure rate.
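The model above is easy to play with in a few lines of code. This is a minimal sketch, assuming independence between attempts; the failure probabilities are the illustrative numbers from the comment, not real base rates, and `p_at_least_one_success` is just a hypothetical helper name:

```python
def p_at_least_one_success(failure_probs):
    """Probability that at least one startup succeeds, assuming each
    attempt's outcome is independent of the others."""
    p_all_fail = 1.0
    for p in failure_probs:
        p_all_fail *= p
    return 1.0 - p_all_fail

# Identical, independent attempts: 6 startups, each with a 90% failure rate.
print(round(p_at_least_one_success([0.9] * 6), 2))            # 0.47

# Founders improving with experience: failure odds drop on later attempts.
print(round(p_at_least_one_success([0.9, 0.7, 0.6, 0.6]), 2))  # 0.77
```

Swapping in your own inside-view estimates for the list of failure probabilities (e.g. how fast founders learn from failures) makes the tension between the inside and outside views concrete.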
You're taking a very inside-view approach to analyzing something that you have no direct experience with. (Assuming you don't.) This isn't a winning approach. Outside view predicts that 90% of startups will fail.
Startups' high reward is associated with high risk. But most people are risk averse, and insurance schemes create moral hazard.
They are organized paramilitary groups who buy military-grade weapons and issue them to their soldiers, not random gun toters who fight with personally owned handguns and shotguns.
It seems to me that the main issues in setting up a militia are organization, recruitment, and funding. Once you sort those out, acquiring weapons isn't all that difficult.
Maybe, but this is the exact opposite of polymath's claim: not that fighting a modern state is so difficult as to be impossible, but that fighting one is sufficiently simple that starting out without any weapons is not a significant handicap.
(The proposed causal impact of gun ownership on rebellion is: more guns -> more willingness to actually fight against a dictator (acquiring a weapon is a step that will stop many people who would otherwise rebel from doing so) -> more likelihood that government allies defect -> more likelihood that the government falls. I'm not sure if I endorse this, but polymath's claim is definitely wrong.)
(As an aside, this is historically inaccurate: almost all of the weapons in Syria and Libya came either from defections from their official militaries (especially in Libya), or from foreign donors, not from private purchases. However, private purchases were important in Mexico and Ireland.)