All of Randaly's Comments + Replies

Randaly

You seem to be assuming that there's not significant overhead or delays from negotiating leases, entering bankruptcy, or dealing with specialized hardware, which is very plausibly false.

If nobody is buying new datacenter GPUs, that will cut GPU progress to ~zero or negative (because production is halted and implicit knowledge is lost). (It will also probably damage broader semiconductor progress.)

This proportionally reduces cost of inference (and also of training).

This reduces the cost to rent a GPU-hour, but it doesn't reduce the cost to the owner. (Open... (read more)

Vladimir_Nesov
The point of the first two paragraphs was to establish relevance and an estimate for the lowest market price of compute in case of a significant AI slowdown: a level at which some datacenters will still prefer to sell GPU-time rather than stay idle (some owners of datacenters will manage to avoid bankruptcy and will be selling GPU-time even with no hope of recouping capex, as long as it remains at an opex profit, assuming nobody will be willing to buy out their second-hand hardware either). So it's not directly about OpenAI's datacenter situation; rather, it's a context in which OpenAI might find itself, which is with access to a lot of cheap compute from others.

I'm using "cost of inference" in a narrow sense of the cost of running a model at a market price of the necessary compute, with no implications about costs of unfortunate steps taken in pursuit of securing inference capacity, such as buying too much hardware directly. In case of an AI slowdown, I'm assuming that inference compute will remain abundant, so securing the necessary capacity won't be difficult.

I'm guessing one reason Stargate is an entity separate from OpenAI is to have an option to walk away from it if future finances of OpenAI can't sustain the hardware Stargate is building, in which case OpenAI might need or want to find compute elsewhere, hence the relevance of market prices of compute. Right now they are in for $18bn with Stargate specifically, out of the $30-40bn they've raised (depending on success of converting into a for-profit).
Randaly

Thanks for explaining. I now agree that the current cost of inference isn't a very good anchor for future costs in slowdown timelines.

I'm uncertain, but I still think OpenAI is likely to go bankrupt in slowdown timelines. Here are some related thoughts:

  1. OpenAI probably won't pivot to the slowdown in time.
    1. They'd have < 3 years to do so before running out of money.
    2. Budgets are set in advance. So they'd have even less time.
    3. All of the improvements you list cost time and money. So they'd need to continue spending on R&D, before that R&D has improved their
... (read more)
Vladimir_Nesov
Control over many datacenters is useful for coordinating a large training run, but otherwise it doesn't mean you have to find a use for all of that compute all the time, since you could lease/sublease some for use by others (which at the level of datacenter buildings is probably not overly difficult technically; you don't need to suddenly become a cloud provider yourself). So the question is more about the global AI compute buildout not finding enough demand to pay for itself, rather than about what happens with the companies that build the datacenters or create the models, and whether these are the same companies.

It's not useful to let datacenters stay idle, even if that perfectly extends the hardware's lifespan (which seems to be several years), since progress in hardware means the time of current GPUs will be much less valuable in several years, plausibly 5x-10x less valuable. And TCO over a datacenter's lifetime is only 10-20% higher than the initial capex. So in a slowdown timeline, prices of GPU-time can drop all the way to maybe 20-30% of what they would need to be to pay for the initial capex before the datacenters start going idle. This proportionally reduces cost of inference (and also of training).

The Abilene site in 2026 only costs $22-35bn, and they've raised a similar amount for it recently, so the $100bn figure remains about as nebulous as the $500bn figure. For inference (where exclusive use of a giant training system in a single location is not necessary) they might keep using Azure, so there is probably no pressing need to build even more for now. Though I think there's unlikely to be an AI slowdown until at least late 2026, and they'll need to plan to build more in 2027-2028, raising money for it in 2026, so it's likely they'll get to try to secure those $100bn even in the timeline where there'll be an AI slowdown soon after.
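The capex/opex arithmetic in the comment above can be sketched numerically. This is a minimal illustration using the comment's ballpark figures (TCO ~10-20% above capex), not real datacenter data:

```python
# Price-floor sketch: an owner with sunk capex keeps renting out GPU-time
# as long as the price covers operating costs, so the bare floor on prices
# is the opex share of total cost of ownership (TCO).
# Figures below are the comment's rough estimates, not real data.

capex = 1.0  # normalized initial buildout cost
floors = {}
for opex_premium in (0.10, 0.20):      # lifetime TCO is ~10-20% above capex
    tco = capex * (1 + opex_premium)
    opex = tco - capex
    floors[opex_premium] = opex / tco  # fraction of full-cost-recovering price
    print(f"TCO premium {opex_premium:.0%}: bare price floor ~{floors[opex_premium]:.0%} of breakeven")
```

This bare opex floor comes out around 9-17% of the full-cost-recovering price; the comment's 20-30% figure sits somewhat above that bound, leaving some margin over pure operating costs.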
Randaly

I agree that finances are important to consider. I've written my thoughts on them here; I disagree with you in a few places.

(1) Given Altman's successful ouster of the OpenAI board, his investors currently don't have much drive/desire/will to force him to stop racing. They don't have much time to do so on the current pace of increasing spending before OpenAI runs out of money.

(2) It's not clear what would boost revenue that they're not already doing; the main way to improve profits would just be to slash R&D spending. Much of R&D spending is spent ... (read more)

Randaly

What prompts did you use? Can you share the chat? I see Sonnet 3.7 denying this knowledge when I try.

niplav
Sorry, can't share the exact chat, that'd depseudonymize me. The prompts were: Which resulted in the model outputting the canary string in its message.
Randaly

I want to clarify that I'm criticizing "AI 2027"'s projection of R&D spending, i.e. this table. If companies cut R&D spending, that falsifies the "AI 2027" forecast.

In particular, the comment I'm replying to proposed that while the current money would run out in ~2027, companies could raise more to continue expanding R&D spending. Raising money for 2028 R&D would need to occur in 2027; and it would need to occur on the basis of financial statements of at least a quarter before the raise. So in this scenario, they need to slash R&D spend... (read more)

Vladimir_Nesov
I see what you mean (I did mostly change the topic to the slowdown hypothetical). There is another strange thing about AI companies: I think giving the ~50% cost of inference too much precision in the foreseeable future is wrong, as it's highly uncertain and malleable in a way that's hard for even the company itself to anticipate.

A ~2x difference in inference cost (or size of a model) can be merely hard to notice when nothing substantial changes in the training recipe (and training cost), and better post-training (which is relatively cheap) can get that kind of advantage or more, but not reliably. Pretraining knowledge distillation might get another ~1.5x at the cost of training a larger teacher model (plausibly GPT-4.1 has this because of the base model for GPT-4.5, but GPT-4o doesn't). And there are all the other compute multipliers that become less fake if the scale stops advancing.

The company itself won't be able to plan with any degree of certainty how good its near-future models will be relative to their cost, or how much its competitors will be able to cut prices. So the current state of cost of inference doesn't seem like a good anchor for where it might settle in the slowdown timelines.
Randaly

My intuitions are more continuous here. If AGI is close in 2027 I think that will mean increased revenue and continued investment

Gotcha, I disagree. Lemme zoom on this part of my reasoning, to explain why I think profitability matters (and growth matters less):

(1) Investors always only terminally value profit; they never terminally value growth. Most of the economy doesn't focus much on growth compared to profitability, even instrumentally. However, one group of investors, VCs, do: software companies generally have high fixed costs and low marginal costs,... (read more)

Vladimir_Nesov
They are losing money only if you include all the R&D (where the unusual thing is very expensive training compute for experiments), which is only important while capabilities keep improving. If/when capabilities stop improving quickly, somewhat cutting research spending won't affect their standing in the market that much. And also, after revenue grows some more, essential research (in the slow capability growth mode) will consume a smaller fraction. So it doesn't seem like they are centrally "losing money"; the plausible scenarios still end in profitability (where they don't end the world) if they don't lose the market for normal reasons like failing on products or company culture.

This does seem plausible in some no-slowdown worlds (where they ~can't reduce R&D spending in order to start turning a profit), if in fact more investors don't turn up there. On the other hand, if every AI company is forced to reduce R&D spending because they can't raise money to cover it, then they won't be outcompeted by a company that keeps R&D spending flowing, because such a competitor won't exist.
Randaly

Thanks for the response!

So maybe I should just ask whether you are conditioning on the capabilities progression or not with this disagreement? Do you think $140b in 2027 is implausible even if you condition on the AI 2027 capability progression?

I am conditioning on the capabilities progression.

Based on your later comments, I think you are expecting a much faster/stronger/more direct translation of capabilities into revenue than I am- such that conditioning on faster progress makes more of a difference.

The exact breakdown FutureSearch use seems relatively u

... (read more)
Randaly

I'd be curious to hear more about what made you perceive our scenario as confident. We included caveats signaling uncertainty in a bunch of places, for example in "Why is it valuable?" and several expandables and footnotes. Interestingly, this popular YouTuber made a quip that it seemed like we were adding tons of caveats everywhere,

I was imprecise (ha ha) with my terminology here; I should have only talked about a precise forecast rather than a confident one. I meant solely the attempt to highlight a single story about a single year. My bad. Edited the post.

Randaly

Typo: The description for Table 2 states that "In total, 148 of our 169 tasks have human baselines, but we rely on researcher estimates for 21 tasks in HCAST." This is an incorrect sum; the right figure is 149 out of 170 tasks, per the table.

Randaly

Footnote 18 of the timelines forecast appears to be missing its explanation.

elifland
Thanks, this should be fixed now.
Randaly

Those were in fact some of the cases I had in mind, yes, thank you - I read the news too. And what one learns from reading about them is how those are exceptional cases, newsworthy precisely because they reached any verdict rather than settling, driven by external politics and often third-party funding, and highly unusual until recently post-2016/Trump. It is certainly the case that sometimes villains like Alex Jones get smacked down properly by libel lawsuits; but note how wildly incomparable these cases are to the blog post that Spartz is threatening to

... (read more)
habryka
I haven't read this whole comment, though expect I will. Just making a quick clarification: I don't think that's an accurate summary of the linked comment (though it's also not like totally unrelated). Here it is in full: I agree that this comment confirms that Spencer sent us evidence that related to some claims in the post. It does not speak to my epistemic state with regards to the relevance of that evidence. (To take an object-level stance on the issue, though I was more responding to the fact that I expect people will interpret that sentence as me saying something I am not saying: I do think that Spencer's messages were evidence, though really not very much evidence, and I would object to my epistemic state in this context being summarized as Spencer's screenshots falsifying anything about Ben's original post, though I agree that they are bayesian evidence against the hypothesis. I do think for the argument at hand to have force it needs to meet a higher standard than "some bayesian evidence", and I don't currently think it meets that threshold by my own lights.)
Randaly

This is a combative comment which fails to back up its claims.

how surely only noble and good people ever sue over libel

if you really believe lawsuits are so awesome and wonderful

He did not say this. This is not reasonable for you to write.

you can count on one hand the sort of libel lawsuit which follows this beautiful fantasy.

This is not true. This is obviously not true. A successful and important libel case (against Giuliani) was literally headline news this week. You can exceed five such cases just looking at similar cases: Dominion v Fox; Sma... (read more)

gwern

This is not true. This is obviously not true. A successful and important libel case (against Giuliani) was literally headline news this week. You can exceed five such cases just looking at similar cases: Dominion v Fox; Smartmatic v Fox; Coomer v Newsmax; Khalil v Fox; Andrews v D’Souza; and Weisenbach v Project Veritas. This is extremely unreasonable for you to say.

Those were in fact some of the cases I had in mind, yes, thank you - I read the news too. And what one learns from reading about them is how those are exceptional cases, newsworthy precisely... (read more)

Randaly

IMO, U.S. and UK libel suits should both be very strongly discouraged, since I know of dozens of cases where organizations and individuals have successfully used them to prevent highly important information from being propagated, and I think approximately no case where they did something good (instead organizations that frequently have to deal with libel suits mostly just leverage loopholes in libel law that give them approximate immunity, even when making very strong and false accusations, usually with the clarity of the arguments and the transparenc

... (read more)
Randaly

I'm not disputing that specific people at Skunk Works believed that their tech was disliked for being good; but that's a totally insane belief that you should reject immediately, it's obviously self-serving, none of those people present any evidence for it, and the DoD did try to acquire similar technology in all these cases.

Again, this is a direct quote on procurement incentives from a guy who was involved on both the buy and sell side of the SR-71 back in the day.

This is a quote from, per you, somebody from the CIA. The CIA and Air Force are different orga... (read more)

Randaly

Your discussion of Skunk Works is significantly wrong throughout. (I am not familiar with the other examples.)

For example, in 1943 the Skunk Works both designed and built America’s first fighter jet, the P80 Shooting Star, in just 5 months. Chief engineer Kelly Johnson worked with a scrappy team of, at its peak, 23 designers and 105 fabricators. Nonetheless, the resulting plane ended up being operationally used by the air force for 40 years.

The P80 was introduced in 1945; the US almost immediately decided to replace it with the F-86, introduced in 1949. Th... (read more)

Well, I do think your comment quite overstates its case, but I've made some edits that should avoid the interpretations mentioned, and I do think those make the post better. So thanks for that! :)  


On the P80: 

It was built in 1943 and introduced in 1945. When I wrote "used operationally for 40 years" I didn't have in mind that they sent it up to join forces with F-16s in the 1980s. Rather, I wanted to convey that "in spite of being built ridiculously quickly, it wasn't a piece of junk that got scrapped immediately and never ended up serving a real f... (read more)

Richard Horvath
Thank you, I wanted to say the same. Furthermore: the SR-71 was not really flying above enemy territory; the high flight altitude made it possible to peek over the curvature of the earth. It did not fly over the USSR like the U-2 did before the advent of anti-air missiles, but generally over allied/international territory, peeking into the forbidden areas. Interceptors were scrambled against it numerous times but were usually unable to achieve a position from which they could have attacked it successfully. I am not sure where the "fired at 4000 times" myth comes from, but it is nonsense. The S-200 (SA-5) systems introduced in the late 60s should have been able to shoot them down from relatively large distances, and it is recorded that Swedish JA 37 jets were able to intercept and get a lock on it.
Ben
I did think it was odd that none of the 4 listed crew members was a gunner, yet it supposedly had the firepower to wipe out a Soviet force.

No.

I'm pretty negative on how you fail to discuss any specific claim or link to any specific evidence, but you spend your longest paragraph speculating about the supposed bias of unnamed people.

You haven't really written enough to be clear, but I suspect that you have confused concentration camps with death or extermination camps? Regardless, the recent UN report did pretty specifically support claims of concentration camps; see points 37-57.

M. Y. Zuo
The biases I'm referring to are not 'supposed'; they are openly advertised by the same proponents. It's especially obvious in the case of certain religious fundamentalist groups.
Bob Hope

I also found that, controlling for rents, the partisanship of a state did not predict homelessness (using the Partisan Voting Index)

 

This is not a useful way of looking at this; homelessness would be almost entirely controlled by city, not state, policies. State partisanship in large part measures not how blue or red the states' cities are, but rather how urban or rural the state as a whole is.

This, and the Bahrain/UAE cases, seem more likely to be driven by concerns about whether/how well the Chinese vaccines work?

On the other hand, look at the US wars in Vietnam, Iraq or Afghanistan. The outcomes of these wars were determined much more by political forces (in both of the relevant countries) than by overwhelming force.

Insurgencies aren't a good comparison for conventional wars like the Nagorno-Karabakh war.

The overall thrust here seems like an application of Clausewitz's maxim that "war is the extension of politics by other means". However, the specific politics suggested seem very unrealistic.

  • You suggest ways to impact Azerbaijan's internal politics by targeting harm to specific groups. I see no reason to believe that Armenia had any substantial ability to deal much harm to Azerbaijan at all, so this isn't relevant. In general, it would be much harder for Armenia to advance to deal significant damage to Azerbaijan's homeland than it would be to defend.
  • Assas
... (read more)
Jay
So how should Armenia have retained Nagorno-Karabakh? Use the Iraqi playbook.  In the kinetic phase of the war, Armenia is probably hopeless.  So make only a token show of resistance.   Before Azerbaijan takes over NK, scatter weapons caches to your co-ethnics.  Train NK locals as insurgents.  Make sure your border is permeable to insurgents; give them a place to rest, recover, and prepare. Don't let Azerbaijan consolidate its control.  Use ambushes, snipers, and IEDs to discourage Azerbaijani troops from leaving their compounds.  When the invaders make an enemy (and they will, lots of them), give that enemy a weapon.  When the invaders make a friend, give that friend and his entire family a hideous death.  Let people know that collaborators get closed-casket funerals (and then bomb the funerals). Provoke the invaders into heavy-handed response, then put videos of the massacres on YouTube (CNN, if you can).  Make their allies pay in lives and embarrassment.  Portray your freedom fighters as heroes standing tall against brutal oppression. It's a horrible project.  War usually is, and insurgency is worse than most kinds of war.  But it could be done. Eventually Azerbaijan would probably leave, simply because nobody sane wants to stay in the hellhole you've created.  Victory!
johnswentworth
I generally agree with most of this, in the context of that particular conflict. It was a very smart war to invest in from the perspective of Azerbaijani leadership; Armenia really didn't have a realistic approach to defend. The one part I object to is "Pretty much all of these plans are underspecified outcomes, not realistic plans." The title of the post is "Grand Strategy"; the whole point is to talk about general approaches, not specifics. Realistic plans would be the domain of strategy.

The source article is here. The numbers are not how much of the total the subgroups make up, they are how quickly each subgroup is growing. The text continues:

The number critically ill with covid-19 in that age group grew by about 30% in the week before January 2nd, and also in the following week—but by just 7% in the week after that (see chart 2). By contrast, among those aged between 40 and 55 (who were vaccinated at a much lower rate at the time) the weekly change in the number of critically ill remained constant, with a 20-30% increase in each of those three weeks.

Randaly

I have no idea why Dr. Moncef Slaoui, the head of Operation Warp Speed, was asked to resign and transition things over to someone else. Seems like if someone does their one job this effectively you’d want to keep them around.

While it's possible that Moncef Slaoui's resignation was caused by the Biden transition's request, he'd been publicly clear for months that he would resign in late 2020 or early 2021, as soon as 2 vaccines were approved. Here's a news article of him saying this from November.

Plausibly the Biden transition just wanted him to resign at a... (read more)

Blade Runner 2045 movie

2049, not 2045.

Trump continues to promise a vaccine by late October. The head of the CDC says that’s not going to happen. Trump says the head of the CDC is ‘confused.’ The CDC walks the comments back. On net, this showed some attempt by the CDC to not kowtow to Trump, but then a kowtow, so on net seems like a wash.

This is missing the last step, which is that the CDC then walked back its walk back (?!?). See here:

The CDC scrambled to explain; by about 6 p.m., the agency was claiming Redfield had misunderstood the original question and

... (read more)

I don't really have a great answer to that, except that empirically in this specific case, Spain was indeed able to extract very large amounts of resources from America within a single generation. (The Spanish government directly spent very little on America; the flow of money was overwhelmingly towards Europe, to the point where it caused notable inflation in Spain and in Europe as a whole.) I don't disagree that running a state is expensive, but I don't see why the expense would necessarily be higher than the extracted resources?

Pongo
OK, so maybe the idea is "Conquered territory has reified net production across however long a period -> take all the net production and spend it on ships / horses / mercenaries"? I expect that the administrative parts of states expand to be about as expensive as the resources they can get under their direct control. (Perhaps this is the dumb part, and ancient states regularly stored >90% of tax revenue as treasure?). Then, when you are making the state more expensive to run, you have less of a surplus. You also can't really make the state do something different than it was before if you have low fidelity control. The state doing what it was doing before probably wasn't helping you conquer more territory.

(1) Local support doesn't end after the first stages of the war, or after the war ends. I mentioned having favored local elites within one society/ethnicity continue to do most of the direct work in (2); colonizers also set up some groups as favored identities who did much of the work of local governance. For example, after the Spanish conquest, the Tlaxcala had a favored status and better treatment.

(2) Not sure why you'd expect low fidelity control to imply that it ends up as a wash in terms of extracting resources, can you clarify?

Pongo
(2) It seems expensive to run a state (maintain power structures, keep institutions intact for future benefit, keep everything running well enough that the stuff that depends on other things running well keeps running). Increasing the cost by a large factor seems like it would reduce the net resources extracted. It seems even more expensive if the native population will continue intermittently fighting you for 400 years (viz your rebellion fact)
Randaly

I feel like there's two points causing the confusion:

(1) The assumption that natives are an undifferentiated mass. There were a variety of mutually hostile indigenous peoples, who themselves sought out allies against each other; and, in particular, who sought to balance the strongest local powers. Seven Myths of the Spanish Conquest, page 48:

The search for native allies was one of the standard procedures or routines of Spanish conquest activity throughout the Americas. Pedro de Alvarado entered highland Guatemala in 1524 not only with thousands of Nahua all

... (read more)
Pongo
Thanks for sharing this data. The lesson I draw from (1) is that in fact I should not think that conquering some areas help you conquer others. Rather, when entering some area, it is possible to draw local support in the first stages of a war. This updates me back towards thinking it's costly to control a newly conquered area. The lesson I draw from (2) is that you can continue to make use of some of the state capacity of the native power structures. But it seems like you have fairly low fidelity control (at least in the language barrier case, and probably in all cases, because you lack a lot of connections to informal power structures). This seems like mostly a wash? Are these the same as the lessons you draw from this data?
Daniel Kokotajlo
Well said. The implications for AI takeover are interesting to think about.

[Like, it's not for nothing that the Aztecs told the Conquistadors that they thought the latter group were gods!]

It is unlikely that the Aztecs actually believed that the Conquistadors were gods.  (No primary sources state this; the original source for the gods claim was Francisco Lopez de Gomara, writing based on interviews with conquistadors who returned to Spain decades later; his writing contains many other known inaccuracies.)

Claims that are related to, but distinct from, the Aztecs believing that the Conquistadors were gods:

  • The Aztecs, and other
... (read more)
Randaly

On Diamond and writing, see previous discussion here. It is highly unlikely that writing was critical:

  • Pizarro was illiterate
  • The Aztecs had writing, yet didn't beat the Spaniards (or avoid having their leader kidnapped)
  • Cortes' conquests were only a decade or so before, a short enough period that writing wasn't necessary to communicate the lessons. Pizarro was physically present in the Americas at the time.
  • There's not actually any clear pathway from "have writing" -> "Atahualpa refuses to leave his army to meet with Pizarro". Writing did not make all
... (read more)

The specific evidence you’ve cited is weak. (1) You write that “The argument that we should be listening to experts and not random people would make a lot of sense if the "armchair" folks didn't keep being right.” It is extremely easy to be right on a binary question (react more vs less). That many non-experts were right is therefore more-or-less meaningless. (I can also cite many, many examples of non-experts being wrong. I think what we want is the fraction of experts vs non-experts who were right, but that seems both vague and unobtainable.)

(Note that t

... (read more)
jefftk

The posts I'm referring to made claims that were much stronger than "we should be reacting more". If you look through https://medium.com/@tomaspueyo/coronavirus-act-today-or-people-will-die-f4d3d9cd99ca and https://medium.com/@joschabach/flattening-the-curve-is-a-deadly-delusion-eea324fe9727, and the follow-up https://medium.com/@tomaspueyo/coronavirus-the-hammer-and-the-dance-be9337092b56 they're making detailed claims about how the world is and how it will soon be.

That quote seems to provide no evidence that the 'literate tradition' mattered. Cortes' conquest was only 14 years before; Pizarro had arrived in the New World 10 years before that; Cortes' conquest involved many people and was a big/important deal; even if the Spanish had no writing at all, Pizarro would likely have known the general outline of Cortes' actions.

It's strictly speaking impossible to rule out Pizarro indirectly being influenced by writing; but I don't think it would be possible for stronger evidence against the importance of writing in this specific case to exist.

Daniel Kokotajlo
Agreed. I think literacy or "literate tradition" had nothing to do with it, but learning from Cortes' experience (and earlier Spanish experiences in the canary islands, etc.) was crucial.
The Portuguese presumably were reasonably educated

Pizarro was illiterate.

Matthew Barnett
I don't think that specific fact really disputes that they "had access to a deep historical archive." From Jared Diamond's Guns Germs and Steel,

That is not true; the CSA had worse railroads, but they were still important throughout the war. Some of the most important Union offensives late in the war (the Atlanta campaign and the siege of Petersburg) were intended to sever the South's railroads; and the war ended almost immediately after the Union cut off the railroad routes to the CSA capital of Richmond at the Battle of Five Forks. Both sides were heavily reliant on railroads for supply, and also used railroads to move troops (for the CSA, e.g. moving Longstreet's corps to fight at Chickamauga).

ESRogs
Nitpick -- for replies like this, it's helpful if you say which part of the parent comment you're objecting to. Obviously the reader can figure it out from the rest of your comment, but (especially since I didn't immediately recognize CSA as referring to the Confederate States of America) I wasn't sure what your first sentence was saying. A quote of the offending sentence from the parent comment would have been helpful.

Homepage seems to lack links to the last two books.

Said Achmiz
Yep, books V and VI aren't up yet, as the post says :) Soon!

Now, imagine you’re a diplomat, at a diplomatic conference. You see a group of diplomats, including someone representing one of your allies, in an intense conversation. They’re asking the allied diplomat questions, and your ally obviously has to think hard to answer them. Your intuition is going to be that something bad is happening here, and you want to derail it at all costs.

Source? I feel very, very confident that this is false. You would only want to break things up if you felt very confident that your ally would screw up answering the questions; otherwise, having lots of people paying careful attention to your side's proposals would be a very good sign.

Randaly

Literally every sentence you wrote is wrong.

The worst crimes of the holocaust were a conspiracy within the Nazi government.

This is not true. The Holocaust was ordered by the popular leader of the German government; it was carried out by a very large number of people, probably >90% of whom actively cooperated and almost none of whom tried to stop it. (see e.g. Christopher Browning's Ordinary Men) German society as a whole knew that their government was attempting genocide; see e.g. What We Knew for supporting details, or Wikipedia for a s... (read more)

Randaly

All of these are plausibly true of art departments at universities as well. (The first two are a bit iffy.)

Randaly

As I understand it, the mainstream interpretation of that document is not that Bin Laden is attacking America for its freedom; rather, AQ's war aims were the following:

  • End US support of Israel (also, Russia and India)
  • End the presence of US troops in the Middle East (especially Saudi Arabia)
  • End US support for Muslim apostate dictators

See, e.g., this Wikipedia article, or The Looming Tower. Eliezer is correct that AQ's attacks were not caused by AQ's hatred of American freedoms.

Randaly-10

The argument misidentifies what the moral uncertainty is over: it takes moral uncertainty over whether fetuses are people in the standard quasi-deontological framework and tries to translate it into a total utilitarian framework, which winds up with fairly silly math. (What could the 70% possibly refer to? Not the value of the future person's life-years - nobody disputes that once a person is alive, their life has normal, 100% value.)

Randaly00

No I'm not. The FizzBuzz article cited above is a wiki article. It is not based on original research, and draws from other articles. You will find the article I linked to cited in a quote at the top of the first article in the 'articles' section of the wiki article; it is indeed the original source for the claim.
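[For readers unfamiliar with the test under dispute: FizzBuzz asks a candidate to print the numbers 1 through 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A minimal Python sketch of a passing answer - one of many possible formulations:]

```python
def fizzbuzz(n):
    """Return the FizzBuzz output for a single integer n."""
    if n % 15 == 0:       # divisible by both 3 and 5
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

if __name__ == "__main__":
    for i in range(1, 101):
        print(fizzbuzz(i))
```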

3Jiro
The wiki article uses as a source for the FizzBuzz statement the article at http://tickletux.wordpress.com/2007/01/24/using-fizzbuzz-to-find-developers-who-grok-coding/ . The wiki does not use as a source the article you just gave me a link to, which is http://www.joelonsoftware.com/items/2005/01/27.html and contains the "We get between 100 and 200 [resumes] per opening" quote. What you describe is neither the source for the statement, nor the first link in the articles section, but the second link in the article that is the first link in the articles section. It is a stretch to claim that this is the wiki's source when the statement directly contains a source which is not the article you point to.

Furthermore, if you follow through the chain of articles, you find that because writers are playing a game of telephone with articles, the separate claims that people 1) cannot solve FizzBuzz (at a rate of 50% over computer science graduates) and 2) cannot program (at a rate of 99.5% over resumes) have been morphed into the Frankenstein-like claim that 99.5% cannot solve FizzBuzz as an interview question, which is not what either source says and which spuriously combines the two and changes from the plausible resume to the implausible interviewee. That combined statement is the one that I said doesn't fit a basic sanity check. And it doesn't.
Randaly00

The quote does not claim there has been no filtering done before the interview stage. If you read the original source it explicitly states that it is considering all applicants, not only those who make it to the interview stage: "We get between 100 and 200 [resumes] per opening."

0Jiro
You are confusing two different sources, the one that mentions FizzBuzz and the one in your link. Although both sources use the number 200, they are using it to refer to different things. It is the former (which uses it to refer to interviewees) which I object to, not the latter (which uses it to refer to resumes), except insofar as the latter is used to try to prove the former.
Randaly30

You seem to be confusing applicants with people who are given interviews. Typically less than half of applicants even make it to the interview stage- sometimes much, much less than half.

There's also enough evidence out there to say that this level of applicants is common. Starbucks recently had over a hundred applicants for each position it offered; Procter & Gamble had around 500. This guy also says it's common for programmers.

0Jiro
No, I'm not. From shminux's link:
Randaly80

unless you believe more than 100 people on the average get interviewed before anyone is hired

This is accurate for the top companies- as of 2011, Google interviewed over 300 people for each spot filled. Many of these people were plausibly interviewed multiple times, or for multiple positions.

4Jiro
The job market isn't just Google. Is it really true that anyone who can program FizzBuzz will immediately get snapped up by the first place they apply to, if they are not applying to someplace like Google which receives such large numbers of applications? I find it hard to believe that the average accounting company or bank that needs programmers has to do 100 interviews on the average every time it hires one person. (Furthermore, multiply by how many competent programmers they go through. If they hire on the average 1 out of every 4 competent programmers who applies, that makes it 400 interviews for each new hire.)
Randaly30

Maybe, but this is the exact opposite of polymath's claim- not that fighting a modern state is so difficult as to be impossible, but that fighting one is sufficiently simple that starting out without any weapons is not a significant handicap.

(The proposed causal impact of gun ownership on rebellion is more guns -> more willingness to actually fight against a dictator (acquiring a weapon is step that will stop many people who would otherwise rebel from doing so) -> more likelihood that government allies defect -> more likelihood that the government... (read more)

1V_V
I didn't claim that fighting a government is simple. My claim is that the hardest part of fighting a government is forming an organized militia with sufficient funds and personnel. If you manage to do that, then acquiring weapons is probably comparatively easy.
Randaly40

The Syrians and Libyans seem to have done OK for themselves. Iraq and likely Afghanistan were technically wins for our nuclear and drone-armed state, but both were only marginal victories, Iraq was a fairly near-run thing, and in neither case were significant defections from the US military a plausible scenario.

3V_V
They are organized paramilitary groups who buy military-grade weapons and issue them to their soldiers, not random gun toters who fight with personally owned handguns and shotguns. It seems to me that the main issues in setting up a militia are organization, recruitment and funding. Once you sort that out, acquiring weapons isn't very difficult.
2Vulture
I think it's usually written with spaces, though.
Randaly200

We can know that other amphibious assaults probably had lower or negligible friendly fire rates, because some other landings (some opposed) had absolutely lower rates of casualties - e.g. here, here, and here.

4Gunnar_Zarncke
Things look a bit more complex than the parent and OP make it. The first one, on Kiska island, resulted from Canadian and American detachments taking each other for the enemy. Agreed this is friendly fire - but among sub-optimally coordinated detachments, not within one single force. The second one, on Woodlark and Kiriwina, which had fewer casualties, was not only unopposed, it was known to be unopposed, so expectations were different. The other opposed landings are more difficult to read.
Randaly30

Thanks for your response!

1) Hmmm. OK, this is pretty counter-intuitive to me.

2) I'm not totally sure what you mean here. But, to give a concrete example, suppose that the most moral thing to do would be to tile the universe with very happy kittens (or something). CEV, as I understand, would create as many of these as possible, with its finite resources; whereas g/g* would try to create much more complicated structures than kittens.

3) Sorry, I don't think I was very clear. To clarify: once you've specified h, a superset of human essence, why would you apply... (read more)

-9RPMcMurphy