Commentary and discussion on world events from the perspective that all goings-on can be related to one of the six elements of National Power: Military, Economic, Cultural, Demographic, Organizational, & Geographical. All Elements are interrelated, and rarely can one be discussed without also discussing its impact on the others.
Wednesday, January 02, 2008
Still Waiting For The Reunion in Hell
Tuesday, January 01, 2008
Extreme Dust Test: M4 and Others
Note: I would have published earlier but I had sent this out for a ‘peer review’ of sorts. Special thanks to Don Meaker at Pater’s Place for taking the time to review this over his holiday. Of course, any errors that remain are my own. --Thanks again Don!
DefenseTech was way out in front of the pack of media sources when it posted two pieces (see here and here) about a recent "Extreme Dust Test" that the Army conducted on the M4 and three other ostensibly 'competitor' rifles. The summary of results provided in the articles was 'interesting' to say the least, and I was particularly struck by the near-instantaneous eruption of comments on both posts calling for radical action and remedies. At the time, I believed the calls were clearly unwarranted given how much was unknown about the testing, so I commented on the second article that I would defer forming an opinion on the results of the test until I had more data in hand. I wrote:
Frankly, having been a reliability engineer, and without verifiable complaints from the users in the field, I would not form an opinion on this until I studied the supporting data. For starters, I'd need to know the failure distribution, the specific conditions under which the failures occurred, and the failure modes ('jam' is a failure; a mode is a 'how' that has a 'why') to even begin to understand if there is a problem, and if there is a problem, whether it is with the weapon or the way it is employed. If there is a problem, is there a fix that is easier and cheaper than buying new weapons? While history is rife with examples of Army 'Not Invented Here' syndrome, unless there is good evidence that Army weapon evaluators WANT to field problem weapons, I see no reason to doubt the testers at this time.

Well, from the subsequent response to my cautionary note, one would think I had called for dissolution of the infantry! The M16 (and derivatives like the M4) has brought out more personal opinions and controversy than perhaps anything else in weapons acquisition (for any service, at any scale) except perhaps the 9mm vs. .45ACP pistol arguments. I think the M16/M4 is actually the more controversial of the two, because it usually inspires rhetoric on two fronts: reliability AND stopping power. Both controversies are rooted, I believe, in the fact that there is nothing more personal to the warrior than the weapon the warrior wields – and there are a lot more warriors with rifles than with tanks, aircraft, or ships.
I decided to cast about for more information, but there really isn't a lot of publicly available information attributable to either an authoritative or a verifiable source. Among a lot of rather alarmist and inflammatory articles and postings (just Google "M4 Dust Test"), I found little objective reporting and only a few tidbits not already covered by DefenseTech that were 'seemingly' credible (if unverifiable), such as this piece from David Crane at Defensereview.com:
So, you want some (unconfirmed/unverified) inside skinny, i.e. rumor, on the latest test -- something you most likely won't find anywhere else, even when everyone else starts reporting about this test? Here ya' go, direct from one of our U.S. military contacts—and we're quoting:

"1. Because the HK416 and M4 were the only production weapons, the ten HK416 and M4 carbines were all borrowed 'sight unseen' and the manufacturers had no idea that they were for a test. The 10 SCARs and 10 XM-8s were all 'handmade' and delivered to Aberdeen with pretty much full knowledge of a test. (The SCAR even got some additional help with 'extra' lubrication.)

2. With the HK416, 117 of the 233 malfunctions were from just one of the 10 weapons."

Interesting stuff! ...and credible, given that only the HK416 and M4 are in 'full production'. But like I said: "unverifiable" by me at this time. (Later on we'll see some things that tend to support Mr Crane's 'contact'.)
None of the information 'out there' was of use in determining answers to any of the questions I had posed in my original comment, so I had reluctantly set aside the idea of further analysis and moved on. That is, I was moving on until Christian Lowe, the author of the original DefenseTech articles, generously asked if I was interested in a copy of a 'PEO Soldier' briefing that he had been given. Of course, I said "yes please!".
After dissecting the briefing, I still have more questions than answers -- and some of those answers may never be released. The answers the briefing does provide are more philosophical than technical (but all things considered, that is all right with me).
As it is, the briefing provides some insight into what the Army was doing and how much importance we should place on the results -- given 1) where the Army is in its test efforts and 2) what ends it hopes to satisfy in performing these tests. I think it also points to some of the things the Army is going to have to do in the future to get the answers it needs.
I have decided to present and parse the briefing, with my analysis of the data contained therein, slide by slide. I will limit my discussion and questions to ONLY those details surrounding the test articles, test conduct, test design, and test results that can be determined with certainty. I will speculate as little as possible, and when I do, it will be stated as an opinion supported by the facts in hand, or posed as a question that the briefing or testing raises in my own mind -- never asserted as fact.
Overall Briefing Impressions
Before getting into the details of the briefing, let me provide my initial observations on the brief as a whole. First, from experience in preparing this kind of presentation, I can tell it was originally tailored for Executive/General Officer review. If this briefing wasn't intended for that purpose, I'll bet the person who put it together usually builds them for that purpose. The 'tell' is found in both the organization and the level of detail provided. With much more detail, the briefer would be chewed out for wasting time; with much less, the briefee would see too many open issues and the briefer would be thrown out for not being prepared. There are elements of the briefing that make it clear it was intended for someone not familiar with the nitty-gritty of testing or data analysis. I think it is a tossup whether those details were provided exclusively for public consumption or to provide perspective for a potentially nervous 'operator'. The brief is organized to tell the audience five things in a fairly logical flow:
1) what they were trying to accomplish,
2) what they did (1 and 2 are somewhat intermixed),
3) what was found,
4) what it means, and
5) what comes next.

The brief intermixes the first two 'a bit' and I would have arranged the information slightly differently. I suspect the briefing is slightly pared down for public consumption from its original format (you will see a revision number in the footer of the slides). There is only one slide I think I would have composed very differently, and I will go over why when we see it later in the post. I hereby acknowledge that my preference may be due as much to Air Force-Army service differences as anything else. The slides are data-heavy, with a lot less gee-whiz PowerPoint than you'd find in an Air Force brief. It has, in other words, typical Army style and content.
The presentation totaled 17 slides and was created on 12 December 2007. Slide 1 is simply the title slide: “Extreme Dust Test” and Slide 17 is a call for questions. The meat of the briefing in between these slides will now be discussed.
What They Were Trying to Accomplish
Slide 2:
The first thing that strikes me about the 'Purpose' slide is that there is no mention whatsoever that, as has been reported, this particular test was performed to appease any 'outside' concern. Whether this relationship was omitted out of convenience, or is perhaps even untrue, we cannot determine from the briefing. What IS clearly stated is that the Army is collecting information to help generate 'future requirements'. So perhaps this effort to develop new requirements is the first step in response to a certain Senator's call for a competition prior to acquisition of new rifles?

Most interesting is the point that this test is an adaptation of an earlier 'lubricant test', and that it is an ENGINEERING test and NOT an OPERATIONAL test. In subsequent slides we will see that this is clearly the case, and one wonders what useful data the Army hoped to gain from performing this test, beyond learning how to use it as a starting point: the beginning of designing a meaningful dust test. It must be noted that both the reuse of a deterministic test design already in hand and the purpose of "seeing what we can see" are completely within the analytical character of the Army, which was noted and described by the late Carl Builder (who in many ways admired the Army above all the other services) in his 1989 book Masks of War (Chapter 10).
Under 'Applicability' on this slide is a list of what this test did NOT address. In only a roundabout way does this slide state that the only real information they expected to acquire was related to 'reliability performance in extreme dust conditions'. And nowhere in the brief is it stated or implied that the Army was expecting to get definitive answers with direct implications in the operational arena. As we will see later, this was not as much a 'functional use' test as it was an 'abuse' test.
To my ‘Air Force’ mind, this test and analytical approach doesn’t really make a lot of sense unless the results are specifically for use in designing a meaningful test later. So I again turn to Builder who summarizes in Masks of War (at the end of Chapter 10) the differences between the questions the different services ‘pursue through analyses’:
Air Force: How can we better understand the problem – its dimensions and range of solutions?

Army: What do we need to plan more precisely – to get better requirements numbers?

And thus it does appear that the test objectives are wholly within a normal Army analytical approach, so I'll take the reasons given for the test at face value.
My Interpretation of the Army Objectives: “We intended to reuse a test we had already developed for another purpose to gain insight into only one facet (dust exposure) of weapon reliability by testing weapons in conditions well beyond what would ever be experienced in the field and if we learn something we will feed that knowledge into future requirements”.
What They Did
Slide 3
There are a couple of things on this slide that leap out at the viewer. First and foremost, while the test has been described as a "60,000 round test", that is a somewhat misleading and imperfect description. More accurately, it should be described as 10 trials of a 6,000-round test performed using 10 different weapons of each type (later on we will find reason to alter the definition further). I assume that when the Army calls it 'statistically significant' they have the data to support that firing 6,000 rounds through a weapon (apparently to a system's end-of-life) is a meaningful benchmark, and that performing the test 10 times on different weapons is enough to meet some standard (and unknown to us) statistical confidence interval. The second thing that leaps out is the simplified list of 'controls', knowing there are a host of potential variables in any test involving human input, as well as a lot of other material variables to be controlled (like using common lots of ammunition and lubricants) as possible confounding factors. The human variable in any experiment is difficult to control, which is why test engineers strive to automate as much as possible. I suspect the large number of rounds fired per test is designed to 'average out' the human variability as much as anything else.
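For the curious, here's the flavor of the arithmetic that 'statistically significant' implies -- a quick Python sketch of an exact binomial confidence interval on a per-round stoppage rate. The counts are made up, and treating every round as an independent trial is exactly the kind of assumption the Army would have to justify:

from scipy import stats

def clopper_pearson(failures, rounds, alpha=0.05):
    """Exact binomial confidence interval for a per-round failure probability."""
    lo = 0.0 if failures == 0 else stats.beta.ppf(alpha / 2, failures, rounds - failures + 1)
    hi = 1.0 if failures == rounds else stats.beta.ppf(1 - alpha / 2, failures + 1, rounds - failures)
    return lo, hi

# Made-up example: 120 stoppages observed in 60,000 rounds
lo, hi = clopper_pearson(120, 60_000)
print(f"observed rate {120 / 60_000:.3%}, 95% CI [{lo:.3%}, {hi:.3%}]")

Note that if failures cluster within a few 'bad actor' weapons (more on that later), the per-round independence assumption collapses and the interval is too optimistic.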
Slide 4

I found this slide highly illuminating as to the nuts and bolts of the test. First, it clearly shows the level of dust buildup on the weapons: a solid coating that would be impossible to accumulate on a weapon actually being carried (I suppose you could hypothetically find one like this in garrison, until the CSM came around). I don't believe it would be a leap of faith to assert that just carrying the weapon would tend to clean it up, making it cleaner than what you see here. Second, the slide shows a technician/tester firing a weapon from a bench setup. Can you imagine the tedious repeated firing of the weapons using selective fire in this environment? Now I am also wondering: how did they control the timing/gap of the 're-squeeze' sequence within one magazine and between magazines? How sensitive is each weapon design to continuous cycling, and how does that relate to the operational need? Is the human operator more adept at clearing some malfunctions than others? How many operators fired and reloaded each type of weapon? Did they rotate responsibilities among the different weapon types to remove any operator variables? (I told you there would be more questions than answers.)
Slide 5
Slide 5 is kind of an intermediate summary chart, tying together all the information already given to the briefee in Slides 2, 3 & 4 and placing it in front of them one more time before showing the results. This is not a bad idea in any briefing, but it is especially sound if you want to keep misunderstanding out of expectations and reactions.
What Was Found
Slide 6
This slide is the first indication to me that there was possibly a slide or two removed for this particular audience, because it contains the first reference in the briefing to the "Summer of '07" test.
I believe the difference between the M4 results in the Fall and Summer '07 tests is the most significant piece of information in the brief, because that disparity calls into question the entire test design as well as its execution. If dissection of the test conduct (for either try) identifies no 'smoking gun' errors that would explain why, on the second go-around for the M4, C1 & C2 weapon stoppages were 4+ times greater, C1 & C2 magazine stoppages were 60+% higher, and the total of the two as well as Class 3 stoppages were nearly 2 times higher, then I would suspect that either the test design itself is flawed OR the variability of the units under test is far greater than anticipated.

The only way to decide between the two without other data is to perform repeated testing, preferably again on all the weapon types, to determine if there is an identifiable pattern/distribution of outcomes. I would be surprised if the Army didn't have reams of other tests and test data to use in evaluating this test and eventually determining this point, but just from the data presented here, I don't see how the Army could reach ANY firm conclusions on the relative or absolute performance results, and from the 'barf box' at the bottom of the slide it looks like they are scratching their heads at this time.
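To illustrate the kind of first-cut consistency check I have in mind, here's a sketch comparing the two M4 runs as two proportions. The counts are placeholders -- the brief gives relative factors, not raw numbers I can verify:

from scipy import stats

rounds = 60_000                      # rounds fired per test run
fall, summer = 800, 200              # hypothetical stoppage counts ("4+ times greater")

# 2x2 table: stoppage vs. clean rounds, Fall run vs. Summer run
table = [[fall, rounds - fall], [summer, rounds - summer]]
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # a tiny p says the two runs are not consistent with one underlying rate

A result like that wouldn't tell you WHY the runs differ -- test design, test execution, or unit-to-unit variability -- only that something changed.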
I'm still bothered by not knowing the relative ratios of Class 1 and Class 2 malfunctions, and by the 'open-ended' time limit of the Class 2 definition, because you could have (for example) 100 Class 1s at 3 seconds of downtime apiece, and in the raw counts that looks as bad as 100 Class 2s at 30 seconds of downtime apiece. The number of malfunctions is important as a single facet of performance, but the number of malfunctions times the downtime for each one is the TRUE measure.
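A trivial illustration of the point, with made-up numbers:

# Hypothetical stoppage logs as (count, seconds of downtime per event)
rifle_a = [(100, 3)]      # 100 Class 1 stoppages, ~3 s to clear each
rifle_b = [(100, 30)]     # 100 Class 2 stoppages, ~30 s to clear each

def total_downtime(log):
    return sum(count * seconds for count, seconds in log)

# Same raw malfunction count, a 10x difference in time out of the fight
print(total_downtime(rifle_a), total_downtime(rifle_b))   # 300 vs. 3000 seconds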
Quantitatively, because the test is suspect due to the non-repeatability of the M4 data, all you can say about the total malfunctions so far is that the M4 had many times more failures than the other carbines THIS TIME (again, in conditions well beyond what should ever be experienced in the field).
Slide 7
In the previous slide we saw the first breakdown of failure modes, in the discrete identification of 'magazine failures'. Slide 7 is the breakdown of the 'weapon' failure modes. Immediately we can tell the M4s did much worse than the other systems in 2 of the 8 modes: Failure to Feed and Failure to Extract. The M4 experienced a slight relative deficit in the Failure to Chamber category as well (perhaps that 'forward assist' feature is still there for a reason, eh?). The lack of information concerning the distribution of failures among the weapons of each type is a crucial piece of the puzzle, and it is missing throughout the brief. If the failures are caused by uneven quality control in the manufacture, handling, or storage process rather than by a design problem, it would probably show up in the distribution of failures within the weapon-type specimens (one weapon failing x times instead of x weapons failing once, for example). Also, so far in the brief we do not know whether the failures occurred late or early in the process, or late or early in each test cycle (we're getting closer, though).
Before moving on, we should note that the data is pointedly characterized as 'Raw Data': this information is clearly a first cut and not the final word as to what actually happened.
Slide 8
This is the one slide I would have presented differently. I would have first shown the data using a true linear scale with a zero baseline, to show the TRUE relative failure impact compared to the total number of rounds fired. This slide is good as a 'closeup' (other than the rounding error for the SCAR), using a hand-built pseudo-logarithmic scale that makes the relative failures between weapon types distinguishable. But on its own, it makes the net performance of ALL the systems look poorer than reality. Here's what I would have shown just before the Army's Slide 8:
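Something like this minimal matplotlib sketch captures the idea -- placeholder counts, NOT the Army's numbers:

import matplotlib.pyplot as plt

weapons   = ["XM8", "SCAR", "HK416", "M4"]
stoppages = [100, 200, 250, 900]          # invented counts for illustration only
rounds    = 60_000

fig, ax = plt.subplots()
ax.bar(weapons, [rounds - s for s in stoppages], label="rounds without stoppage")
ax.bar(weapons, stoppages, bottom=[rounds - s for s in stoppages],
       color="red", label="stoppages")
ax.set_ylim(0, rounds)        # true zero-baseline linear scale
ax.set_ylabel("rounds fired")
ax.legend()
plt.show()

At true scale, the stoppage slice all but vanishes for every weapon -- which is exactly the perspective the pseudo-logarithmic chart hides.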
Compare that with what it looks like when you provide a chart with absolutely no perspective on the failure numbers. Scary, huh?
Slide 9
At last! We have some distributions to look at. At first glance, one cannot determine whether the number of failures includes magazine failures (for a 'total' impact point of view) or covers just the 'weapon' failures. This is an interesting slide that got my hopes up at first, but I had to pare back my expectations a bit as I really looked at it. First off, this represents only the first 30K rounds of the 60K fired, so it is the 'early' data. I would like to see the numbers for the Summer '07 test overlaid as well, because I suspect (only) that the real difference in the M4's performance between then and now would be found in the last two cycles before every cleaning, instead of as a random scatter or a uniform increase. And again, I would love to know the distribution of failures among the 10 M4s, only now I would be particularly interested in firing cycles 15 and 23-25. The slide's conclusion (barf box) is, I think, about the only thing one can conclude definitively about this testing from the information given.
After I ran the numbers of failures shown and calculated the failure rates for the first 30K rounds, it became apparent that the only way to get the final success/failure rates in Slide 8 to jibe with the failure rates extracted from Slide 9 was if Slide 9 used the 'total' C1 & C2 failures and not just the 'weapon' failures.
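The cross-check itself is simple division; a sketch with placeholder counts shows the logic:

rounds_first_half = 30_000     # rounds per weapon type in the Slide 9 plot
weapon_failures   = 450        # placeholder count read off the plot
magazine_failures = 150        # placeholder

print(f"weapon-only rate: {weapon_failures / rounds_first_half:.2%}")
print(f"total C1&C2 rate: {(weapon_failures + magazine_failures) / rounds_first_half:.2%}")
# Whichever rate reproduces the Slide 8 success/failure numbers
# tells you what Slide 9 is actually plotting.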
If we didn't already know about the wide disparity between the first and second M4 tests, we would probably conclude that all the other designs well outperformed the M4. But since we know the M4 did better once, how can we be sure the other designs won't do worse in the future? As they say in the stock market, "past performance is no guarantee of future results".
A “What if” Excursion: Ignoring the very real possibility that the test design itself and/or the execution of it may have been flawed, I would conclude that the Gas Piston designs were not even stressed in this test and the Gas design (M4) was heavily stressed. I would conclude that the M4 is far more susceptible to failure when not cleaned properly. From the data, I would hypothesize that if another test sequence was run with a full cleaning every 600 rounds for the M4, the overall performance would improve dramatically, and that the other systems would not see any real improvement, largely because they aren’t stressed in the test in the first place. Then IF the M4 performance was radically improved, we would still be stuck with the question: what does the absolute performance in the test mean in the ‘real world’?
How much performance does one need, versus how much performance does one get for the dollars spent? That should be determined by experience in the field combined with expert judgment in estimating future use. We are talking about using 'systems engineering' to field what is needed. The process isn't perfect, but as has long been demonstrated: 'perfect' is the enemy of 'good enough'.

As I sit here typing, it occurs to me that for future requirements, the Army will also have to take into consideration changes to the size of the unit action: fewer soldiers in an action mean a single failure has a larger impact on the engagement outcome. Historically this has been a concern of the special operators, and now perhaps it is a more 'mainstream' concern?
In looking for patterns, I thought it would be helpful to look at this data in several different ways. I first backed the data out of this chart and put it in a spreadsheet.
NOTE: I may have made some '1-off' errors here and there in decomposing the plot provided, due to the coarseness of the plot lines and markers, but I think I've got good enough accuracy for this little exercise.

The first thing I did with the data was to see what portion of the total performance the results for the first 30K rounds truly represent. By my 'calibrated eyeball' extraction of the data, we find the following:

As shown, most of the failures (C1 and C2) experienced by the XM8 and SCAR occurred in the first half of testing. According to the data, only the HK416's performance significantly degraded in the second half of testing: it experienced more than two-thirds of its failures there. These distributions may be another indication of a test process problem, because ALL weapons were tested to 'end of life', and that is when one would usually expect marginally MORE problems, not fewer. In any case, we see that there is a significant amount of data that is NOT represented in the plot on Slide 9 and unfortunately cannot be part of our detailed analysis.
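The bookkeeping behind that observation is nothing fancy -- something like this, with placeholder counts standing in for my eyeball-extracted data:

# (first-half, second-half) C1 & C2 failure counts -- placeholders, not the Army's data
halves = {"XM8": (90, 40), "SCAR": (160, 70), "HK416": (70, 160), "M4": (500, 380)}

for weapon, (first, second) in halves.items():
    print(f"{weapon}: {first / (first + second):.0%} of failures in the first 30K rounds")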
Detailed Examination of Data
I first wondered what the failure pattern in Slide 9 would look like expressed in relation to the number of rounds fired over time (again, to keep proper perspective on the numbers):

Not very illuminating, is it? Looks like everybody did well, but what are the details? So I then decided to ‘zoom in’ and look only at the failures as a percentage of rounds fired (expressed in cycles) over time:
Now this is more interesting. For the first 30K rounds, the SCAR was for a time the worst 'performer', and then it settled down and its total performance approached that of the HK416 and XM8. This suggests there is merit to the quote mentioned earlier indicating the SCAR had a change to its lubrication regimen in mid-test (in my world this would trigger a 'retest', by the way). If true, these numbers suggest that the SCAR would have done as well as or better than the other two top performers in this test had the later lubrication schedule been implemented from the start.

The M4 patterns point to something interesting as well. Cycle 15 and Cycles 23-25 appeared odd to me earlier because of the spike in the number of failures, but while I would not rule out the Cycle 15 behavior as possible normal statistical variation (again: "need more data!"), Cycles 23-25 appear out of sorts because of the pattern of failure. Keep this pattern in mind for later observations. If we knew the number or rate of failures increased after the last cycle shown, I might conclude it was part of a normal trend; but since at the 30K rounds-fired point the failure rate is within 2/10ths of 1 percent of the final failure rate, we know the failures did not 'skyrocket' as the second half of testing progressed. We are somewhat stymied again by how much we do not know from the data provided.
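For anyone who wants to reproduce the 'zoom in', the transformation is just a cumulative sum of failures divided by cumulative rounds fired. A sketch with invented per-cycle counts, and my assumption of 1,200 rounds per firing cycle (10 weapons x 120 rounds):

import numpy as np

# Placeholder failures per firing cycle for one weapon type (25 cycles = first 30K rounds)
failures = np.array([2, 1, 3, 2, 1, 2, 2, 1, 3, 2, 1, 2, 4, 1, 9,
                     2, 1, 2, 3, 1, 2, 2, 14, 12, 6])
rounds_per_cycle = 1_200

cumulative_rate = np.cumsum(failures) / (rounds_per_cycle * np.arange(1, 26))
print(np.round(100 * cumulative_rate, 2))   # failure % of all rounds fired so far, by cycle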
Dust and Lube (Five Firing Cycles Per Dust and Lube Cycle)
Next, I thought it might be helpful to overlay each weapon's performance across the five (minor) 'Dust and Lube' (DL) cycle-series to see how repeatable (or variable) the cycle performance was. Keep in mind that at the end of DL Cycles 2 and 4, a 'full cleaning' occurred. Each DL series is comprised of 5 firing cycles of 120 rounds (a quick sketch of the overlay bookkeeping follows).
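Here's the overlay idea as a sketch. The per-cycle counts are invented, but they carry the same kind of cycle-15 and cycle-23-25 spikes discussed below:

import numpy as np

# Placeholder per-firing-cycle failure counts for one weapon (25 cycles, first 30K rounds)
per_cycle = np.array([1, 0, 2, 1, 0,   1, 1, 0, 2, 1,
                      0, 1, 3, 0, 9,   1, 0, 1, 2, 0,
                      1, 1, 14, 12, 6])

dl_series = per_cycle.reshape(5, 5)    # row i = DL cycle i+1, columns = firing cycles 1-5
print(dl_series)
# Overlaying the five rows on one plot makes an outlier series
# (here DL5, with the cycle 23-25 spike) stand out immediately.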
Before we look at the other results, note the 'outlier' pattern of the M4's Dust and Lube Cycle 5 (DL5), particularly firing cycles 23-25 (the last three nodes of DL5). If the results of firing cycles 23 & 24 of the M4 testing are found to be invalid, it would lower the overall failure rate through the first 30K rounds by about 20%. The M4's sensitivity to cleaning also makes me wonder how variability in grit size and lubrication, and any interrelationship between them, would have contributed to failure rates. Again, I would be very interested in knowing the failure modes and distribution for those particular firing cycles, as well as for Firing Cycle 15.

The sensitivity of the M4 (within the bounds of this test) is clearly indicated to be far greater than that of the other systems. In fact, because the numbers of failures for the other three weapons are so small (versus saying the M4's are so large), I suspect that, given the number of potential confounding variables we have mentioned so far, the failures of the SCAR, HK416, and XM8 in the first half of testing approach the level of statistical 'noise' (allowing that there still may have been some interesting failure patterns for the HK416 in the second half).
Full Clean & Lube (10 Firing Cycles per Cleaning Cycle)
Looking at the same data by the major 'full clean and lube' cycles only reinforces, I think, the shorter-interval observations. Because the 30K-rounds data limit truncates the 3rd major 'clean and lube' cycle, it shows up as 2½ cycles in the plots:

A different view of this data will be seen again later in the post. The briefing itself moves on to address "Other Observations".
Slide 10
So, well into this briefing, we now learn that all the weapons were essentially worn out (as a unit) by the 6,000 rounds-fired mark. Since this test was not for the purpose of determining the maximum operating life of each weapon type, the test should now be described as an "X-number-of-rounds Extreme Dust Test with 10 trials using 10 different weapons" -- with "X" being the number of rounds fired before the first bolt had to be replaced. Once one weapon is treated differently from the rest, further testing cannot reasonably be considered part of the same test. One wonders how each ruptured case was cleared, and whether each was considered a major or minor malfunction. The disparity in the number of occurrences also makes me wonder about the distribution of these failures leading up to 6,000 rounds/weapon, and whether they have anything to do with only showing the failure events of the first half of the test in Slide 9.
I would be hesitant to write that, based on this data (in this test), the HK416 was three times worse than the M4, because the absolute number of events is so small. I would, however, be more interested in the meaning of the disparity the closer the difference comes to an order of magnitude. So (again, within the context of the test only) I find the difference between the SCAR and M4 of 'likely' interest, and the difference between the M4 and the XM8 'definitely' of interest.
Slide 11
The only thing I found really interesting in this slide was that while all the weapons had pretty much the same dispersal pattern at the end of the test, the XM8 was quite a bit 'looser' at the start. What this means, I have no idea, other than that they all wore in about the same. From the XM8's dispersion performance, my 'inner engineer' wonders whether there were some 'tolerance management' or other novel aspects of the XM8 design that contributed to its reliability performance.
What it Means
Slide 12
Nearly 5000 words into this analysis and the Army pretty well sums it up in one slide:
Slide 13
The Army begins to assert here (I think) that they recognize they will have to construct a more operationally realistic test in the future, and they are starting to identify and quantify what the operational environment looks like.
Since the slide now couches the need in terms of the individual Soldier, here's what the same data we've been looking at looks like when expressed as an AVERAGE per rifle, by weapon type (clarification: the X-axis label is cumulative rounds fired, broken down by cycle):
Using Slide 13 for perspective, we can view this data and say that IF the Extreme Dust Test data is valid and representative of the real world (and we have every reason to believe the real world is a more benign environment), then the largest average disparity we might find in C1 & C2 stoppages between any two weapons, for an engagement that consumed one basic load, would be less than 1 stoppage difference for every TWO engagements. If for some reason soldiers started shooting 2 basic loads on average, the greatest average difference in the number of stoppages between weapon types for one engagement would be about 1½ stoppages per engagement.

Because of the absence of detailed failure data by specific weapon, failure, and failure mode, we cannot determine whether this information is 'good' or 'bad' -- even if the data were representative of the 'real world'. If, for instance, the M4 (or, as has been noted, possibly the HK416) had one 'bad actor' in the bunch, it would have completely skewed the test results. If we cannot even tell whether THIS difference is significant, we STILL cannot assert that any one weapon is 'better' than another, even within the confines of this test. All we still KNOW is that the M4 experienced more failures. The good news is, the Army will have a better idea of what they need to do to perform a better test next time.
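The back-of-envelope math behind those engagement numbers is just rate-difference times rounds fired. A sketch, with a hypothetical rate gap and my own assumption of a 210-round basic load (the brief doesn't define one):

basic_load = 210               # assumed: 7 x 30-round magazines (not stated in the brief)
rate_gap   = 0.002             # hypothetical per-round C1&C2 stoppage-rate difference

print(f"{rate_gap * basic_load:.2f} extra stoppages per one-basic-load engagement")     # 0.42
print(f"{rate_gap * 2 * basic_load:.2f} extra stoppages per two-basic-load engagement") # 0.84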
Slides 14 & 15
Here’s more “real world” perspective to think about when we view the test data. If someone has reason to doubt the CSMs - that is their business. I see nothing in the test design or test data that would invalidate their observations.
What Comes Next
Slide 16 (At Last!)
There's something here for everyone: 'figure out what the test meant' (if anything), 'use the info' to build a better test, and improve the breed or buy new if needed. Not mentioned in the slide, but just as important, is the obvious: 'Don't forget to clean your weapon!'
Works for me.
The only thing I fear coming out of these test results is that, out of the emotion behind the concern, this test's importance will be blown out of proportion within the total context of what a Soldier needs a weapon to do. I can see us very easily buying the best-darn-dust-proof-rifle-that-ever-t'was… and then spending the next twenty years worrying about it corroding in a jungle someplace.
Postscript
I know this type of analysis always brings out the 'don't talk statistics to me -- this is life and death!' response. But the hard truth is that, as in war itself, ALL weapons design and acquisition boils down to some cost-benefit equation that expresses in statistical terms 1) what contributes the most damage to the enemy, 2) in the most (number and types of) situations, while 3) getting as many of our people home safe as possible, 4) within a finite dollar amount. Everyone does the math in their own way, and everyone disagrees with how it should be done. Just be glad you're not the one responsible for making those cost-benefit decisions. I know I am.
Tuesday, December 11, 2007
New Fighter Comes Under Fire: Will the F-35 survive?
Here's what the GAO Found:
--Based on early test data, Program officials and end-users are concerned about several potential aircraft problems: engine stalls, demonstration of an improved aerial restart capability, and excessive taxi speed.

Here are some of the details:
-- Warfighters believe that the aircraft needs additional equipment, such as a new internal electronic countermeasures set, an information distribution system terminal, and a new air-to-air missile. The aircraft does not have sufficient space available for all desired new capabilities.
--A review team was critical of the combat vulnerability of the aircraft. Based on a subsequent assessment by the contractor, the program is considering adding two vulnerability reduction features. In the opinion of Program Office officials the problem of vulnerability is not significant.
-- Subsequent to the vulnerability review, the aircraft mission has been revised to include more air-to-surface operations. In this role it is more vulnerable than in the air-to-air role because it is subject to a greater variety and concentration of hostile fire.
--The aircraft program cost estimate in the latest Selected Acquisition Report shows an increase of $7.7 billion from the previous year’s Selected Acquisition Report. Of this, $6.3 billion is attributed to acquisition quantity change. The remainder is for new capability for the original aircraft buy and program estimate revisions. The Selected Acquisition Report was received too late for GAO to analyze the changes as to reasonableness and accuracy.
-- It is generally considered that the cost of participating country production will be higher than U.S. production cost. The program office does not yet know what impact partner coproduction will have on the cost of U.S. aircraft. They contend, however, that the increase in aircraft procurement quantities as a result of partner participation will lower the cost of domestic production enough to offset the increased cost of coproduction.
--The aircraft program is experiencing schedule delays that could, if not corrected, affect completion of testing required to demonstrate aircraft performance before the full production decision scheduled for September. Program officials believe the delays will not seriously threaten the test schedule.
CONCLUSIONS AND RECOMMENDATIONS
Greater emphasis is now being placed on the aircraft air-to-surface mission and some of the significant survivability/vulnerability problems identified by the service review team have not yet been corrected.
The existing schedule for several critical test items seems optimistic and leaves little room for further delays or unanticipated test problems. Should either or both occur, the program will have to decide between delaying the production decision or revising test requirements.
The Secretary of Defense should:
--Reassess the aircraft survivability features to determine if they are adequate.
--Not allow participating partner pressure to hamper performance of testing necessary to justify a full production decision.
-- Invite the partner countries to participate in any assessment of the test schedule so that any changes can be mutually agreed upon.
The schedule for completion of the tests required before the full production decision is optimistic.
Test aircraft, radar, and the stores management system are currently behind schedule. Program officials have placed a high priority on resolving these issues in order to maintain the schedule. Continued slippage could result in a failure to complete required testing prior to the scheduled full production decision.
Delay in aircraft assembly

The aircraft and airframes required for testing are scheduled for delivery, but these test aircraft will not contain all production components. Among those deleted are the gun, radar, operational displays, fire control computer, and stores management system.
Aircraft A-1 was delivered and the static test airframe began scheduled testing in the same month. Program officials stated that the schedule slippages are slight, and are being recovered.
Two aircraft are particularly critical to the test program. Aircraft A-3, for example, will be the first with full mission equipment and many test requirements can be done only with this aircraft. Aircraft B-1 must make its first flight prior to the DSARC. Any extensive delay in the delivery of either of these aircraft could delay accomplishment of test requirements.
Radar production behind schedule

Prior to the full production decision, the contractor must successfully demonstrate all radar functions and the integration of the radar with the other aircraft avionics subsystems. This will require that a properly configured radar unit begin ground testing at least 2 months before its installation in test aircraft A-3. A flight model of the radar has demonstrated most radar functions, but this set is 20 percent larger than the one to be used in the production aircraft. The first radar set configured for the airframe has not been completed. Radar production is currently 6 weeks to 2 months behind schedule. Delivery of the radar unit is scheduled for mid-March, which barely meets the requirements for ground testing. There is little time available if further production slippages or significant testing problems occur.
Schedule slippage in stores management system
The aircraft stores management system coordinates the weapons functions with other aircraft avionics systems such as radar and optical displays. The system consists of a number of electronic units throughout the aircraft. In August, Program officials reviewed the stores management system progress and considered it unsatisfactory. The redesign of the system and other problems have caused schedule slippages. Program officials stated that these slippages will not affect the test schedule because the stores management system is not needed until Aircraft A-3. If the current problems persist, however, and the system is not available as scheduled, it will interfere with completion of DSARC IIIB testing.
And there is concern over foreign partner's needs and influence adversely affecting the Cost for the US......
MULTINATIONAL INFLUENCE ON PROGRAM SCHEDULE

Oops -- My bad! (This isn't about the JSF.)
From its inception, the program has been heavily influenced by the desire of the United States Government to have the aircraft adopted by allies and, subsequently, by the requirements of the Partner Governments. The time frame for aircraft selection and the coproduction requirements have caused conflicts with normal acquisition procedures, and have resulted in these procedures being either ignored or circumvented. The US and partner production decisions are scheduled for September. The current schedule slippages and related test program problems, however, may require that the program choose between delaying the production decision or revising test requirements. Because of the multinational commitments, which include a firm delivery schedule for participating partner aircraft, there is some question as to what options will be available at that time. For instance, DCP 143 indicates that if unforeseen difficulties arise, the program will be prepared to accept the first few aircraft without the radar and retrofit them later, so as not to delay the aircraft delivery schedule.
The multinational aspects of the program are more thoroughly discussed in a separate GAO report.
CONTRACT PAYMENTS WITHHELD DUE TO UNSATISFACTORY PROGRESS
On August 31, the program office directed that $10 million of progress payment be withheld pending remedial action on a number of problem areas including the following:
-- Submission of Engineering Change Proposal 0006 which will reflect much of the impact of partner participation in the program.
-- Submission of change proposal for maintenance test equipment.
-- Submission of change proposal for nuclear capability.
--Other late responses to requests for change proposals.
--Problems with stores management set.
--Schedule slippages on full-scale development.
As of December 3, satisfactory progress had been made in some of these areas and $5.5 million had been released. The remaining $4.5 million was still being withheld pending further contractor action. The principal concerns were Engineering Change Proposal 0006 which Program officials stated was fundamental to development of an adequate program budget for the following year and beyond, and some slippage in the full-scale development aircraft delivery schedule.
Experienced readers would have seen defunct and incorrect (for the JSF) terminology and known this wasn't about the F-35.
So what was the troubled and risky program described above? Why, it was none other than the now-venerable F-16. And the above text was excerpted (with a minimum amount of anonymizing) from a 1977 GAO report.
Scary, huh? The only difference between then and now is that the GAO has bigger staffs and budgets to do its hatchet work. So be skeptical when and if you start seeing handwringing over the JSF in the future.
Keeping Talk of JSF "Costs" Real
Let’s talk apples and apples for a bit (and for a change).
Assuming the reference (see comments to the original post at Defensetech) to $122M per JSF is from yet another limp-wristed GAO report (another thread for another time), perhaps even Table 3 of the 'nearly useless' GAO-07-415, that '$122M' number is unit "Acquisition Cost", which includes all the development, tooling, and everything else needed to mature the technology and put it into production, amortized over a planned production period and a set quantity. It possibly includes other non-recurring costs as well, such as 'facilities costs' incurred when fielding the F-35. But without a deep dive into the analysis and background you couldn't tell, so the number doesn't add much to the discussion.
GAO-07-415 is only 'nearly useless' because it also provides some close-to-equivalent numbers for the F-18E/F, the Navy's ersatz 'risk reduction' project (in case the JSF did not materialize). Using the same timeframe (through 2013), with a single bit of math we see in the same Table 3 that the unit Acquisition Cost for the F-18E/F is a little more than $96M. So there appears to be a net $26M difference in unit Acquisition Cost. Some of that difference can be accounted for simply by when the dollars are spent within the time period. Obviously, the F-18E/F's development dollars are mostly behind it and are sunk costs, while a good chunk of the F-35's development dollars are yet to be spent. Therefore, some of the difference can be attributed simply to 'different-year' dollars.
But the driver behind the bulk of the dollar difference isn't found in the calendar: we must factor in what those dollars buy the taxpayer in each option. Most of the technology and manufacturing infrastructure for the F-18E/F is only evolutionary vis-à-vis the F-18C/D generation, including any increases in capability and survivability. Just what is publicly acknowledged about the F-35 makes clear it will be FAR more technically advanced and survivable than any F-18 – or any other predecessor aircraft. The powers-that-be have decided that the capability is worth the increase in unit Acquisition Cost, which isn't surprising, because it is by design.
More Costs
Unit Acquisition Cost is definitely NOT what just one F-35 costs, or what it would cost to build one more, or to replace one that is lost. That number is much, much lower, and is known as unit Fly Away Cost. The F-35, by all accounts, is STILL within its target average flyaway cost range: early units will cost more (low-rate production) and later units will cost less. Go figure.
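For readers who want the mechanics: unit acquisition cost is roughly flyaway cost plus amortized non-recurring cost. A toy illustration with invented figures (NOT actual JSF numbers):

# All figures below are made-up placeholders for illustration only
development = 40_000    # $M of development/tooling and other non-recurring cost
facilities  = 2_000     # $M of additional non-recurring cost
quantity    = 500       # planned production quantity
flyaway     = 80        # $M to actually build one more aircraft

acquisition = flyaway + (development + facilities) / quantity
print(f"unit flyaway: ${flyaway}M, unit acquisition: ${acquisition:.0f}M")   # $80M vs. $164M
# Cut the quantity and the acquisition number climbs, even if flyaway cost never moves.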
If Congress whacks the numbers bought, the target cost range will have to be adjusted, and then we will be working within a new reality with new numbers to befuddle the masses.
Thursday, December 06, 2007
NIE: Bush Administration Reaps What Was Sowed
Two and a half years ago (July 01, 2005 to be precise), Frank Gaffney warned the Bush Administration about the perils of appointing State Department 'diplomats' to positions requiring intelligence expertise, in a New Republic Online article titled "Not a Time to be Diplomatic" (subtitle: "Wrong Man, Wrong Job").
I've been watching to see if anyone has referenced it in the wake of the release of the latest National Intelligence Estimate (NIE), and have been very surprised that, as far as I can see, no one has (somebody MUST have, but perhaps they're on the edge of oblivion like this blog).
I wonder if Mr. Gaffney even remembers it or perhaps he is preparing an in depth “I told you so” article as I type.
In the 2005 article Gaffney opened with:
If you wondered whether the U.S. intelligence community could possibly perform even more dismally than it has of late with respect to various aspects of the terrorist and proliferation threat, the answer is now in. Even worse is in certain prospect if Director of National Intelligence John Negroponte goes forward with his reported offer to Ambassador Kenneth Brill to become director of the just-announced National Counter-Proliferation Center (NCPC).

While the article focuses on Brill, it also names two other figures at the center of the current brouhaha: Negroponte and Fingar.
Instead, the ambassador is a career foreign-service officer. So, of course, is Ambassador Negroponte. So is the DNI's deputy for analysis, Thomas Fingar. So is his deputy for management, Ambassador Patrick Kennedy.

Brill was evidently no 'star' at the IAEA:
So egregious was Brill's conduct, according to insiders, that not only the administration's advocates of robust counter-proliferation policies opposed his being given any subsequent posting, let alone a promotion. Even then-Secretary of State Colin Powell and his Deputy, Richard Armitage, strenuously objected to his conduct at the IAEA and refused to give him another assignment. But for his prospective rehabilitation by Amb. Negroponte, Ken Brill would presumably conclude his career in government with his present year-long sinecure at the National Defense University.

Gaffney concluded:
The last thing the United States needs at the pinnacle of the intelligence apparatus assigned to countering what is widely agreed to be the most dangerous threat of our time — the scourge and spread of weapons of mass destruction in the hands of terrorists and their state-sponsors — is someone whose past track record suggests that he misperceives the threat, opposes the use of effective techniques to counter it and is constitutionally disposed to accommodate rather than defeat the proliferators.

In determining the credibility of the revised NIE 'judgment' on Iranian WMD, it is not unreasonable to examine the qualifications and ability of the people responsible for making such judgments. It seems to me that key people involved in this NIE have already been found wanting. And if the previous 'high confidence' NIE was wrong, what makes this NIE judgment more likely to be correct?
More importantly what are the consequences of being wrong this time?
Decoding the NIE doublespeak doesn’t do anything to inspire my confidence either.
I thought the intelligence apparatus was as broken as it could be, but I guess the Administration found the only way they could have made it worse: by moving in more pasty State Department boys.
Tuesday, November 27, 2007
Untwisting Bloomberg's Economic News: 4th Try
Bloomberg's article has been updated 4 times (as of now) since originally published. Copy of original article posted here.
Headline Now Reads:
U.S. Consumers Spent Average of 3.5% Less on Shopping (Update4)
Dr. Paul's Report is as follows.....
Subject:
Article by Cotton Timberlake and Tiffany Kary, Bloomberg News, in the Ft. Worth Star-Telegram, November 26.

Headline - Section C:
"US Consumers Spent Less on Holiday Shopping"Summary:
Number of store visits UP 4.8%
Sales Thursday UP 8.3%
Sales Thursday and Friday combined UP 7.2%
The Article's Message:
Average spending DOWN 3.5% ($347.44)
Dr. Paul's Evaluation of the Article:
I don't understand how the combination of the following "facts" presented in the article adds up to the 'headline':
1. "U.S. Consumers spent 3.5 percent less during the post Thanksgiving Day holiday weekend than a year earlier, as retailers slashed prices to lure customers grappling with higher food and energy costs." [This is the reporter 'talking']
2. Page C1 - end of second paragraph: "Store visits increased 4.8 percent."
3. Page C3 - "More than 147 million consumers visited stores over the weekend… The average amount spent last year was helped by increased sales of HD TVs," NRF spokesman Scott Krugman said.

4. "It's the saturation of HD TVs into the market, and the retailers recognizing that consumers will be more conservative this year and focusing on lower-priced merchandise," he said. [This is the EXPERT talking]
5. Page C3 - last 2 paragraphs:
6. "Sales on the day after Thanksgiving, called Black Friday because it was the day that retailers traditionally turn a profit for the year, ROSE 8.3 PERCENT from a year earlier to $10.3 billion, Chicago-based research firm ShopperTrak RCT Corp. said"
AND
7. "Combined sales for Friday and Saturday ROSE 7.2 percent to 16.4 billion, the firm said Sunday."
Dr. Paul's Conclusion:
HEADLINE SHOULD HAVE READ:
"RETAILERS HAVE HECK OF A THANKSGIVING!"
Sunday, November 25, 2007
The Webomator

No, Mr Schenck... I don't want your stereo next - I just want to spread the word
Bradley W. Schenck is, IMHO, an extremely talented artist in several media who has (for lack of a better term) 'a nest' of websites for different purposes. I've appreciated his work for years. His art includes some absolutely fabulous 'Retro Future' and Celtic Art stuff – and a lot of it is for sale.
---------------------------------------------------------------------------
Sidebar: I would show a sample from his site, but since he took the trouble to (humorously) 'right-click'-protect a lot of it, I bypassed the protection to save copies to my desktop as a point of honor, and then honored his desire to see his stuff NOT proliferate unattributed by almost* immediately destroying that which I had lifted.
*Almost = I DID e-mail a graphic to the Instapundit in the hope that he might spread the word far wider than I can.
---------------------------------------------------------------------------
My saved links to his stuff were ancient; carried forward from at least two computers ago. After just now dragging them up and exploring them a little more than usual, I found he now has a blog as well. It looks like as good a place as any to start to explore his universe: visit Webomator now!
Disclaimer: I have no direct or indirect affiliation with Mr Schenck other than an affinity for his artwork, and I doubt very much we would see eye-to-eye on too many issues of the day. But no matter that he probably would not consider doing an "Appeasement Never Solved Anything" T-shirt for me -- I still think the guy is THAT talented, and more people need to know about him.
I'll be linking to his blog in my 'favorites' soon.
I'm Ironman
So I'm "Ironman" (just like Eric at Classical Values?)
Your results:
You are Iron Man
Iron Man: 75%
Green Lantern: 70%
Hulk: 65%
The Flash: 65%
Superman: 60%
Supergirl: 55%
Robin: 55%
Spider-Man: 40%
Wonder Woman: 35%
Batman: 25%
Catwoman: 25%
Click here to take the Superhero Personality Quiz
OK. I'm comfortable with the fact that I'm a little more Hulk and Superman and a lot less Spider-Man than Eric. Dare I take comfort in the fact that I'm a lot less Catwoman and Wonder Woman? Or should I be concerned that I'm a lot more Supergirl?

(I think Dr Helen would probably find that how we feel about the score is a lot more important and revealing than how or what we score.)
I'll take the Libertarian quiz later. If it doesn't conclude I'm a "Personal Responsibility" Libertarian (i.e., a GENUINE Conservative Republican), I'll know it's a sham. (Insert Maniacal Laughter Here)
Thursday, November 22, 2007
Al Durah Affair Update

Breath of the Beast nails the whole sordid "al Durah Affair" and Charles Enderlin with Enderlin's Ocean of Blood.
Read and Heed.
Wednesday, November 21, 2007
Clueless Stanford Law School Dean
Paul Mirengoff at Powerline has put up a recent series of interesting posts questioning whether Stanford Law School is in compliance with the Solomon Amendment, which (as described at Powerline):
…requires schools receiving federal funding to give access to military representatives for recruiting purposes, and to treat military recruiters in the same way they treat all other employment recruiters

Powerline has now received correspondence from the Dean of the Stanford Law School that puts up a rather weak case against the military's Don't Ask/Don't Tell policy, among other things, but the most telling point in the whole e-mail from the Dean gets picked up by Scott Johnson at the end. Johnson notes the Dean's statement:
"[N]o other employer has a rule precluding some students from obtaining employment for reasons wholly irrelevant to their ability to do the work. The military's recruitment policy tells a segment of our community, for reasons that have no bearing whatsoever on their willingness or ability to serve, that they cannot do so because some other people fear or hate them for who they are."

Johnson first notes that Dean Kramer is "attributing phobic motives to those who disagree with him", but he then immediately skips forward to the 'legal aspects' of the issue (legal eagle that he is) and properly points out that this is not just 'recruitment policy' but the LAW OF THE LAND.
I don't want to get into the policy-law distinction, though; I want to go back to the 'phobic motives' point.
What caught my eye in Dean Kramer's description was NOT the embedded 'phobia' canard at the end. What struck me was the absolute cluelessness about what the military is, and the lack of awareness of the argument behind not permitting open homosexuality in the military. The argument against homosexuals openly serving in the military is the SAME standard by which ALL types of conduct in the military are measured: social activity and behaviors MUST not adversely impact good order and discipline.
Perhaps, as a simple civilian, Dean Kramer is unaware that putting on the uniform involves more than just 'doing a job' 9-to-5 with 'billable hours': even JAGs may find themselves bunked in a Combat Outpost at some time in their careers.
Actually, I covered this a while back when Peter Pace was being attacked over his thoughts on the subject, so here's an excerpt of that earlier post, because the Dean seems like he might need a good example to help him think things through:
...the real issue is this:

Until separate sleeping and hygiene facilities that are provided in every possible field situation can be reasonably guaranteed to be equal to a female's vis-à-vis a heterosexual male, and vice versa -- how will (insert name here)'s sense of personal privacy and freedom from harassment be protected? Doesn't (insert name here) have as much of a right to not be quartered with a homosexual of the same sex as (insert name here) does to not be quartered with a heterosexual of the opposite sex? (And isn't all this PC gender-speak lovely?)

~Sigh~
When I run into ignorami spouting off about things military while totally ignorant of what it means to actually be IN the military, I want to run their nose up and down my sleeve so they can count the bumps till they bleed. (The only thing worse is someone who should know better and still engages in WILLFUL ignorance. They get both sleeves.)
Tuesday, November 20, 2007
Ted Kennedy Fun at Ann Althouse
I don't normally read the comments beyond Ms Althouse's post, but this time around there's some fun going on in the comments section. Evidently she has a regular(?) visitor, a 'Christopher', who takes exception to the mere mention of Uncle Ted's Driving School's safety record.
'Chris' actually begins defending the indefensible via a poor imitation of Taranto's now famous reminder: "Mary Jo Kopechne could not be reached for comment"
I honestly don't know why people don't just talk over and ignore his (Chris') hissy fits, but they keep trying to talk to the boy... and, as they say, "hilarity ensues."
"Where Y'all From?"
I don't usually go for these online quizzes, but this one brought out my curious streak. I'm a Texan. My Mom was a San Antone girl with Texas roots back to the days of the Republic and who had never left the state until she married my Dad. My Dad's Mother was a West Texas Girl (believe it: there is a distinct sub-group) and she met my Granddad in Texas. I lived in North and South Texas for about half my school-age years, and have been back home now about 5 years, after trying to get back ever since I retired from the Air Force.
But because my Father was first in the military and then a 'Migrant Aerospace Worker', between my childhood and adult lives I've also LIVED in Oregon (born there - a Texan born 'overseas', as it were), Alabama, Florida, Kentucky, Kansas, Connecticut, California, Colorado, Nevada, Alaska, Arizona, Utah, and Iceland. I've visited Canada, Europe, and the Caribbean, and have actually visited every state except Hawaii. Everywhere I've gone in the States EXCEPT the South, people usually assume I'm a local (but after five years my "Y'all" is starting to come naturally again).
THIS is what happens when you live everywhere:
What American accent do you have? Your Result: The Midland.

"You have a Midland accent" is just another way of saying "you don't have an accent." You probably are from the Midland (Pennsylvania, southern Ohio, southern Indiana, southern Illinois, and Missouri), but then for all we know you could be from Florida or Charleston or one of those big southern cities like Atlanta or Dallas. You have a good voice for TV and radio.

Runners-up, in order: Philadelphia, Boston, The Northeast, The West, The Inland North, The South, North Central.

(What American accent do you have? Quiz created on GoToQuiz)
You sound like you are from anywhere.
My Wife is just as bad or worse. Born in Maine into a career Air Force family, she slips from one speech pattern to another as easily as anyone I've ever seen or heard. We visited my folks in England in the early 80's and everyone thought we were Canadian at first. After a month in the 'Shires', I think everyone we met assumed I was a Canadian who had married a Brit.
I do love answering local friendly cashiers who seem to doubt my Texian origins and who frequently ask us "Where are Y'all from?". I usually have to throw in a few gratuitous "Y'alls" and "fixin' to's" to convince them that I really am a local boy.
The only downside I've experienced, as the oldest child and the only one who followed my Dad's 'Aero Bracero' ways, is that I sometimes have to ask for a translation from my siblings, who haven't moved around nearly as much or as far.
Wednesday, November 14, 2007
The “COST” of Iraq War?
H/T Instapundit
I contemplated spending some time debunking the Democrat talking-point memo masquerading as a report on the cost of the "Iraq War" when the news broke yesterday, but decided to write about something else, thinking that the Dems' analytical basis was so lame that someone with much greater readership would chop it down to size – and today I was proved correct.
James Pethokoukis at US News & World Report takes the Democrats to task in his blog today for failing to consider the costs of containing Iraq:
Should we then assume that by not waging the war, Uncle Sam would be a trillion dollars to the better? That would be a questionable assumption, a product of a sort of "static analysis" that assumes if you change one critical factor, all the rest stay pretty much the same. Professional futurists, like the ones at the Big Oil companies, know better than that. They give clients a range of scenarios based on different values for different variables. And that is also what three economists at the University of Chicago's business school did in 2006. They looked at the costs of not going to war with Iraq back in 2003.

Mr. Pethokoukis then points out that the U of Chicago study examined the costs of CONTAINING Iraq (emphasis mine).
Advocates for forcible regime change in Iraq expressed several concerns about the pre-war containment policy. Some stressed an erosion of political support for the containment policy that threatened to undermine its effectiveness and lead to a much costlier conflict with Iraq in the future. Others stressed the difficulty of compelling Iraqi compliance with a rigorous process of weapons inspections and disarmament, widely seen as a critical element of containment. And others stressed the potential for Iraqi collaboration with international terrorist groups. To evaluate these concerns, we model the possibility that an effective containment policy might require the mounting of costly threats and might lead to a limited war or a full-scale regime-changing war against Iraq at a later date. We also consider the possibility that the survival of a hostile Iraqi regime raises the probability of a major terrorist attack on the United States.

That last sentence was the key one for me, and we'll get back to it in a moment. Pethokoukis' analysis continues:
Factoring in all those contingencies, the authors find that a containment policy would cost anywhere from $350 billion to $700 billion. Now when you further factor in that 1) a containment policy might also have led to a higher risk premium in the oil markets if Iraq was seen to be gaining in military power despite our efforts to box it in, and 2) money not borrowed and spent on Iraq might well have been spent on something else given the White House's free-spending ways, it's easy to see that doing a cost-benefit analysis on "war vs. containment" might have left administration officials with no clear-cut economic answer.

Mr. Pethokoukis parenthetically provides a link to the House Republican reply to the Democrats' 'defective' report. The response is too soft on the hard numbers to my way of thinking – but that is OK, considering it is a 'quick-turn' response to a Democratic sneak attack. Mr. Pethokoukis closes by pointing out that others have reminded us that the cost-benefit isn't all that important in the scheme of things, via a 2006 reference to the Becker-Posner Blog.
So how can we think about the VALUE of taking Saddam out?
With the status quo being what it was in 2001, what were the chances that Saddam would have been passive in the wake of our success in Afghanistan? Doesn't the fact that Zarqawi moved into Iraq after he was treated in Iran for injuries received in Afghanistan, or the fact that Saddam had allowed/supported the training of thousands of terrorists in the years leading up to the invasion of Iraq, perhaps indicate that Saddam was anything BUT passively standing on the sidelines?
Finally, we have spent the last four or so years killing an increasing number of foreign radicals who came to Iraq AFTER we freed it from the Baathists. Any rational mind MUST recognize that if we can kill or capture a radical Islamist in Iraq, he won't be able to do evil in the United States.
So, can we provide some reasoning to logically characterize the economic BENEFIT of taking Saddam down in Iraq? Of course!
I was going to take a stab at it, but a funny thing happened while researching the problem tonight: there is already an analysis out there! One that we can use to get a feel for the cost avoidance we've accomplished to date with the war in Iraq and our subsequent 'nation building', as a CRITICAL PART of the Global War on Terror (GWOT) -- something the Left would like to ignore and have the rest of us forget.
The analysis pre-dates the latest Iraq War and was produced by Professor Looney of the Center for Contemporary Conflict (CCC), a 'research arm' of the Naval Postgraduate School in Monterey. It is titled: "Economic Costs to the United States Stemming From the 9/11 Attacks".
Using the professor’s assessment of the impact from the 9/11 attacks, we can easily see the value of successfully preventing further attacks on US soil. Now I admit this approach is based on the belief that the terrorists WOULD stage such attacks if they were capable of doing so. This is an idea that does not require any imagination to accept, but I would argue requires a seriously fantastic imagination to deny.
Professor Looney estimated that the 9/11 attacks cost the United States approximately $22.5B in direct costs in the short term, and to that he added indirect costs based upon the impact of 9/11 on the economy:
Immediately after the attacks, leading forecast services sharply revised downward their projections of economic activity. The consensus forecast for U.S. real GDP growth was instantly downgraded by 0.5 percentage points for 2001 and 1.2 percentage points for 2002. The implied projected cumulative loss in national income through the end of 2003 amounted to 5 percentage points of annual GDP, or half a trillion dollars (emphasis mine).

So, rounding down to easy numbers, we have the cost of the 9/11 attacks estimated at "half a trillion dollars" over a two-year period. Taking an extremely conservative approach, and ignoring the compounding effects of multiple attacks on the US economy, we can see that every attack similar to 9/11 prevented since that time is worth 1/3 of the total cost that the Democrats claim to date. Ergo, all we would have had to accomplish in the GWOT so far was to keep Al Qaeda and their ilk too busy to carry out three lousy follow-on attacks, and the War in Iraq is a big-time money-saver!
Add a little more realism to the assumptions by factoring in the compounding effect that repeated attacks of possibly even smaller scale or lesser success might have on the US, and the War in Iraq becomes a freebie! At least, that’s how it would look to any moron who actually thought the cost of doing the right thing was in any way as relevant as doing something because it WAS the right thing.
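For readers who want the arithmetic laid bare, here's a minimal back-of-the-envelope sketch in Python of that break-even logic. The per-attack figure is Professor Looney's estimate; the total war-cost figure is the Democrats' claimed number as implied above (roughly three times the per-attack cost); and the 25% compounding factor is purely my own illustrative assumption:

    # Back-of-the-envelope cost-avoidance sketch (figures in trillions of dollars).
    # Assumptions: Looney's ~$0.5T economic hit per 9/11-scale attack, and a
    # claimed Iraq War cost of roughly $1.5T to date (as implied above).
    COST_PER_ATTACK = 0.5    # estimated economic loss per 9/11-scale attack
    CLAIMED_WAR_COST = 1.5   # the Democrats' claimed war cost to date

    # How many follow-on attacks must be prevented before the war "pays for itself"?
    break_even = CLAIMED_WAR_COST / COST_PER_ATTACK
    print(f"Break-even: {break_even:.0f} prevented attacks")  # -> 3

    # Now factor in compounding: each successive attack hits an already-weakened
    # economy, so assume (hypothetically) 25% extra drag per repeat attack.
    COMPOUNDING = 1.25
    avoided = sum(COST_PER_ATTACK * COMPOUNDING**i for i in range(3))
    print(f"Avoided cost of 3 attacks with compounding: ${avoided:.2f}T")  # -> $1.91T

Under even these crude assumptions, preventing just three follow-on attacks covers the claimed cost, and the compounding effect only makes the math more lopsided.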
Hey! This is the second post in a row that I get to close with:
As the old saying goes: "Too many people know the price of everything but the value of nothing".
Tuesday, November 13, 2007
Last Defense Support Program Satellite Launched
This week the US launched its last DSP satellite: DSP 23.
How well have the DSPs performed? From the article linked in the title of this post:
The launch of DSP 23 extends the service of a satellite constellation that has been the nation's eyes in the sky for nearly four decades, providing warnings of tactical and strategic missile launches, nuclear detonations, and other technical intelligence. DSP satellites have operated four times beyond their specified design lives on average, and Flight 23 is expected to serve well into the next decade....
DSP satellites set a high standard for performance. The satellite's longevity has provided an extra 162 satellite-years on-orbit to date, the equivalent of delivering 30 to 50 additional satellites (without the cost of the launch).
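That "30 to 50 additional satellites" equivalence is easy to sanity-check: divide the extra satellite-years by an assumed per-satellite design life. Here's a quick sketch; the three-to-five-year design-life bracket is my assumption, chosen to show where the quoted range comes from, not a published spec:

    # Sanity check on the "30 to 50 additional satellites" equivalence.
    # Assumption: per-satellite design lives in a roughly 3-to-5-year bracket
    # (my illustrative range, not a published spec).
    EXTRA_SATELLITE_YEARS = 162  # extra on-orbit service beyond design life

    for design_life in (3.25, 4.0, 5.0):  # assumed design life in years
        equivalent = EXTRA_SATELLITE_YEARS / design_life
        print(f"{design_life:.2f}-yr design life -> ~{equivalent:.0f} extra satellites")
    # -> ~50, ~40, and ~32 extra satellites, neatly bracketing the quoted range

Any way you slice it, that is decades of coverage the taxpayer never had to buy again.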
While the performance of the first DSPs launched, beginning in 1970, was a national secret, and very little was known about how the program matured throughout its development, we will no doubt continue to endure a steady stream of hand-wringing and whining over the DSP follow-on: the Space-Based Infrared System (SBIRS).
As the old saying goes: "Too many people know the price of everything but the value of nothing". Think of DSPs and SBIRS as preventing national blindness.
