Showing posts with label Risk Management. Show all posts

Monday, August 20, 2012

Introducing Suzie Dershowitz Part 2

Today we will be ‘Provoking Accountability’….Of the ‘Unaccountable’



Smiling Suzie Dershowitz, with an incredibly hubris-ridden slogan. Source: POGO
(and why does this remind me of a Jonah Goldberg book?)

POGO Major Ploy Du Jour #1: False Non-Political/Partisanship claims. Hint: Libertarian is NOT Conservative.

A Continuation From Part 1

Ms. Dershowitz opened her 'piece' (see part 1) by offering a title and a couple of paragraphs intimating POGO's position on reducing defense spending has broad support:
A recent study by Benjamin Zycher from the libertarian think tank the CATO Institute reaffirms what we've been saying all along: Cutting Pentagon spending will not cause the economic nightmare or job loss catastrophe the defense industry wants us to fear.
In addition to CATO, other right-leaning analysts, advocates, and politicians have also been vocally challenging the narrative that defense spending must not be decreased. Grover Norquist, president of Americans for Tax Reform, recently pledged to fight any efforts to divert tax reform revenues toward an increase in Pentagon spending or avoiding across-the-board budget cuts, known as sequestration. Rep. Roscoe Bartlett (R-Md.), a senior Republican on the House Armed Services Committee, has called for a national dialogue on sequestration, recognizing that "the average American out there, by big percentages, wants to cut defense by twice the sequester amount."
Got that? The 'spectrum' of support includes:
  1. ‘Big L’ Libertarian CATO, which thinks in terms of Republic OR Empire, and holds that if you're not all at home, well then you must be an Empire
  2. A Cult of Personality ‘small government’ activist of the extreme self-serving more than tax-cut ilk, AKA Grover Norquist, and
  3. ONE Republican Congressional dinosaur who just happens to be in a fight to keep his seat: the sole Republican House Seat in a district that has been redrawn to his disadvantage since the last election.
Who in that group would today be likely to place a priority on defense spending compared to their other interests?
Answer: None of them.
 

 About ‘Big L’ CATO and Defense  

‘Big L’ CATO has a Pollyanna view of world affairs. It lives under the delusion that the US can afford to downsize the military because THEY don’t see the ‘threat’, a view combined with a somewhat more ‘passive isolationist’ vision of the United States’ role in world affairs versus the current (and faded under Obama) role as the benevolent and last remaining Superpower. This would be perfectly acceptable, if CATO would then qualify its statements with the caveat “In CATO’s opinion, view, vision, we believe X”. But they don’t qualify. They flatly assert we need to reduce our defense spending and our involvement in the world’s affairs, that there is no ‘threat’ that warrants current defense spending levels, etc. (see this video, which could have been the germ for the POGO regurgitation). In doing so they look right past the point that if the United States does not ensure its interests are taken care of around the globe, someone else will take care of them for us in the manner of their choosing. The focus on visible ‘threats’ conveniently prevents them from having to recognize: 
  • The positive economic effects of close defense relationships with our allies, 
  • The deterrent effects to those who would seek to cause us indirect as well as direct harm, economic or otherwise, 
  • The advantages of having ‘friends’ and forces in place for any emergency (most likely unforeseen) no matter where on the globe that emergency might appear. 
 
As I’ve always said: I would be a Libertarian, if they had a frickin’ clue when it comes to defense; but then if they did, they would be good Conservatives. Here’s a tip for CATO:
If POGO and PDA are on your side—you are on the wrong side.
As it is, your defense ‘work’ just gives aid and comfort to the enemy. Sad.
BTW: Notice between the CATO ‘study’ and the CATO video, there is a conflation of the topics of ‘defense reductions’ in general and ‘defense sequestration’ specifically? This serves to abstract the issue and make it more ‘feely’ than ‘factual’. We’ll work on that later.
 

Part 3
Part 4

Wednesday, August 15, 2012

The Long Range Strike Imperative


World's Finest Bulk Exporter of Tritonal and Steel 
An "Op For" post (more specifically a comment in the thread) reminded me that there are ‘those’ out there who think we are ‘fat’ with the most critical Long Range Strike assets (AKA Strategic Bombers).
For those so inclined, I would counter with (Emphasis Mine):
Nations that can maintain freedom of action and the ability to threaten and apply violent force without retaliation will hold the ultimate strategic advantage. Failure to maintain credible LRS capabilities diminishes the effectiveness of the other instruments of national power. Although the US military has provided a dependable backdrop of international security for over 60 years, the size of that force has diminished recently even though the need for a strong force has not. In light of the present situation, one that closely resembles the slow demise of the British and Roman global powers, we would do well to heed Julian Corbett’s remarks about the intrinsic advantage of sea control during the waning years of Britain’s global preeminence: “Yet the fact remains that all the great continental masters of war have feared or valued British intervention . . . because they looked for its effects rather in the threat than in the performance. . . . Its operative action was that it threatened positive results unless it were strongly met.” Just as sea control and power projection proved critical for Britain, so is LRS valuable for today’s leading nations. Global actors such as China, Russia, and India recognize LRS’s strategic value, considering it imperative to a successful national security strategy. These rising global competitors, especially China and Russia, seek to obtain or develop their own LRS and to cultivate antiaccess [sic] and area denial capabilities to diminish the enduring strategic advantage of the United States...

--- Major Wade S. Karren, USAF. Read it all HERE (PDF).

"It's ALWAYS the 'Fighter's Turn', It's just that every now and then the rest get their fair share". Even with whatever the NGB will become, this chart won't change much from when I first built it around 2000
We are not ‘fat’ with Long Range Strike/Strategic Bomber capabilities.  
We are not even ‘fluffy’.


'Marauder' in the comments nails it (Photo Added 18 Aug 2012)

Tuesday, July 10, 2012

F-35 PAUC and APUC

Sheesh. I shouldn't surf the web after midnight (or at least not comment)

What I MEANT to type last night while commenting on a very good F-35 "Costs" article:

Kudos.
You honestly expand on a difficulty where many have seen opportunity to sow confusion.
PAUC among other things includes RDT&E and all costs associated with production of the item such as hardware/software, systems engineering (SE), engineering changes and warranties plus the costs of procuring technical data, training, support equipment, and initial spares. But there is one aspect of PAUC that can make it VERY inappropriate for telling people what something WILL cost them: PAUC includes ‘sunk’ cost.
Most notably, in the F-35’s case, it includes the percentage of the RDT&E, Production, Engineering, and Technical Data costs that have already been incurred. Since the primary production line and RDT&E capabilities for the F-35 are already stood up, and all the suppliers' engineering and production capabilities are running in place waiting for the higher production demand, this has to represent a huge chunk of PAUC [though APUC is still correct and is part of PAUC, I meant to type the latter] that is already sunk cost.
Try explaining to the man in the street that the PAUC went up because of conscious decisions to defer higher rates of production and stretch development to ‘reduce risks’, and NOT because the Contractor is jacking up the price. People’s eyes will glaze over if you try to explain everything that goes into the PAUC or APUC: many of the costs tacked on to the PAUC would make no sense to the average citizen because we don’t buy things the way a government does. Example: the Man in the Street doesn’t add the cost of a new garage to the cost of his new 4x4 because it is too big to put in the garage he already has. He pays the money and then observes he has bought a new 4x4 AND a new garage.
While PAUC is considered the ‘true’ cost of the plane by ‘some’, it isn’t. It is just an aggregation of a lot of direct costs that are then booked against each plane by dividing by the number of units. Obviously it includes the costs of infrastructure, new technology, and new knowledge. Much of that will invariably be used to advantage elsewhere; it just gets BOOKED against the program of record.
On the other hand, URF (Unit Recurring Flyaway) is something people will understand, because it’s the dollar cost to buy ‘just one’. Just like the store down the street.
If you must, use both numbers. But only PAUC requires extensive explanation to prevent misrepresentation. And once you have significant sunk costs, to be completely honest with the public, you should also provide the PAUC for producing NO more units, including cancellation costs. If the requirement demands a new program after a cancellation, add the estimated PAUC for that program as well. Let the public see the true cost tradeoffs involved.
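For readers who like to see the arithmetic, here's a quick sketch of the unit-cost metrics discussed above. The dollar figures are invented round numbers for illustration only, NOT F-35 data, and the cost breakdown (RDT&E plus procurement plus military construction over total program units for PAUC; procurement dollars over procured units for APUC) follows the standard definitions as I understand them:

```python
# Notional unit-cost arithmetic for the acquisition metrics discussed above.
# All dollar figures are invented round numbers, not F-35 data.

def pauc(rdte, procurement, milcon, total_units):
    """Program Acquisition Unit Cost: all program dollars / all units."""
    return (rdte + procurement + milcon) / total_units

def apuc(procurement, procurement_units):
    """Average Procurement Unit Cost: procurement dollars / procured units."""
    return procurement / procurement_units

# Notional program: $50B RDT&E, $330B procurement, $2B MILCON, 2,400 aircraft.
RDTE, PROC, MILCON, UNITS = 50e9, 330e9, 2e9, 2400

print(f"PAUC: ${pauc(RDTE, PROC, MILCON, UNITS)/1e6:.1f}M per unit")
print(f"APUC: ${apuc(PROC, UNITS)/1e6:.1f}M per unit")

# The 'sunk cost' point made above: if, say, $60B is already spent, the
# marginal cost of continuing is only what remains, spread over the units.
sunk = 60e9
cost_to_go = (RDTE + PROC + MILCON) - sunk
print(f"Average cost-to-go per unit: ${cost_to_go/UNITS/1e6:.1f}M")
```

Note how the PAUC figure stays fixed no matter how much of it is already water under the bridge, which is exactly why it misleads when presented as what a unit "will cost".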

Monday, July 09, 2012

POGO Wrongly Cries “Foul!”... While Sniping in a Ghillie Suit

Guerrilla Reformers Falsely Accuse Defense Industry of Guerrilla Tactics 


UPDATED AND BUMPED 9 July 2012 (UPDATE BELOW: Look for the RED) 

Last week, POGO’s Ben Freeman posted another fact-free and ideologically-driven screed, this time at the ‘Puffington Host’ (You know where I mean. I try not to ever link to that swamp) titled “The Guerrilla Warfare of Pentagon Contractors”. To give you the flavor of the misdirection he peddles within, here’s a clip that gives a pretty good summation [emphasis mine]:
Last week Politico reported that defense contractor's new plan is to "threaten to send out layoff notices -- hundreds of thousands of them, right before Election Day." This threat is intended to frighten incumbents into rolling back the impending Budget Control Act sequestration, which would reduce Pentagon spending by roughly ten percent per year for the next ten years.
Despite the doomsday rhetoric and contractor funded "studies" reporting grossly overinflated job losses they claim would result if the Pentagon's more than half a trillion dollar budget is cut, there is absolutely no reason these companies would need to have massive layoffs. This is nothing more than a political stunt.

One would think POGO should know a stunt when they see one, but they either fell short this time or are willfully prevaricating. Perhaps it is because they aren’t too familiar with parts of acquisition law concerning Government contracting and labor rules? I do suppose there’s no exposure to the workings of the current monopsony in POGO’s exclusive digs in the Ivory Tower end of Castle 'Non-Profit'?

Contrast POGO’s flippant dismissal with this excerpt from a recent Defense News article:
Panetta’s meetings come a week after the heads of Lockheed Martin, Northrop Grumman and Pratt & Whitney met with top Office of Management and Budget officials seeking greater clarity on the government’s plan for implementing nearly $500 billion in mandatory defense cuts over the next decade that are scheduled to start Jan. 2.
OMB told the executives it does not plan to issue sequestration implementation guidance until after the November elections, sources said. The meeting was requested by Aerospace Industries Association President Marion Blakey.
Although defense industry leaders have long said that planning for sequestration will be difficult given it is unclear what the specific impact of automatic cuts will be, they have become increasingly vocal that job losses would be unavoidable starting in January.
And they’ve stressed that federal guidelines require them to notify their workers of potential mass layoffs at least 60 days in advance — that would be on the eve of the election.
Source: AEI
Having been in the industry long enough to have found myself on the receiving end of one of those federal ‘60 day notices’ when just one Government program was cancelled or cut back, and having witnessed many others, I can tell you POGO’s dismissive attitude speaks volumes as to their indifference and/or ignorance. Multiple programs being suddenly cut/cancelled/impacted for reasons other than cause can only cause chaos in the industry. Carrying out such pointless cuts every year over a period of years? Sounds like POGO/Leftard heaven and National Defense Hell. Ask anyone who’s been around Defense Aerospace ‘more than a minute’. They’ll tell you: POGO is full of Sh*t.
Freeman’s POGO puff piece is irritating, but it is more important to keep in mind what this whole sequestration gambit is really about: Democrats playing political games with National Defense.

-------------------------

 Quick Sidebar: Hey! I see from their website that not only has Winslow Wheeler moved his shingle under the POGO rubric, he seems to have brought both the Strauss Military Reform Project and the Center for Defense Information with him (link)! I suppose this tells us something about how Reformers are dealing with a diminishing donor base. As I noted earlier: I love it when targets bunch up. On the downside, it seems “the radical trust fund baby cum ‘photographer’ [HASN’T] got tired of paying his salary”.
-------------------------

Well Lookee’ Here!  POGO’s got Their Own ‘Snake Eaters’ On Point  

So while POGO’s Freeman is claiming the Defense Industry is employing ‘Guerilla Tactics’, I’ve noticed a marked uptick in foreign blogs and online alternative newspapers containing references to POGO’s pet ‘expert’ commentators. POGO ‘special operators/fellow travelers’ seem to be most active in F-35 Partner nations where economic conditions are tightest, and in countries that represent existing or emerging markets for F-35 Foreign Military Sales (FMS). What a surprise (Not!). The most recent one to catch my eye was an English-version Korean ‘alternative’ paper article by one delightfully named ‘Stuart Smallwood’, who also mirrored most of his piece at his own blog.
Smallwood’s entire post reads like a POGO press release, and it is quite obvious from his phrasing and the conclusions surrounding his commentary that Mr. Smallwood (a ‘grad student’ in "Asian Studies" out of Canada now mucking around in other people’s cultures, Eh?) hasn’t a freakin’ clue as to what he is writing about. In the comments thread of his ‘blog’ last night I posted a challenge:
Heh. If I demonstrate that your post is erroneous on at least one or more key points, will you promise to never again publicly opine on defense topics about which you are ill-informed and not equipped [to discuss*]? And if so, will you also give POGO back the spoon with which they have been feeding you this stuff?
*I have an oversensitive touchpad on my laptop (that I keep turning off and Microsoft keeps turning on whenever they push updates) that causes me no end of typo and edit problems. I didn’t catch two words had dropped until after I posted my comment.


When I went back today to see if my kind offer was accepted, I found not only was it rejected, but that it seemed to have been deleted (shocker). Not much of a Snake Eater after all, eh?
In the last comment on the short Smallwood thread, a thread which had quickly devolved into fantastic familial allegations about ‘bullying allies’, you will see as of this posting a comment (from his Mom?/Sister?) proclaiming: “bullying is everywhere!”. Perhaps, Ms. Smallwood, perhaps. But it appears to be not nearly so widespread as intellectual cowardice. It’s to be expected under the circumstances, I suppose. I have found that among the professions, the thinness of the skin is inversely proportional to the intellectual rigor required of its practitioners. [/snark]

**************************** 

Update/Correction: Seems Smallwood's Got Game (Good on Him)

My comment has 'reappeared' in the thread:


I take back half the things I've said already. If he chooses wisely... well, we'll see about the rest later.
Which point will I select for debunking?  I'm leaning towards "the myth of stealth". Stay tuned.

(Special thanks to my reader who e-mailed me the "head's up" on this development)

************** END OF UPDATE**************  

On a More Serious Note

Catching POGO in their machinations could simply be left as a case of blaming others for what they themselves are guilty of: akin to when a grifter gets caught in the ‘act’. But in the war of words, POGO’s moves are a cross between Rules For Radicals and at least one of the best military theorists.
“If your enemy is secure at all points, be prepared for him. If he is in superior strength, evade him. If your opponent is temperamental, seek to irritate him. Pretend to be weak, that he may grow arrogant. If he is taking his ease, give him no rest. If his forces are united, separate them. If sovereign and subject are in accord, put division between them. Attack him where he is unprepared, appear where you are not expected.” 
—Sun Tzu
Their biggest disadvantage is that they scurry like vermin when the light hits them. 

P.S. Anyone else about had it with Blogger's formatting quirks?

Friday, July 06, 2012

Strange Silence on GAO F-35 June 2012 ‘Report’

F-35A USAF Photo

There’s evidence the report is either a blatant political hack job or there are absolutely NO experts on Reliability at the GAO. Take your pick – either reason is equally damning.

Has anyone else noticed the comparative ‘silence’ over the last F-35 GAO report compared to the previous releases? Other than the rather strange and rambling “F-35 by the Numbers” at DoD Buzz and the usual unattributed fear-mongering about “Costs!” at AOL Defense, this time around there hasn’t been much caterwauling coming out from under the usual rocks. My first thought was perhaps the POGO et al crowd was winding up to deliver another integrated PR attack against the program across a broad far-left front.

I decided to take the time to actually read the report itself in hopes of perhaps getting a preview of the latest Doomsayer topic du jour. Imagine my surprise when I found……not much: no blockbuster surprises, and surprisingly little hard information. There’s no ‘there’ there. It is “Same Sh*t. Different Day” in GAO-land.

There is a lot of unmitigated puffery and bull-hooey in this latest edition from the GAO. A good portion of it hinges on understanding a little ‘something’ within the report (as well as the missing associated bits) that strikes this experienced eye as more than a trifle ‘odd’. It is bizarre to the point that it raises my suspicions that the F-35 program may either be progressing better than ‘some’ would have us believe, or at least NOT doing as poorly as those same ‘some’ WISH we would believe.

If the GAO’s failings in this report are due to incompetence and inexperience, as is always my first instinct to assume, I think that speaks of an even more unfortunate situation. We can overcome intrigue with the light from facts, figures and reason. But institutionalized incompetence? That can be a much tougher nut to crack. The reliability discussion was the part of the report that I found dubious. Quite frankly, it makes me wonder what it is doing in this report at all, unless its entire purpose is to prop up the rest of the report:

According to program office data, the CTOL and STOVL variants are behind expected reliability growth plans at this point in the program. Figure 9 depicts progress of each variant in demonstrating mean flying hours between failures as reported by the program office in October 2011 and compares them to 2010 rates, the expectation at this point in time, and the ultimate goal at maturity.  


As of October 2011, reliability growth plans called for the STOVL to have achieved at least 2.2 flying hours between failures and the CTOL at least 3.7 hours by this point in the program. The STOVL is significantly behind plans, achieving about 0.5 hours between failures, or less than 25 percent of the plan. CTOL variant has demonstrated 2.6 hours between failures, about 70 percent of the rate expected at this point in time. The carrier variant is slightly ahead of its plan; however, it has flown many fewer flights and hours than the other variants.

JSF officials said that reliability rates are tracking below expectations primarily because identified fixes to correct deficiencies are not being implemented and tested in a timely manner. Officials also said the growth rate is difficult to track and to confidently project expected performance at maturity because of insufficient data from the relatively small number of flight hours flown. Based on the initial low reliability demonstrated thus far, the Director of Operational Test and Evaluation reported that the JSF has a significant challenge ahead to provide sufficient reliability growth to meet the operational requirement. 
The explicit characterization “the CTOL and STOVL variants are behind expected reliability growth plans at this point in the program” can only spring from willful distortion and misrepresentation of the facts in hand OR, more likely, from a pack of feral accountants and auditors nobly working around a gaping chasm in their own consequential knowledge as to how aircraft reliability programs actually ‘work’. Only someone with no idea of the true relevance of the data in their unprepared little hands would make such a statement. By demonstrating how aircraft reliability programs proceed, how measurements are made, and how performance is evaluated and graded, we will reveal the ludicrous, unintentional and laughable silliness of the GAO report excerpt above. That there apparently was no one in the Program Office who could have disabused them of this ignorance is even more disconcerting. 

For future reference then, I offer an introductory tutorial on how aircraft 'reliability' programs work. I’ll focus mostly on the F-35A numbers, but what is true for the F-35A is even truer for the F-35B and C, as they have even fewer flight hours.

Aircraft Reliability Isn’t Graded in the Cradle

Let’s begin by noting that by the end of October 2011, the timeframe given above, only approximately 2,000 total flight hours had been flown by all three F-35 variants. Given the F-35A had been flying earlier and in larger numbers than the other variants through that timeframe, we can safely assume the F-35A makes up at least half of the 2,000 hour total (~1000-1200 hours?). The failure rates shown for the CTOL version include hours flown by AA-1, the de facto F-35 prototype, which was markedly different from later aircraft (and is now retired from flight and undergoing live fire testing). Given that the typical operating time accumulated before an aircraft type design is considered ‘mature’ enough to evaluate and grade system reliability is 100,000 fleet flight hours (RAND TR-763 Summary, Pg xiii), just mentioning an F-35A reliability metric at the ~1% mark is pointless. Assigning any meaning to the same value and framing a narrative around it demonstrates profound stupidity and/or a hostile agenda.
As there are three major variants of the F-35, and the chart above shows values for all three, I would assume there was cause for the program to take some composite approach to benchmarking the F-35, whereby a value lower than 100,000 hours for each variant may have been selected due to commonality and overlap between systems (100,000 hours for each variant, while more statistically pure for benchmarking performance, would probably have seemed overgenerous and overkill to non-R&Mers, especially ‘bean counters’). Unless the program is supremely confident in the parts of the F-35 that are unique to each variant, it should keep the 100,000 hour benchmark at least for those unique variant aspects; but given the complexity of tracking partial and full system reliability, I doubt any program would view such an approach as workable. This means that later in the maturation process, if the unique systems and features of the variants aren’t measured against a 100,000 hour benchmark, they had better be ‘ahead of the curve’ for what normally would be expected in their reliability performance.

How Programs Track Reliability Growth

One may ask: How do programs achieve target reliability benchmarks at maturity if they aren’t being ‘graded’ on their progress as they go? The answer is they ARE evaluated; it is just that they are evaluated in terms of trends for discovering and eliminating root causes, as well as in relation to other metrics, to arrive at what the performance ‘means’ as part of the process of achieving required system reliability. Depending upon how far along the program is in maturing the system, the reliability performance at the time will mean different things and require different corrective or reinforcing actions. To illustrate what is evaluated, how a system is ‘matured’, and why it is impossible for a system to be ‘mature’ when it is first fielded, it is helpful to employ a typical reliability chart format with notional data for further reference and discussion. The following chart plots a hypothetical weapon system’s Mean Time Between Critical Failure (MTBCF) performance, which I suspect is what the GAO report incorrectly refers to as ‘Mean Time Between Failure’, though all the observations we are about to make are true in either case. ‘Conveniently’ for our purposes, the hypothetical weapon system in this chart has the identical 2.60 hour MTBCF at 2,000 hours, with the ultimate goal of 6 hours MTBCF at 100,000 flight hours, the same as noted in the GAO report for the F-35A.
Notional MTBCF Plot: Copyright 2012 Elements of Power
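As an aside, the two anchor values on the notional chart (2.60 hours MTBCF at 2,000 hours, 6 hours at 100,000 hours) are enough to pin down the straight line on log-log axes, if one assumes a Duane-style power-law growth curve. That model is my assumption for illustration only; a real program's growth curve may look quite different:

```python
import math

# Under a Duane-style power-law growth assumption (my assumption, not any
# program's actual model), cumulative MTBCF grows as M(T) = M0 * (T/T0)**alpha,
# which plots as a straight line on log-log axes.

def growth_slope(m0, t0, m1, t1):
    """Growth slope alpha implied by two (MTBCF, fleet hours) anchor points."""
    return math.log(m1 / m0) / math.log(t1 / t0)

def projected_mtbcf(m0, t0, alpha, t):
    """MTBCF the power-law line predicts at t fleet hours."""
    return m0 * (t / t0) ** alpha

# The notional chart values: 2.6 hr MTBCF at 2,000 fleet hours, 6.0 hr goal
# at 100,000 hours.
alpha = growth_slope(2.6, 2_000, 6.0, 100_000)
print(f"implied growth slope: {alpha:.3f}")

# Where would the line sit at 50,000 hours (a possible 'half-way' marker)?
print(f"MTBCF @ 50k hrs: {projected_mtbcf(2.6, 2_000, alpha, 50_000):.2f} hr")
```

The point of the exercise: the distance between 2,000 hours and any sensible grading point is measured in orders of magnitude, which the log-log plot makes obvious at a glance.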

The reader should immediately note that the chart above is plotted in a ‘Log-Log’ format: both chart axes use a logarithmic scale. This has the effect of allowing the clear display of early values, where wider variations in the data are to be expected, and of showing trends (and deviations from same) more accurately. As more statistically relevant data is accumulated, on through to the point selected for determining whether or not the system meets the reliability requirement at maturity, the deviation from the mean value should lessen (more about that later). The reader should also observe that there are three values logged after the notional 2.60 ‘measurement’.
These values illustrate that the ‘current’ value evaluated at any point in time is usually a few measurements behind the latest measurements because the latest values will have to be “adjudicated” to ensure they are error free. Adjudication can be a daunting, time-consuming process (voice of experience) that often requires iterative communications between the Reliability and Maintainability group and units in the field before the data is purged of errors.
Some actual examples come to mind that illustrate how errors are introduced. On one of my past programs, there was an episode where there appeared to be a sudden increase in failures and subsequent removal and replacement of a cockpit component. It was only through careful review and correlation of several months’ worth of event data that impossible crew sizes (you can’t get 20+ people in a cockpit at one time) were revealed, which led to R&M eventually finding out that the maintainer organizations were running a series of training events and incorrectly logging them against the aircraft.

The adjudication process itself may also contribute to the eventual improvement of the weapon system’s reliability score. One category of maintenance logged against an aircraft is ‘For Other Maintenance’ (FOM). “Once upon a time” a certain weapon system was showing excessive low observable “Red X” events which flagged a certain area of the plane as experiencing frequent Low Observable outer-mold line (surface) failures (this also generated an inordinate amount of aircraft ‘downtime’ affecting another metric). Through inaccurate logging of the ‘How Malfunctioned’ (How Mal) code, the data masked the fact that the LO maintenance was driven by the need to restore the surface treatments to complete the removal and replacement (R&R) of a component located behind the surface that required restoration. This incorrect data not only pointed the program R&M efforts in a wrong direction, it helped mask the impact, and delayed the ‘fixing’, of what was considered prior to this discovery to be a low priority “nuisance” software glitch. Priority was then given to fixing the ‘glitch’ and along with a change to tech data, a maintenance and reliability ‘high-driver’ was completely eliminated.

The values shown at individual points on the chart above are not cumulative values from the current and all previous data points. They represent a value arrived at from a regression analysis of the last 3-6 data points (usually taken monthly), and the latest snapshot trends are used for further evaluation in conjunction with other performance data to determine true progress and problem trends. I’ve placed markers at various flight hour totals showing where the possible half-way and full reliability flight hour measurement periods might fall for our hypothetical program, to illustrate just how far away 1000-1200 flight hours are from any likely MTBCF ‘grading’ point. 
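A minimal sketch of that 'snapshot' idea, using invented monthly data and a plain least-squares fit in log-log space. The window size and fit method here are illustrative assumptions on my part, not any program's actual procedure:

```python
import math

# The reported 'current' MTBCF is a regression over the last few monthly
# points (here fitted in log-log space), not a cumulative average.
# All data below is invented for illustration.

def snapshot_mtbcf(points, window=4):
    """Least-squares fit of log(MTBCF) vs log(fleet hours) over the last
    `window` points; returns the fitted value at the latest hours total."""
    pts = points[-window:]
    xs = [math.log(h) for h, _ in pts]
    ys = [math.log(m) for _, m in pts]
    n = len(pts)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept + slope * xs[-1])

# (fleet hours, adjudicated MTBCF) by month, invented numbers:
monthly = [(800, 1.9), (1100, 2.1), (1400, 2.3), (1700, 2.4), (2000, 2.6)]
print(f"snapshot value: {snapshot_mtbcf(monthly):.2f} hr")
```

Because the fit only looks at the last few adjudicated points, a single noisy month moves the snapshot far less than it would move a raw monthly figure.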

Dominant Factors When Experience is Low

‘Failures’ logged and tracked fall into three broad categories: Inherent or Design-Driven, Induced, or No Fault Found/Cannot Duplicate (NFF/CND), aka ‘false alarm’. When the flight hours of a new weapon system are few, the data tends to be more representative of operator and program learning curves than of actual aircraft reliability, to the point that ‘No Fault Found’ and ‘Induced’ often represent one half to two-thirds of the total ‘failures’; it is entirely within the realm of the possible that this is true at this time for the F-35. If the F-35 failure rate were driven by design problems, we would expect to also see the GAO warning of undesirable ‘mission readiness rates’, ‘maintenance man-hours per flying hour’ or other negative performance measures. Without these kinds of details, any standalone MTBCF number is meaningless. Given there is no mention in the (GAO) report of what we would expect to see if the F-35’s ‘failures’ to date were dominated by design problems, I suspect the design reliability might be seen as ‘pretty good’ at this point in time by the R&Mers (Program Managers always want ‘more’, and ‘sooner’, so no one will ever claim ‘good enough’ until all the reliability measurement hours are adjudicated).
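The arithmetic consequence of that category split is worth seeing in the open: if half to two-thirds of logged 'failures' are Induced or NFF/CND, the design-driven MTBF is two to three times better than the raw number suggests. A quick sketch with notional values:

```python
# Back-of-envelope for the failure-category split described above: with a
# given fraction of logged events being Induced or NFF/CND rather than
# design-driven, the inherent MTBF is correspondingly better than the raw
# number. Figures below are notional, not F-35 data.

def inherent_mtbf(raw_mtbf, non_inherent_fraction):
    """MTBF counting only inherent (design-driven) failures, given the
    fraction of logged events that are Induced or NFF/CND."""
    return raw_mtbf / (1.0 - non_inherent_fraction)

raw = 2.6  # hours between *all* logged 'failures'
for frac in (0.5, 2 / 3):
    print(f"non-inherent fraction {frac:.0%}: inherent MTBF "
          f"{inherent_mtbf(raw, frac):.1f} hr")
```

With a raw 2.6 hours and half the events non-inherent, the design-driven figure is 5.2 hours; at two-thirds, it is 7.8 hours, which is why a standalone early MTBCF number tells you almost nothing about the design.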
US Navy Photo

STOVL Sidebar

The GAO report notes the STOVL ‘reliability’ figure as being even farther below the ‘expected’ value. As the first production F-35Bs were delivered in January of 2012, after the period ‘graded’, and the total hours flown must be far less than even the ‘A’ model’s paltry ~1000-1200 flight hours, the GAO even showing the numbers, much less asserting that the “STOVL is significantly behind plans”, is pitiably ignorant, but still useful for two reasons I’m certain the GAO didn’t intend.
First, the GAO’s statements clearly tie the numbers presented to a ‘plan’. Whether this ‘plan’ refers to the calendar schedule (which I suspect is true) or to planned flight hours through October 2011, both are inappropriate yardsticks for MTB(C)F. The ACTUAL hours are what is relevant to the metric, and we’ve already covered how limited experience means less meaningful data.
Second, the STOVL observations help highlight something I’ve dealt with previously in managing small fleet performance improvements: something I call “The Tyranny of Small Numbers”. The very limited number of aircraft evaluated means that even a single ‘early’ failure event for one aircraft carries larger penalties than for a larger fleet. May we expect many more years of ‘behind plan’ reports from the GAO as a result of the ‘concurrency’ bogeyman used as an excuse to stretch the program?
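The “Tyranny of Small Numbers” can be sketched with invented figures: the identical single additional failure moves the observed MTBCF of a tiny fleet dramatically, while barely registering for a mature fleet with the same underlying reliability.

```python
def observed_mtbcf(total_hours, failures):
    # point estimate: fleet hours flown divided by critical failures observed
    return total_hours / failures

# Invented fleets, both with a 'true' MTBCF of 100 h before the extra failure
small = observed_mtbcf(total_hours=200, failures=2)        # tiny early fleet
small_plus1 = observed_mtbcf(total_hours=200, failures=3)  # one more failure

large = observed_mtbcf(total_hours=20000, failures=200)        # mature fleet
large_plus1 = observed_mtbcf(total_hours=20000, failures=201)  # one more failure

print(f"Small fleet: {small:.1f} h -> {small_plus1:.1f} h "
      f"({(1 - small_plus1 / small) * 100:.0f}% drop)")
print(f"Large fleet: {large:.1f} h -> {large_plus1:.1f} h "
      f"({(1 - large_plus1 / large) * 100:.1f}% drop)")
```

One ‘early’ failure event knocks a third off the small fleet’s observed number while costing the large fleet half a percent, which is exactly why snapshot comparisons against a ‘plan’ mislead at this stage.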
At the end of the period covered in the GAO report, the B models were getting some pretty important part number rollovers implemented. Besides highlighting the fact that GAO reporting always lags the program’s current status and is thus always out of date, perhaps this was the source of the “because identified fixes to correct deficiencies are not being implemented and tested in a timely manner” cheap shot in the GAO report? (More about that below.)

How Programs Manage Reliability Growth to Maturity

In viewing the chart above, the reader will see three dashed lines. The ‘red line’ is set at a level where the program has decided that any time the metric moves below it, extra attention is triggered: determining root causes, evaluating corrective actions in work, and possibly deciding whether additional actions are warranted. The ‘blue line’ represents the level of desired or expected reliability performance at every point along the timeline. As the program proceeds, the values recorded should cluster progressively tighter at or above the blue line. Both the red and blue lines may be straight, as shown, or curved. They may also incorporate ‘steps’ to reflect intermediate thresholds the program office expects to meet. If system performance moves much above the ‘green line’ representing the weapon system’s specified reliability requirement, believe it or not, the program may review the weapon system to eliminate the ‘extra’ reliability if it is being achieved at associated higher costs.
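A minimal sketch of that threshold logic, assuming a Duane-style growth curve stands in for the ‘blue line’ (all parameters, fractions, and the spec value below are invented for illustration, not from any real program):

```python
def blue_line(cum_hours, a=2.0, b=0.3):
    # expected MTBCF growth vs. cumulative flight hours (invented Duane-style curve)
    return a * cum_hours ** b

def classify(observed_mtbcf, cum_hours, red_fraction=0.7, green_spec=20.0):
    """Place an observed MTBCF value against the red/blue/green thresholds."""
    expected = blue_line(cum_hours)
    if observed_mtbcf < red_fraction * expected:
        return "below red line: trigger root-cause review"
    if observed_mtbcf > green_spec:
        return "above spec: review cost of 'extra' reliability"
    return "tracking between red and blue/green lines: monitor"

print(classify(observed_mtbcf=3.0, cum_hours=1000))   # well below plan
print(classify(observed_mtbcf=25.0, cum_hours=5000))  # above the green line
print(classify(observed_mtbcf=15.0, cum_hours=1000))  # nominal
```

The point of the sketch is that the same observed value means different things at different points on the timeline, since the thresholds move with cumulative experience.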

Value and Tradeoffs

It must be remembered that every performance specification requirement is arrived at during the requirements process by making tradeoffs between performance values and the costs to achieve those values to meet mission requirements. If any single performance metric, such as MTBCF, fails to achieve the specified levels, the real impact cannot be understood by looking at the metric in isolation. MTBCF is one of the more interesting metrics in that once it rises above the expected (and designed) sortie length, its relevance begins shifting more towards its implications for, and impacts to, other metrics. By way of example, if our hypothetical program achieves 5.9 hours MTBCF, the probability of successfully completing the mission is reduced by an insignificant amount compared to the specified 6.0 hours. If the Mean Time To Repair (MTTR) is but a fraction of the allowable time and/or the Maintenance Man-Hours Per Flying Hour (MMH/FH) is lower than the maximum allowable, the program office would have to determine the value (cost vs. benefit) of pursuing that last 6 minutes between failures before deciding to ‘go after it’. By ‘value’ I mean that if metrics such as MTTR and MMH/FH are better than the predicted and required levels, the program will have to weigh the increased material costs (if any) from that 6-minute ‘shortfall’ over the life of the program against all the other factors.
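Assuming a simple exponential failure model, where P(no critical failure during a sortie) = exp(-sortie length / MTBCF), the 5.9-versus-6.0-hour example can be put in rough numbers. The 2-hour sortie length below is an assumption for illustration only (consistent with the point above that MTBCF has risen above the designed sortie length).

```python
import math

SORTIE_HOURS = 2.0  # assumed sortie length, shorter than the MTBCF values

def p_success(mtbcf, sortie_hours=SORTIE_HOURS):
    # probability of completing the sortie with no critical failure,
    # assuming exponentially distributed times between critical failures
    return math.exp(-sortie_hours / mtbcf)

p_spec = p_success(6.0)    # specified MTBCF
p_actual = p_success(5.9)  # hypothetical achieved MTBCF

print(f"P(success) at 6.0 h MTBCF: {p_spec:.4f}")
print(f"P(success) at 5.9 h MTBCF: {p_actual:.4f}")
print(f"Difference: {(p_spec - p_actual) * 100:.2f} percentage points")
```

Under these assumptions the gap is well under one percentage point of mission-success probability, which is the sense in which chasing the last 6 minutes of MTBCF may not be worth its cost.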
Since the GAO report fails to highlight the existence of poor MMH/FH and MTTR numbers, AND we know from the program announcements that flight test operations are ahead of current schedule for flights and test points, we can be almost certain that the internals of the performance data shine a better light on the program performance than the GAO is attempting to cast.
 
Of course even if all the data were known, this doesn’t mean a hypothetical POGO-like organization or sympathetic ‘news’ outlet wouldn’t, in their manifest ignorance and/or pursuit of a non-defense agenda, still bleat false claims of ‘cheating’ on the requirements. (Remember what I said earlier about institutionalized ignorance?).
Early in any program there may be, at any one time, one particular subsystem or component, or even false or induced failures, that stand out as ‘problems’. (Note: these days it is usually because systems do so well overall. Want to talk REAL maintenance burden? Pick something fielded before the ’80s.) In such instances the program may maintain and report two or more reliability number sets and plots, showing trends for the overall system and the impacts of the offending parts or induced failure events on overall performance, as part of developing a corrective action. These contingencies very often need no more attention than monitoring, and are eventually cleared up by carrying out previously planned part number ‘rollovers’, completing the training of personnel, or updating technical data. The point, again, is: mere snapshots of reliability performance, without knowing the trends and the ‘internals’ of the data, are useless.
The GAO comment above stating “JSF officials said that reliability rates are tracking below expectations primarily because identified fixes to correct deficiencies are not being implemented and tested in a timely manner” is “priceless”--for two reasons. First, given that early MTBCF data is tenuous at best, this may again highlight GAO (and possibly F-35 Program) naiveté on the subject. Reacting prematurely with very little data to implement fixes to things that may not be a problem is a recipe for wasting money. Second, if the ‘fixes’ haven’t been implemented ‘yet’ it is probably due to the F-35 Program Office priorities in having them implemented: planes fielded to-date are needed for other program requirements and this would prevent ‘instant’ fixes.
I seriously doubt the Program Office can’t get the contractor to do anything it wants done, given the budgets allocated and the number of aircraft available; my experience tells me it can. If the GAO citation is correct, then shame on the Program Office for foisting the blame onto the contractor.
Competent evaluation of program performance, and the sober observations that result from it, hardly drive web traffic, bring donors, or sell periodicals these days. (Just sayin') Still, there are seeds above for quite a few pretty hard questions that a curious ‘reformer’ or journalist (if either even exists) could put to the GAO, if they were interested in understanding and reporting what might really be going on within the F-35 program.
Given the record of many of those so-called ‘reformers’, commercial websites and periodicals, we probably shouldn’t expect any sober observations. Given their demonstrated willful ignorance on the topic to-date, whether or not we could believe the answers reported is another question in itself.  
F-35A, USAF Photo
Personal Note: My apologies for not posting more lately, but my personal priorities place family and work ahead of play; the need to attend to both has been fairly high the last week or so, and I anticipate the situation will persist for at least a month.

Sunday, May 13, 2012

Av Week's LCS 1 "Hit Piece": Unintentionally Helpful

...to anyone who has been paying attention (of course MOST people haven't been).

Seriously, go read the AvWeek 'Investigative' report on LCS 1. Match up the timelines for faults, findings, and corrective actions. Set aside the 'scandalous' structure and phrasing, and it will illuminate many of the open LCS 1 design/build process questions I posed earlier in parsing the POGO arguments.

As to LCS 1 specifically, I am only 'mildly' interested in the Fabey 'tour' in dry dock that the Navy says never happened (protecting 'sources' there, Fabey?). I'm a little more interested in INSURV's recent evaluation of crew (non)readiness.
LCS 1 at RIMPAC

Sunday, May 06, 2012

Project on Government Oversight: Still Shrill After All These Years

Know Your ‘Reformers’: Episode 1 in potentially a long series

Introduction

I’ve been toying for quite some time with the idea of taking on a book project: a book about the modern era of so-called “Military Reformers” and the also so-called ‘Military Reform Movement’. My interest in their activities reaches back to at least the late ’70s. As a byproduct of examining the output of the leading/most prolific ‘reformers’ in detail over the years, I’ve managed to consume a great many of their screeds. I have also acquired a fairly significant selection of their writings not available by other means (such as the internet). Nearly all of the ‘reformer’ material I have acquired over the years has been either library remainder (free) or (mostly) purchased second-hand. The fine point here is this: as my research progressed and my knowledge of the ‘Reformers’ increased, it became increasingly important to me NOT to subsidize their ‘work’ in any way, shape, or form.

Another Generation. Same Old Song and Dance.

In my ‘inbox’ earlier in the week was a link to an interesting piece posted at the Defense Professionals (DefPro) website (Although the publication of same calls the ‘Professionals’ part seriously into question). It is a classic example of the kind of thinking (or lack thereof) that goes into a typical POGO rant, but in this case, it offers the kind of transparency to POGO’s philosophy and modus operandi that I don’t think I’ve seen since Dina Rasor’s early effluences, back when she was cranking up POGO’s prior incarnation: the ‘Project on Military Procurement’.

Ben Freeman.
(A patriotic guy. Just ask him )
Source: POGO
The piece that follows was put together by one of POGO’s newest (and therefore greenest) ‘investigators’, one Ben Freeman, who has been rather prolific of late. The subject this time is the Littoral Combat Ship program, but it could be about almost any program. Indeed, as I read through the piece, which for our purposes Freeman conveniently structured in a ‘he said’-‘she said’ format, I was struck by the similarities in substance and tone to what Dina Rasor used when she attacked (yes, a ‘trigger’ word, but that is what it was) the M-1 tank and program ‘back in the day’, without really understanding what a tank was for, much less how it was to be used. From the obvious parallels, it immediately became apparent that we could also use Freeman’s POGO piece to clearly illustrate the kind of philosophical, conceptual, and technical dissonance that exists between the worlds of those who ‘do’ things in the real world and those who ‘second guess’ from the trench lines of ‘Reformerland’.
Even better, we can accomplish this without having to deal with the more substantial issues of whether the LCS program is needed and justified, and without having to dissect the back-story motivations of the ‘second guessers’ this go-around; we will save those for another time.
LCS 1 (Left) and LCS 2  (USN Photo)
I now present the DefPro piece in its entirety, with observations/commentary in [red brackets].     

Navy Defends $120 Billion LCS Program, POGO Publishes Rebuttal

08:27 GMT, May 2, 2012 POGO certainly caused a stir last week after sending a letter to U.S. Congress reporting that the USS Freedom, the first ship commissioned under the Navy’s Littoral Combat Ship (LCS) program, has been plagued with cracks, flooding, corrosion, and repeated engine failures. In response to POGO’s letter, Rep. Duncan Hunter (R-CA) amended the National Defense Authorization Act, “demanding that the Navy ‘fess up to Congress on problems with its Littoral Combat Ship,” according to AOL Defense. [First, note the self-promoting claim of causing a ‘stir’; we’ll get back to it later. The most interesting thing is how the quotation is used. If not read carefully, it might lead one to believe that Rep. Hunter was the one quoted, rather than a turn of phrase that the author of the AOL piece -- one Sydney J. Freedberg Jr. -- used to punch up the opening of his article.]
Hunter confirmed that our letter was the impetus for the amendment. “I didn’t realize the Navy had been so restrictive in its reporting even with DoD,” Rep. Hunter told AOL Defense. “We just want to know what’s going on.”

[Again, a carefully deceptive use of selective quotation. One that rather carefully does NOT mention a more substantial quote a couple of paragraphs ahead of the ‘punchline’ Freeman lifted from Freedberg’s article. If Freeman had included the more explanatory quote ahead of this one, we would have read: "This simply makes the navy come to us and explain all the problems [and] all the good things about the LCS we need to know to conduct proper oversight," Rep. Hunter told the committee. "The Navy needs to be more forthcoming with us." But perhaps that more balanced description would have set the ‘wrong’ tone for what follows? The claim of confirmation that POGO’s letter was the ‘impetus for the amendment’ is classic POGO:
1) Make claims where it is not important whether or not they are ‘valid’, only that they cannot be ignored by legislators or administrators without risking escalation and an appearance of indifference/malpractice.
2) Legislators/administrators move to at least pretend to examine the claims to avoid further complaint.
3) POGO then markets their activities as a ‘success’. “POGO gets results!” (as in the claim to have caused ‘a stir’)
Note: Expect mention of this ‘success’ in future POGO fundraising briefings/pleas to preserve and expand their donor base.]       
Rep. Hunter is joined in this bipartisan push for oversight of the LCS program by fellow House Armed Services Committee Members Hank Johnson (D-GA), who issued a statement supporting Hunter’s amendment, and Jackie Speier (D-CA), who sources confirm will be issuing LCS legislation of her own. And just yesterday, The Hill reported that Senators Carl Levin (D-MI) and John McCain (R-AZ), the Chair and Ranking Member of the Senate Armed Services Committee, respectively, have called for a Government Accountability Office (GAO) review of the program.

It all seemed to touch a nerve with the Navy, which quickly moved to defend the $120 billion LCS program, which calls for a new wave of nimble combat ships designed to operate close to shore. The beleaguered Freedom, manufactured by Lockheed Martin, is one of two LCS designs.

[Obviously Freeman is still trying to set up the right POGO vibe here. ‘Touch a nerve’? Will we perhaps see in a short while why a rapid response from the Navy should be considered so ‘remarkable’, or whether, viewed in context, it is actually ‘unremarkable’?]
The Navy issued a response to our letter so quickly that even Defense News remarked that it was delivered with “uncharacteristic alacrity.”

[Again, setting up the idea that the ‘Navy’ (yes, apparently ALL of it) was ‘unsettled’ by the machinations of the (apparently) ‘mighty’ POGO? If the previous comments serve any purpose other than casting the Navy in a less than flattering pose, it is not exactly clear what that purpose is, or why they would have been included in this POGO piece at all.]

One point the Navy protests is our statement that LCS ships will make up as much as half of the Navy’s surface fleet. The Navy cites a report to Congress that says the LCS will account for 22 percent of the “21st Century Battle Force.”

We can admit when we’re wrong. But in this case the “22 percent” the Navy cites is not accurate, either. The planned 55 LCS ships will account for 38 percent of the Navy’s surface combatant ships.

[So. POGO takes issue with the Navy’s ship count numbers. Is it because POGO has a better list of ships, more authoritative definitions of what constitutes a ‘surface fleet’ or ‘battle force’, or a better grasp of naval force plans than the Navy itself? Why is this example of what is really ‘communication at cross purposes’ included in this piece at all?  I think we are again left with the perception of some deceptive, and IMHO rather pissy, ‘battlefield prep’ on the part of POGO’s Freeman.]  
As for the rest of the Navy’s response to our letter, we’ll beg to differ and stand by our work.
Here’s a side-by-side comparison of the Navy’s response and our rebuttal:
[Finally!]
WHAT OUR LETTER SAID:
“Senior Navy officials have publicly praised the LCS program. However, the Navy has been reluctant to share documents related to LCS vulnerabilities with entities such as the Pentagon’s Office of the Director of Operational Test and Evaluation (DOT&E).”
• The Navy’s Response:

This is not correct. The LCS Program Office has been working in close coordination with the DOT&E community since the early days of the program. DOT&E has been an active member of the T&E Working level Integrated Program Teams (WIPTs) since 2004 and most recently at the [Office of the Secretary of Defense (OSD)] level in the milestone-related Integrating IPTs (IIPTs) and Overarching IPTs (OIPTs) that occurred in 2011. Draft Detail Design Integrated Survivability Assessment Reports (DDISAR) were provided to DOT&E in the second quarter of fiscal 2012 to initiate discussions while modeling results and shot line selections are completed. DOT&E is working with the program office to complete the DDISARs and move toward developing Total Ship Survivability Trials (TSST) plans that assess Seaframe survivability in fiscal 2014. DOT&E will receive the final DDISARs prior to the planning and conduct of the TSSTs. Additionally, the LCS Program Office provided a draft of the 57mm Live Fire Test and Evaluation Management Plan to OSD/DOT&E on 29 March, and received comments on 3 April 2012. Comment resolution is in process.
• Our Rebuttal:

The only two documents the Navy confirms sharing with DOT&E are a “draft of the 57mm Live Fire Test and Evaluation Management Plan,” and a draft of the “Detail Design Integrated Survivability Assessment Reports.” Both of which were just recently received by DOT&E. As our letter indicates, the Navy possessed several documents related to the ship’s performance and equipment failures that it failed to share with DOT&E. Plans to create trials in 2014 do nothing to improve oversight of a ship that will be deployed to Singapore in 2012.
[Got that? First POGO accuses the Department of the Navy of not being forthcoming with DoD’s DOT&E organization, using the unbounded term ‘reluctance’ to describe LCS document sharing concerning the LCS’s ‘vulnerabilities’. In response, the Navy points out that DOT&E representatives are embedded participants within the LCS test community, and lists specific LCS Program draft reports that have been submitted on activities relevant to the ‘vulnerabilities’ topic POGO highlighted. It is also apparent from the statement that these reports are being submitted on an event-driven schedule.
POGO’s ‘rebuttal’? Freeman chooses to ignore the statement concerning ongoing DOT&E participation in the cognizant LCS Test & Evaluation IPT, then carps about the low number of reports acknowledged to have been shared by the Navy. Does POGO/Freeman really feel entitled to a comprehensive list of communications between the Navy and DOT&E based upon a ‘letter’ they wrote, or are they just staying on the offensive as the best form of defense? (The latter could be described as a typical ‘reformer’ move, BTW: think Boyd’s OODA Loop.)
LCS 1 USS Freedom replenishment with LHD 6 USS Bonhomme Richard  (USN Photo)
In this case though I believe the former was more ‘wished for’ than expected. This appears more likely to be, in the best POGO/Reformer tradition, a case of asking for information and then making the next move based upon the response. 1) If the information requested is not provided, make assertions of ‘reluctance’ (the cycle on this path eventually ramps up to accusations of ‘coverup’ or worse). 2) If the information requested IS provided, then interpret it to support the agenda in hand.

The ‘tell’ this time is the importance Freeman places on mentioning “several documents related to the ship’s performance and equipment failures that it failed to share with DOT&E”. Aside from the inflammatory ‘failed to share’ phrasing, from a systems Reliability, Maintainability and Availability (RM&A) point of view it would be fundamental nonsense to analyze failure data this early in a program and draw any final conclusions as to failure significance or trends, and in some cases even root cause. The mixing of complaints about structural performance and system performance is either shotgunning the target hoping to hit something, or indicative that, like many ‘reformers’, Freeman doesn’t know enough to distinguish between the two. Modern complex systems typically require tens of thousands of operating hours before system reliability can be ‘graded’ against specifications. The only purpose for outside and uninformed interests to acquire such data this early is target practice and laying the groundwork for further misadventures in advancing their agenda.]
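To put rough numbers on why small samples support only weak conclusions: under a simple exponential failure model, the relative uncertainty of an MTBF point estimate shrinks only as the square root of the number of failures observed. The failure counts below are invented for the sketch.

```python
import math

def mtbf_relative_uncertainty(n_failures):
    # approximate 1-sigma relative error of an MTBF point estimate
    # under an exponential model with n observed failures
    return 1 / math.sqrt(n_failures)

# invented counts spanning early, intermediate, and mature data sets
for n in (4, 40, 400):
    pct = mtbf_relative_uncertainty(n) * 100
    print(f"{n:4d} failures observed -> roughly {pct:.0f}% relative uncertainty")
```

An estimate based on a handful of early failures carries on the order of 50% uncertainty; getting that down to a few percent takes the hundreds of failures that only tens of thousands of operating hours can accumulate.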
WHAT OUR LETTER SAID:

– “… (LCS-1, the first LCS ship) has been plagued by flawed designs and failed equipment since being commissioned, has at least 17 known cracks.”

– “Before and during the ship’s second set of rough water trials in February 2011, 17 cracks were found on the ship’…”
– “Another crack was discovered “below the waterline and is currently allowing water in... When discovered there was rust washing onto the painted surface. It is thought this is rust from the exposed crack surface. It is unknown how long this crack existed prior to being discovered.”

– “Similarly, cracks in the deck plating and center walkway on the port side were mirrored by corresponding cracks on the starboard side. Fifteen experts, including a source within the Navy, have informed POGO that the cracks in nearly identical locations on opposite sides of the ship may be indicative of systematic design issues.”
– “Last May, the LCS program manager issued near term operating guidance for LCS-1, which placed significant constraints on the ship’s safe operating envelope (SOE).”

– “Specifically, the new guidance states that in rough water (sea state 7; 19.5- to 29.5-foot waves) with following seas, the ship cannot travel at speeds greater than 20 knots, and cannot travel into head seas at any speed. Even in calmer seas (sea state 5; 8.2- to 13.1-foot waves) the ship’s peak speed into head seas is capped at 15 knots, relegating the Navy’s “cheetah of the seas” to freighter speeds.”

• The Navy’s Response:

Speed restrictions for LCS 1 have been lifted. With regard to the cracking discussion, these are not new findings. LCS 1 has experienced minor structural issues. The details of the cracks found on LCS 1 were briefed to the defense committees, including the Senate Armed Services Committee (SASC) over a year ago (March 2011). All repairs were conducted using approved repair procedures and satisfactorily inspected by American Bureau of Shipping (ABS) and the appropriate Naval Technical Authority. Thorough analyses and reviews of the designs and construction documentation were conducted, with the goal of improved production processes. Design changes, as necessary, have been incorporated in future hulls to resolve noted issues. Production processes were modified as needed, to prevent future issues. These design changes were implemented into LCS 1 throughout her post delivery period, the ship has been approved to operate with the full scope of the approved Safe Operating Envelope (SOE) since completion of the repairs.
• Our Rebuttal:
The Navy’s claim that the cracking issues have been reported is partially correct. The cracks were reported, but the extent of the cracking was not. These cracks have been repaired, but the cracking problem continues according to sources close to the program. Faulty welds and construction continue to cause new cracks on the ship that the Navy has yet to report.

The Navy also claims “the ship has been approved to operate within the full scope of the approved Safe Operating Envelope (SOE) since completion of the repairs.” But, being approved to operate within the full scope of the SOE and actually operating are completely different. The simple fact is that since completion of these repairs the ship has been unable to successfully perform at the upper end of its SOE.
[POGO first makes a litany of assertions related to structural cracks and their consequences, including a rather humorous appeal to authority in employing ‘Fifteen experts’ to state that a rather obvious factoid ‘may’ be true. One would think one expert would have sufficed for such a weak assertion of something not likely to be disputed. Once you get past the unintended humor, the first questions that come to my mind are:
1) Is the discovery of the need to make structural tweaks a normal part of wringing out a new ship?
2) Is the scope and impact of the cracking to date typical, lower, or higher than might be reasonably expected?
3) Does the Navy (or ship builders in general) employ a methodical strategy for identifying, tracking and fixing structural issues/problems?
4) If they do not, why isn’t POGO raising a holy stink over the absence of same?

But we don’t need to get too deep into the topic of what the norms are because of the Navy response: Ummm. We fixed all those problems.

POGO’s rebuttal: there are more problems that have occurred, plus an unsupported assertion that the LCS in question has been unable to ‘perform at the upper end’ of its operating envelope. Even if true, that sounds unrelated to the structural problems, so why bring it up on this subtopic at all, except as a sort of ‘yes, but’ deflection?]
WHAT OUR LETTER SAID:

“From the time the Navy accepted LCS-1 from Lockheed Martin on September 18, 2008, until the ship went into dry dock in the summer of 2011 — not even 1,000 days later — there were 640 chargeable equipment failures on the ship. On average then, something on the ship failed on two out of every three days.”

• The Navy’s Response:
As with any ship, all equipment failures on LCS 1, regardless of how minor the impact to mission, have been meticulously tracked, and this data has been invaluable in improving the reliability of ship systems. The 640 chargeable equipment failures from Ship Delivery until the summer dry docking, tracked in the LCS 1 Data Collection, Analysis, and Corrective Action System (DCACAS) represent all equipment failures to the ship for all systems (propulsion, combat systems, auxiliaries, habitability, C4I, etc) regardless of whether the equipment was repaired by the crew or off ship maintenance personnel.
The 640 failures referenced include multiple failures on a piece of equipment (38 for the Main Propulsion Diesel Engine) and single failures to equipment (one Man Overboard Indicator). From the DCACAS report dated 31 Aug 2011, approximately 12 percent of the equipment failures since delivery can be attributed to the Ship Service Diesel Generators (SSDGs). In May 2010 the Navy and Lockheed Martin instituted a Product Improvement Program for the SSDG. The resulting effort increased Mean Time Between Failures (MBTF) for the equipment from less than 150 hours (October 2008) to over 500 hours (April 2011).
This is a case of how the DCACAS data is used to improve the reliability of the ships early in the acquisition program. Overall the DCACAS data is a mechanism to evaluate every failure on the ship to determine if it can be attributed to infant mortality of the equipment, normal wear and tear for that equipment/component, or is a trend that needs to be addressed via design changes or reliability growth efforts.

• Our Rebuttal:
The Navy does not dispute the 640 failures, which had not been previously reported. The Navy mentions that the DCACAS data is used to determine if failure can be attributed to infant mortality, normal wear and tear, or is a trend. Their file confirms that nearly a third of these failures were potential or confirmed trends, which, according to the Navy should “be addressed via design changes or reliability growth efforts.” This is precisely our rationale for questioning this ship’s design.

[POGO’s Freeman first commits the ‘fundamental nonsense’ I mentioned earlier. The Navy pretty much responds as if helping a child with their color-matching skills. Freeman doubles down on the 640 failures as ‘not being reported’, yet they must have been reported somewhere for Freeman to have been aware of them. Then Freeman seizes on the Navy’s note that failures where trends have been identified (obviously either simple systems, or related to simple installation or operating factors, or problems anticipated via earlier analysis and test) should “be addressed via design changes or reliability growth efforts”. Freeman then makes the illogical claim that the existence of problems, which he has failed to establish as truly worrisome or even outside the expected norm, “is precisely our rationale for questioning this ship’s design”.
The fact that Freeman believes technical problems or issues arising on the introduction of a new weapon system into its operating environment (a system on which he has no expertise or, just as important, no experienced perspective from which to judge significance) SHOULD give him cause to be “questioning this ship’s design” would normally cause the recreational sailor in me to suspect that Freeman has never been around a ‘boat’, much less a ‘ship’, long or often enough to be a proper judge of ship systems reliability and performance. This last passage alone would seem evidence enough to suspect his qualifications to even ask the RIGHT questions concerning same.
EXCEPT…

LCS 2 Under Construction (GD/Austal Photo)
Except if you know how ‘reformers’ work, you would realize that this kind of faux indignation is their bread and butter. Good engineers and program managers understand the challenges of complexity and can distinguish between necessary and unnecessary complexity, and they even know there is room for disagreement on same, one of the reasons for the term: Best Engineering Judgment. Engineers and program managers know there will always be technical problems to solve when fielding any complex (and even simple) system. Engineers and program managers know that the sources of, and remedies to, technical problems may be found in the design, the construction, the integration, or even the training and education of the operators. Engineers and program managers know that until you actually field a system--complex or simple--you will NEVER know about, much less be able to preclude, all potential technical problems. Good engineers and program managers see a technical problem as something to be expected and solved. So-called ‘Reformers’ see technical problems as simply reasons to do something other than what is being done, something to be used in furthering their own agendas. And those agendas may or may not be what is publicly stated, but they are never FOR advancing a weapon system under development.]

WHAT OUR LETTER SAID:
“Secretary of the Navy Raymond Mabus told the Senate Armed Services Committee in December 2010 that both variants of the LCS were performing well, and that “LCS–1, the Freedom, demonstrated some of the things we can expect during her maiden deployment earlier this year.” Then-Chief of Naval Operations Admiral Gary Roughead echoed this praise for the LCS-1, stating “I deployed LCS earlier than any other ship class to assure we were on the right path operationally. It is clear to me that we are.”

• The Navy’s Response:
USS FREEDOM (LCS 1) arrived in San Diego on April 23, 2010, successfully completing her maiden deployment more than two years ahead of schedule and three to five years faster than conventional ship acquisition strategies. LCS 1 traveled 6,500 miles, transiting the Panama Canal. Highlights of operations in 3rd and 4th Fleet Areas of Responsibility include theater security cooperation port visits in Colombia, Panama, and Mexico, successful performance of strike group operations with the USS Carl Vinson Carrier Strike Group, joint maneuvers with the Mexican Navy, and counter-illicit trafficking patrols which resulted in 4 interdictions yielding over 5 tons of cocaine, 2 seized vessels, and 9 suspected smugglers taken into custody. The second phase of the early deployment included LCS 1 participating in the bi-annual Rim of the Pacific (RIMPAC) exercise with 14 other nations, 34 ships, 5 submarines, 100 aircraft and over 20,000 personnel. The early deployment included the development of a coordinated logistics support plan. The lessons learned from the LCS 1 deployment have provided critical data to inform the permanent support plan for the 55 ships of the LCS class, as well as valuable information used in the construction of both LCS 3 and the Block buy ships.

• Our Rebuttal:
These quotes are not an “issue” that we raised. We mentioned them in context of the ship’s failures to show the disconnect between what Navy officials were telling Congress and what was actually happening on the ship.

[No. To be accurate, you might reasonably claim you “mentioned them in context of” what POGO views as “the ship’s failures” in an attempt “to show” what POGO asserts is “the disconnect between what Navy officials were telling Congress and what” POGO views as “was actually happening on the ship”.]
WHAT OUR LETTER SAID:

“Mabus and Roughead failed to mention that during the approximately two-month deployment when the ship traveled from Mayport, Florida, to its home port in San Diego, California, there were more than 80 equipment failures on the ship. These failures were not trivial, and placed the crew of the ship in undue danger. For example, on March 6, 2010, while the ship was in the midst of counter-drug trafficking operations and reportedly “conducted four drug seizures, netting more than five tons of cocaine, detained nine suspected drug smugglers, and disabled two ‘go-fast’ drug vessels,” there was a darken ship event (the electricity on the entire ship went out), temporarily leaving the ship adrift at sea.”
• The Navy’s Response:

Throughout its deployment, LCS 1 safely operated and conducted its mission. Few of the 80 equipment failures cited above were mission critical. The ship did experience a brief loss of power, however, it should be noted that many commercial and U.S. Navy vessels have periods of power loss due to plant set-up and operator control. In the event of power loss, there are specific U.S. Navy procedures documented in the Engineering Operational Sequencing System (EOSS) to quickly restore power throughout the ship. To address concerns documented with electric power generation, the LCS Program executed Electric Plant Reliability Improvement Programs on both ship designs to increase reliability of ship service diesel generators and the performance and management of the shipboard electrical systems. This has resulted in changes that have been implemented through post-delivery availabilities on LCS 1 and LCS 2 as well as captured for LCS 3 and follow ships. Additionally, sensors were installed to monitor performance trends.

• Our Rebuttal:
The Navy confirmed “the ship did experience a brief loss of power” while deployed, which again had not been previously reported or shared with Congress in any public testimony. In addition, the Navy claims that, “Throughout its deployment, LCS 1 safely operated and conducted its mission. Few of the 80 equipment failures cited above were mission critical. The ship did experience a brief loss of power…” The fact that other ships lose power does nothing to lessen the danger of unexpected power outages on a ship the Navy would have us believe can survive naval warfare.

In other words, the Navy admits there were mission critical failures, including a brief loss of power, on this LCS-1 mission. This stands in stark contrast to Secretary of the Navy Ray Mabus telling Congress that this mission was a success and the ship “demonstrated some of the things we can expect.” Unless we are to expect rampant equipment failures, it appears that the Navy was misleading Congress about these issues.

[POGO says: Problems BAD! USN says: Problems Typical and Unremarkable. POGO says: Navy BAD for not reporting Typical and Unremarkable problems.

This reads more like POGO trying to manufacture the appearance of a cover up than anything else.]

WHAT OUR LETTER SAID:
“According to the DoD’s DOT&E FY 2011 Annual Report, the LCS is “not expected to be survivable in a hostile combat environment.”

• The Navy’s Response:
The LCS Ships are built to meet Joint Requirements Oversight Council-approved survivability requirements and include OPNAVINST 9070.1 Level 1 Survivability standards [note: OPNAVINSTs are instructions issued with the office of the chief of naval operations]. The LCS design specifically includes Level 1 plus additional tailored survivability enhancements (“Level 1+”). LCS survivability depends on a combination of ship design, ship numbers, and ship CONOPS [concepts of operations] which says LCS will:
– Operate as part of a networked battle force
– Conduct independent operations only in low to medium threat scenarios
– Operate as part of a networked battle force operation in high threat environments
– Create Battle Space/Avoid being hit
– Rely on networked battle force for threat attrition
– Rely on overboard systems
– Fight and survive if hit
– Ship design: Accept ship mission kill; keep ship afloat and protect crew after hit
– Battle force design: Maintain battle force fight-through capability through LCS numbers and mission flexibility
– Withdraw/reposition if hit

LCS is designed to maintain essential mobility after a hit, allowing the ship to exit the battle area under its own power. The LCS systems allow ship’s crew to navigate and communicate while repositioning after a hit all the while utilizing numbers (of LCSs), and CONOPS as force multipliers. LCS incorporates survivability systems to perform required missions in the littoral with an emphasis on crew survival.

• Our Rebuttal:
The Navy again confirms that the LCS has a “Level 1+” survivability rating. According to the Navy “Level I represents the least severe environment anticipated and excludes the need for enhanced survivability…in the immediate area of an engaged Battle Group or in the general war-at-sea region.” In other words, the ship is not expected to survive a true battle at sea. Additionally, given that the littoral combat ship will, by definition, be operating close to shore, it is also extremely vulnerable to land-based attacks, which it is ill-equipped to defend against.

The Chief of Naval Operations Admiral Greenert recently said the LCS was not prepared to “challenge the Chinese military” and you can’t “send it into an anti-access area.”
In short, this is a surface combatant that can’t truly engage in surface combat.

[POGO: DOT&E Report says ship not survivable, USN: Ship designed to be survivable where and when used as intended and BTW: here’s how, POGO: But the Navy can’t use it this other way-- so it doesn’t count. Neener Neener.
BTW: You just gotta’ love the ‘reformer’ chutzpah in rolling out their own definition of surface combat and insisting it overrides the USN’s.]

WHAT OUR LETTER SAID:
“Sources close to LCS-1 have now told POGO that after more than six months in port, the ship has been back to sea just twice. The sources also informed us about critical problems that surfaced on the ship during those two outings: several vital components on the ship failed including, at some point in both trips, each of the four engines.”

• The Navy’s Response:

LCS 1 had one of two gas turbines engines fail after over three years of operations (including post-delivery testing, fleet operations and ship early deployment). The root cause analysis of the engine failure revealed that the gas turbine intakes were allowing salt spray to be ingested into the engine intake structure during high seas evolutions, which lead to the eventual failure of a high pressure turbine blade. The salt water did not induce corrosion internal to the engine. However, it changed the air flow through the engine, which eventually led to the failure. As a result of the failure, a redesign of the intake structure along with improved mating seals was implemented on LCS 1 on post delivery and is in-line for LCS 3 and subsequent ships.
• Our Rebuttal:

The Navy does not dispute these previously unreported engine failures. They only discuss the results of an engine failure that occurred in 2010, which we do not mention in our letter.

[The USN blew off what smelled like a POGO fishing expedition, and POGO doesn’t like it. That doesn’t make POGO’s claims true or accurate, and it doesn’t mean the USN even knew for certain what POGO was talking about (which would be just as valid a reason not to respond to POGO as any).]
WHAT OUR LETTER SAID:

“In addition, there were shaft seal failures during the last trip, which led to flooding.”
• The Navy’s Response:

During February 2012 sea trials LCS 1 suffered a failure of the port shaft mechanical seal (1 of 4 such seals). The remaining underway portion of the sea trial was ended and the ship returned to port unassisted. The failed boost shaft stern tube seal was analyzed by independent third party to gain insight into the failure. Repairs to the Port Boost Stern Tube Seal have been completed and the USS Freedom undocked on April 7. All other stern tube seals on FREEDOM were inspected and found not to have this issue. Due to manufacturing timelines and differences, it was determined that LCS 3 seals were not at risk of the same issue. In addition, LCS 3 seals have undergone extensive operation without failure.
• Our Rebuttal:

The Navy reports that shaft seals on the other engines of LCS-1 and those on LCS-3 were not at risk of this same failure. However, prior to this incident, the Navy was not aware the shaft seal that blew was at risk of failing either. [This is an incredibly stupid paragraph, isn’t it? What’s the difference between before and after? Hint: the Navy looked for the problem elsewhere after it occurred once. The Navy must understand the failure in order to state there is no risk of the same failure after inspecting the rest of the seals.]

In short, the Navy has not taken any corrective action in response to this issue.

[POGO: Seal Problem! USN: After looking closely, seal failure seen as a one-time thing. Seal repaired! POGO: We don’t know the difference between a ‘repair’ for what appears to be a one-time issue and something that has to be fixed for all the ships (so we want to see a ‘corrective action’ plan?).]
WHAT OUR LETTER SAID:

“The DOT&E’s FY 2011 Annual Report states that “[t]he program offices have not released any formal developmental T&E reports.” The report goes on to state that “the Navy should continue to report vulnerabilities discovered during live fire tests and analyses. Doing so will inform acquisition decisions as soon as possible in the procurement of the LCS class.”

• The Navy’s Response:
The Navy is actively developing the required reports documenting the results of all the Developmental Testing that has occurred on LCS 1. Once completed, these reports will be delivered to DOT&E as required.

• Our Rebuttal:

The Navy confirms the DOT&E’s statement, which we referenced in our letter, that “[t]he program offices have not released any formal developmental T&E reports.” In fact, the Navy’s response to this specific critique confirms that “the required reports documenting the results of all the Developmental Testing that has occurred on LCS 1” have not been completed. The Navy states that they will be delivered to DOT&E once they are, but offer no explanation as to why they have not been completed.
[Back to the ‘reports’ bleat, eh? Notice how POGO trumpets the fact that there are no formal reports yet, per the FY 2011 DOT&E Annual Report, but conveniently fails to mention whether any formal reports were supposed to have been issued yet. Now, if one bothers to actually read the report without bias, the reader will see that noting the absence of formal reports is not a critique, but a simple observation. How typically ‘reformer’ of Freeman and POGO to twist facts to satisfy their purposes.]

It is not unreasonable to ask the Navy to provide testing and evaluation reports for a ship that is scheduled to be deployed to Singapore and has already been deployed in the Caribbean. If the ship is performing as well as the Navy claims they should be eager to provide these reports.

[The assertion of belief as fact: more typical ‘reformer’.
Let’s correct this last paragraph:


POGO BELIEVES it is not unreasonable to ask the Navy to provide testing and evaluation reports for a ship that is scheduled to be deployed to Singapore and has already been deployed in the Caribbean. POGO BELIEVES that if the ship is performing as well as the Navy claims, the Navy should be eager to provide these reports.

There, all better. ]

WHAT OUR LETTER SAID:

“The Navy has also repeatedly made significant changes to the program while giving Congress little time to evaluate these changes.”
• The Navy’s Response:

Configuration change management has been a key factor in controlling program cost. After incorporation of lessons learned from the lead ships into follow ships, the Program Office has controlled the design baseline closely in order to manage risk and cost.
The Program Office has captured and continues to capture data from these “first of class” vessels. The “first of class” discussion is an important perspective to add. USS Freedom (LCS 1) and USS Independence (LCS 2) not only are they “first of class” vessels but they were procured using research and development funds in a manner outside the bounds of previous ship programs. Previous combatant procurements leverage off of years of research and development, integration testing and validation of systems using surrogate platforms. Aegis Cruisers implemented a new combat system that was tested for over ten years on surrogate ships to a hull form that had already been tested and delivered. Aegis destroyers laid the same propulsion, power generation and combat system into a new hull form. All of these efforts did not preclude these ships from seeing “first of class” challenges.

The LCS programs however, took measures to instrument and collect data on the hull designs, execute design reviews/design updates and implement those findings into the follow-on awards. In addition, those findings have led to upgrades and changes on LCS 1 and LCS 2 to ensure that these research and development hulls are viable assets.
LCS 1 has traveled more than 65,000 nautical miles since it was delivered to the Navy in September 2008 and continues to meet our expectations.

• Our Rebuttal:

The Navy fails to respond to the actual issue we raised related to Congressional notification of program changes, specifically the shift from a down-select to a dual-award acquisition strategy. The Navy opted to instead discuss the “first of class” challenges on Aegis ships.
It’s true that all first of class ships will have problems. However, the extent and nature of the problems on this littoral combat ship are far more problematic than on other ships. Faulty welds, design, and ship construction are the root cause of many of this ship’s failings. These are not first of class issues; they are basic ship-building issues that appear to have been largely ignored on this ship.

[Gee. We could have saved a lot of trouble by starting with this exchange. POGO accuses the Navy of making changes that Congress can’t keep up with. The Navy could have had some fun and just said “What do you mean?” or “Whose fault is that?” but instead chose to detail why the LCS program is different. And from the Navy’s response we learn just HOW different the program is from previous programs (I had no idea how different, anyway: it sounds like a DARPA program that quickly turned into production). The Navy details some of the ways the LCS had none of the advantages of previous classes of ships (Aegis cruisers and destroyers), notes that those ships still had hurdles to overcome, and then points out that the LCS ships are instrumented to find the kinds of things that might lurk in any design. This should be a hint to Freeman as to why the Navy apparently isn’t (and shouldn’t be?) too excited about the problems they’ve encountered.
Freeman twists those observations into a “we’re not talking about the Aegis” snark, and NOW he tells us that by ‘changes’ POGO meant the shift from a down-select to a single LCS design to the continuation of both LCS designs. It turns out this is the one thing about the LCS I’ve watched with some interest.
First, we can throw out Freeman’s characterization of Navy decision-making concerning Congressional ability to keep up with the program and the change from a down-select to proceeding with a dual-contractor approach. It is simplistic and reflects what I would call the Congressional Vanity POV (it was all about them!), found as part of a more extensive review of the issue in a Congressional Research Service report. Thus POGO’s carping, in retrospect, over the timing of requests and decisions is pretty unoriginal as well as weak. Read the CRS report, and then ask yourself why it seems POGO would rather have had the Navy go to Congress earlier with a half-baked plan, just to give Congress reason to refuse it because it was half-baked. BTW: There were arguments being made as early as 2004 that the Navy should buy two squadrons of competing designs and have them fight for supremacy. The ‘do we down-select’ or ‘do we continue with both designs’ question is hardly ‘new’.
Seems Freeman just can’t stop himself from asserting opinion as fact. He’s got the ‘reformer’ spirit within! With his last paragraph, he again tries to pass off the ‘reformer’ POV as fact. Helping once again with a rewrite:

POGO agrees that what the Navy says is true: that all first of class ships will have problems. However, POGO believes the extent and nature of the problems on this littoral combat ship are far more problematic than on other ships. POGO believes faulty welds, design, and ship construction are the root cause of many of this ship’s problems and are representative of failings in the program, design, and construction (that POGO believes should be seen as cause to kill this program? Notice the undeclared intent – we can only guess). POGO believes these are not first of class issues; POGO believes they are basic ship-building issues that appear to POGO to have been largely ignored on this ship.
There. All better again.]
FYI, and not that it matters one whit, I find the GD/Austal (LCS 2) design the most appealing.
LCS 2. USS Independence (USN Photo)