When I started college, a few short decades ago, I commuted to Stevens Institute of Technology in Hoboken, N.J., just outside of New York City. The first time I took my father’s car for a solo journey to school, he asked, “Are you ready to drive the car alone? Do you think you’re mature enough to take the car into Hoboken?”
“Yeah, Dad, I’ll be OK. I’m ready,” I replied. My dad was an expert mechanic. Both he and I knew that the car was ready, and according to my driver’s license, the state of New Jersey felt I was ready too.
Later that morning, as I crossed “Old Ironbound,” the Pulaski Skyway, toward the Holland Tunnel, the driver of an oncoming southbound vehicle lost control, jumped the low-profile center divider, and struck my vehicle and three others traveling northbound.
Expect the unexpected
Let’s view my unfortunate incident from a systems standpoint. Although I was ready to drive my dad’s car to school, the system required to get me there was more complex than just my vehicle and me, its driver. The system boundary was bigger, which meant there were more interfaces and integration scenarios to consider, such as another driver losing control.
Much more recently, my government customer asked if his critical system was ready to deploy. Yes, we conducted a technology readiness assessment (TRA); yes, we identified the critical technology elements (CTEs) of the system; yes, we reviewed the performance of each CTE. The technology had been proven to work in its final form under expected conditions. However, I remembered the unforeseen car accident of my youth. I remembered the complex interfaces.
Integration and interfaces are where most things go wrong; failures happen at the places where things come together. We had identified two CTEs. One had been used multiple times on other platforms, but never in the customer’s type of system and never with so many interfaces. There were too many potential failure points.
System readiness assessment: a quantitative score
I’d been working with colleagues to develop and advance system readiness assessment (SRA), a new systems engineering methodology meant to be a key element in transforming the government’s approach to complex systems integration. As part of this methodology, new system metrics assess the readiness of systems for operation and help our customers manage risk and reduce total cost of ownership in an increasingly complex environment.
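For readers curious what such a metric can look like, here is a rough sketch in the spirit of the openly published system readiness level (SRL) approach, which combines technology readiness levels (TRLs) for components with integration readiness levels (IRLs) for the interfaces between them. The function name, the normalization, and the numbers below are my own illustration, not SAIC’s production implementation.

```python
import numpy as np

def system_readiness(trl, irl):
    """Composite readiness score in the style of the published SRL method.

    trl : technology readiness level (1-9) for each component.
    irl : square matrix of integration readiness levels (1-9), where
          irl[i][j] rates the interface between components i and j; the
          diagonal is 9 (a component is fully integrated with itself) and
          0 marks pairs with no interface.
    Returns (per-component readiness, system readiness), both on a 0-1 scale.
    """
    trl = np.asarray(trl, dtype=float)
    irl = np.asarray(irl, dtype=float)
    integrations = np.count_nonzero(irl, axis=1)          # interfaces per component
    component = (irl / 9.0) @ (trl / 9.0) / integrations  # each component's readiness in context
    return component, component.mean()

# Hypothetical system: two critical elements, one mature (TRL 8), one proven
# elsewhere but new to this platform (TRL 6), joined by an interface that has
# never been exercised in this kind of system (IRL 4).
per_component, srl = system_readiness([8, 6], [[9, 4], [4, 9]])
print(f"Component readiness: {per_component.round(2)}, system readiness: {srl:.2f}")
```

The point of a score like this is that a weak interface drags down the readiness of otherwise mature components, which is exactly what a component-by-component TRA can miss.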
I convinced my customer to wait a few weeks. Using SAIC’s SRA User Environment, we evaluated the readiness of not only each technology and component but also each interface in the system. We scored the system with a readiness metric and determined how ready we actually were: not just the two CTEs, but the entire system and all of its integrations. Using the Bayesian network modeling component of the environment, we also attached a level of confidence to our decision.
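The Bayesian network model inside the SRA User Environment is SAIC’s own, but the underlying idea can be illustrated with a single hypothetical interface: treat every integration exercise as evidence and update a prior belief about that interface’s success rate. The minimal Beta-Binomial sketch below assumes invented test counts and a 0.90 requirement; a real network would chain many such nodes across components and interfaces.

```python
from scipy.stats import beta

# Hypothetical evidence for a single interface: 28 clean runs out of 30
# integration exercises, starting from a non-informative Beta(1, 1) prior.
successes, failures = 28, 2
posterior = beta(1 + successes, 1 + failures)

# Stated confidence that the interface's true success rate meets a 0.90 requirement.
confidence = posterior.sf(0.90)
print(f"P(success rate > 0.90 | test data) = {confidence:.2f}")
```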
My customer welcomed the surety, saying, “There’s too much at stake here to risk an integration failure in the field, especially in front of our partners. We need that confidence!”
Unlike in my car scenario, we deployed our system with foreknowledge of its performance under expected and unexpected conditions. I told him, “We’ll be OK,” and I had the data to prove it.