GreenDart Inc. Team Blog

Stay updated with the latest trends at GreenDart Inc. Please subscribe!

Cybersecurity Resilience

Cybersecurity, cyber resilience, operational resilience. Just as we think we have grasped the inputs, outputs, expectations, and requirements of one term, the industry shifts and new terminology arises. The conversation is one of nuance, encumbered by differences in terminology and boundaries. These terms are relatively new, and they are easily misused and misunderstood. For all intents and purposes within the IT space, Cyber Resilience is our term of choice. Cyber Resilience refers to an entity’s ability to withstand and recover from a cyber event, and it is measurable through the operational evaluation of an entity or system.

The key question Cyber Resilience addresses is:

How protected and resilient are the internal system attributes (applications, data, controls, etc.) assuming the threat has already penetrated the external cybersecurity protections?
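To make “measurable” concrete, here is a minimal sketch of how one resilience exercise might be scored. The observations (time to detect, time to recover, capability retained) and the weights are purely illustrative assumptions, not a prescribed GreenDart metric.

```python
from dataclasses import dataclass

@dataclass
class CyberEvent:
    """Hypothetical observations from one operational resilience exercise."""
    time_to_detect_hrs: float    # elapsed time until the intrusion was detected
    time_to_recover_hrs: float   # elapsed time until full service restoration
    capability_retained: float   # fraction of mission capability kept during the event (0..1)

def resilience_score(event: CyberEvent, max_recovery_hrs: float = 24.0) -> float:
    """Combine observations into a single 0..1 score. The weights are purely
    illustrative; a real evaluation would derive them from mission needs."""
    recovery = max(0.0, 1.0 - event.time_to_recover_hrs / max_recovery_hrs)
    detection = max(0.0, 1.0 - event.time_to_detect_hrs / max_recovery_hrs)
    return 0.4 * recovery + 0.2 * detection + 0.4 * event.capability_retained

event = CyberEvent(time_to_detect_hrs=2.0, time_to_recover_hrs=6.0,
                   capability_retained=0.8)
print(f"{resilience_score(event):.2f}")  # 0.80
```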



Test Program Verification

For very high value or otherwise critical development efforts, a customer may elect to procure an independent assessment of the system developer’s T&E program. This effort is typically known as Test Program Verification or simply Test Verification. The Test Verification agent may be brought in by the developer (but kept distinct from the developer’s T&E organization), or the agent may be directly hired by the Government customer, to achieve a higher degree of independence.

Since the intent is to drive a rigorous developmental T&E effort, many of the same activities and issues discussed in our DT&E write-up are relevant to this effort, although the perspective is different (e.g., the Test Verification agent does not plan or execute tests). Although the Test Verification agent and the OT&E agent may both be members of an Integrated Test Team, their mutual interaction is apt to be limited.

Test Verification involves many of the steps defined in GreenDart’s Verification and Validation – Test description. However, the target of this effort is to review and assess the developer’s Requirements Verification Plan (RVP) and Requirements Verification Report (RVR). Successful assessment of these developer products is critical to achieving customer confidence in the developer’s test program and, therefore, confidence in the successful delivery of the desired system. The figure below shows a notional Test Verification process flow.


RVP/RVR Assessment Process Flow
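Much of the RVP/RVR assessment reduces to disciplined cross-checking: every requirement the plan commits to verify should appear in the report with a disposition. The sketch below illustrates that coverage check; the data structures and identifiers are assumptions, since actual RVP and RVR formats are program-specific.

```python
# Hypothetical extracts: requirement IDs the RVP commits to verify, and the
# dispositions the RVR actually reports. Real document formats vary by program.
rvp_planned = {"SRS-001": "Test", "SRS-002": "Analysis", "SRS-003": "Test"}
rvr_reported = {"SRS-001": "Pass", "SRS-003": "Fail"}

def assess_coverage(planned: dict, reported: dict) -> dict:
    """Flag gaps between the verification plan and the verification report."""
    return {
        "unreported": sorted(set(planned) - set(reported)),  # planned but never reported
        "unplanned":  sorted(set(reported) - set(planned)),  # reported but never planned
        "failed":     sorted(r for r, d in reported.items() if d == "Fail"),
    }

print(assess_coverage(rvp_planned, rvr_reported))
# {'unreported': ['SRS-002'], 'unplanned': [], 'failed': ['SRS-003']}
```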


Design of Experiments

A key test optimization opportunity in an effective T&E effort is the application of Design of Experiments (DOE). DOE is a systematic method for determining the relationship between the inputs, or factors, affecting a process or system and the output of that system. The system under test may be a process, a machine, a natural biological system, or many other dynamic entities. This discussion concerns the use of DOE for testing a software-intensive system (a standalone program, or integrated hardware and software).


Test planners have a range of software test strategies and techniques to choose from when developing a detailed test plan. The choices made depend on the integration level (i.e., unit test to system-of-systems T&E) of the target test article, as well as the specified and generated test requirements. Typically, the complete test plan will involve a combination of these techniques. Most of them are commonly known, but applying DOE to test software-intensive systems may be less familiar. In this context, the design in “DOE” is a devised collection of test cases (experimental runs) selected to efficiently answer one or more questions about the system under test. This test case collection may comprise a complete software test plan, or a component of that plan.
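To make the “devised collection of test cases” concrete, the sketch below builds a full-factorial design over four invented two-level factors, then greedily reduces it to a subset that still covers every two-way factor interaction (an all-pairs strategy, shown here as one possible instance of a DOE-style test design; the factors are illustrative, not from any particular program).

```python
from itertools import product, combinations

# Invented two-level factors for a software-intensive system under test;
# real factors come from the specified and generated test requirements.
factors = {
    "os":      ["linux", "windows"],
    "db":      ["postgres", "oracle"],
    "load":    ["nominal", "peak"],
    "network": ["lan", "satcom"],
}
names = list(factors)

# Full-factorial design: every combination of factor levels (2**4 = 16 runs).
full_factorial = [dict(zip(names, combo)) for combo in product(*factors.values())]

def pairs_of(run):
    """All (factor, level) pairings exercised together by one run."""
    return {((f1, run[f1]), (f2, run[f2])) for f1, f2 in combinations(names, 2)}

def pairwise_subset(runs):
    """Greedy all-pairs reduction: repeatedly keep the run that covers the
    most not-yet-covered two-way interactions."""
    all_pairs = set().union(*(pairs_of(r) for r in runs))
    covered, selected = set(), []
    while len(covered) < len(all_pairs):
        best = max(runs, key=lambda r: len(pairs_of(r) - covered))
        selected.append(best)
        covered |= pairs_of(best)
    return selected

print(len(full_factorial), "full-factorial runs")
print(len(pairwise_subset(full_factorial)), "runs still cover all two-way pairs")
```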

DOE can save significant test time within the overall DT&E and/or OT&E efforts. In one instance, GreenDart achieved a 66% DT&E schedule savings through the successful application of DOE.


Agile T&E

Agile system development has emerged as an alternative to the long-standing “waterfall” software development process. Agile development involves the rapid development of incremental system capabilities in activities called “sprints”. At the start, the program selects requirements from the overall system requirements specification, builds user stories around those requirements, and allocates the stories to specific sprint events. During each sprint, the developers go through a mini-waterfall effort of requirements, design, code, and test for a very small segment of the overall system. Once the sprint is complete, the resultant product is typically integrated into the evolving overall system. Any unfulfilled sprint requirements go into a requirements holding ledger called the Product Backlog (PBK) for reassignment to future sprints. Sprint re-planning occurs, as needed, based on the accomplishments of previous sprints.
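A minimal sketch of the backlog mechanics described above, with invented story and requirement identifiers: stories are pulled from the PBK into a sprint, and unfulfilled stories return to the PBK for reassignment.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """A user story built around one or more specification requirements."""
    story_id: str
    requirement_ids: list
    done: bool = False

@dataclass
class ProductBacklog:
    """The PBK: a holding ledger for requirements awaiting a sprint."""
    stories: list = field(default_factory=list)

    def plan_sprint(self, capacity: int) -> list:
        """Pull the next `capacity` stories into a sprint."""
        sprint, self.stories = self.stories[:capacity], self.stories[capacity:]
        return sprint

    def absorb_results(self, sprint: list) -> None:
        """Return unfulfilled stories to the backlog for re-planning."""
        self.stories.extend(s for s in sprint if not s.done)

pbk = ProductBacklog([Story("US-1", ["SRS-010"]), Story("US-2", ["SRS-011"]),
                      Story("US-3", ["SRS-012"])])
sprint = pbk.plan_sprint(capacity=2)
sprint[0].done = True                      # US-1 passes its sprint tests
pbk.absorb_results(sprint)                 # US-2 goes back into the PBK
print([s.story_id for s in pbk.stories])   # ['US-3', 'US-2']
```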


Testing within each sprint roughly resembles a very short waterfall testing effort, as described in our T&E description earlier. However, for this discussion, we focus on the unique Agile T&E activities that occur outside of each sprint. These activities include requirements verification, trace, and test results assessments.

T&E validates the user stories and associated critical technical parameters against top-level requirements, identifying any issues early in the sprint development cycles. As sprints are executed and various levels of story “completion” are achieved, the PBK is updated to re-capture and re-plan those requirements that were not completed during the sprint or that underwent significant user-driven updates. Perturbations to delivered incremental capabilities and changes to go-forward strategies are quite common; these changes are captured by a dynamic T&E planning effort. Finally, because these programs are typically on a rapid two-week sprint cadence, the T&E engagement and reporting cycles are quite short. This creates additional T&E/developer coordination opportunities, which improves T&E planning timeliness.

At the end of each sprint, the T&E team assesses the achieved sprint requirements, now integrated into the evolving target system. With each sprint integration, tests are performed for those specific requirements, along with regression tests of existing capabilities. Successful T&E results drive capability acceptance, while T&E failures drive PBK updates and future sprint re-planning efforts.
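The sprint-exit logic might be sketched as follows, using hypothetical result records: passing requirement tests drive capability acceptance, failures drive PBK updates, and regression failures flag breaks in previously accepted capability.

```python
# Hypothetical per-sprint test results; real records would come from the
# program's test management tooling.
sprint_results = {"SRS-010": "pass", "SRS-011": "fail"}      # this sprint's requirements
regression_results = {"SRS-001": "pass", "SRS-005": "pass"}  # previously accepted capability

def sprint_exit(sprint_results, regression_results):
    accepted = [r for r, v in sprint_results.items() if v == "pass"]
    back_to_pbk = [r for r, v in sprint_results.items() if v != "pass"]
    regressions = [r for r, v in regression_results.items() if v != "pass"]
    return accepted, back_to_pbk, regressions

accepted, back_to_pbk, regressions = sprint_exit(sprint_results, regression_results)
print("accepted:", accepted)        # drives capability acceptance
print("re-plan:", back_to_pbk)      # drives PBK updates / future sprints
print("regressions:", regressions)  # any break in existing capability
```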


Operational T&E

The key step of an effective T&E effort, which drives product deployment, is the conduct of Operational Test and Evaluation (OT&E). OT&E is a formal test and analysis activity, performed in addition to and largely independent of the DT&E conducted by the development organization. OT&E brings a sharp focus on the probable success of a development article (software, hardware, complex systems) in terms of performing its intended mission once it is fielded. Probable success is evaluated primarily in terms of the “operational effectiveness” and “operational suitability” of the system in question. Operational effectiveness is a quantification of the contribution of the system to mission accomplishment under the intended and actual conditions of employment. Operational suitability is a quantification of system reliability and maintainability, the effort and level of training required to maintain, support and operate the system, and any unique logistics requirements of the system.


While DT&E comprehensively tests to the formal program requirements, OT&E concentrates on assessing the Critical Operational Issues (COI) identified for each program. Measures of Effectiveness (MOEs) are defined (ideally, early in the program life cycle) to support quantitative assessment of the COI. An MOE may reflect test results for one or several key requirements, while some (secondary) requirements may not map into any MOE. Similarly, the OT&E team uses Measures of Suitability (MOSs) to quantify development product performance against the “ilities” relevant to the particular development product. While the DT&E team may give little attention to evaluation of MOEs and MOSs, the OT&E team uses these technical measures extensively to focus their test planning and as a standardized and compact vehicle for communicating their findings to responsible decision authorities.
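As an illustration of how requirement-level test results might roll up into MOE assessments, the sketch below computes the fraction of each MOE’s mapped key requirements that were demonstrated in test. The MOE names and requirement mappings are invented for illustration.

```python
# Invented mapping from MOEs to the key requirements they reflect; note that
# a secondary requirement (e.g., SRS-042) may map to no MOE at all.
moe_map = {
    "MOE-1 Target detection range": ["SRS-101", "SRS-102"],
    "MOE-2 Track capacity":         ["SRS-110"],
}
test_results = {"SRS-101": True, "SRS-102": False, "SRS-110": True, "SRS-042": True}

def moe_rollup(moe_map, results):
    """Fraction of each MOE's mapped requirements demonstrated in test."""
    return {moe: sum(results[r] for r in reqs) / len(reqs)
            for moe, reqs in moe_map.items()}

for moe, score in moe_rollup(moe_map, test_results).items():
    print(f"{moe}: {score:.0%}")
# MOE-1 Target detection range: 50%
# MOE-2 Track capacity: 100%
```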

An OT&E campaign conventionally has two phases: an initial study and preparation phase, followed by a highly structured formal testing phase. Significant OT&E planning, assessment, and preparation efforts occur throughout the first phase, which runs in parallel with the development effort. Independent reports and recommendations are also made to the acquisition authority during this phase.

The second phase, known in DoD parlance as Initial OT&E (IOT&E), follows final developer delivery of the target product but precedes operational deployment. IOT&E is a series of scripted tests conducted on operational hardware, using developer-qualified deliveries, under test conditions as representative as practical of the expected operational environment. If test results meet pre-established criteria, IOT&E culminates with a recommendation to certify the development article for operational deployment. Once past the Full Rate Production milestone, Follow-on OT&E (FOT&E) of the development article may occur to verify the operational effectiveness and suitability of the production system, determine whether deficiencies identified during IOT&E have been corrected, and evaluate areas not tested during IOT&E due to system limitations. Additional FOT&E may be conducted over the life of the system to refine doctrine, tactics, techniques, and training programs and to evaluate future increments, modifications, and upgrades.
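At its core, the certification recommendation at the end of IOT&E reduces to comparing achieved results against the pre-established criteria. A minimal sketch, with invented criteria and results:

```python
# Invented pre-established criteria and achieved IOT&E results.
criteria = {"detection_range_km": 50.0, "availability": 0.95}
achieved = {"detection_range_km": 53.2, "availability": 0.97}

def certify(criteria, achieved):
    """Recommend certification only if every criterion is met."""
    shortfalls = {k: (achieved[k], v) for k, v in criteria.items() if achieved[k] < v}
    return ("recommend certification for operational deployment"
            if not shortfalls else f"withhold recommendation; shortfalls: {shortfalls}")

print(certify(criteria, achieved))
# recommend certification for operational deployment
```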

OT&E independence from the development program (including the program manager and immediate program sponsors) is a key attribute distinguishing it from DT&E. However, use of a common Test and Evaluation Master Plan (TEMP) is typical, and well-controlled integrated DT/OT testing (“integrated testing”) is encouraged.

How GreenDart can help you: We are proven experts in designing and executing operational T&E programs for all hardware and software developmental program efforts. Please contact us.




Developmental Test and Evaluation

A key component of an effective T&E effort is Developmental Test and Evaluation (DT&E). DT&E is conducted by the system development organization. DT&E is performed throughout the acquisition and sustainment processes to verify that critical technical parameters have been achieved. DT&E supports the development and demonstration of new materiel or operational capabilities as early as possible in the acquisition life cycle. After the Full Rate Production (FRP) decision or fielding approval, DT&E supports the sustainment of systems to keep them current and extend their useful life, performance envelopes, and/or capabilities. Developmental testing must lead to and support a certification that the system is ready for dedicated operational testing.

DT&E efforts include:

  • Assess the technological capabilities of systems or concepts in support of requirements activities;
  • Evaluate and apply Modeling and Simulation (M&S) tools and digital system models;
  • Identify and help resolve deficiencies as early as possible;
  • Verify compliance with specifications, standards, and contracts;
  • Characterize system performance and military utility, and verify system safety;
  • Quantify contract technical performance and manufacturing quality;
  • Ensure fielded systems continue to perform as required in the face of changing operational requirements and threats;
  • Ensure all new developments, modifications, and upgrades address operational safety, suitability, and effectiveness;
  • During sustainment upgrades, support aging and surveillance programs, value engineering projects, productivity, reliability, availability and maintainability projects, technology insertions, and other modifications.

DT&E is typically conducted to verify and validate developer requirements, such as those specified in the Software Requirements Specification (SRS), which are derived from customer top-level specifications. The reports and other products of this (i.e., component-level) verification may also serve as required inputs to the system acceptance process.
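As a sketch of the bookkeeping implied here, the snippet below checks that every SRS requirement derived from a customer top-level requirement has been verified before that top-level requirement can feed the system acceptance process. The identifiers and the trace structure (top-level requirements labeled “SSS” for illustration) are invented.

```python
# Invented trace: customer top-level requirements and the derived SRS
# requirements that DT&E verifies against.
trace = {"SSS-001": ["SRS-001", "SRS-002"], "SSS-002": ["SRS-003"]}
verified = {"SRS-001", "SRS-003"}  # SRS requirements with a completed verification

def acceptance_readiness(trace, verified):
    """A top-level requirement is ready for acceptance only when every
    derived SRS requirement beneath it has been verified."""
    return {top: all(r in verified for r in derived)
            for top, derived in trace.items()}

print(acceptance_readiness(trace, verified))
# {'SSS-001': False, 'SSS-002': True}
```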

How GreenDart can help you: We are proven experts in designing and executing T&E programs for all hardware and software developmental program efforts. Please contact us.

Please provide any comments you might have on this post.


