The Quality Assurance Surveillance Plan (QASP) defines the government's expectations, how the performance requirements (products or services) will be monitored and evaluated, and the incentives applicable to the task order. The ITES-2S Ordering Guide (1.2M MS Word file) describes the QASP in paragraph 4 of Chapter 3, with an annotated example, including a QASP section, provided as Attachment 3.

The QASP is a government-owned document. It should be the government's plan for surveillance of contractor performance during the task order's life. The government (or a selected third party) executes the QASP. The contractor has significant roles in the surveillance process, but these are limited primarily to making measurements and reporting information in accordance with the QASP.

The QASP, as used in a PBSA task order, should not be confused with a Quality Assurance Plan, Quality Control Plan, Process Improvement Plan, or other plans; these are typically separate, contractor-produced plans for the contractor's internal use.

In a PBSA task, the contractor is contractually responsible for quality assurance, and this responsibility is motivated through various incentives (or penalties). The government's focus is on assessing or evaluating performance through the assessment methods to ensure that the government is getting what it is paying for.

Normally, the requiring agency develops the QASP and PWS for inclusion in the Task Order Request. Alternatively, the requiring agency may develop a Statement of Objectives and request, through the Task Order Request, that contractors propose a PWS and QASP (and other TO proposal elements).

Additional topics on the QASP:

  • Developing the QASP
  • Methods of Surveillance
  • Acceptable Quality Level (AQL)
  • Evaluation Methods
  • Incentives (Positive and/or Negative)

Developing the QASP

The QASP should be developed in conjunction with the PWS. Following the ITES-2S Ordering Guide (1.2M MS Word file), four sections are inserted directly from the PWS, the title remains essentially the same, and three sections are developed to complete the QASP.

Relating PBSA Elements of PWS and QASP for ITES-2S Task Order Request

Performance Work Statement            | Relationship  | Quality Assurance Surveillance Plan
1. Project Title                      | same          | 1. Task Order Title
2. Background                         | n/a           |
3. Scope                              | Inserted into | 4. Scope of Performance
4. Applicable Documents               | n/a           |
5. Performance Requirements           | Inserted into | 2. Work Requirements
                                      | n/a           | 3. Primary Method of Surveillance
6. Performance Standard               | Inserted into | 5. Performance Standards
                                      | n/a           | 6. Acceptable Quality Level (AQL)
                                      | n/a           | 7. Evaluation Method
7. Incentives                         | Inserted into | 8. Incentives (Positive and/or Negative)
8. Deliverables and Delivery Schedule | n/a           |
9. GFE/GFI                            | n/a           |
10. Place of Performance              | n/a           |
11. Period of Performance             | n/a           |
12. Security                          | n/a           |
13. QASP                              | n/a           |

Note: Usually, the QASP is developed by the government. But in PBSA under the ITES-2S contract, when the Task Order Request package includes a SOO, the contractor is requested to respond with a PWS that includes a QASP.



Methods of Surveillance

A good surveillance method will have the following characteristics:

  • It is “transparent”: all elements of the measurement, data collection, metrics determination, and reporting are understood, documented, and auditable
  • Contractors provide all necessary measurements, tools, and reporting effort
  • The government execution of surveillance is straightforward and does not duplicate contractor or government quality assurance efforts
  • Surveillance efforts are scheduled or planned for specific time periods
  • Results aid the government in determining courses of action with the contractor
Examples of Surveillance Methods

Surveillance Method | Examples | Comments
100 Percent Inspection* | Government will review each CDRL for completeness within 7 days | Often costly but sometimes necessary for health and safety
Random Sampling* | Random sampling of the Help Desk transaction log showing calls and closure information (monthly) | Statistically based method. Applicable to recurring tasks. Generally requires large numbers of items to measure
Periodic Inspection* | Sample the first 4 trouble logs every M/W/F for analysis | Planned sampling at specific dates or time intervals
Customer Input* | Customer satisfaction survey. Survey end users on the ease of use of newly produced documentation | Depends on firsthand information from customers. Qualitative method used in lieu of quantitative methods. Supplements or complements quantitative standards
Contractor Self-Reporting* | Contractor/subcontractor revenues by socio-economic category, to report compliance with the Small Business Plan. Network availability metrics (failures, operating hours, etc.) | Can be the most efficient method, since it relies on the contractor's in-place infrastructure and systems for measuring performance indicators and results. The keys to making this work are effective partnering in the task and transparency of the contractor's methods; the reporting must be auditable
Trend Analysis | Conducted every quarter | Provides for more continual assessment of ongoing performance
Independent (third-party) audit or assessment | ISO 9001:2000 assessment | Relies on third-party methods and standards

* Surveillance method identified in the ITES-2S Ordering Guide (1.2M MS Word file).
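
To make the random sampling method concrete, the following is a minimal sketch in Python of drawing a monthly sample of Help Desk transactions for surveillance review. The record fields (call_id, closed_minutes) are hypothetical, not from the Ordering Guide; fixing the seed keeps the draw reproducible, and therefore auditable.

    # Minimal sketch: drawing a monthly random sample of Help Desk
    # transactions for surveillance review. Record fields are hypothetical.
    import random

    def draw_sample(transactions, sample_size, seed=None):
        """Return a reproducible random sample of transaction records."""
        rng = random.Random(seed)  # fixed seed -> auditable, repeatable draw
        return rng.sample(transactions, min(sample_size, len(transactions)))

    # Example: sample 3 of a month's call records for closure-time review
    log = [{"call_id": i, "closed_minutes": m}
           for i, m in enumerate([30, 45, 500, 20, 90, 15, 60, 200])]
    print(draw_sample(log, 3, seed=2024))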



Acceptable Quality Level (AQL)

The Acceptable Quality Level (AQL) states the level of variation that a contractor may have from a performance standard. MIL-STD-105D states that an AQL is "the maximum percent defective (or the maximum number of defects per hundred units) that, for purposes of sampling inspection, can be considered satisfactory as a process average." While this standard and its definition pre-date current PBSA methods by many years, it still provides good general guidance for developing AQLs and related factors for service contracts, whether based on sampling, inspection, or other surveillance methods.

Acceptable Performance - For a service, the AQL expresses the maximum (worst case) deviation from the stated performance standard that a customer is willing to accept during a defined measurement period. Performance with fewer deviations than the AQL is desirable (in most cases) but not required. Performance with more deviations is not desirable, but not necessarily the basis for total rejection of the service.

For example, let's assume that a requiring agency specifies end-to-end e-mail delivery within 10 minutes of the end user sending the message, 95 percent of the time (a 5 percent error rate). Therefore:

  • e-mail delivery is the "performance requirement" (service)
  • 10 minutes (for each message) is the "performance standard"
  • 5 percent or less (failure rate) is the "acceptable quality level"
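
As a minimal sketch (with a hypothetical log format, not from the Ordering Guide), the AQL check for this example could be expressed as follows:

    # Minimal sketch: checking a period's e-mail delivery log against the
    # 10-minute performance standard and 5 percent AQL described above.
    STANDARD_MINUTES = 10    # performance standard: delivery within 10 minutes
    AQL_FAILURE_RATE = 0.05  # acceptable quality level: at most 5% late

    def meets_aql(delivery_minutes):
        """Return True if the period's failure rate is within the AQL."""
        failures = sum(1 for m in delivery_minutes if m > STANDARD_MINUTES)
        return failures / len(delivery_minutes) <= AQL_FAILURE_RATE

    # Example: 3 late deliveries out of 40 messages is a 7.5% failure
    # rate, which exceeds the 5% AQL.
    sample_log = [2, 4, 12, 3, 15, 6] + [5] * 33 + [11]
    print(meets_aql(sample_log))  # False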

Determining AQL - Initial AQL values are set after the required services and their associated performance standards have been determined. Specific quality levels can be based on historical or current performance, mission support requirements, direction from higher command, and other sources. Significant considerations:

  • Each required service should have one or more performance standards.
  • Each performance standard should have an acceptable quality level.
  • Determine the level of performance variation that the government will permit during execution of the task order.
  • Review each standard to make sure that delivery of the service at that standard meets the needs of the government.
  • Determine the methods and application of positive incentives if the contractor exceeds the performance standard (acceptable level) and this adds value or benefit to the requiring agency.
  • Determine the methods and application of negative incentives if the contractor does not meet the acceptable level.
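
One way to keep these elements linked is to record them together. The sketch below captures the service, standard, AQL, and incentives as a single record; the field names are illustrative assumptions, not taken from the Ordering Guide.

    # Minimal sketch: recording the service -> standard -> AQL -> incentive
    # chain from the considerations above. Field names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class PerformanceRequirement:
        service: str             # required service
        standard: str            # performance standard
        aql: float               # maximum allowable failure rate
        positive_incentive: str  # applied when performance exceeds the AQL
        negative_incentive: str  # applied when the AQL is not met

    email_requirement = PerformanceRequirement(
        service="End-to-end e-mail delivery",
        standard="Delivered within 10 minutes of sending",
        aql=0.05,
        positive_incentive="None planned",
        negative_incentive="Price decreased for the period",
    )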

Cost/Benefit Trade-Off - Selecting an AQL for most performance standards represents a trade-off between allowable defects (or customer satisfaction) and the likely cost of service delivery.

[Chart: notional service cost versus defect rate, showing the AQL as the trade-off point]

The figure illustrates a range of defects (horizontal axis) and a notional expression of service cost (vertical axis). As illustrated, the designated AQL allows a level of defects against the standard. Performance with fewer defects (left of the AQL) would probably mean greater customer satisfaction and acceptance of the service. Defects in excess of the AQL (to the right) would cause lower satisfaction, reduced value, and possible rejection. A performance standard may be absolute, allowing for no variation, or it may have a range of acceptable (tolerable) variation. Variations will occur. An AQL of 0 percent (perfect performance against the standard) can be very expensive to achieve and is rarely, if ever, achieved.

Measuring Defects or Accomplishment - Most measurements are based on identifying defects or failures within a lot. It is often easiest to express "perfection" as zero defects and "less than perfection" as some recognized number of defects. Performance standards and AQLs may be based on this approach.

Alternatively, we often think in terms of accomplishing a goal: the goal is expressed as 100 percent, and defects lower the accomplishment level. For example, we tend to think of the "availability" of a network or other service rather than the "downtime" experienced.

Either way of expressing a performance standard is acceptable. It is important to ensure that, for a specific standard, AQL, and resulting incentive, the terminology and methodology are documented consistently with the approach chosen.

Service Level Needs – Setting the AQL for a new task order that essentially continues a previously delivered service should be relatively straightforward. The AQL should remain consistent with past experience (unless that service level has been unacceptable). The AQL should be more stringent (fewer defects) than historical levels only when changed requirements are clearly driving a change in service levels, and probably cost.

Sometimes, though, the AQL can only be an "intelligent guess" because no historical basis is available. In other cases, where a quantitative measure is not evident, the standard may be judgmental or subjective, such as the appearance of a physical area.

The following table illustrates the relationship of "availability" to "outages." We often hear of quality at six 9's, a very difficult standard to achieve. When establishing metrics, one must consider the reality of the standard and AQL being set. On an annual basis, can the user live with an outage, or total of all outages, of more than 0.5256 minutes in total? When establishing percentages or other factors, work through the math (as in the sketch following the table) to determine what that means on a daily, weekly, or monthly basis. And is this a realistic standard to be buying?

Outage per Year

Availability | Minutes | Hours   | Days
90%          | 52,560  | 876     | 36.5
99%          | 5,256   | 87.6    | 3.65
99.9%        | 525.6   | 8.76    | 0.365
99.99%       | 52.56   | 0.876   | 0.0365
99.999%      | 5.256   | 0.0876  | 0.00365
99.9999%     | 0.5256  | 0.00876 | 0.00037
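
A quick way to work the arithmetic behind this table is to convert the availability percentage directly into allowable outage. A minimal sketch, in Python, for illustration:

    # Minimal sketch: converting an availability percentage into the
    # allowable outage per year shown in the table above.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a year

    def outage_per_year(availability_pct):
        """Return (minutes, hours, days) of allowable outage per year."""
        downtime = 1 - availability_pct / 100
        minutes = MINUTES_PER_YEAR * downtime
        return minutes, minutes / 60, minutes / (60 * 24)

    for pct in (90, 99, 99.9, 99.99, 99.999, 99.9999):
        m, h, d = outage_per_year(pct)
        print(f"{pct}%: {m:.4f} min, {h:.5f} h, {d:.6f} d")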

In the case of a guess without a sound basis, it would be wise to establish a commitment for the requiring agency to negotiate revised performance standards and/or AQLs with the contractor after some period of stable operation.

[Chart: range of quality levels, from zero defect through unacceptable, relative to the AQL]

Range of Quality Levels
While the Acceptable Quality Level is a key point in the evaluation of contractor performance, there are additional points on the evaluation curve that may be significant for the evaluation, assessment, incentives, and customer satisfaction. These are illustrated on the chart to the right and summarized in the table below.



Quality Levels for Consideration in AQL and Incentives

Quality Level | Usage | Comment
“0” Zero Defect | No deviation from standard | Perfection (no defects). Usually more costly. Rarely achieved. Essential in areas of health and safety
Exceeds needs | Exceeds all needs and economic value | No additional incentive planned
Exceeds AQL | Range of service quality that is better than the AQL but does not exceed customer needs | Establishes a range above the AQL in which incentives apply to encourage performance at this level
Acceptable Quality Level | Maximum allowable level of deviation from the performance standard | The only required level for PBSA
Degraded Service | Service delivery below the AQL (greater number of defects) but still above a level deemed unacceptable | Disincentive (negative incentive)
Unacceptable | Rejection of the service |



Evaluation Methods

Example Evaluation Methods

Performance Standard | Evaluation Method | Comment
Tier 3 problem reports are tracked until closure | Review the list of Tier 3 problems for problem log time and closure information (time, etc.). Compare problem resolution logs from Tier 3 vendors to problem reports |
No more than 5% of callers surveyed express dissatisfaction with service | Review and validate customer surveys. Conduct independent surveys | Sometimes two or more evaluation methods may be used to assess contractor performance



Incentives (Positive and/or Negative)

Accountability in PBSA is a matter of how measured metrics translate into incentives for the contractor. The structure of the incentive arrangement (positive or negative) varies widely. The key to any arrangement is quantifiable results, gathered in a mutually agreed manner, that identify the earned incentive. Both the requiring agency and the contractor must be accountable for actions taken in PBSA. The more these actions can be quantified and placed into the PBSA metrics, the clearer the relationship will be.

AQL Link to Incentives - Incentives and/or disincentives should be linked in an unambiguous manner to the AQL and performance metrics to define what actions are taken when the contractor deviates (plus or minus) from the AQL.

The AQL often equates to a neutral incentive, meaning the price offered (and the base fee in award-fee plans) is not adjusted at this level. But the requiring agency may want to establish a means of giving contractors incentives to exceed the AQL. Conversely, where there are more than a few issues or problems with quality and the contractor does not satisfy some or all of the established performance standards, there should be a mechanism for applying negative incentives (also known as "disincentives").

The key question for those establishing incentives is how to translate the significance or criticality of a service into an incentive. Incentives should be directly linked to the value of the required service. This value may be based on price, performance, schedule or other factors. Incentives can be monetary (financial) or non-monetary (non-financial).

Monetary Incentives - As proposed, ITES-2S task orders should be based on a basic or bid level of incentive for the contractor. Monetary incentives are used to encourage a contractor to achieve a higher level of performance (better than the AQL) that the requiring agency desires. Conversely, should the contractor not achieve the AQL, negative incentives, also known as service credits or penalties, may be levied.

There are several means of implementing monetary incentives under ITES-2S, including the following:

  • Additional monetary incentives can be considered and incorporated by usage of Fixed Price Award Fee (positive)
  • Price may be reduced proportional to the level of degradation of service

Cautionary note: As noted in the ITES-2S Ordering Guide (1.2M MS Word file), all maximum financial incentives should be included in initial committed task order funding to assure prompt payment when earned.

Non-Monetary Incentives – While monetary incentives are most desirable, these may not be practical in all cases. And, nothing precludes offering both monetary and non-monetary incentives for outstanding performance. Several representative non-monetary incentives include the following:

  • Award term - extension of period of performance or automatic exercise of options
  • Reduced task order surveillance, e.g. less frequent measurements, reduced documentation requirements, etc.
  • Government will submit positive performance evaluations to external agencies such as the NIH Contractor Performance System
  • Increased payment frequency - for example, SAIC records labor (a major component of most ITES-2S task orders) on a bi-weekly basis in conjunction with its payroll and labor distribution systems. Invoicing for labor on a bi-weekly basis has a significant positive impact on corporate cash flow if the government pays in a timely manner.
  • Revise task order schedules for future milestones or deliverables following contractor suggestions that might benefit the contractor in workload management, staffing, reducing conflicts in schedule, etc.
  • Reduced oversight and surveillance such as sample size and/or frequency
  • Reduced reporting requirements, e.g. eliminate paper copy of all routine reporting and deliverables with email distribution accepted as the approved method
  • ITES-2S Evaluation of Contractor's Task Order Performance with no less than a Satisfactory rating on each item and an overall YES recommendation on item 7.
  • Letters of Commendation and other recognition of contractor personnel and teams
Examples of Incentives Linked to Performance Standards

Performance Standard | AQL | Incentive | Comments
Help Desk available 24x7 | No more than 1% of calls are answered in more than 45 seconds | 0.00% to 0.49%: price increased 2% for period. 0.50% to 1.00%: no incentive. 1.00% to 5.00%: price decreased 2% for period |
Submit Final Test Report within 15 days of completion of XXX | On-schedule delivery; no deviation | No deviation: no incentive. 1 deviation (late report): price decreased 1% for period | Negative incentive (penalty) only
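
To illustrate how such bands translate into price adjustments, the following is a minimal sketch of the Help Desk row above. The treatment of exact boundary values is an assumption; the table leaves them ambiguous.

    # Minimal sketch: applying the Help Desk incentive bands from the
    # first row of the table above. Boundary handling is an assumption.
    def price_adjustment(late_call_rate):
        """Return the period's price adjustment for a given fraction of
        calls answered in more than 45 seconds."""
        if late_call_rate < 0.005:    # 0.00% to 0.49%: better than AQL
            return +0.02              # price increased 2% for period
        if late_call_rate <= 0.01:    # 0.50% to 1.00%: within AQL
            return 0.0                # no incentive
        if late_call_rate <= 0.05:    # 1.00% to 5.00%: degraded service
            return -0.02              # price decreased 2% for period
        raise ValueError("More than 5% late: outside the incentive bands")

    print(price_adjustment(0.003))  # 0.02
    print(price_adjustment(0.008))  # 0.0
    print(price_adjustment(0.030))  # -0.02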