APR Data Fidelity Review

The following are common areas of confusion related to APR data reporting. Specific review procedures and items to look for (in the form of red flags) are provided to help identify issues that may be undermining data fidelity. Many of these red flags are built into the National Assistive Technology Act Data System (NATADS), both in the aggregate portion used by all Section 4 AT Act grantees to report their APR and in the day-to-day portion that a number of grantees have opted to use as their internal ongoing data management system.

Demonstration

A demonstration is a decision-making event with one participant identified as the decision-maker who completes the performance measure. While a demonstration can have multiple participants, there must be one participant who is the decision-maker. That is typically an individual with a disability or a parent/guardian when one of those participant types is part of the demonstration. While providers can be the decision-maker who completes the performance measure, those demonstration events require a bit more oversight to ensure fidelity of data and ensure the decision is clearly made on behalf of one individual with a disability who is/will be the AT user.

Red Flag: Total Number of Demonstrations = Total Number of Participants

The likelihood of all demonstrations having only one participant who is the decision-maker for themselves or a specific consumer they represent is rather slim. It is possible that your program decided to report only the decision-maker who attends the demonstration even though others participated. While that can be done at program discretion, it is not exactly aligned with the intent of the APR. Even if this is the reason the number of demonstrations and participants are equal, it is worth a closer data review, especially when the one participant decision-maker is not an individual with a disability or parent/guardian. If there are large numbers of professionals or providers who are the participant decision-maker, you need to make sure this is actually a decision-making demonstration event (e.g., that decision-maker provider is getting guided exploration and feature comparison of devices for the specific purpose of making a decision for one identifiable individual with a disability, not general product information for potential application to clients/students with certain skill deficits in general). A small group or even just one SLP from a school district who explores AAC options for a few of their students is not likely a demonstration event but instead would be a training or public awareness event, because there would need to be separate performance measures collected for each decision for each student.

Red Flag: Total Number of Participants ≥ 4x Total Number of Demonstrations

If the average number of participants per demonstration is four or more, it is very likely that some of these events were training or public awareness instead of demonstration. It would be very unusual for every demonstration to have that many participants. If you review your individual demonstration records and see demonstrations with 10 or more participants, those demonstrations are highly suspect, as that large a group is just not conducive to a quality demonstration event. There can be a rare exception, but it should be offset by demonstrations with one, two, or three participants, which is much more typical.
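
These participant-count checks can be automated against record-level demonstration data. Below is a minimal Python sketch; the record structure and field names (id, participant_count) are illustrative assumptions, not NATADS fields.

    # Illustrative sketch: flag demonstration data whose participant counts
    # suggest some events may have been training or public awareness instead.
    demonstrations = [
        {"id": "D-001", "participant_count": 1},
        {"id": "D-002", "participant_count": 12},
        {"id": "D-003", "participant_count": 3},
    ]

    total_demos = len(demonstrations)
    total_participants = sum(d["participant_count"] for d in demonstrations)

    if total_demos and total_participants == total_demos:
        print("Red flag: total participants equals total demonstrations.")

    if total_demos and total_participants / total_demos >= 4:
        print("Red flag: average participants per demonstration is 4 or more.")

    for d in demonstrations:
        if d["participant_count"] >= 10:
            print(f"Review {d['id']}: {d['participant_count']} participants is unusually large.")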

Red Flag: A significant number of individuals with disabilities participants are not reported as the decision-makers

The individual with a disability participant is likely the decision-maker for any demonstration they participate in when there are additional participants reported. If there are a significant number of individuals with disabilities reported as “additional participants” rather than as the decision-maker, that means there were two or more consumers in one demonstration, which is fairly rare. In addition, when an individual with a disability is not the decision-maker, their role in the demonstration might actually be as a family member or advocate rather than as an individual with a disability. Again, there is the odd exception of a married couple, both of whom have disabilities, who are exploring environmental adaptations for their home and are both participating as individuals with a disability and no other role. But in general, it is helpful to carefully review the individual demonstration records that report multiple individuals with disabilities participating in one demonstration and track down the specifics of how a decision-maker was identified to complete the performance measure.

Short-term Device Loan

Device loans can either have a decision-making purpose, with the decision-maker borrower completing the access performance measure, or be for one of three other purposes in which the identified borrower completes the acquisition performance measure. Nationally and historically, about 80% or more of device loans are for a decision-making purpose; far fewer are for the other three short-term purposes combined. Each of those has a situational, time-limited purpose (e.g., accommodation for an event, loaner while waiting for funding/repair, professional development event). It is important to remember that these are confirmed "short term" situations, not someone borrowing a device to use while waiting for funding when they have yet to even identify a potential funding source. Situations that are not clearly time-limited are better served through open-ended loans or other acquisition activities.

The accommodation purpose is one that can be misused for loans that are not short-term. This purpose is limited to providing AT for a fixed, time-limited event such as a two-day meeting, a week-long class, or a short inpatient hospital stay. One way to think about this: helping an organization provide an auxiliary aid as required under the ADA for a time-limited event. When a device is loaned out and used as an accommodation for the policy period of the loan program (e.g., 30 days) rather than a fixed, time-limited event unique to that person, that is not likely to be a short-term loan. For example, if a portable ramp is loaned to someone discharged from the hospital and there is no way to know how long they might need to keep it, that would be better as an open-ended loan that could be kept until a permanent solution is obtained. Additional guidance on distinguishing between the two types of loans can be found in Resource Brief #2 on short-term loans versus open-ended loans.

Red Flag: Number of short-term loans for decision-making is not the majority.

Unless there is another program meeting decision-making needs, this should be the primary focus of short-term loans. Resources used to purchase AT device inventory critical for short-term loan decision-making (e.g., complex communication and vision devices) should not be diverted for other uses. AT device inventory that can be obtained via donation and refurbished can/should be prioritized for non-decision-making purposes to ensure access to AT needed for complex decision-making.

Red Flag: Number of loan days by policy is more than a month.

If the primary purpose of short-term loans is decision-making, the loan period should allow enough time to evaluate the effectiveness of the device yet be short enough that the next borrower can get access in a timely manner. Loan periods that exceed one month suggest the primary purpose is something other than decision-making.
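
The two checks above (decision-making share and policy loan period) lend themselves to a quick automated review. The following Python sketch uses assumed purpose categories, counts, and a 30-day policy value for illustration only.

    # Illustrative sketch: confirm decision-making is the primary loan purpose
    # and that the policy loan period does not exceed one month.
    loans_by_purpose = {
        "decision_making": 140,
        "accommodation": 20,
        "waiting_for_funding_or_repair": 15,
        "professional_development": 5,
    }
    policy_loan_days = 30  # loan period stated in program policy

    total_loans = sum(loans_by_purpose.values())
    if total_loans and loans_by_purpose["decision_making"] / total_loans <= 0.5:
        print("Red flag: decision-making loans are not the majority of short-term loans.")

    if policy_loan_days > 31:
        print("Red flag: policy loan period is more than a month.")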

Red Flag: Borrower and device number is always equal, especially for decision-making loans.

This indicates that each borrower was loaned one device, which means there was no device comparison done before decision-making. It is possible that the borrower participated in a demonstration first and did the compare/contrast at that time and then borrowed the one device they thought was most appropriate. The challenge in this situation is that if the demonstration was also reported as a decision-making event, then there really is not another decision made for the device loan (more of a confirmation of the prior decision). When a demonstration and short-term loan blend together, it is very difficult to report two distinct decision-making events.

It is also possible that these numbers are equal because the AT device borrowed is the core record and the associated borrower data is duplicative. If one borrower gets three devices on short-term loan but each is reported as a separate device loan, then the borrower is reported three times and provided three separate decision performance measures (one for each device), which is not consistent with the APR provisions. This should have been reported as one device loan with three devices and one performance measure.
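
A record-level review can surface both patterns described above. This Python sketch is illustrative only; the loan fields (loan_id, borrower_id, loan_date, device_count) are assumptions rather than a defined data layout.

    # Illustrative sketch: look for signs that the device, rather than the loan,
    # is being treated as the core record.
    from collections import defaultdict

    loans = [
        {"loan_id": "L-10", "borrower_id": "B-1", "loan_date": "2023-02-01", "device_count": 1},
        {"loan_id": "L-11", "borrower_id": "B-1", "loan_date": "2023-02-01", "device_count": 1},
        {"loan_id": "L-12", "borrower_id": "B-2", "loan_date": "2023-02-03", "device_count": 2},
    ]

    # Every loan having exactly one device suggests no device comparison occurred.
    if loans and all(loan["device_count"] == 1 for loan in loans):
        print("Red flag: every loan has exactly one device.")

    # Multiple single-device loans to the same borrower on the same date probably
    # should have been one loan with several devices and one performance measure.
    by_borrower_date = defaultdict(list)
    for loan in loans:
        by_borrower_date[(loan["borrower_id"], loan["loan_date"])].append(loan["loan_id"])

    for (borrower, date), loan_ids in by_borrower_date.items():
        if len(loan_ids) > 1:
            print(f"Review borrower {borrower} on {date}: loans {loan_ids} may be one event.")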

Reuse

Red Flag: Devices are reported with no retail dollar value.

If a number of AT devices is reported, there must be an associated retail price greater than zero. An AT device cannot be reported as acquired with no savings realized. If the device actually has no retail value, then it should be given away and not reported at all.

Red Flag: Average retail price per device is very low.

Look carefully at the total number of devices and retail price. If average retail price per device is very low ($1–$2), make sure the devices are really AT and not “consumables” or supplies.

Red Flag: Too many devices per recipient, or recipient always equals device number.

Look carefully at the total number of devices per recipient. While one person can have multiple AT devices reported, it is highly unusual for one recipient to have more than two to three devices acquired, especially multiple devices in the same AT type as those are typically reported as a group. Any one AT type device total that is larger than the total number of recipients should be investigated.

In addition, when overall reuse numbers get really large (2,000 and more), it is a little worrisome when the device total and recipient total are identical, as this suggests that perhaps the AT device is the core record and the associated recipient data could be duplicative. If one recipient acquires three devices but each is reported as a separate reuse event, then that recipient is duplicated three times and would need to provide three separate performance measures, which is not consistent with APR directions. This should have been reported as one reuse event with three devices and one performance measure.
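
The reuse red flags above can also be checked from record-level data. The Python sketch below is illustrative; the per-record fields (recipient_id, at_type, retail_price) and the price threshold are assumptions.

    # Illustrative sketch: zero retail values, very low average price, and
    # device/recipient ratios that warrant a closer look.
    from collections import Counter

    reuse_records = [
        {"recipient_id": "R-1", "at_type": "Vision", "retail_price": 250.00},
        {"recipient_id": "R-2", "at_type": "Mobility", "retail_price": 0.00},
        {"recipient_id": "R-2", "at_type": "Mobility", "retail_price": 1.50},
    ]

    # 1. A device with no retail value should not be reported at all.
    for r in reuse_records:
        if r["retail_price"] <= 0:
            print(f"Red flag: device for {r['recipient_id']} has no retail value.")

    # 2. A very low average retail price suggests consumables, not AT.
    average_price = sum(r["retail_price"] for r in reuse_records) / len(reuse_records)
    if average_price < 5:  # threshold is an assumption; the guidance flags $1-$2 averages
        print(f"Red flag: average retail price per device is ${average_price:.2f}.")

    # 3. Any AT type with more devices than total recipients, or a device total
    #    equal to the recipient total in a very large program, should be reviewed.
    recipients = {r["recipient_id"] for r in reuse_records}
    for at_type, count in Counter(r["at_type"] for r in reuse_records).items():
        if count > len(recipients):
            print(f"Review AT type '{at_type}': {count} devices vs. {len(recipients)} recipients.")
    if len(reuse_records) >= 2000 and len(reuse_records) == len(recipients):
        print("Review: device total equals recipient total in a large program.")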

State Financing Activities

Red Flag: Lowest/highest, sum, and distribution of incomes of applicants is not possible.

Carefully review the lowest and highest incomes reported, along with the sum of all incomes, the calculated average, and the frequency distribution table. There must be at least one applicant reported in the frequency distribution cell that corresponds to the lowest income (i.e., if the lowest income is $10,000, then there must be at least one reported in the distribution cell of $15,000 or less). Similarly, if the highest income reported is $100,000, then there must be at least one reported in the cell for $75,001 or more. The reported average must also be possible given the distribution. For example, if the lowest income reported is $10,000, the highest is $100,000, and there are two additional loans in the $60,000–$75,000 category, but the average is calculated as $35,000, then either the sum or the distribution is wrong: the smallest the average could be with that distribution is ($10,000 + $60,000 × 2 + $100,000) ÷ 4 = $57,500, well above $35,000. You should be able to look at the distribution and see whether the reported average is consistent with it.
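
Because this consistency check involves several figures at once, a small calculation helps. The Python sketch below mirrors the example above; the frequency cell boundaries and counts are assumptions for illustration, not the exact APR table.

    # Illustrative sketch: verify that the lowest/highest incomes fall in
    # non-empty frequency cells and that the reported average is possible.
    bins = [
        (0, 15_000, 1),        # (cell low, cell high, count)
        (15_001, 30_000, 0),
        (30_001, 45_000, 0),
        (45_001, 60_000, 0),
        (60_001, 75_000, 2),
        (75_001, None, 1),     # open-ended top cell: $75,001 or more
    ]
    lowest, highest, reported_average = 10_000, 100_000, 35_000

    def cell_count(income):
        for low, high, count in bins:
            if income >= low and (high is None or income <= high):
                return count
        return 0

    if cell_count(lowest) == 0 or cell_count(highest) == 0:
        print("Red flag: lowest/highest income does not match the frequency distribution.")

    # Bound the possible average using cell boundaries (the open top cell is
    # capped at the reported highest income).
    applicants = sum(count for _, _, count in bins)
    min_avg = sum(count * low for low, _, count in bins) / applicants
    max_avg = sum(count * (high if high is not None else highest)
                  for _, high, count in bins) / applicants
    if not (min_avg <= reported_average <= max_avg):
        print(f"Red flag: reported average ${reported_average:,} is outside the "
              f"possible range ${min_avg:,.0f}-${max_avg:,.0f}.")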

Red Flag: Lowest/highest, sum, and distribution of interest rates is not possible.

Same review process as above.

Training (including ICT Accessibility Training)

Red Flag: Total Participants in ICT accessibility training is 10 or less.

ICT accessibility training participants provide one of the required performance measures. As a result, if there are zero or very few training participants in this topic area, the performance measure calculation is highly unstable. Every effort should be made to increase these numbers.

Red Flag: ICT accessibility training narrative description is AT, not ICT accessibility.

There continues to be confusion about the content of information and communication technology accessibility training versus AT training. The term ICT includes websites, content delivered in digital form, electronic books and electronic book reading systems, search engines and databases, software, learning management systems, classroom technology and multimedia, telecommunications products (such as telephones), information kiosks, and automated teller machines (ATMs). Training on AT products used to access websites (e.g., JAWS) or AT telecommunications products (e.g., CapTel phone) are not ICT accessibility trainings.

Red Flag: Large portion of training participants cannot be categorized by type and/or have unknown geographic area.

When you are unable to report the type and/or geographic area for a large number of training participants, that suggests the event might have been more of a public awareness event than a training. In general, participants in training events can be individually identified by type and general geographic area. Archived online training should be structured to allow participant type and geographic area to be gathered, along with verification of actual participation, if the event is reported in the APR.

Technical Assistance

Red Flag: Narrative describes delivery of technical assistance to an individual.

Technical assistance, by definition, is not provided to an individual. It is provided to an agency or organization with a goal of improving something — their services, management, policies, or other areas. The goal does not have to be accomplished for the TA to be reported as it may be ongoing. If a goal has been accomplished, it may be appropriate to report as a state improvement outcome.

Coordination/Collaboration and State Improvement Outcomes

Coordination/collaboration activities are not required to be reported in the APR; however, such activities are required of all Section 4 AT Act grantees. As such, when a grantee does not report at least one coordination and collaboration activity, it sends a message that the grantee is not complying with legal requirements. Data from this section of the APR is used to populate the CATADA Catalogue of Initiatives, which ACL frequently uses to identify grantee activities to highlight in partnership activities.

Similarly, not reporting any state improvement outcomes is permissible but can be interpreted as a lack of program focus on systemic policy work that is critical to supporting access to and acquisition of AT. These can be thought of as “systems change” initiatives that result in new, improved, or expanded policy, program, or practice outcomes that increase access to and/or acquisition of AT.

Red Flag: No coordination/collaboration, or state improvement outcomes are reported.

This suggests the program is not complying with the AT Act requirements and/or the program is not as robust as it could/should be.

Leveraged Funding

Leveraged funding reported in the APR consists of dollars that flow to the state AT program and are expended to support authorized AT Act activities included in the state plan. Activities not authorized under the AT Act are not reported in the APR at all, and data is reported in the APR for all activities included in the state plan. Dollars leveraged and used by contractors (i.e., dollars that do not flow to the state AT program) are NOT reported as leveraged funding. In-kind contributions are NOT reported as leveraged funding.

Red Flag: Significant amount of federal leveraged funding.

The “federal” fund source category is limited to direct federal grants received by the state AT program and as such is not typically a large source of leveraged funding. Federal dollars that flow through state agencies (e.g., IDEA, VR, Medicaid) and are provided to the state AT program via agreement with those agencies are reported with a public/state agency source category even though the funds are federal for accounting purposes.

Performance Measures

Performance measures must be collected directly from recipients/participants without inappropriate influence. The performance measure questions (e.g., "Did you make a decision?") and the response options should be presented to the recipient/participant so they can affirmatively select their choice. The state AT program should keep documentation of the performance measure choice made by the recipient/participant for verification of performance measure data collected/reported in the APR (e.g., a form directly completed by the recipient/participant or similar verification).

Red Flag: All performance measure data is always 100%.

It is highly unlikely that a program will never have a non-respondent or will never have a recipient/participant who selects “I did not make a decision” unless there are procedures in place that somehow “discourage” undesirable responses. This is especially true with very large numbers (thousands of recipients/participants). It is especially suspicious when there are non-respondents reported in consumer satisfaction but not in performance measures because those data elements are frequently collected at the same time.
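
A simple cross-check of the reported totals can catch this "too perfect" pattern. The Python sketch below uses assumed counts and element names for illustration.

    # Illustrative sketch: flag 100% positive performance measure results with
    # no non-respondents, especially at high volume, and compare against
    # consumer satisfaction non-response.
    performance = {"respondents": 2400, "positive": 2400, "non_respondents": 0}
    satisfaction = {"non_respondents": 35}

    if (performance["respondents"] >= 1000
            and performance["positive"] == performance["respondents"]
            and performance["non_respondents"] == 0):
        print("Red flag: 100% positive responses and zero non-respondents at this volume.")

    if satisfaction["non_respondents"] > 0 and performance["non_respondents"] == 0:
        print("Review: satisfaction reports non-respondents but performance measures do not.")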

Data Management System

Red Flag: Access to only aggregate data reports (monthly, quarterly, or similar).

Access to individual data records is necessary to ensure data accuracy and consistency. State AT programs must have the ability to identify problems at the individual data record level, either through direct access to individual records in the data system or through data exports/runs of individual records that can be readily produced/accessed when needed. State AT programs must ensure any data system used internally or by a subcontractor collects and aggregates data consistent with the OMB-approved APR instrument, which requires a comprehensive understanding of the data system structure and data aggregation tables. Without access to the backend data tables and programming structure, state AT programs will need to conduct comprehensive testing to ensure the system produces accurate aggregate data for APR reporting. The NATADS D2D system does not need this kind of testing and ongoing oversight, as it was specifically developed and is maintained in conformance with the OMB-approved APR requirements.
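
One practical form of that testing is to recompute an aggregate figure directly from a record-level export and compare it with the aggregate report. The Python sketch below is a minimal illustration; the CSV file name and layout are assumptions, not a defined NATADS export.

    # Illustrative sketch: reconcile a record-level export against the total
    # shown on an aggregate report for the same period.
    import csv

    reported_total = 412  # total shown on the aggregate (e.g., quarterly) report

    with open("training_records_export.csv", newline="") as f:
        computed_total = sum(1 for _ in csv.DictReader(f))

    if computed_total != reported_total:
        print(f"Mismatch: export has {computed_total} records; report shows {reported_total}.")
    else:
        print("Record-level export matches the aggregate report for this element.")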

Historical Data Review

Before submitting current-year APR data, a comparison should be done with at least the prior year’s data and optimally with multi-year historical data to understand how key elements are increasing, decreasing, or not changing. Each screen of the APR in NATADS provides prior-year total data for that activity to make comparison easy. It is critically important for grantees to be able to explain data volume (output level) trajectory changes. Grantees can use the CATADA data portal to create custom tables with historical data to use in this review, and NATADS may be updated to pull a limited number of key data elements from the prior-year APR to provide a benchmark in the current APR to support this kind of fidelity and accuracy review.
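
A simple year-over-year comparison of key output elements can structure this review. The Python sketch below uses assumed elements, totals, and a 25% change threshold for illustration.

    # Illustrative sketch: flag large year-over-year swings in key output data
    # so they can be explained (or investigated) before the APR is submitted.
    prior_year = {"demonstrations": 310, "device_loans": 520, "reuse_devices": 1800}
    current_year = {"demonstrations": 150, "device_loans": 540, "reuse_devices": 1825}

    for element, prior in prior_year.items():
        current = current_year[element]
        change = (current - prior) / prior if prior else float("inf")
        if abs(change) >= 0.25:  # threshold is an assumption; set per program
            print(f"Explain {element}: {prior} -> {current} ({change:+.0%}).")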

Red Flag: No awareness of or ability to explain significant data change.

Grantees should be knowledgeable of their historical data and trajectory patterns over time, especially for key output data elements. Any significant changes should be understood and explained as actual variances due to some known cause (e.g., increases created by new funding and expanded program or decreases caused by program closure) or changes that are artifacts of data system integrity issues or data collection/reporting fidelity issues. If changes are caused by the latter, there should be a clear plan developed to remediate those issues to ensure cleaner and more stable data for the future.

Last updated January 2023
