SUBMITTED TO THE OFFICE OF THE NATIONAL COORDINATOR FOR HEALTH INFORMATION TECHNOLOGY

Electronic Health Record (EHR) Reporting Program: Developer-Reported Measures

Final Report

Prepared for the Office of the National Coordinator for Health Information Technology (ONC)

Fredric Blavin, Emily Johnston, Laura Barrie Smith, and Christal Ramos (Urban Institute)
Gary Ozanich, Kathy Frye, and Alex Horn (HealthTech Solutions)

December 2021

ABOUT THE URBAN INSTITUTE
The nonprofit Urban Institute is a leading research organization dedicated to developing evidence-based insights that improve people's lives and strengthen communities. For 50 years, Urban has been the trusted source for rigorous analysis of complex social and economic issues; strategic advice to policymakers, philanthropists, and practitioners; and new, promising ideas that expand opportunities for all. Our work inspires effective decisions that advance fairness and enhance the well-being of people and places.

Copyright © October 2020. Urban Institute. Permission is granted for reproduction of this file, with attribution to the Urban Institute. Cover image by vandame/Shutterstock.

Contents
Acknowledgments
Introduction
  Background
  Approach to Identifying Developer Measures
HITAC and Public Feedback
  Feedback Contributors
  Feedback Themes and Resulting Revisions
  High-level Feedback
  Domain and Measure Specific Feedback
  Potential Future Measures
Feasibility Testing
  Process
  Findings
Revised Developer-Reported Criteria
Conclusion: Issues to Consider
Appendix
Statement of Independence

Acknowledgments
This report was funded by the Office of the National Coordinator for Health Information Technology (ONC). We are grateful to them and to all our funders, who make it possible for Urban to advance its mission. The views expressed are those of the authors and should not be attributed to the Urban Institute, its trustees, or its funders. Funders do not determine research findings or the insights and recommendations of Urban experts. Further information on the Urban Institute's funding principles is available at urban.org/fundingprinciples.[i]

The authors are grateful to members of the public, subject matter experts, and stakeholders who provided feedback on the measures, including those on the EHR Reporting Program HITAC Task Force. They also thank Kellie McDermott for her editorial assistance.

Introduction

Background
The 21st Century Cures Act (Cures Act)[ii] directed the US Department of Health and Human Services to establish the Electronic Health Record (EHR) Reporting Program.[iii] The Office of the National Coordinator for Health Information Technology (ONC) contracted with the Urban Institute and its subcontractor, HealthTech Solutions, to support development of the program. The EHR Reporting Program was intended to reflect end users' and developers' voluntary reporting of comparative information on certified health information technology (IT). The Urban Institute (hereinafter Urban) and HealthTech Solutions (hereinafter HTS) have published voluntary user measures[iv] for the EHR Reporting Program designed to provide publicly available, comparative information on certified health IT products to inform health IT users' purchasing and implementation decisions.
These measures were informed by a 60-day public feedback period and focus on the domains identified under the Cures Act (interoperability; usability and user-centered design; privacy and security; and conformance to certification standards) and other categories as appropriate to evaluate the performance of certified health IT. ONC does not plan to implement these voluntary user-reported measures at this time.

The EHR Reporting Program also includes measures that developers of certified health IT will be required to report on as a condition and maintenance of certification under the ONC Health IT Certification Program. These developer measures aim to address information gaps in the health IT marketplace and provide insights on how certified health IT is being used. The first set of measures focuses on interoperability, with an emphasis on patient access, public health information exchange, clinical care information exchange, and standards adoption and conformance.

This report presents the developer measures and concludes the Urban Institute and HealthTech Solutions' work to support ONC in the development of this program. The report includes:
◼ A description of the steps taken to identify developer measures;
◼ A summary of Health Information Technology Advisory Committee (HITAC) and public feedback and the measure updates made in response to the initial set of draft measures;
◼ A description of feasibility testing conducted with developers and the measure updates made based on findings; and
◼ The revised developer reporting measures.
In addition, we include recommendations based on the input we have received for ONC to consider as it moves forward with implementing the program.

Approach to Identifying Developer Measures
Identification and refinement of the developer measures occurred in three phases:
1. Identification of draft measures to publish for public and HITAC review;
2. Revision of measures based on HITAC and public feedback; and
3. Refinement of measures based on feasibility testing.
Throughout these phases, the number and focus of the measures were narrowed (see Figure 1).

FIGURE 1. STEPS TO IDENTIFY AND REFINE DEVELOPER MEASURES

The identification phase is described in this section; the steps taken to update the measures based on HITAC and public feedback and feasibility testing are described in the subsequent sections. Identification of developer measures began with a broad literature and market scan in the fall of 2020 to identify potential measures based on topics named in the Cures Act. The scan included a focus on whether measures could potentially be reported through automatic capture using audit logs and other existing data. It included:
◼ A review of existing requirements related to certified health IT across federal programs;
◼ A review of ONC documents, reports, and peer-reviewed and grey literature (see Table 1A in Appendix); and
◼ Market research discussions with subject matter experts (see Table 2A in Appendix).

The scan resulted in 79 potential measures. However, findings suggested that automatic capture and reporting of measures are unlikely to be feasible in the near future. In close partnership with ONC, we prioritized and drafted 20 measures to discuss in eight semi-structured interviews with developers of certified health IT and subject matter experts conducted in January 2021 (see Table 3A in Appendix).
We identified organizations and interviewed participants through a purposive sample based upon initial discussion with ONC, followed by a snowball method to identify additional individuals or organizations. Interviews focused on measure value, reliability, collection burden, and generalizability. Based on interview findings, the 20 measures were narrowed to 12. In close partnership with ONC and other subject matter experts, the measures were further refined to align with ONC policy priorities. Ultimately, 10 measures were moved forward for public and HITAC review. The aim of these measures was to address information gaps in the health IT marketplace and provide insights on how certified health IT is being used. These measures focused on interoperability, with an emphasis on patient access, public health information exchange, clinical care information exchange, and standards adoption and conformance. The approach for selecting the 10 draft measures considered the following:
◼ The extent to which the measures related to priority interoperability functions;
◼ The potential for the measures to evolve and expand to other measure categories in future iterations of the Conditions and Maintenance of Certification requirements under the ONC Health IT Certification Program;
◼ Relevance of the measures to ONC policy priorities and broader stakeholder interests;
◼ Whether the value of the measures outweighs the burden of collecting them;
◼ Whether regulation is required to obtain the measures;
◼ Whether the effort required to report the measures duplicates other data collection;
◼ Whether developers can report the measures at the product level and across their customer base(s); and
◼ Whether the measures can be trended.

HITAC and Public Feedback
ONC, with support from the Urban/HTS team, convened a HITAC Task Force from July 15 to September 9, 2021, to make recommendations[v] to prioritize and improve the draft set of 10 developer measures for the EHR Reporting Program. In addition, written public feedback[vi] on the draft developer measures[vii] was collected through an Urban Institute email inbox between July 14 and September 14, 2021. Both the HITAC and the public were asked to provide feedback on the following topics:
◼ Frequency of reporting (e.g., annually, biannually, or quarterly);
◼ Data granularity (e.g., subgroups, product vs. developer level, single values vs. distributions);
◼ Appropriateness of look-back periods (e.g., active patients seen within the last 12 or 24 months);
◼ Clarity of definitions and measurement;
◼ Benefit of measures relative to the burden of collecting data;
◼ How to address potential interpretation challenges;
◼ Potential burden on users of certified health IT (e.g., clinicians);
◼ Potential burden on small or start-up developers of certified health IT; and
◼ Value of measures to provide insights on interoperability to multiple stakeholders.
In this section, we describe the participants in the HITAC Task Force and the public commenters to provide context for interpreting feedback. We then summarize their feedback and describe our approach to the updates made to measures in response.

Feedback Contributors
There were 12 members of the HITAC Task Force, representing researchers, clinical providers and health care organizations, public health, developers of certified health IT, and a health insurance company.
We also received 21 written public comments representing clinical providers, health care organizations, developers of certified health IT, a health insurance company, and other stakeholders and experts in health IT. A breakdown of HITAC Task Force and public feedback contributors by category is presented in Table 1.

Some contextual factors should be kept in mind when interpreting the HITAC recommendations and public comments. First, the Task Force charge was to focus on prioritizing and improving draft measures, while the public comments included broader feedback on the program and measures. In addition, the Task Force feedback placed more emphasis on the value of the measures, largely because researchers, providers, and policy experts were more heavily represented on the Task Force than among public commenters. In contrast, the public comments included more developer representation, which increased the focus on the burden of reporting measures. Comments related to public health were also more heavily represented in the public comments than on the Task Force. The overlap between organizations represented on the Task Force and in the public comments may amplify some views: Anthem, Inc., Epic Systems, OCHIN, and the Washington State Department of Health were represented both on the Task Force and in submitted public comments. Furthermore, the timing and content of public comment submissions indicate that the Task Force meetings and recommendations also influenced comments. Finally, some coordination among developer commenters was evident, given the use of identical language and references to the EHR Association's comments in other developers' comments.

TABLE 1. HITAC TASK FORCE AND PUBLIC FEEDBACK CONTRIBUTORS BY CATEGORY

Clinical Providers, Researchers, and Other Health Care
  HITAC Task Force (n=12): Raj Ratwani, PhD (co-chair), MedStar Health; Jill Shuemaker, RN (co-chair), ABFM Foundation; Abby Sears, MBA, MHA, OCHIN; Kenneth Mandl, MD, MPH, Boston Children's Hospital; Jim Jirjis, MD, MBA, HCA Healthcare; Joseph Kunisch, PhD, RN-BC, Harris Health; Steven Lane, MD, MPH, Sutter Health; Steven Waldren, MD, MS, AAFP
  Public Comments (n=21): American Academy of Nutrition and Dietetics; National Association of ACOs; OCHIN Inc.; Pew Charitable Trusts; Premier Healthcare Alliance; Quest Diagnostics; Texas Medical Association; Veterans Administration

Public Health
  HITAC Task Force: Bryant Karras, MD, Washington State Dept. of Health
  Public Comments: Washington State Dept. of Health; Oregon Health Authority; Tennessee Department of Health

Developers of Certified Health IT
  HITAC Task Force: Sasha TerMaat, Epic; Zahid Butt, MD, Medisolv
  Public Comments: Epic Systems Corporation; Allscripts; Cerner Corporation; EHR Association; MEDITECH, Inc.

Other
  HITAC Task Force: Sheryl Turney, MEd, Anthem
  Public Comments: Anthem; American Medical Informatics Association; Chart Lux Consulting; Connected Health Initiative; Healthcare Leadership Council

Feedback Themes and Resulting Revisions
We received a range of thoughtful and detailed feedback from the HITAC recommendations and in the submitted public comments. We aimed to be responsive to comments while continuing to consider ONC's priorities and the opportunity to address some concerns through future iterations of the program. Below, we summarize high-level feedback from the HITAC and public comments, and then summarize major feedback for each measure domain where there was agreement and disagreement between the HITAC recommendations and the public feedback.
We also describe how we revised measures in response to specific feedback. Finally, we summarize feedback received on potential future measures. As a reference, Table 2 summarizes the measures posted for review by the HITAC and the public. Additional details can be found in the final HITAC recommendations report[v] and the public comments[vi] on the draft developer measures.

TABLE 2. DOMAINS AND MEASURES POSTED FOR HITAC AND PUBLIC FEEDBACK

Patient Access
  1. Use of different methods for access to electronic health info
  2. Use of third-party patient-facing apps
  3. Collection of app privacy policy
Public Health Information Exchange
  1. Sending vaccination data to Immunization Information Systems (IIS)
  2. Querying of IIS by health care providers using certified health IT
  3. Submission of data to public health via third-party apps or Application Programming Interfaces (APIs)*
  4. Percentage of patients using write-back functionality on third-party, registered patient-facing apps*
Clinical Care Information Exchange
  1. Viewing summary of care records
  2. Use of third-party clinician-facing apps
Standards Adoption and Conformance
  1. Use of Fast Healthcare Interoperability Resources (FHIR) profiles by clinician-facing apps
  2. Use of FHIR profiles by patient-facing apps
  3. Use of FHIR bulk data
Data Quality and Completeness
  1. By data element, percentages of data complete*
Note: Measures marked with an asterisk (*) were posted as potential future measures.

High-level Feedback
Overall, both the HITAC and public commenters saw value in the focus on interoperability and emphasized the value of the standards adoption and conformance measures and the public health measures over the patient access and clinical care measures. The public commenters generally agreed with the HITAC recommendations on specific measure specifications (further described below), but there were points of disagreement.

The HITAC and public commenters agreed that many of the measures will be complex for developers to report. Multiple public comments raised concern that the measures may ultimately increase the cost of certified health IT products for users; to reduce burdens and costs, commenters suggested prioritizing fewer measures, using a phased approach to introduce a few measures at a time, and/or making stratifications a future goal of the program. They also noted potential redundancy between the measures, Real World Testing, and Promoting Interoperability requirements.

In addition, public commenters expressed concern and confusion about the purpose of the EHR Reporting Program as reflected in the developer measures. Multiple comments reflected a lack of clarity on whether the program should only measure what is required for certification, or whether it should measure things that go beyond the requirements to address product inadequacies or push towards interoperability goals. Developers of certified health IT had concerns that the measures focus on the usage of products by customer organizations rather than product performance. In addition, commenters raised concerns that not all Cures Act domains are covered by the draft developer measures (such as usability and user-centered design and privacy and security, which were areas of focus in the previously published user measures). Commenters also suggested several additional measure topics to be considered for the program, including safety, social determinants of health, information blocking, and quality reporting.
Domain and Measure Specific Feedback
In Table 3, we summarize the HITAC and public feedback themes received in each measure domain and our approach to revising the measures accordingly. As previously noted, detailed recommendations from the HITAC can be found in their final report,[v] and individual public comments[vi] are posted on the Urban Institute website.[xiii] In addition, based on HITAC and public feedback, the following updates were made across all measures:
◼ Measures are to be reported annually for a 12-month reporting period to minimize burden and align with the frequency of other related programs;
◼ July 1 to June 30 is used as the default 12-month reporting period, primarily to prevent seasonal differences in vaccination from complicating the public health measures;[1]
◼ Where possible, metrics will be reported at the product level (this was the HITAC recommendation, though commenters varied on whether this is feasible); and
◼ No lookback period beyond the reporting period is included.

[1] Following the feasibility testing process, we chose to align the reporting period with other reporting programs based on the calendar year.

TABLE 3. HITAC AND PUBLIC FEEDBACK AND APPROACHES TO REVISION

Patient Access Measures
Feedback:
◼ HITAC and most commenters agreed to prioritize the measure on access methods and remove the app privacy policy measure
◼ HITAC and several commenters agreed the sustained-usage dimension of the access methods measure should be removed
◼ Multiple developers recommended removal of the measure on use of third-party patient-facing apps and said that identifying the method of patient access, determining the number of users, and identifying sustained access would be complex to report and collect and may reflect the app and patterns of access to care more than the CEHRT product
◼ Several commenters recommended that stratification by patient characteristics be a future goal of the program because stratifications add significant complexity and much of the data collection burden falls on providers
Approach to Revisions:
◼ Kept the use of different methods for accessing electronic health information measure
◼ Removed sustained-usage measure components and revised stratifications to reduce complexity and burden
◼ Removed the use of third-party patient-facing apps and app privacy measures
◼ Made additional specification updates to the use of different methods for accessing health information measure based on HITAC recommendations, such as including only active patients in the numerator and removing numerator 1d (neither method)
Public Health Measures
Feedback:
◼ HITAC and some commenters emphasized the importance and challenges of defining "successful" exchange
◼ HITAC and many commenters raised concerns over the reporting burden of the multiple sub-categories
◼ Some commenters raised concerns over the value added and the appropriateness of the measures
◼ Many commented on how variation in state regulatory environments and in IIS capabilities and data quality can make it challenging to use these measures for comparative purposes
Approach to Revisions:
◼ Kept both initial measures on sending vaccination data to IIS and querying of IIS by providers
◼ Limited subgroups to the IIS submitted to/received from
◼ Considered other types of public health exchange for future measures
◼ Made additional updates to measure specification based on HITAC recommendations, such as updating the numerators and denominators of each public health measure:
  o Measure 1 (sending): updated the numerator and denominator from "number of individuals" to "number of immunizations administered"
  o Measure 2 (queries): changed the denominator from "number of individuals" to "number of encounters" and updated the numerator to "number of query responses received from the IIS"

Clinical Care Measures
Feedback:
◼ HITAC and developers agreed to count any Consolidated Clinical Document Architecture (C-CDA) document received, not just Summary of Care C-CDA document types
◼ HITAC and multiple developers recommended aligning these measures with the Standards Adoption and Conformance measure domain to eliminate any duplication
◼ Multiple developers recommended removal of all measures in this category, particularly the use of third-party clinician-facing apps
◼ Multiple developers suggested focusing on what the CEHRT does, which is to ingest and reconcile discrete data elements into the patient's record in the EHR
Approach to Revisions:
◼ Kept the measure on clinical data received from an external source, with recommended changes from HITAC
◼ Moved the measure on clinician-facing apps to the Standards and Conformance category to eliminate any duplication
◼ Made additional updates to measure specification based on HITAC recommendations, such as using the term "incorporate" instead of "parse and integrate" for the C-CDA measure
◼ Accepted the recommendation that the definition of viewing be having an open document displayed to a user, whether the display includes all or a subset of the data received, regardless of whether the user scrolls through or clicks on any of the data

Standards Adoption and Conformance
Feedback:
◼ HITAC and the public suggested measures should focus on FHIR Resources over US Core and Non-Core Profiles
◼ HITAC and the public commented that the Bulk FHIR measure denominator needed refinement
◼ HITAC recommended adding measures on electronic health information export, vendor availability of apps, and cost of API use
◼ Commenters raised challenges for developers to track app use and calculate measures
◼ While HITAC focused on FHIR APIs, commenters suggested measuring all APIs
◼ While HITAC suggested reporting separately for inpatient and outpatient settings, commenters suggested denominators be aggregated across sites to reduce complexity and increase feasibility
Approach to Revisions:
◼ Moved forward with clinician-facing apps, patient-facing apps, and bulk FHIR, initially using an aggregated customer measure as the denominator
◼ Added vendor availability of apps and Electronic Health Information (EHI) Export as new measures
◼ Considered API cost data, app types by provider/user, technical measures of data volume/throughput, and refinement of denominators as future measures
◼ Considered reporting on a semi-annual basis
◼ Identified data completeness and quality measures as potentially better associated with other reporting programs due to the influence of customer organizational factors

Potential Future Measures
In addition, we collected feedback on potential future measures for the program focused on data quality and completeness, submission of data to public health via third-party apps or APIs, and the percentage of patients using write-back functionality on third-party, registered patient-facing apps. Of these three, only the data quality and completeness measure was reviewed by the HITAC due to time constraints. While the measure was considered highly relevant for the ability to report other measures by subgroups (such as those based on patient demographics), the HITAC and the public suggested much more work is needed to clarify what would be required for each data element to assure accuracy in terminology. Some commenters suggested ONC establish a special initiative for this issue. Public comments also suggested it is not feasible for developers to report on the quality of the data at this time, that demographic data may come from registration systems that do not reflect EHR performance, and that this measure goes back to Meaningful Use stage 1, where it topped out at 99 percent and was removed.

For the other two potential future measures reviewed by the public, several commenters supported the public health measure. Developers were strongly opposed to both measures, given the difficulty of collecting information on third-party apps. Some commenters suggested third-party app developers could better provide this information. For the patient write-back measure, developers pointed out that there is no certification requirement for them to support write-back and that they would need guidance to help them develop this functionality.

Feasibility Testing
We conducted feasibility testing with targeted respondents to assess the extent to which the developer-reported measures for the EHR Reporting Program can be produced and reported by developers of certified health IT. Specifically, we were interested in understanding developers' ability to produce the measures from existing data systems; anticipated costs of preparing to produce the measures; the relative burden of individual measures; and potential barriers to measure reporting.

Process
As part of our feasibility testing process, we first developed a semi-structured interview guide. This guide began with a brief explanation of the purpose of the EHR Reporting Program; an introduction to the purpose, goals, and structure of the interview; and consent language explaining that participation was voluntary. We asked all participants if they had questions about the program or the purpose of the interview before beginning. We also requested permission to record the interview to supplement written notes. The guide contained overarching questions about the feasibility of measuring and reporting measures; a table asking for standardized estimates of burden, cost, and time to implement for each measure; and detailed questions about whether aspects of each measure might make them difficult to report.
The final interview guide reflects ONC priorities, including:
◼ for each developer-reported measure, capturing:
  o a standardized estimate of burden
  o a standardized estimate of cost
  o estimated cost (in dollars)
  o estimated time to implement
◼ for the measures overall, understanding:
  o for which subgroups and stratifications measures can be reported
  o under what, if any, circumstances a developer should be exempt from reporting measures
  o developer perspectives on the appropriate frequency of measure reporting
  o whether developers anticipate unintended consequences related to reporting the measures

We pilot tested this interview guide with Clinovations Govhealth, a health IT consulting firm with expertise in developer measurement of interoperability and experience working on similar projects with ONC. With guidance from ONC, we next invited five participants from our network of stakeholders engaged in the EHR Reporting Program development process: four individual developers that did not submit public comments and one developer trade organization, the EHR Association (EHRA). All five invited organizations agreed to participate and were scheduled for a 1.5-hour virtual interview. At least four, and often more, members of the Urban/HTS team participated in each interview, allowing the interviewer to focus on the discussion while another team member took detailed notes. ONC representatives also attended each interview and answered participant questions about the program and specific measures as appropriate. We shared the interview guide and the nine developer-reported measures (revised to incorporate HITAC and public feedback) with each participant organization in advance of the call and provided participants with the opportunity to share their feedback during the interview and in writing. Interviews took place between October 20 and November 19, 2021; written feedback was requested by the end of that period.

The team then reviewed interview notes, recordings, and written feedback to identify emerging themes and consistently raised questions and concerns, and to combine scores for the standardized measures. In some cases, the feedback we received indicated that a participant organization did not have enough information, either from their internal team or because of uncertainty regarding the measure, to estimate a measure's burden. We included such feedback in our findings.

Findings
We present the findings from this feasibility testing process below, including general themes from discussions, standardized summaries of anticipated burden and financial cost for each measure, detailed measure-specific comments, and key findings from the EHRA interview. We describe the EHRA findings separately because this call was different from the others: EHRA is a trade organization, and the call included feedback from representatives of seven different developers, two of whom also provided one-on-one feedback. Finally, we describe revisions made to the measures and considerations for ONC as they implement the program based on our findings.

General themes from discussions
◼ Multiple developers asked about the shift in program priorities away from a comparison tool for prospective purchasers. ONC explained that the program priorities for the developer measures have evolved to focus on providing insight into the functioning and performance of the marketplace.
◼ All developers indicated that they are constrained by existing programs and priorities and inquired about aligning or combining the EHR Reporting Program with Real World Testing (RWT) because of program overlap. One developer also mentioned that they are in the midst of final preparations for 2015 Edition Cures Update testing and are going through extensive preparations for their National User Group Meeting. Another developer recommended linking the EHR Reporting Program to the CMS Promoting Interoperability program.
◼ All developers indicated that there would be a certain level of fixed costs associated with participating in the program and varying marginal costs associated with each measure. All developers were able to provide some burden or cost ratings associated with each measure, but none were able to provide specific dollar or full-time equivalent (FTE) estimates (additional time and information on final measures would be needed to provide cost estimates).
◼ Examples of general cost and burden drivers included server costs to store the data; developing a mechanism to transfer the data to ONC; data gathering, measurement calculation, and software development; renegotiating contracts or data use agreements (DUAs) with clients (only an issue for one of the developers interviewed); and having multiple products and product types.
◼ Developers indicated that EHR architecture can affect the burden and costs associated with the EHR Reporting Program. For example, the burden and costs could be lower in more agile, cloud-based systems relative to client-server EHRs.
◼ In general, developers did not indicate a need for exemptions if they were provided with sufficient time to implement the measures. However, one developer recommended providing an opportunity for developers to supplement their results with qualitative information to tell a story about the market and customers they serve and why the data look the way they do. Exemptions may also be needed in cases where the measure is not applicable to the EHR product (e.g., specialty EHRs without any clients that administer immunizations).
◼ Some developers raised concerns about data privacy and confidentiality, particularly where data stratification is requested. For example, one developer indicated that they have only one client in a particular state, so the data reported by the IIS for the public health measures would be tied to that particular client. This developer asked that ONC only request data when the sample size exceeds a certain minimum threshold.
◼ Developers are concerned about how to interpret and gather data for the public health measures. Each state and IIS differs in how it tracks sent and received data. For example, while developers know how many messages were sent to the IIS as per measure 2 (Public Health Send), reporting on measure 3 (Public Health Receive) is challenging because some IISs do not send back acknowledgments that they received the data.
◼ Developers also provided several recommendations that could reduce the burden associated with the EHR Reporting Program, such as:
  o Requesting fewer stratifications, although some (e.g., Certified Health IT Product List (CHPL) ID, location, ambulatory status) are less problematic than others (e.g., Sexual Orientation and Gender Identity (SOGI)). Measure-specific comments are discussed further below.
However, one developer indicated that they would prefer not to report measures by CHPL ID but rather to report the measures across their entire client base.
  o Aligning the reporting period with other reporting programs based on the calendar year. Not using the calendar year creates a burden because developers must build different reporting instrumentation for different reporting periods.
  o Requiring less frequent reporting (e.g., annual instead of quarterly).
  o Initially not requiring reporting on 100 percent of customers for developers whose contracts or DUAs need to be modified, but rather on a subset of customers choosing to voluntarily engage in the program.

Standardized Summaries of Anticipated Burden and Financial Cost
Table 4 includes the sample size, mean, and range of the relative burden and cost ratings for five EHR products, provided by the four developers we interviewed. The table captures two separate estimates for one of the developers: one for their cloud-based product and another for their locally hosted products. This developer provided separate estimates because the measures would be easier to deploy from the cloud-based product than from the other products. Sample sizes vary across measures because some developers did not provide ratings for all measures during the discussions. As previously mentioned, none of the developers provided cost estimates in dollar amounts, and except for one developer, all ratings were discussed on the call without any written follow-up.

Overall, the cost and burden estimates are highly correlated with one another. However, there is noticeable variation in the burden and cost ratings across developers, as shown by the range of estimates. Based on this limited sample, burden and costs are generally lowest for the patient access measure (measure 1) and the standards and conformance measures (measures 5-9). In contrast, burden and costs are higher for the public health measures (measures 2 and 3) and a component of the C-CDA measure (numerator 4-2, for reconciled and incorporated data). Table 5 includes ranges for the estimated time to implement each measure. Overall, the estimated time to implement all measures ranges from less than 12 months to 40 months, with most measures taking between 12 and 24 months to implement. Most importantly, all developers indicated that they could fully implement the measures within 40 months.

However, there are several major limitations to note about these estimates. First, the sample sizes are extremely small and do not represent the universe of EHR developers or products. In fact, four of the averages in Table 4 are based on a single estimate. Second, the criteria that developers applied to generate ratings were subjective and varied for each interview, making it difficult to compare findings across developers. For example, only one developer provided written feedback following our discussion, whereas the others provided on-the-spot approximations during the call. Table 5A in the Appendix includes the developer-specific burden ratings, cost ratings, and time to implement; these estimates provide a better sense of how the costs and burdens vary within each developer. Cells with missing values indicate that ratings were not provided during the discussions.
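As a concrete illustration of how the summaries in Tables 4 and 5 below were assembled, the following minimal sketch (in Python) applies the midpoint rule, used when a developer provided a range for a rating (see the notes beneath Table 4), before computing each measure's sample size, mean, and range. The example values are hypothetical, not actual interview responses.

```python
# Minimal sketch of the summary-statistics method described above.
# A rating may be a single value, a (low, high) range, or None when a
# developer did not provide a rating; ranges collapse to their midpoint.
# Example values are hypothetical, not the actual interview responses.

def midpoint(rating):
    """Collapse a (low, high) range to its midpoint; pass scalars through."""
    if isinstance(rating, tuple):
        low, high = rating
        return (low + high) / 2
    return rating

def summarize(ratings):
    """Return N, mean, and range for one measure, skipping missing ratings."""
    values = [midpoint(r) for r in ratings if r is not None]
    if not values:
        return {"n": 0, "mean": None, "range": None}
    return {
        "n": len(values),
        "mean": round(sum(values) / len(values), 1),
        "range": (min(values), max(values)),
    }

# Hypothetical burden ratings for one measure across five products.
burden_ratings = [2.5, (6, 7), 10, 5, None]
print(summarize(burden_ratings))  # {'n': 4, 'mean': 6.0, 'range': (2.5, 10)}
```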
TABLE 4: STANDARDIZED SUMMARIES OF ANTICIPATED BURDEN AND FINANCIAL COST

                              Burden Rating: 1 (low) - 10 (high)   Cost Rating: 1 (low) - 10 (high)
Measure                       N    Average   Range                 N    Average   Range
1: Patient access             4    4.3       2 - 6.5               3    3.7       2 - 6.5
2: PH (send)                  5    6.7       2.5 - 10              1    10        10 - 10
3: PH (receive)               5    6.7       2.5 - 10              1    10        10 - 10
4-1: C-CDAs viewed            5    5.1       2 - 8                 1    6.5       6.5 - 6.5
4-2: C-CDAs R/I               5    7.9       5 - 10                3    8.8       6.5 - 10
5: # of apps                  4    4.3       2 - 9                 3    4         2 - 5
6: FHIR (clinician-facing)    4    5.6       2 - 9.5               3    4         2 - 5
7: FHIR (patient-facing)      4    5.6       2 - 9.5               3    4         2 - 5
8: Bulk FHIR                  4    5.6       2 - 9.5               3    3.2       2 - 5.5
9: EHI export                 1    2         2 - 2                 2    3.5       2 - 5

Notes: Summary statistics are based on ratings of five products provided by four developers. One developer provided separate estimates for cloud-based and client-server EHRs. Midpoints were used when a developer provided a range for a given rating. R/I = reconciled/incorporated.

TABLE 5: STANDARDIZED SUMMARIES OF ESTIMATED TIME TO IMPLEMENT MEASURES (IN MONTHS)

Measure                       N    Min       Max
1: Patient access             5    < 12      34
2: PH (send)                  5    < 12      34-40
3: PH (receive)               5    < 12      34-40
4-1: C-CDAs viewed            4    < 12      34-40
4-2: C-CDAs R/I               5    18-24     34-40
5: # of apps                  5    12        34-40
6: FHIR (clinician-facing)    5    12        34-40
7: FHIR (patient-facing)      5    12        34-40
8: Bulk FHIR                  5    12        34-40
9: EHI export                 4    12        18-24

Notes: Summary statistics are based on ratings of five products provided by four developers. One developer provided separate estimates for cloud-based and client-server EHRs. R/I = reconciled/incorporated.

Measure-Specific Comments
Below, we describe measure-specific comments heard from developers during the interviews that drive some of the estimates in Appendix Table 5A.

Measure #1: Patient Access
◼ Several developers indicated that stratifying the patient access measure by patient gender assigned at birth, SOGI, and Social Determinants of Health (SDOH) would add a significant burden.
◼ Developers also raised interpretation concerns over this measure. For example, the measure does not show how well the EHR is functioning, since developers have no control over whether patients access their data. Additionally, SOGI information might not be entered uniformly across all clients and would be difficult to aggregate and interpret.
◼ One developer indicated that reporting 1a (via third-party app) and 1c (combination of third-party app and patient portal) would be significantly more burdensome than 1b (via patient portal), since they have not yet found a way to identify apps. The other two developers did not think this measure would be very difficult to report on, since they are able to track patient portal and third-party app usage.

Measures #2 and #3: Public Health Information Exchange
◼ One developer stated that data for these measures would be relatively straightforward to pull for their cloud-based product but moderately challenging to pull for their locally hosted products.
◼ In contrast, the other three developers stated that the burden associated with these measures, particularly measure 3 (PH receive), would be relatively high because each state and IIS differs in how it tracks sent and received data. For example, while developers know how many messages were sent to the IIS as per measure 2 (PH send), reporting on measure 3 (PH receive) is challenging because some IISs do not send back acknowledgments that they received the data.
One developer also indicated that the burden for these measures would be relatively high because they are not currently capturing which IIS the client is connected to.
◼ One developer indicated that reporting by IIS and age group would add burden relative to reporting by IIS only.

Measure #4: Clinical Care Information Exchange
◼ Developers asked for additional clarity around key definitions associated with this measure (e.g., "view," "open," and "received") and whether there is a timeframe for when the data are parsed. For example, one developer indicated that within their products, users can view documents without actually opening them.
◼ Definitional concerns aside, two developers indicated that measure 4 would be only moderately difficult to report on, since both are collecting some of these data now and/or are building capabilities to collect them. In contrast, another developer indicated that numerator 2 for this measure would be extremely difficult for them to report on and would cost millions to implement, since they do not have a way to uniquely trace the lineage of data once it is incorporated into the chart.

Measures #5-9: Standards and Conformance to Certification
◼ One developer of certified health IT was unable to comment on the burden and costs associated with these measures because they are in the developmental stages of their API applications and services. They will be releasing this functionality as part of their 21st Century Cures work and should be able to implement these measures within an 18- to 24-month timeframe.
◼ One developer indicated that these measures would be relatively easy to implement and would not be overly costly, whereas the other two indicated that these measures would be moderately or highly burdensome to implement (particularly measures 6-8). For example, one developer indicated that in many cases they have the capability to track APIs (e.g., they will be certifying R4 FHIR APIs relatively soon); however, uncertainty over how they can stratify and tag apps in an appropriate database that they would be reporting from adds to the burden. Measure 5 (number of apps) would be less burdensome.
◼ Developers generally agreed that measure 9 (EHI export) would be relatively straightforward to implement as currently defined. One developer also recommended adding the use case for why the data was exported (e.g., moving to another EHR, use for a population health tool, etc.) to get more useful information, although this would create an additional burden.

Key findings from EHRA interview
There were approximately 30 participants on the EHRA interview, representing a variety of developers and EHRA staff. Overall, we obtained feedback from seven EHR developers during the interview. In general, the major findings from the EHRA interview were consistent with the feedback from the individual developer interviews. The key takeaway messages from the EHRA discussion are summarized below.
◼ A number of the developers represented agreed that measures 2-3 (Public Health), 6 (FHIR clinician-facing), and 7 (FHIR patient-facing) have the highest burden.
  o Measures 1 (Patient Access), 5 (number of apps), and 9 (EHI Export) had the lowest burden.
  o Measures 4 (C-CDA) and 8 (Bulk FHIR) were in the middle for burden.
◼ Developers agreed that the infrastructure items (i.e., fixed costs) needed to implement the program would take the longest time to develop. There would also be incremental costs and burdens for each additional measure.
◼ Some developers indicated that it will be a significant barrier if they are required to collect data from all of the organizations that use their certified products, because they need their customers' permission to access the metadata and pull it back.
  o One cost relates to the need to develop a reporting infrastructure and to change all of the license or contractual agreements with customers to get them to provide data.
  o All developers have different contractual relationships with their clients. Depending on the type of services and type of clients, all would require some legal review of the language on data ownership and access.
◼ The developers indicated that they cannot provide accurate estimates without first knowing additional details on program implementation and measure specification.
◼ Developers agreed that stratification adds to the burden and complexity of each measure.
◼ Developers agreed that none of the measures could be reported on immediately; one developer reported that the timeline to implement would be five years after the Final Rule.
◼ One developer indicated that, as a specialty EHR, they did not have clients that administered immunizations, so there would need to be exceptions for the public health measures.

Recommended changes to measures based on feedback
Table 6 highlights the cross-cutting and measure-specific updates we made to the measures based on feedback from the feasibility testing process. These edits are relatively minor but can potentially reduce the costs and burden associated with implementation of the EHR Reporting Program. The final draft version of the developer measures incorporating these feasibility test findings appears in the next section.

TABLE 6. FEASIBILITY TEST FINDINGS AND APPROACHES TO REVISION

Cross Cutting
◼ Finding: Not using a calendar-year reporting period (i.e., January 1-December 31) creates a burden of developing different reporting instrumentation for different reporting periods. Revision: Align the reporting period with other reporting programs based on the calendar year.
◼ Finding: Frequent reporting requirements add significant burden. Revision: Opt for annual reporting of measures (revisions not needed).
◼ Finding: Reported data for the measures alone provide an incomplete picture. Revision: Allow developers to submit qualitative information and context around each measure, if desired.

Patient Access
◼ Finding: Stratifying the patient access measure by patient gender assigned at birth, SOGI, and SDOH would add a significant burden. Revision: Remove this stratification from the reporting elements.

Public Health Information Exchange
◼ Finding: Request fewer stratifications; reporting by IIS and age group would add burden for developers. Revision: Remove stratification by IIS and age group from the reporting elements.
◼ Finding: Some EHRs do not have clients that administer immunizations. Revision: Provide exemptions for specialty EHRs.

Clinical Care Information Exchange
◼ Finding: Add clarity to definitions. Revision: Define "received" as successful receipt of a unique C-CDA that was matched to the correct patient; see paragraph (b)(2)(ii) of certification criteria (b)(2).[viii]
◼ Finding: Numerator 2 (reconciled and incorporated data) is extremely burdensome. Revision: Keep as is.

Standards Adoption and Certification Conformance
◼ Finding: The measure alone does not provide much value. Revision: ONC should consider adding the use case for why data was exported (no revisions made).

Revised Developer-Reported Criteria
The measures in Table 7 (patient access), Table 8 (public health information exchange), Table 9 (clinical care information exchange), and Table 10 (standards adoption and certification conformance) incorporate feedback from the HITAC, public comments, and feasibility testing with developers as described above. As such, the measures in these tables comprise the final draft developer-reported criteria.

TABLE 7: PATIENT ACCESS MEASURES (APPLIES TO CERTIFICATION CRITERIA (E)(1) AND (G)(10)[ix])

Measures
1. Patient access to electronic health information: Percentage of patients who access their electronic health information.
  Numerator: Number of active patients[a] who accessed their electronic health information
    1. Via third-party app only (authorization as a proxy for access)
    2. Via desktop patient portal or app given by the health care provider for portal use only (certified health IT developer's app tethered to the EHR)
    3. Combination of 1 and/or 2 above (e.g., third-party app, desktop patient portal, and/or health care provider app)[b]
  Denominator: Number of active patients.

Reporting Elements and Format
Require developers to report numerators and denominators, not just percentages.
Measures should be reported annually for a 12-month reporting period.
January 1 - December 31 should be used as the default for a 12-month reporting period.
Aggregated by CHPL Product Number.
The denominator of encounter types should determine the product association. The numerator should not distinguish between ambulatory and inpatient encounters.
Developers should specify how they measure the number of active patients that accessed their electronic health information via third-party apps (e.g., by access token or refresh token during the reporting period, audit log, etc.).

Notes: Developers to submit documentation on the data sources and approaches (e.g., assumptions, information on providers or products that are included/excluded from numerators and denominators, etc.) used to report on the measure. Developers may also submit descriptive or qualitative information to provide context around each measure if desired or necessary.
[a] The definition of an active patient, for the sake of this reporting, should be one that had an encounter within the reporting period. CMS generally defines a patient encounter as any encounter where medical treatment is provided and/or evaluation and management services are provided. Based on HITAC recommendations, developers should use NCQA's Outpatient Value Set for outpatient codes and SNOMED codes 4525004, 183452005, 32485007, 8715000, 32485007, and 48951000124107 for inpatient codes. If a reporter does not support encounters as defined in the code sets above, they can attest to that and submit a substitute code set that represents the majority of encounters in their system and use that definition for reporting.
[b] We are distinguishing between a certified health IT developer-provided app (tethered to the EHR) and third-party apps (regardless of whether given by the health care provider or not). Thus, if an organization offers a third-party app whose look and brand are tied to the provider organization, it would be counted as a third-party app.
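To make Table 7's reporting elements concrete, the sketch below (ours, in Python; not part of the measure specification) shows one way a developer might bucket active patients by access method from audit-log events. The event schema and the third-party versus tethered classification are hypothetical, since the table leaves the choice of data source (access tokens, refresh tokens, audit logs) to the developer to document.

```python
# Hypothetical sketch of measure 1: bucket active patients by access
# method using audit-log events. The event schema and the client
# classification below are assumptions for illustration only.
from collections import defaultdict

# client values: "third_party_app" (app registered by a third party) or
# "tethered_portal" (the certified developer's own portal or app).
access_events = [
    {"patient_id": "p1", "client": "third_party_app"},
    {"patient_id": "p2", "client": "tethered_portal"},
    {"patient_id": "p2", "client": "third_party_app"},
    {"patient_id": "p3", "client": "tethered_portal"},
]
active_patients = {"p1", "p2", "p3", "p4"}  # had an encounter in the period

methods_by_patient = defaultdict(set)
for event in access_events:
    if event["patient_id"] in active_patients:  # only active patients count
        methods_by_patient[event["patient_id"]].add(event["client"])

third_party_only = sum(m == {"third_party_app"} for m in methods_by_patient.values())
portal_only = sum(m == {"tethered_portal"} for m in methods_by_patient.values())
combination = sum(len(m) > 1 for m in methods_by_patient.values())

denominator = len(active_patients)
print(f"1a third-party only: {third_party_only}/{denominator}")  # 1/4
print(f"1b portal only:      {portal_only}/{denominator}")       # 1/4
print(f"1c combination:      {combination}/{denominator}")       # 1/4
```

A token-based variant would substitute authorization-server records for the audit events but leave the bucketing logic unchanged.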
TABLE 8: PUBLIC HEALTH INFORMATION EXCHANGE MEASURES (APPLIES TO CERTIFICATION CRITERIA (F)(1)[x])

Measures
2. Vaccinations/immunizations: Percentage of vaccine administrations where immunization data were sent electronically to an immunization information system (IIS).
  Numerator: Number of administrations from which the information was electronically submitted to a registry successfully (e.g., via HL7 v2.5.1 transactions)
  Denominator: Number of immunizations administered
  For additional clarification on the definition of administered vaccines, see paragraph (f)(1)(i) of certification criteria (f)(1).[x]
3. Immunization forecasts: Number of IIS queries made per encounter.
  Numerator: Number of query responses received from the IIS, including queries made directly from the EHR or via a network such as an HIE or other type of network
  Denominator: Number of encounters

Reporting Elements and Format
For each measure, gather numerator and denominator counts by IIS submitted to/received from.
Collect numerator and denominator counts but report out as percentages by specified subgroups.
Measures should be reported annually for a 12-month reporting period.
January 1 - December 31 should be used as the default for a 12-month reporting period.
The definition of successful transmission to an IIS registry, for the sake of this reporting, should be the total messages submitted minus acknowledgments with errors (HL7 v2.5.1, severity level of E).[a]
CMS generally defines a patient encounter as any encounter where medical treatment is provided and/or evaluation and management services are provided. See the notes for additional guidance.[b]

Notes: Developer products (e.g., specialty EHRs) without clients that administer immunizations can be exempt from these measures. Developers to submit documentation on the data sources and approaches (e.g., assumptions, information on providers or products that are included/excluded from numerators and denominators, etc.) used to report on the measure. Developers may also submit descriptive or qualitative information to provide context around each measure if desired or necessary.
For interpretation:
• Not all clinicians consider immunizations within their scope of practice, and this will affect the data reported. Not every encounter would necessarily have a query, and some queries may be performed outside the concept of an encounter.
• Not all provider sites may be able to query, depending on their bidirectional connectivity status and vendor interoperability architecture.
• Jurisdictions vary in mandated reporting for specific patient age groups, and the measure would only reflect those patients for which providers must electronically transmit data.
• Measure 3 would not capture immunization data that is pushed to EHRs.
[a] This way, IIS jurisdictions that do not send HL7 Acknowledgement messages (ACKs) will not be a limitation. This approach assumes that submitted messages are at a minimum reaching public health.
[b] CMS definitions of encounter vary slightly by the program involved, usually with additional exceptions. Based on HITAC recommendations, developers should use NCQA's Outpatient Value Set for outpatient codes and SNOMED codes 4525004, 183452005, 32485007, 8715000, 32485007, and 48951000124107 for inpatient codes. If a reporter does not support encounters as defined in the code sets above, they can attest to that and submit a substitute code set that represents the majority of encounters in their system and use that definition for reporting.
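As an illustration of the "successful transmission" definition in Table 8 (total messages submitted minus acknowledgments with errors), here is a minimal Python sketch that scans HL7 v2.5.1 ACK payloads for ERR segments carrying severity E. The pairing of submissions to ACKs and the sample messages are hypothetical; per note [a], a missing ACK counts as a success.

```python
# Minimal sketch of the "successful transmission" definition above:
# successes = total messages submitted minus acknowledgments carrying
# an error-severity ERR segment (HL7 v2.5.1, ERR-4 = "E"). Pairing of
# submissions to raw ACK payloads is assumed to be done elsewhere.

def ack_has_error(ack_message: str) -> bool:
    """True if any ERR segment in the ACK reports severity E (error)."""
    for segment in ack_message.split("\r"):
        fields = segment.split("|")
        # ERR segment: ERR-4 (the fourth field) carries severity W/I/E.
        if fields[0] == "ERR" and len(fields) > 4 and fields[4] == "E":
            return True
    return False

def successful_submissions(submissions_to_acks: dict) -> int:
    """Count submissions whose ACK (if any) did not report severity E.

    Missing ACKs count as successful, so IIS jurisdictions that do not
    send acknowledgments are not a limitation (see note [a] above).
    """
    return sum(
        1 for ack in submissions_to_acks.values()
        if ack is None or not ack_has_error(ack)
    )

acks = {  # hypothetical submission IDs mapped to raw ACK payloads
    "msg-001": "MSH|^~\\&|IIS|...\rMSA|AA|msg-001",                             # accepted
    "msg-002": "MSH|^~\\&|IIS|...\rMSA|AE|msg-002\rERR||OBX^1|101^^HL70357|E",  # error
    "msg-003": None,                                                            # no ACK returned
}
print(successful_submissions(acks), "of", len(acks))  # 2 of 3
```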
TABLE 9: CLINICAL CARE INFORMATION EXCHANGE MEASURES (APPLIES TO CERTIFICATION CRITERIA (B)(1)[xi] AND (B)(2))

Measures
4. C-CDAs: Percentage of C-CDA documents viewed by end users (such as care team members who treat patients associated with a provider) or clinicians, broken out by incorporation of records.
  Numerator 1: Number of unique C-CDAs received using certified health IT that are viewed by end users and clinicians
  Numerator 2: Number of unique C-CDAs received where data are viewed, reconciled, and incorporated by end users and clinicians
  Denominator: Number of unique C-CDAs received using certified health IT

Reporting Elements and Format
Aggregated by product, where possible.
Measures should be reported annually for a 12-month reporting period.
January 1 - December 31 should be used as the default for a 12-month reporting period.
The measure is not limited to a specific mechanism/mode but includes types such as Carequality, CommonWell, HIE, EHR-to-EHR, vendor networks, and API-enabled exchange.
Exclude duplicate C-CDAs from the numerator and denominator. If there are duplicate C-CDAs, the measure should indicate whether at least one of the duplicates is viewed.
Define "received" as successful receipt of a unique C-CDA that was matched to the correct patient. See paragraph (b)(2)(ii) of certification criteria (b)(2).[xi]
Define "viewing" a document as having an open document displayed to a user, whether the display includes all or a subset of the data received, and regardless of whether the user scrolls through or clicks on any of the data in the document itself.
Define "incorporation" as electronically processing structured information from another source such that it is combined (in structured form) with information maintained by the health IT and is subsequently available for use within the health IT system by a user. See paragraph (b)(2)(ii) of certification criteria (b)(2).[viii]
Note: C-CDA aligns with the 2015 Edition Certification requirement for CCD, referral note, and discharge summary document templates.

Notes: Developers to submit documentation on the data sources and approaches (e.g., assumptions, information on providers or products that are included/excluded from numerators and denominators, etc.) used to report on the measure. Developers may also submit descriptive or qualitative information to provide context around each measure if desired or necessary.
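The deduplication rule in Table 9 (count each unique C-CDA once, and treat it as viewed if at least one duplicate was viewed) can be illustrated with a short sketch. The content-hash identifier and the viewed/incorporated flags are hypothetical stand-ins for however a product actually tracks received documents.

```python
# Minimal sketch of measure 4's deduplication and tally logic. Each
# received-document record carries a hypothetical content hash plus
# viewed/incorporated flags; duplicates collapse to one denominator
# entry that counts as viewed/incorporated if ANY copy was.
from collections import defaultdict

received_docs = [
    # (content_hash, viewed, incorporated) -- illustrative values only
    ("ccda-aaa", True, False),
    ("ccda-aaa", False, False),  # duplicate of the first document
    ("ccda-bbb", True, True),
    ("ccda-ccc", False, False),
]

unique = defaultdict(lambda: {"viewed": False, "incorporated": False})
for content_hash, viewed, incorporated in received_docs:
    unique[content_hash]["viewed"] |= viewed
    unique[content_hash]["incorporated"] |= incorporated

denominator = len(unique)                                # 3 unique C-CDAs
numerator_1 = sum(d["viewed"] for d in unique.values())  # viewed: 2
numerator_2 = sum(                                       # viewed and incorporated: 1
    d["viewed"] and d["incorporated"] for d in unique.values()
)
print(f"viewed: {numerator_1}/{denominator}; incorporated: {numerator_2}/{denominator}")
```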
TABLE 10: STANDARDS ADOPTION AND CERTIFICATION CONFORMANCE MEASURES
APPLIES TO CERTIFICATION CRITERIA (G)(10)xii AND (B)(10)

Measures:
5. Availability of Apps Using Certified API Technology: The availability of apps using certified API technology (170.315(g)(10)), with the following specifications
◼ Numerator 1: Number of apps registered with an EHR developer using SMART on FHIR EHR Launch
◼ Denominator 1: Total number of apps at the product level
◼ Numerator 2: Number of apps registered with an EHR developer using SMART on FHIR Standalone Launch
◼ Denominator 2: Total number of apps at the product level
6. Use of FHIR in Clinician-Facing Apps: The number and percentage of FHIR API resources used by clinician-facing apps
◼ Numerator 1: For clinician-facing endpoints, number of FHIR API calls (searches/reads) by FHIR resource type and FHIR version
◼ Denominator 1: Number of FHIR API calls aggregated across all clients for the developer
7. Use of FHIR in Patient-Facing Apps: The number and percentage of FHIR API resources used by patient-facing apps
◼ Numerator 1: For patient-facing endpoints, number of FHIR API calls (searches/reads) by FHIR resource type and FHIR version
◼ Denominator 1: Number of FHIR API calls aggregated across all clients for the developer
8. FHIR Bulk Data:
◼ Numerator 1: For bulk FHIR endpoints, number of FHIR API calls (searches/reads) by FHIR resource type and FHIR version
◼ Denominator 1: Number of FHIR API calls aggregated across all clients for the developer
9. EHI Export: A measure of full electronic health information (EHI) export as required for certification per 170.315(b)(10)
◼ Numerator 1: Number of full-data EHI export requests processed
◼ Yes/No attestation: "We enable direct-to-individual EHI exports"

Reporting Elements and Format:
◼ The reported data could be used in combination to create a range of measures that provide indications of the adoption and use of FHIR and associated insight into the relative use of Core elements.
◼ Data should be reported on a per-product basis (CHPL Product ID) for developers with certified API technology.
◼ Normalization of call frequency would be needed to control for bulk FHIR and automatic refresh calls.
◼ More than one denominator is appropriate to provide insight into (a) the relative share and frequency of individual FHIR Core profile calls amortized over the number of applications in use and (b) the relative share and frequency of individual Core profile calls as a percentage of aggregate calls being made.
◼ Other ways to stratify could be by customer base (e.g., small practices, large groups, hospitals).
◼ Require developers to report numerators and denominators, not just percentages.
◼ Measures should be reported annually for a 12-month reporting period; January 1–December 31 should be used as the default 12-month reporting period.
◼ Currently, there is no requirement to make provider-facing endpoints publicly available. However, the developer must still report the measure across all endpoints, regardless of whether they are publicly available.
◼ Measures should be mapped to all FHIR resources, not limited to USCDI.

Note: Developers are to submit documentation on the data sources and approaches (e.g., assumptions and information on providers or products that are included in or excluded from numerators and denominators) used to report on the measure. Developers may also submit descriptive or qualitative information to provide context around each measure if desired or necessary.
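Measures 6 through 8 share one shape: per-resource call counts (numerators) over aggregate call counts (the denominator), split by endpoint audience. The sketch below (Python) shows that tally under the assumption of a hypothetical log of (audience, FHIR resource type, FHIR version) records; the normalization for bulk and automatic-refresh traffic noted above would be applied on top of these raw counts.

    from collections import Counter, defaultdict

    def fhir_call_shares(log_records):
        """For each endpoint audience ('clinician', 'patient', or 'bulk'), return
        {(resource_type, fhir_version): (numerator, percent_of_audience_total)}."""
        counts = defaultdict(Counter)
        for audience, resource, version in log_records:
            counts[audience][(resource, version)] += 1
        shares = {}
        for audience, counter in counts.items():
            total = sum(counter.values())  # Denominator 1: calls aggregated across all clients
            shares[audience] = {key: (n, 100.0 * n / total) for key, n in counter.items()}
        return shares

    # Two clinician-facing calls and one patient-facing call.
    log = [("clinician", "Patient", "R4"),
           ("clinician", "Observation", "R4"),
           ("patient", "Patient", "R4")]
    print(fhir_call_shares(log)["clinician"][("Patient", "R4")])  # (1, 50.0)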
Conclusion: Issues to Consider

The developer measures for the EHR Reporting Program aim to address information gaps in the health IT marketplace and provide insights on how certified health IT is being used. The nine core measures in this report focus on interoperability, with an emphasis on patient access, public health information exchange, clinical care information exchange, and standards adoption and conformance. These measures have been revised from previously posted draft developer-reported measures to reflect feedback received from ONC, recommendations from the HITAC and public comments, and findings from feasibility testing with developers of certified health IT.

Throughout the course of this project, we identified various themes, challenges, and issues for ONC to consider as they move forward with the rulemaking process and implement the developer measures.

First, these measures will be implemented in a rapidly changing product marketplace and policy environment; moving forward, the EHR Reporting Program will need to be flexible to reflect these dynamics.

Second, there is a tradeoff between the number and complexity of measures collected and the burden and costs that fall to developers of certified health IT. Some of the costs and burdens to developers identified in this report could also be passed along to providers, for example, by necessitating new data use agreements, reporting requirements, and/or system enhancements required for data capture. Moving forward, ONC will need to consider these costs if additional measures or stratifications are added to the developer measures.

Third, the HITAC, the public, and developers all expressed concerns about potential redundancy with existing efforts such as Real World Testing and the Promoting Interoperability Programs. To reduce this potential redundancy, ONC should consider ways to align or combine parts of the EHR Reporting Program with these other efforts.

Finally, ONC should assess various data interpretation challenges. Since the EHR Reporting Program is focused on the developer measures, the final measures in this report are not intended to yield apples-to-apples comparisons across developers. Rather, these measures aim to provide insight into changes in outcomes over time for a given provider (i.e., "within provider" trends). ONC will need to take this perspective into account when interpreting the measures, and as these data become publicly available, additional context will need to be provided (e.g., a data interpretation guide) on how the measures should be interpreted. Other potential actions to address data interpretation challenges include allowing developers to provide qualitative information around each measure to facilitate interpretation, suppressing small sample sizes, and, relatedly, removing potentially sensitive information that could be used to identify customers or provide access to proprietary or competitive information.

ONC may also need to consider additional refinements to the measures to provide developers additional clarity when the program is implemented. While the measure specifications have been refined throughout the project, new issues and detailed questions may continually arise. For example, we recently provided additional clarity on the definition of administered vaccinations for the public health measures, even though this issue was never raised during the HITAC, public feedback, and feasibility testing processes.
Other refinements that ONC may consider include: changing the reporting period from 12 to 6 months, particularly for the API measures; adding measures or stratifications that were removed in response to HITAC feedback (e.g., the data quality and completeness measure, which includes race/ethnicity data) and the feasibility testing phase (e.g., stratifications by SOGI and SDOH for the patient access measure, and reporting by IIS and age group for the public health measures); providing detailed guidance to providers on how the data will ultimately be reported (e.g., sharing mock-up spreadsheets in advance to illustrate how ONC wants the data to be reported); and clarifying the technologies and methods for automated reporting where applicable. ONC may also consider adding numerators and denominators for FHIR resources transferred to measure 6 (Use of FHIR in Clinician-Facing Apps), measure 7 (Use of FHIR in Patient-Facing Apps), and measure 8 (FHIR Bulk Data). In addition, ONC may recommend stratifying measure 4 (C-CDAs) by the mechanism or mode used and by whether the C-CDA is linked to a patient via patient matching. These stratifications were identified as priority areas but were ultimately excluded because of the burden to developers and uncertainty over data quality.

Finally, the measures in this report do not reflect voluntary end users' experiences using certified health IT, as originally intended by the EHR Reporting Program. As such, these measures do not fully address two major domains identified under the Cures Act: usability and user-centered design, and privacy and security. To fill these gaps in the future, ONC could implement the previously developed voluntary user measuresiv if resources or other opportunities become available.

Appendix

TABLE 1A. KEY DOCUMENTS RELATED TO MEASUREMENT OF CERTIFIED HEALTH IT PERFORMANCE

ONC Publications/Programs
◼ ONC Strategic Plan 2020–20252
◼ ONC HITAC Meeting Minutes3
◼ ONC Interoperability Standards Task Force4
◼ Interoperability Roadmap5
◼ 2015 CEHRT requirements6
◼ United States Core Data for Interoperability (USCDI)7
◼ Proposed TEFCA Measures8
◼ ONC API Measurement Framework Report
◼ Interoperability Measurement Workshop: Current & Future Approaches to API Measurement (ONC, 9/23/20)

CMS Publications/Programs
◼ CMS Promoting Interoperability Measures9
◼ CMS MIPS Measures10

Standards Organizations
◼ Fast Healthcare Interoperability Resources (FHIR)11

Health IT Publications
◼ An Assessment of Feasibility and Exploration of Methods for the Automated Measurement of Interoperability-Standards Usage in the United States (Sujansky and Associates, 2020)
◼ National Trends in the Safety Performance of Electronic Health Record Systems From 2009 to 2018 (Clausen et al.)12
◼ Identification and Prioritization of Health IT Patient Safety Measures (National Quality Forum, February 2016)13
◼ Developing a National API Measurement Framework: Working Group Recommendations (Adler-Milstein et al., 2020)
◼ Effective Reporting Could Improve Safe Use of Electronic Health Records (The Pew Charitable Trusts, 2020)14
◼ Metrics for Assessing Physician Activity Using Electronic Health Record Log Data (Sinsky et al., 2020)15
◼ EHR Audit Logs: A New Goldmine for Health Services Research? (Adler-Milstein et al., January 2020)16
2 https://www.healthit.gov/topic/2020-2025-federal-health-it-strategic-plan
3 https://www.healthit.gov/topic/federal-advisory-committees/hitac-calendar
4 https://www.healthit.gov/hitac/committees/interoperability-standards-priorities-task-force-2018
5 https://www.healthit.gov/topic/interoperability/interoperability-roadmap
6 https://www.healthit.gov/topic/certification-ehrs/2015-edition
7 https://www.healthit.gov/isa/united-states-core-data-interoperability-uscdi
8 https://rce.sequoiaproject.org/wp-content/uploads/2020/08/Tech-Forum-Session-Structural-Process-and-Outcomes-Measures-for-Networks-Enabling-Exchange-Final-Slides.pdf
9 https://www.cms.gov/regulations-guidance/promoting-interoperability/20202021-program-requirements-medicaid
10 https://qpp.cms.gov/mips/quality-requirements
11 https://www.hl7.org/fhir/overview.html
12 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7260621/
13 https://www.qualityforum.org/publications/2016/02/identification_and_prioritization_of_hit_patient_safety_measures.aspx
14 https://www.pewtrusts.org/en/research-and-analysis/issue-briefs/2020/03/effective-reporting-could-improve-safe-use-of-electronic-health-records
15 https://academic.oup.com/jamia/article/27/4/639/5728718
16 https://pubmed.ncbi.nlm.nih.gov/31821887/

TABLE 2A. PARTICIPANTS IN MARKET RESEARCH DISCUSSIONS

Individuals:
◼ Dr. Sarah Corley, Chief Medical Officer, MITRE
◼ Dr. Jacob Reider, CEO, Alliance for Better Health
◼ Dr. David Nilasena, Chief Medical Officer, CMS
◼ Dr. Terry Cullen, Retired Rear Admiral, US Public Health Service, and former CIO with the Indian Health Service (IHS)

Organizations:
◼ OCHIN: Scott Fields, Jen Stoll, Paul Matthews
◼ Sequoia/Carequality: Dave Cassel, Mariann Yeager
◼ CRISP: David Horrocks, Marc Rabner, Adrienne Ellis
◼ NYeC: Rachel Eager, Christie Doria, Nicole Casey, Zoe Barber, Elizabeth Amato
◼ Texas Medical Association: Shannon Vogel, Dr. Joseph Schneider, Andrea Cobb

TABLE 3A. PARTICIPANTS IN SEMI-STRUCTURED INTERVIEWS
◼ NYeC: Nicole Casey, Rachel Eager, Christie Doria, Elizabeth Amato
◼ Athenahealth: Joe Ganley, Jennifer Michaels, Dan Rosen, Chad Dodd, Stephanie Zaremba, Chris Barnes
◼ NextGen: Cherie Holmes-Henry, Robert Larson, Lisa Bradshaw, Michelle Knighton, Mike Boucher
◼ Cerner: Jeff Wall, Dale Owens, Kayla Thomas, Becca Green, Dave Brumbach, Doug Pratt, Jason Mitchell, John Travis, Jessica Hall, Leslie Lindsey, Michael Warner, Hans Buitendijk, Drew Torres
◼ OCHIN: Jen Stoll, Paul Matthews, Scott Fields
◼ Epic: Michael Saito, Alya Sulaiman, Janet Campbell, Sasha TerMaat
◼ DirectTrust: Scott Stuewe
◼ Alliance for Better Health: Jacob Reider
TABLE 4A. PARTICIPANTS IN ONC SME INTERVIEWS/DISCUSSIONS
◼ Rachel Abbey and Dan Chaput (ONC)
◼ Prashila Dullabh (NORC)
◼ Will Gordon (Brigham and Women's Hospital)
◼ Brendan Keeler (Zus Health)
◼ Nicole Kemper and Anita Samarth (Clinovations)
◼ Julia Adler-Milstein (UCSF)
◼ Mark Savage (Savage Consulting)
◼ Walter Sujansky (Sujansky & Associates)

TABLE 5A. DEVELOPER-SPECIFIC ESTIMATES OF ANTICIPATED BURDEN, FINANCIAL COST, AND TIME TO IMPLEMENT MEASURES

Burden rating, 1 (low) to 10 (high):
Measure                      1A     1B     2      3            4
1: Patient access            2      –      6-7    5            3-4
2: PH (send)                 2-3    6      10     7            8
3: PH (receive)              2-3    6      10     7            8
4-1: C-CDAs viewed           2      4      6-7    5            8
4-2: C-CDAs R/I              –      –      10     10           –
5: # of apps                 2-4    2-4    2      Do not know  9
6: FHIR (clinician-facing)   5-6    5-6    2      Do not know  9-10
7: FHIR (patient-facing)     5-6    5-6    2      Do not know  9-10
8: Bulk FHIR                 5-6    5-6    2      Do not know  9-10
9: EHI export                –      –      2      Do not know  –

Cost rating, 1 (low) to 10 (high):
Measure                      1A     1B     2      3      4
1: Patient access            2      2-3    6-7    –      –
2: PH (send)                 –      –      10     –      –
3: PH (receive)              –      –      10     –      –
4-1: C-CDAs viewed           –      –      6-7    –      –
4-2: C-CDAs R/I              –      –      10     10     –
5: # of apps                 4-6    4-6    2      –      –
6: FHIR (clinician-facing)   4-6    4-6    2      –      –
7: FHIR (patient-facing)     4-6    4-6    2      –      –
8: Bulk FHIR                 2      5-6    2      –      –
9: EHI export                –      4-6    2      –      –

Time to implement measure (months):
Measure                      1A     1B     2      3      4*
1: Patient access            <12    12-24  24-36  18-24  34
2: PH (send)                 <12    12-24  36     18-24  34-40
3: PH (receive)              <12    12-24  36     18-24  34-40
4-1: C-CDAs viewed           <12    –      36     18-24  34-40
4-2: C-CDAs R/I              –      –      36     36     –
5: # of apps                 12     12-24  12     18-24  34-40
6: FHIR (clinician-facing)   12     12-24  12     18-24  34-40
7: FHIR (patient-facing)     12     12-24  12     18-24  34-40
8: Bulk FHIR                 12     12-24  12     18-24  34-40
9: EHI export                12-24  12-24  12     18-24  –

Notes: 1A, 1B, 2, 3, and 4 refer to de-identified EHR developers. Developer 1 provided separate estimates for its cloud-based (1A) and locally hosted (1B) EHR products. R/I = reconciled/incorporated. "–" indicates no estimate was provided. *For Developer 4, the timeline for all measures except measure 1 would be 18-24 months for development and 16 months for deployment; measure 1 would be 18 months for development and 16 months for deployment.

APPENDIX: LINK REFERENCE LIST
i Urban.org. "Funding Principles." https://www.urban.org/aboutus/funding-principles.
ii Department of Health and Human Services, Office of the Secretary. (2020). "21st Century Cures Act: Interoperability, Information Blocking, and the ONC Health IT Certification Program." https://www.federalregister.gov/documents/2020/05/01/2020-07419/21st-century-cures-act-interoperability-information-blocking-and-the-onc-health-it-certification.
iii HealthIT.gov. (2021). "EHR Reporting Program." https://www.healthit.gov/topic/certification-health-it/ehr-reporting-program.
iv Ramos, C., Johnston, E.M., Blavin, F., Ozanich, G., & Frye, K. (2020). "Electronic Health Record Reporting Program Voluntary User-Reported Criteria." https://www.urban.org/research/publication/electronic-health-record-reporting-program-voluntary-user-reported-criteria.
v HealthIT.gov. (2021). "Final Report of the Health Information Technology Advisory Committee's EHR Reporting Program Task Force 2021." https://www.healthit.gov/sites/default/files/page/2021-10/2021-09-09_EHRRP_TF_2021__HITAC%20Recommendations_Report_signed_508.pdf.
vi Urban.org. (2021). "Electronic Health Record (EHR) Draft Developer-Reported Public Comments." https://www.urban.org/sites/default/files/2021/10/05/electronic_health_record_ehr_draft_developer-reported_public_comments.pdf.
vii Urban.org. (2021). "Electronic Health Record Reporting Program." https://www.urban.org/sites/default/files/2021/08/11/electronic_health_record_reporting_program.pdf.
(2020) "§170.315(b)(2) Clinical information reconciliation and incorporation." https://www.healthit.gov/test-method/clinical-information-reconciliation-and-incorporation. ix HealthIT.gov. (2021) "§170.315(e)(1) View, download, and transmit to 3rd party." https://www.healthit.gov/test-method/view-download-and-transmit-3rd-party. x HealthIT.gov. (2020) "§170.315(f)(1) Transmission to immunization registries." https://www.healthit.gov/test-method/transmission-immunization-registries. xi HealthIT.gov. (2021) "§170.315(b)(1) Transitions of care." https://www.healthit.gov/test- method/transitions-care. xii HealthIT.gov. (2021) "§170.315(g)(10) Standardized API for patient and population services." https://www.healthit.gov/test-method/standardized-api-patient-and-population-services. xiii Urban.org. https://www.urban.org/policy-centers/health-policy-center/projects/ehr-reporting-program. APPENDIX 35 STATEMENT OF INDEPENDENCE The Urban Institute strives to meet the highest standards of integrity and quality in its research and analyses and in the evidence-based policy recommendations offered by its researchers and experts. We believe that operating consistent with the values of independence, rigor, and transparency is essential to maintaining those standards. As an organization, the Urban Institute does not take positions on issues, but it does empower and support its experts in sharing their own evidence-based views and policy recommendations that have been shaped by scholarship. Funders do not determine our research findings or the insights and recommendations of our experts. Urban scholars and experts are expected to be objective and follow the evidence wherever it may lead. STATEMENT OF INDEPENDENCE 36 500 L'Enfant Plaza SW Washington, DC 20024 www.urban.org STATEMENT OF INDEPENDENCE 37