
Benchmarking In Healthcare Essay


Benchmark Assignment

Grand Canyon University
Spirituality in Health Care
HLT-310V

Benchmark Assignment
Healthcare is forever changing, testing professionals to provide excellent care to the communities they serve. Seeing hospitals as healing environments rather than as customary places for curing illness is an example of a present paradigm shift in health care. Viewing hospitals as healing environments instead of curing environments can change the way many moral issues and current working conditions are approached, perceived, and managed. Each member of a healing environment has a chance to heal and a responsibility to promote healing through their words, actions, and attitudes. This paper will discuss the three…

This strong culture of compassionate care serves as a reminder to health care professionals of why they chose this profession. This holistic methodology stimulates physical healing and creates a framework in which spiritual needs are also met.
Challenges of Creating a Healing Environment
Mixing spirituality with health care is not accepted entirely without confrontation. The need to provide a healing environment is well understood; however, standard practice often takes an empirical approach focused on curing the disease rather than on the patient’s individual needs, and this increases moral distress for the healthcare provider as well as for the patient and family. Spirituality plays an important role in the process of coping with stress, trauma, and illness; nonetheless, it is a topic commonly avoided for fear of discrimination (Ashcraft, Anthony, & Mancuso, 2010). In order for nurses to serve as instruments of healing, they have to be able to meet patients’ spiritual needs as well as their physical ones. This can be a challenge for many healthcare providers because they may experience burnout in their work environment, causing them to alienate patients. Another barrier for nurses is a lack of knowledge about spirituality in relation to patients’ needs.
Biblical Aspects
In James 5:14-15 (King James Version), James asks whether any among the faithful are sick and, if so, instructs them to call for the elders of the church to pray over them, anointing them with oil in the name of the Lord, so that the prayer of faith shall save the sick.


Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

Pietro Giorgio Lovaglio

CRISP and Department of Quantitative Methods, University of Milano-Bicocca, V. Sarca 202, 20146 Milan, Italy

Academic Editors: V. Brusic, W. D. Evans, M. Fanucchi, and A. S. Levin

Copyright © 2012 Pietro Giorgio Lovaglio. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient’s state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. After introducing readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, the paper focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix and for analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risks of risk adjustment (case-mix fallacy, underreporting, the risk of comparing noncomparable hospitals) and selection bias, are discussed, together with possible strategies for the development of consistent benchmarking analyses. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is presented.

1. Introduction

Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare. Accreditation and certification procedures have acted as stimulating mechanisms for the discovery of skills and technology specifically designed to improve performance. Total Quality Management (TQM) and Continuous Quality Improvement (CQI) are the most widespread and recent approaches to implementing and improving healthcare quality control [1].

Besides offering accreditation and certification processes, recent approaches measure the performance of health structures in order to evaluate National Health Systems. For example, various international Agencies [2–4] measure the performance of health structures in different countries, considering three main dimensions: effectiveness, efficiency, and customer satisfaction.

In this perspective, performance measurement for healthcare providers, structures, or organizations (from here, hospitals) is becoming increasingly important for the improvement of healthcare quality.

However, the debate over which types of performance indicator are the most useful for monitoring healthcare quality remains a question of international concern [5].

In a classic formulation, Donabedian [6] asserted that quality of care includes (i) structure (characteristics of the resources in the healthcare system, including the organization and system of care, accessibility of services, licensure, physical attributes, safety, and policies and procedures, viewed as the capacity to provide high quality care), (ii) process (measures related to evaluating the process of care, including the management of disease, the existence of preventive care such as screening for disease, accuracy of diagnosis, appropriateness of therapy, complications, and interpersonal aspects of care, such as service, timeliness, and coordination of care across settings and professional disciplines), and (iii) clinical outcomes.

A clinical outcome is defined as the “technical result of a diagnostic procedure or specific treatment episode” [7], or as the “result, often long term, on the state of patient well-being, generated by the delivery of a health service” [8].

Specifically, ongoing attention has been placed on the importance of combining structural aspects (such as governance and the healthcare workforce) with measures of outcomes to assess the quality of care [6, 9]. This consideration was taken into account by the Institute of Medicine, which, in 1990, stated that “quality of care is the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” [10].

This definition has been widely accepted and has proven to be a robust and useful reference in the formulation of practical approaches to quality assessment and improvement, emphasizing that the process of care increases the probability of desirable outcomes for patients while reducing the probability of undesired outcomes.

This paper deals with hospital effectiveness, defined as the capacity of hospitals to provide treatment that modifies and improves the patient’s state of health. Of particular importance in this perspective is the concept of “relative effectiveness,” that is, the effectiveness of each specific hospital in modifying the patient’s state of health within a strategy comparing different healthcare institutions; in short, effectiveness evaluation in a benchmarking framework [6].

Benchmarking in healthcare is defined as the continual and collaborative discipline of measuring and comparing the results of key work processes with those of the best performers in evaluating organizational performance [11].

Two types of benchmarking can be used to evaluate patient safety and quality performance. Internal benchmarking is used to identify best practices within an organization, to compare best practices within the organization, and to compare current practice over time. Competitive or external benchmarking involves using comparative data between organizations to judge performance and identify improvements that have proven to be successful in other organizations.
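
To make the distinction concrete, the following sketch (not from the paper; all figures and names are hypothetical) contrasts the two uses of a toy quality indicator in Python: an internal comparison of current practice over time and an external comparison against peer organizations.

```python
# Hypothetical illustration of internal vs. external benchmarking
# on a toy infection-rate indicator (all numbers invented).

# Internal benchmarking: compare current practice with the
# organization's own history.
own_quarterly_rate = {"Q1": 0.042, "Q2": 0.038, "Q3": 0.035, "Q4": 0.031}
print(f"internal trend: {own_quarterly_rate['Q1']:.1%} -> "
      f"{own_quarterly_rate['Q4']:.1%}")

# External (competitive) benchmarking: compare with peer organizations
# and identify the best performer to learn from.
peer_rates = {"Hospital A": 0.031, "Hospital B": 0.024, "Hospital C": 0.045}
best = min(peer_rates, key=peer_rates.get)
print(f"external benchmark: {best} at {peer_rates[best]:.1%}")
```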

Our aim is to discuss the statistical aspects and possible strategies for the development of hospital benchmarking systems.

The paper is structured as follows: the next section introduces readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used. Section 3 presents statistical methods, while Section 4 explores the methodological problems related to performing consistent benchmarking analyses. Section 5 describes an application based on patient satisfaction that demonstrates the feasibility of the illustrated benchmarking strategies. Section 6 offers conclusions.

2. Perspective and Type of Indicators

The conceptual definition and assessment of “effectiveness” rests on a conceptual and operational definition of “quality of care”, which is an exceptionally difficult notion to define.

An important contextual issue is the purpose for which a performance indicator is to be used and by whom.

Performance indicators can be used for various objectives: to gain information for policy making or strategy development at a regional or national level, to improve the quality of care of a hospital, to monitor the performance of healthcare, to identify poor performers in order to protect public safety, and to provide information to consumers to facilitate the choice of hospital.

In general, the broader the perspective required, the greater the relevance of outcome measures, as they reflect the interplay of a wide variety of factors, some directly related to healthcare, others not. Because outcome measures are an indicator of health, they are valid as performance indicators insofar as the quality of health services has an impact on health. As the perspective narrows, to hospitals, to specialties, or indeed to individual doctors, outcome measures become relatively less indicative and process measures relatively more useful.

Process measures have two important advantages over outcome measures. First, if differences in outcome are observed, alternative explanations need to be considered before one can conclude that the difference reflects true variations in the quality of care; in contrast, a process measure lends itself to a straightforward interpretation (e.g., the more people without contraindications who receive a specific treatment, the better). Second, the necessary remedial action is clearer (use the treatment more often), whereas for an outcome measure (e.g., a higher mortality rate) it is not immediately obvious what action needs to be taken.

Despite these limitations, outcome measures have a role in the monitoring of the quality of healthcare that is important per se. To know that death rates from a specific diagnosis vary across hospitals is an essential finding, even if the reasons for the differences cannot be explained through the quality of care. Further, outcome measurement reflects all aspects of the processes of care, even though only a subset of those aspects is measurable or measured (e.g., technical expertise and medical skill). Such aspects are likely to be important determinants of outcome in some situations, and outcomes describe not only whether a correct procedure was performed but also the results for the patients.

Another possible reason why outcome indicators are often used in some countries is that the available data come from routine information systems (administrative archives) which regularly record clinical aspects and other dimensions useful for case-mix adjustment.

In the Italian context, at the patient level, the Hospital Discharge Card (HDC) is the only available administrative archive in the health sector. The HDC, introduced in Lombardy in 1975 together with the Diagnosis Related Group (DRG) reimbursement system, collects clinical information about patient discharges.

In this perspective, the debate on the use of clinical administrative data to furnish useful information on quality assessment remains open.

Many authors have criticized the use of clinical outcomes in the evaluation of the quality of care and, particularly, mortality rates [12, 13]. According to Vincent and colleagues [14], administrative data does not provide a suitably transparent perspective on quality or improvement. Others suggest that its limited clinical content may compromise its utility for this purpose, posing serious caveats against drawing definitive conclusions [15, 16].

Despite such concerns, major consensus exists on the use of clinical outcomes from administrative data as a useful screening tool for identifying quality problems and targeting areas in which quality should be investigated in greater depth [4, 16, 17]. Excluding mortality, various clinical outcomes that could indicate malpractice are widely accepted by private or public agencies [1–3, 18, 19] that evaluate national health sectors, for example, unscheduled surgical returns to the operating room within 48 hours, discharges against medical advice, death in low-mortality DRGs, or failure to rescue (deaths among patients developing specified complications during hospitalization).

2.1. Outcome Variability

In order to consider the methodological problems that may limit benchmarking strategies, it is necessary to explore the possible causes of variation in an outcome. Four major categories of explanation need to be considered. The first is whether observed differences might be due to differences in the type of patient cared for in the different hospitals (e.g., age, gender, comorbidity, severity of disease).

The importance of this cause of variation is illustrated by studies where differences in crude outcome disappear when the outcomes are adjusted to take account of these confounding factors. To this end, researchers propose risk-adjustment methodologies as proper methods of equitable comparisons for evaluating quality and effectiveness of hospitals [12, 15, 20].
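
As a concrete illustration of this logic, the sketch below performs a simple indirect standardization in Python: a patient-level risk model is fitted on pooled data, expected events are accumulated per hospital, and observed-to-expected (O/E) ratios are compared. This is a simplified stand-in for the risk-adjustment methodologies cited above, not the paper's own method; the data, column names, and coefficients are all hypothetical.

```python
# Minimal indirect-standardization sketch (hypothetical data): O/E ratios
# near 1 suggest crude differences are explained by case-mix alone.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "hospital": rng.integers(0, 10, n),     # 10 hospitals
    "age": rng.normal(65, 12, n),
    "comorbidity": rng.integers(0, 4, n),
})
# Simulate deaths whose risk depends on case-mix only (no quality effect).
true_logit = -6 + 0.05 * df["age"] + 0.5 * df["comorbidity"]
df["death"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

# Patient-level risk model fitted on the pooled data (hospital ignored).
risk = LogisticRegression().fit(df[["age", "comorbidity"]], df["death"])
df["expected"] = risk.predict_proba(df[["age", "comorbidity"]])[:, 1]

# Observed and expected events per hospital, and their ratio.
agg = df.groupby("hospital")[["death", "expected"]].sum()
print((agg["death"] / agg["expected"]).round(2))  # all close to 1 here
```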

A second cause of variation in an outcome (or in its risk-adjusted version) is differences in the way data is collected. Apparent differences in outcome will arise when the events of interest (e.g., deaths) are measured differently, when different inclusion criteria determine the population at risk (typically the denominator of an event rate), or when different case-mix data is used to adjust for potential confounding.

Thirdly, observed differences may be due to chance. Random variation is influenced both by number of cases included and by the frequency with which the outcome occurs. To this end, a fundamental issue is whether the outcome indicator is likely to have the statistical power to detect differences in quality. Statistical power depends upon how common the occurrence of the outcome is. For some rare events, the limited number of patients experiencing the events limits the power of the study [21].
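
The point about rare events can be made concrete with a back-of-the-envelope power calculation for comparing two proportions under the standard normal approximation. The rates and sample sizes below are hypothetical, and scipy is assumed to be available.

```python
# Power to detect a doubling of a rare adverse-event rate (hypothetical
# figures), using the normal approximation for two proportions.
from math import sqrt
from scipy.stats import norm

p0, p1, n, alpha = 0.005, 0.010, 1000, 0.05   # benchmark vs. doubled rate
pbar = (p0 + p1) / 2
se0 = sqrt(2 * pbar * (1 - pbar) / n)             # SE under H0 (pooled)
se1 = sqrt((p0 * (1 - p0) + p1 * (1 - p1)) / n)   # SE under H1
z = norm.ppf(1 - alpha / 2)
power = 1 - norm.cdf((z * se0 - (p1 - p0)) / se1)
print(f"power with n={n} per group: {power:.0%}")  # roughly 25%: far too low
```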

Finally, differences in outcome may reflect real, although unobservable, differences in quality of care. This may be due to variations in different measurable or less measurable aspects such as the interventions performed or the skill of the medical team.

Hence, since there are several possible causes of variation in an outcome, the conclusion that a variation in outcome is due to a difference in quality of care among hospitals is essentially a diagnosis by exclusion: if the variation cannot be explained in terms of the previous components (case-mix, data collection, chance), then hospital quality of care (relative effectiveness) becomes a possible explanation.

3. Statistical Methods

As described above, if one cannot explain the variation in terms of differences in the type of patient, in how data is collected, or in terms of chance, then quality of care becomes a possible explanation. Following the perspective that variations in outcome are attributed to differences in quality of care only as a diagnosis by exclusion, institutional agencies gather large data sets from administrative archives and apply risk adjustment in order to validate quality indicators and to benchmark hospitals.

Administrative archives are less prone to problems related to how the data is collected, and they reduce the possibility that differences in outcome are due to chance (although this risk increases when analyzing rare outcomes). Such databases usually cover the entire population of hospitalizations, enhancing their statistical power to detect important differences in outcomes.

Therefore, the last exclusion criterion invokes a consistent statistical model allowing comparisons between hospitals, in order to estimate relative effectiveness [22]. To this end, statistical methods for risk adjustment identify and adjust for variations in patient outcomes stemming from differences in patient characteristics (or risk factors) across hospitals and, therefore, allow fair and accurate interhospital comparisons.

However, the kind of adjustment required for assessing effectiveness is not the same for the various subjects interested in the results. In this regard, it is useful to distinguish between two types of effectiveness. In fact, potential patients (users) and institutional stakeholders (agents) are interested in different types of hospital effectiveness.

Following the approach of Raudenbush and Willms [23], in a comparative setting, relative effectiveness is usually assessed through a measure of performance adjusted for the factors outside the control of the hospital, so the difference between the two types of effectiveness simply lies in the kind of adjustment. The authors identify Type A and Type B relative effectiveness. Type A effectiveness concerns users interested in comparing the results they can obtain by enrolling in different hospitals, irrespective of the way such results are produced; the performance of the hospital adjusted for the features of its users is evaluated. Type B effectiveness concerns stakeholders interested in assessing the “production process” in order to evaluate the ability of hospitals to exploit the available resources; in this case, the performance of the hospital is adjusted for the features of its users, the features of the hospital itself, and the context in which it operates.

In the nineties, numerous authors proposed to estimate the concept of “relative effectiveness” by means of multilevel or hierarchical models [24, 25]. In fact, when the behaviour of individuals within organizations is studied, the data have a nested structure. Individuals/patients constitute the sampling units at the first and lowest level of the nested hierarchy. Organizations/hospitals constitute the sampling units at the second level.

Several recent statistical papers deal with risk-adjusted comparisons, related to the mortality or morbidity outcomes, by means of Multilevel models, in order to take into account different case-mixes of patients (for a review, see Goldstein and Leyland [26] and Rice and Leyland [27]).

One of the most attractive features of multilevel models is the production of useful results in healthcare effectiveness by linking individual (patient) and organizational (hospital) characteristics (covariates). Multilevel models overcome small sample problems by appropriately pooling information across organizations, introducing some correction or shrinkage, and providing a statistical framework that quantifies and explains variability in outcomes through the investigation of patient/hospital level covariates [27].
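
As a minimal sketch of how such a model can be fitted in practice, the following Python fragment simulates patients nested in hospitals and fits a random-intercept logistic model with statsmodels' Bayesian mixed GLM (variational Bayes). The data and variable names are invented; equivalent fits could be obtained with, for example, lme4's glmer in R.

```python
# Random-intercept logistic multilevel model on simulated nested data.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n_hosp, n_per = 30, 200
hospital = np.repeat(np.arange(n_hosp), n_per)
u = rng.normal(0.0, 0.5, n_hosp)               # true hospital effects
los = rng.normal(7.0, 3.0, n_hosp * n_per)     # length of stay (patient level)
lin = -3.0 + 0.05 * los + u[hospital]
y = (rng.random(n_hosp * n_per) < 1 / (1 + np.exp(-lin))).astype(int)
df = pd.DataFrame({"y": y, "los": los, "hospital": hospital})

# 'los' enters as a fixed patient-level slope; only the intercept varies
# across hospitals (a variance component model, as discussed below).
model = BinomialBayesMixedGLM.from_formula(
    "y ~ los", {"hospital": "0 + C(hospital)"}, df)
result = model.fit_vb()
print(result.summary())
```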

Quality indicators are typically calculated and disseminated at the hospital level, dividing the number of events (in-hospital deaths or adverse events, i.e., clinical errors resulting in disability, death, or prolonged hospital stay) by the number of discharged patients at risk.
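
In code, such a crude indicator is just a grouped ratio, as in the toy pandas example below (names and numbers hypothetical); the instability of the rate for small denominators is one motivation for the shrinkage that multilevel models provide.

```python
# Crude adverse-event rate per hospital: events / discharges at risk.
import pandas as pd

discharges = pd.DataFrame({
    "hospital":      ["A", "A", "A", "B", "B", "C"],
    "adverse_event": [0,   1,   0,   0,   0,   1],
})
rate = discharges.groupby("hospital")["adverse_event"].mean()
print(rate)  # C's rate of 1.00 rests on a single discharge
```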

However, at the patient/individual level, the event of interest is typically a dichotomous variable and the Multilevel model version for this kind of outcome is the Logistic Multilevel Model (LMM, [25]).

For patient $i$ nested in hospital $j$, let $\pi_{ij}$ be the probability of occurrence of a dichotomous adverse event $y_{ij}$, where $y_{ij}$ is Bernoulli distributed with expected value $\pi_{ij}$. Instead of $\pi_{ij}$, the LMM specifies as dependent outcome its logistic transformation $\operatorname{logit}(\pi_{ij}) = \log[\pi_{ij}/(1-\pi_{ij})]$ as a function of possible covariates, where $\log$ is the logarithmic transformation and the ratio $\pi_{ij}/(1-\pi_{ij})$ of the probability that the adverse event occurs to the probability that it does not is called the odds of the modelled event.

The LMM without patient and hospital covariates (intercept-only LMM) assumes that $\pi_{ij}$ depends only on the particular hospital charging patient $i$, specified by a nominal variable designating the $j$th hospital; the hospital effect is assumed to be random, meaning that hospitals are assumed randomly sampled from a large population of hospitals. Equations (1) and (2) define the intercept-only LMM:

$$\operatorname{logit}(\pi_{ij}) = \beta_{0j}, \quad (1)$$
$$\beta_{0j} = \gamma_{00} + u_{0j}, \qquad u_{0j} \sim N(0, \tau_0^2), \quad (2)$$

where $\beta_{0j}$ is the intercept (effect) for the $j$th hospital, which can be decomposed into $\gamma_{00}$, representing the average probability of adverse events (in the logit metric) across hospitals, and a specific effect $u_{0j}$ capturing the difference between the probability of adverse event for hospital $j$ and the average probability of adverse event across hospitals. These random effects are assumed to be independent and normally distributed with zero mean and variance $\tau_0^2$, which describes the variability of hospitals’ effects. The intercept-only model constitutes a benchmark value of the degree of misfit of the model and can be used to compare models involving different covariates at different levels. Further, this model allows decomposing the total variance of the outcome into different variance components for each hierarchical level. Specifically, the Intraclass Correlation Coefficient (ICC), defined as the ratio between the variability among hospitals ($\tau_0^2$) and the total variability ($\tau_0^2$ plus the variability among patients within hospitals, $\sigma_e^2$), captures the proportion of total variability of a given risk factor that is due to systematic variation between hospitals. Nevertheless, in the case of a dichotomous outcome $y_{ij}$, the usual first-level residuals $e_{ij}$, and hence their variance $\sigma_e^2$, are not in the model (1). This occurs since the outcome variance, being part of the specification of the error distribution, depends on the mean and thus does not have to be estimated separately.

However, approximating the variability of the first level with the variance of the standard logistic distribution ($\pi^2/3 \approx 3.29$) and summing this variance with the variability of the second level ($\tau_0^2$) allows separating the total variance into two components, giving, for the intercept-only model, $\mathrm{ICC} = \tau_0^2/(\tau_0^2 + \pi^2/3)$. This measure is used to assess the percentage of outcome heterogeneity existing between the hospitals involved in the analysis.
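
For instance, with a hypothetical between-hospital variance of 0.25, the computation works out as follows:

```python
# ICC for a logistic multilevel model: the level-1 variance is fixed at
# pi^2/3, the variance of the standard logistic distribution (~3.29).
import math

tau2 = 0.25                          # hypothetical between-hospital variance
icc = tau2 / (tau2 + math.pi ** 2 / 3)
print(f"ICC = {icc:.3f}")            # ~0.071: about 7% of outcome variability
                                     # lies between hospitals
```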

As the second step, the probability (in the logit metric) of an adverse event occurrence for patients can be a function of patients’ characteristics (case-mix), in addition to the hospital effect. Hence, (1) can be extended assuming that $\pi_{ij}$ depends on patient covariates $x_{1ij}, \ldots, x_{Pij}$:

$$\operatorname{logit}(\pi_{ij}) = \beta_{0j} + \sum_{p=1}^{P} \beta_{pj} x_{pij}, \quad (3)$$
$$\beta_{0j} = \gamma_{00} + u_{0j}, \quad (4)$$
$$\beta_{pj} = \gamma_{p0} + u_{pj}, \quad (5)$$

where $\beta_{pj}$ is the slope (regression coefficient) of the $p$th person characteristic in hospital $j$, which is allowed to randomly vary across hospitals (e.g., the effect of length of stay on adverse event occurrence varies among hospitals). In the formulation (4), the specific effect for the $j$th hospital on the outcome ($u_{0j}$) is adjusted for the effects of the person-level characteristics ($x_{pij}$). In (5), $\gamma_{p0}$ represents the average slope across hospitals and $u_{pj}$ the specific deviation of hospital $j$ from the average slope (random effect). However, in effectiveness analyses, slope parameters ($\beta_{pj}$) are assumed to be fixed (putting $u_{pj} = 0$ in (5) for $p = 1, \ldots, P$), whereas only the intercept is allowed to randomly vary across hospitals. Such models, in which the regression slopes are assumed fixed, are denoted as variance component models.

In the model composed of (3)-(4) and (5) with $u_{pj} = 0$, the $u_{0j}$ reflects the relative effectiveness of the $j$th hospital, net only of individual case-mix characteristics, and thus potentially depending on different hospital characteristics (Type A effectiveness).

For Type B effectiveness, one can move to the next step, accounting for variation in intercept parameters across hospitals by adding hospital variables $z_{1j}, \ldots, z_{Qj}$ to the level 2 equations. Hence, (4)-(5) become

$$\beta_{0j} = \gamma_{00} + \sum_{q=1}^{Q} \gamma_{0q} z_{qj} + u_{0j}, \quad (6)$$
$$\beta_{pj} = \gamma_{p0} + \sum_{q=1}^{Q} \gamma_{pq} z_{qj}, \quad (7)$$

in which the slope parameters ($\beta_{pj}$) referring to (3) are specified as nonrandom across hospitals, but possibly varying depending on the characteristics $z_{qj}$ of hospital $j$.

Methodologically, this step is justified when, in the model (3)-(4), the intercepts $\beta_{0j}$ vary significantly across hospitals (as can be investigated through the associated residual ICC) once the patients’ characteristics are controlled for.

The compact form of (3)-(6)-(7) is

$$\operatorname{logit}(\pi_{ij}) = \gamma_{00} + \sum_{p=1}^{P} \gamma_{p0} x_{pij} + \sum_{q=1}^{Q} \gamma_{0q} z_{qj} + \sum_{p=1}^{P} \sum_{q=1}^{Q} \gamma_{pq} x_{pij} z_{qj} + u_{0j}, \quad (8)$$

where the double sum in (8) captures possible cross-level interactions between covariates at different levels (e.g., $\gamma_{pq} x_{pij} z_{qj}$ exhibits that, for hospital $j$, the effect of length of stay ($x_{pij}$) on adverse event occurrence may depend on the specialisation level ($z_{qj}$) of the hospital).
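
Continuing the fitting sketch given earlier in this section, model (8) corresponds to adding hospital-level terms and a cross-level interaction to the formula; the 'teaching' indicator below is a hypothetical hospital characteristic, not one used in the paper.

```python
# Type B specification: patient covariate (los), hospital covariate
# (teaching), and their cross-level interaction, per (8). Reuses df, rng,
# n_hosp, and BinomialBayesMixedGLM from the earlier sketch.
teaching = rng.integers(0, 2, n_hosp)          # hypothetical hospital trait
df["teaching"] = teaching[df["hospital"]]

model_b = BinomialBayesMixedGLM.from_formula(
    "y ~ los + teaching + los:teaching",       # x, z, and x*z terms of (8)
    {"hospital": "0 + C(hospital)"}, df)
result_b = model_b.fit_vb()
print(result_b.summary())
```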

In model (8), the parameters $u_{0j}$, called level 2 residuals, specify the relative effectiveness of the hospital (Type B effectiveness): they show the specific “managerial” contribution of the $j$th hospital to the risk of adverse events, net of the overall risk ($\gamma_{00}$), the individual case-mix ($\sum_{p} \gamma_{p0} x_{pij}$), the structural/process characteristics of the hospitals ($\sum_{q} \gamma_{0q} z_{qj}$), and their interactions ($\sum_{p}\sum_{q} \gamma_{pq} x_{pij} z_{qj}$). To make this interpretation clear, (8) can be rewritten by isolating the $u_{0j}$ in the right term of expression (8):

$$u_{0j} = \operatorname{logit}(\pi_{ij}) - \Big(\gamma_{00} + \sum_{p=1}^{P} \gamma_{p0} x_{pij} + \sum_{q=1}^{Q} \gamma_{0q} z_{qj} + \sum_{p=1}^{P}\sum_{q=1}^{Q} \gamma_{pq} x_{pij} z_{qj}\Big).$$

The effectiveness parameter is thus the hospital’s unexplained deviation of the actual outcome ($\operatorname{logit}(\pi_{ij})$) from the expected outcome.
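
In the sketch above, estimates of the $u_{0j}$ can then be extracted and used to rank hospitals; the snippet below assumes statsmodels' random_effects() accessor, which reports posterior means and standard deviations of the fitted random effects.

```python
# Level-2 residuals u_0j as Type B effectiveness: negative values indicate a
# lower-than-expected adverse-event risk, i.e., better relative effectiveness.
re = result_b.random_effects()        # posterior mean and SD per hospital
ranking = re["Mean"].sort_values()
print(ranking.head())                 # hospitals with the most favorable u_0j
```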
