The second difference is the comparison of scores from a group of control hospitals over the same time frame. Finally, we used the second difference from the control group to remove the portion of the first score difference that was not attributable to GBR. This allowed us to estimate the treatment effect within the treatment group. More precisely, GBR adoption was considered the treatment, the hospitals implementing GBR constituted the treatment group, and hospitals not implementing GBR but otherwise similar served as the control group. This allowed us to identify the treatment effect due to GBR as opposed to Medicaid expansion or other industry-wide trends. The treatment group comprised all Maryland hospitals that adopted GBR on January 1, 2014, but did not participate in the TPR program. According to the Annual Report on Selected Maryland General and Special Hospital Services Fiscal Year 2016,10 Maryland has 46 EDs located in general hospitals. Of those 46 hospitals, 10 rural hospitals have participated in the TPR program since July 2010 and were, therefore, excluded from the analysis. The control group includes hospitals from WV, RI, and DE. These three states adopted the original Medicaid expansion on January 1, 2014, at the same time as Maryland, but did not implement the GBR or TPR programs. The main reason we chose these three states as our control group is that Medicaid expansion might have caused, or been accompanied by, unmeasurable changes in patient behavior. For example, people who were newly eligible for Medicaid after the expansion would have had different strategies for choosing healthcare providers.
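To make this difference-in-differences logic concrete, the sketch below is a minimal illustration rather than the study's estimation code; the file name, column names, and panel layout are assumptions for the example.

```python
import pandas as pd

# Hypothetical hospital-period panel (one row per hospital per period).
# 'treated' = 1 for Maryland GBR hospitals, 0 for WV/RI/DE control hospitals
# 'post'    = 1 for periods on or after January 1, 2014 (GBR adoption)
# 'ed1b'    = ED1b score: median minutes admitted patients spend in the ED
panel = pd.read_csv("ed1b_panel.csv")

means = panel.groupby(["treated", "post"])["ed1b"].mean()

# First difference: change in the treatment-group scores after GBR adoption.
first_diff = means.loc[(1, 1)] - means.loc[(1, 0)]
# Second difference: change in the control-group scores over the same period.
second_diff = means.loc[(0, 1)] - means.loc[(0, 0)]

# Difference-in-differences: the part of the treatment group's change
# not explained by trends shared with the control states.
did_estimate = first_diff - second_diff
print(did_estimate)
```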
We assumed that people from the four states exhibited similar patterns in their reactions to Medicaid expansion. Section A2 of the online appendix provides the logic behind the selection of the control group. We formatted the final dataset as an unbalanced panel and implemented a mixed-effects linear regression model with a state-level fixed effect, a hospital-level random effect, and state-level heterogeneity to investigate the impact of GBR implementation on the ED1b scores of Maryland hospitals (an illustrative sketch of this model structure follows below). The variables considered in our model are listed in Table 2.

At the patient level, GBR implementation correlates with longer ED LOS for patients being admitted to the hospital. We believe this implies that GBR has fundamentally changed the way emergency physicians and hospital staff approach the hospitalization decision. The Evaluation of the Maryland All Payer Model Second Annual Report, funded by CMS in 2017, emphasized that GBR targeted both healthcare cost and quality.9 The model has encouraged more workup and interface with case managers in the ED; the objective is to ensure patient safety and high-quality care in the community in lieu of admission for appropriate patients. These changes were likely contributing factors to the increase in the total time span for the care of an ED patient. Future work includes a study of whether and how Maryland hospital EDs adopted new strategies or modified their procedures for healthcare service delivery in response to the implementation of GBR. It remains to be seen whether the changes in Maryland hospital EDs had, or will have, a substantial impact on Maryland’s healthcare system.
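The sketch referenced above appears here. It only illustrates a model with state-level fixed effects, a hospital-level random intercept, and a GBR indicator; statsmodels stands in for the software actually used, and the variable and file names are assumptions, not the authors' specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical unbalanced hospital-period panel, as in the earlier sketch.
# 'gbr_in_effect' = 1 for Maryland GBR hospitals in periods after adoption.
panel = pd.read_csv("ed1b_panel.csv")

# Fixed effects: state indicators plus the GBR-in-effect indicator;
# random effect: a hospital-level random intercept (groups=hospital_id).
model = smf.mixedlm(
    "ed1b ~ C(state) + gbr_in_effect",
    data=panel,
    groups=panel["hospital_id"],
)
result = model.fit()
print(result.summary())
```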
We found significant differences among the three Medicaid expansion states to which Maryland was compared. WV and RI had significantly shorter ED1b scores for admitted patients than Maryland; Delaware’s score was slightly longer. After a sensitivity analysis using three alternative control groups, we found that the difference between Maryland’s ED1b and those of the different control groups remained significant. GBR, a state policy, is correlated with longer LOS for admitted patients. In our study, the state-level fixed effect is significant. Nevertheless, there may well be unidentified confounders that influenced our results. According to Benjamin C. Sun, professor of emergency medicine at Oregon Health and Science University in Portland, “It’s not really fair to compare, say, a public teaching hospital in the middle of New York City that sees 120,000 patients with one that is in a rural area that sees 5,000 patients.” Similarly, it may not be fair to simply compare ED scores across states. Our comparison across states assumes similar demographics and disease burdens, both of which could affect hospital utilization. We also assume similar admission practices across states. More particularly, we assume that changes in the Maryland inpatient census, other than those driven by the implementation of Medicaid expansion and GBR, are controlled for by our control group. In February 2017, a news report stated that “Maryland ER wait times are the worst in the nation,” a conclusion derived by simply comparing the ED scores published by CMS Hospital Compare. Viewed in this light, interpreting the significant state-level fixed effect obtained in our study, without clarifying factors that may be unique or particular to each state, might confuse rather than clarify perceptions of hospital ED performance.
We acknowledge several other limitations in our study. The GBR policy was adopted on January 1, 2014, 10 days after Maryland began Medicaid expansion. The control group hospitals then had to come from neighboring states that also implemented traditional Medicaid expansion at the same time, thus limiting our control group to WV, RI, and DE. Of these states, RI and DE have few hospitals. Another limitation was incomplete report data. Overall, the reporting rate of the control group is 75%: according to KFF Total Hospital Reports,31 there should be 290 Hospital Compare data reports from CMS, but we found only 218 complete reports. It is possible that the missing data had some impact on our results. Another limitation is the possibility that unmeasured confounding factors affected ED LOS. Factors such as hospital closures, demographics, or shifts in access to care could have affected our results. To eliminate the effect of those possible confounders, the ideal measure would be the volume of each hospital’s ED visits. CMS started to collect volume data on January 22, 2015; however, some states in our study only started to report this measure on November 10, 2016. We therefore selected features other than volume data and note that we might not have been able to eliminate all such effects. We were also limited in our choice of performance measure to ED1b, which reflects the total time inpatients spend in the ED. Ideally, our study would examine both ED1b and the corresponding outpatient measure, OP18. However, CMS only maintains Maryland State OP18 reports going back to January 1, 2014. As there are no data for the pre-treatment period, we cannot study the impact of GBR on the OP18 measure. Our design assumed that residents of the four geographically close states shared similar reaction patterns to the Medicaid expansion. From an aggregate point of view at the hospital level, we then assumed that our control group could rule out the impact of Medicaid expansion on Maryland ED LOS. It is possible that not every hospital was affected by Medicaid expansion to the same degree, which might have affected the estimates. Also, our secondary finding, the significant difference in time spent in EDs across the four states, should be further investigated by analyzing data from the Nationwide Emergency Department Database.

Breaking bad news is considered to be one of the most important, stressful, and challenging responsibilities of a physician. Trainees and experienced physicians alike report being uncomfortable with this task, notably due to a lack of prior training. For patients, the acknowledgment of this information and their comprehension and perception of it are of paramount importance to facilitate their psychological adjustment and a long-term quality relationship with medical caregivers. The BBN process has changed drastically over the past decades, moving from a paternalistic medical approach to one of greater patient empowerment, which acknowledges patients’ need for information and results in greater awareness and a clearer understanding of their diagnosis and prognosis. Patients prefer to receive individualized, comprehensive information communicated with warmth and honesty. Patient and family expectations regarding the exact content of news have been shown to be highly variable, making it difficult for healthcare professionals to tailor the information to suit each patient.
Bad news in an emergency department may consist of announcing that a relative has been admitted to the ED, or of sharing with patients or their families news concerning the need for hospitalization or conditions that might sooner or later become life-threatening. BBN in the ED is a particular challenge because the patient is generally meeting the emergency physician for the first time and neither of them enters into the relationship by choice. A recent survey revealed that 78.1% of BBN occurred without previous contact between the patient and the physician. Moreover, history taking, diagnosis, and the acknowledgment of bad news are usually accomplished within a very short time frame during which the physician is confronted with distractions, stress, or time constraints. Emergency physician (EP) training in communication skills to notify family members of a patient’s death has been reported to be poor at best, leading medical students, residents, and young physicians to adopt inappropriate communication behaviors, which in turn significantly increase their stress levels. Inappropriate communication behavior does not take into account the needs of patients or their families. Several guidelines have been developed in oncology to help physicians deliver bad news. One of the most widespread BBN protocols is the SPIKES protocol. BBN training in the ED has scarcely been studied to date. The studies undertaken have included a limited number of participants, lacked validated assessment tools or a control group, or were limited to death notification only. In this study, we assessed the effects of incorporating a four-hour ED BBN simulation-based training (BBNSBT) on self-efficacy, the BBN process, and communication skills among medical students and junior residents who rotated in the ED. We hypothesized that BBNSBT has the potential to improve self-efficacy, the BBN process, and communication skills.

Participants were split into small groups of up to six members. The training consisted of: 1) a theoretical course on BBN, SPIKES, and communication skills, with a 15-minute video illustrating the SPIKES components; and 2) a three-hour simulation including six role-plays. Three participants took part in each role-play while the other three watched the simulation. Each role-play took 10-15 minutes, followed by a 20-25 minute debriefing. The debriefings followed the framework for Promoting Excellence and Reflective Learning in Simulation, using the advocacy-inquiry technique. The debriefings focused on the SPIKES protocol and effective communication behaviors. The following steps ensured the consistency of the BBNSBT: 1) the International Nursing Association for Clinical and Simulation Learning Standards of Best Practice for SimulationSM 41,42 were used to design the BBNSBT; 2) six experts, including psychologists, EPs, and simulation instructors, validated the scenarios and simulation design; 3) the same facilitators, a psychologist and an EP trained in BBN and certified as basic simulation instructors, conducted the training; 4) PowerPoint slides with major theory points accompanied the theoretical part of the BBNSBT; and 5) prewritten scripts were used for the role-play explanations and the debriefings.

BBN skills were assessed in simulation exercises involving two standardized family members played by actors. A randomly selected BBN scenario was used to assess each participant in both the pre- and post-test. The scenarios were as follows: 1) a life-threatening situation after a motorcycle accident; 2) a life-threatening cardiogenic shock; and 3) brain damage after a fight.
Each trainee performed in one randomly assigned scenario. The BBN skills assessments were video recorded and anonymized. Two blinded raters assessed participants using two assessment tools. The SPIKES competence form,28 with 14 items, assessed the participants’ compliance with the SPIKES protocol. Each item was scored as “yes” or “no,” resulting in an overall score out of 14. The experts determined a cut-off score using the modified Angoff method.45 A passing score was 11 or above; a failing score was below 11. We used the modified Breaking Bad News Assessment Schedule (mBAS) to evaluate communication skills.46 Rather than allocating points proportionally according to the results obtained, the mBAS scale is reversed, with each item scored from 1 to 5 and lower scores indicating better performance. Overall scores ranged from 5 to 25. The experts also set a passing score of 14 or lower; a failing score was above 14. Assessments were made in two rounds. In the first round, the raters rated each video independently. If there were raw disagreements between raters on an item, they watched the video together, discussed it, and scored it again. The investigators entered the collected data into R, version 3.4.1. The statistician used SAS, version 9.4.
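As an illustration of the two pass/fail rules described above, the sketch below encodes only the item counts and cut-offs stated in the text; the function names are our own, and the assumption that the mBAS comprises five items is inferred from its 5-25 score range.

```python
from typing import List

def spikes_pass(items: List[bool]) -> bool:
    """SPIKES competence form: 14 yes/no items; 11 or more 'yes' answers is a pass."""
    assert len(items) == 14
    return sum(items) >= 11

def mbas_pass(items: List[int]) -> bool:
    """mBAS: reversed scale, each item scored 1 (best) to 5 (worst).
    Assuming five items (overall range 5-25), a total of 14 or lower is a pass."""
    assert len(items) == 5 and all(1 <= s <= 5 for s in items)
    return sum(items) <= 14

# Example: meeting 12 of 14 SPIKES items passes; an mBAS total of 13 passes.
print(spikes_pass([True] * 12 + [False] * 2))  # True
print(mbas_pass([2, 3, 3, 2, 3]))              # True
```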