Exclusion criteria were as follows: treatment seeking for AUD; current diagnosis of a substance use disorder other than nicotine or alcohol; lifetime diagnosis of moderate-to-severe substance use disorder other than nicotine, alcohol, or cannabis; a diagnosis of bipolar disorder or any psychotic disorder; current suicidal ideation; current use of non-prescription drugs other than cannabis; use of cannabis more than twice weekly; clinically significant physical abnormalities as indicated by physical examination and liver function labs; history of chronic medical conditions, such as hepatitis or chronic liver disease; current use of any psychoactive medications, such as antidepressants, mood stabilizers, sedatives, or stimulants; a score ≥ 10 on the Clinical Institute Withdrawal Assessment for Alcohol-Revised, indicating clinically significant alcohol withdrawal requiring medical management; and fear of, or adverse reactions to, needle puncture.

Participants arrived at the UCLA Clinical and Translational Research Center (CTRC) at approximately 10:30 AM. At intake, vitals, height, and weight were measured, and participants were provided with a standardized high-calorie breakfast. IV lines were placed by a registered nurse at approximately 11:30 AM. After participants acclimated to the IV lines, they completed baseline assessments. The alcohol infusion paradigm began at approximately 12:00 PM and lasted 180 min. To ensure all participants were safe to discharge, and to disincentivize low levels of self-administration for early discharge, all participants were required to remain at the CTRC for at least 4 additional hours. Discharge occurred when participant BrAC fell below 40 mg% (or 0 mg% if they were driving). Throughout the infusion, participants were seated in a comfortable chair in a private room.
Participants were not able to view the infusion pump or technician’s screen. To control distractions, participants watched a movie. Study staff remained in the room to monitor the infusion, breathalyze the participant, take vital signs, administer questionnaires, and answer questions, but they did not otherwise engage significantly with participants.

To enable precise control over BrAC and to dissociate bio-behavioral responses to alcohol from responses to cues, alcohol was administered IV using a physiologically based pharmacokinetic model implemented in the Computerized Alcohol Infusion System (CAIS). CAIS estimates BrAC pseudo-continuously based on the infusion time course and the participant’s sex, age, height, weight, and breathalyzer readings. The CAIS system was modified for this study to combine two alcohol administration paradigms: a 3-step standard alcohol challenge followed by self-administration. During the challenge, participants were administered alcohol designed to reach target BrACs of 20, 40, and 60 mg%, each over 15 min. BrACs were clamped at each target level while participants completed questionnaires. This challenge procedure closely mirrors previous studies by our group. Following the 60 mg% time point and a required restroom break, participants began the self-administration paradigm. Participants were permitted to exert effort to obtain additional “drinks” through the CAIS system according to a progressive ratio schedule. Participants were required to order one “drink” to familiarize themselves with the procedure. The progressive ratio was log-linear and was determined through simulations and pilot testing. Ratio requirements ranged from 20 responses to 3139 responses. Each “drink” increased BrAC by 7.5 mg% over 2.5 min, followed by a descent of −1 mg%/min. A maximum BrAC safety limit was set at 120 mg%. If an infusion would exceed this limit, the response button was temporarily inactivated.
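The self-administration contingencies described above can be sketched as a simple simulation. This is a minimal illustration, not the CAIS implementation: the number of available drink steps (`N_DRINKS = 16`), the geometric (log-linear) interpolation between ratio endpoints, and modeling the inactivated response button as a refused drink are assumptions; the 20- and 3139-response endpoints, the +7.5 mg% rise over 2.5 min, the −1 mg%/min descent, the 120 mg% cap, and the 60 mg% starting point (end of the challenge) come from the text.

```python
# Log-linear (geometric) progressive ratio: response requirements grow
# exponentially from 20 (first drink) to 3139 (last drink).
# N_DRINKS is an assumption for illustration; the real schedule is not reported here.
N_DRINKS = 16
FIRST, LAST = 20, 3139

def ratio_requirement(n):
    """Responses required to earn the n-th drink (1-indexed)."""
    frac = (n - 1) / (N_DRINKS - 1)
    return round(FIRST * (LAST / FIRST) ** frac)

requirements = [ratio_requirement(n) for n in range(1, N_DRINKS + 1)]

# BrAC trajectory (mg%) in 0.5-min steps: each earned drink raises BrAC
# by 7.5 mg% over 2.5 min (+3 mg%/min); otherwise BrAC falls 1 mg%/min.
# A drink that would push BrAC past the 120 mg% safety cap is refused
# (a stand-in for the temporarily inactivated response button).
def simulate(drink_times, start_brac=60.0, duration_min=180, dt=0.5):
    brac, trace, rise_left, t = start_brac, [], 0.0, 0.0
    pending = sorted(drink_times)          # minutes at which drinks are earned
    while t < duration_min:
        if pending and t >= pending[0]:
            if brac + 7.5 <= 120.0:        # safety-limit check
                rise_left += 2.5           # minutes of infusion rise remaining
            pending.pop(0)
        if rise_left > 0:
            brac += 3.0 * dt               # +7.5 mg% spread over 2.5 min
            rise_left -= dt
        else:
            brac = max(0.0, brac - 1.0 * dt)  # −1 mg%/min descent
        trace.append(brac)
        t += dt
    return trace
```

For example, `simulate([0.0], duration_min=5)` yields a rise from 60 to 67.5 mg% over the first 2.5 min, then a decline of 1 mg%/min thereafter.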
Except for the first “drink”, participants were given no instructions with respect to their self-administration. After 180 min, the infusion ended, the IV line was removed, and participants were provided lunch.

The aim of this study was to develop a clinical neuroscience laboratory paradigm to test predictions emerging from preclinical research.
A key tenet of the Allostatic Model is that prolonged drinking produces neurobiological adaptations that diminish the salience of positive reinforcement while simultaneously producing abstinence-related dysphoria and potentiating negative reinforcement. In this study, we developed a novel IV alcohol administration paradigm in humans that combines standardized alcohol challenge methods with progressive ratio self-administration, providing a reliable assessment of subjective response (SR) and a translational measure of motivation to consume alcohol, respectively. SR was measured in terms of positive dimensions, negative dimensions, sedation, and craving. By integrating measures of subjective effects and behavioral reinforcement, we could test whether SR predicted self-administration behavior, thus capturing the relationships between reward and reinforcement central to allostatic processes. As expected, severity of alcohol use predicted greater overall alcohol craving and greater self-administration. Further validating the paradigm, we observed a robust relationship between self-reported craving for alcohol during the challenge and subsequent reinforcement behavior. Interestingly, alcohol-induced increases in craving predicted self-administration independent of alcohol use severity, suggesting that reactivity to a priming dose of alcohol may represent an independent risk factor for escalated alcohol consumption. Similar reactivity effects have been observed with respect to alcohol and stress. These results suggest that craving is a proximal predictor of alcohol consumption and thus an appropriate target for intervention research. Our hypotheses regarding blunted positive reinforcement in severe alcoholism were not supported by these data. Alcohol use severity did not affect stimulation in the challenge, and stimulation did not robustly predict self-administration regardless of alcohol use severity.
These results stand in contrast to our previous reports, which found diminished associations between stimulation/hedonia and craving in dependence, as compared to non-dependent heavy drinking.
However, our previous studies used craving as a proxy endpoint for reinforcement, and thus those results may not generalize to actual motivated alcohol consumption. Several recent CAIS studies have observed significant relationships between stimulation and self-administration; however, multiple study factors, including sample drinking intensity and alcohol use disorder severity, target BrAC, and free-access vs. progressive ratio schedules of reinforcement, may explain these discrepancies. Our hypothesis that negative reinforcement would be stronger among more severe participants was only partially supported. Alcohol use severity was associated with greater levels of depressive symptomatology and basal negative affect, but alcohol use severity did not predict alcohol-induced alleviation of negative affect. Furthermore, negative affect did not robustly predict reinforcement behavior. While these negative affect findings are consistent with our previous studies on craving, they appear inconsistent with a body of literature that has demonstrated relationships between negative affect and naturalistic alcohol use. Most studies that have observed a relationship between negative affect and drinking behavior assess negative affectivity as a trait-like variable, whereas this study assessed state negative affect immediately prior to the self-administration paradigm. It is possible that participants completing a laboratory paradigm such as this are in an atypically positive mood, since they are going to be compensated for their participation, are anticipating receiving alcohol, and do not have to deal with daily life hassles during their participation. Secondly, it is possible that the predictors of alcohol self-administration in a controlled laboratory setting are dissociable from the predictors of naturalistic drinking, which is more susceptible to exogenous factors such as drinking cues, peer influence, drinking habits/patterns, and life stressors.
Future studies are necessary to examine these multiple possible explanations. In terms of sedation, these results were partially consistent with the Differentiator and Low Level of Response Models, which advance sedation as a protective factor against excessive alcohol use. Although alcohol use severity was associated with greater overall sedation, this effect represented a baseline difference that was carried forward rather than a difference in the acute response to alcohol, and greater sedation during the challenge did predict lower levels of self-administration. The lack of light-to-moderate drinkers in our sample may explain these counterintuitive challenge results, as most other studies compare lighter drinkers to heavy drinkers. This study should be interpreted in light of its strengths and weaknesses. The study benefits chiefly from a novel, highly controlled, and translational alcohol administration paradigm that measures alcohol reward and reinforcement and isolates reactivity to alcohol-related cues. The primary limitation was the relatively small sample of participants with severe AUD per DSM-5.
The fact that, for ethical reasons, participants were required to be non-treatment-seeking and able to produce a zero reading on a breathalyzer test at each visit may have impeded recruitment of severe AUD participants. While severe AUD participants were enrolled, this subgroup was smaller and generally represented the lower range of severe AUD. Allostatic neuroadaptations may occur chiefly at higher levels of dependence severity, such as those induced by the ethanol vapor paradigm, and participants at this level of severity would likely have been excluded from this study for safety reasons. The relative scarcity of severe AUD participants also reduces statistical power to detect effects that are expected to arise at this severe range. That said, our sample is comparable to other “severe” samples recruited in alcohol challenge studies. A substantial self-administration ceiling effect, in which 36% of participants reached the BrAC safety threshold, may also have affected our results. Additionally, the restriction of the sample to Caucasian ethnicity limits the generalizability of these results. Lastly, although this study was cross-sectional, allostatic processes are necessarily longitudinal. In these analyses, alcohol use severity was used as a proxy for this longitudinal process, capturing multiple facets of alcohol use and problems; however, this approach assumes a relatively linear and progressive course of alcoholism, which may not represent many AUD patients. In conclusion, this study represents a novel approach to translating preclinical theories of addiction to human-subjects research. In these data, subjective craving strongly predicted reinforcement behavior and sedation was moderately protective. Conversely, we observed relatively little evidence for the allostatic processes of diminished positive reinforcement and enhanced negative reinforcement in participants with relatively severe alcohol use and problems.
Further studies refining and enhancing this translational paradigm, for example by including affective manipulations to test the role of stress in reward and reinforcement, are warranted. Interestingly, ecological research has highlighted the role of acute stress events in predicting drug use, as opposed to the basal negative affect measured in this study. Furthermore, given the severity of dependence induced by preclinical paradigms, recruitment of more severe AUD samples may be necessary for a robust translational examination.

A significant proportion of opioid overdose deaths occur in the presence of benzodiazepines (BDZ), sedative medications commonly prescribed for anxiety and insomnia. Benzodiazepines increase the risk of fatal overdose from respiratory suppression when taken together with other central nervous system depressants such as opioids or alcohol. Though recent clinical guidelines caution against co-prescription of BDZ and opioids, BDZ are commonly co-prescribed with opioids, and rates of co-prescription in outpatient settings have risen in recent decades. The number of BDZ prescriptions filled in the United States rose by two-thirds between 1996 and 2013, and in 2015 nearly a quarter of individuals who died from opioid overdose also tested positive for BDZ. Among individuals prescribed opioids in the U.S., the proportion with co-prescribed BDZ rose between approximately 2001 and 2014. Among individuals with non-cancer diagnoses prescribed opioids, BDZ co-prescription is associated with a higher prescribed opioid dose and a longer duration of prescription. Co-prescription is also associated with emergency department visits and inpatient admissions, and with elevated rates of medical, mental, and substance use comorbidities.
Prior work has suggested associations between BDZ use and all-cause mortality, even for short durations of use, as well as associations with specific causes of death, including cardiovascular disease and cancer, though findings have been mixed and inconclusive. Studies investigating specific harms of BDZ and opioid co-prescription that utilize electronic health record (EHR) data linked with prescription drug monitoring program (PDMP) databases, and studies focusing on at-risk populations, including those with opioid use disorder (OUD), are relatively lacking. Extant studies that have utilized EHRs of single healthcare systems or Medicaid claims do not capture a broader, comprehensive range of populations filling prescriptions from multiple providers and pharmacies, including simultaneous prescriptions. We previously demonstrated that escalating prescribed opioid dose was associated with all-cause mortality in a large health system. Linking individuals’ medical, death, and PDMP records, the present study expands our previous efforts to investigate BDZ and opioid co-prescribing and dosage among patients with OUD and matched patients without OUD in relation to mortality. We hypothesized that patients with OUD would be more likely to be prescribed BDZ than non-OUD patients; that the average daily dose of prescribed BDZ would be higher in OUD patients than in non-OUD patients; that higher BDZ dose would be associated with greater mortality in both OUD and non-OUD patients; and that co-prescription of BDZ and opioids would be associated with greater mortality than either alone in both OUD and non-OUD patients.