Outcomes before and after the policy is implemented are compared between the study group and the comparison group, which allows the investigator to account for whether a general secular trend was influencing both groups. I compared high school students' marijuana use trends in the City of Los Angeles to trends in the cities that never allowed dispensaries using a difference-in-difference design. Difference-in-difference is a useful technique for observational studies, natural experiments, and other research analyses where it is not possible to randomize individuals into equivalent treatment and control groups. It compares trends between two groups and therefore requires pre- and post-intervention data for a cohort, for individuals followed over time, or for repeated cross-sectional samples at the individual or group level. This design allowed me to control for events and secular trends that occurred during the study period, such as incremental changes in state laws, general shifts in social attitudes that may have been linked to changes in laws in other states, and national-level shifts in attitudes that could have affected both the intervention and control groups. Difference-in-difference analyses are based on traditional regression analyses and can be applied to linear or generalized linear regression models. To measure the relative difference between the intervention and control groups over time, an interaction term is included that compares pre- and post-intervention measures between the two groups and provides a parameter estimate for their joint effect.
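As a concrete illustration, the interaction term could be specified as in the minimal sketch below; the dataset and variable names (chks_trend, mj_use, la_city, post) are hypothetical placeholders rather than the actual CHKS field names.

   proc logistic data=chks_trend;
      class la_city(ref='0') post(ref='0') / param=ref;
      model mj_use(event='1') = la_city post la_city*post;
      /* la_city*post is the difference-in-difference term: its coefficient
         estimates how much more the log-odds of use changed in the
         intervention city than in the comparison cities after the policy */
   run;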
Where the difference-in-difference approach departs from traditional longitudinal or repeated measures trend analyses is that change over time is measured not within the intervention group relative to baseline, but by how much more the intervention group changed relative to the control group. Focusing on outcomes in the intervention group relative to the control group allows the researcher to isolate the effect of the intervention from any secular trends or background influences that may have contributed to the observed outcomes in the intervention group. The control group accounts for any background trends or influences that could have contributed to changes in outcomes but were not measured or could not be included in the analysis. The analyses that test Research Questions 2-5 and the cross-sectional analysis of the focal relationship between dispensary bans and student marijuana use rely on HGLM models that account for students being clustered in cities. Although they are based on the combined 2015/2016 and 2016/2017 school years, these two school years are pooled and the analyses are treated as cross-sectional. I elected to use logistic rather than Poisson regression to test the hypotheses for Research Questions 2-5 because there was no substantive difference in the estimates or conclusions between the Poisson and logistic specifications in the difference-in-difference analysis, and because logistic regression is the most common approach for analyzing binary data. Accounting for clustering matters because students can be thought of as nested within schools, and schools within cities; conducting research while ignoring whether students within the same city are more alike one another than students from other cities can lead to erroneous conclusions.
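The logistic-versus-Poisson comparison described above could be carried out with two otherwise identical specifications, as in the hedged sketch below; the dataset and variable names (chks1517, mj_recent, ban, grade, gender) are placeholders, not the exact models used. The two fits can then be compared on the direction and size of the focal estimate and on the -2 log likelihood.

   /* Logistic specification: binary distribution, logit link */
   proc glimmix data=chks1517 method=quad;
      class city grade gender;
      model mj_recent(event='1') = ban grade gender
            / dist=binary link=logit solution oddsratio;
      random intercept / subject=city;
   run;

   /* Poisson specification of the same relationship, for comparison */
   proc glimmix data=chks1517 method=quad;
      class city grade gender;
      model mj_recent = ban grade gender / dist=poisson link=log solution;
      random intercept / subject=city;
   run;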
Research has shown that ignoring the levels and nesting that naturally occur in data, whether from organizational structures or from how the data were collected, can distort estimated variances and degrade the ability to detect treatment or covariate effects. Ignoring nesting or clustering can also increase the odds of a Type I error and lead to substantive errors in interpreting the results of statistical significance tests. Multilevel models were developed to avoid these model specification errors by properly accounting for data that are correlated by geographic, political, or administrative units. For this dissertation, for example, I am interested in modeling lifetime and recent marijuana use by individual students nested within cities. I am testing a dichotomous city-level predictor, whether the city where the student attends school allows dispensaries, while accounting for student characteristics such as grade level and city characteristics such as how many dispensaries are located in the city. HGLMs are appropriate for multilevel models with categorical, non-normally distributed response variables, including binary, proportion, count, or ordinal data. With categorical outcomes such as these, the assumptions of normally distributed, homoscedastic errors are violated, so a nonlinear link function is used to transform the outcomes. A non-normal error distribution also needs to be incorporated into the models so that the model-building strategies and interpretations used for hierarchical linear models remain applicable. Multilevel models with dichotomous outcomes most commonly use the binomial distribution and the logit link to estimate, for example, odds ratios and the impact of characteristics at different levels on those odds.
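A minimal two-level HGLM of this kind, with a binomial distribution, a logit link, and a city-level random intercept, might look like the sketch below; the variable names (mj_lifetime, allows_disp, disp_per_10k, grade) are hypothetical stand-ins for the student- and city-level measures described above.

   proc glimmix data=chks method=quad;
      class city grade;
      model mj_lifetime(event='1') = allows_disp disp_per_10k grade
            / dist=binary link=logit solution oddsratio;
      /* city-level random intercept captures between-city variation in use */
      random intercept / subject=city;
   run;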
Conceptually, it makes sense that there is a multilevel structure to the data, given that the research questions and hypotheses focus on the impacts of city policy on student marijuana use and that students within the same city may be more like each other than students from other cities. It is best, however, to use the most parsimonious model that fits the data. To verify whether a multilevel approach was needed, I conducted a model-building process that began with an unconditional model and compared model fit between the simpler and more complex models. Model fit for HGLMs is assessed using an approximate likelihood strategy such as maximum likelihood estimation with adaptive quadrature, an estimation technique available with PROC GLIMMIX. Using this technique, I assessed the need for a multilevel structure by noting the change in the -2 log likelihood (-2LL) between a single-level model and the nested multilevel model with a deviance test. Lower deviance implies better fit; however, models with more parameters will always have lower deviance. A likelihood ratio test is therefore used to investigate whether the change in the -2LL is statistically significant. This likelihood ratio test is analogous to a chi-square difference test, where χ2 is equal to the -2LL of the simpler model minus the -2LL of the more complex model, with degrees of freedom equal to the difference in the number of parameters between the two nested models. The primary reason to use city as the level 2 unit was that the research questions focus on predictors of marijuana use according to city marijuana policy and city characteristics, rather than school-level predictors. To verify the use of city as the level 2 unit, I compared the intraclass correlation (ICC) from an empty model using school as the level 2 unit to one using city as the level 2 unit. This analysis indicated that the ICCs within schools were smaller than the ICCs observed for cities. Cities were also a more appropriate level 2 unit because more than half of the cities in the county were represented by only one school, which would have conflated clustering by school and by city if school were used as the cluster variable. Next, however, I needed to address whether school should be maintained as a level of the HGLM in addition to city. Design effect calculations (1 + (average cluster size − 1) × ICC) for each outcome of interest all returned values less than 2, indicating that, for each of these variables, adding school as the second level and keeping city as the third level would add more complexity than clarity to the analysis and was not necessary.

Missing data can introduce bias into regression analysis, so before testing my research questions I examined the frequency of missing responses in the CHKS dataset by generating frequency tables for each of the study variables. For the majority of the study variables, less than 5% of the responses were missing. A missing rate of 5% or less is generally assumed not to introduce bias into the analysis, so the dataset was treated with listwise deletion, the default method programmed into SAS regression procedures.
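Both the empty-model comparison and the missingness screening described above can be sketched as follows; dataset and variable names are placeholders. With a logit link the level-1 variance is fixed at π²/3 (about 3.29), so the ICC is the city-level variance divided by (city-level variance + 3.29), and that ICC feeds the design effect formula given above.

   /* Unconditional (empty) two-level model: its -2 log likelihood is compared
      with that of an intercept-only single-level model in a deviance test, and
      its city-level variance component is used to compute the ICC */
   proc glimmix data=chks method=quad;
      class city;
      model mj_lifetime(event='1') = / dist=binary link=logit solution;
      random intercept / subject=city;
   run;

   /* Frequency tables used to screen each study variable for missing responses */
   proc freq data=chks;
      tables mj_lifetime mj_recent grade gender race_eth / missing;
   run;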
The most important variable used in this dissertation that had a high proportion of missing values was the race/ethnicity variable. In the pooled 2015-2017 dataset used for the cross-sectional analyses, 14.64% of the values were missing in the 2015/2016 school year and 12.95% were missing in the 2016/2017 school year, for an average of 13.75% missing values on the race/ethnicity variable in the pooled dataset. These missing values were primarily found among students who had reported Hispanic ethnicity in a separate question. I therefore addressed the missing values for race/ethnicity by creating a combined race/ethnicity variable that included Hispanic as a racial/ethnic category. Because the proportion of missing values for the Hispanic ethnicity variable was only 1.94% in the pooled 2015-2017 data, this brought the proportion of missing values for race/ethnicity within acceptable parameters. I applied the same technique to the earlier years of data that asked about Hispanic ethnicity as a separate question. The school years 2005/2006 through 2007/2009 included Hispanic ethnicity among the other racial/ethnic categories rather than as a separate questionnaire item and did not have a high proportion of missing values.

To address the assumption that the independent variables are measured without error, I ran diagnostic analyses to check for influential observations among the continuous independent variables, such as data errors or valid outlier values. I assessed whether any of these potentially influential points had an impact on the regression coefficient estimates using the INFLUENCE and IPLOTS options to produce index plots useful for identifying extreme values. These options display standardized Pearson residuals, deviance residuals, and leverage values and plot them against the predicted probabilities and index numbers. The vertical axis of an index plot represents the value of the diagnostic statistic, and the horizontal axis represents the sequence of the observation. The continuous measure of MMDS per 10,000 residents was the only source of extreme outliers in my data that could introduce bias into the parameter estimates, as the categorical nature of most of the other independent variables precluded extreme values. I identified extreme values for the number of dispensaries per 10,000 city residents in two communities that had several dispensaries but very small residential populations. I compared Poisson regressions of student marijuana use on city dispensary density that excluded students from these cities to models that included them. Because these students represented a small number of cases, there were no major differences in the -2 log likelihood or parameter estimates between the models that excluded students from the cities with very high ratios of dispensaries per 10,000 population and the models that retained them. Because these were true ratios rather than data or coding errors, I retained all of the observations in the dataset.

The assumption of the independence of observations was partially met. Because the CHKS survey is anonymous, each year of data is treated as an independent sample of students. Although a student may have taken the survey in a previous year, there is no way to link their data from one survey year to the next and therefore no way to account for these within-person effects.
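The index-plot diagnostics described above follow the pattern sketched below; this is a simplified single-level illustration with placeholder variable names rather than the full model specification used for the analyses.

   proc logistic data=chks;
      class grade gender / param=ref;
      /* INFLUENCE and IPLOTS request Pearson residuals, deviance residuals,
         leverage (hat) values, and index plots of each diagnostic */
      model mj_recent(event='1') = disp_per_10k grade gender
            / influence iplots;
   run;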
Treating each year as an independent sample ignores the inherent dependence of any repeated observations of the same student across survey years and represents a limitation of these data and analyses. To address this limitation, I used a method common to repeated cross-sectional survey data, in which the unit of analysis for each year is the batch of students who completed the survey that year, and the mode of the dependent variable at the school level is included to adjust for the effect of each school. The clustering of students within schools and cities could also violate the independence-of-errors assumption, because students within the same school or city can be expected to be more like each other than respondents from other communities, so the errors associated with one observation would be correlated with the errors of another. I accounted for this structure using multilevel models for the cross-sectional analyses and clustered standard errors in the repeated measures analysis. To test the hypotheses associated with Research Question 2, I used the rate of verified dispensaries per 10,000 city residents as the moderating variable and controlled for factors known to influence marijuana use among adolescents, such as gender, race/ethnicity, and socioeconomic status.
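One common way to obtain clustered standard errors for a repeated cross-sectional analysis in SAS is a GEE specification with an empirical (robust) covariance estimator clustered on city, sketched below under that assumption; the variable names, including the school-level mode adjustment (sch_mode), are hypothetical placeholders rather than the exact model used.

   proc genmod data=chks_trend descending;
      class city year;
      model mj_use = la_city post la_city*post year sch_mode
            / dist=binomial link=logit;
      /* independence working correlation with robust (sandwich) standard
         errors that account for clustering of students within cities */
      repeated subject=city / type=ind;
   run;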