This typical pattern of neural maturation occurred among adolescents who remained nondrinkers

We found significant drinking status × time interactions in a number of distinct and reproducible brain regions commonly associated with response inhibition. Prior to initiating substance use, adolescents who went on to initiate heavy use showed less BOLD activation during inhibitory trials in frontal regions, including the bilateral middle frontal gyri, and non-frontal regions, including the right inferior parietal lobule, putamen, and cerebellar tonsil, compared with those who continued to abstain from alcohol use. This pattern of hypoactivity during response inhibition among youth who later initiated heavy drinking is consistent with studies showing that decreased activity during response inhibition predicts later alcohol and substance use. Indeed, change in BOLD response contrast over time in the right middle frontal gyrus was associated with lifetime alcohol drinks at follow-up. Together, these findings provide additional evidence for the utility of fMRI in identifying neural vulnerabilities to substance use even when no behavioral differences are apparent. At follow-up, adolescents who transitioned into heavy drinking showed increasing brain activation in the bilateral middle frontal gyri, right inferior parietal lobule, and left cerebellar tonsil during inhibition, whereas non-drinking controls exhibited decreasing activation in these regions. These regions have been implicated in stimulus recognition, working memory, and response selection, all of which are critical to successful response inhibition. Indeed, neuroanatomical models of inhibitory control highlight the importance of frontoparietal attentional control and working memory networks. These models posit that inhibition and cognitive control involve frontoparietal brain regions when detecting and responding to behaviorally relevant stimuli.
Thus, these findings suggest that heavy drinkers recruit greater activity in these neural networks in order to successfully inhibit prepotent responses.

Given the longitudinal nature of the current study, it is important to consider our findings in the context of typical adolescent neural maturation. During typical maturation, adolescents exhibit less activation over time as neural networks become more refined and efficient. Adolescents who transitioned into heavy drinking showed the opposite pattern, increasing activation despite similar performance, suggesting that alcohol consumption may alter typical neural development. The current findings should be considered in light of possible limitations. Although heavy drinking and non-drinking youth were matched on several baseline and follow-up measures, heavy drinking youth reported more cannabis, nicotine, and other illicit drug use at follow-up. Differential activation remained significant after statistically controlling for lifetime substance use, but such differences may still contribute to our findings. Further, simultaneous substance use might be associated with these results. Future research should compare the effects of polysubstance use during the same episode with the effects of heavy drinking alone on neural responses. It is also important to note that adolescence is a period of significant inter-individual differences in neural development; to address this issue, we matched groups on self-reported pubertal development and age at baseline and follow-up. For the current sample, histograms of age distributions at baseline and follow-up are provided in Online Resource 1. Again, our groups were well matched on these variables; however, additional longitudinal research examining the effects of puberty and hormonal changes on neural functioning and response inhibition is needed.
In summary, the current data suggest that pre-existing differences in brain activity during response inhibition increase the likelihood of initiating heavy drinking, and initiating heavy alcohol consumption leads to differential neural activity associated with response inhibition.

These findings make a significant contribution to the developmental and addictive behaviors fields, as this is the first study to examine neural response differences during response inhibition prior to and following the transition into heavy drinking among developing adolescents. Further, we provide additional support for the utility of fMRI in identifying neural vulnerabilities to substance use even when no behavioral differences are apparent. Identifying such neural vulnerabilities before associated behaviors emerge provides an additional tool for selecting and applying targeted prevention programs. Given that primary prevention approaches among youth have not been widely effective, targeted prevention programs for youth at greatest neurobiological risk could be a novel, effective approach. As such, our findings provide important information for improving primary prevention programs, as well as for answering the question of whether neural differences predate alcohol initiation or arise as a consequence of alcohol use.

Although researchers in sociology, cultural studies, and anthropology have attempted, for the last 20 years, to re-conceptualize ethnicity within post-modernist thought and have debated the usefulness of such concepts as “new ethnicities,” researchers within the field of alcohol and drug use continue to collect data on ethnic groups on an annual basis using previously determined, census-formulated categories. Researchers use these data to track the extent to which ethnic groups consume drugs and alcohol, exhibit specific alcohol and drug using practices, and develop substance use related problems. In so doing, particular ethnic minority or immigrant groups are identified as high risk for developing drug and alcohol problems. In order to monitor the extent to which such risk factors contribute to substance use problems, the continuing collection of data is seen as essential.

However, the collection of this epidemiological data, at least within drug and alcohol research, seems to take place with little regard for either contemporary social science debates on ethnicity or the on-going debates within social epidemiology on the usefulness of classifying people by race and ethnicity. While the conceptualization of ethnicity and race has evolved over time within the social sciences, “most scholars continue to depend on empirical results produced by scholars who have not seriously questioned racial statistics”. Consequently, much of the existing work in drug and alcohol research remains stuck in discussions about concepts long discarded in mainstream sociology or anthropology, yielding robust empirical data that is arguably based on questionable constructs. Given this background, the aim of this paper is to outline briefly how ethnicity has been operationalized historically and continues to be conceptualized in mainstream epidemiological research on ethnicity and substance use. We will then critically assess this current state of affairs, using recent theorizing within sociology, anthropology, and health studies. In the final section of the paper, we hope to build upon our “cultural critique” of the field by suggesting a more critical approach to examining ethnicity in relation to drug and alcohol consumption. According to Kertzer & Arel, the development of nation states in the 19th century went hand in hand with the development of national statistics gathering, which was used as a way of categorizing populations and setting boundaries across pre-existing, shifting identities. Nation states became more and more interested in representing their populations along identity criteria, and the census arose as the most visible means by which states could depict and even invent collective identities.
In this way, previously ambiguous and context-dependent identities were, by the use of census technology, ‘frozen’ and given political significance. “The use of identity categories in censuses was to create a particular vision of social reality. All people were assigned to a single category and hence conceptualized as sharing a common collective identity”, yet certain groups were assigned a subordinate position. In France, for example, the primary distinction was between those who were part of the nation and those who were foreigners, whereas British, American, and Australian census designers have long been interested in the country of origin of their residents. In the US, the refusal to enfranchise Blacks or Native Americans led to the development of racial categories, and these categories were in the US census from the beginning. In some of the 50 federated states of the US, there were laws, including the “one drop of blood” rule, that determined that to have any Black ancestors meant that one was de jure Black. Soon a growing number of categories supplemented the original distinction between white and black.

Native Americans appeared in 1820, Chinese in 1870, Japanese in 1890, Filipino, Hindu and Korean in 1920, Mexican in 1930, and Hawaiian and Eskimo in 1960. In 1977, the Office of Management and Budget (OMB), which sets the standards for racial/ethnic classification in federal data collections including the US Census, established a minimum set of categories for race/ethnicity data that included four race categories and two ethnicity categories. In 1997, the OMB announced revisions allowing individuals to select one or more races, but not allowing a multiracial category. Since October 1997, the OMB has recognized five categories of race and two categories of ethnicity. In considering these classifications, the extent to which dominant race/ethnic characterizations are influenced both by bureaucratic procedures and by political decisions is striking. For example, the adoption of the term Asian-American grew out of attempts to replace the exoticizing and marginalizing connotations of the externally imposed pan-ethnic label it replaced, i.e. “Oriental”. Asian American pan-ethnic mobilization developed in part as a response to common discrimination faced by people of many different Asian ethnic groups and to the externally imposed racialization of these groups. This pan-ethnic identity has its roots in many ways in a racist homogenizing that constructs Asians as a unitary group, and which delimits the parameters of “Asian American” cultural identity as an imposed racialized ethnic category. Today, the racial formation of Asian American is the result of a complex interplay between the federal state, diverse social movements, and lived experience. Such developments and characterizations then determine how statistical data are collected. In fact, the OMB itself admits to the arbitrary nature of the census classifications and concedes that its own race and ethnic categories are neither anthropologically nor scientifically based.
Issues of ethnic classification continue to play an important role in health research. However, some researchers working in public health have become increasingly concerned about the usefulness or applicability of racial and ethnic classifications. For example, as early as 1992, a commentary piece in the Journal of the American Medical Association challenged the journal editors to “do no harm” in publishing studies of racial differences. Quoting the Hippocratic Oath, the authors urged writers to address race in a way that did not perpetuate racism. While some researchers have argued against classifying people by race and ethnicity on the grounds that it reinforces racial and ethnic divisions (Kaplan & Bennett, 2003; Fullilove, 1998; Bhopal, 2004), others have strongly argued for the importance of using these classifications for documenting health disparities. Because we know that substantial differences in physiological and health status between racial and ethnic groups do exist, relying on racial and ethnic classifications allows us to identify, monitor, and target health disparities. On the other hand, estimated disparities in health are entirely dependent upon who ends up in each racial/ethnic category, a process with arguably little objective basis beyond the slippery rule of social convention. If the categorization into racial groups is to be defended, we, as researchers, are obligated to employ a classification scheme that is practical, unambiguous, consistent, and reliable, but that also responds flexibly to evolving social conceptions. Hence, the dilemma at the core of this debate is that while researchers need to monitor the health of ethnic minority populations in order to eliminate racial/ethnic health disparities, they must also “avoid the reification of underlying racist assumptions that accompanies the use of ‘race’, ethnicity and/or culture as a descriptor of these groups.
We cannot live with ‘race’, but we have not yet discovered how to live without it”. In mainstream drug and alcohol research, traditional ethnic group categories continue to be assessed in ways which suggest little critical reflection on the validity of the measurement itself. This is surprising given that social scientists since the early 1990s have critiqued the propensity of researchers to essentialize identity as something ‘fixed’ or ‘discrete’ and to neglect how social structure shapes identity formation. Recent social science literature on identity suggests that people are moving away from rooted identities based on place and towards a more fluid, strategic, positional, and context-reliant understanding of identity. This does not mean, however, that there is an unfettered ability to freely choose labels or identities, as if off of a menu.


Eligible participants were invited to the laboratory for additional screening

Alcohol dependent patients who underwent cue-exposure extinction training had larger decreases in neural alcohol cue-reactivity in mesocorticolimbic reward circuitry than patients who had standard clinic treatment. Cognitive bias modification training, which similarly trains individuals to reduce attentional bias towards alcohol cues, resulted in decreased neural alcohol cue-reactivity in the amygdala and reduced medial prefrontal cortex activation when approaching alcohol cues. These studies suggest that fMRI tasks may be sensitive to treatment response. Further, neurobiological circuits identified using fMRI can be used to predict treatment and drinking outcomes, providing unique information beyond that of self-report and behavior. Individuals with alcohol use disorder (AUD) who return to use demonstrate increased activation in the mPFC to alcohol cues compared to individuals with AUD who remain abstinent. Moreover, the degree to which the mPFC was activated was associated with the amount of subsequent alcohol intake, but not alcohol craving. Activation in the dorsolateral PFC to alcohol visual cues has been associated with higher percent heavy drinking days in treatment-seeking alcohol dependent individuals. Increased activation in the mPFC, orbitofrontal cortex, and caudate in response to alcohol cues has also been associated with the escalation of drinking in young adults. Mixed findings have been reported for the direction of the association between cue-induced striatal activation and return to use: both increases and decreases in ventral and dorsal striatal activation to alcohol cues have been associated with subsequent return to use. Utilizing a different paradigm, Seo and colleagues found that increased mPFC, ventral striatal, and precuneus activation to individually tailored neutral imagery scripts predicted subsequent return to use in treatment-seeking individuals with AUD.
Interestingly, brain activity during individually tailored alcohol and stress imagery scripts was not associated with return to use.

While initial evidence indicates that psychological interventions are effective at reducing mesocorticolimbic response to alcohol-associated cues, few studies have prospectively evaluated if psychosocial interventions attenuate neural cue-reactivity that in turn reduces drinking in the same population. Furthermore, no previous studies have used neural reactivity to alcohol cues to understand the mechanisms of brief interventions. Therefore, this study aimed to examine the effect of a brief intervention on drinking outcomes, neural alcohol cue-reactivity, and the ability of neural alcohol cue-reactivity to predict drinking outcomes. Specifically, this study investigated: 1) if the brief intervention would reduce percent heavy drinking days or drinks per week in non-treatment seeking heavy drinkers in the month following the intervention and 2) if the brief intervention would attenuate neural alcohol cue-reactivity. In the first case, we predicted significant effects on drinking based on the existing clinical literature and, in the second case, we predicted decrements in alcohol’s motivational salience based on the feedback about the participant’s drinking levels relative to clinical recommendations and their personal negative consequences of drinking. The effects of neural cue reactivity on subsequent drinking outcomes were tested in order to elucidate patterns of neural cue-reactivity that predict drinking behavior prospectively. Participants were recruited between November 2015 and February 2017 from the greater Los Angeles metropolitan area. Study advertisements described a research study investigating the effects of a brief health education session on beliefs about the risks and benefits of alcohol use. 
Inclusion criteria were as follows: engaging in regular heavy drinking, as indicated by consuming 5 or more drinks per occasion for men or 4 or more drinks per occasion for women at least 4 times in the month prior to enrollment, and a score of ≥8 on the Alcohol Use Disorders Identification Test.
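For concreteness, the stated thresholds could be encoded as a simple screening check. This is an illustrative sketch only, not the study's actual screening software; the function name and inputs are assumptions.

```python
def meets_inclusion(sex, drinks_per_occasion, audit_score):
    """Sketch of the stated inclusion thresholds (illustrative only).

    sex: "male" or "female"
    drinks_per_occasion: list of drink counts, one per drinking
        occasion in the past month
    audit_score: total AUDIT score
    """
    # A heavy-drinking occasion is 5+ drinks for men, 4+ for women.
    threshold = 5 if sex == "male" else 4
    heavy_episodes = sum(1 for d in drinks_per_occasion if d >= threshold)
    # Inclusion requires >=4 heavy occasions in the past month
    # and an AUDIT score of 8 or higher.
    return heavy_episodes >= 4 and audit_score >= 8
```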

Exclusion criteria included being under the age of 21; currently receiving treatment for alcohol problems, a history of treatment in the 30 days before enrollment, or currently seeking treatment; a positive urine toxicology screen for any drug other than cannabis; a lifetime history of schizophrenia, bipolar disorder, or other psychotic disorder; serious alcohol withdrawal symptoms, as indicated by a score of ≥10 on the Clinical Institute Withdrawal Assessment for Alcohol-Revised; a history of epilepsy, seizures, or severe head trauma; non-removable ferromagnetic objects in the body; claustrophobia; and pregnancy. Initial assessment of the eligibility criteria was conducted through a telephone interview. Upon arrival, participants read and signed an informed consent form. Participants then completed a series of individual differences measures and interviews, including a demographics questionnaire and the Timeline Follow-back to assess quantity and frequency of drinking over the past 30 days. All participants were required to test negative on a urine drug test. A total of 120 participants were screened in the laboratory; 38 did not meet inclusion criteria and 12 decided not to participate in the trial, leaving 60 participants who enrolled and were randomized. Of the 60 individuals randomized, 46 completed the entire study. See Figure 1 for a CONSORT diagram for this trial. The study was a randomized controlled trial. Participants were assessed at baseline for study eligibility, and eligible participants returned for the randomization visit up to two weeks later. During their second visit, participants completed assessments and were then randomly assigned to receive a 1-session brief intervention or an attention-matched control condition. Immediately after the conclusion of the session, participants completed a functional magnetic resonance imaging (fMRI) scan to assess brain activity during exposure to alcohol cues and completed additional assessments.
Participants were followed up 4 weeks later to assess alcohol use since the intervention through the 30-day Timeline Follow-back interview. Participants who completed all study measures were compensated $160. The brief intervention consisted of a 30–45 minute individual face-to-face session based on the principles of motivational interviewing (MI).

The intervention adhered to the FRAMES model, which includes personalized feedback, emphasizing personal responsibility, providing brief advice, offering a menu of change options, conveying empathy, and encouraging self-efficacy. In accordance with MI principles, the intervention was non-confrontational and emphasized participants’ autonomy. The content of the intervention mirrored brief interventions to reduce alcohol use that have been studied with non-treatment seeking heavy drinkers. The intervention included the following specific components: 1) giving normative feedback about frequency of drinking and of heavy drinking; 2) providing the Alcohol Use Disorders Identification Test score and associated risk level; 3) discussing potential health risks associated with alcohol use; 4) placing the responsibility for change on the individual; 5) discussing the reasons for drinking and the downsides of drinking; and 6) setting a goal and change plan if the participant was receptive. The aim of the intervention was to help participants understand their level of risk and to help them initiate changes in their alcohol use. Sessions were delivered by master’s-level therapists who received training in MI techniques, including the use of open-ended questions, reflective listening, summarizing, and eliciting change talk, and in the content of the intervention. All sessions were audiotaped and rated by author MPK for fidelity and for quality of MI interventions using the Global Rating of Motivational Interviewing Therapists. On the 7-point scale, session scores ranged from 5.87 to 6.93 with an average rating of 6.61 ± 0.23, indicating that the MI techniques used in the intervention were delivered with good quality. Supervision and feedback were provided to therapists by author MPK following each intervention session. The treatment manual is available from the last author upon request.
Participants randomized to the attention-matched control condition viewed a 30-minute video about astronomy. In the control condition there was no mention of alcohol or drug use beyond completion of research assessments. Both the intervention and attention-matched control sessions took place within the UCLA Center for Cognitive Neuroscience, in separate rooms from the neuroimaging suite. The following individual questionnaires and interviews were administered during the study: the 30-day Timeline Follow-back, administered in interview format by trained research assistants to capture daily alcohol and marijuana use over the 30 days prior to the visit; the self-report Alcohol Use Disorders Identification Test, administered to assess drinking severity; and the Penn Alcohol Craving Scale, administered to measure alcohol craving over the past week. Participants also completed the Fagerstrom Test for Nicotine Dependence.

Lastly, participants completed a demographics questionnaire reporting, among other variables, age, sex, and level of education. The Alcohol Cues Task involves the delivery of oral alcohol or control tastes to elicit physiological reward responses and subjective urges to drink. During the task, each trial began with the presentation of a visual cue in which the words Alcohol Taste or Control Taste were displayed to participants. This was followed by a fixation cross, delivery of the taste, and another fixation cross. Alcohol and water tastes were delivered through Teflon tubing using a computer-controlled delivery system as described by Filbey and colleagues. Participants were instructed to press a button on a response box placed in their right hand upon swallowing. Alcohol tastes consisted of participants’ preferred alcoholic beverage; beer could not be administered due to incompatibility of the alcohol administration device with carbonated liquids. The presentation of visual stimuli and response collection were programmed using MATLAB and the Psychtoolbox on an Apple MacBook running Mac OSX, and visual stimuli were presented using MRI-compatible goggles. The Alcohol Cues Task was administered over the course of two runs with 50 trials/run. For the analysis of the cues task, all first-level analyses of imaging data were conducted within the context of the general linear model (GLM), modeling the combination of the cue and taste delivery periods convolved with a double-gamma hemodynamic response function (HRF) and accounting for temporal shifts in the HRF by including the temporal derivative. Alcohol and water taste cues were modeled as separate event types. The onset of each event was set at the cue period with a duration of 11 seconds. Six motion regressors representing translational and rotational head movement were also entered as regressors of no interest.
Data for each subject were registered to the MBW, followed by the MPRAGE, using affine linear transformations, and then normalized to the Montreal Neurological Institute template. Registration was further refined using FSL’s nonlinear registration tool. The Alcohol Taste > Water Taste contrast was specified in the first-level models. Higher-level analyses combined these contrast images within subjects and between subjects. Age, sex, cigarette smoking status, and positive urine THC were included as covariates. Additional analyses evaluated whether neural response to alcohol taste cues was predictive of drinking outcomes. Two models were run, evaluating percent heavy drinking days and the average number of drinks per week in the 4 weeks following the intervention or matched control. Both models controlled for age, sex, cigarette smoking status, positive urine THC, and baseline percent heavy drinking days or average drinks per week, depending on the drinking outcome model. Z-statistic images were thresholded with cluster-based corrections for multiple comparisons based on the theory of Gaussian random fields, with a cluster-forming threshold of Z > 2.3 and a corrected cluster-probability threshold of p < 0.05. This study examined the effect of a brief intervention on drinking outcomes, neural alcohol cue-reactivity, and the ability of neural alcohol cue-reactivity to predict drinking outcomes. Results did not show an effect of the brief intervention on alcohol use in this sample, and the intervention was not associated with differential neural alcohol cue-reactivity. Exploratory secondary analyses revealed inverse relationships between differential neural activity in the precuneus and medial frontal gyrus and alcohol-related outcomes, but these relationships held across conditions.
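The first-level GLM step described above, in which an 11-second event is convolved with a double-gamma HRF to form a regressor, can be sketched as follows. This is a minimal illustration assuming the common SPM/FSL default gamma parameters (peaks near 6 s and 16 s, undershoot ratio 1/6); the study's exact basis set and a TR of 2 s are assumptions, and the temporal-derivative regressor is omitted for brevity.

```python
import math

def double_gamma_hrf(t, a1=6.0, a2=16.0, ratio=1.0 / 6.0):
    # Canonical double-gamma HRF: a positive gamma response minus a
    # scaled gamma modeling the post-stimulus undershoot. Parameter
    # values are the common SPM/FSL defaults, used here as assumptions.
    def gamma_pdf(t, shape):
        if t <= 0:
            return 0.0
        return t ** (shape - 1) * math.exp(-t) / math.gamma(shape)
    return gamma_pdf(t, a1) - ratio * gamma_pdf(t, a2)

def event_regressor(onsets, duration, tr, n_vols):
    # Build a boxcar (1 during each event, e.g. the 11-s cue + taste
    # period) sampled at the TR, then convolve it with the HRF to get
    # one column of the first-level design matrix.
    times = [i * tr for i in range(n_vols)]
    boxcar = [1.0 if any(o <= t < o + duration for o in onsets) else 0.0
              for t in times]
    hrf = [double_gamma_hrf(t) for t in times]
    # Discrete convolution, truncated to the scan length.
    return [sum(boxcar[j] * hrf[i - j] for j in range(i + 1))
            for i in range(n_vols)]
```

Separate regressors built this way for alcohol and water taste events, plus the six motion regressors, would form the design matrix against which the Alcohol Taste > Water Taste contrast is estimated.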
The lack of a main effect of the intervention on either drinking outcomes or neural alcohol cue-reactivity is contrary to the study hypothesis, whereby individuals assigned to the brief intervention condition were expected to show greater reductions in alcohol use compared to the no-intervention control condition. In the present study, reductions in alcohol use were observed in both conditions, and it appears that simply participating in an alcohol research study at an academic medical center prompted notable behavioral changes. Reductions in drinking following study participation may be attributable to assessment reactivity, in which participants curb drinking after completing alcohol-related assessments and interviews. This phenomenon has been well documented across several assessment modalities, including the AUDIT and TLFB interviews, which were used in the present study. In addition, recent studies have highlighted the fact that single-session interventions, while efficacious in relatively large RCTs, have modest effect sizes.


Examining recognition and delayed recall was a critical first step to inform future diagnostic improvements

Marsland et al. (2015) did find that IL-6 and CRP were associated with worse memory and smaller hippocampal volumes in middle-aged adults; however, it was cortical grey matter volume, not the hippocampus, that mediated the relationship between inflammation and memory. Studies in adults with HIV have found that peripheral biomarkers of immune activation, but not the biomarkers examined in this study, were associated with frontal and temporal lobe regions. Interestingly, in post hoc analyses examining participants on ART who were virally suppressed, greater CRP was associated with a thinner parahippocampal gyrus. This finding may be in line with the Marsland et al. (2015) study. However, the current study had a small sample size, and several analyses were examined post hoc without accounting for multiple comparisons, so this finding should be interpreted cautiously. Integrating the aging and HIV literatures, it is unclear whether the association between peripheral inflammation, the medial temporal lobe, and episodic memory is consistently observed in mid-life. While the best method of determining the necessary sample size to detect a mediation effect is debated, this study’s modest sample size of 92 is still likely underpowered to detect a mediation effect, particularly given that large effect sizes were not expected. Therefore, the role of inflammation and its association with brain integrity and episodic memory in PWH should continue to be examined, particularly in larger samples with greater power to detect these associations. It will be particularly important to examine these relationships in PWH aged 65 and over, given that this is the age range in which the associations between inflammation, memory, and MCI/AD risk are more consistently found. One thing to note is that these peripheral inflammatory biomarkers were examined separately, as each biomarker may have a different relationship with memory and brain integrity.
There is currently no “gold-standard” way to combine inflammation biomarkers into a single composite. However, some researchers have examined inflammation composites.
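One common approach in the absence of a gold standard is to standardize each biomarker against the sample and average the z-scores per participant. The sketch below illustrates that idea under that assumption; the biomarker names are placeholders, and this is not the composite any cited study necessarily used.

```python
from statistics import mean, stdev

def z_scores(values):
    # Standardize one biomarker across the sample (sample SD).
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def inflammation_composite(biomarkers):
    # biomarkers: dict mapping biomarker name to a list of values,
    # one per participant, e.g. {"IL6": [...], "CRP": [...]}.
    # Each participant's composite is the mean of their standardized
    # values across biomarkers, so markers on different scales
    # contribute equally.
    standardized = [z_scores(vals) for vals in biomarkers.values()]
    return [mean(per_participant) for per_participant in zip(*standardized)]
```

A weakness of equal-weight averaging is that it assumes every biomarker is equally informative; factor-analytic weighting is an alternative when the sample is large enough.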

Therefore, future studies may want to examine a wider array of biomarkers and employ an inflammation composite, particularly given that the impact of inflammation on brain integrity and memory may be due to the compounding effects of multiple inflammatory biomarkers. Additionally, these biomarkers were only examined at one time point, so a better understanding of how changes in these inflammatory biomarkers over time are associated with brain integrity and cognition is also needed. Lastly, this study examined peripheral inflammation. Peripheral inflammation is easier to assess non-invasively, in comparison to the lumbar puncture needed to collect CSF. However, peripheral inflammation may not be as reflective of neuroinflammation as CSF biomarkers, although some studies have shown that plasma inflammation may be more strongly associated with cognition. Thus, future studies should ideally examine both plasma and CSF biomarkers to determine whether examining peripheral inflammation is sufficient. Ultimately, a better understanding of the role of inflammation and the most efficient way to measure it could help to inform interventions to lower inflammation in PWH, if future research indicates that lowering inflammation may be cognitively beneficial. In addition to the limitations discussed above, there are additional limitations that should be considered. First, the generalizability of the sample should be considered. As noted several times in the discussion, the age range may be too young to expect a significant number of participants to have started to accumulate AD pathology. Additionally, the sample was predominantly male, which is somewhat reflective of the current demographics of PWH in the United States. Nevertheless, there are known sex differences in HIV, AD, and inflammation that this project is underpowered to test but that should be further examined in future studies.
For example, women living with HIV are at greater risk of neurocognitive impairment, particularly in the domains of memory, speed of information processing, and motor function, potentially due to differences in psychosocial factors, comorbid conditions, and biological factors.

It is also known that women are at greater risk of AD. Additionally, participants with severe confounding comorbid conditions were excluded from this study, and this sample was characterized by relatively low current drug use and relatively high ART use. These factors are also known to impact cognitive and brain functioning; for example, cannabis use has been associated with better cognitive functioning and lower inflammation in PWH. As the HIV population continues to age, it will be important to understand if there are any associations between these sociodemographic variables and AD risk that are specific to PWH. Related to generalizability, one odd finding was the higher-than-expected number of participants with the APOE e2 allele. The percentage of participants with at least one e4 allele was somewhat comparable to the general population, with estimates ranging from 10% to 25% of people having at least one e4 allele. Additionally, it is known that Black/African American persons and persons of African ancestry have increased rates of the APOE e4 allele compared to non-Hispanic White people or those of European descent. Indeed, the CHARTER study has found an increased prevalence of the e4 allele in Black/African American participants as compared to non-Hispanic White participants. The APOE e2 allele is much less studied because it is rarer, but having an APOE e2 allele is associated with a lower-than-average risk of AD. In this study, the percentage of participants with at least one APOE e2 allele was higher than in the general population. Similar to the APOE e4 allele, the prevalence of APOE e2 is known to vary by ancestral continent and latitude. The APOE e2 allele frequency is 9.9% in Africa, which is higher than the APOE e2 allele frequency in Europe. Even accounting for these demographic differences, the prevalence of APOE e2 is high, and this overrepresentation of the APOE e2 allele may mean this group is, on average, at decreased risk of AD.
This increased prevalence could be due to selection bias.

Information on the APOE e2 allele in PWH is very limited, but more research is certainly needed to understand AD risk in diverse groups of PWH. One minor point is that four participants with the APOE e24 genotype were categorized as APOE e4-. The limited literature on this genotype does suggest a somewhat elevated risk of AD, but much less than that of those who are APOE e34 or APOE e44. Therefore, the APOE e24 participants were categorized as APOE e4- given the only slightly elevated risk. Other categorizations could be explored, although given the small number of APOE e24 participants this is unlikely to make a significant difference. In addition to the potentially limited generalizability due to the demographics and clinical characteristics of this sample, this study examined a relatively modest sample size. A sample size of 92 is not necessarily small compared to other imaging studies. However, as highlighted throughout this discussion, this modest sample size could still limit the power to detect associations. Future studies in this area would benefit from improving statistical power either by enrolling a larger overall sample and/or recruiting participants with memory impairment, particularly recognition impairment. This study is also limited in that it does not include an HIV-negative comparison group. Utilizing preexisting CHARTER data allowed for longitudinal analysis over 12 years and the ability to efficiently examine the neuroanatomical correlates of memory in middle-aged and older PWH. However, this study is therefore limited by the pre-defined CHARTER protocol and design. Specifically, CHARTER did not enroll HIV-negative comparison participants, which precludes examination of how the relationship between memory profiles and brain integrity differs by HIV serostatus.
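The genotype grouping described above can be sketched in Python. The grouping rule (e24 counted as e4-) is taken from the text; the genotype string codes and function name are illustrative assumptions:

```python
def apoe_group(genotype: str) -> str:
    """Dichotomize an APOE genotype into "e4+" or "e4-".

    Per the text, e24 carriers are grouped as e4- because their AD risk
    is only slightly elevated compared with e34 or e44 carriers.
    """
    if genotype == "e24":
        return "e4-"  # special case discussed above
    return "e4+" if "4" in genotype else "e4-"


# Under this rule, only e34 and e44 land in the e4+ group.
groups = {g: apoe_group(g) for g in ("e22", "e23", "e24", "e33", "e34", "e44")}
```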
While there is ample HIV-negative middle-aging literature to compare these results to, many of those studies are demographically and psychosocially different from this group. However, even with a good comparison group, it is difficult to discern the effect of HIV versus the neurotoxic effects of ART and its downstream consequences. Nevertheless, future studies would benefit from a demographically and psychosocially similar HIV-negative group to better understand whether the associations between memory and neuroimaging correlates are specific to PWH or are seen regardless of HIV status. In the current study, delayed recall and recognition were examined separately rather than dichotomously splitting participants into aMCI versus non-aMCI groups or comparing HAND versus aMCI groups as in Sundermann et al. Additionally, examining delayed recall continuously was advantageous because it preserves variability; more subtle differences observed in mid-life may not be captured by diagnostic cut-points. However, associations between biological markers associated with AD have been found in PWH using aMCI criteria.

Therefore, data could be reexamined using adapted aMCI criteria and HAND criteria to examine whether a more comprehensive approach to episodic memory is more sensitive to the medial temporal lobe than examining delayed recall and recognition separately. As described in the Methods section, differences in scanner by site were corrected by regressing scanner from the data. Accounting for scanner was necessary given that prior CHARTER studies have shown that pooling MRI data from multiple sites is feasible but that there are documented differences between the scanners. However, accounting for scanner is essentially accounting for study site, which is somewhat problematic given that study site has been shown to be associated with the risk of neurocognitive impairment in the CHARTER study. For example, Marquine et al. found a significant effect of study site, specifically when comparing New York and San Diego, on the risk of neurocognitive impairment that was not fully accounted for by race/ethnicity differences. It is thought that differences in the risk of neurocognitive impairment are likely due to psychosocial and environmental factors associated with geographic location. These psychosocial and environmental factors could also impact brain integrity, and thus accounting for scanner, while necessary, may mask real differences in brain integrity. Therefore, future studies may want to employ a different statistical method that could account for differences in scanner while not eliminating the effect of study site. Relatedly, future studies could explore alternative ways to analyze the imaging data. For example, a priori regions of interest were selected given the interest in focusing on brain structures associated with HAND and aMCI. However, the FreeSurfer processing approaches provide a broad array of additional regions that could also be explored.
Furthermore, additional data-driven analytic approaches exist, such as whole-brain voxel-based morphometry. This study took a hypothesis-driven approach, although examination of other regions of interest, such as subdivisions of the cingulate cortex, could be done in an exploratory fashion. Other imaging modalities, such as diffusion tensor imaging to examine white matter integrity, arterial spin labeling to examine cerebral blood flow, MRS to examine neurochemical alterations, and amyloid PET imaging, may also help to better understand episodic memory in PWH. Despite these limitations, this study has several clinical implications. This study showed that memory in these participants aged 45 to 68 was associated with prefrontal structures but not medial temporal lobe structures. This suggests that episodic memory in middle-aged PWH is more associated with frontally mediated etiologies such as HIV rather than etiologies associated with the medial temporal lobe such as AD. Second, recognition impairment was quite variable over time. Due to this variability, recognition may not serve as a good clinical marker to help distinguish aMCI from HAND. However, this group of participants is considerably younger than when late-onset AD presents; therefore, continued research is needed to examine whether recognition may be a useful clinical marker to differentiate aMCI and HAND in older age. This study suggests that in middle-aged PWH without severe confounding medical conditions and with high rates of ART use, there is not a greater than expected decline in delayed recall. However, more research is needed to more definitively determine if there is accelerated memory decline in middle-aged PWH. Lastly, while there was some indication that peripheral CRP may be associated with memory, overall, most biomarkers of inflammation were not associated with episodic memory, and the medial temporal lobe did not mediate a relationship between inflammation and episodic memory.
However, given the limitations described above, ongoing research on this topic is needed.


Volumetric subcortical regions of interest included the hippocampus as well as the basal ganglia

The majority of older PWH are currently between the ages of 50 and 65, with a much smaller percentage over the age of 65. However, aging trends in the HIV population are predicted to continue. Additionally, age-associated physical comorbidities appear 5-10 years earlier in PWH, and there is evidence of premature brain aging. Due to the neurotoxic effects of HIV and ART, as well as medical comorbidities and possible accelerated brain aging, PWH also may have less brain reserve to compensate for accumulating neurodegenerative pathology. Therefore, cognitive deficits indicative of aMCI could appear earlier in PWH compared to HIV-negative peers. Taken together, examining PWH in mid-life is advantageous as it could identify those with early signs of aMCI when interventions may be particularly efficacious. After excluding participants who did not meet inclusion/exclusion criteria as detailed below, the study included 92 PWH between the ages of 45 and 68 years old. All participants underwent at least one structural MRI scan between 2008 and 2010; comprehensive neuropsychological, neuromedical, and neuropsychiatric evaluation; and a blood draw. Most participants completed at least one follow-up neuropsychological, neuromedical, and neuropsychiatric study visit occurring in 6-month intervals. Participants were drawn from five participating sites: Johns Hopkins University, Mt. Sinai School of Medicine, University of California San Diego, University of Texas Medical Branch, and University of Washington. All CHARTER study procedures were approved by local Institutional Review Boards, and all participants provided written informed consent. UC San Diego IRB approval was sought for the current study, and it was determined by the IRB that this study was exempt.
The CHARTER study aimed to recruit PWH to reflect the geographic and sociodemographic diversity of PWH around university-affiliated treatment centers in the U.S.; thus, CHARTER inclusion criteria were minimal and did not exclude participants with comorbid conditions that may impact cognitive function.

To determine the extent to which non-HIV related comorbidities have contributed to neurocognitive impairment, developmental and medical histories of each participant were determined by Dr. R. K. Heaton and re-reviewed by an independent CHARTER clinician investigator. Participants with severe “confounding” comorbidities, as defined by Frascati criteria, were excluded from this project. Severe “confounding” comorbid conditions include comorbidities that could sufficiently explain neurocognitive deficits and thus preclude a HAND diagnosis. During clinician review, the time course of comorbidities in relation to HIV and cognitive decline, as well as the severity of comorbidities, were considered when making comorbidity classification determinations. Comorbid conditions that were reviewed and considered include history of neurodevelopmental disorders, cerebrovascular events, systemic medical comorbidities, non-HIV neurological conditions, and substance-related comorbidities. This comorbidity classification system has been shown to have excellent inter-rater reliability. The decision to exclude confounding comorbidities was further supported by a recent CHARTER paper showing that those with severe “confounding” comorbidities had worse brain integrity, but those with moderate comorbidities had brain abnormalities fairly equivalent to those with mild comorbidities. Additionally, CHARTER recruited a wide range of ages. To study the effect of aging with HIV, the age range for the current study was restricted to participants who were aged 45 or older at the time of the MRI scan. Additionally, one participant was excluded from the study given that their T1 structural MRI scan did not yield usable data. Tests of memory in the CHARTER study included the Hopkins Verbal Learning Test – Revised and the Brief Visuospatial Memory Test – Revised.

The HVLT-R and the BVMT-R include three learning trials; a long-delay free recall trial, in which participants are asked to recall the stimuli previously presented; and a recognition trial, in which participants are presented both target and non-target stimuli and asked whether each stimulus was presented in the learning trials. The delayed recall raw score is the total number of words correctly recalled during the long-delay free-recall trial. A recognition discrimination raw score was calculated by subtracting false positives from the total number of true positives. Note, this score is reflective of recognition discriminability, but it will be referred to simply as “recognition” throughout the text. Both the HVLT-R and BVMT-R have six alternate forms to attempt to correct for practice effects. Raw recognition scores were converted to Z-scores that account for demographic variables using normative data from the HNRP. Given that practice effect correction was not available for recognition and participants had a varying number of previous administrations, the number of prior neuropsychological evaluations was included as a covariate in statistical analyses examining recognition. Raw delayed recall scores were converted to T-scores that account for demographic variables and practice effects using normative data from the HNRP. HVLT-R and BVMT-R recognition Z-scores were averaged to create a recognition composite. HVLT-R and BVMT-R delayed recall T-scores were averaged to create a delayed recall composite. Test-retest reliability estimates for HVLT-R recognition range from r = 0.27 to 0.40 and for delayed recall from r = 0.36 to 0.39. HVLT-R recognition and delayed recall show adequate convergent validity with other tests of verbal memory. The BVMT-R recognition and delayed recall trials have been shown to have adequate convergent validity with other tests of visual memory.
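As a concrete illustration of the scoring just described, recognition discriminability is true positives minus false positives, and each composite is the average of the two demographically corrected test scores. This is a minimal sketch; the function names are ours:

```python
def recognition_discrimination(true_positives: int, false_positives: int) -> int:
    """Recognition discriminability raw score: true positives minus false positives."""
    return true_positives - false_positives


def composite(hvlt_score: float, bvmt_score: float) -> float:
    """Average HVLT-R and BVMT-R scores (Z- or T-scores) into a single composite."""
    return (hvlt_score + bvmt_score) / 2.0
```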
Recognition and delayed recall were initially examined continuously rather than dichotomously splitting participants into impaired versus unimpaired groups. Examining recognition and delayed recall continuously is advantageous because it increases variability and more subtle differences observed in mid-life may not be captured by diagnostic cut-points.

However, when examining linear regression analyses from aim 1, the recognition analyses did not meet all assumptions for linear regression. Therefore, recognition was dichotomized into an impaired recognition group and an unimpaired recognition group for all analyses. Processing speed and psychomotor T-scores were used to examine processing speed and psychomotor performance. Raw scores from individual tests were converted to T-scores that adjust for the effects of age, sex, education, race/ethnicity, and practice using center-specific normative data. The T-scores from all tests in the domain are then averaged to obtain a domain T-score. The Wide Range Achievement Test-III, which has been shown to be a measure of premorbid verbal IQ in PWH, was reported to characterize the sample. Participants completed a standardized CHARTER neuromedical evaluation at each study timepoint. HIV serostatus was determined by enzyme-linked immunosorbent assay with a confirmatory Western Blot. The following HIV disease characteristics were collected from most participants at each visit: 1) current CD4 count measured via flow cytometry; 2) nadir CD4 measured via a combination of self-report and medical records; 3) CDC HIV staging; 4) HIV RNA in plasma measured by ultra-sensitive PCR; 5) estimated duration of HIV disease collected via self-report; and 6) current ART regimen. Comorbid medical conditions (e.g., diabetes, hypertension, hyperlipidemia) were determined by self-report or by taking medication for the condition. Comorbid psychiatric and substance use conditions were determined with the Composite International Diagnostic Interview, which is consistent with the DSM-IV. Additional details on the standardized CHARTER neuromedical assessment can be found in Heaton et al. Additionally, CHARTER participants also have APOE genotype data. APOE genotype was dichotomized into APOE e4+ and APOE e4-.
FreeSurfer version 7.1.1 was used to obtain cortical thickness and subcortical volume measures for several regions of interest, with a similar approach as earlier CHARTER work. After FreeSurfer processing, all T1 scans were visually inspected; in addition to the one participant excluded from all analyses as described above, one participant’s hippocampi were grossly overestimated, and therefore their hippocampal data were excluded from analyses. Neocortical thickness regions of interest included medial temporal lobe structures, prefrontal areas, and primary motor cortical areas. Specific structures were analyzed separately. Left and right volumes or cortical thicknesses for these regions of interest were averaged. In post hoc analyses, if there were significant findings for the average region of interest, then the left and right regions were examined separately to examine laterality. Differences in scanner from site to site were corrected for by regressing scanner from the data, given that differences between scanners have been well-documented in prior CHARTER work.
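For a categorical scanner variable, “regressing scanner from the data” amounts to residualizing each measure on scanner membership, which for a single factor is equivalent to centering within scanner. A minimal pure-Python sketch, with hypothetical variable names:

```python
from collections import defaultdict


def residualize_on_scanner(values, scanners):
    """Remove scanner means from a measure, adding back the grand mean
    so the residualized values stay on the original scale."""
    by_scanner = defaultdict(list)
    for v, s in zip(values, scanners):
        by_scanner[s].append(v)
    scanner_mean = {s: sum(vs) / len(vs) for s, vs in by_scanner.items()}
    grand_mean = sum(values) / len(values)
    return [v - scanner_mean[s] + grand_mean for v, s in zip(values, scanners)]
```

After this step every scanner has the same mean, so between-scanner offsets cannot drive downstream associations; as the discussion notes, this also removes any true site-level differences.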

Differences in head size were accounted for by including estimated total intracranial vault volume as a covariate in volumetric analyses. Mean cortical thickness was included as a covariate in cortical thickness analyses. Additionally, age was included as a covariate to adjust for the normal effects of age on the brain. Five inflammation biomarkers were examined in this study. All inflammatory biomarkers have been found to be elevated in the context of HIV and aMCI. Plasma for biomarker assays was collected via routine venipuncture and EDTA vacuum tubes from all participants. All plasma biomarkers were measured using commercially available, multiplex, bead-based immunoassays according to manufacturer protocols; CRP was plated on a separate immunoassay given that it required a different dilution than the other plasma biomarkers. Biomarker precision was ensured by assaying specimens in duplicate and repeating measurements with coefficients of variation greater than 20% or outliers that were more than 4 standard deviations from the mean. Additionally, 10% of all assays were repeated to ensure batch consistency. The concentrations of these biomarkers typically have skewed distributions; therefore, the data were log-transformed prior to statistical analysis. Logistic regression was used for dichotomous recognition analyses. Multivariable linear regression was used for continuous outcomes in aims 1b, 1c, and part of 1d. Primary predictors were tested separately. Age and the imaging covariate were included as covariates in every model. The number of prior neuropsychological evaluations was included as a covariate in recognition models. Additional covariates (comorbidities, HIV disease characteristics, APOE status) were selected by evaluating the bivariate relationships between potential covariates and outcomes. If a potential covariate was significantly associated with an outcome at p < 0.10, it was then entered as a covariate in the model.
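The covariate selection just described (a bivariate screen at p < 0.10, with retention in the full model at the same threshold) can be sketched as follows; the covariate names and p-values are hypothetical:

```python
SCREEN_P = 0.10  # threshold stated in the text


def screen_covariates(bivariate_p: dict) -> list:
    """Step 1: keep covariates bivariately associated with the outcome."""
    return sorted(name for name, p in bivariate_p.items() if p < SCREEN_P)


def retain_covariates(candidates: list, full_model_p: dict) -> list:
    """Step 2: retain only candidates still associated in the full model."""
    return [name for name in candidates if full_model_p[name] < SCREEN_P]
```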
Given the number of possible additional covariates, these additional covariates were only retained in the full model if the covariate remained associated with the outcome at p < 0.10. Power analysis was conducted using G*Power. These analyses were powered to detect medium effect sizes, with a two-tailed α = 0.05 and up to 5 covariates. Current CDC guidelines recommend immediately initiating ART and maintaining an undetectable viral load. Despite the fact that only 80% of PWH are engaged in care and 57% of PWH in the United States are virally undetectable, there is a trend towards examining PWH who are virally suppressed and on ART, particularly in studies examining biological processes such as inflammation and neuroimaging. Therefore, post hoc analyses examining delayed recall, processing speed, and psychomotor skills excluded participants who were not ideally treated for HIV disease or who had a detectable viral load. Additionally, given the significant effects of methamphetamine on the brain, participants who had a current methamphetamine use disorder were also excluded in post hoc analyses. Dichotomous recognition models were not re-examined given that, with these exclusions, only 7 participants were impaired on the recognition composite. This aim utilized multi-level modeling to examine recognition and delayed recall across follow-up visits. Outcomes were examined separately. The “lme4” R package (version 1.1-30) was used to conduct mixed-effects regressions. Mixed-effects logistic regression models were used to examine dichotomous recognition as the outcome. Models examining continuous delayed recall used linear mixed-effects models. Analyses included a random intercept and a random effect for years since baseline. A cross-level interaction was used to test whether baseline medial temporal lobe structure is associated with longitudinal recognition impairment or decline in delayed recall.
Between-persons covariates included: age at baseline, the imaging covariate, and covariates identified in aim 1. Power analysis was conducted using RMASS2, and observed attrition was accounted for in these estimates. These analyses were found to be powered to detect small-to-medium effect sizes, with a two-tailed α = 0.05. Multi-level modeling was selected because it uses all available data and gives heavier weight to participants with more waves of data; thus, this methodology can account for participants who may have missed a follow-up visit and for samples with differing numbers of follow-up assessments.
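In multilevel notation, the model described above (a random intercept, a random slope for years since baseline, and a cross-level interaction testing whether baseline medial temporal lobe structure moderates change) can be written as follows; the symbols are our own notation, not taken from the source:

```latex
% Level 1 (within person): memory for person i at time t
Y_{ti} = b_{0i} + b_{1i}\,\mathrm{Years}_{ti} + e_{ti}

% Level 2 (between persons): baseline MTL structure predicts intercept and slope
b_{0i} = \gamma_{00} + \gamma_{01}\,\mathrm{MTL}_{i} + u_{0i}
b_{1i} = \gamma_{10} + \gamma_{11}\,\mathrm{MTL}_{i} + u_{1i}
```

The cross-level interaction is the $\gamma_{11}$ term: it tests whether baseline MTL structure is associated with the rate of change in recognition or delayed recall over the follow-up period.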


Previous studies have focused primarily on alcohol users but have not excluded participants for nicotine use

This suggests that brain areas implicated in processes such as reward and cognition show the most consistent gray matter atrophy in alcohol dependent individuals, but it is unclear whether overall amount of alcohol consumption or aspects of dependence severity explain these findings. Furthermore, some of the neuroimaging studies focusing on alcohol users have not mentioned whether the alcohol users also used nicotine, did not examine the effects of nicotine use on brain structure, did not control for nicotine use in their analyses, assessed nicotine use with a dichotomous questionnaire, or simply mentioned the number of smokers in the study. This makes it difficult to ascertain whether the observed neural effects were attributable to alcohol and/or nicotine use. Similar to studies of alcohol use effects on brain morphometry, several MR imaging studies have been conducted to specifically examine the effects of nicotine use on brain structure. As with studies of alcohol users, studies of cigarette smokers have attempted to quantify and incorporate a lifetime use variable, such as pack-year smoking history, which has been found to negatively correlate with PFC gray matter densities as well as gray matter volume in the middle frontal gyrus, temporal gyrus, and the cerebellum. Interestingly, Brody et al. found no significant association between pack-year smoking history and regions of interest determined as having significant between-group differences, such as the left dorsolateral PFC, ventrolateral PFC, and left dorsal ACC. Given these conflicting findings, it is uncertain whether quantity variables, such as pack-year smoking history, account for many of the gray matter volume reductions observed in nicotine dependence. Dissimilar to studies of alcohol dependent individuals, some studies of nicotine dependent individuals have examined symptoms of dependence severity in relation to brain morphometry.
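Pack-year smoking history, the lifetime quantity variable referenced above, is conventionally defined as packs smoked per day multiplied by years of smoking (a standard definition, not specific to the studies cited here):

```python
def pack_years(cigarettes_per_day: float, years_smoked: float,
               cigarettes_per_pack: int = 20) -> float:
    """Pack-years = (cigarettes per day / cigarettes per pack) * years smoked."""
    return (cigarettes_per_day / cigarettes_per_pack) * years_smoked
```

Note that very different smoking histories can yield identical pack-year values (e.g., a pack a day for 10 years versus half a pack a day for 20 years), which is one reason quantity variables may dissociate from dependence severity measures like the FTND.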

For example, the Fagerström Test for Nicotine Dependence, which was not associated with pack-year smoking history, was not correlated with PFC or insular gray matter density. The lack of a significant correlation between FTND scores and pack-year smoking history suggests that quantity of use and dependence severity symptoms may be unrelated in nicotine dependence, and thus may have distinct relationships with brain structure. Overall, gray matter degradation has been observed in the thalamus, medial frontal cortex, ACC, cerebellum, and nucleus accumbens in nicotine dependent individuals. Given these widespread results, a meta-analysis was conducted, which found that only the left ACC showed significant gray matter reductions in nicotine dependent individuals compared to healthy controls. While studying primarily alcohol- or nicotine-using populations carries unique benefits, specific investigation is needed into heavy drinking smokers, as past studies have shown compounded neurocognitive effects, as well as pronounced gray matter volume reductions in heavy drinking smokers when compared to nonsmoking light drinkers. Chronic cigarette smoking has been found to have negative consequences on neurocognition during early abstinence from alcohol; in one particular study, after 8 months of abstinence, actively smoking alcohol-dependent individuals performed worse on several neurocognitive measures, such as working memory and processing speed, when compared to never-smoking alcohol-dependent individuals. Additionally, formerly smoking alcohol users were found to perform more poorly than never-smoking alcohol users at this time point. These findings not only illustrate the contribution of smoking status to neurocognitive measures but establish the clinical relevance of nicotine use in heavy drinkers.
This relevance, paired with the compounded neurocognitive and morphometric effects, further merits investigation into this unique sub-population of substance users.

The present work aimed to ascertain the effects of alcohol and nicotine dependence severity on gray matter density in a sample of 39 non-treatment-seeking heavy drinking smokers using standard voxel-based morphometry. While some imaging studies have previously investigated the relationship of FTND scores with brain structure, to our knowledge, no imaging study to date has examined how alcohol dependence severity relates to gray matter density in heavy drinking smokers. Thus, the goal of this study was to examine whether alcohol or nicotine dependence severity was correlated with gray matter density in heavy drinking smokers, while controlling for age, gender, and total intracranial volume. By examining dependence severity scores in addition to quantity-of-use variables, we may be able to capture how dependence is related to structural changes in the brain in a way that is not captured by variables that focus singularly on quantity of use. Based on previous findings, we hypothesized that gray matter density would be negatively related to quantity of both alcohol and nicotine use in regions such as the middle frontal gyrus. We also hypothesized that dependence severity scores would uniquely relate to gray matter atrophy in several regions previously identified across the meta-analyses of voxel-based morphometry studies, such as the ACC, dorsal striatum, and insula. The subjects for the present study are a subset of participants from a medication development study of varenicline, naltrexone, and their combination in a sample of heavy drinking smokers. Subjects participated in the medication component of the study, details of which have been described in a previous publication, and a sub-sample was invited to complete a neuroimaging session.

Participants were recruited from the greater Los Angeles area through online and print advertisements with the following inclusion criteria: 1) between 21 and 55 years of age; 2) reported smoking at least 7 cigarettes per day; and 3) endorsed heavy drinking per the National Institute on Alcohol Abuse and Alcoholism guidelines: for men, >14 drinks per week or ≥5 drinks per occasion at least once per month over the last 12 months; for women, >7 drinks per week or ≥4 drinks per occasion at least once per month over the last 12 months. Participants were excluded from the study based on the following criteria: 1) had a period of smoking abstinence greater than 3 months within the past year; 2) reported use of illicit substances within the last 60 days, confirmed via positive urine toxicology screen at the assessment visit; 3) endorsed a lifetime history of psychotic disorders, bipolar disorders, or major depression with suicidal ideation; 4) endorsed moderate or severe depression symptoms as measured by a score of 20 or higher on the Beck Depression Inventory-II; 5) reported current use of psychotropic medications; 6) reported any MRI contraindications, such as any metal fragment in the body or pregnancy; and 7) reported MRI constraints, such as left-handedness or color blindness. As no Structured Clinical Interview for the Diagnostic and Statistical Manual, 4th edition, or DSM 5th edition, Axis I Disorders was administered, drinking status for participants was determined solely via NIAAA heavy drinking guidelines. After a telephone screening to determine eligibility, participants came to the laboratory for a screening visit, during which informed, written consent was obtained. A urine cotinine test along with carbon monoxide levels verified self-reported smoking patterns, and a breath alcohol concentration of 0.00 was required at the beginning of each visit.
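The NIAAA heavy-drinking thresholds quoted above can be expressed directly as a predicate; the function and argument names are ours:

```python
def meets_niaaa_heavy_drinking(sex: str, drinks_per_week: float,
                               max_drinks_per_occasion: int,
                               binge_at_least_monthly: bool) -> bool:
    """Heavy drinking per the NIAAA guidelines quoted in the text:
    men: >14 drinks/week, or >=5 drinks per occasion at least monthly;
    women: >7 drinks/week, or >=4 drinks per occasion at least monthly."""
    weekly_cut, binge_cut = (14, 5) if sex == "male" else (7, 4)
    return (drinks_per_week > weekly_cut or
            (max_drinks_per_occasion >= binge_cut and binge_at_least_monthly))
```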
Eligible participants then came in for a physical examination and, if eligible afterwards, began taking medication for nine days, as previously described elsewhere. Participants received varenicline alone, naltrexone alone, their combination, or matched placebo. After the medication period, participants who were eligible for the MRI session were selected at random, given an additional three days of medication, and scanned within those three days. To our knowledge, no studies to date have tested the effects of varenicline and naltrexone on structural MRI measures; however, to ensure that there were no significant gray matter differences between the medication groups, we conducted a whole-brain one-way between-subjects ANOVA. A total of 40 subjects participated in the neuroimaging study. The Institutional Review Board of the University of California, Los Angeles, approved all procedures for the study. Participants were administered the Alcohol Dependence Scale (ADS), the FTND, and the 30-day Timeline Follow-back (TLFB). The ADS is a 25-item self-report measure that identifies elements of alcohol dependence severity over the past 12 months, such as withdrawal symptoms and impaired control over alcohol use, on a scored scale with a range of zero to 47. The FTND is a six-item self-report measure that captures features of nicotine dependence severity on a scored scale of zero to 10, and questions on this measure are not confined to a specific time frame of substance use.

The TLFB assessed the daily amount of alcoholic drinks and cigarettes participants consumed in the 30 days before the scan, from which mean drinks/drinking day and cigarettes/day were calculated. All images were obtained with a 3.0 Tesla Siemens Trio MRI scanner at the Center for Cognitive Neuroscience at UCLA. Because we expected no structural differences unrelated to gray and white matter volumes in the sample, and in keeping with past studies employing similar methodologies, we chose to follow standard VBM protocols and spatially normalize the T1-weighted raw images to the same stereotactic space first. To do this, each image was registered to a standard template in Montreal Neurological Institute space using Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL). After spatial normalization, the resulting DARTEL-warped T1-weighted images were segmented into three tissue classifications. The segmented images were then modulated, a process by which the images are multiplied by the Jacobian determinants produced for each image during spatial normalization. The advantage of modulation is that it corrects for individual brain size and the brain matter expansion or contraction that occurs during normalization. The sample homogeneity of the resulting images was checked using a mean covariance boxplot, which assesses the covariance among the sample of images across participants. Higher covariance values are preferred, indicating the image is more similar to other volumes in the sample, while a lower covariance value signals a potential outlier. The mean covariance value for the current sample was 0.74. One participant had a covariance value greater than 2 standard deviations from the mean. Upon inspection, the image appeared to have failed during segmentation due to motion artifact and was excluded from further analyses. This resulted in a total of 39 subjects.
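The modulation step described above reduces to an element-wise product: each voxel of the warped tissue map is scaled by the local Jacobian determinant of the deformation. A minimal sketch of that idea (all values invented, not study data):

```python
# Toy illustration of VBM "modulation": after spatial normalization,
# each voxel of the segmented tissue map is multiplied by the Jacobian
# determinant of the deformation at that voxel, so tissue amounts are
# preserved despite local expansion or contraction. Values are invented.

gray_matter_prob = [0.9, 0.8, 0.5, 0.2]   # hypothetical segmented voxel values
jacobian_det     = [1.2, 0.8, 1.0, 0.5]   # hypothetical local volume change

# A determinant > 1 means the voxel was expanded during warping,
# < 1 means it was compressed; modulation undoes the bias.
modulated = [p * j for p, j in zip(gray_matter_prob, jacobian_det)]
```

Where the determinant is 1.0 (no local volume change), the voxel value is unchanged.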
Finally, modulated images were smoothed using an 8-mm full width at half maximum Gaussian kernel. The smoothed, modulated images were used for subsequent analyses. Two separate multiple regression models were built, with the first analyzing the relationship between symptoms of dependence severity and gray matter density. This model included ADS scores and FTND scores as predictor variables. The second model examined the relationship between quantity of substance use and gray matter density. The variables DPDD and CPD were chosen for this model and entered as predictor variables. Age, gender, and ICV were entered as covariates in both models. The significance level was set at p < 0.001, uncorrected, with an absolute threshold mask value of 0.1, and a spatial extent threshold of 78 voxels was empirically determined per standard VBM protocol and used for analyses. Additionally, post-hoc achieved power analyses were conducted using the effect sizes calculated with Cohen’s f². Previous research has indicated that gray matter tissue can regenerate within 14 days of alcohol abstinence in alcohol-dependent patients and that gray matter regeneration is most profound within the first week to month of abstinence. Given these findings, we examined whether days to last drinking day before the imaging session correlated with gray matter density at the whole-brain level. Days to last drinking day was computed for each participant based on the TLFB information collected at the time of image acquisition. The analysis included days to last drinking day as a predictor variable and age, gender, ICV, and ADS scores as covariates of interest. Furthermore, to understand whether any of the effects were related to cannabis use within the current sample, we examined the relationship between frequency of cannabis use and drinking and nicotine variables using non-parametric Spearman’s correlations.
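The Cohen's f² effect size used for the post-hoc power analyses is derived from a model's R² as f² = R² / (1 − R²). A minimal sketch of that computation (the outcome and predictions below are made up for illustration, not study data):

```python
# Cohen's f^2 effect size for a regression model:
#   f^2 = R^2 / (1 - R^2)
# Illustrative sketch only; the toy values below are not study data.

def r_squared(y, y_hat):
    """Proportion of variance in y explained by the model predictions."""
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    return 1 - ss_res / ss_tot

def cohens_f2(r2):
    """Cohen's f^2; by convention 0.02/0.15/0.35 = small/medium/large."""
    return r2 / (1 - r2)

# Hypothetical outcome values and fitted model predictions:
y     = [2.0, 3.1, 4.2, 5.1, 5.9]
y_hat = [2.1, 3.0, 4.0, 5.2, 6.0]

r2 = r_squared(y, y_hat)
f2 = cohens_f2(r2)
```

The same f² then feeds a standard power table or calculator along with the model's degrees of freedom.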
Cannabis use was assessed using a single-item categorical question asking, “On average, how often do you smoke marijuana?”

The purpose of the present study was to examine the relationship of quantity of alcohol/nicotine use and alcohol/nicotine dependence severity with gray matter density in heavy-drinking smokers. Similarly, some prior studies that examined nicotine users did not establish exclusionary criteria based on alcohol use.

Previous studies have focused primarily on alcohol users but have not excluded participants for nicotine use.

The peer group of patients in SUD treatment may still be using drugs and engaging in risk behaviors

Separate multinomial logistic regression models were performed for each sexual risk behavior: unprotected sexual intercourse with regular partners, unprotected sexual intercourse with casual partners, number of sex partners, having at least one high-risk sex partner, and engaging in sex under the influence. We calculated changes in both ASI alcohol and drug composite scores and entered them in each regression model simultaneously as independent variables. Age, ethnicity, treatment modality, intervention condition, and number of intervention sessions attended were entered into the regression models as covariates.

In the current study, we found high rates of ongoing sexual risk behaviors, with more than half of the sample reporting unprotected sex or having sex under the influence of drugs and/or alcohol, and a considerable minority reporting multiple sex partners and high-risk sex partners. Most sexual risk behaviors decreased in frequency during the course of the study. We found relationships between decreased sexual risk behaviors and decreased drug/alcohol use severity, which were independent of treatment modality and intervention assignment. Overall, drug and alcohol use severity declined and most sexual transmission risk behaviors declined during the six-month period. Rates of unprotected sex with both regular and casual partners decreased during the six-month time frame of the study. However, these changes were not associated with changes in drug or alcohol severity. The HIV risk reduction interventions received may have driven the decrease in unprotected sex. As SUD treatment outcomes were not associated with changes in unprotected sex, this behavior likely requires specific intervention. Available interventions to decrease rates of unprotected sex have proven effective. REMAS significantly increased the percentage of protected sexual occasions compared to a control condition.
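The modeling strategy above pairs substance-use change scores with covariates to predict risk behavior. As a simplified stand-in for those multinomial models, the sketch below fits a binary logistic regression by gradient ascent; the change scores, outcome coding, and data are all hypothetical:

```python
import math

# Simplified sketch, not the study's actual models: a binary logistic
# regression (one hypothetical change-score predictor, gradient ascent)
# standing in for the multinomial models described above. The outcome
# codes whether a given risk behavior was still reported at follow-up.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(x, y, lr=0.1, epochs=2000):
    """Return (intercept, slope) maximizing the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            err = yi - sigmoid(b0 + b1 * xi)  # gradient of the log-likelihood
            g0 += err
            g1 += err * xi
        b0 += lr * g0 / len(x)
        b1 += lr * g1 / len(x)
    return b0, b1

# Hypothetical data: a positive change score (worsening severity) tends
# to accompany continued risk behavior (y = 1).
change_score = [-0.4, -0.3, -0.2, -0.1, 0.1, 0.2, 0.3, 0.4]
behavior     = [0,    0,    0,    1,    0,   1,   1,   1]

b0, b1 = fit_logistic(change_score, behavior)
odds_ratio_per_unit = math.exp(b1)
```

In the study's actual models the outcome is multinomial and covariates enter simultaneously; this sketch only shows the core change-score-as-predictor idea.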

Other system-level interventions, such as having free condoms available, can decrease rates of unprotected sex. Our findings demonstrate that the relationship between alcohol use severity and multiple sex partners can change over time, consistent with previous findings. Here, alcohol treatment might serve as a method of decreasing the behavior, resulting in a decrease in the incidence of HIV. Increases in drug use severity coincided with the initiation of a sexual relationship with a high-risk partner, while the maintenance of a relationship with a high-risk partner was associated with decreased drug use severity. This reduction is possibly the result of the interventions administered to participants during the course of the study. Meanwhile, the initiation of a relationship with a high-risk sex partner may suggest a worsening of the individual’s substance use treatment outcomes. Also, discontinuing sex with a high-risk partner was not associated with any change in drug use severity. Sex under the influence of alcohol and/or drugs decreased over the measured six months of drug treatment. Further, decreases in sex under the influence were associated with decreases in drug use severity. Drinking before a sexual encounter has been linked to sex with a high-risk partner. This has been theorized to be associated with the disinhibition of alcohol, which leads users to perceive less risk and have more positive outcome expectations, and some evidence indicates this may be the case in stimulant users as well. The percentage of individuals who reported at least one high-risk sex partner, however, remained unchanged during the course of the study. It may be difficult for participants in substance use treatment, whose peers are also substance users, to find low-risk partners. Addressing one risk factor may decrease other risk factors as well.
Sex under the influence is frequent and associated with other sex-risk behaviors, such as sex with casual partners and unprotected sex.

Having a high-risk partner has previously been associated with increased condom use. As such, it is possible that decreasing drug use severity can impact several sex-risk behaviors through the vector of sex under the influence. The study’s results have implications for SUD treatment programs. Specifically, counselors in methadone maintenance treatment and outpatient drug-free programs should discuss the risk status of their clients’ sexual partners. This can be coupled with a discussion about condom use, including hands-on demonstrations of how to apply condoms. Also, incorporating cognitive-behavioral therapy and motivational interviewing can decrease sexual risk behaviors and substance use, while also helping to increase knowledge and skills that can lead to safer sex. Interventions should address the isolation and vulnerability that can lead individuals into risky sexual encounters. This study has limitations that suggest the need for further research. First, all participants enrolled in this study to participate in HIV risk reduction interventions, suggesting greater readiness to change than expected among all SUD treatment participants. As a result, findings may not be generalizable to SUD treatment patients less concerned about HIV transmission. It is important to note that participants in this study were not tested for HIV, nor was information regarding participants’ HIV status gathered in the current study. Individuals may differ in their risk behavior based on their HIV status, as positive individuals may change their previous behaviors. Sexual dysfunction can result from methadone and chronic alcohol use, which could have influenced our results. We also do not have information on why certain participants are no longer in treatment, which could confound our data. Analyzing the data categorically obscures the actual magnitudes of change.
Also, we calculated change scores for substance use severity, a practice that remains controversial among researchers.

Our study highlights the importance of drug and alcohol treatment in the reduction of sex-risk behaviors. As the sexual transmission of HIV continues to increase, more understanding of this phenomenon is needed. Drug and alcohol use severity are associated with these risk behaviors, and treating the substance use in drug treatment may reduce HIV risk behaviors. However, some risk behaviors remain unchanged in spite of changes in drug and alcohol use severity, and require specific interventions. Further research is needed to pinpoint the effect of drug treatment independent of other factors, including having sex for money or drugs. Also, participants of different treatment modalities may have different risk patterns, and may respond differently to interventions. Drug treatment provides the opportunity for specialized HIV risk behavior interventions, which should be expanded upon wherever possible.

Despite viral suppression on combination antiretroviral therapy, people with HIV (PWH) suffer from depressed mood and chronic inflammation. Depression is the most common psychiatric comorbidity in HIV. Depressed PWH show poorer medication adherence, lower rates of viral suppression, greater polypharmacy, poorer quality of life, and shorter survival. A subtype of treatment-resistant depression in the general population is associated with chronic inflammation. The potential clinical significance of this is high, since the anti-inflammatory agent tocilizumab and other drugs such as the antibiotic minocycline, the interleukin-17 receptor antibody brodalumab, and the monoclonal antibody sirukumab have been shown to be effective treatments for this depression subtype, but these have not been studied in the context of HIV. Inflammation is associated with greater symptom severity, differential response to treatment, and greater odds of hospitalization in patients with major depressive disorder.
Chronic inflammation persists in virally suppressed PWH and predicts morbidity and mortality. There is also an extensive literature showing that depression correlates with markers of inflammation and immune activation in PWH, but most of these studies were performed in individuals who were not virally suppressed. We hypothesized that inflammation in virally suppressed PWH would be associated with poorer mood.

We found that higher concentrations of a specific panel of blood markers of inflammation were seen in PWH with worse depression. Additionally, PWH with depressed mood had markedly reduced quality of life and were more dependent in IADLs. Higher inflammation was also associated with worse scores on numerous life quality indicators. Chronic HIV-associated inflammation and immune dysfunction have emerged as key factors that are strongly linked to non-AIDS complications. Our findings confirm those of previous investigations, and extend them by evaluating a more comprehensive panel of biomarkers and a more extensive evaluation of the impact on daily functioning and quality of life. If the link between inflammation and depression is causal, our results suggest that treatment with selected anti-inflammatory medications might benefit mood and life quality in some PWH. Depressed mood was specifically associated with a factor loading on D-dimer, IL-6, and CRP. Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. Thus, factor analysis is a method for dimensionality reduction and can help control false discovery.
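The dimensionality reduction described above groups intercorrelated biomarkers onto shared factors. A rough sketch of the underlying idea, extracting the dominant component of a correlation matrix by power iteration (the matrix and marker labels are invented toy values, not study data, and this is a simplification of full factor analysis):

```python
# Toy illustration of reducing correlated biomarkers to one dimension:
# build a correlation matrix and extract its dominant eigenvector
# ("loadings") by power iteration. Values are invented, not study data.

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def normalize(v):
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def dominant_eigvec(m, iters=200):
    """Power iteration: repeated multiply-and-normalize converges to the
    eigenvector with the largest eigenvalue for this symmetric matrix."""
    v = normalize([1.0] * len(m))
    for _ in range(iters):
        v = normalize(mat_vec(m, v))
    return v

# Hypothetical correlations: the first three markers (think IL-6, CRP,
# D-dimer) strongly intercorrelated, a fourth marker only weakly related.
corr = [
    [1.00, 0.70, 0.60, 0.10],
    [0.70, 1.00, 0.65, 0.05],
    [0.60, 0.65, 1.00, 0.08],
    [0.10, 0.05, 0.08, 1.00],
]

loadings = dominant_eigvec(corr)
```

The strongly intercorrelated markers end up with large loadings on the dominant component while the weakly related marker does not, mirroring how a coherent "inflammation" factor emerges.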

Additionally, however, it is important to check the identified factors against known physiological relationships. Several prior reports link these specific markers with each other, particularly in the context of HIV, suggesting that they represent a physiologically congruent aspect of the inflammatory cascade. For example, the proinflammatory cytokine IL-6 stimulates the production of C-reactive protein in the liver. In one study, higher pre-ART CRP, D-dimer, and IL-6 levels were associated with new AIDS events or death. Also, in HIV patients, IL-6, hsCRP, and D-dimer were intercorrelated, and each was associated with an increased risk of cardiovascular disease independent of other CVD risk factors. In another report, baseline IL-6 and D-dimer were strong predictors of coronary risk in non-HIV-infected individuals and were associated with each other and with CRP. Additional support for the coherence of Factor 2 is that its components in this dataset demonstrate robust and statistically significant intercorrelations, while their correlations with other biomarkers are typically weaker and not statistically significant. Previous studies have demonstrated the relevance to depression of the specific biomarkers identified in Factor 2. BDI-II scores at baseline and follow-up were highly correlated. Together with the finding that higher inflammatory markers at 12-year follow-up were also associated with depressed mood at baseline, these findings suggest that depressed mood is an enduring phenotype. A novel finding in this study was that although women had worse depressive symptoms, the association with inflammatory markers was seen only in men. While perhaps reflecting limited power due to the small number of women, this suggests that the underlying pathophysiology of depression is different in men and women with HIV. Of note, women tended to have higher markers of inflammation than men, consistent with a previous report.
We found worse depression in non-Hispanic whites than in other ethnicities. This is consonant with higher rates of depressive disorders in whites in previous studies. The relationship between inflammation and depressed mood remained after accounting for ethnicity. Inflammation was not related to CD4 or viral load in this cohort of mostly virally suppressed PWH. Unlike in other studies, elevations in inflammatory biomarkers were not associated with substance use disorders. Higher inflammatory biomarkers were also associated with greater disability, motor impairment, poor physical health, poorer general health, physical function, role function, social function, and pain function, and worse health distress, emphasizing the importance of this phenotype. Inferences are limited by several factors in this study. There were relatively few women; however, inspection of the scatterplot revealed no suggestion of a trend for an association between inflammation and depression in women. The panel of soluble biomarkers studied was limited, and important associations may have been missed. We did not characterize cellular markers of inflammation in these participants. The absence of a control group precludes consideration of whether effects of inflammation on depression are mediated or otherwise influenced by HIV infection itself. As noted previously, anti-inflammatory medications have shown promise for treatment-resistant depression. Future studies might evaluate the effectiveness of anti-inflammatory medications for the treatment of depression in PWH selected for the presence of inflammation and treatment resistance.

North America comprises the world’s largest drug market and evidences the highest drug-related mortality rate in the world. Within the United States, the problem of prescription drug misuse, and opioid misuse in particular, has reached epidemic proportions.
Pain relievers were the most commonly misused drugs in the psychotherapeutics category from 2002 to 2011, and from 2004 to 2011 the number of medical emergencies involving opioids increased by 183%. Abuse of prescription drugs is a significant public health problem, associated with high costs both to the health care system and to the individuals who use them. From an economic perspective, it is estimated that opioid misusers’ medical care costs are eight times greater than those of non-misusers.


Any letter-writer signatures and titles were deleted prior to analysis to avoid introducing bias

Our longer-term goals will be to see the effects of this system on the promotion process within the department, with an expectation that more junior faculty will become eligible for advancement. These effects will be evaluated by tracking the progress and content of junior faculty teaching portfolios compared to previous years and time to successful promotion. With a bottom-heavy young faculty group, our expectation is that this system will better prepare people for promotion, as they can track their activities and determine where they need to place more effort to enhance their portfolio. Finally, this system will be used to improve the mentorship infrastructure within the department. Assigned faculty mentors will use the ARVU dashboard to mentor junior faculty on their progress toward promotion. This dashboard will provide another data point for mentors to advise junior faculty where they need to focus their efforts in order to progress professionally.

Gender disparities exist in academic medicine. Women in academic medicine are less likely to achieve the rank of professor or hold senior leadership positions compared to men, even after adjusting for age, experience, specialty, and research productivity. Previous studies in other professional fields have shown that there are differences in the language used to describe men and women in letters of recommendation. Additional studies have shown that evaluations of women medical students are more likely to describe women as “caring,” “compassionate,” and “empathetic,” in addition to “bright” and “organized,” than male medical students.

In addition, women are more often portrayed as teachers and students, and less often portrayed as researchers or professionals, compared to men. Within emergency medicine (EM), the letter of recommendation, including both standardized letters and traditional letters, has been cited as one of the top four most important factors in selecting applicants to residency, along with EM rotation grade, interview, and clinical grades. More specifically, the letter of recommendation has been cited as the most important factor in selecting applicants to interview. Historically, in EM, letters of recommendation were written without guidelines or restrictions. In 1996, the Council of Residency Directors in Emergency Medicine implemented the standardized letter of recommendation (SLOR), which was renamed the standardized letter of evaluation (SLOE) in 2013. The SLOE contains both a quantitative evaluation of an applicant and a narrative portion of 250 words or less. The SLOE narrative provides a focused assessment of the noncognitive attributes of potential residency candidates. The standardized format and universal instructions make the SLOE a good text sample to study for variation in language by gender. Additionally, while there are several studies analyzing traditional letters of recommendation for language variation between genders, there is a gap in the current literature in analyzing standardized letters of recommendation. Previously, our research team published a study in Academic Emergency Medicine Education and Training that showed minimal differences in language use between genders in evaluating 237 SLOEs from applicants invited to interview at a single academic EM residency for the 2015-2016 application cycle. The small dataset and potential for a homogeneous sample prompted the current investigation, with a goal of confirming or refuting the original results with a larger dataset.

The choice to include all applicants was made with a goal of potentially increasing the variability in the language used within the SLOE. The aim of this study was to compare differences in language within specific word categories used to describe men and women applicants in the SLOE narrative for all applicants to a single academic EM residency program for the 2016-2017 application cycle. We secondarily sought to determine whether there was an association between word-category differences and invitation to interview, regardless of gender, in order to better contextualize the possible importance of wording differences.

SLOE narratives for all applicants to the residency for the 2016-2017 application cycle were downloaded from ERAS by the program coordinators and converted to Microsoft Word format. We included the narrative portion of the SLOE in analysis. The narrative is limited to 250 words and asks the writer to “Please concisely summarize this applicant’s candidacy including… Areas that will require attention, Any low rankings from the SLOE, and Any relevant non-cognitive attributes such as leadership, compassion, positive attitude, professionalism, maturity, self-motivation, likelihood to go above and beyond, altruism, recognition of limits, conscientiousness, etc.” If applicants submitted more than one SLOE, the SLOE from the first chronological clinical EM rotation was included in analysis. We analyzed first-rotation SLOEs, as opposed to all SLOEs, to provide a uniform evaluation of student performance and limit word differences based on varying experiences in time. Additionally, not every applicant had more than one SLOE. Exclusion criteria included applicants from non-Liaison Committee on Medical Education schools, as well as applicants with a first-rotation SLOE that was not available to be downloaded from ERAS.
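The first-chronological-SLOE inclusion rule amounts to a min-by-date grouping per applicant. A small sketch of that selection step (records and field names are hypothetical, not the ERAS data model):

```python
from datetime import date

# Sketch of the inclusion rule: keep one SLOE per applicant, the one
# from the earliest EM rotation. Records and field names are invented
# for illustration and do not reflect the actual ERAS export format.
sloes = [
    {"applicant": "A", "rotation_start": date(2016, 8, 1), "narrative": "..."},
    {"applicant": "A", "rotation_start": date(2016, 7, 1), "narrative": "..."},
    {"applicant": "B", "rotation_start": date(2016, 9, 1), "narrative": "..."},
]

first_sloe = {}
for s in sloes:
    prev = first_sloe.get(s["applicant"])
    if prev is None or s["rotation_start"] < prev["rotation_start"]:
        first_sloe[s["applicant"]] = s
```

Applicant A's July SLOE is retained and the August one discarded, matching the "first chronological clinical EM rotation" rule.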
Analysis began after all NRMP decisions had been made and finalized and did not affect an applicant’s invitation to interview or placement on the rank list. Prior to analysis, each letter was read by two reviewers who screened for “stock” language.

These “stock” or standardized sentences were not related to applicant characteristics. They included statements in certain categories, such as statements regarding waiving rights to see the letter; stock opening statements; stock closing statements; descriptors of the rotation; descriptors of grade calculation; and descriptors of the letter writer. Pronouns were not made plural or deidentified prior to analysis.

This analysis found small but quantifiable differences in word frequency between genders in the language used in the SLOE. In this study, differences between genders were present in two categories: social words and ability words, with women having higher word frequency in both categories. Our prior investigation found differences of similar magnitude in affiliation words and ability words, with letters for women applicants having higher word frequency in both categories. For both studies, the differences in word frequency were statistically significant, but it is difficult to comment or draw conclusions about the significance of these small wording differences on application or educational outcomes. What is perhaps more notable than the presence of differences in two categories is the lack of difference in the remaining 14 categories. When looking specifically at the categories that had gender differences, our finding of ability words being used to describe women applicants more frequently than men applicants is in contrast to previous studies, while our other finding, that women are more frequently described with social words than men, is in alignment with previous studies.
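The word-frequency comparison above reduces to counting category-word hits in each narrative and normalizing by narrative length. A minimal sketch (the short word lists and the sample narrative are invented stand-ins, not the validated dictionaries or actual SLOEs used in the study):

```python
# Sketch of word-category frequency analysis: count hits from a
# category word list per narrative and normalize per 100 words.
# The word lists below are invented stand-ins for validated categories.

CATEGORIES = {
    "ability": {"talented", "skilled", "proficient", "adept", "competent"},
    "social":  {"team", "communicates", "warm", "supportive"},
}

def category_freq(text, words):
    """Category hits per 100 words, after lowercasing and stripping punctuation."""
    tokens = [t.strip(".,;!?").lower() for t in text.split()]
    hits = sum(1 for t in tokens if t in words)
    return 100.0 * hits / len(tokens)

# Hypothetical 16-word narrative:
narrative = ("She is a talented, skilled clinician who communicates "
             "well and is supportive of the whole team.")

freqs = {name: category_freq(narrative, ws) for name, ws in CATEGORIES.items()}
```

Group comparisons then test whether these per-narrative frequencies differ between letters for men and women.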
In the medical literature, letters of recommendation for men applying for faculty positions contain more ability attributes, such as standout adjectives and research descriptors, than letters for women, and letters for women medical students applying to residency positions more frequently contain non-ability attributes such as caring, compassionate, empathetic, bright, and organized. Looking specifically at ability words, this word category had statistically significant differences in both this investigation and our prior study, with ability words occurring more frequently for women than men. Ability words include descriptors such as talented, skilled, brilliant, proficient, adept, intelligent, and competent.

This consistency of findings between the two samples suggests that letter writers employ multiple descriptors within the ability category to convey the proficiency of women applicants. However, the reason for this difference is unclear. Notably, the word “bright” is one of the ability words for which there was no gender difference found, counter to findings from prior research wherein women applicants were more often described as bright.6,18 While the descriptor “bright” is often considered a compliment, it has also been suggested that its use “subtly undermines the recipient of the praise in ways that pertain to youth and, often, gender,” stemming from its association with the phrase “bright young thing.” The finding that women were more frequently described with social words aligns with previous studies of letters of recommendation. Studies of letters of recommendation for psychology and chemistry faculty positions have shown that women are often described as communal, while men are described as agentic and have more standout adjectives. Other studies have found women to be described as more communicative. We employed a secondary analysis with respect to the invitation to interview to determine if small differences in word categories were associated with invitation to interview. The adjusted analysis showed an association between more standout words and invitation to interview; however, this analysis did not account for other factors that may influence invitations to interview. Although these findings represent an association and not causation, they help to contextualize the potential importance of small differences in word use, although this is not conclusive. Notably, neither social words nor ability words influenced the choice to interview, and there was an equitable frequency of standout words between genders. Despite the small word differences in the categories of social and ability words, we did not find a difference in the 14 other word categories queried.
There are several possible explanations for this lack of a finding. It is possible that the sample was underpowered to detect small wording differences in the 14 word categories. Another explanation is that the SLOE format itself may be driving the lack of a difference. The short word format of the SLOE and the specific, detailed instructions noted above may reduce bias. Other explanations include the increasing use of group authorship, which may introduce less bias than individual authorship. In 2012, a sampling of three EM residencies calculated that 34.9% of SLORs were created by groups.24 In 2014, 60% of EM program directors participated in group SLORs, 85.3% of departments provided a group SLOR, and 84.7% of PDs preferred a group SLOR. Although the sample size and lack of a standard comparator limit the ability to determine why we did not find a difference for the majority of word categories, we hypothesize that it is related to the format and hope to further support that hypothesis through future work examining paired SLOE and full-length letters for candidates. A recently published study by Friedman and colleagues in the otolaryngology literature is, to our knowledge, the only study besides our own to evaluate a standardized letter for gender bias. In this 2017 study, the SLOR and the more traditional NLOR in otolaryngology residency applications were compared by gender, concluding that the SLOR format reduced bias compared to the traditional NLOR format.
Although in both letter formats some differences persisted, the SLOR format resulted in less frequent mention of women’s appearance and more frequent descriptions of women as “bright.” Although their analysis strategy differed from the one we used in this study, their findings parallel ours in that there are minimal differences by gender in a restricted letter format, and they highlight the need for further study of how the question stem and word limitations may be intentionally built to minimize bias. Lastly, of note, our study focused specifically on differences in language use in the SLOE. This study does not evaluate the presence or absence of gender bias in the quantitative aspects of the SLOE, nor does our multi-variable model include other factors that would influence the invitation to interview, such as rotation grades, test scores, school rank, or AOA status. Such analyses were beyond the scope of our study, which was focused on the SLOE narrative itself. Other studies have evaluated this but have not evaluated the narrative portion of the SLOE. Additionally, there remain many other forms of evaluation in medical training, numerical and narrative, in addition to the SLOE, in which gender bias has been analyzed. Recent studies have suggested that bias persists in other forms of evaluation. Specifically, Dayal and colleagues’ recent publication notes lower scores for women residents in EM Milestones ratings compared to male peers as they progress through residency. Evaluations of narrative comments from shift evaluations are another area of interest, of which we are aware of two current investigations underway in EM programs.
Additionally, a study by Heath and colleagues of physician trainees’ evaluations of medical faculty also showed gender disparities. As this body of literature continues to grow and interventions are developed to minimize bias in all narrative performance evaluations, we believe it will be important to think carefully about the question stems and response length allowed.

Any letter-writer signatures and titles were deleted prior to analysis to avoid introducing bias.

Preestablished macros for documentation of each application were shared with all providers

The images were then transferred from Q-Path to the hospital picture archiving and communication system (PACS), where they are visible to all hospital providers. Finally, the findings were documented under the “Procedures” section of the ED provider note and referenced in the medical decision-making portion of the note as appropriate. All faculty were credentialed in accordance with ACEP guidelines. Under this policy, residents may perform the ultrasound exam under supervision of credentialed faculty and submit scans to count toward their own credentialing. At the beginning of the study, we performed a qualitative needs assessment with a work group, including the authors of the study, residency leadership, QI leadership, and unstructured interviews with residents. We generated potential contributors to the observation that residents rarely use POCUS on shift and summarized them in a “fish-bone” diagram. Based on this list, we created a survey of residents to help further elucidate residents’ attitudes toward POCUS and the leading barriers to POCUS use on shift. Participation in the survey was voluntary, and we received responses from 27/35 residents, with comparable contribution from residents at all three levels of training. We found that 30% of all residents reported never using POCUS on shift, 52% reported using POCUS approximately once per shift, and 18% used POCUS more than once per shift. When asked about general attitudes toward ultrasound use and training, most residents somewhat agreed or strongly agreed that ultrasound is an important skill for residents to learn and practice in our ED. Most residents also somewhat agreed or strongly agreed that POCUS will be important in their future practice. However, responses were somewhat tempered in considering whether availability of POCUS would be important in their search for future employment: 63% somewhat or strongly agreed, while 7% somewhat disagreed.

In assessing barriers to on-shift use of ultrasound, we found that the “inability to use results in documentation” received the highest weighted average rating of 3.7 on a five-point Likert scale, with 41% and 25% of residents, respectively, reporting that this was a significant and an extreme barrier. Time barriers, including time to complete/optimize exams and time required to initiate an exam, were also rated highly, with weighted averages of 3.6 and 3.2. Barriers pertaining to tools and technology, such as Q-Path navigation, inability to find the machine, space on the machine, and gel availability, were generally ranked as only “slight barriers,” with weighted average scores of 2.2, 2.1, 1.8, and 1.6, respectively. Finally, we attempted to assess potential incentives that would help residents overcome the barriers above. We found that increased attending support was the top perceived incentivizer for residents, with a weighted average of 4. Residents also felt that clear guidelines on charting were likely to incentivize scanning. Ultrasound training is a core feature of EM residency training. However, there is considerable variability in the form this training takes across residencies in the United States. In order to characterize POCUS training of EM residents, Hayward et al. applied Ericsson’s deliberate practice model of acquiring procedural proficiency. This model divides learners into novice, intermediate, expert, and advanced expert levels, who are able to learn the basics, apply them efficiently, apply them intuitively, and apply advanced applications of the procedure, respectively. To advance trainees from intermediate sonographers to expert sonographers, one must have a detailed understanding of the barriers to such a transition. To the best of our knowledge, this study is the first attempt to systematically define and address these barriers in a resident population. 
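The weighted-average barrier ratings above can be reproduced from raw Likert response counts. A minimal sketch in Python; the counts below are hypothetical illustrations, not the study's data:

```python
def weighted_average(counts):
    """Weighted average of a 5-point Likert item.

    counts: response counts for ratings 1..5
    (1 = not a barrier ... 5 = extreme barrier).
    """
    total = sum(counts)
    return sum(rating * n for rating, n in zip(range(1, 6), counts)) / total

# Hypothetical counts for one barrier item from 27 respondents
counts = [2, 3, 7, 11, 4]
print(round(weighted_average(counts), 1))
```

The same helper applies to the incentive items; only the anchor labels of the scale change.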
Our data highlight a number of key findings likely relevant to curriculum and POCUS workflow design. First, we found that residents’ perception of ultrasound and its importance in modern EM training is overwhelmingly positive, with 96% of residents believing that ultrasound is an important skill to learn during their training.

Despite this, only 63% of residents believed that ultrasound availability would be an important feature for them in their future job search. This discrepancy likely underscores the larger problem posed above: while residents are enthusiastic and competent in image acquisition and interpretation, next-level training in methods of integrating ultrasound into daily practice is lacking. Second, we were somewhat surprised that the major barrier identified by residents at the time of our study was the perceived inability to use ultrasound for medical decision-making, rather than conventional barriers of time available in the ED or equipment malfunction. However, when viewed through the lens of the deliberate practice model of transitioning from intermediate to advanced competency, it makes sense that our residents’ grasp on how to use ultrasound in daily practice was the major perceived barrier. Third, our finding that implementation and education of a documentation policy is associated with increased integration of ultrasound in clinical decision-making has significant implications for resident education and its integration into subsequent ED ultrasound billing workflows. Recent studies have demonstrated that continuous workflow quality improvement efforts for all staff also significantly increased the proportion of reported and billed ultrasound studies. Another recent study found that educating residents on billing practices significantly increased the RVUs billable for resident encounters. Taken together, this body of literature suggests that educational interventions such as ours can have a quantifiable effect on ED revenue and future EP documentation practices. A potential confounder in the before-after design of our study was a concomitant push for faculty credentialing, which was underway in our department during the study period. 
To assess whether the increase in patients scanned may have been due to this confounder, we also analyzed the number of POCUS studies uploaded to PACS by faculty without resident involvement.

We found that faculty uploaded 124 vs 138 studies done without resident involvement during the pre- and post-intervention phases of the study, an absolute increase of 6%, while resident scans uploaded to PACS increased by 78%. Thus, it appears that the increase in scans performed was primarily resident-driven. Finally, while it is difficult to infer causation in this observational, before-after study, it does suggest that incentivization of residents and faculty might be linked. Our secondary outcome demonstrated that the resident-based intervention increased scanning among non-fellowship-trained faculty more so than among ultrasound fellowship-trained faculty. As methods of faculty credentialing and education continue to advance, it may be useful to integrate resident and faculty education. Future inquiry into the effect and interplay of faculty and resident incentivization may help make the transition from intermediate to advanced sonographer more robust and efficient. This study was performed at a single academic center with an EM residency program and as such may be limited in external applicability. However, as mentioned earlier, our institution faces many of the same problems and barriers that have been reported by other institutions in the literature. These include the low rate of POCUS utilization, the need for deliberate practice, the implementation of intuitive documentation processes, and lack of time in a busy ED. While we did solicit feedback from residency leadership and residents, within the limitations of a single-center quality improvement study, we did not perform separate validation of the survey. The survey portion was also subject to sampling bias, since we had only a 77% response rate. However, we believe that voluntary and anonymous reporting on the survey provides a sufficient advantage. 
Our small sample size, a consequence of the study’s single-center nature, is an important limitation, as it limits the statistical power of the study, and it would be useful to repeat this study on a nationwide level. The survey itself includes closed-ended questions, which may introduce response bias; however, write-in, free-text responses were allowed. In regard to our primary outcome, our study may be limited by the assumption that the number of exams uploaded to PACS is an accurate marker for the number of scans used in the medical decision-making process. Indeed, the survey responses suggest that 82% of residents used POCUS one or more times per shift, but even after the intervention there were only 5.8 scans documented per resident. This suggests that a large proportion of POCUS studies are never documented. In addition, this surrogate marker also relies on the cooperation of the supervising attending, as residents did not have the ability to upload images to PACS. However, the survey does identify lack of documentation ability as an important barrier, and documentation of POCUS studies is essential to appropriate medical decision-making and billing as laid out in ACEP’s clinical guideline on POCUS use. Thus, our study’s primary outcome is relevant to the key objective of the study.

Another key limitation of our study is the before-after design, which introduces a number of confounders. During the study period, faculty received ongoing reminders and were actively incentivized to increase clinical use of POCUS. It is unlikely that the increase in scans is due solely to our intervention; however, we found that the increase in resident-performed POCUS studies is disproportionate to the number of studies done by faculty alone, suggesting that resident involvement in POCUS documentation should be a key factor in improving the quality of POCUS use in clinical decision-making. Long-term cognitive impairment (LTCI), defined as a new or worsening deficit in cognition that persists following acute illness, is a well-described phenomenon occurring in an estimated 16% of older adults who are acutely ill. This often leads to increased disability, loss of independence, and decreased quality of life. Currently no effective therapies, especially those that can be administered early in the acute illness course, exist to prevent or treat LTCI following acute illness. While the mechanism of LTCI has not been fully elucidated, it is hypothesized that systemic proinflammatory cytokines, in response to an acute medical illness such as sepsis, lead to increased central nervous system (CNS) inflammation, microglial activation, and neuronal injury and death. Vitamin D is a pleiotropic hormone that modulates systemic and CNS inflammatory responses. Therefore, patients with Vitamin D deficiency may be particularly vulnerable to LTCI following an acute illness. Several observational studies have suggested that Vitamin D deficiency is associated with poorer long-term cognition among community-dwelling adults. However, the relationship between Vitamin D deficiency in the setting of acute illness and the subsequent development of LTCI remains unknown, especially in the emergency department setting. 
Therefore, we sought to determine whether serum Vitamin D at ED presentation was associated with poorer six-month cognition in acutely ill older adults. This study was an observational secondary analysis within the DELINEATE prospective cohort study, which enrolled ED patients aged 65 years and older who were subsequently admitted to the hospital for an acute illness at a large, academic, tertiary care hospital. This study enrolled patients from March 2012 to November 2014. The local institutional review board reviewed and approved this study. Details and rationale of the selection of participants have been described previously. Briefly, we included patients if they were 65 years or older and in the ED for less than four hours at the time of enrollment. Patients were excluded if they were non-English speaking; previously enrolled; deaf, comatose, non-verbal, or unable to follow simple commands prior to their current illness; considered unsuitable for enrollment by the treating physician or nurse; unavailable for enrollment within the four-hour time limit secondary to clinical care; or discharged home from the ED. Patients were included in this analysis if they had a blood specimen available for Vitamin D measurement and a surrogate available to complete a short form Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) obtained at enrollment to establish pre-illness cognition. Pre-illness and six-month cognition were measured using the short form IQCODE in patients who had a surrogate in the ED who had known the patient for greater than 10 years. Scores range from 1 to 5. 
This surrogate-based cognitive screen was used because patient-based measurements in the ED may not accurately reflect true baseline cognition, especially in the setting of delirium. The IQCODE is also a validated measure of cognition that has previously been used to assess cognitive decline. At the time of study enrollment, informants were asked to assess the patient’s pre-illness cognition as of two weeks prior to ED presentation; the six-month follow-up assessment was conducted over the telephone, with all attempts made to have the same person complete the IQCODE questionnaire as the individual who completed the pre-illness questionnaire.
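Scoring the short form IQCODE reduces to a mean over its 16 informant-rated items, each rated 1 (much improved) to 5 (much worse), with 3 meaning no change. A sketch under that standard scoring convention; the ratings below are hypothetical, not study data:

```python
def iqcode_score(items):
    """Mean of the 16 short-form IQCODE item ratings (each 1-5).

    A score of 3 indicates no change from baseline; higher scores
    indicate greater informant-reported cognitive decline.
    """
    if len(items) != 16:
        raise ValueError("short-form IQCODE has 16 items")
    if not all(1 <= r <= 5 for r in items):
        raise ValueError("each item is rated 1-5")
    return sum(items) / 16

# Hypothetical informant ratings: mostly 'no change', some decline
ratings = [3] * 12 + [4] * 4
print(iqcode_score(ratings))
```

Computing the pre-illness and six-month scores with the same function makes the two assessments directly comparable.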


The most common manifestation is dyspnea or edema from elevated LV filling pressures

These systems help determine the appropriate interventions to reduce the likelihood of developing severe LV dysfunction, thereby reducing the patient’s potential morbidity and mortality. Other means of classification depend on the presence of cardiomyopathy or acute coronary syndrome (ACS). The Nohria-Stevenson classification for decompensated HF in the setting of cardiomyopathy uses perfusion and congestion, while the Killip and Forrester classification systems evaluate AHF in the setting of ACS. In general, short-term mortality is low for well-perfused groups and is higher in poorly perfused patients. Unfortunately, these classification systems are not as useful for acute exacerbation of HF, thereby limiting their applicability in the ED setting. In the ED, classification is based upon the patient’s hemodynamic status, perfusion, and blood pressure. This differentiation can guide therapy and provides important prognostic information. Most patients are hypertensive or normotensive upon presentation. The hypertensive form is commonly associated with pulmonary edema, which may occur rapidly. In the normotensive progressive form, systemic edema is predominant. Hypotensive AHF is associated with end-organ hypoperfusion, while systemic and pulmonary edema is minimal. ACS can occur simultaneously with or exacerbate HF and requires emergent coronary angiography. Right-sided HF is associated with right ventricular dysfunction, leading to systemic venous congestion without pulmonary edema if the LV is not involved. Due to the complex pathophysiology involved in HF and its multiple phenotypes, the history and physical examination may vary.

Patients with HF are heterogeneous in terms of cardiac structure and function, the etiology of their HF, the precipitant of the AHF exacerbation, comorbidities, and current medications. Early diagnosis is vital, as delay or misdiagnosis has been associated with an increased risk of adverse outcomes and death. Misdiagnosis occurs in up to one-third of patients upon initial presentation. While no single historical factor or examination finding can significantly reduce the likelihood of HF in isolation, initial clinical gestalt has been shown to have a sensitivity of 61% and specificity of 86% for the diagnosis. Risk factors for HF include hypertension, renal disease, heart disease, diabetes, male gender, older age, and obesity. In particular, advanced age, renal disease, and lower blood pressure are associated with increased mortality in AHF. Precipitating factors for AHF exacerbation include cardiac and non-cardiac causes. Cardiac causes include uncontrolled hypertension, dietary or medication noncompliance, aortic dissection, dysrhythmias, and cardiac ischemia. Noncardiac causes include pulmonary disease, endocrine disease, infection, worsening renal function, anemia, and medication side effects. Patients who are non-compliant with their diet and medications have been found to have a lower EF, higher brain-type natriuretic peptide (BNP) levels, and greater congestion compared with their counterparts. Dysrhythmias are another frequent precipitating cause. 
Among those, atrial fibrillation is the most common. ACS is more commonly associated with de novo HF. Components of the history, such as weight gain, dyspnea, chest pain, peripheral edema, substance abuse, new medications, past complications, prior hospitalizations, diet changes, and medication compliance, are vital to determining the underlying etiology, and an identifiable trigger can be found in approximately 60% of patients. Acutely, the most common symptoms associated with AHF include paroxysmal nocturnal dyspnea (PND), orthopnea, and edema.

However, the classic symptoms such as PND, dyspnea, and orthopnea demonstrate poor sensitivity and specificity. On examination, an S3 heart sound has the highest specificity, ranging from 97.7–99%, but it has only 12.7% sensitivity. Additionally, an S3 heart sound can be difficult to detect in the ED setting, and inter-rater reliability can be poor. Hepato-jugular reflux and jugular venous distension possess specificities of 93.4% and 87% and sensitivities of 14.1% and 37.2%, respectively, for HF. Lung auscultation is also less reliable, as the presence of rales has a sensitivity of approximately 60% and a specificity approaching 70%. Lower extremity edema has a sensitivity of 50% and a specificity of 78%. A meta-analysis evaluating various signs and symptoms in patients with dyspnea found that no single sign or symptom was sufficiently able to rule out AHF, chronic obstructive pulmonary disease, asthma, or pulmonary embolism. However, elevated jugular venous pressure, a third heart sound, and lung crepitations were strongly suggestive of a diagnosis of AHF. Laboratory assessment in the patient with suspected AHF can provide important diagnostic and prognostic information. Testing should include a complete blood count, basic metabolic panel with renal function testing, liver function testing, troponin, and a BNP level. Abnormalities in liver function are found in approximately 75% of patients with AHF and are associated with more severe disease. If the right ventricle is involved, bilirubin and alkaline phosphatase levels may be elevated, while left-sided disease is more commonly associated with elevated transaminase levels. Renal function is an important assessment, as it is a predictor of disease severity and mortality. Decreased glomerular filtration rate (GFR) is associated with increased length of in-hospital stay, short-term mortality, and long-term mortality. In patients with AHF, every 10 mL/minute decrease in GFR is associated with an increase in mortality of 7%. Troponin testing can 
assist in prognostication and in the detection of underlying ischemia as a potential inciting event for AHF. Elevated troponin levels are associated with higher re-hospitalization rates and 90-day mortality. Troponin elevation is common in AHF, as one study found elevated troponin levels in 98% of patients with diagnosed AHF, with 81% of the levels above the 99th percentile.

Other studies have suggested that this figure may be closer to 30-50%. However, an elevated troponin is not specific for ACS and may be seen with a variety of other causes, including demand ischemia and renal dysfunction. Natriuretic peptides may be a valuable adjunct when the provider is unsure of the diagnosis. BNP is produced by cardiac myocytes when exposed to significant myocardial stretch. Use of BNP and NT-proBNP may be sensitive, but not specific, for the diagnosis of AHF. Other conditions associated with elevations in natriuretic peptide levels include pulmonary embolism, pulmonary hypertension, valvular heart disease, and acute respiratory distress syndrome. BNP levels of 100-400 pg/mL and NT-proBNP levels of 300-900 pg/mL are non-specific and may require further testing. Although these biomarkers may assist in differentiation from other conditions, studies have not demonstrated improved patient-centered outcomes with use of natriuretic peptides. Observational trial data suggest natriuretic peptides demonstrate sensitivity over 90%, but specificity is poor. Data from randomized controlled trials found that knowledge of BNP levels did not significantly change ED treatment, mortality, or readmission rates; however, it may decrease hospital length of stay and total cost. Imaging is an important component in the evaluation of the patient with suspected heart failure. The most common modality used is the chest radiograph (CXR). 
Several findings suggest the diagnosis of heart failure on CXR, including cardiomegaly, central vascular congestion, and interstitial edema. However, a normal CXR should not be used to exclude the diagnosis of AHF, as up to 20% of CXRs may appear normal in AHF. Studies evaluating physician accuracy in identifying AHF on CXR have demonstrated sensitivities of 59-74.5% and specificities of 86.3-96%. While CXR should not be used to exclude AHF, it can be valuable for identifying alternate disease processes that may mimic AHF. Bedside ultrasound can be valuable for diagnosing AHF, with high specificity and positive likelihood ratios. Ultrasound can be used to evaluate for B-lines, pleural effusions, inferior vena cava size and respiro-phasic variability, and cardiac contractility. B-lines are vertical artifacts that result from sound wave reverberation through fluid-filled pulmonary interstitium. The presence of greater than three B-lines in two bilateral lung zones defines a positive lung ultrasound examination. The number of lung zones examined varies in the literature, with eight thoracic lung zones used in the initial lung ultrasound protocols, while newer studies have used four or six lung zones. B-lines demonstrate high sensitivity and specificity for interstitial edema, while the identification of pleural effusions is not as helpful. EF may be assessed on ultrasound by visual estimation or quantitative measurement. Qualitative visual estimation is made by assessing the inward movement of the interventricular septum and the inferior wall of the LV during systole.
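The positivity rule above is mechanical enough to encode directly. A sketch under one common reading of "two bilateral lung zones" (more than three B-lines in at least one zone on each hemithorax); the zone counts shown are hypothetical:

```python
def positive_lung_exam(left_zones, right_zones, threshold=3):
    """Positive lung ultrasound per the rule above: more than
    `threshold` B-lines in at least one examined zone on each side
    (one interpretation of 'two bilateral lung zones').

    left_zones / right_zones: B-line counts per examined zone,
    so the same predicate works for 4-, 6-, or 8-zone protocols.
    """
    left_positive = any(n > threshold for n in left_zones)
    right_positive = any(n > threshold for n in right_zones)
    return left_positive and right_positive

# Hypothetical four-zone exam (two zones per hemithorax)
print(positive_lung_exam([5, 2], [4, 1]))
```

Because the zone counts are passed as lists, the protocol variation noted in the text (four, six, or eight zones) changes only the inputs, not the rule.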

E-point septal separation (EPSS) is a quantitative measurement assessing the distance between the anterior mitral valve leaflet and the ventricular septum. An EPSS measurement > 7 mm is suggestive of an EF < 50%. Ultrasound can also estimate intravascular volume through measurement of the inferior vena cava diameter and its percentage change during the respiratory cycle. However, diagnostic performance is controversial, with many confounding factors and a wide range of sensitivities and specificities. One study found that by using a combination of lung, cardiac, and inferior vena cava ultrasound, the authors were able to improve diagnostic accuracy by 20%. Others have suggested that combining CXR with ultrasound may increase the sensitivity and specificity for diagnosing AHF. The escalating cost of healthcare in the United States is unsustainable. In 2016 spending reached 17.9% of the gross domestic product, or $10,348 per person.1 Many studies on healthcare reform in the U.S. focus on the factors driving the nation’s high level of expenditure. The payment system is the subject of one major stream of research. The State of Maryland is at the forefront of healthcare reform in the U.S. The state is unique in its implementation of an all-payers payment system for hospitals. The system is governed by the Health Services Cost Review Commission (HSCRC), which sets hospital rates for all providers for both inpatient and outpatient services. In 1977 the federal government granted the state a Medicare waiver that required government payers to abide by HSCRC hospital rates. 
Global Budget Revenue (GBR) is a revision of this waiver and was implemented in 2014. GBR drives value-based healthcare service by setting global budgets for acute care hospitals, i.e., creating a capitated system for hospitals. In 2011, Maryland implemented the Total Patient Revenue (TPR) program, a revenue-constraint policy designed by the HSCRC. TPR was implemented as a pilot project in 12 Maryland hospitals located primarily in rural and geographically isolated parts of the state. Under TPR, these pilot hospitals were guaranteed a certain annual revenue calculated from a formula based on the prior year’s revenue and reasonable annual adjustments. This structure provided an incentive to control costs by reducing unnecessary hospitalizations and inpatient resources. Communities were rewarded for the development of robust outpatient resources and improving the health of the population. Based on the success of TPR, the state and federal government moved forward with GBR on a statewide basis. On January 1, 2014, the State of Maryland began the GBR program with the main goals of improving the health of communities, improving the patient experience, and lowering the cost of healthcare services for all patients. In contrast to the 36-year-old waiver policy that preceded it, GBR guarantees a hospital’s annual revenue by calculating a global budget based on market share. Adjustments in global budgets are tied to changes in market share and the state’s gross domestic product. In some ways, GBR is an extension of TPR. However, GBR is not a voluntary program; it requires every Maryland hospital to participate. The main difference is that TPR was implemented in geographically isolated areas of the state where catchment areas are clear. Hospitals under GBR operate in more competitive market environments.7 In the online appendix, Table A1 lists the names of the Maryland hospitals that are under the GBR program. 
In the past, hospital revenue was directly linked to the number of medical services the hospital provided. In contrast, under GBR and TPR, each hospital’s total annual revenue is defined by the HSCRC and known at the beginning of each fiscal year. The hospital margin is the difference between the global budget and annual cost. As a result, hospitals are motivated to control costs while maintaining or growing market share.7-10 In this study, we used the difference-in-differences (DID) method, which is widely used in healthcare management and policy analysis. DID determines two differences and calculates the treatment or policy effect as the difference of the two differences. Examples of studies using DID include work by Tiemann and Schreyogg on the impact of privatization on hospital efficiency in Germany. Buchner et al. used DID to study the impact of health system entry on hospital efficiency and profitability. In our study, the first difference is the comparison of a GBR hospital’s performance before and after GBR implementation.
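At its core, the DID logic described here is simple arithmetic on group means: the treated group's pre/post change minus the control group's pre/post change. A minimal sketch with entirely hypothetical figures (a real DID analysis would use regression with covariates and standard errors):

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences point estimate from four group
    means: the treated group's change over time minus the control
    group's change over the same period."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean outcomes (e.g., cost per admission, $1,000s)
# for GBR hospitals (treated) vs. non-GBR comparison hospitals
effect = diff_in_diff(treat_pre=12.0, treat_post=11.5,
                      ctrl_pre=12.2, ctrl_post=12.6)
print(effect)
```

A negative estimate here would indicate that costs fell for GBR hospitals relative to the counterfactual trend implied by the control group, which is exactly the second difference the text goes on to describe.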


Previous work has shown that triage assessments can have poor interrater and intra-rater agreement

They report challenges obtaining timely access to sick visits with primary care doctors and urgent visits with specialists and dentists. Additional barriers that make obtaining unscheduled care challenging include identifying clinics that offer comprehensive interpretation services, accept Refugee Medical Assistance, and are geographically convenient. Scheduling appointments over the phone, particularly through automated services, is especially challenging for refugees with limited English proficiency. On arrival to the ED, the same language barriers create challenges to understanding the care received. In addition, the lack of trauma-informed care can hinder the appropriate workup and treatment of symptoms. Finally, after obtaining care in any acute care setting, refugees face significant financial risk due to limited understanding of the health insurance system. It is important to highlight that some of the aforementioned barriers to acute outpatient care also exist among U.S.-born individuals, including geographical and insurance barriers and difficulty accessing mental health and dental services. However, these challenges are exacerbated for refugees by language and cultural barriers. The U.S. healthcare system is new and often quite different from the health systems refugees have used in the past, adding an extra layer of complexity to understand. The lack of interpretation services further limits already scarce resources such as appointments with specialists, dentists, and mental health providers. Additionally, refugees have unique mental healthcare needs given their history of trauma, which adds an additional challenge when identifying appropriate mental health services. There is limited existing data on the utilization of acute care services by refugees in the U.S.

In Australia, a study evaluating the use of emergency services by refugees suggested that some refugees know how to call for emergency help, yet have significant fear of calling for help because of security implications faced previously in their home countries.10 In our study, refugees identified knowing how to call 911 if they were ill but did not express fear as a barrier to using this service. It is possible that the study population perceived less fear because the resettlement employees recommended the use of 911. A qualitative study in the U.S. evaluating healthcare barriers of refugees one year post-resettlement also identified individual and structural barriers to accessing health services. Barriers included challenges with language, acculturation processes, and cultural beliefs. Similarly, our study found that language and acculturation were significant barriers when accessing health services. Our study differed in that we were specifically focusing on barriers to acute care access, and we identified additional barriers related to health insurance and perceived poor access to prompt outpatient clinic options. Additionally, our results identified the important role of resettlement agencies in addressing these barriers. Notably, our study occurred early in the resettlement process, a time when resettlement agencies are typically more involved, as opposed to one year after resettlement. Respondents identified several areas for improvement to reduce barriers to accessing care for newly arrived refugees. Areas for improvement within the acute care system include establishing partnerships with resettlement/post-resettlement agencies to assist with triage of refugees with acute conditions, and developing specific protocols that may help resettlement employees direct patients to appropriate levels of care. Finally, respondents recommended incorporating cultural competency and trauma-informed care training for providers. 
Trauma-informed care is based on the premise that past exposure to trauma can have long-lasting effects on the physical and mental health of patients. Thus, providers and organizations can respond by adopting trauma-informed models of care.

A trauma-informed organization acknowledges that trauma is pervasive, recognizes the signs and symptoms of trauma, and integrates knowledge about trauma into policies, procedures, and practices with the goal of avoiding retraumatization. While it is challenging to accurately estimate the number of refugees who experienced trauma prior to resettlement, estimates suggest that the prevalence may be as high as 35%. This does not account for trauma associated with the resettlement process itself. ED-specific approaches to trauma-informed care have been suggested for patients treated in the ED after violent injury, and some components may be applicable to refugee populations. While more research is needed to establish trauma-informed models of care for refugees in the ED, providers should acknowledge a patient's history of trauma and ongoing signs and symptoms, and avoid practices that may result in retraumatization. A major theme in our interviews was the importance of interpretation services. Refugees and resettlement employees described challenges at all points of acute care access due to language barriers and a lack of appropriate interpretation services. Revisions to the Affordable Care Act in 2016 mandated that healthcare facilities offer qualified interpreters to limited English proficient (LEP) patients, and the 2010 Joint Commission standards also require qualified interpreter services in hospital settings. Nevertheless, patients with LEP have worse clinical outcomes and receive a lower quality of care.18 In the ED, formal interpretation should be offered to all patients who do not identify English as their primary language; operations teams should ensure that interpretation services are embedded throughout a refugee's ED course and that all members of the ED team are routinely trained in how to use in-person and phone interpreters.
Similarly, clinic teams can ensure that interpretation services are available not only during clinic visits, but also when refugees call to schedule appointments or ask questions. Another common barrier reported by resettlement employees and refugees is that refugees struggle to understand health insurance, which is also supported by prior studies. More education for refugees was suggested as a potential intervention to address this concern and may be useful.

However, additional policy changes may be required to avoid insurance-related barriers to accessing care. For example, refugees who live in states without Medicaid expansion have a much smaller chance of enrolling in health insurance once Refugee Medical Assistance ceases. Additionally, it has been reported that in states where Medicaid requires annual reapplication, refugees often have a gap in insurance coverage. A study evaluating health coverage for immigrants suggests that expanding universal coverage may actually reduce net costs for LEP patients by increasing access to primary prevention and reducing emergency care for preventable conditions. For refugees, the cessation of Refugee Medical Assistance after eight months occurs at a difficult time of transition. At six to eight months, cash assistance from the government typically ends, as does support from the resettlement agency, based on the expectation that refugees are self-sufficient after six to eight months of support. A study evaluating unmet needs of refugees demonstrated that refugees in the U.S. for a longer period of time are more likely to report a lack of health insurance coverage and a delay in seeing a healthcare provider. Policymakers should consider extending Refugee Medical Assistance beyond the first eight months as an additional strategy to improve access to health insurance and ensure stable access to care. Finally, additional research is needed to understand networks of care for refugees. To understand ED utilization by refugees and barriers to acute care, future studies should prospectively follow refugees after arrival to identify long-term patterns of use and integration. This would help guide interventions at the locations where refugees most frequently seek acute care.
Systematic identification of refugees in national datasets would assist with understanding variations in utilization patterns between regions and identifying areas of particular importance. We obtained the data for this study from one city. This limits generalizability, as results may be specific to the refugee experience in this location and healthcare system. However, our sample engaged refugees from a variety of countries, representing the current distribution of refugees resettled to locations throughout the country. This study did not specifically evaluate differences in barriers to acute care access for refugees based on country of origin, gender, educational, cultural, or economic background; however, all of these factors may influence experiences and are important to consider in future studies. Interviews with refugees occurred at a refugee clinic affiliated with a local resettlement agency and did not include refugees without access to care and services.

Similarly, resettlement agency employees were recruited by the study team, which largely consisted of physicians. Interviews with refugees were conducted mostly within three months of arrival, thus targeting only newly arrived refugees. Barriers to access may differ at different stages of the resettlement process. However, this early period is likely the most vulnerable time, with significant language, acculturation, and financial challenges. In addition, refugees typically see a physician within 30 days of arrival in the U.S. Many resettlement agencies work with specific clinics to meet this goal, making this the optimal time to capture a diverse population receiving care at one location. Some members of the study team had significant experience working at the refugee clinic and may have been influenced by potential biases from previous work with refugees, specifically when identifying themes. To counter these potential biases, the study team included individuals who did not work at the refugee clinic. Transcripts were double-coded by both a clinic and a non-clinic investigator and reviewed by a non-clinic investigator. Additionally, the use of interpreters may have altered responses from refugee patients. In some languages, a direct translation for specific words or meanings may not exist, and responses may therefore be translated with a meaning different from what was intended. Finally, as with all qualitative studies, results generate hypotheses from the experience of the participants rather than testing or measuring a hypothesis.

The Joint Commission, other medical governing agencies, and various hospital policies mandate that certain screening questions be asked of all patients who come through the emergency department for evaluation. Before a patient has even seen a physician, they have likely been asked dozens of screening questions as part of the triage or nursing assessment.
Screening questions are often implemented with good intentions, and some serve as public health screening in which the ED acts as a safety net. The downstream consequences of adding numerous questions to the ED stay are often not considered. A significant amount of nursing time can be consumed administering these assessments. Additionally, the purpose of triage is to identify and prioritize patients who require immediate treatment over those who do not. The required screening questions often have an unclear benefit for determining triage acuity and for the care the patient receives in the ED. In many instances the addition of screening questions is based on rudimentary studies that do not examine clinical outcomes or costs.4 Screening questions can add time to the triage process and ED wait times, and take nurses away from more direct patient care. While any individual question may not take long to ask, multiplied by the tens of thousands of patients who pass through the ED and the expanding number of screening questions, it quickly adds up to a significant amount of time. Our objective was to analyze the time nurses spent conducting standardized nursing screens and to calculate the corresponding time cost. This is a cursory look at the potential monetary and time costs of standardized screening questions in the ED. The calculated values directly affect time and cost efficiency in the ED and could potentially be redirected to more direct patient care. For just the five observed triage questions alone, we estimated the nursing time cost to our institution to be $20,675.50. This cost would increase significantly if we examined additional triage and nurse screening questions. Furthermore, this is just the time spent in a single ED. If all 136.9 million adult ED visits in the U.S.
included the five studied questions, the screening would take 964,354 hours to complete.5 This equates to $33.8 million in nursing costs annually. The required screening questions are often unrelated to the patient's chief complaint and have a debatable impact on medical management in the ED. Questions that may impact care, such as medication allergies, are typically asked by multiple medical providers during the ED visit, and this redundancy leads to additional wasted time and cost. It is unclear whether standardized questions are suitable for triage, where the goal is to identify and prioritize patients who require immediate treatment over those who do not.
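As a rough sanity check, the national estimate above can be reproduced from two inputs that are not stated in this excerpt: the per-visit screening time and the nursing hourly wage. The values below are back-derived assumptions chosen so the totals approximately match the reported figures, not numbers from the study itself.

```python
# Sketch of the national screening-cost arithmetic.
# SECONDS_PER_VISIT and NURSING_WAGE_PER_HOUR are assumptions
# back-derived from the reported totals, not study-reported inputs.

ANNUAL_ED_VISITS = 136.9e6      # adult ED visits in the U.S. per year (reported)
SECONDS_PER_VISIT = 25.36       # assumed time to ask the five questions
NURSING_WAGE_PER_HOUR = 35.05   # assumed nursing hourly cost, USD

total_hours = ANNUAL_ED_VISITS * SECONDS_PER_VISIT / 3600
total_cost = total_hours * NURSING_WAGE_PER_HOUR

print(f"{total_hours:,.0f} hours")        # 964,384 hours (reported: 964,354)
print(f"${total_cost / 1e6:.1f} million")  # $33.8 million (matches reported)
```

Even at roughly 25 seconds per visit, the national scale of ED volume turns five questions into nearly a million nursing hours per year, which is the core point of the cost argument.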

Previous work has shown that triage assessments can have poor interrater and intra-rater agreement.