Findings remained unchanged after controlling for verbal intellectual functioning

The rapid event-related design of the BART fMRI task created some special challenges for analysis. Because the task moved very quickly through the Think, Pump, Wait, Inflate, Pop or Win, and Rest conditions, it is possible that there was overlap of the hemodynamic response across task conditions. However, the deconvolution process is generally effective at disentangling the individual hemodynamic response functions so that the BOLD response pattern specific to each task condition can be appropriately measured. Rapid event-related designs are also limited by a lower signal-to-noise ratio compared to blocked designs or slower event-related designs, which ultimately leads to a loss of statistical power. In addition, some of the task conditions may not have been perceived by participants as separate, distinct events, and thus BOLD response may not be qualitatively different between these conditions. For example, the “Wait” control condition occurred immediately after subjects inputted their number of pumps. Although the active “Inflate” condition was designed to measure the anticipation stage of decision making, the “Wait” condition may have captured some anticipatory response as well. The visual display of a balloon inflating likely added an element of anticipation above and beyond the “Wait” phase; to address this issue, both the “Rest” and “Wait” conditions were used as control conditions for the “Inflate” condition.

It is somewhat surprising that heavy drinkers and controls showed very few differences on BART task performance variables. This contrasts with previous studies using BART paradigms, which have found group differences both between adolescent smokers and non-smokers and between adolescents with “serious substance use and conduct problems” and controls. However, other studies have not shown differences between adolescent substance users and non-users.
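The deconvolution step described above can be illustrated with a minimal finite-impulse-response (FIR) least-squares sketch. The function name, two-condition setup, and timings here are hypothetical illustrations, not the study's actual pipeline:

```python
import numpy as np

def fir_deconvolve(bold, onsets_by_condition, n_lags):
    """Estimate a separate impulse response per condition by least squares.

    bold                -- 1-D BOLD time series (one voxel/ROI, in scan units)
    onsets_by_condition -- list of onset-index arrays, one per condition
    n_lags              -- number of post-onset time points to model

    Returns an array of shape (n_conditions, n_lags): the estimated
    hemodynamic response for each condition, disentangled even when
    responses to rapidly presented events overlap in time.
    """
    n = len(bold)
    cols = []
    for onsets in onsets_by_condition:
        for lag in range(n_lags):
            col = np.zeros(n)
            idx = np.asarray(onsets) + lag
            col[idx[idx < n]] = 1.0        # shifted "stick" regressor
            cols.append(col)
    X = np.column_stack(cols)              # FIR design matrix
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return beta.reshape(len(onsets_by_condition), n_lags)
```

Because each condition's lags get their own design-matrix columns, overlapping responses are attributed to the correct events as long as event onsets are suitably jittered; this is the sense in which deconvolution can recover condition-specific BOLD patterns from a rapid event-related design.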
One reason for the lack of performance differences may be that the version of the BART used in this study differs from the original design of the task, in which participants are asked to sequentially input discrete pumps, one by one, to pump up the balloons. It may be that inputting each pump separately allows anticipation to build to higher levels, leading to an increased response to reward or loss, which may in turn heighten risk-taking performance on the task.

Some studies have also analyzed the first and second half of the BART separately, to examine group differences in risky decision-making as the task progresses. A preliminary study of the BART in the current sample used this method and found that heavy drinking adolescents had a greater number of pumps on the second half of the task compared to controls, though after 5 weeks of abstinence the groups were equivalent. In addition, at baseline, the number of pumps on the second half of the task was positively correlated with the number of recent alcohol binges and the number of drinks per day. Although the lack of group differences in BART task performance within the current study allows for easier interpretation of the fMRI data, it is also important to consider the real-world generalizability of any laboratory measure of an abstract concept like risky decision-making.

Another limitation of this study, and of all fMRI studies, is that variables other than the construct of interest may have affected the magnitude of the BOLD response. For example, heavy drinkers and controls differed in their use of cannabis and nicotine. However, findings remained unchanged after controlling for cannabis use. While tobacco use was not controlled for in this study, the level of use in the heavy drinking group was relatively low, and no participants met criteria for tobacco dependence; therefore, it is unlikely that tobacco use could have accounted for variability in BOLD response. Heavy drinkers and controls also differed in terms of verbal intellectual functioning and level of externalizing problems. However, level of externalizing problems was not controlled for, primarily because the difference observed on this variable was thought to represent a naturally occurring difference between adolescents who use substances and those who do not. Cerebral blood flow was not measured in this study and is known to have some effect on BOLD response.
As resting state perfusion can affect the magnitude of the BOLD response, it is possible that differences in cerebral blood flow could explain BOLD response differences between heavy drinkers and controls.

A recent study by Jacobus and colleagues found that for heavy adolescent marijuana users, cerebral blood flow was reduced in four cortical regions and increased in one region at baseline; however, after four weeks of abstinence, no between-group differences in cerebral blood flow were found. In extrapolating these findings to adolescent heavy alcohol users, it seems possible that cerebral blood flow could have had an impact on the baseline between-group differences in BOLD response in this study, but may be less likely to impact BOLD response differences at the +2 weeks and +4 weeks time points.

The issue of test-retest reliability of the BOLD response should also be considered. There has been some controversy about this topic in the literature, as some studies have found high test-retest BOLD signal reliability, while others have reported considerable within-subject variation in BOLD signal change across scan sessions. Changes in BOLD signal response across different scan sessions for the same subject can occur for several reasons, such as fluctuating mood, anxiety, or alertness states, levels of effort, motion, scanner drift, physiological changes, and developmental maturation. Many of these variables were measured and controlled for in this study. Specifically, acute anxiety and alertness were found to be equivalent between groups and are therefore unlikely to affect results. In addition, subpar effort was controlled for by removing subjects with five or more “Too Slow” outcomes within a given scan session, as it was assumed that subjects with a high number of “Too Slow” responses were not adequately engaged in the task. Excess motion was controlled for by removing participants with more than 20% of repetitions containing excessive head motion. Field maps applied to the fMRI acquisitions also helped to stabilize the BOLD signal across scans, as this process minimizes warping and signal dropout and reduces mislocalization errors, especially in frontal regions.
Physiological changes and scanner drift were more difficult to control for and thus there is a small probability that these variables could have affected the reliability of the BOLD signal across repeated assessments in this study. Finally, the probability of Type I error in this study is likely higher than desired due to the large number of tests that were run within each hypothesis.
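The engagement and motion screens described above amount to two simple threshold rules. A minimal sketch, with illustrative function and argument names rather than the study's actual code:

```python
def session_passes_qc(n_too_slow, n_reps_high_motion, n_reps_total,
                      max_too_slow=5, max_motion_fraction=0.20):
    """Return True if a scan session survives both exclusion rules:
    fewer than five "Too Slow" outcomes (a proxy for adequate task
    engagement), and no more than 20% of repetitions flagged for
    excessive head motion."""
    engaged = n_too_slow < max_too_slow
    still_enough = (n_reps_high_motion / n_reps_total) <= max_motion_fraction
    return engaged and still_enough
```

A session with four “Too Slow” trials and 20 of 100 high-motion repetitions would be retained; five “Too Slow” trials, or 21 high-motion repetitions out of 100, would trigger exclusion.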

Although Hypotheses 1 and 2 included corrections for multiple comparisons within each ROI, there was no comparable correction for the number of comparisons examined across ROIs. Hypothesis 3 also did not employ any multiple comparison corrections, and these analyses should be considered exploratory.

Taken together, the results of this study suggest that heavy drinkers display abnormalities in neural functioning during risky decision-making compared to their non-drinking peers, particularly in the right insula during anticipation and the ventromedial prefrontal cortex during evaluation of negative outcomes. Abnormalities in these regions appear to resolve after two to three weeks of abstinence. In addition, heavy drinkers showed some changes in BOLD response across repeated assessments, which provides further support for a neural recovery hypothesis. However, even after five weeks of abstinence, heavy drinkers and controls showed some differences in neural functioning that persisted across time. This suggests that other regions of the brain may take longer to fully recover, or that there are pre-existing differences in these regions, which could represent vulnerabilities for future substance use. In addition, this study suggests that differences in neural functioning in reward-related regions can effectively predict real-world report of risk-taking behavior. These findings are important, as they suggest that neural functioning may serve as a potential biomarker for risk-taking vulnerability in the future.

Future directions for this study should first include replication within a larger sample, to increase confidence in the findings. Second, an examination of the effect of length of abstinence from alcohol prior to study entry on BOLD response in the heavy drinking group will be necessary to determine whether variability in length of abstinence may have contributed to the between-group differences observed in this study.
It would also be informative to analyze the first and second half of the BART task separately, as Hansen and colleagues did with the non-fMRI BART task. In addition, an examination of gender differences could be important, given results from previous studies suggesting that females may be especially susceptible to the effects of heavy alcohol use. Measures of hangover severity and withdrawal effects could also be investigated as possible moderating factors of group differences in BOLD response to risky decision-making, as these have been shown to relate significantly to performance on cognitive tasks in heavy alcohol users. The ultimate goal of this study, and of others like it, should be to disseminate findings to youth and families, and to provide psychoeducation through the creation of prevention materials and public service campaigns. By understanding how the brain responds to risky decision-making after recent alcohol use, and how the brain’s circuitry may “repair” itself with sustained abstinence, adolescents could become motivated to remain abstinent from alcohol, which would ultimately reduce the rates of accidents and deaths in this age group resulting from risky behavior. This work is being prepared for submission for publication as “fMRI Correlates of Risky Decision-Making in Adolescent Alcohol Users: The Role of Abstinence.” The dissertation author will be the primary author of this material along with co-authors Alan Simmons, Ph.D., Carmen Pulido, Ph.D., Susan Tapert, Ph.D., and Sandra Brown, Ph.D.
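Regarding the multiple-comparison limitation noted earlier, a standard way to correct across many ROI-level tests is Benjamini-Hochberg false-discovery-rate control. The sketch below is illustrative only; it is not the procedure used in the study:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of tests surviving Benjamini-Hochberg FDR control.

    Sort the p-values, compare the i-th smallest against alpha * i / m,
    find the largest i whose p-value falls below its threshold, and
    reject that test along with every test with a smaller p-value."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = (np.nonzero(below)[0].max() + 1) if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask
```

For example, with p-values of 0.01, 0.02, 0.03, and 0.5 at alpha = 0.05, the first three survive; unlike a Bonferroni correction, the per-test threshold grows with rank, so FDR control sacrifices less power as the number of ROI tests increases.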

Two endogenous agonists of cannabinoid receptors have been well characterized and are now widely used in research: anandamide (AEA) and 2-arachidonoylglycerol (2-AG). Both molecules derive chemically from the polyunsaturated fatty acid arachidonic acid, which is used in nature as the starting material for other important signaling compounds, such as the eicosanoids. Additional endocannabinoid-related compounds present in the body include virodhamine, which may act as an endogenous antagonist of CB1 receptors, and arachidonoylserine, which may engage an as-yet-uncharacterized cannabinoid-like receptor expressed in the vasculature. As is well known, the Cannabis plant contains more than 60 cannabinoids, which include Δ9-tetrahydrocannabinol (Δ9-THC), cannabigerol, cannabidiol, cannabinol, cannabichromene, and cannabicyclol. Attention has mostly focused on Δ9-THC because of its multiple biological properties. Nevertheless, less studied compounds such as cannabidiol may also be important, although we do not yet know at which receptors they may act to achieve their effects. Δ9-THC is the only natural cannabinoid presently used in the clinic. In addition to these plant-derived cannabinoids, an extensive set of synthetic cannabinergic agonists has been developed over the last 30 years. Products of these efforts include CP-55940, created by opening one of the rings of the tricyclic Δ9-THC structure and introducing other small changes; HU-210, a very potent cannabinoid agonist resembling some Δ9-THC metabolites; and WIN55212-2, which belongs to an altogether different class of chemicals, the aminoalkylindoles. Additionally, R-methanandamide, a metabolically stable synthetic analog of anandamide, is routinely used as a pharmacological probe to circumvent the short half-life of the natural substance.
Two important new additions to this armamentarium discussed at the workshop include a peripherally acting cannabinoid agonist in preclinical development by Novartis for the treatment of neuropathic and inflammatory pain, and BAY-387271, a centrally acting cannabinoid agonist in Phase II clinical studies for the treatment of stroke. The interest of the pharmaceutical industry in the application of cannabinoid agonists to the treatment of pain conditions is not recent; indeed, most of the compounds now in experimental use derive from such an interest. Historically, however, cannabinoid agonist development has not proved clinically fruitful, largely because of the profound psychotropic side effects of centrally active cannabinoid agonists; hence the attention given to peripherally acting cannabinoids, which exhibit significant analgesic efficacy and low central activity in animal models. Neuroprotection is a relatively new area for cannabinoid agonists, but one that appears to be already well advanced. Preclinical studies have made a convincing case for the efficacy of cannabinoid agents not only in experimental brain ischemia, but also in models of Parkinson’s disease and other forms of degenerative brain disorders.


Other factors such as gender and family history of AUDs may moderate this relationship

Cognitive theories attempting to explain adolescent risk-taking as the result of underdeveloped decision-making skills have found little, if any, support, as studies demonstrate that adolescents show an adequate understanding of the steps involved in the decision-making process, such as weighing pros and cons. In fact, children as young as 4 years old have some understanding of consequence probabilities, and adolescents and adults show equal levels of awareness of the consequences associated with risky behaviors. In some cases, adolescents may even overestimate their personal vulnerability to risk consequences compared to adults. Further, interventions designed to provide adolescents with information about the risks of substance use, drinking and driving, and unprotected sex have proved largely unsuccessful and have done little to change adolescents’ actual behavior. Simplified theories of “immature cognitive abilities” in adolescence are also inconsistent with a developmental perspective, as the increase in cognitive sophistication from childhood to adolescence would imply a decrease in risk-taking behaviors with age, rather than an increase.

Steinberg proposes an alternative view of adolescent risk-taking behavior that is rooted in developmental neuroscience. Specifically, heightened risk-taking in adolescence is described as the product of a “competition” between a socioemotional network that is sensitive to social and emotional stimuli, and a cognitive control network that is responsible for regulating executive functions such as planning, organization, response inhibition, and self-regulation. The socioemotional network relies on limbic and paralimbic structures such as the amygdala, ventral striatum, orbitofrontal cortex, ventromedial prefrontal cortex, and superior temporal sulcus, while the cognitive control network consists of lateral prefrontal and parietal cortices as well as the anterior cingulate.
During adolescence, the brain undergoes significant structural, functional, neurochemical, and hormonal changes that directly impact the development of the socioemotional and cognitive control networks, among other regions.

Specifically, synaptic pruning and myelination processes result in reduced gray matter volume and increased white matter volume by late adolescence/early adulthood. Increases in white matter during adolescence are associated with greater structural connectivity and faster, more efficient neural communication between brain regions. Evidence from neuroimaging studies indicates that dramatic changes occur in the brain’s dopaminergic system at puberty, primarily in prefrontal and striatal regions. Specifically, dopamine activity shows substantial decreases in the nucleus accumbens, an important region of the ventral striatum well known for its role in reward processing. Dopamine has been implicated as a primary mechanism of affective and motivational regulation and is linked to the socioemotional network; thus, the sudden decrease in this neurochemical creates a “dopamine void” which may compel adolescents to seek out novel and risky behaviors to compensate. Changes in brain regions associated with the cognitive control network also take place in adolescence, including gray matter decreases and white matter increases in the prefrontal cortex, and an overall increase in synaptic connections among cortical and subcortical regions of the brain. In contrast to the acute changes that occur in socioemotional regions with puberty, changes in the cognitive control network are gradual and typically continue into the mid-twenties. As a result of these timing differences in developmental brain changes, there appears to be a “timing gap” between the maturation of the socioemotional network and the maturation of the cognitive control network. Greater motivational drives for novel and rewarding experiences, combined with an immature cognitive control network, may predispose adolescents to risky behavior, including substance use.
This becomes especially relevant under conditions of high emotional arousal, where the socioemotional network is likely to become highly activated and the cognitive control network must “work harder” to override it. While neurochemical modifications and other developmental brain changes may contribute to adolescents’ increased propensity for risk-taking, there is a paradox: the brain may be especially vulnerable to the insult of alcohol during this same critical period. A handful of studies have shown a deleterious effect of heavy alcohol use on adolescents’ neuropsychological performance in varied domains, including visuospatial abilities, verbal and non-verbal retention, attention and information processing, and language and academic achievement.

Female adolescent alcohol users have also shown deficits on tasks of executive functioning, specifically those involving planning, abstract reasoning, and problem-solving. Post-drinking effects, such as hangover severity and withdrawal symptoms, have been demonstrated to be important predictors of alcohol-related neurocognitive impairment, as greater self-reported withdrawal symptoms have been linked with poorer visuospatial functioning and poorer verbal and non-verbal retention. Longitudinal studies have examined whether observed neurocognitive deficits in this population represent premorbid risk factors for use or consequences of heavy alcohol use. In one study, after controlling for recent alcohol use, age, education, practice effects, and baseline neuropsychological functioning, substance use over an 8-year follow-up period significantly predicted neuropsychological functioning at Year 8. Specifically, adolescents who reported continued heavy drinking and greater alcohol hangover or withdrawal symptoms showed impairment on tasks of attention and visuospatial functioning compared to non-using adolescents. These findings were replicated in a prospective study that characterized at-risk adolescents prior to initiating alcohol use. For females, initiation of alcohol use over the follow-up period was associated with worsening visuospatial functioning, while greater hangover symptoms over the follow-up period predicted poorer sustained attention in males. Taken together, these studies suggest that heavy drinking during adolescence is associated with deficits in cognitive performance, which likely result from, rather than predate, alcohol use. Other factors, such as gender and family history of AUDs, may moderate this relationship: evidence suggests that females may be more vulnerable to the negative impact of heavy alcohol use in adolescence, and a positive family history of AUDs has been associated with worse neurocognitive performance in adolescent heavy alcohol users, particularly in language and attention domains.
Structural magnetic resonance imaging studies provide evidence for anatomical brain abnormalities in adolescents with histories of heavy lifetime alcohol use, compared to their non-using peers.

The hippocampus appears to be one area of potential vulnerability, as decreased bilateral hippocampal volumes have been observed in adolescents meeting criteria for AUDs, with smaller hippocampi related to earlier onset and longer duration of the disorder. Nagel, Schweinsburg, Phan, and Tapert found similar results, with smaller left hippocampal volumes observed in heavy alcohol-using adolescents compared to controls, even after excluding teens with co-occurring Axis I disorders. Hippocampal volume did not correlate with degree of alcohol use in this study, suggesting that between-group differences may be reflective of premorbid factors and not solely the result of heavy alcohol use. Another area of the brain that may be especially vulnerable to the effects of heavy alcohol use in adolescence is the prefrontal cortex. As a key component of both the cognitive control and socioemotional networks, this region is important to the study of risk-taking. In a sample of adolescents with co-occurring psychiatric disorders and AUDs, DeBellis and colleagues found significantly smaller prefrontal cortex volumes in alcohol users compared to controls. These findings were replicated by Medina and colleagues in a sample of alcohol-dependent adolescents without psychiatric disorders; however, a significant group-by-gender interaction was observed. Specifically, alcohol-dependent females showed smaller prefrontal cortex and white matter volumes than female controls, while alcohol-dependent males showed larger prefrontal and white matter volumes than male controls. In a cortical thickness study of adolescent binge drinkers, Squeglia, Sorg, and colleagues found alcohol use by gender interactions in four left frontal brain regions, where female binge drinkers had thicker cortices than female controls and male binge drinkers had thinner cortices than male controls.
Thicker frontal cortices corresponded with poorer visuospatial, inhibition, and attention abilities for females, and worse attention abilities for males, providing further evidence that females may be especially vulnerable to brain changes brought on by heavy alcohol use in adolescence. Diffusion tensor imaging studies have yielded corroborating evidence of altered brain development in adolescent heavy alcohol users. In one study, adolescents with histories of binge drinking showed decreased white matter integrity in 18 major fiber tract pathways, specifically in frontal, cerebellar, temporal, and parietal regions. Another study found reduced white matter integrity in the corpus callosum of youth with AUDs, particularly in its posterior aspect. In addition, reduced white matter integrity in this region was related to longer durations of heavy drinking, larger quantities of recent alcohol consumption, and greater alcohol withdrawal symptoms.

There is evidence that poorer white matter integrity may be both a consequence of adolescent alcohol use and a predisposing risk factor for use. Specifically, in a study of 11- to 15-year-old alcohol-naïve youth, Herting, Schwartz, Mitchell, and Nagel found that youth with a positive family history of AUDs had poorer white matter integrity in several brain regions, along with slower reaction time on a task of delay discounting, when compared to youth without a family history of AUDs. In addition, Jacobus, Thayer, Trim, Bava, and Tapert found that poorer white matter integrity measured in 16- to 19-year-old adolescents was related to more self-reported substance use and delinquency/aggression at an 18-month follow-up. In fMRI studies, altered neural processing has been observed in heavy drinking adolescents during cognitive tasks of spatial working memory (SWM), verbal encoding, and visual working memory (VWM). Tapert and colleagues found that adolescents with a history of heavy drinking over the past 1-2 years showed increased blood oxygen level-dependent (BOLD) response in bilateral parietal regions during a SWM task, but decreased BOLD activation in occipital and cerebellar regions, compared to lighter drinkers. In addition, BOLD activation abnormalities were associated with more withdrawal and hangover symptoms and greater lifetime alcohol consumption. Similarly, in a study of verbal encoding, Schweinsburg, McQueeny, Nagel, Eyler, and Tapert showed that adolescent binge drinkers had more BOLD response in right superior frontal and bilateral posterior parietal regions but less BOLD response in the occipital cortex, compared to non-drinkers. Control adolescents also showed significant activation in the left hippocampus during novel encoding, whereas binge drinkers did not. A 2011 follow-up to this investigation found increased dorsal frontal and parietal BOLD response among 16- to 18-year-old binge drinkers, and decreased inferior frontal response, during verbal encoding.
Squeglia, Pulido, and colleagues found comparable results during a VWM task, in that heavy drinking adolescents showed more BOLD response than matched controls in right inferior parietal, right middle and superior frontal, and left medial frontal regions, but less BOLD response in left middle occipital regions. Notably, this investigation included a longitudinal component with a separate sample of adolescents, in which the brain areas showing group differences in BOLD response to the VWM task were identified as ROIs. Adolescents were scanned at baseline, before they ever used alcohol or drugs, and then scanned again at a 3-year follow-up time point. Adolescents from this sample who transitioned into heavy drinking during the follow-up period showed less BOLD response to the VWM task compared to continuous non-drinkers in frontal and parietal regions at baseline; in addition, BOLD response in these regions increased significantly over the follow-up period for the heavy drinkers, while controls’ BOLD response did not change significantly over time. Finally, less BOLD activation at baseline predicted subsequent substance use, above and beyond age, family history of AUDs, and baseline externalizing behaviors. Taken together, results from these studies suggest that the adolescent brain is indeed sensitive to the insult of excessive alcohol use, and structural alterations and neural reorganization may result from continued heavy drinking. In turn, this altered brain development may trigger cognitive, emotional, and behavioral changes, leading to further alcohol use and other risk-taking behaviors. As the majority of fMRI studies of adolescent alcohol users to date are cross-sectional in nature, it is difficult to determine whether the observed neural abnormalities predate the onset of alcohol use or are consequences of alcohol use. However, results of the Squeglia, Pulido et al. study suggest that a combination of both explanations may be most accurate.
Specifically, neural functioning differences may be evident prior to the initiation of drinking, but early alcohol use may also change the trajectory of normative brain development observed in adolescence, leading to less efficient neural processing over time.


Similar findings were also reported in genetic mouse models of diabetic nephropathy

Although not observed under controlled conditions, MDMA use beyond research settings has been associated with serotonin syndrome (SS) in case reports and toxicology studies. The vast majority of SS clinical case reports in the published literature involve a combination of two or more serotonergic agents, including various classes of antidepressants and other medications with serotonergic activity such as opioids, antibiotics, antihistamines, and atypical antipsychotics. Given the high percentage of the PTSD population for whom serotonin-modulating therapeutics are prescribed, and the high prevalence of other PTSD-comorbid conditions treated with serotonergic drugs, including substance use, depression, anxiety, sleep, and pain disorders, further exploration of MDMA-related adverse event reports from the drug safety surveillance database in the FDA Adverse Event Reporting System (FAERS) is warranted. In this study, we evaluated individual cases listing MDMA use associated with SS and reported to FAERS through MedWatch. We evaluated reports for the presence of MDMA as the sole reported compound, and for the presence of any additional substances or medications, particularly those that might increase the risk of SS due to their inherent serotonergic activity.

The kidneys play a central role in normal body homeostasis through a variety of functions, including removal of byproducts of metabolism, clearance of toxins, regulation of body volume status, electrolytes, and systemic hemodynamics, and production of hormones such as erythropoietin and active vitamin D. Hence, it is not surprising that kidney damage is associated with significant morbidity and mortality. The latter is true whether the decline in renal function is part of an acute process, such as acute kidney injury (AKI) due to tubular necrosis, or a more chronic process, such as chronic kidney disease (CKD) caused by hypertension or diabetes. Furthermore, the mechanisms responsible for renal injury are complex and can be varied.

While these mechanisms are regularly categorized based on the type of injury and the anatomic part of the nephron affected, there is significant overlap between these categories. For instance, there is evidence indicating that AKI can result in CKD. In addition, there is frequent overlap between the different anatomic sites of injury, given that damage to one part of the nephron over a period of time can result in injury to other sites. For example, while diabetic kidney disease often manifests with glomerular injury and proteinuria, over an extended period of time it also results in tubulointerstitial damage and fibrosis, leading to progressive CKD and end-stage kidney disease. Therefore, understanding the underlying pathways whose alterations can result in various forms of renal damage and injury can play an important role in devising effective therapies to prevent and treat kidney disease. In this regard, there is accumulating evidence that the endocannabinoid (EC) system plays a major role in normal renal physiology. In addition, there are data demonstrating that alterations of this pathway can contribute to the pathogenesis of both acute and chronic kidney disease. Therefore, evaluation of the EC system is a promising area of discovery, which may result in the generation of potentially novel therapies aimed at treating various forms of kidney disease.
The EC system comprises endogenous fatty acid-derived ligands, their receptors, and the enzymes required for their biosynthesis and degradation. The most well-characterized ECs are N-arachidonoyl ethanolamide, also known as anandamide (AEA), and 2-arachidonoyl-sn-glycerol (2-AG). These lipid-derived molecules are generated on demand by the metabolism of membrane phospholipids in response to various stimuli, including elevated intracellular calcium or metabotropic receptor activation. After production, they bind to local cannabinoid receptors in an autocrine or paracrine manner, although measurable concentrations of these ligands can also be found in the blood, cerebrospinal fluid, and lymph. While the potential endocrine actions of these ECs remain an area of active research, it is well established that they act locally by binding two widely studied cannabinoid receptors, cannabinoid receptor subtype-1 (CB1) and subtype-2 (CB2).

AEA and 2-AG can subsequently be taken up by cells through a high-affinity uptake mechanism and rapidly degraded through the action of the enzymes fatty acid amide hydrolase (FAAH) and monoacylglycerol lipase (MAGL), respectively. While the EC system was initially a focus of extensive research in the central nervous system, over the course of the past two decades a significant number of studies have confirmed its presence and importance in peripheral organs, including the kidneys. In this regard, substantial concentrations of ECs, the machinery required for their biosynthesis and degradation, as well as CB receptors have been detected in kidney tissue. The effects produced by the actions of this system in the normal and pathological conditions of the kidney, however, have not been fully delineated, given the many complexities involved in the production and breakdown of EC ligands. In addition, the differential distribution and actions of the CB1 and CB2 receptors in various structures and cell subtypes in the kidney can ultimately result in varied signaling outcomes whose overall impact is difficult to predict. Accordingly, identifying the physiologic and pathophysiologic roles of the EC system in nephrology remains an active area of exploration. CB1 and CB2 belong to a class of seven-transmembrane-domain G-protein-coupled receptors that are functionally dependent on the activation of heterotrimeric Gi/Go proteins. Although activation of both receptors results in inhibition of the adenylyl cyclase enzyme and increased activity of mitogen-activated protein kinase (MAPK), CB1 activation has also been shown to stimulate nitric oxide synthase and directly control the activation of ion channels. 
The latter include the inwardly rectifying and A-type outward potassium channels, D-type outward potassium channels, and N-type and P/Q-type calcium channels. Despite the common G-protein subunit shared between CB1 and CB2 receptors, their activation can produce opposing biological effects in normal and diseased states, in part due to the abundance and localization of these cannabinoid receptors and their EC ligands. While the CB1 receptor was initially thought to be localized to the central and peripheral nervous system, it has been shown to be present in peripheral organs such as the kidneys. For instance, the presence of a functional CB1 receptor has been demonstrated in proximal convoluted tubules, distal tubules, and intercalated cells of the collecting duct in the human kidney.

Furthermore, CB1 receptor expression has also been found in other parts of the nephron in rodents, such as the afferent and efferent arterioles, thick ascending limbs of the loop of Henle, and glomeruli, as well as in various kidney cell subtypes such as glomerular podocytes, tubular epithelial cells, and cultured mesangial cells. Similarly, the expression of CB2 receptors, although previously thought to be predominantly in immune cells, has also been demonstrated in renal tissue. For example, CB2 receptor expression has been localized to podocytes, proximal tubule cells, and mesangial cells in human and rat renal cortex samples. In addition to the differential expression of CB receptors in different tissues and cells, the complex regulation of the biosynthesis and degradation of the kidney’s high basal levels of ECs through downstream enzymes contributes to the varied signaling effects of these ligands. While the renal cortex displayed similar levels of AEA and 2-AG, AEA was demonstrated to be enriched in the kidney medulla compared with the cortex, while the levels of 2-AG in the medulla were similar to those of both ECs in the cortex. Moreover, AEA is present in cultured renal endothelial and mesangial cells at low levels and can be synthesized from arachidonic acid and ethanolamine and catabolized by AEA amidase in these kidney cell subtypes. The expression of FAAH was shown to be augmented in the renal cortex in comparison to its low expression levels in the medulla. Considering the diverse localization of the ECs and their receptors, as well as the complexities involved in their synthesis and catabolism, this system can play various roles in kidney function. Under normal conditions, the EC system is capable of regulating renal homeostasis, as demonstrated by its control over renal hemodynamics, tubular sodium reabsorption, and urinary protein excretion. These effects are largely imparted through the activation of the CB1 receptor. 
In the following sections, we describe some effects of EC system activation on renal physiologic function. Under normal physiologic conditions, the EC system plays a critical role in the regulation of renal hemodynamics. For instance, it was shown that intravenous administration of AEA decreased glomerular filtration rate (GFR) and increased renal blood flow in rodents, independent of changes in blood pressure. In vitro studies showed that AEA can vasodilate juxtamedullary afferent or efferent arterioles through a CB1-dependent process, normally inhibited by nitric oxide synthase, to regulate GFR. The actions of the AEA signaling system are likely conducted through endothelial and mesangial cells, which are capable of producing and metabolizing AEA, as well as through the hyperpolarization of smooth muscle cells via the activation of potassium channels. It should be noted that there are also non-CB1 receptor–dependent mechanisms by which ECs can mediate a vasodilatory effect and thereby regulate renal hemodynamics. Future studies need to further elucidate the role of the latter mechanisms in normal renal physiologic homeostasis. It is well known that diabetes has major renal complications, including progressive kidney disease and pathology, a condition known as diabetic nephropathy.

Diabetic nephropathy is characterized by glomerular hypertrophy and hyperfiltration, which can result in albuminuria, renal fibrosis, GFR decline, and end-stage renal disease. Several studies have examined the role of the EC system in diabetes-related podocyte, mesangial, and tubular cell injury, as well as the effect of CB receptor activation on the adverse outcomes of diabetic nephropathy. The evaluation of mouse models of diabetic kidney disease and renal tissue from humans with advanced diabetic nephropathy has shown elevated levels of CB1 receptor expression in the kidney, in particular in glomerular podocytes and mesangial cells. In addition, in vitro studies have shown CB1 receptor upregulation with exposure to increased glucose and albumin concentrations in mesangial cells and proximal tubule cells, respectively. Furthermore, the CB1 receptor has been found to be overexpressed in glomerular podocytes in experimental mice with diabetic nephropathy. The potential consequences of the latter changes were shown in another study, which found that hyperlipidemia, as induced by diabetic nephropathy, can be associated with palmitic acid–induced apoptosis in proximal tubular cells; these actions are mediated through upregulated CB1 receptor expression. Given the evidence indicating a deleterious role for the CB1 receptor in diabetic nephropathy, several studies have investigated the utility of CB1 antagonists/inverse agonists as a potential therapeutic option for diabetic kidney disease. In a streptozotocin (STZ)-induced mouse model of diabetic nephropathy, albuminuria was reduced as a result of CB1 receptor blockade with a selective CB1 receptor antagonist. A marked reduction in proteinuria occurred through the preservation of glomerular podocytes and restoration of the expression of the podocyte proteins nephrin, podocin, and zonula occludens-1. 
In addition, CB1 antagonism was also found to be associated with decreased glomerular and proximal tubular apoptosis, ultimately leading to improvements in renal function. In Zucker diabetic fatty rats, which develop type 2 diabetes due to obesity caused by a dysfunctional leptin receptor, chronic administration of a CB1 receptor inverse agonist restored GFR, reduced proteinuria, and improved markers of podocyte health through modulation of the renin–angiotensin system and inhibition of apoptosis. While diabetic kidney disease is associated with increased expression of the CB1 receptor in various parts of the nephron, there is also evidence that CB2 receptor expression is significantly reduced. For example, STZ-induced diabetic nephropathy in mice is associated with the downregulation of glomerular podocyte CB2 receptor expression. Similarly, there is decreased expression of the CB2 receptor in proximal tubule cells following exposure to elevated concentrations of albumin and glucose. Furthermore, CB2 receptor activation has been shown to ameliorate albuminuria, restore podocyte protein expression, reduce monocyte infiltration, and decrease the expression of renal profibrotic markers in rats with obesity-related nephropathy. CB2 agonism in the obese diabetic nephropathy BTBR ob/ob mouse strain also reduced albuminuria, ameliorated dysfunctional nephrin expression in podocytes, and reduced mesangial matrix expansion, fibronectin accumulation, and sclerotic damage. These studies demonstrate that antagonism of CB1 receptors and activation of CB2 receptors using selective pharmacological ligands is associated with the restoration of renal structure and function, specifically albuminuria and the expression of inflammatory markers, in genetic and experimental models of diabetic nephropathy. Obesity is associated with and acts as a risk factor for the development of diabetic nephropathy, with obese individuals possessing a higher risk of progressing to end-stage renal disease.


Studies of typically developing adolescents show increases in FA and decreases in MD

Conversely, both CPD and ND were negatively genetically associated with hundreds of other diseases in BioVU, including those known to be associated with smoking, such as chronic airway obstruction, lung cancer, and metabolic diseases. Most of the associations between CPD or ND and psychiatric disorders were attenuated, and no longer significant, when we adjusted for TUD or AUD. We repeated our analyses using a quantitative measure of ND and obtained very similar findings. All pairs of PRSs showed significant correlations, except for AUDIT-C PRS and ND PRS. All r coefficients were positive, the strongest association being between CPD and FTND, except for AUDIT-C’s associations with CPD and with FTND, which were negative. The current study examines smoking and alcohol consumption phenotypes as genetic surrogates for nicotine dependence and alcohol misuse, respectively, using PRSs constructed from well-powered GWAS in the UKB and other population-based non-UKB cohorts. In applying the PRSs to a large pheWAS, we found that smoking consumption was a good proxy for dependence, but alcohol consumption was not a good proxy for alcohol misuse. Ascertainment bias may explain some of the inverse genetic correlations between alcohol consumption and, for example, obesity and type 2 diabetes. UKB and other similar collections based on voluntary participation tend to capture individuals who are relatively healthy and who have both the means and opportunity to participate, resulting in an overrepresentation of data from individuals with higher education levels, socioeconomic status, and alcohol consumption than the general population but, crucially, lower levels of metabolic disorders and problem drinking. Importantly, both alcohol consumption and alcohol misuse were measured in UKB. 
Thus, the difference between alcohol consumption and misuse could indicate that the genetic overlap between alcohol consumption and AUD depends on the specific patterns of drinking.
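Since the comparisons above all rest on polygenic risk scores, it may help to make explicit what a PRS computes: in its simplest additive form, it is a sum over variants of the GWAS effect size multiplied by the individual's effect-allele dosage. The sketch below is illustrative only; the variant IDs and effect sizes are hypothetical and not drawn from the GWAS discussed here.

```python
def polygenic_score(dosages, weights):
    """Simple additive polygenic risk score for one individual.

    dosages: {variant_id: copies of the effect allele carried (0, 1, or 2)}
    weights: {variant_id: per-allele effect size (beta) from the discovery GWAS}
    Variants absent from the individual's genotype contribute 0.
    """
    return sum(beta * dosages.get(variant, 0) for variant, beta in weights.items())

# Hypothetical effect sizes and genotypes, for illustration only
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(dosages, weights))
```

In practice such scores are built from thousands to millions of variants (typically with tools such as PLINK or PRS-CS after clumping or shrinkage of the GWAS weights), then standardized before association testing, but the core weighted-sum logic is as above.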

For example, Polimanti et al identified a positive genetic correlation between alcohol dependence and alcohol drinking quantity, but not frequency. Similarly, Marees et al showed that high alcohol consumption frequency was associated with high socioeconomic status and low risk of substance use disorders and other psychiatric disorders, whereas the opposite applied to high alcohol consumption quantity. Furthermore, these genetic correlations may be dissimilar to those observed when analyzing alcohol consumption in alcohol-dependent individuals; such studies have yet to be performed. Notably, even though studying alcohol consumption has shown some utility, it is apparent that this phenotype, measured in volunteer collections, is not an optimal proxy for AUD. Similar observations have recently been described for cannabis use versus disorder with regard to proxy measures of psychosocial and anthropometric indices. Initial stages of recreational use may be etiologically distinct from later stages of pathological use for commonly used substances such as alcohol and cannabis, with only the latter stages of dependence and abuse indexing vulnerability to psychiatric impairment. Whereas nicotine use may be more addictive, alcohol use, particularly as measured in population-based cohorts such as the UKB, may represent a social habit. In contrast with alcohol, the genetic correlation between smoking consumption and ND was almost identical, and both scores showed similar patterns of genetic association with psychiatric and smoking-related comorbid diseases. We speculate that consumption phenotypes represent distinct indices of use depending on the drug: cigarette smoking may be a more accurate phenotype than drinks consumed, in addition to being a better index of problematic use, such that the quantity of cigarettes smoked may reflect ND. Indeed, CPD is a major component of standard measures used to define ND, most notably the Fagerström Test for ND. 
Although studying the genetics of ND alongside other smoking traits is key to gaining a better understanding of the neurobiological processes that influence the trajectory of smoking behaviors and their treatment implications, our findings suggest that smoking consumption phenotypes measured in volunteer cohorts can capture relevant sources of genetic information applicable to later stages of dependence and abuse.

Our analyses are not without limitations. We lack information regarding the potencies of the cigarette and alcohol products used by individuals in the discovery and target samples. High-potency substance use is associated with increased severity of dependence, especially at younger ages. Relatedly, our results may be restricted to PRSs calculated in populations with low levels of alcohol-related problems, like UKB. Furthermore, although we used a proxy measure of alcohol misuse instead of a clinical diagnosis of AD, the pheWAS findings using problematic alcohol use in BioVU suggest that the results of AUDIT-P PRS are similar to AD PRS. In addition, results from our sensitivity analyses revealed that the associations were slightly attenuated after diagnoses of AUD or TUD were included as covariates. It is plausible that many of the relationships between alcohol misuse, ND, and psychopathology detected in BioVU may be consequences of an AUD or TUD diagnosis rather than due to shared genetic risk. Alternatively, these associations could reflect pleiotropy or be a causal effect of persistent or pathological alcohol/nicotine use. The reduction in the number of statistically significant associations after adjustment for AUD or TUD may imply shared genetic liability between these disorders and comorbid psychopathology, or simply that our correction for AUD or TUD was too stringent, considering that most of the effect sizes were essentially unchanged and that we expected some degree of collinearity between AUD/TUD diagnosis and the PRSs that we calculated. Future studies should aim to explore causal mechanisms. Lastly, our estimates of genetic overlap may be sensitive to environmental factors, for example when comparing results from UKB to younger cohorts. In summary, we performed a pheWAS of consumption and dependence/misuse polygenic scores. 
We conclude that smoking consumption measured in healthy volunteer cohorts is a powerful proxy for genetic studies of ND. For alcohol consumption, by using multivariate approaches that give statistically derived weights to alcohol phenotypes or by including further restrictions on the study cohort, we may be able to mitigate some of the inverse associations between alcohol consumption and poor health and, in doing so, realize the full potential of alcohol consumption phenotypes as proxies for AUD.

Moreover, as a collateral finding, we identified very robust associations between well-characterized measures of alcohol and nicotine consumption, misuse, and clinical diagnoses from a real-world medical-center setting. This series of analyses demonstrates the value of using broad electronic health record measures for genetic studies of substance use disorders. Adolescence is a time of subtle, yet dynamic brain changes that occur in the context of major physiological, psychological, and social transitions. This juncture marks a gradual shift from guided to independent functioning that is paralleled by the protracted development of brain structure. Growth of the prefrontal cortex, limbic system structures, and white matter association fibers during this period is linked with more sophisticated cognitive functions and emotional processing, useful for navigating an increasingly complex psychosocial environment. Despite these developmental advances, increased tendencies toward risk-taking and heightened vulnerability to psychopathology are well known within the adolescent milieu. Owing in large part to progress and innovation in neuroimaging techniques, appreciable levels of new information on adolescent neurodevelopment are breaking ground. The potential of these methods to identify biomarkers for substance problems and targets for addiction treatment in youth is of significant value when considering the rise in adolescent alcohol and drug use and the decline in perceived risk of substance exposure. What are the unique characteristics of the adolescent brain? What neural and behavioral profiles render youth at heightened risk for substance use problems, and are neurocognitive consequences of early substance use observable? Recent efforts have explored these questions and brought us to a fuller understanding of adolescent health and interventional needs. 
This paper will review neurodevelopmental processes during adolescence, discuss the influence of substance use on neuromaturation as well as probable mechanisms by which these substances influence neural development, and briefly summarize factors that may enhance risk-taking tendencies. Finally, we will conclude with suggestions for future research directions. The developmental trajectory of grey matter follows an inverted parabolic curve, with cortical volume peaking, on average, around ages 12–14, followed by a decline in volume and thickness over adolescence. Widespread supratentorial diminutions are evident, but show temporal variance across regions. Declines begin in the striatum and sensorimotor cortices, progress rostrally to the frontal poles, then end with the dorsolateral prefrontal cortex, which is also late to myelinate. Longitudinal charting of brain volumetry from 13–22 years of age reveals specific declines in medial parietal cortex, posterior temporal and middle frontal gyri, and the cerebellum in the right hemisphere, coinciding with previous studies showing these regions to develop late into adolescence. Examination of developmental changes in cortical thickness from 8–30 years of age indicates a similar pattern of nonlinear declines, with marked thinning during adolescence. Attenuations are most notable in the parietal lobe, and followed in effect size by medial and superior frontal regions, the cingulum, and occipital lobe.

The mechanisms underlying cortical volume and thickness decline are suggested to involve selective synaptic pruning of superfluous neuronal connections, reduction in glial cells, decrease in neuropil, and intra-cortical myelination. Regional variations in grey matter maturation may coincide with different patterns of cortical development, with allocortex, including the piriform area, showing primarily linear growth patterns, compared to transition cortex demonstrating a combination of linear and quadratic trajectories, and isocortex demonstrating cubic growth curves. Though the functional implications of these developmental trajectories are unclear, isocortical regions undergo more protracted development and support complex behavioral functions. Their growth curves may reflect critical periods for the development of cognitive skills as well as windows of vulnerability for neurotoxic exposure or other developmental perturbations. In contrast to grey matter reductions, white matter across the adolescent years shows growth and enhancement of pathways. This is reflected in white matter volume increase, particularly in fronto-parietal regions. Diffusion tensor imaging (DTI), a neuroimaging technique that has gained widespread use over the past decade, relies on the intrinsic diffusion properties of water molecules and has afforded a view into the more subtle micro-structural changes that occur in white matter architecture. Two common scalar variables derived from DTI are fractional anisotropy (FA), which describes the directional variance of diffusional motion, and mean diffusivity (MD), an indicator of the overall magnitude of diffusional motion. These measures index relationships between signal intensity changes and underlying tissue structure, and provide descriptions of white matter quality and architecture. 
High FA reflects greater fiber organization and coherence, myelination, and/or other structural components of the axon, and low MD values suggest greater white matter density. These trends continue through early adulthood in a nearly linear manner, though recent data suggest an exponential pattern of anisotropic increase that may plateau during the late teens to early twenties. Areas with the most prominent FA change during adolescence are the superior longitudinal fasciculus, superior corona radiata, thalamic radiations, and posterior limb of the internal capsule. Other projection and association pathways, including the corticospinal tract, arcuate fasciculus, cingulum, corpus callosum, superior and mid-temporal white matter, and inferior parietal white matter, show anisotropic increases as well. Changes in subcortical and deep grey matter fibers are more pronounced, with less change in compact white matter tracts comprising highly parallel fibers such as the internal capsule and corpus callosum. Fiber tracts constituting the fronto-temporal pathways appear to mature relatively later, though comparison of growth rates among tracts comes largely from cross-sectional data that present developmental trends. The neurobiological mechanisms contributing to FA increases and MD decreases during adolescence are not entirely understood, but examination of the underlying diffusion dynamics points to some probable processes. For example, decreases in radial diffusivity (RD), diffusion that occurs perpendicular to white matter pathways, suggest increased myelination, axonal density, and fiber compactness, but have not been uniformly observed during adolescence. Similarly, changes in axial diffusivity (AD), diffusion parallel to the fibers’ principal axis, show discrepant trends, with some studies documenting decreases, and others increases, in this index.
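Because FA, MD, and the radial and axial diffusivities recur throughout this discussion, it may help to see how each is computed from the three eigenvalues of the diffusion tensor. This is a minimal sketch using the standard formulas; the example eigenvalues (in mm²/s) are illustrative values chosen here, not data from any study cited above.

```python
import math

def dti_scalars(evals):
    """Compute the common DTI scalar measures from the diffusion tensor's
    three eigenvalues (l1 >= l2 >= l3, in mm^2/s).

    Returns (MD, FA, RD, AD):
      MD - mean diffusivity, the overall magnitude of diffusion
      FA - fractional anisotropy, 0 (isotropic) to 1 (fully anisotropic)
      RD - radial diffusivity, perpendicular to the principal fiber axis
      AD - axial diffusivity, along the principal fiber axis
    """
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den) if den > 0 else 0.0
    rd = (l2 + l3) / 2.0
    ad = l1
    return md, fa, rd, ad

# Isotropic diffusion (e.g., free water): FA near 0
print(dti_scalars((3.0e-3, 3.0e-3, 3.0e-3)))
# Strongly directional diffusion (coherent white matter): FA nearer 1
print(dti_scalars((1.7e-3, 0.3e-3, 0.3e-3)))
```

The developmental pattern described above, rising FA with falling MD, thus corresponds to diffusion becoming more directional (larger spread among the eigenvalues) while its overall magnitude decreases.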


There are encouraging examples of sound policy at both the federal and state levels

As SUDs, particularly those involving opioids, increasingly affect pregnant women and their families, it is important to better understand how state policy environments with respect to substance use in pregnancy have evolved and the nature of policies being enacted by states. Professional societies and federal agencies universally endorse supportive policies and oppose punitive policies. Statements from the American College of Obstetricians and Gynecologists, the American Academy of Pediatrics, the Centers for Disease Control and Prevention, the Substance Abuse and Mental Health Services Administration, the American Nurses Association, and several others all warn that policies penalizing pregnant women and imposing negative consequences for disclosing substance use to health care providers increase the fear of legal penalties and discourage women from seeking prenatal care and addiction treatment during pregnancy. Guidance documents and professional society committee opinions further suggest that punitive policies may lead to disengagement from care and poor pregnancy outcomes, although few studies have examined this issue. Expert consensus is grounded in the view of substance misuse in pregnancy as a medical condition requiring integrated care for both the pregnancy and the SUD, and in the recognition that supportive policies reduce barriers to care. For example, punitive policies enacted, in part, to reduce neonatal opioid withdrawal syndrome (NOWS) have had the opposite effect: infants born in states that implemented policies punishing pregnant women for substance use had higher rates of NOWS than those born in states without such policies. The changes in state policy environments with respect to substance use in pregnancy from 2000 to 2015 are detailed in the maps in Figure 1. 
Six types of relevant policies were examined: those that define substance use in pregnancy as child abuse or neglect, criminalize it, or consider it grounds for civil commitment; mandate testing of infants with suspected prenatal substance exposure or pregnant women with suspected substance use; require reporting of suspected prenatal substance use to officials at local health and human services departments; create or fund targeted programs for pregnant and postpartum women with SUDs; prioritize pregnant women’s access to SUD treatment programs; and prohibit discrimination against pregnant women in publicly funded SUD treatment programs.

Consistent with prior work and others’ approach, policies imposing legal consequences for substance use or requiring health professionals to test for or report suspected substance use to authorities were considered punitive. Policies reducing barriers for pregnant women with SUD or expanding treatment were considered supportive. If a state enacted a policy with both punitive and supportive components, it was considered to have a mixed policy environment. Enactment dates were obtained from the Guttmacher Institute and supplemented with information from the National Conference of State Legislatures, ProPublica, and published studies retrieved through a targeted literature review. In addition, state statutes were reviewed to capture language illustrative of policy categories. Box 1 shows an example punitive policy enacted in North Dakota in 2003, and Box 2 contains a supportive policy enacted in Kentucky in 2015. Figure 1 shows substantial state policy activity in this area, with more states adopting punitive policies than supportive policies. This increase, from 18 states with at least one punitive policy in 2000 to 33 states in 2015, was primarily driven by states adopting policies considering substance use in pregnancy to be child abuse, grounds for civil commitment, or a criminal act, as well as policies requiring healthcare professionals to report suspected prenatal drug use. By 2015, states with only punitive policies had increased from 6 to 8, while states with only supportive policies declined from 17 to 8. States with both types of policies doubled from 12 in 2000 to 25 by 2015, and only 10 states had no policies specific to substance use in pregnancy in 2015, down from 16 in 2000. While it is encouraging that 28 states had supportive policies in 2000, only 4 additional states adopted supportive policies in the subsequent 15 years. 
The maps in Figure 1 are consistent with a pattern described in 1998 of more states enacting punitive policies than policies expanding treatment for women with SUD, and they echo the punitive approaches taken toward women with crack cocaine use in the 1980s and 1990s. These policies disproportionately affected Black women and women living in poverty, and continue to do so today. While the government’s current approach to substance use in the general population is “remarkably less punitive” than its approach a few decades ago, it has recently been observed that “…pregnancy may represent an exception to the overall national willingness to treat the opioid epidemic as an issue of public health and not of law enforcement.” In addition, as one journalist put it, “There’s a growing consensus in the U.S. that drug addiction is a public health issue, and sufferers need treatment, not prison time. But good luck if you are pregnant.” Despite overwhelming consensus on the principle of a non-punitive approach toward substance use in pregnancy, the increase in punitive policies over the past two decades suggests that the gap between principles and practice is widening. What is needed is a holistic, public health– and prevention-oriented approach to substance use in pregnancy, consistent with the statements in Table 1.

Imagine for a moment that pregnant women with diabetes, epilepsy, or major depressive disorder, all of which are chronic medical conditions that confer some level of risk to the fetus, faced criminal charges and imprisonment if convicted of harming their infants. These examples illustrate just how differently many in the public and medical community view addiction. Addiction is a chronic medical condition, but pregnancy is a temporary period in the life course of a woman dealing with the recurring and remitting illness of addiction. Yet, too often, policies, health systems, and health services are designed to engage individuals in treatment only during pregnancy, which is insufficient. Instead, women with SUD should be engaged throughout their life course. Women with SUD need comprehensive, coordinated, evidence-based, trauma-informed, family-centered care not only during the 40 or so weeks of pregnancy but in the preconception, postpartum, and inter-conception periods, as well as throughout the life course for those not able to or not choosing to have children. This care should be delivered in a compassionate and non-punitive environment, and clinicians, policymakers, and public health officials all have a role to play in achieving this goal. For example, recent federal legislation takes a much-needed public health approach to this issue, building on prior efforts to address gaps in the continuum of care for women who are pregnant and postpartum and strengthening Plans of Safe Care for infants with prenatal substance exposure. There has been a slow but noticeable shift in federal policy language toward less stigmatizing terminology and “people-first” language, such as an “individual in recovery” as opposed to a “drug addict,” and replacing “NAS baby” with “infant experiencing withdrawal.” Certain states are taking a dyadic approach to the challenge of mothers and infants affected by opioids. Medicaid policy levers have also shown promise. 
In Virginia, the Addiction and Recovery Treatment Services (ARTS) program, launched in 2017 to increase access to services for Medicaid members with SUDs, increased residential treatment capacity and removed the 16-bed reimbursement limit, which was a barrier to children and mothers remaining together during the mother’s treatment. ARTS successfully increased the percentage of pregnant women with SUDs receiving treatment from 2% to 18% a year after implementation. Further research is needed to examine factors that may influence state-level variation in both the implementation and impact of different policy responses to substance use among pregnant women, but these are promising models. It is also encouraging that both federal and state policymakers are testing innovative ways to expand SUD treatment for women who are pregnant and parenting, including through telehealth and through telementoring and remote capacity building based on the Project ECHO model. Importantly, public health and health systems are collaborating to address the often-overlooked “fourth trimester,” the vulnerable early postpartum period in which much of the support and services a pregnant woman was eligible for rapidly fall away.

Finally, the recommendation by multiple professional societies to extend postpartum Medicaid coverage to one year postpartum is garnering much-needed attention from policymakers. In conclusion, effectively addressing SUD, including opioid misuse, among pregnant women is a pressing public health issue, given both the dramatic increase in NOWS and the deleterious effects of untreated maternal opioid use disorder on both mothers and young children. Policymakers are aware of this issue, given the rapid pace of enacting policies addressing substance use in pregnant women. However, the greater increase in punitive compared to supportive policies is a concern. Better understanding how policies related to prenatal substance use affect maternal and child outcomes is essential as decision makers seek to best support pregnant women with SUDs. Women who use substances during pregnancy are at increased risk for poor perinatal outcomes, including preterm labor, low birth weight, congenital abnormalities, and stillbirths, and there can be additional long-lasting physical, mental, behavioral, and neurodevelopmental consequences for their children. Recognizing prenatal substance use as a primary cause of preventable birth defects, US guidelines consider substance use screening and referral to be essential components of prenatal care. Alcohol and nicotine are among the substances most commonly used by women before and during pregnancy. Prenatal alcohol use is associated with structural impairments, increased risk for adverse birth outcomes, fetal alcohol spectrum disorder and fetal alcohol syndrome, and neurodevelopmental problems in childhood. Prenatal nicotine use is associated with pregnancy complications, poor infant outcomes, sudden infant death syndrome, birth defects, and long-term health issues in childhood. National data indicate that among US women of reproductive age, alcohol use is increasing over time while nicotine use is decreasing.
However, corresponding with growing awareness of the potential harms of alcohol and nicotine use during pregnancy, initial data suggest that prenatal alcohol and nicotine use are decreasing over time. For example, data from the National Survey on Drug Use and Health indicate that among US adult pregnant women, any past-month alcohol use during pregnancy decreased non-significantly from 9.6% in 2002 to 8.4% in 2016, while any past-month cigarette smoking decreased significantly from 17.5% in 2002 to 10.3% in 2016. Past-month alcohol and nicotine use is most common during the first trimester of pregnancy, during which time women may not realize that they are pregnant. Although healthcare systems are well poised to screen women of reproductive age for substance use, it is challenging to predict which women are at risk for using nicotine and alcohol when they become pregnant, and healthcare systems have limited resources and require better data to prioritize whom to target with education about prenatal substance use prior to conception. Initial data from nationally representative studies indicate that lower socioeconomic status, lower education, White race, and serious psychological distress are associated with higher risk of nicotine use during pregnancy, while younger age, other substance use, depression, higher socioeconomic status, higher education, and being unmarried are associated with greater risk of alcohol use during pregnancy. Less is known about the risk factors associated with continued use versus quitting among those who use alcohol or nicotine prior to pregnancy.
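The significance pattern reported above (a non-significant decline in alcohol use, a significant decline in smoking) can be sanity-checked with a standard two-proportion z-test. The per-wave sample sizes below are hypothetical, since the NSDUH denominators are not given in the text; this is a sketch of the test, not a reproduction of the published analysis.

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two independent proportions,
    using the pooled estimate under the null hypothesis of no change."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical per-wave denominators of n = 900 pregnant respondents
_, p_alcohol = two_proportion_z(0.096, 900, 0.084, 900)  # 9.6% -> 8.4%
_, p_cigs = two_proportion_z(0.175, 900, 0.103, 900)     # 17.5% -> 10.3%
```

With these made-up denominators the alcohol change is not significant (p > 0.05) while the cigarette change is (p < 0.001), matching the direction of the reported findings; the actual NSDUH analysis uses complex survey weights, which this sketch omits.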
Given the health risks associated with alcohol and nicotine use during pregnancy, and the changing prevalence and patterns of use of these substances among US women of reproductive age, research is needed to better understand trends in prenatal use of alcohol and nicotine and to identify factors associated with quitting versus continuing to use these substances during pregnancy. The primary objective of this study was to examine trends from 2009 to 2017 in self-reported daily, weekly, and monthly-or-less use of alcohol and nicotine in the year before pregnancy and during pregnancy, among a diverse population of pregnant women within a large healthcare system that screens for substance use as part of standard prenatal care. We also examined whether frequency of alcohol and nicotine use in the year before pregnancy was associated with continued use of these substances during pregnancy. Kaiser Permanente Northern California is an integrated multi-specialty healthcare delivery system that provides care to more than 4 million diverse members who are representative of the Northern California region.


It was difficult to determine which bands belonged to the rope and which to the contaminant

Our results do not contradict those of Wang et al. and Dick et al.; the results are mutually consistent. Instead, they reveal a novel age-specific risk factor undetectable by examining only the condition of alcohol dependence rather than its age of onset. Given the age differences between the sample studied in this paper and the samples used in the studies of Wang et al. and Dick et al., the two sets of results could not contradict one another. In the Wang et al. study, about 5% of the alcohol-dependent subjects had ages of onset below 16 years. This is too small a fraction to have an effect on the results. As we noted in our discussion of the trend tests, in our study the genotypic distributions of the alcohol-dependent subjects change with age of onset. While we do not observe a significant SNP effect in the oldest age range with DTSA, the fraction of subjects with the minor allele among those who become alcohol dependent is greater than the fraction among those who do not. This trend acts to produce similar genotypic distributions for alcohol-dependent and non-alcohol-dependent subjects when considered regardless of age of onset. In terms of methodology, DTSA requires differences in genotypic distributions between alcohol-dependent and non-alcohol-dependent subjects to give a statistically significant result for a SNP; this is not true of the family-based method used by Wang et al. Our interpretation is that family-based studies are more powerful than the type of association study employed here; the absence of a distributional difference does not mean that there is no genetic effect. It is important to note that the objectives of the twin studies considered here and of this study are quite different. The twin studies investigate the presence of a “disease” condition, although exactly which condition varies considerably among studies.
The objective of this study, as a survival analysis, is to analyze the factors contributing to an event: the onset of a condition. Once the condition has come to pass, it is of no further interest in survival analysis.

The genetic effects that produce the condition are significant only at its onset, and they persist in the analysis only insofar as subsequent onsets in other subjects are attributable to them. In the twin studies, post-onset presence of the condition is part of the outcome analyzed. That is, in the longitudinal studies using multi-stage models, affected subjects are retained throughout the study after becoming affected, whereas in the survival analysis method used in this study, affected subjects are removed from consideration once they have become affected and no longer influence the results. Therefore, although the use of a longitudinal multi-stage model in van Beek et al. and Baker et al. enables genetic influences to have age-specific characteristics, these effects are modeled as persisting through time as a result of an effect at a single age range. If early-onset alcohol use is associated with the more genetically determined form of alcoholism, then genetic factors leading to early drinking and dependence would be expected to be manifest. Our results are consistent with this hypothesis. The pattern of genetic results obtained here, albeit from a single gene, is weighted towards the strongest effects manifesting in the youngest age range. However, most twin studies find low genetic influences at younger ages and increases in genetic influence with age, although not all twin studies reach this conclusion. These results can be understood after examining the populations from which the twin samples are drawn and the outcomes that are modeled. The samples in the twin studies are drawn from the general population, not from the densely affected families that form the bulk of the sample used here. Thus genetic effects will be more difficult to find in the twin studies, particularly for the rarer, more strongly genetic conditions.
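The censoring behavior described here, in which a subject leaves the risk set permanently at onset, is exactly what the person-period expansion underlying discrete-time survival analysis encodes. The sketch below illustrates this; the starting age of 15 and the record layout are assumptions for illustration, not details from the study.

```python
def person_period(subjects, start_age=15, max_age=25):
    """Expand subject records into person-period rows for discrete-time
    survival analysis.  Each subject contributes one row per age at risk;
    the row at onset carries event=1, and no rows exist after onset, so
    affected subjects no longer influence later ages."""
    rows = []
    for sid, onset_age, censor_age in subjects:
        last = onset_age if onset_age is not None else censor_age
        for age in range(start_age, min(last, max_age) + 1):
            event = int(onset_age is not None and age == onset_age)
            rows.append((sid, age, event))
    return rows

# Hypothetical records: (id, age of alcohol-dependence onset or None, censoring age)
rows = person_period([("s1", 17, None), ("s2", None, 19)])
```

Subject s1 contributes rows only for ages 15 through 17; after the onset row, s1 drops out of the risk set, unlike in multi-stage longitudinal models where affected subjects remain part of the modeled outcome.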
In a number of studies the outcome definitions are broad and are not subject to as strong a genetic effect as more restricted outcomes such as alcohol dependence or externalizing disorders. The most dramatic example is the difference between the cross-sectional results from the Minnesota twin studies, in which the outcomes are narrowly defined, and the cross-sectional results from a Dutch twin study with the very broad outcome of having one or more alcohol abuse symptoms. The Minnesota twin studies have A > 0.6 for ages 11 and 17, while the Dutch twin study has A < 0.3 for ages 15–17 and 18–20, where A is the additive genetic effect.

Mid-adolescence is a vulnerable developmental period for cigarette smoking uptake, the onset of mental health conditions, and the emergence of comorbid tobacco use and mental health problems. The over-representation of smoking among adolescents with mental health problems generalizes across various conditions, remains robust after controlling for confounders, and is mediated by theoretically relevant factors suggesting a causal relation. The rapid emergence and appeal of novel tobacco and nicotine products such as electronic cigarettes raises the question of whether the same adolescent subgroup with mental health problems is at risk for using these products. This is important to address because this population may be particularly vulnerable to nicotine addiction, given that neural plasticity during adolescence and neuropathology in psychiatric conditions can enhance the brain’s sensitivity to nicotine. E-cigarettes—electronic devices that deliver inhaled nicotine and emulate the sensorimotor properties of conventional cigarettes—are gaining popularity among adolescents. According to 2014 estimates, past-30-day use of e-cigarettes is more common than conventional cigarette use among U.S. 8th- and 10th-graders, and many adolescent e-cigarette users have never tried conventional cigarettes. E-cigarettes may be an attractive alternative to conventional cigarettes among youth because of beliefs that they are less harmful, addictive, malodorous, and costly than conventional cigarettes. Furthermore, e-cigarettes come in flavors appealing to youth and may be easier to obtain than conventional cigarettes because of inconsistent enforcement of restrictions against sales to minors. Such factors may facilitate e-cigarette initiation in adolescents who would not otherwise smoke conventional cigarettes and who may have fewer risk factors for smoking—including mental health problems.
Dual use of conventional and e-cigarettes is also common in adolescents , raising the possibility that some adolescents may use e-cigarettes to substitute for conventional cigarettes in situations where smoking is restricted. Indeed, school bathrooms and staircases are among the most common places adolescents report using e-cigarettes .

Given that adolescents with mental health symptoms are more prone to nicotine dependence, these populations could be more likely to initiate e-cigarette use to bridge situations when they are not able to smoke, which ultimately could perpetuate the over-representation of smoking among individuals with mental health problems. While research has yet to characterize the psychiatric comorbidity associated with patterns of conventional and e-cigarette use in adolescents, a recent study of Hawaiian adolescents found that alcohol/marijuana use and other psychosocial risk factors were highest in dual users, moderate in e-cigarette-only users, and lowest in non-users. Most pairwise comparisons involving conventional-cigarette-only users were not significant in that study, perhaps because of reduced statistical power due to the smaller size of this group. Given these findings, stratification of psychiatric comorbidity across dual use, single-product use, and non-use in adolescents is plausible. The current study characterized the mental health of adolescents who reported ever using e-cigarettes, conventional cigarettes, both, or neither. To provide a wide-ranging picture of psychiatric comorbidity, traditional syndrome-based indices of various depressive, manic, anxiety, and substance use disorders were administered. Consistent with NIMH’s Research Domain Criteria Initiative, we also assessed several transdiagnostic phenotypes implicated in multiple internalizing and externalizing psychopathologies and in conventional cigarette use. Up to this point, data on the psychiatric comorbidity associated with e-cigarette and dual use have been virtually absent, leaving it unclear how the mental health of these two groups compares to that of conventional cigarette users and non-users.
Given that conventional cigarettes and e-cigarettes have both similarities and differences, whether the patterns of psychiatric comorbidity are similar or different between e-cigarette-only users and conventional cigarette users is unclear. As the first study to comprehensively characterize psychiatric comorbidity in adolescent e-cigarette and dual use, this study may yield data important to tobacco policy by identifying adolescent populations that are psychiatrically vulnerable and potentially at risk for use of traditional and emerging tobacco products. Such data could highlight the need to protect psychiatrically vulnerable adolescents from tobacco product use via targeted tobacco product regulation and behavioral health prevention programming for these populations. This report is based on a cross-sectional survey of substance use and mental health among 9th-grade students enrolled in ten public high schools surrounding Los Angeles, CA, USA.

The schools were recruited based on their adequate representation of diverse demographic characteristics. Averaged across the ten schools, 31.1% of students within each school were eligible for free lunch. Students not in special education or English as a Second Language programs were eligible. Of the students who assented to participate, 3,383 provided active parental consent and enrolled in the study. In-classroom paper-and-pencil surveys were administered across two 60-minute data collections during the fall of 2013, conducted less than two weeks apart. Some students did not complete all questionnaires within the time allotted or were absent for data collections, leaving a final sample of 3,310. The University of Southern California Institutional Review Board approved the protocol. Based on patterns of lifetime use, the sample was divided into: use of neither electronic nor conventional cigarettes; use of conventional cigarettes only; use of electronic cigarettes only; and use of both electronic and conventional cigarettes. Primary analyses used generalized linear mixed models (GLMMs) that accounted for clustering of data within school, in which the 4-level cigarette use group variable was a categorical regressor and a mental health indicator was the outcome, with separate models for each outcome. GLMMs specified binary and continuous distributions for the lifetime substance use status and quantitative mental health outcomes, respectively. Because of skewed distributions on the three substance use problems measures, Poisson distributions were specified for these outcomes. For outcomes with omnibus group differences, we conducted follow-up pairwise contrasts using an adjusted p-value, controlling the study-wise false discovery rate at 0.05. GLMMs were adjusted for gender, age, ethnicity, and highest parental education; missing data on covariates were accommodated by dummy coding a ‘missingness’ variable to allow inclusion in analyses.
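Controlling a study-wise false discovery rate at 0.05 across many pairwise contrasts is typically done with a Benjamini-Hochberg-style step-up procedure; the text does not name the exact method, so the sketch below is an illustration of that standard approach, with invented p-values.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean list marking
    which hypotheses are rejected at false discovery rate q.  A hypothesis
    with the rank-th smallest p-value is rejected if some p-value at rank
    k >= rank satisfies p_(k) <= (k / m) * q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank  # largest rank passing its threshold
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Hypothetical p-values from five follow-up pairwise contrasts
flags = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20])
```

Here only the two smallest p-values survive at q = 0.05, even though 0.039 and 0.041 would each pass an unadjusted 0.05 threshold.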
Results are reported as standardized effect size estimates. As illustrated in Table 2, there were omnibus differences across the four groups for all outcomes. Pairwise contrasts indicated that adolescents who used conventional cigarettes only reported worse mental health than non-users and e-cigarette-only users on multiple internalizing emotional syndromes and transdiagnostic phenotypes. On these internalizing emotional outcomes, the conventional-cigarettes-only and dual use groups did not significantly differ. For some internalizing outcomes, e-cigarette-only users had higher elevations than non-users but lower problem levels than conventional-only or dual users. Relative to non-users, use of either product was related to the externalizing phenotypes of poorer inhibitory control and impulsivity. An ordered effect of dual use vs. e-cigarette use only vs. non-use was found for elevations in mania, positive urgency, and anhedonia. An ordered effect of dual use vs. either single-product use vs. non-use was also found for lifetime use status and level of abuse/problems for all substances. Given the differences in patterns across internalizing and externalizing and positive-emotion-seeking behaviors, syndromes, and traits, we plotted standardized T-scores of the outcomes by conventional/e-cigarette use status separately in the two domains. These figures respectively illustrate general trends of differentiation of conventional and dual cigarette use from never and e-cigarette use on most internalizing outcomes, and tri-level ordered differentiation of never vs. single-product vs. dual use on externalizing outcomes. Analyses of the substance problem outcomes utilizing the overall sample cannot distinguish between substance ever-users who report zero drug/alcohol-related problems and substance never-users.


We find a larger variance in county random intercepts in the pre-Prop 47 period

When comparing the full sample prior to propensity score matching, a greater fraction of post-Prop 47 arrest events had concurrent arrests of other types, suggesting a decline in arrests when drug possession was the sole offense. Post-Prop 47 arrestees also appeared to differ in terms of criminal histories, with more prior arrests. The propensity-score-matched sample had better covariate balance. For sale/transport arrests, the pre and post groups were much more similar. Though they were not compared statistically, the population arrested for these offenses appears quite different from those arrested for Prop 47 drug offenses: Prop 47 offenders had more numerous but lower-level prior arrests and convictions. Racial differences were notable as well, with larger racial disparities among sale/transport arrests. With regard to Prop 47 arrests, pre and post estimates for each county suggest that counties where felony convictions were more likely pre-Prop 47 were reduced towards zero to a greater degree, such that post-Prop 47 outcomes were more similar across counties. Mixed models with random pre-Prop 47 intercepts, random coefficients for the policy effect, and an unstructured covariance structure allowing a correlation between intercept and slope random effects showed a significant, negative covariance between the random effects. Aligning with the pattern depicted in Figure 3.2, this suggests that counties where felony convictions were more likely in the pre-period also declined more towards the less punitive counties, reducing the variance across counties. The reduction in county differences is corroborated by variance estimates from models with county-specific random intercepts for the pre and post periods.
The likelihood ratio test, with the exchangeable covariance structure as the nested model, indicated that the unstructured covariance structure, which allowed the pre- and post-Prop 47 intercept variances to differ, was a better fit to the data. To put this in concrete terms, of the 56 California counties, prior to the policy the most punitive county had a conviction probability of .38, whereas the least punitive county had a conviction probability of .04.
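The nested-model comparison works by doubling the gap in maximized log-likelihoods and referring it to a chi-square distribution. A minimal sketch, assuming the two covariance structures differ by one parameter (so 1 degree of freedom, whose chi-square survival function has the closed form erfc(sqrt(x/2))); the log-likelihood values are hypothetical.

```python
from math import erfc, sqrt

def lrt_pvalue(ll_nested, ll_full):
    """Likelihood ratio test for nested models differing by one parameter:
    statistic = 2 * (ll_full - ll_nested), compared against chi-square
    with 1 df, whose survival function is erfc(sqrt(x / 2))."""
    stat = 2.0 * (ll_full - ll_nested)
    return stat, erfc(sqrt(stat / 2.0))

# Hypothetical log-likelihoods: exchangeable (nested) vs. unstructured (full)
stat, p = lrt_pvalue(ll_nested=-1523.4, ll_full=-1519.1)
```

A small p-value here favors the unstructured model, i.e. allowing the pre- and post-period intercept variances to differ, which is the conclusion the study reports.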

After Prop 47 was adopted, the most punitive county had a conviction probability of .19, whereas the least punitive county had a conviction probability of .02. Another way to conceptualize these results is in terms of how discrepant the statewide probability of felony conviction was from that of the least punitive county pre-Prop 47, and the extent to which that discrepancy changed post-Prop 47. Prior to Prop 47, the statewide probability of felony conviction was 17 percentage points higher than in the least punitive county, meaning 81% of statewide felony convictions following Prop 47 drug arrests would not have occurred if prosecuted in the least punitive county. After Prop 47, by contrast, the statewide probability was just 3 percentage points higher than the least punitive county’s level prior to passage. There was also significant variation across counties in the likelihood of felony conviction following a sale/transport arrest, ranging from 0.05 in Merced County to 0.51 in Calaveras County in the pre-Prop 47 period. However, mixed model results indicated that the significant variance in the pre-Prop 47 period did not decline post-Prop 47. This suggests that, while people arrested for sale/transport were less likely to ultimately receive a felony conviction after Prop 47 was adopted, this effect did not vary substantially across counties, and no county showed an increase in felony conviction probability for sale/transport arrests. In other words, it does not appear that more punitive counties altered plea bargaining practices for sale/transport arrests to retain pre-Prop 47 levels of felony convictions, as this would have resulted in an increase in the variance of felony conviction probabilities for this category of arrest.
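The 81% figure follows directly from the two pre-Prop 47 probabilities quoted above, taking the statewide probability as 0.04 + 0.17 = 0.21:

```python
# Reproducing the discrepancy arithmetic from the text
least_punitive_pre = 0.04                   # least punitive county, pre-Prop 47
statewide_pre = least_punitive_pre + 0.17   # statewide was 17 pp higher

# Share of statewide felony convictions that would not have occurred
# had every case been prosecuted in the least punitive county
excess_share = (statewide_pre - least_punitive_pre) / statewide_pre  # ~0.81
```

That is, 0.17 / 0.21 is roughly 0.81, the 81% reported in the text.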
Findings aligned with sensitivity analyses that assumed all cases with missing dispositions received felony convictions. In this study of the change in felony convictions in California counties after Proposition 47 reduced criminal penalties for drug possession, we found significant declines in the likelihood of a felony conviction following arrests for Prop 47 drug offenses and non-Prop 47 felony drug offenses. Prior to Prop 47, dramatic geographic inequalities in the probability of felony conviction after drug possession arrests prevailed between counties, and these geographic inequalities were substantially reduced after adoption of Prop 47.

The reduction in felony convictions aligns with reports from the Judicial Council of California on reductions in felony filings following Prop 47’s passage, while providing new evidence that these reductions led to declines in geographic disparities in felony convictions for drug arrests. By holding county-specific case characteristics constant across time, this study identified a reduction in the excess variation attributable to county practices. This impact likely reflects that Prop 47 eliminated prosecutorial discretion over how drug possession can be charged. While previous research has found that county-specific interpretation and implementation of reforms tends to reinforce a county’s preexisting prosecution and sentencing practices, results from the current study do not indicate that counties attempted to mitigate the effects of Prop 47 by increasing felony filings for concurrent offenses or by reducing plea bargaining for sale offenses. Several factors could explain why Prop 47 led to reductions in geographic disparities in case outcomes when other reforms have not. First, Prop 47 was a voter initiative, and considering the influence of community priorities for law enforcement on charging policies and decisions, prosecution practices may be more responsive to these types of reforms. Second, Prop 47 called for reduced criminal penalties, whereas prior studies have evaluated reforms like three-strikes laws, which maximize punishment. Maximizing punishment is costly, whereas reducing it can assuage overburdened courts. Therefore, we may be more likely to see change resulting from reforms that call for lesser criminal penalties, especially when that call comes from the public. Reducing variation in the likelihood of a felony conviction for two equivalent cases mitigates inequalities in criminal justice exposure due to unequal applications of the law.
However, requiring that all drug possession offenses be prosecuted as misdemeanors also means that cases with different characteristics are now being treated more similarly. A defendant can still be convicted of a felony for concurrent felony offenses, so it is the effect of criminal history on case outcomes that we would expect to be minimized post-Prop 47. Criminal history is strongly associated with race/ethnicity, which may reflect biases and practices in drug law enforcement, while increasing the severity of punishment for subsequent drug offenses.

There is evidence Prop 47 in fact reduced the effect of criminal histories in San Francisco, where, prior to Prop 47, racial disparities in case dispositions and sentencing were attributable to more extensive pretrial detention and criminal histories among Black defendants. When Prop 47 reclassified drug possession offenses as misdemeanors, these characteristics had lesser effects on case outcomes, and racial disparities declined. Further research could assess whether findings from San Francisco apply statewide. There are also implications for substance use disorder treatment. Prop 47 generated $103 million in savings in the first year, awarded through grants to counties to increase access to substance use disorder and mental health treatment and education. Counties with few felony convictions pre-Prop 47 may have had greater support for and availability of drug diversion options, which allow dismissal of charges upon successful drug treatment completion. However, Prop 47 generated concerns that, without the possibility of a felony conviction, the incentive to engage in treatment would be removed. Prior research has suggested that, as compared to volitional substance users, individuals with more severe substance use disorders tend to fail to meet the court’s conditions for diversion and ultimately receive harsher termination sentences. If this were the case, it would be logical for this group to opt out of diversion options now that the sentence for drug possession is less severe. Whether this is the case, and if so, understanding successful strategies counties have developed to increase access to needed treatment through other routes, would be valuable.
CA DOJ’s Statewide Automated Criminal History System (ACHS) data is the most comprehensive data source available for studying criminal justice policy changes in the state, and it has been used in significant studies of Prop 47 as well as of other reforms such as Prop 36, which increased drug diversion following arrest. While the use of ACHS to capture the outcomes of all arrests in the state is a strength of this study, ACHS also faces the quality challenges typical of large administrative datasets, as CA DOJ must rely upon consistent and timely reporting from 58 counties. Though courts and law enforcement agencies are mandated to report within 30 days of final case disposition and the CA DOJ’s policy is to update the data system within 90 days of receipt, a substantial portion of arrests did not contain dispositions. For the primary analysis, we assumed that these arrests without dispositions were not prosecuted. However, if cases with no dispositions in fact include some felony convictions, and felony conviction missingness is associated with county, this could contribute to some of the geographic variation in convictions. The analysis of change in variation across time could be biased if felony conviction missingness differed within counties in the year pre- vs. post-Prop 47. There are several pieces of evidence that provide some reassurance. First, missing dispositions were more likely in the post period, which we would expect if missing dispositions were indicative of no conviction, since the classification of drug possession offenses was reduced. Second, cases with missing dispositions were less severe in terms of concurrent offenses, which would correspond with a lower likelihood of felony conviction. Third, the sensitivity analysis assuming that cases with missing dispositions had resulted in felony convictions did not alter findings.

The impact of the study design on the potential for bias should be considered. By comparing events just within the year before and after Prop 47, we attempted to limit the effect of time trends in felony convictions, though some reduction in felony convictions could be attributed to a pre-existing trend towards leniency for drug possession. That said, the large and immediate reduction in felony convictions across nearly all counties is unlikely to have occurred in the absence of the policy change. We extracted monthly SUD-related hospital visits in California from October 5, 2011 to September 4, 2015, as collected by the Office of Statewide Health Planning and Development. These months were defined such that the analytic period began after the start of California’s Public Safety Realignment, the post-Prop 47 period could begin on the first effective date of November 5, 2014, and no visits after September 30, 2015 were included. The ICD-9-CM coding system underwent major changes when it shifted to the ICD-10-CM system on October 1, 2015, and we anticipated a period of unreliable coding in the early months of this shift. The study therefore uses only the ICD-9-CM coding system. All visits with a SUD-related condition as the principal diagnosis among patients ages 15-64 were included.
These comprised the following ICD-9-CM categories: amphetamine dependence, nondependent amphetamine abuse, cannabis dependence, nondependent cannabis abuse, cocaine dependence, nondependent cocaine abuse, poisoning by cocaine, adverse effects from cocaine, hallucinogen dependence, nondependent hallucinogen abuse, poisoning by hallucinogens/psychodysleptics, accidental poisoning by hallucinogens/psychodysleptics, adverse effects from hallucinogens, opioid dependence, combinations of opioids with any other, nondependent opioid abuse, poisoning by opium, poisoning by heroin, poisoning by methadone, poisoning by other opiates and related narcotics, heroin poisoning, adverse effects from heroin, sedative/hypnotic/anxiolytic dependence, nondependent sedative/hypnotic/anxiolytic abuse, drug withdrawal, drug-induced psychotic disorder with delusions, drug-induced psychotic disorder with hallucinations, pathological drug intoxication, drug-induced delirium, drug-induced persistent dementia, drug-induced persistent amnestic disorder, drug-induced mood disorder, drug-induced sleep disorders, other drug-induced mental disorder, unspecified drug-induced mental disorder, other specified drug dependence, combinations excluding opioids, unspecified drug dependence, other mixed or unspecified drug abuse, or drug dependence complicating pregnancy/childbirth/puerperium. Though they made up just 14.7% of all SUD-related visits, we restricted the analysis to principal diagnoses, considered to be chiefly responsible for the hospital visit, to reduce the possibility of finding a spurious increase in visits attributable to the rise in insurance coverage through the Affordable Care Act in 2014.
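Operationally, the inclusion rule amounts to checking only the principal diagnosis against the SUD code set, together with the 15-64 age restriction. The sketch below uses a hypothetical two-code set as a stand-in for the full ICD-9-CM list above; the code strings and record layout are illustrative, not the study's actual values.

```python
# Stand-in for the full SUD code list enumerated above (hypothetical subset)
SUD_CODES = {"304.40", "305.70"}

def include_visit(principal_dx, age):
    """Keep a visit only if an SUD code is the PRINCIPAL diagnosis and the
    patient is aged 15-64; secondary SUD diagnoses are deliberately ignored
    to limit spurious increases tied to the 2014 coverage expansion."""
    return principal_dx in SUD_CODES and 15 <= age <= 64

# Hypothetical visits: (principal diagnosis code, patient age)
visits = [
    ("304.40", 30),  # SUD as principal diagnosis, in age range -> keep
    ("428.0", 30),   # non-SUD principal diagnosis -> exclude
    ("305.70", 70),  # outside the 15-64 age range -> exclude
]
kept = [v for v in visits if include_visit(*v)]
```

A visit whose SUD code appears only as a secondary diagnosis never reaches this filter's first test, which is exactly the restriction the paragraph above describes.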

We find a larger variance in county random intercepts in the pre-Prop 47 period

It is not clear from the data who was prescribing psychotropic medications for these women

Marginal FS was associated with 1.82 times higher odds of antidepressant use and 1.73 times higher odds of sedative use, while low FS was associated with 1.66 times higher odds of antidepressant use. There were no significant associations between FS and antipsychotic use in these models, and no significant associations between very low FS and any of the psychotropic medication outcomes. We next examined associations of FS with any psychotropic medication use, antidepressant use and sedative use, additionally adjusting for CESD and GAD-7 scores. Marginal FS remained associated with 1.93 times higher odds of any psychotropic medication use. The AORs for associations of marginal FS with antidepressant and sedative use were 1.64 and 1.42, respectively, although neither reached statistical significance. Associations between low FS and each psychotropic medication outcome were close to 1, while very low FS was associated with lower odds of each outcome, with the association between very low FS and antidepressant use statistically significant. Of the other variables, higher incomes were consistently associated with lower odds of psychotropic medication use prior to adjusting for CESD and GAD-7 scores. Having an annual income of $12 001–24 000 remained significantly associated with lower odds of any psychotropic medication use and antidepressant use after adjusting for CESD and GAD-7 scores, compared to having an annual income ⩽$12 000. Age was positively associated with any psychotropic medication use, antidepressant use and sedative use, both before and after adjusting for CESD and GAD-7 scores. Self-identifying as African-American/Black was associated with lower odds of any psychotropic medication use, antidepressant use and sedative use, compared to self-identifying as non-Hispanic White.
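The adjusted odds ratios reported above come from multivariable logistic regression, but the underlying quantity is simply a ratio of odds between groups. As a minimal illustration with made-up counts (not the study's data), an unadjusted odds ratio reduces to the cross-product of a 2×2 table:

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds of the outcome among the exposed divided by the odds among
    the unexposed; algebraically the 2x2 cross-product ratio."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical counts: 30/100 medication users among marginally
# food-secure women vs. 15/100 among fully food-secure women.
or_marginal = odds_ratio(30, 70, 15, 85)  # (30*85)/(70*15) ≈ 2.43
```

The study's AORs additionally condition on covariates (CESD, GAD-7, age, income, etc.) through the regression model, so they generally differ from this crude ratio; the sketch only shows what an "odds ratio of 1.82" is measuring.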
In the fully adjusted model, CESD and GAD-7 scores were positively associated with any psychotropic medication use, antidepressant use and sedative use.

In this study of women with HIV in the USA, food insecurity was associated with the symptoms of common mental illness but displayed a complex relationship with psychotropic medication use.

Similar to previous studies, we found a dose–response relationship between food insecurity and symptoms of common mental illness. We hypothesised that associations between food insecurity and psychotropic medication use would reflect this dose–response relationship, but our findings suggest a more complex picture. While marginal FS was associated with significantly higher odds of taking any psychotropic medication, antidepressants and sedatives, the magnitude of the associations decreased as severity of food insecurity increased. Very low FS was associated with lower odds of psychotropic medication use after adjusting for CESD and GAD-7 scores. While there may be several possible explanations for this pattern of findings, it is most likely that people who experience very low FS find it difficult to engage in mental health care because of competing resource demands. Such individuals may find it difficult to access mental health services and therefore have fewer chances to be prescribed psychotropic medications. Alternatively, they may access care but find it more difficult to adhere to medication regimens and subsequently have prescriptions discontinued. This possibility is supported by previous studies. Food insecurity has consistently been associated with poor engagement in care among PLHIV, including missing clinic visits and sub-optimal adherence to medications. In qualitative studies, food-insecure PLHIV describe how hunger, exhaustion, pre-occupation with finding food and less money for transport erect major barriers to attending clinics. Similarly, in a nationally representative sample of non-elderly adults in the USA, individuals with severe mental illness who had very low FS were twice as likely to report being unable to afford mental health care, and 25% less likely to be using mental health services, compared to food-secure individuals with severe mental illness.
Two studies in the USA have found that severe food insecurity was associated with higher rates of acute mental health care utilisation. Among outpatient users of mental health services, severe food insecurity was associated with five times the odds of having any psychiatric emergency room visit; and among a national sample of homeless adults, food insufficiency was associated with three times higher odds of psychiatric hospitalisation.

Poor access to ambulatory outpatient mental health services among severely food-insecure individuals may therefore result in inadequate long-term symptom control and, consequently, greater acute mental health care utilisation – which is the same pattern that has been seen in studies of food insecurity and HIV care. Another key finding is that marginal FS remained associated with nearly twice the odds of any psychotropic medication use after adjusting for symptoms of depression and anxiety. This supports our hypothesis that among these women living with HIV, food insecurity, at least in a milder form, may be associated with being prescribed psychotropic medications independent of symptoms of common mental illness. This suggests that people experiencing complex social problems such as food insecurity may be prescribed psychotropic medications at a higher rate than those without such problems. This is supported by higher incomes also being associated with lower odds of psychotropic medication use in models additionally adjusted for CESD and GAD-7 scores. These findings indicate that there may be structural incentives and concomitant factors that favour the prescription of psychotropic medications for all forms of distress, regardless of the nature of the dominant contributing factors. As explained above, these factors may include the clinical training of prescribers, the absence of resources for social interventions and/or the relative stability and provision that can accompany a psychiatric diagnosis through disability. Given that the data were cross-sectional, these possible explanations must be interpreted cautiously, and further research is needed.
It remains possible that the causality runs in the reverse direction: people with symptoms of common mental illness significant enough to warrant treatment with psychotropic medications may be more likely to be food-insecure, and then may be referred to food assistance services that prevent them from experiencing the most severe form of food insecurity. Longitudinal studies investigating directionality and dominant mediating mechanisms are needed to comprehensively understand this association. Our study has other limitations. First, it is not clear from the data who was prescribing psychotropic medications for these women; greater clarity on this aspect of the findings would be helpful. Similarly, we have no data on access to mental health services and attendance at clinic appointments, which would clarify some of the mechanisms and explanations behind the findings. Third, we have no data on other mental health treatment modalities among these women, and no data on adherence to treatments, whether pharmacological or nonpharmacological. Measurement of these variables will be important for future studies. Finally, we were unable to adjust for any other symptoms of mental illness besides depression and anxiety because such data were not collected in the WIHS.

Behavioral, cognitive, and neurobiological profiles of individuals with pathological gambling resemble those of individuals with substance use disorder, especially stimulant addiction. As a consequence, pathological gambling was recently reclassified as an addictive disorder in the DSM-5.

However, it is still unclear whether some of the dopamine abnormalities that characterize substance use disorder are also present in pathological gambling. The current study examined the role of dopamine synthesis capacity in pathological gambling. Below, we highlight central findings linking altered dopamine function with substance addiction before reviewing existing evidence of altered dopamine function in pathological gambling. Converging evidence from various lines of research indicates that substance use disorder is characterized by a decrease in striatal dopamine D2/D3 receptor availability, even though this reduction is more consistently observed among stimulant users than in individuals with opiate, nicotine, or cannabis dependence. In humans, this has been evidenced by cross-sectional studies using [11C]raclopride positron emission tomography and single-photon emission computed tomography imaging techniques. In addition, human studies focusing on dopamine synthesis capacity, measured with [18F]fluoro-levo-dihydroxyphenylalanine PET, have revealed either low or unaltered dopamine synthesis capacity across various substance use disorders. Whether observed differences in D2/D3 receptor availability are a cause or a consequence of drug addiction, and how they interact with presynaptic dopamine function, is an area of active research. Longitudinal studies in animals have revealed that diminished baseline availability of striatal dopamine D2/D3 receptors is both a predictor and a consequence of continued drug use. For example, lower baseline availability of striatal dopamine D2/D3 receptors in drug-naïve monkeys predicts high rates of subsequent cocaine self-administration. Longitudinal scanning further reveals reduced D2/D3 receptor ligand binding following repeated drug exposure.
Micro-PET studies in rats have confirmed and extended these findings by showing that high impulsivity traits are associated with low dopamine D2/D3 receptor availability and predispose to the development of drug addiction. These findings concur with human studies showing that trait impulsivity is a vulnerability marker for addiction, although—in contrast to animal research—the direction of the association with dopamine D2/D3 receptors is less clear. Indeed, whereas some studies have reported positive correlations in healthy control subjects (HCs), other studies have reported negative correlations in HCs and methamphetamine-dependent users. Studies focusing on targets of dopamine functioning have so far led to different results in pathological gambling compared with stimulant dependence. In fact, all PET studies in pathological gambling have failed to reveal abnormal dopamine D2/D3 receptor availability in pathological gamblers (PGs) relative to HCs. Despite this lack of group differences, two studies found a negative correlation between baseline dopamine D2/D3 receptor binding in the ventral striatum and trait impulsivity in PGs.

Similarly, PET studies investigating gambling-induced dopamine release have failed to reveal overall group differences but have shown correlations with relevant measures related to gambling severity, excitement, and performance. Currently, direct evidence for abnormal dopamine functioning in pathological gambling comes exclusively from studies showing altered responsiveness to dopaminergic drugs. In particular, PGs were shown to display greater amphetamine-induced dopamine release in the dorsal striatum, as measured with PET imaging using the D3 receptor–preferring radioligand [11C]-4-propyl-9-hydroxynaphthoxazine, compared with HCs. This increased dopaminergic response echoes a recurrent clinical observation in Parkinson’s disease: following dopaminergic treatment aimed at compensating for dopamine cell loss, a subset of patients with Parkinson’s disease develop gambling disorder symptoms. These observations suggest that enhanced dopaminergic transmission may represent a biological substrate of gambling disorder. Thus, so far nearly all dopamine PET studies on pathological gambling have focused on dopamine D2/D3 receptors, investigating either receptor availability or the effects of dopaminergic drugs and gambling tasks. To date, there has been a paucity of research investigating dopamine synthesis capacity in PGs, with only one recent study reporting no difference with HCs. Yet, increased dopamine synthesis capacity has been associated with increased behavioral disinhibition and financial extravagance in healthy subjects and patients with Parkinson’s disease. Here we used dynamic [18F]DOPA PET imaging to investigate striatal dopamine synthesis capacity in male PGs and HCs matched for age, education, and an estimate of verbal IQ.

In total, 15 PGs and 15 HCs were recruited. All HCs and 13 PGs had also participated in a previous pharmaco-functional magnetic resonance imaging study. The other 2 PGs were newly recruited.
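The excerpt does not specify how dopamine synthesis capacity was quantified, but with dynamic [18F]DOPA PET it is commonly summarized as an influx rate constant (Ki) estimated by Patlak graphical analysis against a reference region such as the cerebellum. A minimal NumPy sketch under that assumption (function and variable names are illustrative, not from the study):

```python
import numpy as np

def patlak_ki(t, c_tissue, c_ref, t_star=30.0):
    """Estimate an [18F]DOPA influx constant Ki (per min) by Patlak
    graphical analysis: Ki is the slope of C_tissue/C_ref plotted
    against integral(C_ref)/C_ref, fit over late frames (t >= t*)."""
    # Trapezoidal running integral of the reference time-activity curve.
    integral = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * (c_ref[1:] + c_ref[:-1]) / 2.0)))
    x = integral / c_ref          # "normalized time" axis (min)
    y = c_tissue / c_ref
    late = t >= t_star            # only the pseudo-linear late portion
    slope, _intercept = np.polyfit(x[late], y[late], 1)
    return slope
```

With a flat reference curve and a tissue curve rising at 0.01 per minute, the recovered slope is 0.01/min, which is the order of magnitude typically reported for striatal [18F]DOPA Ki.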
PGs were recruited through advertisement and addiction treatment centers, and they reported not being medicated or in treatment for their gambling at the time of the PET study. HCs were recruited through advertisement. All subjects who had participated in the pharmaco-fMRI study underwent a structured psychiatric interview [Mini International Neuropsychiatric Interview–Plus] administered by a medical doctor prior to the fMRI study. The 2 PGs who were newly recruited were also assessed with the Mini-International Neuropsychiatric Interview–Plus, administered by a clinical psychologist. Subjects were excluded if they had a lifetime history of schizophrenia, bipolar disorder, attention-deficit/hyperactivity disorder, autism, bulimia, anorexia, anxiety disorder, or obsessive-compulsive disorder, or if they had a past 6-month history of major depressive episode. Current or past-year substance use disorder was also an exclusion criterion, as assessed at the time of the PET study using the 10-item Drug Abuse Screening Test questionnaire. Based on this criterion, data from 2 PGs were not included in the main analyses because they met the DSM-IV-TR criteria for cannabis dependence during the past year. As assessed with the Mini-International Neuropsychiatric Interview–Plus, 1 of the excluded cannabis-dependent PGs also had a history of cocaine dependence that lasted for 1.5 years and ended 5.5 years prior to the PET study. In addition, 1 included PG had histories of alcohol and cocaine dependence that lasted for 1 year and ended 8 and 15 years prior to the PET study, respectively.


Replacing missing values is another form of multiple imputation that was selected for this study

The great majority of cases are represented by these four patterns. It is important to note that patterns 21, 51, 43, 45, and 53 are considerably smaller than the first four patterns, and they are similar in size. This means that the patterns of missingness across the variables are somewhat consistent, and that no dominant pattern to the missingness is readily seen. Based on this extensive analysis, it was determined that the variables total GCS, alcohol screen result and THC Combo are not missing completely at random. When missing values in a variable account for less than 5% of cases, those values can be assumed to be missing at random and listwise deletion can be performed relatively safely. This holds true for all the variables except for THC Combo, positive for drugs, alcohol screen result, age in years, ethnicity and total GCS. These variables, three quantitative and three categorical, were found to have greater than 5% missing values. On observation of the missing value analysis, it was observed that most cases had these two variables as missing, perhaps suggesting a relationship, or an effect. Furthermore, Little’s MCAR test revealed that the missing data may not be missing completely at random. Deleting cases with missing values can reduce the statistical power of the analysis and result in biased outcomes and estimates. Therefore, the use of multiple imputation is appropriate for this dataset and this study. Another method available in SPSS is the Replacing Missing Values procedure; for this study, its Linear Interpolation method was utilized. Linear Interpolation is a simple statistical method that estimates a missing value from neighboring valid values, analogous to using regression to find the line of best fit between them. Using the Replacing Missing Values method in this study will help solve the problem of bias and ensure that power is not decreased, because a large majority of the sample size will be preserved.
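The 5% screening rule described above is straightforward to operationalize. A sketch using pandas, with a made-up stand-in for the registry extract (column names mirror the text but the values are invented):

```python
import pandas as pd

# Hypothetical mini-extract standing in for the trauma registry data.
df = pd.DataFrame({
    "total_gcs": [15, None, 14, None, 3, 15, None, 15, 15, 12],
    "age_years": [22, 34, None, 45, 51, 19, 33, 60, None, 28],
    "sex":       ["M", "F", "M", "M", "F", "M", "M", "F", "M", "M"],
})

# Percent missing per variable; flag anything over the 5% rule of thumb,
# beyond which listwise deletion is no longer considered safe.
pct_missing = df.isna().mean() * 100
needs_imputation = pct_missing[pct_missing > 5].index.tolist()
```

In this toy frame `total_gcs` is 30% missing and `age_years` 20%, so both are flagged, while `sex` (0% missing) could be handled by listwise deletion.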
It is important to consider the implications associated with imputing or replacing missing data. Multiple imputation or missing value replacement analyses will avoid bias only if enough variables predictive of the missing values are included in the replacement method. If variables that may be predictive of the estimates are not included in the model, for example the effect of age on alcohol screen result, the replacement computation will underestimate these associations and bias the final analysis.

Therefore, it is preferable to include as many predictive variables as possible in the model when either imputation or missing value replacement methods are utilized.

Replacing missing values was utilized to minimize the many problems associated with missing data. The absence of data reduces statistical power and can also lead to bias in the estimation of parameters and analyses. Finally, missing data can diminish the representativeness of the sample and its cases. It is important to consider that though replacing or imputing data is a common approach to the problem of missing data, it still does not allow analyses of actual data provided by actual participants, or in this case, data entered by abstractors and hospital registry systems. In gaining a larger sample size, and perhaps a more representative sample, confidence is lost that the responses analyzed are those actually provided. It is important to note that methods used to account for missing data only provide researchers with the best estimated guess of what the actual data may have been had it been documented in the first place. It is this reasoning that influenced the decision to select only some of the variables with missing data for imputation. Though the multiple imputation process was utilized, it presented a complication in terms of the number of iterations and the subsequent analysis. Since the dependent variable, total GCS, was not selected for imputation/replacement, it was recommended and deemed appropriate to utilize the Replacing Missing Values function in SPSS to establish estimates for a select group of variables with missing data values. The Replacing Missing Values method, a different form of imputation, allows the creation of new variables from existing ones by replacing missing entries with estimates computed by a variety of methods. For this study, the Linear Interpolation method was used. This method utilizes the last valid value before the missing value and the first valid value after the missing value.
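pandas offers a close analogue of this replacement: `Series.interpolate(method="linear")` fills each gap on the straight line between the last valid value before it and the first valid value after it. A sketch with made-up alcohol screen values (not the study's data):

```python
import pandas as pd

# Made-up alcohol screen results with gaps; linear interpolation fills
# each missing entry on the line between its valid neighbors, mirroring
# SPSS's Linear Interpolation replacement.
alcohol = pd.Series([0.00, None, 0.08, None, None, 0.02])
replaced = alcohol.interpolate(method="linear")
# replaced → [0.00, 0.04, 0.08, 0.06, 0.04, 0.02]
```

Note the caveat raised above: unlike multiple imputation, this produces a single deterministic estimate and ignores other predictive variables, so associations with omitted predictors can be attenuated.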
The variables selected for missing value replacement were age and alcohol screen result. The variable age was selected due to its effect on traumatic brain injury incidence as well as post-TBI outcomes. Additionally, the use of alcohol and other substances is prevalent in young adults, with more than half of those who die from overdoses being younger than 50 years of age. The impact of age on TBI, substance abuse and outcomes could not be overlooked, and omitting this large percentage of cases would bias analysis results.

The variable of alcohol screen result was also important to replace because of the known impact and association alcohol abuse has on TBI incidence and outcomes. Alcohol and TBI are closely associated, with up to 50% of adults noted to drink more alcohol than recommended prior to their injury, and ultimately incurring worse outcomes. The variables of total GCS, THC Combo and positive other drugs were not included. Total GCS is the dependent variable, and having estimates instead of actual data seemed conceptually and logically inappropriate. As the main predictor variables, both THC Combo and positive other drugs were excluded in order to ascertain a more accurate and true account of the effects they may have on TBI severity. The Replacing Missing Values method yielded 7872 entries for age, with only 3 missing cases. The mean for age in the new dataset with replaced values was 31.19 years with a standard deviation of 26.1, compared to 33.78 years with a standard deviation of 27.3 for the non-replaced dataset. The Replacing Missing Values method yielded 7822 valid entries for alcohol screen result, compared to 2087 entries in the non-replaced dataset. In the new dataset, alcohol screen result had a mean of .03, a standard deviation of .0752, a minimum value of .00 and a maximum value of .66. The original dataset, with 7875 cases, was used for the missing value replacement method because, as mentioned previously, it is preferable to include as many predictive variables as possible in the model so that the new replaced/imputed values are indeed best estimates. Once the missing values for age and alcohol screen result were replaced, the dataset was amended to only include participants greater than 16 years of age to meet the inclusion criteria. Once those cases were removed, the final dataset consisted of 4910 unique cases. The first aim of this study was to identify the prevalence of THC in a purposive sample of TBI patients.
In this study, it was found that 27.7% of study participants tested negative for THC, and 6.2% of study participants tested positive for THC on presentation to the emergency department. An overwhelmingly large percentage of the data was attributed as missing, 66% to be exact. This large percentage of missing data makes it difficult to have confidence in the 6.2% prevalence rate found in this study. National surveys on drug use and health have documented an increase in individual daily marijuana use over the last 5 years, with almost 22 million users each month in the United States. Federally, marijuana use remains illegal in the United States; however, in 2017, the year corresponding to the data of this study, 29 states had legalized marijuana for medical use, and 8 states for recreational use.

A recent study has found that marijuana use tends to be higher in states that have legalized its use compared to marijuana use in the United States overall. As a result, it is difficult to have confidence in the low prevalence rate found in this study. Another important consideration regarding the large percentage of missing data is the scarcity of studies investigating marijuana use and prevalence in TBI patients. As noted earlier in the literature review, only one study, by Nguyen et al., investigated the effects of THC presence on mortality in patients who had sustained a TBI, and it reported a prevalence rate of 18.4%. However, Nguyen et al.’s study involved a 3-year retrospective review of data obtained from a local hospital-based database, which can perhaps help explain their higher prevalence rate. The availability of a larger sample size, because of 3 years’ worth of data, may have contributed to that study’s higher prevalence rate. A recent publication has already noted areas of improvement necessary for the NTDB to improve data quality and completeness. It is important to note that the dataset used for this study reflects only one year’s worth of data, from 2017. At the start of this research study, the last dataset available for use was from 2017; datasets from 2018 and onward had not yet been released. Therefore, previous prevalence rates for comparison could not be calculated from the NTDB, because the presence of THC was never abstracted nor documented in NTDB databases established before 2017. Finally, it is imperative to consider what happens at the bedside, or in the clinical setting, when trying to understand why there is a large percentage of missing data when it comes to the presence of THC.
When it comes to the care of the trauma patient, it is a common expectation amongst trauma centers that a urine drug screen will be completed on every trauma patient presenting to the emergency department. Despite this, drug screens are often either not obtained, not resulted, or not documented by the clinical team. At times, clinicians may simply forget to draw a screen and send it to the lab. This commonly occurs in patients who do not receive a Foley catheter, as avoiding routine catheterization is a practice that is now encouraged in hospitals. As a result, patients may take a while to urinate, often doing so in the absence of the trauma nurse, or later in another unit under the care of a non-trauma nurse who then simply forgets to collect the sample. At times, the sample may be collected, but the result is never documented in the medical record. All these clinical factors can contribute to the missing data by keeping results out of the medical record, and ultimately out of the trauma registry itself. When examining the differences between the group of participants with THC and those without, and the influence on TBI severity, it was noted that the group of participants who tested positive for THC had worse GCS scores than those who tested negative for THC on presentation to the emergency department. The findings were significant, indicating that individuals who were positive for THC had worse neurological status, as evidenced by lower GCS scores, than those who tested negative. This finding differs from the findings reported in the study by Nguyen et al., which examined the relationship between the presence of THC and mortality after TBI. Their study focused only on mortality after TBI and not on TBI severity. Based on toxicology test results, the group of participants who tested positive for THC included a significantly higher number of males. Additionally, participants in the group that tested negative for THC were significantly older than participants who tested positive.
This is supported by the literature, which indicates that men are more likely than women to use marijuana, as well as almost all other types of drugs. Individuals 18-29 years of age were the largest group of marijuana users in the US in 2019. Marijuana use dropped among older age groups, with seniors the least likely to use marijuana. No differences were noted between Non-Hispanic and Hispanic groups regarding marijuana use. Marijuana use was higher in American Indian and Black participants when compared to all other race groups. Participants who identified as ‘other’ had a greater proportion testing negative compared to all other race groups. Marijuana use disorder was greatest among African Americans compared to other races/ethnicities.


Risk of bias in terms of selection and information was determined for each study

See Figure 1 for an illustration of the exclusion process. Study quality and risk of bias for each of the included studies, according to the NHLBI quality assessment tool for observational cohort and cross-sectional studies, is presented in Table 1. All eight studies employed an observational cohort study design and were assigned a “C” Level of Evidence. One of the studies used a prospective cohort design, while the remaining seven used retrospective designs. The prospective study was assessed as a good study because the investigators had control over the quality, accuracy and completeness of collected data. In the remaining seven studies, a retrospective approach was used in which investigators had to rely on pre-existing data that could be neither confirmed nor deemed reliable. This creates a susceptibility to recall bias and attrition bias. Though not as highly esteemed as randomized controlled studies, observational cohort studies can be efficient in answering specific types of research questions. However, special attention must be given to the presence of potentially confounding factors. Only four of the eight included studies addressed confounding factors, rendering the remaining four studies a “fair” quality rating.

Participants in each of the eight studies were selected based on the presence of a TBI, with some studies including TBI severity in their definition of TBI. Based on the study designs utilized by the included studies, selection bias regarding sampling was anticipated: participants are not randomized; rather, they are selected based on the outcome and exposure of interest, and in such study designs convenience sampling is most often utilized. Due to the nature of the studies included in this review, allocation concealment and blinding of outcome assessors is not feasible.
Because the exposure of substance abuse was not allocated randomly, a causal effect cannot be established, as other variables may be found to influence the outcomes studied; this places all eight studies at a major disadvantage, with potential for bias in outcomes.

Study methods employed by each of the eight included studies varied, with some studies utilizing medical chart reviews, while others utilized validated surveys and questionnaires to gather their data. The studies by Andelic et al., Barker et al., and Bombardier et al. all utilized the participants’ medical charts for retrospective review for the presence of substances. The studies by Andelic et al., Nguyen et al., and O’Phelan et al. used trauma registry databases to collect data on TBI patients and the presence of substance abuse. Pakula et al. collected data on the presence of substance abuse in post-mortem patients with traumatic cranial injuries by evaluating autopsy reports. Finally, the studies by Bombardier et al., Kolakowsky-Hayner et al., and Kreutzer, Witol and Marwitz utilized questionnaires to interview participants. The variance in study methods, ranging from retrospective review of charts to the use of self-report methodology, subjects the included studies to recall bias and unreliable data. A factor negatively contributing to the quality of the included studies is the variance in defining a TBI. Three of the studies did not provide a definition for what constitutes a TBI, nor did they describe the severity of TBI. The study by Andelic et al. defined TBI using the TBI Modified Marshall Classification. The study by Barker et al. defined TBI using the TBI Model Systems Data Base definition. Nguyen et al. used the International Classification of Diseases-Ninth Revision codes and the Abbreviated Injury Severity codes to define TBI. These codes are widely used in trauma data registries for entering and recording the injury type and severity, for performance improvement and billing purposes. However, reliability can be an issue, as coding may be subjective. The information is extracted from the chart by registrars who read and enter notes written by physicians.
Often, coding depends on physician documentation, attention by trauma registrars to the various sources of documentation, and communication with physicians when necessary. If not subject to continuous data validation, a data gap may ensue. The study by Pakula et al. defined a central nervous system injury by the presence of any of the following written diagnoses as found in the autopsy reports: 1) TBI, 2) skull base fracture, 3) spinal cord injury, and 4) cervical spine injury. Only one study, that by O’Phelan et al., utilized a Glasgow Coma Score to define a severe TBI.

The majority of the articles were subject to selection bias in terms of their participant population and methods of data collection; see Table 2 for specifics. The included studies varied in their definition of TBI. One study used the Modified Marshall Classification of TBI, a computed tomography (CT)-derived metric used to grade acute TBI on the basis of CT findings. Another study defined TBI using the TBI Model Systems National Database definition. The TBIMS-NDB has been funded by the National Institute on Disability and Rehabilitation Research in the U.S. Department of Education to study the course of recovery and outcomes following a TBI. They describe the TBIMS-NDB TBI as: damage to brain tissue caused by an external mechanical force, as evidenced by medically documented loss of consciousness or post-traumatic amnesia, or by objective neurological findings on physical or mental examination that can be reasonably attributed to TBI. Three of the eight studies did not specify how TBI was defined. One study used the following International Classification of Disease, 9th Revision codes to define TBI: 800.1-800.39; 800.6-800.89; 801.1-801.39; 801.6-801.89; 803.6-803.89; 804.6-804.79; 851; 852; and 853. Another study used the International Classification of Disease, 10th Revision codes to define TBI: S02.0xx; S02.1; S06.1; S06.2; S06.3; S06.31; S06.32; S06.33; and S09.x. Finally, the last of the eight studies used autopsy reports to evaluate individuals with severe central nervous system injuries. For purposes of that study, severe CNS injuries were defined as “any traumatic brain injury, skull base fracture, spinal cord injury, or cervical spine injury.” Although all eight studies investigated marijuana exposure in TBI patients, only one study specifically investigated the use of marijuana alone on outcomes in TBI.
All remaining studies investigated the presence of all possible substances and/or drugs, meaning investigators were not examining marijuana exposure by itself. In Nguyen et al., all patients who had sustained a TBI and had a urine toxicology screen were included. The presence of marijuana was established from the urine toxicology screen and not through any other mode of measurement. The authors classified study patients according to marijuana screen results, defining a positive screen as greater than 50 ng/ml. Though marijuana was detected across all eight studies, the actual numerical or absolute value measured was never reported by any of the studies. Additionally, it is important to note that, excluding the study by Nguyen et al., the presence of marijuana was not reported in a quantifiable manner, making any potential statistical inference impossible.
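The ICD-9 case definition described above is essentially a range-based inclusion rule. As a minimal illustrative sketch (the reviewed studies did not publish code, so the function name and parsing approach here are assumptions; only the code ranges are taken from the review), it could be expressed as:

```python
# Illustrative sketch of the ICD-9-based TBI case definition reported in the
# review. Ranges are transcribed from the review; everything else (function
# name, float-based range check) is a hypothetical implementation choice.

TBI_ICD9_RANGES = [
    (800.1, 800.39), (800.6, 800.89),
    (801.1, 801.39), (801.6, 801.89),
    (803.6, 803.89), (804.6, 804.79),
]
TBI_ICD9_CATEGORIES = {"851", "852", "853"}  # whole three-digit categories

def is_tbi_icd9(code: str) -> bool:
    """Return True if an ICD-9 code falls within the study's TBI definition."""
    if code.split(".")[0] in TBI_ICD9_CATEGORIES:
        return True
    try:
        value = float(code)
    except ValueError:
        return False  # non-numeric (e.g., ICD-10-style) codes are out of scope
    return any(lo <= value <= hi for lo, hi in TBI_ICD9_RANGES)

print(is_tbi_icd9("800.21"))  # True: within 800.1-800.39
print(is_tbi_icd9("852.06"))  # True: subcode of category 852
print(is_tbi_icd9("805.0"))   # False: not in any listed range
```

Such a rule-based definition makes case ascertainment reproducible from registry data, but, as the review notes, it differs substantially from CT-based (Marshall) or clinical (GCS) severity definitions, which limits comparability across studies.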

Lastly, six of the included studies investigated the presence of marijuana at the time of injury, while the remaining two studies measured marijuana use during the past year and post-mortem, respectively. The study by O’Phelan et al. did not investigate any other time frame during which marijuana may have been used; rather, the authors only collected data on the presence of drugs at the time of injury. An important finding from the systematic literature review was that marijuana was the most frequently reported drug. However, only one of the eight included studies explicitly searched for and found a connection between a positive toxicology screen for marijuana and mortality outcomes in TBI patients. Nguyen et al.’s three-year retrospective review of trauma registry data found that 18.4 percent of all cases meeting inclusion criteria had a positive marijuana screen and that overall mortality was 9.9 percent. Nguyen et al. found that mortality in the marijuana-positive group was significantly lower than in the marijuana-negative group. The authors adjusted for the following differences between study participants: age, gender, ethnicity, alcohol, abbreviated injury scores, injury severity scores, and mechanism of injury. After adjusting for these differences, Nguyen et al. found that a positive marijuana screen was an independent predictor of survival in TBI patients.

This review sought to determine the use of marijuana and its role in TBI prevalence and outcomes. A key finding from this review is that few studies are available that examine the specific role of marijuana exposure on TBI severity, leaving many questions unanswered. Furthermore, this review found significant variation in how substance abuse has been defined, conceptualized, and operationalized in TBI research.
Another important finding was that the reviewed studies operationalized substance abuse inconsistently, often combining alcohol and drugs into one category titled ‘substance abuse,’ making it difficult to ascertain whether there was an association between specific drugs, particularly marijuana, and TBI severity and outcomes. The differences in how substance abuse was operationalized in these reviewed studies have important implications for how findings are interpreted, as well as for recommendations for future research. Although no restriction was placed on the countries in which studies were conducted, those meeting inclusion criteria were all conducted in the US except for one from Norway; therefore, the applicability of findings from that one non-American study is limited. Additionally, it is difficult to draw valid and reliable conclusions when the studies reviewed utilized a wide variety of study objectives, sample sizes, study methods, and definitions for substance abuse classification.

The review showed that great variation existed across the studies in the types of data collected and methods used, severely limiting comparability. For example, the disparity in measurement of blood alcohol levels considerably reduces the reliability of data related to pre-injury intoxication. In the reviewed studies, information on alcohol and substance use was obtained from a range of different sources, including self-reports and patient records, as well as a variety of different measures, rendering results unreliable across studies. This review set out to answer a specific question: what influence, if any, does marijuana exposure at the time of injury have on TBI severity and outcomes? Only one study on marijuana’s effect on TBI outcomes was available. Nguyen et al. reported that a positive marijuana screen is an independent predictor of survival, suggesting a potential neuroprotective effect of cannabinoids in TBI. The remaining studies yielded a variety of findings, the most common being that marijuana and other drug use, including alcohol, are common before TBI. To clearly understand marijuana’s influence on TBI, potential confounding variables must be identified and controlled for. The literature review identified no consensus on relevant confounding variables aside from age and gender; the variability in all other demographic variables highlights the uncertainty about the full range of relevant demographic variables. Another potentially important confounding variable is mechanism of injury. Historically, motor vehicle crashes were considered the most frequent cause of TBI-related deaths in civilians; however, recent data show that falls are actually the leading cause of TBI-related hospitalizations, with being struck by or against an object the second leading cause. Importantly, only six of the studies included mechanism of injury as a variable in their analysis of findings.
Five of the eight included studies did not address TBI severity as a variable. The remaining three studies each operationalized TBI severity using different methods. Andelic et al. used the Marshall classification to classify neurological anatomical abnormalities seen on CT scans. Nguyen et al. utilized the Abbreviated Injury Scale (AIS) score for the head and neck region to classify TBI severity. The use of the AIS score is common in general trauma research because the Glasgow Coma Scale (GCS) score is not always recorded for each individual participant. Hence, the only study showing a link between marijuana exposure and TBI severity did not use the gold standard of the GCS to measure TBI-specific severity. Finally, severity is a parameter of interest when answering the research question of whether marijuana influences TBI severity; the available studies are unable to answer that question, mostly because the majority of them did not measure severity in the first place.
