The classic hippocampal circuit is a trisynaptic circuit utilizing glutamatergic neurotransmission

By follow-up in June 2021, on average, there were no significant differences from pre-pandemic patterns of alcohol and nicotine use. Findings are consistent with previous short-term studies showing a pandemic-related increase in the number of days drinking. In our data, this change reflected a different distribution of drinking across the population: compared to pre-pandemic, fewer young adults were drinking, but those who did drank more frequently. While two previous studies found decreases in binge drinking, we did not find a statistically significant change in the number of days of binge drinking at any timepoint in the current study. However, the non-significant reduction we observed in binge drinking in June and December 2020 was directionally consistent with these previous studies. In addition, the time frame of measurement may explain the discrepancy: those two previous studies focused on changes earlier during the pandemic, in March and April 2020, whereas another study focusing on changes in June and July 2020 also found no significant change in binge drinking. As in one previous study, we did not find an average effect of the pandemic on nicotine use. However, this appeared to obscure opposing changes among those who did vs. did not experience impacts on their financial security. Relative to pre-pandemic, in June 2020, those with past-month nicotine use had increased their number of days using if they experienced financial impact and had a stable or decreased number of days using if they denied experiencing financial impact. Loss of a job or reduction in work hours could increase smoking during periods of boredom at home or as a way to cope with the attendant stress. This pattern is consistent with the larger literature documenting how the pandemic may exacerbate health disparities based on pre-existing socioeconomic advantage.

However, moderation of multiple outcomes was tested, so the current findings should be regarded as preliminary and await replication. This study had limitations. First, findings may not generalize beyond emerging adults ages 18–22 years old. Second, for nicotine use, we did not measure the quantity used each day, which could have changed. Third, we did not consider other substances such as cannabis. Fourth, the mode of assessment differed from the pre-pandemic to during-pandemic assessments, potentially introducing differences. Fifth, secular changes in the rates of alcohol or nicotine use among young adults between 2016 and 2021 could be confounded with the effect of the pandemic, potentially introducing bias. Sixth, pre-pandemic responses on a free-response scale had to be mapped onto the discrete response options, potentially limiting precision. Seventh, we assessed the degree to which the pandemic impacted individuals' financial security but not the form of this impact. Eighth, pre-pandemic observations were not anchored to the months of June and December, so seasonal effects could explain part of the observed differences. We reported here the most extended follow-up to date of pandemic-related changes in drinking and nicotine use in emerging adults. The study had several further strengths. We used seven years of pre-pandemic assessments and a rigorous age-based design to identify the pandemic's impact over and above typical developmental changes. We incorporated three assessments spanning the first 15 months of the pandemic to study whether early changes in drinking and nicotine use persisted. Participants spanned five sites across the U.S. and multiple racial and ethnic backgrounds. Finally, we focused on a critical developmental period associated with elevated risk for problematic use. In summary, in a heterogeneous group of young adults, pandemic-related changes in drinking patterns were no longer detectable in June 2021. Pandemic-related increases in nicotine use occurred only for participants who reported greater impact of the pandemic on their financial security; these subgroup effects were no longer statistically significant in June 2021, though a large effect size for past-month nicotine use remained. Thus, those whose financial security was adversely impacted by the pandemic may represent a vulnerable group worth targeting with supports to manage drinking and nicotine use.

Continued follow-up beyond summer 2021 is necessary to verify that the pandemic's effects on drinking and nicotine use have indeed faded and to understand the pandemic's long-run impacts on substance use trajectories into adulthood.

Parkinson's Disease treatment has been based on dopamine replacement therapy for 35 years. Yet, side effects resulting from long-term use of DA agonists, namely dyskinesias and on–off responses, are prompting investigations of alternative neurotransmitter manipulations to modulate basal ganglia function and normalize motor activity. Dyskinesias often result from a lesion or disturbance affecting the transcortical loop or indirect pathway, with disruption of the balance between excitation and inhibition in the globus pallidus pars externa-subthalamic nucleus-globus pallidus pars interna circuit. Thus, dyskinesias reflect altered patterns of neuronal firing in this circuit, which result in the improper selection of specific motor programs and, eventually, in the development of hyperkinetic movements. Endocannabinoids, the endogenous ligands of cannabinoid receptors, are synthesized on demand by neurons in response to depolarization and, once released, diffuse backwards across synapses to suppress pre-synaptic GABA or glutamate release. Because of these properties, the endocannabinoid system may offer new pharmacological targets for the treatment of neurologic conditions characterized by abnormal firing patterns. One application of cannabinoid-based therapeutics would be for dyskinetic syndromes, hyperkinetic disorders characterized by changes in pattern, synchronization, mean discharge rates, and somatosensory responsiveness of neurons in the direct and indirect extrapyramidal motor circuits. Further applications of cannabinoid-based therapeutics may extend to the treatment of seizure disorders, changes in behavioral or cognitive state resulting from hypersynchronous excessive neuronal discharges in other, for example, limbic, cortical or thalamic circuits. To test the hypothesis that endocannabinoids act as endogenous antidyskinetic agents with modulatory effects on abnormal basal ganglia circuits, we examined endocannabinoid production in specific areas of the basal ganglia of rats infected with Borna disease virus and how cannabinoid agonists and antagonists affect their motor behaviors. Borna disease virus is a negative-strand RNA virus epidemiologically linked to patients with neuropsychiatric disorders and Parkinson's-plus syndromes.

After infection, BD rats develop an extrapyramidal disorder with spontaneous dyskinesias, hyperactivity, stereotypic behaviors, partial DA deafferentation, DA agonist hypersensitivity, and Huntington's-type striatal neuropathology. Our investigations revealed elevations in the endocannabinoid anandamide in the subthalamic nucleus of BD rats, associated with increased metabolic activity in this key basal ganglia relay nucleus. As pharmacological antagonism of CB1 receptors caused BD rats to seize, we also evaluated the relationship between changes in anandamide levels and seizure phenomena. Our results suggest that anandamide acts as both an endogenous antidyskinetic and anticonvulsive compound, in part via interactions with the opioid system. Our results are consistent with a functional role for anandamide signaling as a natural mechanism to buffer abnormal firing patterns in various neural circuits. Using BD rats, a rodent model of viral-induced neurodegenerative syndrome and spontaneous dyskinesias, we showed significant anandamide elevations in the STN, a critical basal ganglia relay nucleus in which abnormal firing has been linked to dyskinesias, dystonia and hemiballismus. The characteristics of STN neurons, such as fast firing kinetics, short membrane refractory periods and the ability to modify their firing pattern after small changes in impinging synaptic input, render the STN well-suited to regulation by activity-dependent modulators such as endocannabinoids. In keeping with this hypothesis, CB1 receptor protein and functional CB1 receptors have been found in the STN of rats. However, neither WIN 55,212-2 nor AM404 had robust antidyskinetic effects in BD rats, which we attribute to loss of CB1 receptors along with GABA neurons in other nuclei of the basal ganglia circuit, as indicated by striking loss of GAD immunoreactivity in BD rats. STN hyperactivity is a recognized feature of PD, but may also signify abnormal patterns of firing, as in dystonia. Thus, in BD rats, which display mixed Parkinsonian and Huntington's lesions, the elevations in anandamide and metabolic activity observed in the STN may represent an important compensatory or modulatory reaction to abnormal input to the STN, with the net effect of reducing pathologic or dyskinetic movements.

The convulsant effect of the CB1 receptor antagonist SR141716A was an unexpected result. Since abrupt reduction of endocannabinoid tone produced hippocampal seizures in BD rats, we suggest that anandamide, in addition to its compensatory function in the basal ganglia, may have a role in maintaining homeostatic or balanced activity in limbic networks. To further evaluate the role of anandamide in convulsive phenomena, we used a seizure paradigm already developed in BD rats. In these rats, degenerative changes extend to the hippocampus and amygdala, and self-limited limbic seizures can be consistently produced within 5 to 10 min of administration of the general opiate antagonist naloxone. We found that administration of naloxone did not change anandamide levels in the hippocampus and amygdala of BD rats that seized, while naloxone did cause significant anandamide elevation in the same brain areas of normal rats that did not seize. The failure of BD rats to increase limbic region levels of anandamide in response to the opiate antagonist naloxone is consistent with the idea that decreased availability of anandamide on demand contributed to seizures induced by a chemoconvulsant. Dynamic neurotransmitter buffering in rapid response to excitatory stimuli may be a general principle of endogenous anticonvulsants, applying to classic inhibitory neurotransmitters such as GABA and to neuromodulators such as opioids or endocannabinoids. For example, when opioid tone was reduced by naloxone, the result was increased EEG activity in both normal and BD rats. When BD rats developed increased or hypersynchronous EEG activity but could not increase anandamide levels, it was the anandamide transport blocker AM404 that limited or reversed naloxone excitability and rescued the animal from seizures. Anticonvulsant efficacy was most likely via elevation of anandamide or other endocannabinoid tone, an interpretation consistent with anecdotal reports by patients of improvement in seizure frequency or severity with marijuana use. However, at this time, we cannot exclude a vanilloid-mediated effect of AM404, since this drug binds to the TRPV1 receptor. Our study widens the role of potential cannabinoid–opioid interactions beyond substance abuse, tolerance-dependence phenomena, analgesia, hypothermia, and inflammation, and suggests a reciprocal relation between these two systems with respect to convulsive phenomena. So far, cannabinoids and opioids have been implicated separately in seizures. While CNS opioid dysregulation has been considered a substrate for the interictal personality disorder, CNS endocannabinoid signaling changes, given their association with schizophrenia and psychotic symptoms, might also contribute to interictal cognitive or personality syndromes. Endocannabinoid upregulation during opiate withdrawal could explain the absence of seizures during opiate withdrawal. The opioid system includes several families of related neuropeptides and the μ, δ, and κ opioid receptors. In other work, kappa opioid receptors (KORs) have been identified as a major contributor to anticonvulsant efficacy. When KOR and CB1 receptors are compared, they are found to have convergent biochemical mechanisms. Both are members of the Gi/o protein-coupled receptor family and signal through cAMP-Protein Kinase A, inwardly rectifying K+ channels, and N-, P/Q-, and R-type Ca2+ channels. KORs and CB1 receptors exhibit overlapping neuroanatomic distribution in the hippocampus.
CB1 receptors are found on CCK-expressing interneurons, while KORs are found on mossy fiber terminals, principal neurons, perforant path and supramammillary afferents, and GABA/SOM/NPY-containing interneurons. At each step, excitatory tone is modulated by a diverse group of inhibitory and excitatory neurons. KOR or CB1 stimulation of selective interneurons could desynchronize GABA inputs to a postsynaptic network. Desynchronization of signals from GABA-containing interneurons to their networks of pyramidal cells is one mechanism of enhanced inhibition of principal neurons. KOR stimulation at other sites, producing pre- or postsynaptic inhibitory effects on large pyramidal neurons, would also modulate excitability of principal neurons, with the net effect of modulating hippocampal outflow pathways. Further studies in our model will investigate the effect of AM404 on endocannabinoid production in the limbic system. Greater understanding of the conditions for interaction between the endocannabinoid and opioid systems will enhance our knowledge of neural circuits that serve fundamental or broad homeostatic functions and will be the goal of future studies. In conclusion, knowledge of endocannabinoid distribution and function throughout basal ganglia circuits could lead to the identification of non-dopamine pharmacologic targets for dyskinetic disorders and a greater understanding of the role of output pathways in the genesis of motor behaviors and involuntary movements.


Further study into the individual effects of AVF on compression strategy is warranted

This event rate is, however, higher than the reported adverse event rate when using ketamine as a single agent. This discrepancy may be due to the fact that children in this study may have had more than one adverse event documented during a single sedation, such as apnea, oxygen desaturation, and BMV. Previous studies have shown that ketamine has a low side-effect profile, with the most common adverse events being those related to respiratory compromise and emesis. In fact, the odds of respiratory adverse events associated with ketamine use increase when it is administered intramuscularly instead of intravenously. In addition, ketamine-associated emesis can be reduced by administering ondansetron prior to the start of PSA. However, neither of the two patients who had emesis during PSA in this study received ondansetron as a premedication. Moreover, while the authors did not evaluate NPO status and how this relates to emesis, previous studies have shown that NPO time does not affect the rate of major adverse events during PSA.

Despite advances in the field of resuscitation science and modest improvement in outcomes, mortality from in-hospital cardiopulmonary arrest remains relatively high. However, a common denominator in recent reports of modest outcome improvements in CPA resuscitation has been the link to quality of cardiopulmonary resuscitation. In particular, high-quality chest compressions have been described as the foundation that all additional, "downstream" resuscitative efforts are built upon and are highly associated with improved survival and favorable neurological outcomes. Most recently, high-quality chest compressions have been defined by the updated 2015 American Heart Association adult guidelines as a depth of 2-2.4 in, full chest recoil, a rate between 100-120 compressions per minute, and a chest compression fraction of at least 60%. Even when delivered according to guidelines, external manual chest compressions are inherently inefficient, providing only 30% to 40% of normal blood flow to the brain and less than one third of normal blood flow to the heart.
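As a purely illustrative aside (not part of the study itself), the 2015 AHA targets quoted above can be expressed as a simple screening function; the function name, units, and data layout below are hypothetical:

# Sketch: screen a compression epoch against the 2015 AHA adult targets
# quoted above. Names and example values are hypothetical.

def meets_2015_aha_targets(mean_depth_in, mean_rate_per_min, ccf, full_recoil=True):
    # Targets: depth 2-2.4 in, rate 100-120/min, chest compression
    # fraction (CCF) >= 60%, and full chest recoil.
    return (2.0 <= mean_depth_in <= 2.4
            and 100 <= mean_rate_per_min <= 120
            and ccf >= 0.60
            and full_recoil)

print(meets_2015_aha_targets(2.2, 124, 0.75))  # False: rate above the 120/min cap

Note that the epoch fails if any single criterion is out of range, which mirrors the point made later that simultaneous adherence to rate and depth is much rarer than adequate mean values for either alone.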

This inefficiency highlights the need for rescuers to deliver the highest-quality chest compressions in a timely and consistent manner. Although the relationship between high-quality chest compressions and improved survival has been well described, concern remains over reports of trained rescuers performing with suboptimal compression depth, rate, and hands-off fraction time. Rescuer overestimation of depth and underestimation of rate, as well as increased performance fatigue in prolonged situations, may be primary forces in the relatively poor adherence to current guidelines. Real-time CPR performer feedback via defibrillator is a relatively recent approach to maintaining chest compression performance and is associated with continuous high-quality chest compression. Currently, there are no studies investigating the ability to maintain high-quality chest compressions within the current 2015 AHA guidelines with and without the influence of real-time audiovisual feedback (AVF), which may assist in maintaining high-quality chest compression. The goal of this study was to assess the ability to maintain high-quality chest compressions by 2015 updated guidelines both with and without AVF in a simulated arrest scenario.

This was a randomized, prospective, observational study conducted within a community hospital with over 22,000 annual inpatient admissions. All participants were voluntary emergency department and medical-surgery nursing staff with both Basic and Advanced Cardiac Life Support certification. We obtained institutional review board approval, and written consent was required prior to participation. We defined CPR providers as a two-person team consisting of one participant performing chest compressions while the second administered ventilations via bag-valve mask. Chest compressions and ventilations were performed on a Little Anne CPR Training Manikin. AVF on chest compression rate and depth was provided to participants through ZOLL See-Thru CPR® on R Series® defibrillators. In a "mock code" scenario, 98 teams were randomly assigned to perform CPR with or without AVF chest compression feedback.

Participants were further randomly assigned to perform either standard chest compressions with a compression-to-ventilation ratio of 30:2, to simulate CPR without an advanced airway, or continuous chest compressions, to simulate CPR with an advanced airway, for a total of four distinct groups. Chest compressions were performed for two minutes, representing a standard cycle interposed between rhythm/pulse checks and/or compressor switches. Defibrillator data for analysis included chest compression rate, depth, and compression fraction over the entire two minutes. The primary outcome measured was the ability to maintain high-quality chest compressions as defined by current 2015 AHA guidelines. Secondary outcomes included group differences in chest compression depth, rate, and fraction time. Based on recent findings per Wutzler et al. on the ability to maintain effective chest compressions, we estimated a sample size of at least 68 teams to maintain a two-sided alpha of 0.05 and a power of 80%. Data are presented as means and standard deviations. We compared CPR variables between respective groups by Mann-Whitney U test for continuous variables and by chi-squared test for categorical variables. Only participants with technically adequate data available were used in this comparison. We considered p-values < 0.05 statistically significant. No participants were excluded.

Previous iterations of the AHA's CPR and Emergency Cardiovascular Care guidelines recommended a chest compression rate ≥ 100 compressions/min; however, the 2015 updates have called for a chest compression-rate upper limit of 120/min. The recommendation appears to be based on both animal studies and recent clinical observations from large out-of-hospital cardiac arrest registries describing an association between chest compression rates, return of spontaneous circulation (ROSC), and survival to hospital discharge. This makes sense, as observations in animal studies have described anterograde coronary blood flow as positively correlated with diastolic aortic pressures and subsequently compression rate. However, at rates greater than 120 compressions/min, this relationship weakens as diastolic coronary perfusion time decreases. Regarding human data, recent observations from the Resuscitation Outcomes Consortium registry suggest an optimum target of between 100 and 120 compressions per minute. In this randomized, controlled study we report that, overall, AVF is associated with a greater ability to provide simultaneously guideline-recommended rate and depth.
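The stated analysis plan (Mann-Whitney U for continuous CPR variables, chi-squared for categorical outcomes, alpha 0.05) could be sketched as follows; the arrays and counts are invented placeholders, not study data:

# Sketch of the analysis plan described above; values are hypothetical.
from scipy import stats

avf_depth = [2.1, 2.3, 2.2, 2.0, 2.4]      # mean depth (in), AVF teams (placeholder)
no_avf_depth = [1.8, 2.0, 1.9, 2.2, 1.7]   # mean depth (in), no-AVF teams (placeholder)
u, p_depth = stats.mannwhitneyu(avf_depth, no_avf_depth, alternative="two-sided")

# 2x2 table: teams meeting vs. not meeting guideline criteria, by feedback arm
table = [[12, 37],   # AVF arm: met, did not meet (hypothetical counts)
         [4, 45]]    # no-AVF arm
chi2, p_cat, dof, _ = stats.chi2_contingency(table)

print(p_depth < 0.05, p_cat < 0.05)  # significance at the stated alpha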

This is important, as previous studies have focused on the proportion of correct chest compression rate and depth; however, it has been shown that despite adequate individual mean values, the actual proportion of chest compressions that fell within guideline criteria simultaneously for rate and depth was low. Overall comparisons between SC and CCC cohorts were without significant differences in compression dynamics. AVF appeared to have an effect regardless of chest compression strategy, with isolated analysis of both compression strategy groups notable for differences. Within the SC group, significant differences were noted in both average rate and proportion of compressions within current guideline recommendations. Analysis of the CCC cohort was notable for an association between AVF and both a greater proportion of compression depth within current guidelines and a greater proportion of time with ideal compressions. One potential explanation for the association between AVF and the ability to perform "high-quality" chest compressions on a more consistent basis is that feedback may help avoid early fatigue by "pacing" an individual through the early periods of a highly stressful cardiac arrest situation, where one could understandably want to push as fast and hard as possible, which in turn may lead to early fatigue and subsequently "poor quality." Finally, similar to the overall analysis, comparisons between compression strategies without AVF did not result in any significant compression differences. The isolated effect of AVF on compression dynamics overall appears to be related to compression strategy. Within the CCC cohort, the effect appears to be on the ability to maintain ideal depth, while in the SC cohort, the effect appears to be related to rate control. We do note that within this cohort, although a statistically significant difference was noted in average rate of compressions, both were within current guidelines. However, the non-AVF cohort demonstrated an average rate at the uppermost level of current recommendations and, more importantly, was associated with a lower proportion of compressions with rate within guideline recommendations over the testing period. This is important, as recent studies have reported an inverse association between compression rates and depth, with rates above 120/min having the greatest impact on reducing compression depth. Recent reports have called this upper rate limit into question and suggest that faster rates may actually be associated with a higher likelihood of ROSC in in-hospital cardiac arrest. Unfortunately, in that study compression depth was not reported, leaving optimal rates in in-hospital arrest up to continued debate. Interestingly, within the AVF cohort, chest compression depth appeared to be both deeper on average and out of guideline-recommended depth for the SC cohort. Yet again, these differences did not translate to overall differences in the proportion of time within recommended depth between compression groups. Chest compression strategy and its relationship with AVF may be related to the nature of the strategy.
That is, with continuous compressions, fatigue may become an issue and feedback on depth may be of greater importance over time, while bursts of activity after brief pauses with standard compressions may require greater mindfulness of compression rate. Finally, we note that although the presence of AVF appears to have improved the quality of chest compressions, proportions of high-quality compressions were surprisingly low across all groups, with a high of 25% and a nadir of 3.3%.

However, our findings are consistent with reported "effective compressions," i.e., a trial period with mean compression rate and depth within guidelines and CCF ≥80%, per Wutzler et al. In their simulation-based study, there was an "effective compression" rate of 25.4% with feedback vs. 12.7% without. These findings warrant further investigation into possible influencing factors and sources of variation, including fatigue, critical care experience, and time since last training update.

The translation of preclinical theories of alcoholism etiology to clinical samples is fundamental to understanding alcohol use disorders and developing efficacious treatments. Human subjects research is fundamentally limited in neurobiological precision and experimental control, whereas preclinical models permit fine-grained measurement of biological function. However, the concordance between preclinical models and human psychopathology is often evidenced by face validity alone. The aim of this study, therefore, is to test the degree to which one prominent preclinical model of alcoholism etiology, the Allostatic Model, predicts the behavior and affective responses of human subjects in an experimental pharmacology design. The Allostatic Model was selected for translational investigation due to its focus on reward and reinforcement mechanisms in early vs. late stages of addiction. In this study, we advance a novel translational human laboratory approach to assessing the relationship between alcohol-induced reward and motivated alcohol consumption. A key prediction of the Allostatic Model is that chronic alcohol consumption results in a cascade of neuroadaptations, which ultimately blunt drinking-related hedonic reward and positive reinforcement while simultaneously leading to the emergence of persistent elevations in negative affect, termed allostasis. Consequently, the model predicts that drinking in late-stage dependence should be motivated by the relief of withdrawal-related negative affect and, hence, by negative reinforcement mechanisms. In other words, the Allostatic Model suggests a transition from reward to relief craving in drug dependence. The Allostatic Model is supported by studies utilizing ethanol vapor paradigms in rodents, which can lead to severe withdrawal symptoms, escalated ethanol self-administration, high motivation to consume the drug as revealed by progressive ratio breakpoints, enhanced reinstatement, and reduced sensitivity to punishment. Diminished positive reinforcement in this model is inferred through examination of reward thresholds in an intracranial self-stimulation protocol. Critically, these allostatic neuroadaptations are hypothesized to persist beyond acute withdrawal, producing state changes in negative emotionality in protracted abstinence. Supporting this hypothesis, exposure to chronic ethanol vapor produces substantial increases in ethanol consumption during both acute and protracted abstinence periods. Despite strong preclinical support, the Allostatic Model has not been validated in human populations with AUD. Decades of human alcohol challenge research have demonstrated that individual differences in subjective response (SR) to alcohol predict alcoholism risk. The Low Level of Response Model suggests that globally decreased sensitivity to alcohol predicts AUD. Critically, however, research has demonstrated that SR is multi-dimensional.
The Differentiator Model, as refined by King et al., suggests that stimulatory and sedative dimensions of SR differentially predict alcoholism risk and binge drinking behavior. Specifically, an enhanced stimulatory and rewarding SR, particularly at peak BrAC, is associated with heavier drinking and more severe AUD prospectively.


Pain scores and injury severity scores may have differed and were not studied

This may reflect random variation or a purposeful decline in opioid prescribing influenced by the significant attention recently brought on by the "opioid epidemic." The providers were not notified of the removal of the default quantity; therefore, it is less likely that the intervention itself influenced the decrease in the number of prescriptions. The data on prescribing patterns from the ED in recent years are limited, and it is unknown if there has been a widespread decline in prescribing over this same time period. As a retrospective analysis, unmeasured confounders may have influenced our analysis. Factors that were not studied may have influenced opioid prescribing patterns. These include the physician's perception of pain intensity, the age of the patient, the provider's experience level, and the diagnosis at the time of discharge. Furthermore, it is unknown whether the increased variation post-intervention really represents true individual prescribing variation. Further evaluation would be required to analyze each individual provider's prescribing patterns before and after the intervention to determine whether they each exhibited the same increase in variability as the entire group or if, after removal of the default quantity, each provider relied on his/her own individual default quantity for each patient regardless of painful condition. Other potential explanations for the findings observed were not studied directly. One potential confounder is a change in the patient population or ED providers during the study period, which may have influenced prescribing habits. Comparing patient acuity in the periods before and after the intervention demonstrates similar Emergency Severity Index scores and admission rates. This suggests similar patient characteristics in the pre- and post-intervention periods. The total number of Level I and II trauma activations and ED visits for adult patients was lower in the post-intervention period as expected, given that the duration of the post-intervention period was shorter.

Although it appears that prescribing patterns may have been more appropriate after elimination of the default quantity, this assumption was not directly tested. Changes in provider mix may also account for differences in opioid prescribing during the post-intervention period. Although this was not studied directly, there was minimal turnover among the provider group during the study period, with a total of one hire and two departures of full-time faculty during the combined time periods. Further studies would be needed to determine which factors influence physician prescribing patterns of opioid analgesics for specific, painful conditions, including analysis of pain scores.

Frequent users of emergency departments have been the subject of substantial research given the implications for resource utilization, healthcare costs, and ED crowding. A unique subset of frequent ED users are those who present to the ED repeatedly for acute alcohol intoxication. As ED visits for acute alcohol intoxication are increasing, the burden of alcohol-related frequent users will be important to explore. Existing studies describing frequent ED users often cite alcohol-use disorders as a common comorbidity and a precipitant for their disproportionate utilization of emergency services. Despite this established association, there is a paucity of data describing the encounters and individuals who frequently use the ED for alcohol intoxication, or the extent to which they use the ED for other reasons. The purpose of this study was to describe this population and their ED encounters.

This was a retrospective, observational, cohort study of ED patients presenting for acute alcohol intoxication from 2012 to 2016. It was approved by the institutional review board. The study hospital is a county ED with an annual volume of 100,000 visits and 7,000 visits for alcohol intoxication. The ED has a 16-bed area within the department that clusters all intoxication encounters. The purpose of this area is to treat patients who are in the department for intoxication, as opposed to complicated medical or trauma patients who also happen to be intoxicated from alcohol.

Patients are selected for treatment in this area at the discretion of triage nurses, paramedics, and the emergency physicians. All alcohol intoxication encounters are seen in this particular area of the ED, but there is occasional overflow to other parts of the ED if these rooms are full. All patients who are treated in one of these rooms are entered into the electronic medical record using the chief complaint "altered mental status." We included adults if they presented to the ED for alcohol intoxication during the study period. These patients were identified using the EMR by querying for all visits where the chief complaint was "altered mental status" and the initial ED room was within the intoxication section of the ED. Patients were excluded if their breath alcohol concentration was zero. The variables for analyses were chosen a priori. We selected them if they were hypothesized to be relevant to the study population and if they were readily available in the EMR. A data analyst who was blinded to the purpose of the study obtained the following variables without any manual chart abstraction: age, gender, race/ethnicity, insurance status, primary care physician, medical/psychiatric comorbidities, breath alcohol concentration, testing obtained, chemical sedation administered, ED disposition, and length of stay. Additional data for each frequent user were manually abstracted from the chart by another investigator; these included counts of ED visits that were not for alcohol intoxication, hospital admissions, and visits to a separate psychiatric services ED. Multiple definitions for ED frequent users exist in the literature, ranging from 3-20 visits per 12-month period. For this study, we elected to use the upper limit of this range and categorize an alcohol-related frequent user as one with greater than 20 visits for acute alcohol intoxication in the previous 12 months, in order to describe the highest-user cohort possible. Non-frequent users were those who did not meet this criterion. After we identified the frequent-user cohort, we analyzed encounter characteristics for those with a frequent-user designation during that visit compared to those without. For analysis of patient characteristics and demographics, duplicate observations were excluded. The patient encounter that was retained for demographic analysis was the most recent encounter during the study period.
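As an illustration of the frequent-user definition above (more than 20 intoxication visits in the previous 12 months), a minimal sketch follows; the function name and data layout are hypothetical, not the study's actual query:

# Sketch of the frequent-user definition; encounter data are hypothetical.
from datetime import date, timedelta

def is_frequent_user(visit_dates, index_visit):
    # True if the patient had more than 20 alcohol-intoxication visits
    # in the 365 days preceding (and excluding) the index visit.
    window_start = index_visit - timedelta(days=365)
    prior = [d for d in visit_dates if window_start <= d < index_visit]
    return len(prior) > 20

visits = [date(2015, 1, 1) + timedelta(days=7 * k) for k in range(52)]  # weekly visits
print(is_frequent_user(visits, date(2015, 12, 31)))  # True: ~52 visits in the window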

For all comparisons, we calculated differences in means or proportions with associated 95% confidence intervals. We checked a subset of 20 charts to confirm accuracy of data abstraction.

Frequent users for alcohol intoxication are a unique subset of frequent ED users who merit attention given increasing numbers of alcohol-related visits nationally. In this study, we identified 325 patients with 11,370 encounters for alcohol intoxication over a five-year period, where some individuals used the ED for alcohol intoxication more than 100 times in a year. We identified several variables that differed for frequent users compared to non-frequent users. First, there were comparatively higher rates of medical and psychiatric comorbidities among alcohol-related frequent users. This finding reiterates the complexity of this population and the fact that any of these "routine" visits have the potential for clinical decompensation and may require resources beyond the scope of simple observation for intoxication. We also identified differences in demographics, as well as differences regarding health insurance status. In contrast, several variables were not different among the two groups; namely, diagnostic workups were similar between the groups, but interpretation of this finding is limited by practice patterns at our institution, where workups tend to be minimal for most alcohol intoxication encounters. Another important finding in this study was the low admission rate among frequent users. While it is not unexpected that presentations for alcohol intoxication would result in low admission rates, it does illustrate a potential barrier in caring for this population. In other studies describing frequent users for other general medical complaints, admission rates are reported to be as high as 40%. In those cases, interventions can be implemented during inpatient stays, and resources can be initiated during admissions. In the population we describe, since admissions are so uncommon, the responsibility may fall on ED personnel to identify these patients, as they will not be addressed by an inpatient team. In our cohort of alcohol-related frequent users, we identified some concerning features regarding primary care access and utilization. Less than half of the frequent-user population had primary care physicians, and only 4% were participants in a coordinated primary care program intended for the hospital's greatest utilizers. We believe that this is an important gap in coverage for a very high-needs population. This finding also contrasts with the general ED frequent-user literature, where most studies describe primary care access as over 90%. Our institution does not appear to be identifying alcohol-related frequent users for primary care services as effectively as those who use the ED for other problems. Possible explanations for this gap in coverage could include a lack of readiness for healthcare accountability, or a struggle maintaining primary care relationships in the setting of ongoing substance abuse. We were unable to determine the prevalence of important social stressors such as homelessness, employment, or government assistance in this cohort, but addressing these stressors in the future will play an important role in assisting this population.
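The comparison described above, a difference in proportions with a 95% confidence interval, can be sketched with a standard Wald interval; the counts below are hypothetical placeholders, not the study's data:

# Sketch: difference in proportions with a Wald 95% CI. Counts are hypothetical.
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# e.g., admission proportion in frequent vs. non-frequent users (invented counts)
diff, (lo, hi) = prop_diff_ci(12, 325, 180, 1500)
print(f"difference = {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")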

Multiple social services interventions have been proposed for frequent ED users, such as case management and referral programs, but these have been shown to have variable rates of success. One study conducted in our community investigated the use of case management and demographic-specific housing referrals among 92 chronic inebriates. While the study found that healthcare costs decreased pre vs. post intervention, ED visits did not decrease.

Oxygen desaturation below 70% puts patients at risk for dysrhythmia, hemodynamic decompensation, hypoxic brain injury, and death. The challenge for emergency physicians is to secure an endotracheal tube rapidly without critical hypoxia or aspiration. Preoxygenation prior to intubation extends the duration of "safe apnea" to allow for placement of a definitive airway. Below that level, oxygen offloading from hemoglobin enters the steeper portion of the oxyhemoglobin dissociation curve and can decrease to critical levels of oxygen saturation within seconds. Alveoli will continue to take up oxygen even without diaphragmatic movements or lung expansion. Within some of the larger airways, turbulent flow could generate a cascade of turbulent vortex flows extending into smaller airways. Denitrogenation involves using oxygen to wash out the nitrogen contained in the lungs after breathing room air, resulting in a larger alveolar oxygen reservoir. When breathing room air, 450 mL of oxygen is present in the lungs of an average healthy adult. When a patient breathes 100% oxygen, this washes out the nitrogen, increasing the oxygen in the lungs to 3,000 mL. EPs and emergency medical services use several devices to deliver oxygen or increased airflow to patients in respiratory need. A nasal cannula is used primarily for apneic oxygenation rather than pre-oxygenation. Previous recommendations were to place a high-flow nasal cannula with an initial oxygen flow rate of 4 L/min, then increase to 15 L/min to provide apneic oxygenation once the patient is sedated. A nasal cannula can be placed above the face mask until just prior to attempting laryngoscopy, at which point it is placed in the nares to facilitate apneic oxygenation. The standard non-rebreather mask (NRB) delivers only 60% to 70% inspired oxygen at oxygen flow rates of 15 L/min. The FiO2 can be improved by increasing the oxygen flow to the NRB from 15 L/min to 30-60 L/min. The use of NRBs is limited in patients with high inspiratory flow rates, as FiO2 may be decreased due to NRB design. Some devices with effective seals and valves will collapse onto the patient's face at high inspiratory flow rates, causing transient airway obstruction. A bag-valve mask (BVM) may approximate an anesthesia circuit for preoxygenation. BVMs vary in performance according to the type of BVM device, spontaneous ventilation vs. positive pressure ventilation, and the presence of a positive end-expiratory pressure (PEEP) valve. During spontaneous ventilation the patient must produce sufficient negative inspiratory pressures to activate the inspiratory valve. The negative pressures generated within the mask may lead to entrainment of room air and lower FiO2 during preoxygenation. A BVM's performance increases during spontaneous breathing by administering high-flow oxygen, using a PEEP valve, and assisting spontaneous ventilations with positive pressure ventilations in synchrony with the patient's spontaneous inspiratory efforts.
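The denitrogenation arithmetic quoted above can be reproduced with a crude back-of-envelope sketch; the functional residual capacity, alveolar oxygen fraction, and oxygen consumption figures are textbook approximations rather than values from this article, and the result ignores shunt and desaturation kinetics:

# Back-of-envelope sketch of the alveolar oxygen reservoir described above.
# Assumptions (not from this article): FRC ~3,000 mL, alveolar O2 fraction
# ~0.15 on room air (~1.0 after washout), O2 consumption ~250 mL/min.
FRC_ML = 3000
VO2_ML_PER_MIN = 250

def o2_reservoir_ml(alveolar_o2_fraction):
    return FRC_ML * alveolar_o2_fraction

room_air = o2_reservoir_ml(0.15)        # ~450 mL, matching the figure quoted above
denitrogenated = o2_reservoir_ml(1.0)   # ~3,000 mL after nitrogen washout

# Crude upper bound on the extra oxygen supply gained by denitrogenation:
print((denitrogenated - room_air) / VO2_ML_PER_MIN, "min of additional O2 supply")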
Continuous positive airway pressure improves oxygenation by increasing functional residual capacity and reversing pulmonary shunting through the recruitment of poorly ventilated lung units.


EPs are poorly equipped to determine the burden of AF or the origin of the arrhythmia

Atrial fibrillation and flutter (AF) is a pervasive disease affecting 6.1 million people in the United States. Each year it is responsible for more than 750,000 hospitalizations and 130,000 deaths. In contrast to overall declining death rates for cardiovascular disease, AF as the "primary or contributing cause of death has been rising for more than two decades." The annual economic burden of AF is six billion dollars; medical costs per AF patient are about $8,707 higher than for non-AF individuals. Thrombotic embolism of the cerebral circulation, or stroke, is the principal risk of AF and ranges from less than 2% to greater than 10% annually. AF is the cause of 100,000-125,000 embolic strokes each year, of which 20% are fatal. Anticoagulation to prevent these embolic events is the standard of care unless contraindicated. However, it is not without risk, as even minor trauma can cause substantial and potentially life-threatening bleeding. Given that AF is the most common arrhythmia among the elderly, balancing these competing risks is challenging. Anticoagulation for AF is most commonly accomplished with a vitamin K antagonist, warfarin. However, its use requires patient education, medication compliance, dietary consistency, and close monitoring. CHA2DS2-VASc, ATRIA, HAS-BLED, ORBIT, and HEMORR2HAGES are just some of the decision-support tools available to objectively weigh the risk of stroke and life-threatening bleeding from therapy. Newer, novel oral anticoagulant agents provide a benefit/risk profile that may surpass warfarin, especially when considering initiation in the emergency department. In this issue of WestJEM, Smith and colleagues present a prospective observational evaluation of anticoagulation prescribing practices in non-valvular AF. Patients presenting to one of seven Northern California EDs with AF at high risk for stroke were eligible unless admitted, not part of Kaiser Permanente of Northern California (KPNC), or already prescribed anticoagulation. During the 14-month study there were no departmental policies governing the initiation of anticoagulation in AF patients.
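For readers unfamiliar with the first of the decision-support tools named above, a minimal sketch of the commonly published CHA2DS2-VASc scoring follows; the field names are hypothetical, and this is an illustration rather than a clinical tool:

# Minimal sketch of the published CHA2DS2-VASc score; not a clinical tool.
def cha2ds2_vasc(chf, htn, age, diabetes, prior_stroke_tia, vascular, female):
    # One point each: CHF, hypertension, diabetes, vascular disease, female sex.
    score = chf + htn + diabetes + vascular + female
    # Two points for prior stroke/TIA/thromboembolism.
    score += 2 if prior_stroke_tia else 0
    # Two points for age >= 75; one point for age 65-74.
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    return score

# Example: a 72-year-old woman with hypertension scores 3 (age 1, HTN 1, sex 1).
print(cha2ds2_vasc(chf=0, htn=1, age=72, diabetes=0,
                   prior_stroke_tia=False, vascular=0, female=1))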

The authors report that 27.2% of the 312 patients at high risk for stroke received a new anticoagulant at ED discharge, and only 40% were prescribed oral anticoagulation within 30 days of the index ED visit. Anticoagulation was more likely to be initiated in the ED if the patient was younger, had persistent AF at discharge, or when cardiology was consulted during the index visit. Furthermore, only 60.3% of patients were given patient education material on AF in their discharge instructions. Critics of Smith et al. will take issue with their inclusion criteria that required participation in KPNC. By definition, all members of KPNC are insured; they also have guaranteed access to timely primary care follow-up and are of higher socioeconomic means than the general population. Many of the factors that contribute to successful anticoagulation therapy – diet stability, monitoring of renal function, education and intervention on modifiable risk factors, smoking cessation, and fall risk – can all be assessed by a primary care physician and addressed with the shared decision-making ensured in the KPNC system. While these limitations are acknowledged by the authors and narrow the generalizability of these findings, Smith and colleagues demonstrate the challenges of addressing ongoing chronic disease in the ED and highlight the complex decision-making required. AF patients without insurance in the U.S. lack reliable access to primary care, and emergency physicians likely under-prescribe anticoagulation therapy due to an abundance of caution. Lacking the objective data to quantify these thromboembolic risk factors of AF, EPs are reluctant to initiate thromboprophylaxis, despite its known benefits, in light of the well-demonstrated risk for life-threatening bleeding. However, the risk is largely misperceived. Recent findings from the Spanish EMERG-AF trial demonstrate that initiating this therapy in the ED is at least as safe as in other settings and has a clear mortality benefit at one year. Furthermore, that benefit does not come at the expense of reduced effectiveness over the course of one-year follow-up. In addition to highlighting the challenges of prescribing anticoagulation in the ED setting, Smith et al. also illustrate the opportunity for EPs to prevent future strokes in the setting of known AF. This opportunity is likely larger than reported considering the limitations of this investigation. Thankfully, there are clear guidelines to assist EPs based upon validated methods of risk-stratification.

Furthermore, of those patients receiving anticoagulation therapy in the first 30 days, more than half were initiated in the ED. While these subjects likely represent the least complex decision-making, these results also suggest some prescribing inertia; anticoagulation was continued by the primary care physician because it had already been initiated in the ED. Despite these limitations, Smith and colleagues demonstrate an immense target for EPs to improve stroke risk for at least 60% of AF patients discharged from the ED. Coupled with other evidence demonstrating that such practice is efficacious, safe, and cost-effective, Smith makes a compelling case that thromboprophylaxis should be initiated in all but the most complex AF patients, who will likely be admitted. EDs should develop policies to assure that AF patients can receive anticoagulation therapy on discharge. These local policies could include decision pathways that rely on guidelines and decision-support tools, and account for insurance status. As EPs, we should embrace the responsibility to provide thromboprophylaxis regardless of the likelihood of primary care follow-up. To defer that decision ignores the role emergency medicine plays in providing for the public health in the U.S., and frankly misses the mark.

Arterial lines are important for monitoring and providing care to critically ill patients. Not only do they allow for rapid access to blood, but they also allow a provider continuous access to the patient's blood pressure, which enables minute titration of vasoactive medications. Traditionally there are two locations for arterial line placement: the femoral and radial arteries. The choice between sites is often made according to the provider's preference, with very little evidence guiding this decision. Although initial beliefs that arterial lines are immune to infection are certainly unfounded, there is evidence that the infection risk is proportionally similar to that of their central venous counterparts regarding location. It has also been shown repeatedly that central arterial monitoring provides different information from both peripheral and noninvasive monitoring. Older studies have shown that the femoral artery is superior to the radial artery for blood pressure monitoring, but these results come from a different era of medicine, when placement technique was different and the landscape of monitoring was not what it is today.

It is therefore important to reinvestigate femoral artery access in today's environment. Line failure adds significant and unnecessary costs to the treatment of critically ill patients, including financial costs, time, and health. In the present study, we attempt to determine if one site is more prone to failure. We performed an ambispective, observational, cohort study to determine the variance in failure rates between femoral and radial arterial lines. This study took place at a single center, a county teaching hospital with 12 adult ICU beds, and was approved by our institutional review board. Any patient with an arterial line placed anywhere in our hospital met our inclusion criteria. Providers at our site were not routinely using ultrasound for arterial line placement, so this metric was not evaluated. Although the specific indication for arterial line placement was not captured in our study, it is customary at our institution to place arterial lines for either ongoing titration of vasopressor agents or expected repeated evaluation of the management of patients with ventilatory support. Our institution uses the Arrow RA-04020 quick kit for radial arterial lines, which is a 20-gauge, 4.25 cm catheter. The Arrow select kit is used for femoral lines, which is also a 20-gauge catheter, though 12 cm in length. All patients in our study were admitted to an ICU bed and were therefore of high acuity. Exclusion criteria were patient age < 18 years old and line removal before 24 hours. We performed the retrospective arm of this study using the hospital billing database. Records from every patient who received and was successfully billed for an arterial line between January 2012 and June 2015 in our hospital were included. Research assistants (RAs), who were blinded to the study hypothesis, were provided a training presentation on how to extract relevant information from the electronic health record (EHR), including the patient's age, line insertion time, line removal time, and whether line removal was due to failure. We compiled their results into a database, and a pilot quality improvement study was initially performed on every 20th patient in the study. The two principal investigators then reviewed the data to ensure that data acquisition was accurate between all RAs, demonstrating reliable inter-observer agreement regarding insertion and removal dates and classification of line failure. After confirming that our proposed method of data acquisition was precise, the RAs performed the complete review on the total cohort, and the acquired data were kept in a spreadsheet without analysis until the prospective portion of the study was completed. The prospective arm of the study took place from June 2015 to March 2016. RAs obtained information on every adult patient in whom an arterial line was placed in our hospital during the enrollment period. To ensure capture of all patients, RAs would observe each ICU bed and ED resuscitation bay for new arterial lines three times daily. They compiled an ongoing list of known lines, noting the time of insertion, location of the line, patient age, and patient comorbidities. If the arterial line was found to have been removed, the RAs would document the time of removal and determine why the line had been removed, noting whether it was considered a failure and if it was replaced. The RAs obtained this information from nursing flow sheets or nursing interview at the time of their evaluation.
Causes of failure included the following: 1) inaccuracy, 2) blockage, 3) site issue, and 4) accidental removal. We hypothesized a two-fold greater failure rate of radial arterial lines compared to femoral, amounting to a 50% reduction in failure rate by placing the line in the femoral artery. We postulated a 60% radial and 40% femoral distribution of line placement, based on observation of local practice. We calculated that 128 patients would provide sufficient power to detect the hypothesized failure rate if lines were split evenly between the two sites. We therefore planned to enroll 200 patients, as the actual distribution was not known a priori. We chose an ambispective design as the EHR made retrospective data acquisition easy, allowing for greater power to the study. We subsequently used the prospective data to help validate our retrospective findings. In total, we evaluated 272 arterial lines over both the prospective and retrospective arms of our study, with 58 lines leading to failure for a combined total failure rate of 21.32%. Comorbidities between the two cohorts were similar, as shown in the Table. Our retrospective arm screened 304 arterial lines; however, only 196 met criteria for analysis over the three-and-a-half years. The radial cohort had 43 failures and the femoral cohort had three failures, for an absolute risk reduction for failure of 25.4% if the femoral site was chosen. The prospective arm had 76 total lines, which included 39 radial and 37 femoral. The radial cohort had 10 failures and the femoral cohort had two failures. This similarly provided an absolute risk reduction of 20.2% in failure rate if a femoral line was placed instead of a radial arterial line. This outcome was consistent between the retrospective and prospective arms of the trial and led to a number needed to treat of 4.1 patients to prevent one line failure. Secondary outcomes evaluated included time to failure and cause of failure. Combined data showed the median time to failure for radial lines was two days, compared to a median time to failure of four days for femoral lines. From the prospective data, the primary causes of failure for the radial lines were accidental removal, a line not drawing, and inaccurate readings. There were no radial lines removed due to "site issue" in our prospective arm; however, such issues were responsible for 15% of radial removals in our retrospective arm. Conversely, accidental removal accounted for only 5% of all removals in the retrospective cohort of radial lines but 40% of failures in the prospective arm.
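The prospective-arm arithmetic reported above can be reproduced directly from the stated counts (10 of 39 radial failures vs. 2 of 37 femoral); note that the reported number needed to treat of 4.1 pools both arms, whereas this sketch uses only the prospective counts given in the text:

# Reproducing the prospective-arm figures quoted above.
radial_failures, radial_n = 10, 39
femoral_failures, femoral_n = 2, 37

radial_rate = radial_failures / radial_n      # ~25.6% failure rate
femoral_rate = femoral_failures / femoral_n   # ~5.4% failure rate
arr = radial_rate - femoral_rate              # ~0.202, the 20.2% ARR quoted above
nnt = 1 / arr                                 # ~4.9 for this arm alone

print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")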


A standardized post-resuscitation debriefing template is introduced

Pre-reading establishes a basic knowledge base for the learners and encourages personal reflection prior to the classroom session. Group discussions aid in the practical incorporation of that knowledge into the residents' practice. It is recommended that the four modules be spaced over several months to maximize retention of the material via spaced repetition. The mini-modules may also be combined into a single 60-90 minute session to accommodate didactic conference schedules. The first module exposes learners to the concept of SVS, as well as potential stages of recovery. This module emphasizes establishing a foundation of knowledge pertaining to SVS, laying the groundwork for later modules to introduce practical tools and concepts for coping with and preventing SVS. The second module describes a method to help recognize SVS in colleagues. Residents and faculty are encouraged to help colleagues identify when they are suffering from SVS and to help create an appropriate follow-up plan. In addition, a method for performing a "hot debriefing" is described, which occurs immediately following a significant mistake or negative patient outcome. The third module serves to make learners aware of resources that are available at their individual institution and encourages learners to access them prior to being affected by SVS. Finally, the fourth module focuses on department-wide prevention of SVS through culture change and the use of routine group debriefings following difficult resuscitations.

Mindfulness is the practice of purposeful and nonjudgmental attentiveness to one's own experience, thoughts, and feelings. Meditation is a technique for resting the mind and attaining a state of consciousness that is distinctly different from the normal waking state. A regular practice of meditation can provide a sense of mindfulness that lasts throughout the day. Both mindfulness and meditation have become more mainstream and socially acceptable ways to manage stress and increase productivity.

Within the field of medicine, research has shown that being mindful or developing a meditation practice improves job satisfaction and decreases burnout. Multiple studies have demonstrated benefits to mindfulness and meditation, such as increased empathy, life satisfaction, and self-compassion, and decreased anxiety, rumination, burnout, and cortisol levels. Because EM residents stand to benefit tremendously from these effects, we determined that mindfulness and meditation were important topics for an educator wellness toolkit. Although mindfulness and meditation have become more integrated into some medical schools, these concepts are not frequently found in residency training programs. We developed a mindfulness and meditation lesson plan to address this gap. Our educator toolkit consists of three 30-minute modules and a longitudinal, guided group meditation practice designed to span several months or an academic year. The two initial modules outline and define meditation, as well as describe how to start a meditation practice. These modules include an opportunity to practice meditating as a group, an invitation to start an individual practice, and a chance to discuss barriers to practice. Ideally, these modules would be offered during the first month of the academic year, separated by one to four weeks. Following the second module, a longitudinal, guided meditation practice should be incorporated at regular intervals throughout the residency conference schedule. The final module should be implemented toward the end of the academic year, following the longitudinal meditations. This module provides the residents a forum to debrief and reflect on their practices of meditation and mindful thinking cultivated throughout the year. It also serves as a chance to consider and evaluate how meditation and mindfulness have impacted individual residents, the residency program, and the department.

Positive psychology is the conscious participation in acts to improve well-being by creating and nurturing positive feelings, thoughts, and behaviors. In contrast to traditional psychology, which focuses on mitigating illness, positive psychology focuses on the strengths that allow individuals to thrive.

Use of positive psychology interventions has been shown to improve well-being and decrease depressive symptoms. Positive psychology interventions can serve as a useful tool to improve team dynamics and success in stressful situations such as trauma resuscitations. Practicing gratitude, positive self-talk, and intentional acts of kindness have the potential to make emotionally difficult shifts more tolerable and improve physician-patient interactions. Despite the literature describing the benefits of positive psychology, and as with SVS and mindfulness and meditation, we found no described use of a positive psychology curriculum for residents. To address this gap, we developed a flexible and easy-to-implement positive psychology toolkit focusing on two positive psychology principles, PERMA and BTSF. Although these principles can be taught together as a two-part positive psychology lesson plan, each can also be given as a stand-alone session. PERMA is an evidence-based model for well-being that can help residents more fully engage with their work and thrive in their careers. The PERMA toolkit includes a slide set presentation, a brainstorming period, and both paired and group discussion over a period of 50 minutes. The slide set provides a basic introduction to positive psychology, followed by a more detailed description of the PERMA model. Interactive audience participation during the slide presentation is encouraged. A brainstorming activity in pairs and in a larger group follows the slide presentation. The session concludes with a “commitment to act,” an exercise in which the learners write down one specific thing that they plan to do differently based on their participation. BTSF is a skill that can be quickly taught to residents and used in a wide variety of settings. This technique helps an individual cope with an acute stressor by employing one or all of the following four elements: tactical or box breathing; positive self-talk; visualizing success; and stating or intentionally thinking a specific focus word to hone attention. Similar to the PERMA lesson plan, the BTSF session includes a slide set presentation, active group discussion, and an acute stressor exercise in which participants practice BTSF in a simulated stressful environment over a period of 45 minutes. The slide set specifically describes each of the components of BTSF and concludes with a tactical breathing exercise.

The conclusion of the BTSF lesson plan is an acute stressor exercise. We suggest using the children’s game Operation, or another similar game, to simulate stress while practicing the BTSF model. The lesson plan also concludes with a “commitment to act” exercise.

The 2017 Resident Wellness Consensus Summit was convened with the ultimate mission to empower EM residents from around the world to lead efforts aimed at decreasing burnout, depression, and suicidality during residency and to increase resident well-being. Leading up to the event, many residents collaborated in the Wellness Think Tank over an eight-month period to conduct much of the consensus event pre-work. Specifically, our Educator Toolkit working group focused on developing three widely applicable, high-yield lesson plans for EM residency programs on the topics of SVS, mindfulness and meditation, and positive psychology. The lesson plans may stand alone or be incorporated into a larger wellness program. The three toolkits differ in length, scope, and duration of the individual sessions. This design provides greater flexibility for residency programs to schedule them into their existing training curriculum. For example, programs with limited time or resources may find a single 45-minute session on positive psychology easier to incorporate than a year-long curriculum that includes classroom sessions and monthly guided meditations, as described in the mindfulness and meditation toolkit. In an effort to address widespread burnout and unwellness, our goal is for these three topics to be widely covered and implemented by residency programs through the use of these templated lesson plans. Each toolkit provides instruction in practical skills that can be used on a daily basis, both within and outside of the emergency department. Next steps include measuring the effects of these lesson plans on resident satisfaction, learning, behavior change, and ultimately patient outcomes, as well as burnout, resilience, and job satisfaction.

Physician wellness has recently become a popular topic of conversation and publication within the house of medicine and specifically within emergency medicine. The purpose of this summit was to identify areas of overlap and synergy so that collaborative projects and possibly best practices could be established for emergency physician wellness. National organizations, such as the Accreditation Council for Graduate Medical Education, have recently placed a high priority on resiliency and wellness in trainees. Similar efforts have been undertaken by the American Medical Association,8 the American College of Emergency Physicians, and the American Academy of Emergency Medicine. Mirroring recent literature showing that emergency physicians are at particularly high risk of burnout syndrome, the rate of burnout among trainees is as high as 60%. Several recent studies have identified factors associated with increased resiliency, with one meta-analysis demonstrating several interventions associated with increased resiliency and lower incidence of burnout syndrome in graduate medical education. No literature, however, has focused exclusively on the high-risk burnout population of EM residents.

Through a joint collaboration involving Academic Life in Emergency Medicine’s Wellness Think Tank, Essentials of Emergency Medicine, and the Emergency Medicine Residents’ Association, a one-day Resident Wellness Consensus Summit was organized. This summit primarily convened a group of essential stakeholders in the conversation, EM residents, to clarify the present state of wellness initiatives among EM training programs and potentially identify best practices and tangible tools to increase physician wellness. To our knowledge, this is the first national consensus event of its kind comprised primarily of residents and focused on resident wellness. The RWCS event is the first step of a transformative, cultural journey focusing on resident health and well-being. To our knowledge, this is the first time that residents from across the world collaborated and convened to reach a consensus on these critical issues. The tools developed by the four RWCS working groups will serve as a resource for resident health and well-being leaders looking to influence clinical learning environments at the local organizational level. Working at an organizational level is foundational but not sufficient for cultural transformation. An additional way to look at resident health and well-being is through the paradigm of a social movement, which the Wellness Think Tank and the RWCS event embody. Veteran organizer and policy expert Marshall Ganz describes four elements necessary to lead successful social movements: relationships, story, strategy, and action. The Wellness Think Tank and RWCS have made inroads in relationships and story, and will hopefully catalyze strategy and action to ensure that resident health and well-being becomes a successful social movement. The RWCS leadership team had experience working within the ALiEM culture prior to the RWCS. This led to the development of the Wellness Think Tank to congregate a critical mass of EM residents into a virtual community. Mirroring the ALiEM culture, the Think Tank’s culture was based on deep, reciprocal relationships that complement knowledge transactions. These relationships are facilitated by trust, communication, and personal learning networks that allow for exponential growth. The networks developed have both strong ties, which facilitate commitment and motivation, and weak ties, which facilitate entry into new networks and domains. The relationships that the Wellness Think Tank and RWCS created will fuel the networks needed to implement a successful resident health and well-being social movement in the future.

Many recent academic and popular publications have highlighted the fact that physicians are at much higher risk for burnout, depression, and suicide than the general population of the United States. Data from the National Violent Death Reporting System indicate that each year more than 200 physicians in the U.S. commit suicide.1 Medical students and residents are at especially high risk. Furthermore, emergency physicians are consistently ranked among the most burnt-out doctors. This dark problem was recently brought to the forefront in an email written to the Council of Emergency Medicine Residency Directors by Dr. Christopher Doty, the residency program director at the University of Kentucky, detailing his tragic loss of a resident and its effects on the residency and broader hospital community.
The Accreditation Council for Graduate Medical Education has mandated that residency programs address resident wellness within the Common Program Requirements. Emergency medicine residency programs are now required to provide education to residents and faculty on burnout, depression, and substance abuse and are instructed to implement curricula to encourage optimal well-being. In 2016, a group of 142 EM residents from across the world began discussing ways to address this issue through the Wellness Think Tank, a virtual community of practice focusing on resident wellness.


Police recruits received a two-second burst of police-issue OC spray to the face

These patients require rapid extrication, advanced resuscitation, and transport by a dedicated RTF component; they cannot be attended to solely by tactical medics. Recently, RTF has become a “buzzword” that first-responder departments use to demonstrate their effectiveness in tactical events. However, the role and implementation of such teams varies markedly from agency to agency. In practice, interoperability must continue to be emphasized by both command and ground-level units, and it must be practiced on a recurring basis to prevent confusion of operational objectives. On the day of the San Bernardino shooting, only three fire agencies in the county had active RTF programs in place. Communication between these units was extremely strained by existing systems and the varied understanding of RTF concepts. Ensuring cohesive and coherent medical education across agencies will not only provide law enforcement with an understanding of medical priorities but also familiarize EMS with the tactical priorities of their law enforcement partners. As many law enforcement agencies begin to deploy their own medical assets, it is critical that EMS medical directors recognize the tactical medical resource as separate from, but augmenting, the overall medical profile. This position falls outside the realm of the medical branch of the incident command system because of its integration with operational teams. Thus, a law enforcement medical coordinator (LEMC) may provide a conduit to both EMS and fire assets as well as providing operational input to the incident commander. The LEMC would then provide the commander with critical information that may be overlooked by the traditional medical branch of the ICS. First, the ability to conduct an in-depth medical-threat assessment, using operational data gathered by law enforcement combined with EMS resources, will provide on-scene commanders with a much better perspective on potential threats and limitations to operational plans.

Second, this position will provide improved integration between the tactical elements of the response and the force protection and rescue elements of the task force. Creating a LEMC position ensures proper allocation of both human and medical assets. Because SWAT medics operate within the law enforcement branch and not the medical branch, there is potential for duplication of efforts and general disorganization. This occurred in San Bernardino: despite traffic management by the SBPD, local resources pouring into the area of the shooting created an obstacle for staged EMS assets. Medical resources were also being dispatched in duplicate with their respective law enforcement teams. Consolidated coordination of these assets would improve law enforcement support as well as integration for agencies less experienced with the RTF model. Ideally, this position would be filled by an active or former tactical medical provider, preferably a physician with knowledge of both the tactical and EMS functions. The benefits include continuous evaluation of the medical threat from law enforcement assets in the hot zone as well as EMS and fire in the warm/cold zone. Additionally, the LEMC would oversee resource needs and distribution among the operational teams. Designating one individual streamlines the process and enables the SWAT medic to focus solely on providing emergent aid within the hot zone, knowing that coordination is being managed by a professional who understands the scene, its evolution, and their needs. Further, because of the uncertain nature of these operations, agencies must be prepared for extended operations. This possibility was understood by several teams present at the IRC event because they had recently been involved in the manhunt for Christopher Dorner, the disgraced Los Angeles Police Department officer who went on a shooting spree throughout Southern California. As that event extended over several hours, teams began to lack basic necessities such as food and water and experienced a shortage of the personnel needed to sustain a rotation system at a high operational tempo.

Though the logistics branch of the ICS is theoretically tasked with procurement of supplies for an operation, law enforcement team health remains under the purview of the tactical medic. Therefore, a LEMC would be the ideal person to ensure proper allotment of resources regardless of the duration of operations.

Law enforcement and fire departments have adapted quickly to minimize the loss of life in high-threat incidents through improved integration and education. Training for these scenarios, however, is more often practiced in isolated events and less frequently in combined exercises. As a result, medical directors often outfit their teams in relation to the perceived threat, with PPE and medical equipment designed to protect from handguns and treat the “preventable causes of death.” Despite this traditional mindset, it has been repeatedly demonstrated that modern terrorists coordinate complex attacks, using multiple detonations to “drive” response and inflict maximal damage. Although many of the victims of the San Bernardino terrorist event were shot numerous times, it has been well documented that there were unexploded IEDs in the immediate vicinity of both survivors and rescuers. In the face of multiple armed attackers using high-powered rifles and multiple explosive devices, the typically issued PPE is inadequate and the available medical supplies could quickly be exhausted, particularly when treating individuals with blast injuries. Further, as active-shooter incidents have evolved, the push to incorporate Tactical Emergency Casualty Care guidelines by first-responder agencies has accordingly focused on ballistic injuries. This approach emphasizes the need for hemorrhage control but overlooks both the likelihood of encountering victims with multiple amputations and the complications of blast injury not seen with penetrating injury. Medical directors and medical assets should update their education programs to re-emphasize treatment of blast vs. ballistic injury. In addition, training in focused mass-casualty management will guide agencies and designated LEMCs in the care and coordination necessary for adequate resource planning.

In light of the threats now faced by our society, merely supplying one tourniquet, one chest seal, and one dressing may no longer be sufficient. We recommend that ALL responders carry tourniquets, while SWAT team members should carry several. In addition, designated law enforcement medical elements should wear the same PPE as their colleagues on patrol. The development of a portable medical kit for active shooter/suspected terrorist events should be encouraged. Should extra equipment become necessary, this kit should contain multiple tourniquets, triage tape, combination dressings/bandages, and large quantities of gauze for hemostasis/wound packing. Contrary to conventional thinking, establishment of an airway is not of primary concern in these types of events, eliminating the need for multiple advanced airway kits. Most public buildings follow standard security practices, and medical directors and tactical medics should accordingly make basic changes in their response profiles. When the sprinklers were activated in the IRC building, medical assets were unprepared for operations in a wet environment. Moving forward, medical directors should educate and plan for the electrical shock hazards and biological hazards posed to responders in that environment. Rescue equipment should include waterproof triage tags, and teams should have the tools to circumvent difficulties with building access as part of the rescue plan. In the current environment, all tactical teams must have such access. Finally, agency training can no longer accept notional acknowledgment of the presence of IEDs. Actual procedures for IED-complicated active shooter incidents should now be the standard practiced scenario. Additionally, the complex and critical nature of injuries seen in these events, and the challenge of accessing patients wounded by explosions, demonstrate the necessity for bystander care at the scene of the incident. Municipal and county agencies should consider training communities in TECC First Care Provider guidelines. Similarly, as the community has accepted the placement of automated external defibrillators in high-traffic areas, trauma/MCI equipment stations should also be pre-positioned in such areas and co-located with the AED.

Stresses from these critical incidents may be reversed or halted through adaptive responses. Recognition that PTS is a likely outcome of mass-casualty events should stimulate medical directors and team medics to create mechanisms for early recognition and practice of adaptive responses, both for the individual and the collective. While individual stress is the focus of therapy, shared trauma or group stress remains a possible outcome. This shared trauma may unconsciously change processes within the group, affecting operational capabilities. Restricting access by non-essential personnel to victims remains the most basic process for decreasing stress in all groups. Additionally, there is a marked difference between the responses expected of responding patrol units and organized SWAT units. While specialized teams may have the infrastructure to address PTS, including their own medical assets, individuals involved in the initial response may find it difficult to participate in departmental programs because they fear stigmatization.

Avoidance of formal services may isolate individuals and foster maladaptive responses that incur significantly higher risk for long-term pathology. Formal gatherings of team members and peer groups should be initiated very early to begin discussion of what has been witnessed and to prevent isolation of those most affected. However, support services must remain flexible and available to individuals reaching out to medical directors and team medics. Moreover, these gatherings must be protected from rules of discovery; fostering unguarded discussion and conversation is crucial, and fear of retribution may destroy this process. Finally, team medics may themselves need assistance following a crisis. It is imperative that medical directors or medical coordinators, as well as team leaders, allow for small-group or peer discussions in the aftermath of a critical event.

As part of their standard training, each participant received a standardized irritant exposure and completed a training evolution. Police trainees were required to complete a series of tasks to simulate control and apprehension of a combative criminal suspect; this training sequence lasted approximately 1½-2 minutes. Military trainees wearing protective gas masks were placed in an enclosed structure that was then saturated with CS gas. Gas masks were removed and each trainee was exposed to the tear gas for approximately 10 seconds. They were then required to perform a series of training tasks and safely exit the multi-story structure. This training sequence also lasted approximately 1½-2 minutes. After irritant exposure and completion of their training sequence, all subjects proceeded to a decontamination area and were allowed to irrigate their eyes and skin ad lib with water. Participants were randomized to a control group and an intervention group. The intervention group was provided a cup containing a unit “dose” of 15 cc of Johnson’s® baby shampoo and instructed to apply it liberally to the head, neck, and face. Repeat shampoo “doses” were available ad lib to this group. Irrigation was provided by a garden hose for police trainees exposed to OC and by a custom-made, multi-station irrigation device for military trainees exposed to CS. This device was constructed of two PVC pipes supported horizontally three feet off the ground and connected to a fire hydrant. Water flow was adjusted to produce an approximately 48-inch column of water from each of 20 holes drilled in each PVC pipe at offset angles.

Oleoresin capsicum, or “pepper spray,” is an oil-based extract from pepper plants of the genus Capsicum. The chemically active ingredient is capsaicin, a fat-soluble phenol. OC causes its effect by stimulating type C unmyelinated nerve fibers, triggering the release of substance P along with other neuropeptides and causing neurogenic inflammation and vasodilation. These neuropeptides also produce the protective reactions of mucus secretion and coughing. Clinically this results in a painful burning sensation of the skin and mucous membranes, blepharospasm, and shortness of breath. Although OC causes a prominent subjective sense of dyspnea due to mucosal irritation, research has shown no objective change in respiratory function. OC has been estimated to be 90% effective in stopping aggressive behavior. A prior review of ED visits for OC exposure found the most common symptoms to be burning, erythema, and local irritation of exposed areas. “Tear gas” is a lay term used to describe a group of irritant chemicals that cause lacrimation.
The agent most commonly used by law enforcement is CS. CS is actually a crystalline solid, not a gas, making the term “tear gas” a misnomer; it is insoluble in water and only slightly soluble in alcohols. It is aerosolized by multiple techniques, including dissolution in an organic solvent, micro-pulverization into a powder, or use with a thermal grenade that produces hot gases.


Six LEMSA have protocols for the prehospital administration of therapeutic hypothermia

The pre-implementation time period used for comparison was November 1, 2014, to May 1, 2015. The post-implementation time period was November 1, 2015, to May 1, 2016. We defined TTVS as the time from quick registration to the first vital sign documented in the electronic medical record. The pilot phase was initiated in May 2014 for eight hours/day, five days/week, excluding weekends. This was extended to 16 hours/day, seven days/week in November 2014, which was the study period. During the implementation period, a vital signs station was created and a personal care assistant was assigned to the waiting area with the designated job of obtaining vital signs on all patients upon arrival to the ED and prior to leaving the waiting area. PCAs are part of the ED team and perform duties under the supervision of doctors and nurses, assisting with numerous tasks. The vital sign station was directly adjacent to the quick registration desk. After patient arrival and sign-in, a quick registration including name, date of birth, and chief complaint was completed. Subsequently, patients were directed to a PCA with a portable vital signs machine and a computer on wheels with access to the EMR. The PCA’s sole task was to obtain vital signs on all patients before they left the waiting area and then enter this information in the EMR. Patients who arrived via EMS had vital signs entered by the ED triage nurse and were also included in this analysis. PCAs were also empowered to obtain vital signs on patients who were waiting in line for registration.

The implementation of DTR has had countless benefits, including faster turnaround times, improved door-to-doctor times, and decreased LWBS rates.3 By reducing ED crowding, decision-making time can be shortened, along with over-use of the laboratory and computed tomography.9 However, our experience has shown that an unintended consequence of DTR is both a delay and inconsistency in obtaining initial vital signs. In this study, we demonstrated that the implementation of a vital sign station at ambulatory registration reduced TTVS, an unintended consequence of DTR, by a mean of nearly six minutes. When we coupled a vital signs station with our already existing quick registration process, the department experienced no delays in overall throughput. Although this now adds a few minutes to the quick registration, we found that the overall benefits far outweigh this short delay.
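A minimal sketch of how the TTVS measure defined above could be computed from EMR timestamps; the column names and example times here are hypothetical, not the study's data set:

```python
# TTVS: minutes from quick registration to the first documented vital sign.
import pandas as pd

visits = pd.DataFrame({
    "registered":   pd.to_datetime(["2015-01-05 10:02", "2015-12-07 09:40"]),
    "first_vitals": pd.to_datetime(["2015-01-05 10:19", "2015-12-07 09:46"]),
    "period": ["pre", "post"],
})

visits["ttvs_min"] = (
    (visits["first_vitals"] - visits["registered"]).dt.total_seconds() / 60
)

# Mean TTVS before vs. after the vital signs station was introduced
print(visits.groupby("period")["ttvs_min"].mean())
```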

For EDs that have some form of quick registration and DTR process and experience similar delays in obtaining vital signs, we believe that creating a vital sign station in the waiting room is a feasible and effective solution that could be implemented by any ED. Our ED has two portals of entry: an ambulance entrance, where the patient is immediately triaged and has vital signs obtained by a nurse who then enters them in the patient chart; and a quick registration desk in the waiting room, where all ambulatory patients must sign in prior to being brought to the treatment area. At the quick registration desk, brief demographic information and the chief complaint are obtained, which allows the patient to be entered into the EMR and receive a medical record number. After undergoing quick registration, there are three subsequent pathways for the patient: 1) taken directly into the treatment area by a nurse, PCA, or pavilion coordinator; 2) taken to a triage station for formal nursing triage; or 3) queued in the waiting room for either the next available DTR or formal triage availability. At our institution the pavilion coordinator is an ED greeter who helps the nursing staff facilitate our DTR process. Quick registration with chief complaint and vital sign assessment is markedly different from formal triage, in that formal triage requires nursing resources and a significant amount of time. Quick registration only requires patient demographics and chief complaint, whereas traditional formal triage includes expanded history-taking and a medical assessment including allergies, medications, surgical history, etc., which can delay initial clinical assessment in treatment areas. There are many potential benefits to this new process besides the decrease in TTVS. First, obtaining earlier vital signs enhances patient safety, since it allows for earlier recognition of potentially abnormal vital signs and therefore prompt treatment and intervention. This is especially true for the patient who may appear stable. Second, patient satisfaction is improved, since patients recognize that they are being taken care of from the moment they walk into the ED. Implementation may be limited by PCA competing priorities and unanticipated staffing needs within the department. While there were no extra personnel costs, as staffing did not increase to fill the vital signs station, we did decrease the availability of existing PCAs in the clinical arena.

Annually, over 400,000 people suffer non-traumatic out-of-hospital cardiac arrest in the United States. This represents the third leading cause of death in industrialized nations and accounts for eight times as many deaths as motor vehicle collisions.

There have been steady, albeit modest, improvements in the survival of patients with OHCA over the past decade. Other improvements, including higher rates of bystander CPR, dispatch-directed CPR, deployment of automatic external defibrillators in the community, and improved CPR quality, have also contributed to increasing survival rates. Recently, the American Heart Association and other subject matter experts have advocated for the development of regional systems of cardiac arrest care with designation of cardiac arrest centers. A cardiac arrest center is a hospital that provides evidence-based practice in resuscitation and post-resuscitation care, including 24/7 percutaneous coronary intervention capability and targeted temperature management, as well as an adequate annual volume of OHCA cases and a commitment to performance improvement and benchmarking. There is a similar precedent in the establishment of ST-segment elevation myocardial infarction centers over the past decade to improve outcomes in that time-dependent disease. Observational studies suggest a benefit of regionalization; therefore, the establishment of regional care systems may optimize access to and delivery of care for patients with OHCA. A prospective study demonstrated improved outcomes in patients with OHCA transported to a cardiac arrest center compared to non-cardiac arrest centers. There have been numerous observational studies with differing hospital characteristics, as well as a number of studies that compared outcomes before and after the implementation of regionalized systems of care, all suggesting an association between improved survival and routing of select patients to cardiac arrest centers. A regionalized cardiac arrest system involves a systematic approach to the care of OHCA patients across a geographic area. This would include consistency in prehospital care, selective transport to designated cardiac arrest centers, consistent policies on post-resuscitation care, and participation in a regional performance improvement process to address any potential disparities in care. Currently, most cardiac arrest centers in the U.S. are self-designated academic centers. The extent to which regionalization of cardiac arrest care has been established is not well quantified.

Two studies describing established regional cardiac arrest care systems demonstrated improved patient outcomes with regionalization. This survey of local EMS agencies in California was intended to determine current practices regarding the treatment and routing of OHCA patients and the extent to which EMS systems have regionalized care across California. The State of California has a population of 39 million, and EMS care is regulated by the California EMS Authority. Oversight of local care is provided by 33 LEMSA. These government agencies establish uniform policies and procedures for a countywide or region-wide system of first responders and EMS providers. While all LEMSA must have an EMS plan that conforms to California EMS Authority mandates, policies and protocols vary among them. We surveyed all 33 California LEMSA on three topics: 1) local policy regarding routing of OHCA patients to designated cardiac arrest centers; 2) specific interventions for post-resuscitation care available in those centers; and 3) access to data on OHCA treatment and outcome measures. We also requested system metrics on the frequency of OHCA and patient outcomes. Of note, our survey inquired about the policies and protocols pertaining to all OHCA patients, not only those who achieved ROSC. We developed a 37-question survey in three sections: field treatment and routing policies; specialty centers; and system data. Prior to dissemination, the survey was reviewed by several LEMSA administrators and subsequently edited for clarity. The survey was distributed by email to the California LEMSA administrators and medical directors in August 2016 and made available online via Qualtrics software. Reminders were sent until all LEMSA completed the survey. We clarified incomplete or inconsistent survey responses by email and/or phone. The primary objective was to describe the management of OHCA throughout California in terms of current treatment guidelines and specifically to determine the extent to which systems have regionalized care. Responses were submitted by either the LEMSA director or a representative and downloaded or input into Excel for analysis. The findings of this study will be shared with the EMS Medical Directors Association of California, an advisory body to the California EMS Authority comprised of the EMS medical directors of all 33 LEMSA, who meet quarterly to advise the state on issues pertaining to prehospital scope of practice and quality of care. This study was submitted to the Institutional Review Board at the University of California, San Francisco, and was deemed not to involve human subjects research requiring continuing IRB review.

All 33 California LEMSA participated in the survey, for a response rate of 100%. Table 1 provides a summary of LEMSA routing policies. Two LEMSA reported a fully developed regional cardiac arrest care system, with specific clinical protocols to direct patients to cardiac arrest centers, a role in influencing hospital policies about post-cardiac arrest care, and participation in a regional performance-improvement process. The Los Angeles regional cardiac arrest system has been described previously. In LA, all OHCA patients with ROSC and those transported with presumed cardiac etiology are routed to designated centers, which double as STEMI and cardiac arrest centers. All have 24/7 PCI capability, written internal protocols for TTM, and take part in a regional performance-improvement process. Alameda County operates a similar system, routing all OHCA patients with ROSC at any time to cardiac arrest centers. A large number of LEMSA, comprising a population of 14 million, have specific protocols to direct all OHCA patients with ROSC to designated PCI-capable hospitals. They have a limited role or no role in influencing hospital policies about post-cardiac arrest care and do not have a regional performance-improvement process. There was inconsistency among agencies regarding the protocols and reporting required from these hospitals. Nearly all LEMSA have a termination-of-resuscitation protocol for OHCA. Eight LEMSA have policies and protocols that direct the use of TTM during post-resuscitation care, requiring hospitals to have a written TTM protocol, and five have a memorandum of understanding to enforce the requirement and allow them a role in determining the inclusion and exclusion criteria. Seven LEMSA have policies that require receiving hospitals to have a written protocol for emergent PCI after OHCA. Of these, four have memoranda of understanding with the hospitals and three have a role in determining inclusion and exclusion criteria. The use of PCI for patients with persistent cardiac arrest was rare: fifteen agencies reported that this occurred in their system, but none reported more than 3-5 patients. Eleven LEMSA have hospitals with extracorporeal membrane oxygenation capability, but it was rarely used for this indication, and no LEMSA had specific routing or regional policies for its use. Mechanical CPR devices were optional for 18 local EMS agencies. One agency required the use of mechanical CPR devices during transport and another required them for all OHCA patients. The majority of LEMSA report collecting process measures for system quality improvement, with EMS response time the most commonly measured, followed by time to CPR, time to defibrillation, and the rate of dispatcher-assisted CPR. However, measurement of in-hospital outcomes was significantly less common, with survival to hospital discharge the most commonly measured. The frequencies of reported treatment and outcome measures are listed in Table 2. We present the current policies for treatment and routing of all OHCA patients throughout California with a 100% survey response rate.


One or more diagnoses can be coded by ICD-9 in the KPNC administrative databases

Ascertainment of HIV-infected patients by this registry has been shown to be at least 95% complete. The HIV registry contains information on patient demographics, HIV transmission risk group, dates of known HIV infection, and AIDS diagnoses. KPNC also maintains complete historical electronic databases on hospital admission/discharge/transfer data, prescription dispensing, outpatient visits, and laboratory test results, including CD4 T-cell counts and HIV-1 RNA levels. Mortality information, including date and cause of death, is obtained from hospitalization records, membership files, California death certificates, and Social Security Administration databases. Mortality data were complete through December 31, 2007. Antiretroviral medication prescription data were obtained from KPNC pharmacy databases. Approximately 97% of members fill their prescriptions at KPNC pharmacies, including patients whose prescriptions are obtained through the Ryan White AIDS Drug Assistance Program. ARV medication data included date of first fill, dosage, and days supply, as well as data on all refills. Patients were classified as: currently receiving combination ARV, current dual NNRTI/NRTI ARV use, past ARV use, or never users.

Psychiatric diagnoses were assigned by providers. The psychiatric diagnoses selected for this study were the most common and serious psychiatric disorders diagnosed among health plan members, including schizophrenic disorders, major depressive disorder, bipolar affective disorder, neurotic disorders, hysteria, phobic disorders, obsessive-compulsive disorder, anorexia nervosa, and bulimia. We examined the impact of having one or more of these psychiatric disorders in aggregate, as in prior HIV studies. Within the health plan, psychiatry can be accessed directly by patients. Mild cases of depression and anxiety may be addressed in primary care with medication, but moderate to severe cases are referred to psychiatry.

Treatment in psychiatry includes assessment, psychotherapy, and medication management. Patients diagnosed with a psychiatric disorder generally return to psychiatry for individual and/or group psychotherapy and/or medication evaluations. Our measure of psychiatric treatment was whether or not a patient had visits to a psychiatric clinic after a psychiatric diagnosis, obtained from automated databases. A diagnosis of ICD-9 substance dependence or abuse can be made by the patient’s clinician in primary care, SU disorder treatment, or psychiatry as a primary or secondary diagnosis. Diagnostic categories include all alcoholic psychoses, drug psychoses, alcohol dependence syndrome, drug dependence, alcohol abuse, cannabis abuse, hallucinogen abuse, barbiturate abuse, sedative/tranquilizer abuse, opioid abuse, cocaine abuse, and amphetamine abuse, as well as multiple substance abuse and unspecified substance abuse. In our analyses we classified patients as having one or more diagnoses of substance abuse and/or dependence versus no diagnosis. KPNC provides comprehensive outpatient SU treatment available to all members of the health plan. Services include both day hospital and traditional outpatient programs, both of which include eight weeks of individual and group therapy, education, relapse prevention, and family therapy, with aftercare visits once a week for ten months. In addition to these primary services, ambulatory detoxification and residential services are available as needed. A small proportion of patients engage in residential SU treatment, conducted by contractual agreement with outside institutions. These data are available in the KPNC referrals and claims databases. As with psychiatric treatment, in the current study SU treatment initiation was measured as having one or more visits to an outpatient program or a stay in a residential SU treatment unit following diagnosis. Analyses focused on diagnoses of psychiatric disorders with and without co-occurring SU diagnoses as the primary predictors of interest. The distribution of demographic, clinical, and behavioral characteristics was compared between patients with and without a major psychiatric diagnosis; statistical significance was assessed using the χ² test.
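A minimal sketch of the kind of χ² comparison described above, using scipy; the 2×2 counts are invented for illustration and are not the study's data:

```python
# Comparing the distribution of one characteristic (e.g., gender)
# across psychiatric-diagnosis status with a chi-squared test.
from scipy.stats import chi2_contingency, fisher_exact

#           male   female
table = [[2000,   472],    # psychiatric diagnosis   (hypothetical counts)
         [7600,  1500]]    # no psychiatric diagnosis

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# For sparsely populated 2x2 tables (e.g., rare causes of death),
# Fisher's exact test is used instead:
odds_ratio, p_exact = fisher_exact(table)
```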

The distribution of cause of death was examined by psychiatric diagnostic status; statistical significance was assessed using the χ² test, or Fisher’s exact test where table cells were sparsely populated. Cox proportional hazards regression was used to obtain point and interval estimates of mortality relative hazards associated with psychiatric diagnosis/treatment status and SU diagnosis/treatment status, with each of these two time-dependent covariates measured at three levels: no diagnosis, diagnosis with treatment, and diagnosis without treatment. With the goal of examining the joint effects of these two covariates on mortality, results are expressed as hazard ratios for combinations of psychiatric diagnosis/treatment and SU diagnosis/treatment levels, with no diagnosis of either comorbidity as the referent. These estimates were adjusted for an a priori chosen set of available covariates, including age at entry into the study, race/ethnicity, gender, HIV transmission risk group, CD4 T-cell counts and HIV RNA levels and ARV treatment modeled as time-dependent covariates, year of known HIV infection, AIDS diagnosis prior to entry into the study, and evidence of hepatitis C viral infection. Initial modeling results demonstrated a significant interaction between psychiatric and SU diagnosis/treatment status in Cox regression models. Therefore, relative hazard estimates of interest were obtained via appropriate linear combinations of parameter estimates from a fully saturated model. Although a significant minority of patients remained ARV-naïve throughout the study follow-up, we wanted to estimate adherence to combination highly active antiretroviral therapy (HAART), stratified by psychiatric diagnosis and SU diagnosis status, for study participants who did receive HAART. Adherence was measured using electronic pharmacy dispensing refill records: the days supply of HAART medication was divided by the total time elapsed between the first day of HAART initiation and the last day of HAART medication supply, over the first 12 and 24 months of study follow-up. The mean and standard deviation of adherence were then estimated by diagnostic status category.
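The adherence calculation described above can be sketched as follows; the refill records and dates are hypothetical, and this is only an illustration of the days-supply-over-elapsed-time ratio, not the study's actual code:

```python
# Adherence: total days supply of HAART dispensed, divided by the time
# elapsed from initiation to the end of the last dispensed supply.
from datetime import date, timedelta

fills = [                     # (fill_date, days_supply) from refill records
    (date(2000, 1, 1), 30),
    (date(2000, 2, 3), 30),
    (date(2000, 3, 10), 30),
]

first_fill = fills[0][0]
last_supply_day = fills[-1][0] + timedelta(days=fills[-1][1])
total_supply = sum(days for _, days in fills)

adherence = total_supply / (last_supply_day - first_fill).days
print(f"adherence = {adherence:.1%}")  # 90 days supplied / 99 elapsed = 90.9%
```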

All data analyses were conducted using SAS software, version 9.1. The distributions of demographic and HIV-related clinical and behavioral characteristics by psychiatric diagnosis status are presented in Table 1. The results of χ² tests indicate significant differences in most characteristics between patients with and without a psychiatric diagnosis. However, the categories of these characteristics were still very similar in distribution in both groups; finding significant results for very small differences in distributions is likely a consequence of the very large sample size in this study. The majority of patients were white, male, 30-49 years of age at baseline, and belonged to the men-who-have-sex-with-men HIV transmission risk group. CD4 T-lymphocyte cell counts measured at or near the time of study entry were comparable in patients with and without a psychiatric diagnosis. Similar results were observed for HIV RNA levels. Of the 2472 patients with a psychiatric diagnosis, 83.9% had one or more psychiatry department visits. The proportion of patients with any ARV therapy experience at baseline was similar across psychiatric disorder status, with on average 35% of all patients having no ARV experience. Throughout study follow-up, approximately 25% of all patients remained ARV-naïve. Among those who were receiving HAART during study follow-up, mean adherence was estimated as 82.4% among patients with a psychiatric diagnosis at 12 months after initiation of HAART and 83.7% among patients with no psychiatric diagnosis; similar mean adherence was observed at 24 months. Patients diagnosed with SU problems showed mean adherence of 81.1% at 12 months after initiating HAART, in comparison to 83.5% among patients without a SU problem diagnosis. Because adherence rates were similar across diagnostic status, we did not conduct a subanalysis of ARV-experienced patients only, in which adherence would have been included as a covariate in the regression model. The distribution of cause of death cross-tabulated by psychiatric diagnosis is presented in Table 2. The majority of deaths among patients with or without a psychiatric diagnosis were attributed to HIV/AIDS. The remaining causes of death had proportionately the same distribution across categories of psychiatric diagnosis status, with the possible exception of suicide, which was twice as common among patients with a psychiatric diagnosis in comparison to patients with no diagnosis. Examining all-cause mortality for the entire study follow-up, we found an age-adjusted mortality rate of 28.6 deaths per 1000 person-years for patients with a psychiatric diagnosis versus 17.5 deaths for those without a psychiatric diagnosis. To examine the joint effects of psychiatric diagnosis, psychiatric treatment visits, SU diagnosis, and SU treatment on mortality, relative hazards were estimated using Cox proportional hazards regression. As mentioned in the Statistical methods, the effects of psychiatric diagnosis/treatment and SU diagnosis/treatment were not additive, with statistically significant interactions between these covariates.
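To illustrate the linear-combination step described in the methods, a joint hazard ratio and its confidence interval can be assembled from a saturated model's coefficient estimates roughly as follows; all coefficient and covariance values here are hypothetical, and this is a sketch of the general technique rather than the study's model:

```python
# Forming a joint hazard ratio for "dual diagnosis, no treatment" vs.
# "neither diagnosis" from a Cox model with an interaction term.
import numpy as np

beta = np.array([0.45, 0.60, 0.35])  # psych dx, SU dx, psych x SU interaction
cov = np.diag([0.02, 0.02, 0.03])    # covariance of the estimates (made up)

c = np.array([1.0, 1.0, 1.0])        # contrast: sum of all three coefficients
log_hr = c @ beta
se = np.sqrt(c @ cov @ c)

hr = np.exp(log_hr)
lower, upper = np.exp(log_hr - 1.96 * se), np.exp(log_hr + 1.96 * se)
print(f"HR = {hr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```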

RHs and 95% confidence intervals estimated from the unadjusted and adjusted models are presented in Table 3. Categories of diagnosis and treatment are ordered from lowest to highest RH in the unadjusted model 1. In comparison to patients with neither a psychiatric diagnosis nor a SU diagnosis, the highest risk of dying was found among patients with dual diagnoses who had no psychiatric treatment visits and no SU treatment. This effect was somewhat attenuated after adjustment for potential confounders but remained statistically significant. Similar results were observed for patients who had a psychiatric diagnosis but no psychiatric services and no SU diagnosis, with parameter estimates very similar to those in model 2.

During 12 years of follow-up, we observed a higher mortality risk for HIV-infected patients diagnosed with both psychiatric and SU disorders in comparison to patients with neither diagnosis. However, we observed that psychiatric and SU treatment, in general, reduced mortality risk in singly and dually diagnosed patients, and this effect remained statistically significant even after adjustment for age, race, immune status, HIV viral load, antiretroviral therapy use, and other potential confounders. Accessing psychiatric treatment reduced mortality risk among dually diagnosed patients whether or not they were treated for an SU disorder. Previous studies of individuals with HIV infection have found that those with psychiatric disorders are at elevated risk for poor medication adherence and clinical outcomes. There is substantial evidence that depression, stressful life events, and trauma affect HIV disease progression and mortality. This effect has been found even controlling for medication adherence, in a study that showed that HAART-adherent patients with depressive symptoms were 5.90 times more likely to die than adherent patients with no depressive symptoms. Depressive symptoms independently predicted mortality among women with HIV,18 and also in a separate study of men.17 Similarly, in multivariate analyses controlling for clinical characteristics and treatment, women with chronic depressive symptoms were two times more likely to die than women with limited or no depressive symptoms. Among women with CD4 cell counts of less than 200 × 10⁶/L, HIV-related mortality rates were 54% for those with chronic depressive symptoms and 48% for those with intermittent depressive symptoms, compared with 21% for those with limited or no depressive symptoms. Chronic depressive symptoms were also associated with significantly greater decline in CD4 cell counts after controlling for other variables. These mechanisms could help to explain the greater risk of mortality observed in our sample. Our findings strongly highlight the importance of access to psychiatric and SU disorder treatment for this population. It was estimated that during a 6-month period, 61.4% of 231,400 adults in the United States receiving treatment for HIV/AIDS used psychiatric or SU disorder treatment services. A significant number of HIV-infected patients report accessing psychiatric services. Such visits are associated with decreased risk of discontinuing HAART. Burnam et al. found that those with less severe HIV-related illness were less likely to access psychiatric or SU disorder treatment.
One study found that engagement in SU disorder treatment was not associated with a decrease in hospital use by HIV-infected individuals with a history of alcohol problems. Improvement in depression was associated with an increase in HAART adherence among injection drug users. A limitation of our study may have been differences in the timing of the psychiatric and/or SU diagnosis. Some patients in our sample may have received their psychiatric diagnosis shortly after the onset of symptoms or in the initial phase of substance dependence or abuse, while other patients may have been diagnosed at a more advanced stage. Some patients may have met the criteria for a psychiatric or SU diagnosis without receiving one. In addition, some study subjects may have received psychiatric care, informal SU disorder services, or self-pay services outside of the KPNC health plan, and our study does not have information about those services. We also could not control for the level of comorbidity for other diseases and conditions at baseline, because many patients had insufficient health plan membership time prior to study entry. This study examined mortality among HIV-infected patients with private health insurance who received medical care in an integrated health plan, who had full access to psychiatric and SU disorder services, and who had received diagnoses of psychiatric disorder and substance dependence or abuse from a clinician.


The estimates varied by state while trends for each beverage type were consistent across states

This increase was driven by the brand Steel Reserve, with an ABV of 8.1%, which became the top-selling brand in the malt beer category from 2006 onwards. Between 2005 and 2006, the market share of malt beer also increased by about 29%, although it still comprised less than 3% of the market in 2006. The decline in the national mean %ABV of beer between 2006 and 2010 was explained by the continued decline in the market share of premium beer, which lost 20% of its market share over this period. The marked increase in the national mean %ABV of beer from 2010 to 2016 was driven by the increases in mean %ABV and market shares of flavored malt beverages (FMBs) and of craft beer. The mean %ABV of FMBs increased from 5.9% to 6.5%, and that of craft beer from 4.9% to 5.3%, between 2011 and 2016. Over the same period, FMBs increased their market share by approximately 56%, while craft beer increased by approximately 85%. It is also important to note that light beer, which had a stable %ABV over time of about 4.3%, showed a steady decline in market share from a high of 52.9% in 2010 to 44.5% in 2016. The increase in the mean %ABV of wine between 2007 and 2010 was driven by increases in the sales-weighted mean %ABV of table wine. Table wines increased from 11.7% ABV in 2007 to 12.4% ABV in 2010, when the %ABV peaked, and changed little thereafter. Table wines comprised the vast majority of wine sales nationally, with a market share consistently around 90%. This market share changed little over the entire 2003-2016 period, from 90.2% to 90.7%, and was highest in 2010 at 91.8%. The slight decline in the mean %ABV of wine between 2010 and 2011 was attributable to the decline in the mean %ABV of dessert and fortified wine from 15.0% to 14.1%, which also lost approximately 16% of its market share between 2010 and 2011, although it comprised only 3% of the market in 2010. Spirits showed a steady increase in mean %ABV over the 2003 to 2016 period, reflecting a gradual increase in the market shares of higher-%ABV spirits and a gradual decrease in those of lower-%ABV spirits.
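For reference, the sales-weighted mean %ABV underlying these trends is simply each sub-type's %ABV weighted by its share of beverage-type sales. A minimal sketch follows; only the light beer, craft, and FMB figures echo numbers quoted above, while the "premium" and "other" shares and %ABVs are placeholders so that the shares sum to one:

```python
beer_2016 = {            # sub-type: (market share, mean %ABV)
    "light":   (0.445, 4.3),
    "craft":   (0.120, 5.3),
    "FMB":     (0.050, 6.5),
    "premium": (0.200, 5.0),   # placeholder values
    "other":   (0.185, 5.0),   # placeholder values
}

# Weighted mean; shares sum to 1.0, so no normalization is needed.
weighted_abv = sum(share * abv for share, abv in beer_2016.values())
print(f"sales-weighted mean %ABV = {weighted_abv:.2f}")
```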

Vodka, with a mean 40% ABV throughout the study period, showed the largest rise in market share, from 26.2% in 2003 to 33.6% in 2016. Similarly, tequila, also with a mean 40% ABV, increased its market share from 4.8% to 7.2%. Straight whiskey also increased its market share, from 8.4% to 9.5% between 2003 and 2016, and had a slight increase in mean %ABV from 41.1% to 41.9%. There were limited changes in the mean %ABV of spirits sub-types, with the exception of cordials & liqueurs and prepared cocktails: cordials & liqueurs showed an increase in mean %ABV from 23.7% to 28.4%, and prepared cocktails from 9.7% to 11.9%.

National beverage-specific and total per capita alcohol consumption estimates. The new national variant-%ABV-based PCC estimates for beer, wine, and spirits, and for total consumption, with comparisons to AEDS estimates, are presented in Figure 2. Overall, our new estimates showed that consumption of pure alcohol from beer was somewhat higher in every year, and that consumption of alcohol from wine, from spirits, and in total was lower in every year, compared to AEDS estimates. Our PCC estimates from beer decreased from 4.8 to 4.4 liters per capita between 2003 and 2016 and showed a trend over time similar to the AEDS estimates. However, the percent difference between the AEDS and our estimates increased between 2011 and 2016 from 3.2% to 5.1%, showing that the trends diverge slightly. Our PCC estimates from wine increased from 1.2 to 1.6 liters per capita between 2003 and 2016. The trend is similar to the AEDS estimates, although there is a notable convergence between our estimates and the AEDS estimates, with the percent difference decreasing from 9.7% in 2003 to 5.0% in 2016. Our estimates of PCC from spirits increased between 2003 and 2016 from 2.31 to 2.98 liters per capita and followed a very similar trend to the AEDS estimates, remaining mostly parallel over the study period. A slight convergence was observed, as the percent difference between our estimate and the AEDS estimate was 10.3% in 2003, 7.9% in 2015, and 6.8% in 2016. Our total PCC estimates followed a pattern over the 2003 to 2016 period similar to that of the AEDS estimates. However, there are important differences. Overall, our total PCC estimates were lower than the AEDS estimates.

Further, the trend for our estimates converged with the AEDS trend: the difference between the two declined from 0.24 liters of alcohol per person in 2003 to just 0.08 liters in 2016. Importantly, the percent change between 2003 and 2016 was 5.8% for the AEDS estimates compared with 7.9% for our estimates. The 7.9% change represents 0.66 liters, a mean of approximately 37 drinks per person per year; in contrast, the 5.8% change represents 0.48 liters, a mean of approximately 27 drinks per person per year (see the conversion sketch at the end of this passage).

State %ABV estimates for beer, wine, and spirits. The estimates of the mean %ABV of beer, wine, and spirits for each state and the District of Columbia for selected years are presented in Table 3. The mean %ABV of each beverage type varied by state in each year, reflecting the variation in preferences and mean %ABV for each beverage sub-type across states and time. All states and the District of Columbia showed an increase in the mean %ABV of beer between 2003 and 2016, and most states followed the national trend. The states with the least change over the 2003-2016 period were North Dakota, Virginia, and Iowa, with percent increases of 1.2%, 1.1%, and 0.9%, respectively, while New Mexico, Montana, and Maine experienced the greatest percent increases of 4.9%, 4.4%, and 4.3%, respectively. For wine, all states showed an increase in mean %ABV and followed the national trend. The states with the greatest increases between 2003 and 2016 were Idaho, Virginia, and Tennessee, with increases of 6.8%, 6.8%, and 6.7%, respectively; the states with the lowest percent change were Illinois, North Carolina, and Mississippi, with increases of 3.1%, 3.0%, and 2.9%, respectively. For spirits, 45 states and the District of Columbia showed increases in mean %ABV, and of these the vast majority followed the national trend. Ohio, Rhode Island, and Nebraska had the largest percent increases at 10.5%, 7.9%, and 6.6%, respectively, while West Virginia, Mississippi, and Alabama had the largest decreases in the %ABV of spirits, at 0.4%, 0.5%, and 1.8%, respectively.

State mean %ABVs and market shares for beverage sub-types. The change in the mean %ABV of beer, wine, and spirits was driven by changes in beverage sub-type mean %ABVs and preferences, and these %ABVs and preferences varied by state. To describe these state-level beverage sub-type %ABV and preference changes in relation to state-level changes in mean beverage-specific %ABV, we present data for the states with the largest change in mean %ABV for each beverage type.
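As a quick check on the drinks-per-year figures quoted above for the national totals: a US standard drink is defined as about 0.6 fl oz (roughly 17.74 mL) of pure ethanol, so the conversion is a single division.

```python
# Converting a change in per-capita pure alcohol (liters/year) into US
# standard drinks. One standard drink = 0.6 fl oz ~ 0.01774 L pure ethanol.
STANDARD_DRINK_L = 0.01774

for change_l in (0.66, 0.48):
    print(f"{change_l} L/person/year ~ {change_l / STANDARD_DRINK_L:.0f} drinks/person/year")
# -> 0.66 L ~ 37 drinks; 0.48 L ~ 27 drinks, matching the figures in the text.
```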

The increase in the %ABV of beer in New Mexico, which had the largest percent increase of 4.9%, is attributable to a decline in the market shares of beer sub-types with relatively low mean %ABV and an increase in those with relatively higher mean %ABV. Between 2006 and 2016 the market share of light beer declined from 51.5% to 37.6%. The market share of the super premium, micro/specialty, and FMBs sub-type category increased from 6.8% in 2006 to 11.9% in 2010, and between 2011 and 2016 the market share of craft beer increased from 8.2% to 14.9%. Similar to the national trends in the mean %ABV of wine, state-level trends were driven by the increase in the mean %ABV and the market shares of table wine. Idaho, which had the largest percent change in the mean %ABV of wine at 6.8%, had the largest market share of table wine for most years between 2003 and 2016; its table wine market share was 97.3% in 2003 and 97.4% in 2016. Comparable to national trends in the mean %ABV of spirits, state-level trends were driven by declines in the market shares of low-%ABV spirit sub-types and increases in those of high-%ABV sub-types. Between 2003 and 2016, Ohio had the largest increase in mean spirits %ABV, at 10.5%. Unlike the national trend, it showed a marked increase between 2012 and 2014, after which it leveled off. The increase in %ABV between 2012 and 2014 was driven by a decline in the market share of prepared cocktails from 9.3% in 2012 to 0.2% in 2014 and a concomitant increase in the market shares of cordials & liqueurs, straight whiskey, tequila, and brandy & cognac.

State beverage-specific and total per capita alcohol consumption estimates. The new beverage-specific variant-%ABV PCC estimates for selected years for each state are presented in Table 3. The total PCC estimates for each state, with comparisons to AEDS estimates for 2003 and 2016, are presented in Table 4. The estimates varied by state in each year, representing the range in total PCC across states. Table 4 also shows the percent change in total PCC for each state for both our new estimates and the AEDS estimates. The ranking by percent change differs between the two sets of estimates: North Dakota has the largest percent change in total PCC according to both, but the new estimates rank Vermont second followed by Idaho, while the AEDS estimates rank Idaho second followed by Vermont. The vast majority of states showed an increase in total PCC, although two more states (Nebraska and Illinois) showed a decline according to the AEDS estimates than according to our new estimates. For all beverage types, our mean %ABV estimates increased nationally and in all but five states. These increases were driven by increases in national and state preferences for beverages with a higher and increasing %ABV and decreases in preferences for lower-%ABV beverages. The estimates of PCC from wine and spirits utilizing variable %ABV conversion factors were lower than the AEDS estimates, while consumption from beer was higher. While our total PCC estimates were also lower than the AEDS estimates, the trends in PCC showed a more dramatic increase in pure alcohol volume than those using ABV-invariant methods. Researchers have used PCC estimates to try to understand the observed increases in alcohol-related morbidity and mortality in the U.S. over the first part of the 21st century. For example, White et al. noted an increase of 1.7% in PCC and concluded that it did not appear to be related to the 47% increase in the rate of alcohol-related ED visits from 2006 to 2014.
Using our ABV-variant method, PCC between 2006 and 2014 increased by 3.6%, over double the increase obtained with the ABV-invariant method. This difference, and the absolute increase under the ABV-variant method, may not alone explain the increase in the rate of alcohol-related ED visits. However, because the change in PCC was likely underestimated, it suggests that PCC should not be dismissed and may be one of many factors driving the increase in alcohol-related emergency room visits. This example also highlights the importance of the rate of change in PCC trends, and is consistent with findings from an Australian study that similarly showed the value of including time-varying ABV values to ensure precision in PCC estimates so that change over time can be measured accurately. It is important to note that cohort and lag effects may also drive the disparity between changes in alcohol-related morbidity and mortality and changes in PCC. Cohort effects may be relevant in that previous generations may have been drinking at high levels that resulted in death from alcohol-related diseases, so that their alcohol consumption would not be included in current PCC estimates. Lag effects may contribute because the time from a change in PCC to the first effect on some alcohol-attributable diseases, such as alcohol-related cancers, is at least 10 years.


The absence of such evidence-based policies is an important driver of harm

However, e-cigarettes' harm quotient should stay low, provided they are properly regulated in terms of their components, including nicotine. Social influences and attitudes drive harm through stigma, social exclusion, and social marginalization; these are often side-effects of drug policies, which can bring more harm than drug use itself. Policies that reduce exposure to drugs are essentially those that limit availability by increasing price and reducing physical availability. Limits to availability bring a range of co-benefits, to educational achievement and productivity for example, but they can also bring adverse effects, such as the well-documented violence, corruption, and loss of public income associated with some existing 'illegal' drug policies. The individual choices and behaviour that drive harm from drug use are determined by the environment in which those choices and behaviours operate. Banning commercial communications, increasing price, and reducing availability are all incentives that shape individual behaviour. Research and development can be promoted to reduce the potency of existing drugs and their drug delivery packages. Unfortunately, there remain enormous gaps between the supply of and demand for evidence-based prevention, advice, and treatment programmes. Called for by United Nations Sustainable Development Goal 3.5, their supply can bring many co-benefits to society, including reduced social costs and increased productivity. The harm driven by these gaps is due in large part to insufficient resources and insufficient implementation of effective evidence-based prevention and treatment programmes; currently these programmes represent less than 1% of all costs incurred to society by drugs.

Similar to the medicines agencies that assess and approve drugs, prevention agencies could be created. Compounding the gap between supply and demand is the fact that considerable marginalization and stigmatization often happen on the path to treatment, and this is then further exacerbated by the treatment itself. The use of pharmacotherapy as an adjunct may be further limited by ideological stances, poorly implemented guidelines, lack of appropriate medication, and even a perceived lack of effect when the drug is available. The private sector is a core driver of harm through commercial communications, which include all actions undertaken by producers of drugs to persuade consumers to buy and consume more. There are international models encouraging better control of commercial communications in the public health interest, the most notable being the Framework Convention on Tobacco Control. In addition to commercial communications, the private sector drives harm by shaping drug policies, leading to more drug-related deaths. Governance structures thus need the capability and expertise to supervise industry moves to shape drug-related legislation and regulations, including regulating and restricting political lobbying. One difficulty here is that politically driven change in difficult areas, such as drug policy, is highly dependent on collective decisions and influenced by what has been termed specular interaction, in which a politician's actions may be determined less by their own convictions and more by their evaluation of the beliefs of their rivals and friends. The health footprint is the accountability system for who and what causes drug-related harm. Jurisdictional entities can be ranked according to their overall health footprint in order to identify the countries that contribute most to drug-attributable ill-health and premature death, and where the most health gain could be achieved at country level. Jurisdictional footprints could include 'policy-attributable health footprints', which estimate the difference between the health footprint under current policy and under ideal health policy. This would address the question: 'What would be the improvement in the health footprint, compared to present policies, were the country to implement strengthened or new policies?' Conversely, the health footprint can provide accountability when such evidence-based policy is not implemented correctly.

A range of sectors are involved in nicotine- and alcohol-related risk factors. These include producer and retail organizations, such as large supermarket chains, and service-provider companies, such as the advertising and marketing industries. There is considerable overlap between sectors, and estimates will need to determine appropriate boundaries for health footprint calculations. Companies could report their health footprints and choose to commit to reducing them by a specified amount over a five- to ten-year time frame. Direct examples of producer action could include switching from higher to lower alcohol concentration products and switching from smoked tobacco cigarettes to e-cigarettes.

Cannabinoid receptors, the molecular targets of Δ9-tetrahydrocannabinol, the active principle of cannabis, are activated by a small family of naturally occurring lipids that include anandamide and 2-arachidonylglycerol (2-AG). As in the case of other lipid mediators, these endogenous cannabis-like compounds may be released from cells on demand by stimulus-dependent cleavage of membrane phospholipid precursors. After release, anandamide and 2-AG may be eliminated by a two-step mechanism consisting of carrier-mediated transport into cells followed by enzymatic hydrolysis. Because of this rapid deactivation process, the endocannabinoids may act primarily near their sites of synthesis by binding to and activating cannabinoid receptors on the surface of neighboring cells. The development of methods for endocannabinoid analysis and the availability of selective pharmacological probes for cannabinoid receptors have allowed exploration of the physiopathological functions served by the endocannabinoid system. Although still at their beginnings, these studies indicate that the endocannabinoids may contribute significantly to the regulation of pain processing, motor activity, blood pressure, and tumor cell growth. Furthermore, these investigations point to the endocannabinoid system—with its network of endogenous ligands, receptors, and inactivating mechanisms—as a potentially important arena for drug discovery. In this context, emphasis has been placed especially on the possible roles that CB1 and CB2 receptors may play as drug targets.

Here, we focus our attention on another facet of endocannabinoid pharmacology: the mechanisms by which anandamide and 2-AG are deactivated. We summarize current knowledge of how these mechanisms may function, describe pharmacological agents that interfere with their actions, and highlight the potential applications of these agents to medicine.

Extracellular anandamide is rapidly recaptured by neuronal and non-neuronal cells through a mechanism that meets four key criteria of carrier-mediated transport: fast rate, temperature dependence, saturability, and substrate selectivity. Importantly, and in contrast with transport systems for classical neurotransmitters, [3H]anandamide reuptake is neither dependent on external Na+ ions nor affected by metabolic inhibitors, suggesting that it may be mediated by a process of carrier-facilitated diffusion. How selective is anandamide reuptake? Cis-inhibition studies in a human astrocytoma cell line have shown that [3H]anandamide accumulation is not affected by a variety of amino acid transmitters or biogenic amines. Furthermore, [3H]anandamide reuptake is not prevented by fatty acids, neutral lipids, saturated fatty acyl ethanolamides, prostaglandins, leukotrienes, hydroxyeicosatetraenoic acids, or epoxyeicosatetraenoic acids. Even further, [3H]anandamide accumulation is insensitive to substrates or inhibitors of fatty acid transport, organic anion transport, and P-glycoproteins. By contrast, in the same cells, [3H]anandamide reuptake is competitively blocked by either of the two endogenous cannabinoids, anandamide or 2-AG. Similar selectivity profiles are observed in primary cultures of rat cortical neurons or astrocytes and in rat brain slices. The fact that both anandamide and 2-AG prevent [3H]anandamide transport in cis-inhibition studies suggests that the two compounds compete for the same transport system. This possibility is further supported by three observations: 1) anandamide and 2-AG can mutually displace each other's transport; 2) [3H]anandamide and [3H]2-AG are accumulated with similar kinetic properties; and 3) the transport of both compounds is prevented by the endocannabinoid transport inhibitor N-arachidonylamide. Together, these findings indicate that anandamide and 2-AG may be internalized via a common carrier-mediated process, which displays a substantial degree of substrate and inhibitor selectivity. The molecular structure of this hypothetical transporter remains, however, unknown.

The structures of anandamide and 2-AG contain three potential pharmacophores: 1) the hydrophobic carbon chain; 2) the carboxamido/carboxyester group; and 3) the polar head group. Systematic modifications of the carbon chain suggest that the structural requisites for substrate recognition by the putative endocannabinoid transporter may be different from those for substrate translocation. Substrate recognition appears to require the presence of at least one cis double bond in the middle of the fatty acid chain, indicating a preference for substrates whose hydrophobic tail can adopt an extended U-shaped conformation. By contrast, a minimum of four cis nonconjugated double bonds may be required for translocation, suggesting that substrates need to adopt a closed "hairpin" conformation to be transported across the membrane. In agreement with this hypothesis, molecular modeling studies show that transport substrates have both extended and hairpin low-energy conformers.
By contrast, extended, but not hairpin, conformations may be thermodynamically favored in pseudosubstrates such as oleylethanolamide, which displace [3H]anandamide from transport without themselves being internalized.
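Saturable, carrier-mediated uptake of the kind described above is conventionally characterized by fitting Michaelis-Menten kinetics to uptake rates measured across substrate concentrations. The sketch below shows that standard fit on synthetic data; the Vmax and Km values are illustrative, not measurements from the studies cited here.

```python
# Sketch: characterizing saturable, carrier-mediated uptake by fitting
# Michaelis-Menten kinetics. All data are synthetic; Vmax and Km are
# illustrative values, not results from the cited experiments.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Uptake rate at substrate concentration s (uM)."""
    return vmax * s / (km + s)

substrate = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])  # uM
rng = np.random.default_rng(0)
# Synthetic uptake rates generated from Vmax = 10, Km = 1, plus 5% noise.
uptake = michaelis_menten(substrate, 10.0, 1.0) * (1 + 0.05 * rng.standard_normal(substrate.size))

(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, substrate, uptake, p0=(5.0, 0.5))
print(f"Vmax ~ {vmax_fit:.1f}, Km ~ {km_fit:.2f} uM")
# A good fit with a finite Km is the kinetic signature of saturable transport;
# simple diffusion would instead give a rate rising linearly with s.
```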

The impact that modifications of the polar head group exert on endocannabinoid transport has also been investigated. The available data suggest that ligand recognition may be favored 1) by a head group of defined stereochemical configuration containing a hydroxyl moiety at its distal end, and 2) by replacing the ethanolamine group with a 4-hydroxyphenyl or 2-hydroxyphenyl moiety. The latter modification leads to compounds, such as AM404, that are competitive transport inhibitors of reasonable potency and efficacy.

Anatomical studies of endocannabinoid transport are greatly limited by the lack of transporter-specific markers. Nevertheless, biochemical experiments have documented the existence of [3H]anandamide uptake in primary cultures of rat cortical neurons and astrocytes, rat cerebellar granule cells, human neuroblastoma cells, and human astrocytoma cells. The CNS distribution of endocannabinoid transport was investigated by exposing metabolically active rat brain slices to [14C]anandamide and analyzing the distribution of radioactivity in the tissue by autoradiography. A receptor antagonist was included in the incubations to prevent the binding of [14C]anandamide to CB1 receptors, which are very numerous in certain brain regions, and AM404 was used to differentiate transport-mediated [14C]anandamide reuptake from nonspecific binding. Substantial levels of AM404-sensitive [14C]anandamide reuptake were observed in the somatosensory, motor, and limbic areas of the cortex and in the striatum. Additional brain regions showing detectable [14C]anandamide accumulation included the hippocampus, thalamus, septum, substantia nigra, amygdala, and hypothalamus. Thus, endocannabinoid transport may be present in discrete regions of the rat brain that also express CB1 receptors.

The endocannabinoid system is not confined to the brain, and it is reasonable to anticipate that mechanisms of endocannabinoid inactivation may also exist in peripheral tissues. In keeping with this expectation, carrier-mediated [3H]anandamide transport has been demonstrated in J774 macrophages, RBL-2H3 cells, and human endothelial cells. Although the kinetic and pharmacological properties of endocannabinoid uptake in peripheral cells appear generally similar to those reported in the CNS, some important differences have been observed. For example, in contrast to neurons, [3H]anandamide uptake in RBL-2H3 cells is inhibited by arachidonic acid. Such disparities might reflect the existence in non-neural tissues of mechanisms of endocannabinoid internalization that are distinct from those found in the CNS.

A variety of compounds have been tested for their ability to interfere with [3H]anandamide internalization. It should be noted, however, that AM404 is readily transported inside cells, where it can reach concentrations that may be sufficient to inhibit anandamide hydrolysis. To what extent this effect contributes to the ability of AM404 to prolong anandamide's life span is at present unclear. The selectivity of AM404 for endocannabinoid transport has been the object of investigation. An initial screening found that AM404 has no affinity for a panel of 36 different pharmacological targets, including G protein-coupled receptors and ligand-gated ion channels. However, additional studies revealed that AM404 activates capsaicin receptor channels at concentrations similar to those necessary to inhibit endocannabinoid transport.
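To make the "competitive transport inhibitor" label concrete: in the standard textbook scheme, a competitive inhibitor raises the apparent Km of uptake without changing Vmax, so inhibition is surmountable at high substrate concentrations. A minimal sketch under that model; the Km, Vmax, and Ki values are illustrative assumptions, not measured parameters for AM404.

```python
# Sketch: competitive inhibition of carrier-mediated uptake. A competitive
# inhibitor raises the apparent Km without changing Vmax. All kinetic
# constants below are illustrative assumptions, not measured values.

def uptake_rate(s, i, vmax=10.0, km=1.0, ki=1.0):
    """Uptake rate (arbitrary units) at substrate conc s and inhibitor conc i (uM)."""
    apparent_km = km * (1.0 + i / ki)
    return vmax * s / (apparent_km + s)

for inhibitor in (0.0, 1.0, 5.0):
    rates = [round(uptake_rate(s, inhibitor), 2) for s in (0.5, 1.0, 5.0, 50.0)]
    print(f"[I] = {inhibitor} uM -> rates at 0.5/1/5/50 uM substrate: {rates}")
# At saturating substrate (50 uM) the rates converge toward Vmax, the
# hallmark of competitive (surmountable) inhibition.
```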
The fact that AM404 can produce undesired effects underscores the need for appropriate controls in the design of in vivo experiments with this compound. In particular, the effects of a cannabinoid receptor antagonist should be routinely tested to verify that endogenously produced anandamide and 2-AG are involved in the response to AM404.

AM404 does not display a typical cannabimimetic profile when administered in vivo, consistent with its poor affinity for cannabinoid receptors. For example, AM404 has no antinociceptive effect in mice or rats and causes no hypotension in guinea pigs.
