Pain is a subjective sensory experience that cannot be directly measured or quantified

Additionally, the current study goes beyond the scope of a standard clinical trial: in addition to assessing the effectiveness of the drug in AUD treatment, this study examines biological mechanisms underpinning the treatment, including neural activation to alcohol cues and stress, proinflammatory plasma biomarkers, and biological measures of stress response. Collection of these biological data for analyses of the effects of IBUD represents an innovative aspect of the study, which seeks to elucidate mechanisms underlying the treatment effects without compromising the clinical trial design required to address the primary aims. The ability of these measures to provide mechanistic insight into AUD pharmacotherapies is especially useful for compounds like IBUD, for which the mechanism of action is currently unknown. Some limitations are to be acknowledged. Standard face-to-face behavioral support is not offered; instead, the computer-based Take Control program is used. A recent study comparing computer-delivered Take Control to therapist-delivered platforms (TDP) found comparable drinking outcomes and higher medication adherence in the Take Control arms, suggesting that Take Control is a comparable and cost-efficient alternative to TDP in clinical trials; nevertheless, we acknowledge that face-to-face counseling remains the standard of care for AUD. Furthermore, abstinence is not the primary endpoint for the trial. However, based on our previous findings that IBUD improved phasic mood during stress- and alcohol-cue exposures and reduced tonic levels of alcohol craving, we decided that percentage of heavy drinking days (PHDD) was a more appropriate primary outcome. The successful completion of the current study will further the development of IBUD, a promising novel compound with strong preclinical and safety data for AUD. In the case of encouraging results, i.e., if IBUD proves superior to placebo in this study, the stage will be set for a confirmatory multi-site trial leading to FDA approval of a novel AUD treatment.
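For readers unfamiliar with the endpoint, the sketch below illustrates how PHDD is conventionally computed from daily drink counts. The 5+/4+ drinks-per-day thresholds are the standard NIAAA definition of a heavy drinking day and are an assumption here, not a detail taken from the trial protocol.

```python
# Illustrative computation of percentage of heavy drinking days (PHDD)
# from a list of daily drink counts (e.g., Timeline Followback data).
# The 5+/4+ thresholds reflect the conventional NIAAA definition of a
# heavy drinking day; they are assumed, not quoted from the trial.
def phdd(daily_drinks: list[int], female: bool) -> float:
    threshold = 4 if female else 5
    heavy_days = sum(1 for d in daily_drinks if d >= threshold)
    return 100.0 * heavy_days / len(daily_drinks)

# 3 of 7 days meet the male threshold -> roughly 42.9%
print(phdd([0, 2, 6, 5, 0, 1, 7], female=False))
```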

Pain is a protective evolutionary function that involves “unpleasant sensory and emotional experiences associated with, or resembling that associated with, actual or potential tissue damage”. Acute pain is an adaptive and essential survival behavior. Chronic pain is a pathological condition that poses a significant clinical, economic, and social burden. Chronic pain is the most common clinical complaint in the United States, affecting ∼10–20% of the U.S. population with an estimated annual cost of $600 billion, the largest economic cost of any disease to date. Neuropathic pain is defined as pain that is “initiated or caused by a primary lesion or dysfunction in the nervous system”. Neuropathic pain can be divided into pain of peripheral or central origin, and can be further divided into acute or chronic pain, the latter defined as pain lasting for longer than 3 months. Although pain is subjective and influenced by many physiological and psychological factors, measuring biomarkers of neuropathic pain provides an opportunity to identify objective markers of peripheral nerve damage and other pathology contributing to neuropathic pain. If used in combination, biomarkers related to pain mechanisms offer the possibility of developing objective pain-related indicators that may improve diagnosis, treatment, and understanding of pain pathophysiology. The pursuit of pain biomarkers has followed two largely separate directions: physiological markers and brain neuroimaging. Physiological pain biomarker research has followed multiple lines of investigation, including genetic, vesicular micro-RNA, metabolic/molecular, and stress markers. Neuroimaging biomarker research in neuropathic pain was initially motivated by studies of brain areas that are activated by painful stimuli and that vary with pain severity. However, brain activity that occurs in response to pain can also be observed in the absence of pain, which has led to conflicting evidence regarding brain activity related to pain. Thus, some researchers are developing biomarkers based on the mechanisms underlying pain and pain perception, as well as biomarkers that may predict response to medication and pain treatments, allowing for personalized treatment prediction.

Toward the goal of identifying composite biomarkers for investigating neuropathic pain mechanisms and improving diagnosis and treatment response, we present a review of non-imaging and imaging pain biomarkers related to various neuropathic pain mechanisms, including opioid, inflammatory, and endocannabinoid mechanisms. We discuss mechanisms of neuropathic pain in general, but we focus on pain biomarkers for different types of peripheral neuropathies. Although various reviews of pain biomarkers exist, we focus on creating composite biomarkers through machine learning approaches that can most accurately identify people with neuropathic pain (see the illustrative sketch at the end of this passage).

Endogenous opioids are necessary for the expression of pain relief and pain-induced aversion. Blocking opioidergic transmission reduces the dopamine release in the nucleus accumbens that accompanies pain relief. All opioid receptor types mediate analgesia but have differing side effects, mostly due to their variable regional expression and functional activity in different parts of central and peripheral organ systems. Endogenous opioids are particularly concentrated in circuits involved in pain modulation. Beta-endorphin levels in the CSF, blood, and saliva have been investigated as possible pain biomarkers. Plasma beta-endorphin has been used to investigate age-related responses to experimental pain. Patients with chronic neuropathic pain due to trauma or surgery have been shown to have lower levels of beta-endorphin in the CSF. Plasma and CSF beta-endorphin have been investigated in patients with trigeminal neuralgia. Interestingly, beta-endorphin in peripheral blood was related to levels in CSF; furthermore, the levels of beta-endorphin were inversely correlated with the severity of pain symptoms. While chronic low back pain typically involves non-neuropathic pain mechanisms, it is interesting that plasma beta-endorphin levels have been shown to be a promising biomarker for chronic back pain. In other non-neuropathic pain conditions, mu opioid receptors expressed on immune B cells were found to be a biomarker for chronic pain in fibromyalgia and osteoarthritis. In that study, the percentage of mu opioid receptor-positive B cells was statistically lower in patients with moderate to severe pain than in pain-free subjects or subjects with mild pain.
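Returning to the machine-learning framing above: a composite biomarker is, in practice, a classifier that combines several individual markers (such as the beta-endorphin and cytokine measures discussed in this review) into a single score. The sketch below is illustrative only; the data are simulated and the marker names are placeholders, not values from any cited study.

```python
# Minimal sketch of a "composite biomarker": several individual markers
# combined by a classifier into a single neuropathic-pain score.
# Data are simulated; marker names are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
# Simulated stand-ins for CSF beta-endorphin, serum TNF-alpha, and CRP.
X = rng.normal(size=(n, 3))
y = (-(X[:, 0]) + X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=1.0, size=n) > 0).astype(int)

clf = LogisticRegression()
# Cross-validated AUC of the composite versus a single marker alone:
# combining markers typically improves discrimination, which is the
# rationale for composite biomarkers stated in the text.
composite_auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
single_auc = cross_val_score(clf, X[:, [1]], y, cv=5, scoring="roc_auc").mean()
print(f"composite AUC={composite_auc:.2f}, single-marker AUC={single_auc:.2f}")
```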

In a heterogeneous group of patients with pain, a composite biomarker was identified that uses emergent properties in genetics to separate patients with pain requiring extremely high opioid doses from controls. Negative studies for opiate-mechanism pain biomarkers have shown that salivary beta-endorphin is not a biomarker for neuropathic chronic pain propensity. Functional brain imaging performed on patients with non-neuropathic primary dysmenorrhea carrying the mu-opioid receptor A118G polymorphism has been used to investigate pain sensitivity and opioid-analgesic treatment related to function in the descending pain modulatory system. Specifically, the dependence of the functional connectivity of the descending pain modulatory system on the mu-opioid receptor A118G polymorphism was investigated. This study found that patient groups with different alleles of the A118G polymorphism exhibited varying functional connectivity between the anterior cingulate cortex and the periaqueductal gray. Although magnetic resonance imaging provides information regarding structural and metabolic changes that offer insight into pain perception in the CNS, magnetic resonance imaging cannot image opioid function in cells in vivo at the molecular level. Such opioid function information can be obtained through positron emission tomography and can be used to investigate opioid mechanisms of pain. While the development of neuropathic pain has long been ascribed to the known contributors of central sensitization, understanding of the role of neuroinflammation in the initiation and maintenance of neuropathic pain has evolved tremendously over the last decade. Pro-inflammatory cytokines have been implicated in the generation of neuropathic pain states at both peripheral and central nervous system sites. Neuroinflammation of the peripheral nervous system is triggered by inciting damage to the peripheral nerves, whether by trauma, metabolic disturbances, viral infection, or surgical lesions, leading to sprouting of new pain-sensitive fibers, excessive neuronal firing, and hypersensitization of primary afferent peripheral neurons. During a peripheral nerve injury, local cytokines recruit macrophages, which secrete components of the complement cascade, coagulation factors, proteases, hydrolases, interferons, and other cytokines that ultimately facilitate degradation and phagocytosis of the pathogen and injured tissue. Peripheral neuroinflammatory mechanisms affect the damaged neuron and neighboring afferent neurons sharing the same innervation territory. Peripheral nerve injury also causes neuroinflammation in the spinal cord. This neuroinflammation is triggered by hyperactivity of the injured primary afferent peripheral sensory neuron, which increases neurotransmitter and neuromodulator release, causing hyperactivity of postsynaptic nociceptive neurons as well as the release of several inflammatory activators. A result of this lumbar spinal inflammation is disruption of the blood-spinal cord barrier leading to increased permeability, which then allows infiltration of immune cells such as T lymphocytes, macrophages, mast cells, and neutrophils from the periphery into the spinal cord and dorsal root ganglion. These mechanisms contribute to further release of inflammatory mediators, which in turn contribute to alterations in post-synaptic receptors.
This neurotransmitter increase leads to hyperactivity of post-synaptic nociceptive neurons in the spinal cord and altered signaling up to the thalamus and cortex that may contribute to central sensitization and pain hypersensitivity. Nerve injury typically involves neuro-immune interaction involving glia. Glia are known to provide a functional micro-environment modulating neuronal signal transduction, synaptic pruning, and neuroplasticity that contributes to central sensitization. Cytokines have also been demonstrated to be potent mediators of pain in peripheral neuropathy. In one peripheral neuropathy study, gene expression of pro- and anti-inflammatory cytokines was shown to be increased in patients compared to controls. Another study found that a neuropathic pain group had higher serum levels of several markers, including C-reactive protein (CRP) and tumor necrosis factor-α (TNF-α), compared with two control groups.

Furthermore, patients with painful neuropathy had higher sICAM-1 and CRP levels when compared to painless neuropathy. A meta-analysis comprehensively assessed the relationship between serum TNF-α levels and diabetic peripheral neuropathy in patients with type 2 diabetes, demonstrating increased serum TNF-α levels in patients with diabetic neuropathy compared to type 2 diabetic patients without neuropathy and compared with controls. IL-17 is significantly upregulated in rat models of neuropathic pain, and mRNA expression levels of IL-1β and IL-6 are significantly enhanced in the spinal dorsal horn compared with controls. Moreover, functional recovery from neuropathic pain following a peripheral nerve injury relies on downregulation of IL-1β and TNF-α responses. Another key pro-inflammatory neuropeptide, Substance P, is known to initiate biological inflammatory effects. In painful trigeminal neuralgia, levels of Substance P and other neuropeptides in the cerebrospinal fluid and blood of patients were found to be higher than those of controls; furthermore, blood levels of these markers correlated with those of the CSF. Another study investigating non-neuropathic experimental pain found altered Substance P levels and dynamics when comparing older and younger adults. A compromised BBB can be identified with gadolinium-enhanced MRI, as is seen in the setting of white matter lesions in multiple sclerosis. CNS infiltration of circulating immune cells, such as monocyte infiltration into brain parenchyma, can be tracked with iron oxide nanoparticles and MRI. Pathological consequences of neuroinflammation such as apoptosis can be imaged with radiolabeled [99mTc] annexin V or, in the case of iron accumulation, with MRI T2* relaxometry. These techniques can be used to image human neuroinflammation and have the potential to impact patient care in the foreseeable future. Integrated positron emission tomography-magnetic resonance imaging with the radioligand 11C-PBR28 for the translocator protein can be used to image regional brain volumes with glial activation. Given the putative role of activated glia in the establishment and/or maintenance of persistent pain and in the pathophysiology and management of a variety of persistent pain conditions, results from this technique are important to consider when selecting imaging methods for measuring CNS inflammatory effects of pain.

There are three classifications of cannabinoids: phytocannabinoids, endocannabinoids, and synthetic cannabinoids. Similar to the opioid system, versions of the endocannabinoid (ECB) system have been found in the vast majority of species with a nervous system. In particular, the ECB ligands 2-AG and AEA have been found throughout the animal kingdom. The ECB system regulates physiology across most organ systems and operates both independently of and in interaction with the inflammatory system, the opiate system, the vanilloid system, and nuclear transcription factors. The ECB system works as part of a negative feedback loop that regulates neurotransmitter and neuropeptide release in the nervous system. Endocannabinoid ligands are generated on demand in response to high levels of activity and produce short-term inhibitory effects via their actions as retrograde transmitters at presynaptic inhibitory G protein-coupled receptors. The two most prevalent endocannabinoid ligands that bind endocannabinoid receptors are anandamide (AEA) and 2-arachidonoylglycerol (2-AG).


Participants will be provided a copy of both the first and second informed consent forms

As AM404 does not activate cannabinoid receptors, the effects of this drug were suggested to result from the elevation of endogenous anandamide levels. However, recent findings suggest that AM404 also directly activates the vanilloid VR1 receptor, complicating the identification of its mechanism of action on ethanol self-administration. However, the effect of AM404 was not reversed or enhanced by pre-treatment with the competitive vanilloid VR1 receptor antagonist capsazepine, indicating that the inhibitory action of AM404 is not mediated through VR1 stimulation and may derive from other targets in the endocannabinoid system. Following this rationale, we studied the involvement of the cannabinoid CB1 receptor, the natural target of anandamide. To confirm its participation, we first studied whether the cannabinoid receptor antagonist SR141716A reversed the actions of AM404. This pharmacological test was complicated by the inhibitory actions of SR141716A on ethanol self-administration, which precluded the observation of a reversal of the actions of AM404. A second strategy was to compare the actions of AM404 with those of selective cannabinoid CB1 receptor agonists belonging to three of the four main classes of cannabinoid agonists: eicosanoids, aminoalkylindoles, and classical cannabinoids. The effects of these compounds on ethanol self-administration are not similar to those of AM404. ACEA and WIN 55,212-2 reduced ethanol self-administration, although the motor-inhibition component of WIN 55,212-2 might be responsible for this effect. However, the classical cannabinoid receptor agonist HU-210 did not affect ethanol self-administration. We replicated this finding in a separate study in Marchigian Sardinian alcohol-preferring rats. These results indicate that a contribution of CB1 receptors to the actions of AM404 cannot be supported. The similar profile of actions observed after systemic administration of either cannabinoid CB1 receptor agonists or antagonists is puzzling. It has been reported that both cannabinoid CB1 receptor agonists, such as tetrahydrocannabinol, CP55,940 and WIN 55,212-2, and cannabinoid receptor antagonist/inverse agonists, such as SR141716A, suppress operant behavior.

These reports stress the pleiotropic spectrum of actions found after interference with endocannabinoid signaling. The complex roles of the endocannabinoid system in the regulation of GABA and glutamate synapses throughout the brain circuits processing the appetitive/motivational properties of ethanol might explain these findings. As an example, we have recently described that intracerebral injections of SR141716A only affect ethanol self-administration in rats when the CB1 antagonist is infused in the prefrontal cortex, but not in the hippocampus or dorsal striatum. Moreover, in that study, local blockade of fatty acid amide hydrolase, the main enzyme that degrades anandamide, enhanced ethanol self-administration when injected into the prefrontal cortex. However, we cannot exclude additional targets such as non-cloned cannabinoid-like receptors on which anandamide and WIN 55,212-2 may act. Thus, the present study stresses the need to clarify the growing complexity of endocannabinoid pharmacology, especially in the field of motivated behaviors. Although the present results exclude VR1, CB1 and CB2 receptors as the targets of the effects of AM404, we cannot exclude the contribution of endocannabinoids elevated by AM404 to the present actions, especially because the endocannabinoid system has recently been implicated in the neuroadaptations that occur during acute alcohol exposure, alcohol dependence, and abstinence. Several studies have documented that endocannabinoid transmission is acutely inhibited by ethanol and becomes hyperactive during chronic ethanol administration, as revealed by the increase in the levels of endocannabinoids and the down-regulation of CB1 receptors. Thus, it is tempting to speculate that compounds that increase endocannabinoid transmission, such as AM404, might be useful in reducing operant responses for ethanol. With the precautions derived from the non-CB1 profile of the effects of AM404, we propose that the increased levels of endogenous cannabinoids occurring during chronic ethanol administration facilitate the action of AM404: the neuroadaptations in the central nervous system associated with chronic ethanol intake lead to an increase in anandamide levels, and this event could enhance the action of AM404 acting through the increased endogenous anandamide. However, we also demonstrate that acute administration of AM404 was not able to suppress the relapse response for ethanol, i.e., the reinstatement of ethanol responding induced by the presentation of contextual cues associated with ethanol after a period of extinction. The differential response to AM404 in self-administration and relapse conditions may have a neuropharmacological basis in the recently described changes in endocannabinoid levels after chronic ethanol exposure.

A possible explanation for these differences may reside in the probable alterations induced by chronically consumed ethanol in the functionality of the receptor systems mediating the central effects of ethanol that sustain ethanol-drinking behavior in rats. These neuroadaptation processes might result in a decreased potency and efficacy of the ligands. The increased levels of anandamide observed during ethanol consumption may return to basal levels or even disappear, and thereby AM404 may not act in such a situation. This hypothesis is supported by results obtained recently by Gonzalez et al., who showed that the levels of endocannabinoids underwent significant changes in reward-related areas during relapse, showing the lowest values in this phase. The levels of both anandamide and 2-arachidonoyl-glycerol were significantly reduced when rats were allowed to relapse to alcohol consumption. Thus, the induction of compensatory mechanisms in the status of the endocannabinoid system may be determinant in the actions of AM404. The sensitivity to AM404 modulation of reward processes may also depend on the degree of motivation for the drug. The differential responses found after administration of AM404 in food and saccharin reinforcement with respect to ethanol might be explained by the different rewarding properties of natural rewards and drugs of abuse such as ethanol, the latter being more potent. In other words, AM404 may be efficacious only in those situations in which the motivation for the drug is stronger, indicating that this effect is linked to the strength of the hedonic properties of the reward rather than to a motivational state. In conclusion, our results showing that AM404 administration reduced ethanol self-administration open a new line of research for the development of therapies to reduce ethanol intake in alcoholic non-abstinent patients.

Alcohol use disorder (AUD) is a chronic and relapsing condition for which current pharmacological treatments are only modestly effective. The development of efficacious medications for AUD remains a high research priority, with recent emphasis on identifying novel molecular targets for AUD treatment and on efficiently screening new compounds aimed at those targets. To that end, modulation of neuroimmune function represents a promising novel target for AUD. Chronic alcohol consumption produces a sustained inflammatory state, such that individuals with AUD have increased neuroinflammation throughout the brain, and alcohol-induced neuroinflammation is thought to contribute to chronic alcohol-seeking behavior and to the behavioral and neurotoxic effects of alcohol. In rodents, lipopolysaccharide-induced neuroinflammation produces prolonged increases in alcohol consumption, and knocking out neuroimmune signaling genes attenuates alcohol preference and self-administration. Therefore, a medication that reduces proinflammatory signaling may produce anti-alcohol and neuroprotective effects that may be beneficial for the treatment of AUD. Ibudilast (IBUD) has been advanced as a novel addiction pharmacotherapy that targets neurotrophin signaling and neuroimmune function.

IBUD inhibits phosphodiesterases 4 and 10 (PDE4 and PDE10) and macrophage migration inhibitory factor (MIF). As PDE4 and MIF are critically involved in proinflammatory signaling, and PDE10 negatively regulates neurotrophin expression, the inhibition of these molecules by IBUD has been theorized to reduce neuroinflammation and promote neurotrophin expression. In support, IBUD enhances neurotrophin expression, reduces pro-inflammatory cytokine release, and attenuates neuronal death. In rodents, IBUD has been demonstrated to reduce ethanol intake by approximately 50% under conditions of both maintenance and relapse testing. These recent preclinical findings for IBUD support prior studies indicating that pharmacological inhibition of PDE4 and PDE10 decreases alcohol intake. To advance medications development for AUD, our laboratory has recently completed a randomized, double-blind, placebo-controlled crossover laboratory study of IBUD in non-treatment-seeking individuals with AUD. This study tested the safety, tolerability, and initial human laboratory efficacy of IBUD on measures of subjective response to alcohol, as well as cue- and stress-induced changes in craving and mood. Twenty-four individuals completed two separate 7-day intensive outpatient protocols, which included daily visits for medication administration and testing. Upon reaching a stable target dose of IBUD, participants completed a stress-exposure session, an alcohol cue-exposure session, and an IV alcohol administration session. Results indicated that IBUD was well tolerated and associated with mood improvements during stress- and alcohol-cue exposures, and with reduction in tonic levels of alcohol craving. Exploratory analyses revealed that among individuals with higher depressive symptomatology, IBUD attenuated the stimulant and positive mood-altering effects of alcohol. Given the promising preclinical and initial human laboratory findings, we will conduct a large-scale randomized clinical trial of IBUD in treatment-seeking participants with AUD. Additionally, we will collect psychosocial stress- and alcohol-cue-related neuroimaging data as part of the trial. As an exploratory aim, we will also collect proinflammatory biomarkers from participants over the course of the trial.

At the initial in-person screening visit and prior to conducting any research-related procedures, a trained member of the study team will conduct the three-part consent process, which details the procedures that take place during the screening process. First, participants will be asked to read and provide verbal consent for the breathalyzer. If the breathalyzer test is 0.000 g/dl, study staff will read and discuss the written informed consent outlining procedures for the initial screening visit with the participant. Once the participant has asked questions and has a clear understanding of the procedures, the participant will sign the consent form. If the participant is found eligible to continue to the medical screening visit, a second written consent form outlining the study purpose, procedures, potential risks, and benefits will be reviewed and signed in the presence of the study physician in a private, confidential setting.
Only physician investigators who are continuously involved in the research and qualified to answer questions regarding the nature of the subject’s participation and the alternatives to participation will obtain the second informed consent. At the initial in-person screening visit, participants will read and sign a consent form that outlines the procedures for collecting biological specimens, such as a urine sample for a toxicology screen and pregnancy test at each study visit. At the medical screening visit, participants will sign the experimental consent form that includes the electrocardiogram and blood sample for a comprehensive metabolic panel and complete blood cell count in order to evaluate overall health and determine eligibility. In addition, participants will be asked to consent to the collection of biological specimens during the study, such as a blood sample for neuroinflammation assays at every study visit, salivary cortisol samples, and brain imaging at the week 4 visit. Participant data and biological specimens used to evaluate eligibility and study compliance will be disposed of after testing. Data and specimens germane to the research study, such as de-identified neuroinflammation assays and salivary cortisol samples, will be owned by the University of California or by a third party designated by the University. Participants will be asked to indicate whether they permit part of the sample to be shared with other researchers and/or used in future studies. However, the samples for this study will be used for the specified analyses and will not be stored for secondary analysis.

The trial is placebo-controlled because there is no universal standard-of-care medication for AUD treatment. Since the standard treatment for AUD is therapist-delivered behavioral support, all subjects will be provided the computer-based, NIAAA-developed Take Control program. This decision is supported by a recent study comparing computer-delivered Take Control to therapist-delivered platforms (TDP), which found comparable drinking outcomes and higher medication adherence in the Take Control arms, suggesting that Take Control is a comparable and cost-efficient alternative to TDP in AUD clinical trials.

The study physicians will be available to participants for the entire duration of the study. The study physicians will call every participant at the end of the first week on the study medication to discuss and manage any adverse events. Study staff will also notify the study physicians of any adverse events recorded during the follow-up visits. Side effects will be collected through an open-ended question asking participants to report any adverse events they may be experiencing. If study medication adjustments are required due to safety concerns, it is up to the study physicians’ discretion to make a dose adjustment or terminate medication.


The specific receptor pathways involved in these processes are not fully elucidated

Godlewski and colleagues reported that administration of the peripherally-restricted CB1R inverse agonist, JD5037, reduced the intake of ethanol in wild-type mice; however, it was ineffective in whole-body CB1R- and GHS-R1a-null mice. Ethanol-consuming mice also had elevated levels of the eCB anandamide in stomach cells, and inhibiting peripheral CB1Rs with JD5037 blocked formation of the bio-active form of ghrelin, octanoyl-ghrelin. These results suggest that CB1Rs in stomach cells promote ethanol intake by a mechanism that includes controlling production of ghrelin. Next, a mouse gastric ghrelinoma cell line (MGN3-1), which contains CB1Rs and the enzymatic machinery for eCB metabolism, was used to identify mechanisms of CB1R-mediated ghrelin production. Inhibition of CB1Rs in MGN3-1 cells with JD5037 blocked formation of octanoyl-ghrelin by a mechanism that includes increased oxidative degradation of the ghrelin substrate, octanoyl-carnitine. Moreover, given the expression of ghrelin receptors in the brain as well as in vagal afferent neurons, the authors aimed to identify whether the actions of JD5037 to reduce ethanol intake via changes in ghrelin signaling required gastric vagal afferent neurons. Both JD5037 and the CB1R inverse agonist, rimonabant, were ineffective at reducing ethanol intake in mice subjected to chemical ablation of sensory afferents by neonatal exposure to capsaicin. Interestingly, mice denervated by capsaicin displayed moderate increases in preference for and intake of ethanol. Additionally, mice treated with JD5037 displayed no changes in ad-libitum intake of standard rodent chow under these specific conditions. Together, these studies provide evidence that CB1Rs in mouse stomach cells control intake of and preference for ethanol by a mechanism that includes regulating production of ghrelin and indirect control of gut-brain vagal signaling. Future studies will be important to clarify whether activating CB1Rs stimulates production of ghrelin by increasing conversion of octanoyl-carnitine to octanoyl-ghrelin, and whether roles for these pathways extend beyond intake of and preference for ethanol to other reinforcers, such as palatable food.

In addition, the precise impact that CB1R-mediated control of ghrelin production has on vagal neurotransmission and associated firing rates remains to be determined. Notably, ghrelin and CCK inversely affect vagal afferent neural activity, with ghrelin decreasing and CCK increasing activity. Accordingly, it is possible that eCB signaling at CB1Rs on stomach cells that produce ghrelin and on CCK-containing cells in the upper small-intestinal epithelium that inhibit CCK release results in similar reductions in vagal afferent neural activity and increases in food intake. Moreover, these pathways may coordinate vagal afferent neural activity associated with feeding status and become imbalanced in diet-induced obesity. A direct test of these hypotheses, however, remains for future investigations. In addition to indirect mechanisms, eCBs may also activate CB1Rs located on the afferent vagus nerve and directly impact gut-brain neurotransmission and food intake. Indeed, a series of studies by Burdyga and colleagues suggests that expression of CB1Rs on rat gastric vagal afferent neurons is impacted by feeding status and gut-derived hormones. Immunoreactivity and mRNA for CB1Rs were identified in the rat and human nodose ganglia, and fasting for up to 48 h in rats was associated with time-dependent increases in their expression. Refeeding after a 48 h fast led to reductions in expression of mRNA for CB1Rs in nodose ganglia by 2 h after reintroduction of food, an effect mimicked by administration of bio-active CCK-8 in fasted rats. In addition, administration of a CCK-A receptor antagonist blocked refeeding-induced reductions in expression of mRNA for CB1Rs in fasted rats, which suggests a key role for CCK in these processes. Similarly, administration of ghrelin blocked refeeding-induced reductions in expression of mRNA for CB1Rs in nodose ganglia and blocked the actions of CCK-8 administration on expression of mRNA for CB1Rs in fasted rats. These results highlight the opposing actions that gut-derived satiation and hunger signals have on expression of CB1Rs in rodent vagal afferent neurons. Several studies suggest that expression of CB1Rs in the nodose ganglia is dysregulated in rodent models of diet-induced obesity. Immunoreactivity for CB1Rs was elevated in the nodose ganglia of Zucker or Sprague-Dawley rats that were maintained on a high-fat diet for 8 weeks when compared to lean controls.

Similarly, mRNA for CB1Rs was elevated in nodose ganglia in mice fed a high-fat diet for 12 weeks. In addition, refeeding after a fast or administration of CCK-8 in Wistar rats maintained on a high-fat diet both failed to reduce levels of immunoreactivity for CB1Rs in nodose ganglia. Moreover, levels of mRNA for CB1Rs were elevated in the nucleus of the solitary tract in rats maintained on a high-fat and sugar diet. Collectively, these studies suggest that CB1R expression in rodent vagal afferent neurons is controlled by feeding status, and that their meal-related expression is dysregulated by chronic exposure to high-fat diets. Roles in food intake for CB1Rs expressed in vagal afferent neurons are unclear; however, Elmquist and colleagues reported that genetic deletion of CB1Rs in the afferent and efferent vagus nerve had no impact on food intake, body weight, or energy expenditure in mice maintained on standard rodent chow or a high-fat diet. These results suggest that CB1Rs on vagal afferent neurons may be sufficient to promote food intake but are not required in these processes. With regard to food intake, these findings are also in line with the transient nature of feeding suppression in rodents administered CB1R antagonists, which suggests that CB1Rs may not be required for the long-term maintenance of food intake. Nonetheless, a series of important studies investigated the impact of activating CB1Rs on the neurochemical phenotype of associated neurons and the function of gastric vagal afferent neurons in mice. Similar to CB1Rs, fasting was associated with time-dependent increases in expression of the melanin-concentrating hormone 1 receptor (MCH1R) in the nodose ganglia of rats, albeit at later time-points when compared to CB1Rs. In contrast, fasting was associated with time-dependent reductions in expression of neuropeptide Y receptor type 2 (Y2R). Administration of CCK-8 reversed the effects of fasting by decreasing expression of CB1Rs and MCH1Rs, and increasing expression of Y2Rs. Notably, administration of the eCB anandamide dose-dependently reversed the effects of CCK-8 on expression of CB1Rs, MCH1Rs, and Y2Rs. Moreover, administration of a CB1R inverse agonist reduced expression of CB1Rs and increased expression of Y2Rs, with no effect on expression of MCH1Rs in fasted rats. Together, these studies reveal distinct changes in the neurochemical composition of vagal afferent neurons in response to CB1R activation and inactivation, and suggest that CB1Rs may directly modulate activity of vagal afferent neurons in response to food-related signals released from the gut. Elegant studies conducted by Christie and colleagues suggest that CB1Rs control mechanosensitivity of gastric vagal afferent neurons, which may be dysregulated in diet-induced obesity. For these studies, a mouse in vitro electrophysiological preparation was utilized that consists of isolated stomach and esophagus with attached vagal fibers, allowing measurement of vagal afferent neural function and mechanosensitivity. Application of methanandamide, a stable analog of anandamide, led to a biphasic effect on activity of vagal fibers in response to stretch, with low doses reducing responses to stretch and high doses increasing responses. These effects were found only in tension-sensitive fibers, not in those innervating the gastric mucosa.
In contrast to mice maintained on standard rodent chow, mice maintained for 12 weeks on a high-fat diet were only responsive to the inhibitory effects of methanandamide on gastric vagal afferent neural activity.

To identify receptor signaling pathways mediating these effects, methanandamide was co-incubated with a CB1R inverse agonist, a transient receptor potential vanilloid-1 (TRPV1) channel antagonist, a growth hormone secretagogue receptor (GHSR) antagonist, or several inhibitors of distinct second-messenger pathways. The biphasic effects of methanandamide on mechanosensitivity were inhibited by both CB1R and TRPV1 blockade in mice maintained on standard rodent chow. Furthermore, the excitatory effects of methanandamide may occur via a CB1R-mediated PKC-TRPV1 second-messenger pathway, whereas the inhibitory effects may occur via CB1R-mediated release of ghrelin from the stomach and its actions on GHSRs on vagal afferent neurons. Together, these studies suggest that endocannabinoids differentially control afferent vagal activity depending upon dose, by mechanisms that include distinct interactions between CB1R, TRPV1, and GHSR signaling pathways, which may become dysregulated in diet-induced obesity. Future studies will be important to identify physiological roles in food intake and obesity for CB1R signaling in distinct populations of gastric vagal afferent neurons. Moreover, it will be important to delineate how CB1Rs on sensory vagal terminals in the gut, in the nodose ganglia, and at terminals in the NTS may participate in distinct or common aspects of vagal afferent neurotransmission. Fasting is associated with elevated levels of eCBs in the upper small-intestinal epithelium of rodents, and recent studies suggest that the efferent vagus nerve is required for these processes. The efferent arm of the vagus nerve communicates parasympathetic neurotransmission from the brain to peripheral organs, including the gut, via cholinergic signaling pathways, and participates in a variety of motor functions and possibly food intake. For example, c-Fos expression in the myenteric and sub-mucosal plexus in the rat proximal small intestine was induced by vagal nerve stimulation, and pharmacological blockade of peripheral muscarinic acetylcholine receptors (mAChRs) with atropine methyl nitrate inhibited both refeeding after a fast and sham feeding of liquid diets in rats. Recent investigations, however, suggest that cholinergic neurotransmission carried by the efferent vagus nerve controls biosynthesis of the eCB 2-AG in the proximal small-intestinal epithelium during a fast and participates in refeeding after a fast. For these studies, rats were fasted for up to 24 h, then levels of 2-AG and its precursor, 1-stearoyl-2-arachidonoyl-sn-glycerol (SAG), were quantified in a variety of peripheral organs by liquid chromatography/tandem mass spectrometry. Levels of 2-AG and SAG were elevated in the upper small-intestinal epithelium by 24 h after fasting; however, no changes were found in stomach, ileum, colon, liver, pancreas, or spleen. This effect was specific for 2-AG, because levels of other common monoacylglycerols in the upper small-intestinal epithelium were unaffected. Moreover, levels of 2-AG were rapidly normalized by refeeding to levels similar to those in free-feeding animals, an effect mimicked by intra-duodenal infusions of equicaloric quantities of lipid, sucrose, or protein. These results highlight that production of intestinal 2-AG in fasting rats can be rapidly reduced upon refeeding in a manner that is not dependent on macronutrient content. We next aimed to identify whether efferent cholinergic activity is required for production of 2-AG in the intestinal epithelium. Rats were given full diaphragmatic vagotomy and fasted for 24 h.
When compared to control rats receiving a sham surgery, levels of 2-AG failed to increase in the small-intestinal epithelium after a 24 h fast, which suggests that the efferent vagus may be required for production of 2-AG. We next aimed to identify the specific cholinergic receptors involved in vagal-mediated 2-AG production during a fast. The principal neurotransmitter released from the efferent vagus nerve is acetylcholine, which activates a variety of cholinergic receptor subtypes in the periphery, including mAChRs in the intestine. Activation of mAChRs in the brain, including subtype-3 mAChRs (m3 mAChRs), enhances eCB production that, in turn, participates in the control of synaptic plasticity via CB1Rs. In addition, 2-AG in the brain is formed by a mechanism that includes phospholipase-C-dependent production of SAG, then conversion of SAG to 2-AG by diacylglycerol lipase. Notably, m3 mAChRs are Gq-protein coupled receptors that share overlapping downstream pathways with those responsible for biosynthesis of 2-AG, including activation of phospholipase-C and diacylglycerol lipase. Thus, we examined whether mAChRs are required for fasting-induced production of 2-AG in the small-intestinal lining. Similar to brain, m3 mAChRs were expressed in the small-intestinal epithelium, and diacylglycerol lipase activity was required for biosynthesis of 2-AG. Systemic administration of a general mAChR antagonist or intra-duodenal administration of a selective m3 mAChR antagonist both blocked fasting-induced rises in 2-AG in the small intestine. Moreover, pharmacological inhibition of peripheral CB1Rs and of m3 mAChRs in the intestine equally reduced refeeding after a 24 h fast, with no additive effects when both inhibitors were co-administered. Collectively, these studies suggest that the efferent vagus nerve is recruited during a fast and participates in refeeding after a fast by activating m3 mAChRs in the intestine which, in turn, drives production of 2-AG and activation of local CB1Rs. In addition to interactions between efferent parasympathetic neurotransmission and the eCB system, studies also suggest that efferent sympathetic neurotransmission is controlled by CB1Rs, which may impact food intake through a mechanism that requires the afferent vagus nerve.


Standardized fidelity ratings were made on 50% of randomly selected CBT sessions

Although the prevalence of a current or lifetime diagnosis of MDD did not differ between WWH and MWH, MDD was an important risk factor for demonstrating the profile of global weaknesses with spared verbal recognition compared to the profile demonstrating only weakness in motor function. This finding aligns with our work demonstrating that MDD may have a greater impact in women compared to men. Our work indicates that HIV comorbid with depression affects certain cognitive domains, including cognitive control, and that these effects are largest in women. Specifically, WWH with elevated depressive symptoms had 5 times the odds of impairment on Stroop Trial 3, a measure of behavioral inhibition, compared to HIV-uninfected depressed women, and 3 times the odds of impairment on that test compared to depressed MWH. In a recent meta-analysis, small to moderate deficits in declarative memory and cognitive control were documented not only in individuals with current MDD but also in individuals with remitted MDD, leading to the conclusion that these deficits occur independently of episodes of low mood in individuals with “active” MDD. Together these lines of work suggest that MDD would exacerbate cognitive difficulties in PWH, particularly in the cognitive domains of declarative memory and cognitive control in WWH. Our study has limitations. Although we were adequately powered within both WWH and MWH, the magnitude of power was discrepant by sex, considering that women represented 20% of our sample. Larger-scale studies in WWH only are currently underway. The generalizability of our findings also warrants additional study, as the profiles identified here may not represent the profiles among all PWH. Due to the unavailability of data, we were unable to explore certain psychosocial factors as potential determinants of cognitive profiles. Our analyses were cross-sectional, which allows us to identify determinants associated with cognitive profiles but precludes us from determining the temporal relationships between these factors and cognitive function. Although many of the related factors may be risk factors for cognitive impairment, reverse causality is possible, with some of the factors resulting from cognitive impairment. Additionally, interpretation of the machine learning results should be done with care, as random forest (RF) is an ensemble model that is inherently non-linear in nature.

This means that the importance and predictive power of every variable is specified in the context of other variables. This can lead to situations where an important predictive variable in the RF model shows no significant difference in the overall comparison but has dramatic differences when included with other variables in the model. As such, this model should be interpreted as hypothesis-generating, identifying variables in need of further investigation. Lastly, because our study was focused on sex differences in cognitive profiles within PWH, we did not include an HIV-seronegative comparison group. Thus, we cannot determine the degree to which HIV contributes to sex differences in cognitive profiles. However, the independent HIV-related predictors do suggest that HIV plays a role. Despite these limitations, we selected RF over linear models such as lasso and ridge regression because RF models had more predictive power and higher accuracy in these data compared to the linear models, even linear models with tuning parameters, such as ridge and lasso, that can be used for feature selection. The results from these models mirror the P-values for the univariate comparisons, which is expected since analysis of variance and t-tests are also linear models. Moreover, RF models are better suited for handling missing data, the inclusion of categorical predictor variables, and the use of categorical outcome measures, which was the case in the present study. RF models also account for the complexity in the data that can arise from multicollinearity, often seen in large feature sets. In conclusion, our results also suggest that sex is a contributor to the heterogeneity in cognitive profiles among PWH and that cognitive findings from MWH or male-dominant samples cannot be wholly generalized to WWH. Whereas MWH showed an unimpaired profile and even a cognitively advantageous profile, WWH showed only impairment profiles that included global and more domain-specific impairment, which supports previous findings of greater cognitive impairment in WWH than in MWH. Although the strongest determinants of cognitive profiles were similar in MWH and WWH, including WRAT-4, HIV disease characteristics, age, and depressive symptoms, the direction of these associations sometimes differed.
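As an illustration of the modeling approach described above, the following is a minimal sketch of RF classification with permutation-based feature importance, which scores each variable in the context of the others. The data are simulated, and the predictor names (WRAT-4, age, nadir CD4, CES-D) are stand-ins for the determinants named in the text; this is not the study's analysis code.

```python
# Minimal sketch: random-forest classification of cognitive profiles with
# permutation feature importance. Data are simulated; predictor names are
# hypothetical stand-ins, not variables from the cited study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 4))  # hypothetical: WRAT-4, age, nadir CD4, CES-D
profile = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, profile, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Permutation importance evaluates each variable in the context of the others,
# which is why a variable can matter in the model yet show no univariate effect.
imp = permutation_importance(rf, X_te, y_te, n_repeats=30, random_state=0)
for name, score in zip(["WRAT-4", "age", "nadir CD4", "CES-D"], imp.importances_mean):
    print(f"{name}: {score:.3f}")
```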

This suggests that the effects of certain biological, clinical, or demographic factors on the brain and cognition may manifest differently in MWH and WWH, and that sex may contribute to heterogeneity not only in cognitive profiles but in their determinants, although studies with larger numbers of WWH are needed to more definitively test these hypotheses. It is important to detect these differing cognitive profiles and their associated risk/protective factors, as this information can help to identify differing mechanisms contributing to cognitive impairment and whether these mechanisms are related to HIV disease, neurotoxic effects of ART medications, and/or comorbidities that are highly prevalent among PWH. Given the longer lifespan of PWH in the era of effective antiretroviral therapy, cognitive profiling will also inform aging-related effects on cognition in the context of HIV and perhaps early clinical indicators of age-related neurodegenerative disease. By identifying cognitive profiles and their underlying mechanisms, we can ultimately improve our ability to treat by tailoring and directing intervention strategies to those most likely to benefit. Overall, our results stress the importance of considering sex differences in studies of the pathogenesis, clinical presentation, and treatment of cognitive dysfunction in HIV.

Older persons living with HIV (PLWH), often defined as age 50 years or older, represent a rapidly growing population. More than 50% of PLWH in the U.S. are 50 years or older. Furthermore, older PLWH have high rates of multi-morbidity. Chronic pain and substance use occur commonly in this population and are associated with poor health outcomes and increased use of healthcare services. PLWH are also at risk for declining physical functioning and reduced physical performance. Given the high prevalence of co-morbid pain, substance use, and reduced physical functioning in older PLWH, multi-component interventions targeting all three are needed. Cognitive behavioral therapy (CBT) is an evidence-based approach for managing both pain and substance use. According to the Infectious Diseases Society of America guidelines, CBT is a recommended first-line non-pharmacologic treatment for chronic pain management among PLWH. In addition, exercise therapies reduce pain, reverse muscle atrophy, and decrease fall risk among older adults with chronic pain. Tai chi is a mind-body exercise that combines gentle movement, meditation, and deep breathing. Tai chi can be feasibly administered to diverse groups of older adults and is associated with reduced pain, risk of falling, and depressive symptomatology. Given high rates of physical deconditioning in PLWH, tai chi constitutes a particularly appealing movement-based therapy due to its use of low-impact, graded, weight-bearing exercises.

Finally, text messaging has recently demonstrated efficacy in reinforcing elements of behavioral interventions, including those directed at changing addictive behaviors and managing chronic pain. Text messaging may also be an acceptable continuing care strategy following intensive treatment for a substance use disorder and in reducing problem drinking. We conducted a pilot randomized controlled trial (RCT) to assess the feasibility, acceptability, and preliminary efficacy of a multi-component behavioral intervention, a combined CBT and tai chi protocol reinforced with text messaging (CBT/TC/TXT), to reduce levels of pain and substance use and to improve physical performance among older PLWH. We hypothesized that participants randomized to the intervention arm would demonstrate reductions in substance use, pain-related disability, and pain intensity, along with improvements in physical performance.

Prior to the RCT, we conducted focus groups with prospective end users of the intervention to ascertain their preferences regarding behavioral treatments for pain; developed the integrated intervention and trained APAIT staff to deliver it; and conducted a small pilot study, using the results to refine study materials and procedures prior to the current trial. We also obtained supplemental funding to conduct daily diary assessments of overall health, pain, behavioral responses to pain, mood, sleep, exercise, drinking and drug use, and social contact among all study participants via their cell phones. These data are reported in a separate paper. The Institutional Review Boards of all participating institutions approved the study. All participants provided written informed consent, and participants assigned to the CBT/TC/TXT arm granted permission for the CBT sessions to be audiotaped. Study investigators developed an eight-session, manualized treatment protocol to be delivered once weekly over 60 minutes in a group format by behavioral health counselors. The eight-week, open-group program was adapted from three manualized interventions: 1) Manage Your Pain; 2) Integrated CBT; and 3) Mindfulness Based Relapse Prevention. All CBT sessions began with homework review, followed by delivery of didactic materials, coping skills, rehearsal exercises, and a new homework assignment. Therapy content focused on a different theme each week, including 1) coping with chronic pain, 2) using mindfulness to cope with pain, 3) understanding and changing problematic patterns of substance use, 4) building motivation for change, 5) stress management and problem solving, 6) coping with negative thoughts and emotions, 7) improving sleep, and 8) building social support. Participants were given a copy of the client manual to facilitate between-session homework practice of the coping skills presented in the weekly sessions. Three behavioral health counselors, including two members of APAIT’s staff, participated in a day-long training led by a Master’s-level clinician experienced in administering manualized CBT interventions. To maintain fidelity during the trial, a clinical psychologist provided monthly supervision with review of audiotaped CBT sessions and feedback to counselors. Ongoing fidelity monitoring was conducted on all CBT sessions using a previously developed fidelity rating scale that assessed the extent of study therapists’ use of CBT-specific skills. These ratings showed acceptable to excellent fidelity on all domains. The study employed a Yang-style tai chi program delivered weekly for one hour following each CBT session.
Each session started with a 10-minute warm-up, stretching, and review of tai chi principles, followed by 30 minutes of tai chi exercises, including five animal forms, a walking meditation, and a partnered activity known as push hands. Each class ended with a 10-minute cool-down and a 5-minute closing that included a review of the material presented. A tai chi instructor with 18 years of experience trained two APAIT exercise program staff to lead the tai chi exercises. The staff underwent training for 1 hour weekly for 3 months before they began to lead tai chi sessions in the study. The trainer also attended four tai chi study sessions with each of the study instructors during the trial to monitor staff members’ instruction and provide feedback and adjustments as needed. Between November 2015 and April 2016, participants were recruited by a research assistant who distributed flyers at venues serving PLWH, including APAIT, gave presentations at nearby health agencies, and approached potential participants at health centers. A flow diagram shows the number of individuals approached, screened, and recruited, as well as losses to follow-up. Randomization was conducted by research staff using consecutively numbered, sealed envelopes containing assignment information; a computer-generated set of random numbers was used to select permuted blocks of six (a minimal sketch of this scheme appears below). Within each block, equal numbers were assigned to each of the three groups. Participant follow-up concluded in July 2016. Participants were compensated for their time via gift cards. Participants in the CBT/TC/TXT arm could be compensated up to $280, those in the SG arm could be compensated up to $200, while subjects in the AO arm could be compensated up to $120. Compensation included $10 for attending each CBT, TC, or SG session.

Demographic and health-related data included date of birth, gender, race/ethnicity, education, marital status, housing arrangement, employment status, number of years living with HIV, most recent CD4+ T lymphocyte count, HIV-1 RNA (detectable or undetectable), and number of non-HIV chronic medical conditions. Participants also completed mental and physical health measures. Substance use data included the most often used substance and the total number of substances used. Substances included both drugs and alcohol. Substance use measures included the WHO ASSIST-Version 3 and the Timeline Followback (TLFB). The TLFB was used to determine the number of days in the past 30 days of a) using a preferred substance; b) using any substance; c) using any drugs; and d) heavy drinking. Pain data included number of years of chronic pain and medications used to treat pain.
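To make the allocation scheme concrete, below is a minimal sketch of permuted-block randomization with blocks of six and equal assignment to the three study arms. The helper names and seed are illustrative; this is not the study's actual allocation code.

```python
# Minimal sketch of permuted-block randomization as described above:
# blocks of six, two participants per block assigned to each of the
# three arms (CBT/TC/TXT, SG, AO). Names and seed are illustrative.
import random

ARMS = ["CBT/TC/TXT", "SG", "AO"]

def permuted_block(rng: random.Random, block_size: int = 6) -> list[str]:
    """Return one shuffled block with equal numbers assigned to each arm."""
    assert block_size % len(ARMS) == 0
    block = ARMS * (block_size // len(ARMS))
    rng.shuffle(block)
    return block

def allocation_sequence(n_participants: int, seed: int = 42) -> list[str]:
    """Concatenate shuffled blocks until every participant has an arm."""
    rng = random.Random(seed)
    seq: list[str] = []
    while len(seq) < n_participants:
        seq.extend(permuted_block(rng))
    return seq[:n_participants]

print(allocation_sequence(12))
```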


A few studies do suggest that health concerns are at least as important as legal risks

People smoke more when they use filtered and low-tar cigarettes. And there is some evidence that improved HIV treatments are associated with increases in risky sexual behavior. How might safety testing have such effects? Conceivably, users who were worried about drug quality in the illicit market may become less worried if they learn through safety testing that drugs are generally pure in the local market. For better or worse, the purity rates presented above suggest little cause for this concern. A somewhat different concern is that the very presence of a safety testing organization, like DanceSafe, might make people feel more comfortable about using MDMA. One survey has examined this possibility. Seven hundred and nineteen students at McDaniel College in Maryland were asked whether they had ever used Ecstasy, and "whether the presence of [DanceSafe] would affect their decision to try [Ecstasy] for the first time or use it." Among the 75% who had never used, 69% said they would not use under any condition, 19% said they might be more likely to use under such conditions, and 12% said that if they did decide to use they would not be influenced by the presence of DanceSafe. Students who had previously used Ecstasy were equally divided between those who thought they might be influenced and those who did not. But there are also reasons to think that safety testing, with its historically dire purity statistics, might scare off some drug users. At the very least, some fraction of participants who submit samples that turn out "dirty" presumably quit using, scale back their use, or at least delay their use while seeking better samples. And to the extent that other potential users see these statistics, the deterrent effect might be much broader than the limited participation rates indicate. Do current and potential users consider health risks—and the risk of being ripped off—when they consider drug use? The health risks of illicit drugs have long been a major focus of prevention campaigns, and various studies show that current users worry about these risks.

One such study reported that users and nonusers of MDMA frequently relied on the Internet for information about MDMA. Users were more likely to seek information from non-government sites than from government sites, and the nongovernmental sources were perceived to be more accurate than the governmental sources. This Article makes no claim that health fears matter more than legal fears. It is surprisingly difficult to find surveys comparing the relative importance of fear of legal risk and fear of health risk. The Monitoring the Future survey conflates the two dimensions by asking "How much do you think people risk harming themselves, if they [try marijuana]." In the vast literature on drug prevention and on the application of attitudinal theories—reasoned action, planned behavior, and the health belief model—to drug use, there is almost no research directly reporting perceived fear or risk of arrest or other legal sanctions. On the other hand, the smaller "perceptual deterrence" literature assesses perceived legal risk, but does not examine health concerns. An Australian survey by Professors Don Weatherburn and Craig Jones found that those not using cannabis were more likely to cite "worried about your health" than "[c]annabis is illegal," "[y]ou are afraid you will be caught by the police," or "[y]ou have drug testing in your workplace" as a reason for not using. And the aforementioned McDaniel College survey, which suggested that people might be influenced by DanceSafe, found that both users and nonusers worried more about the purity of Ecstasy than about legal sanctions. But a broader harm reduction benefit occurs through the testing messages posted by safety testing organizations. These messages can be quite specific. For example, DanceSafe and EcstasyData.org post photographs of contaminated or adulterated "brands" of MDMA, together with the date and geographic region of the purchase.

I have already reviewed evidence that a sizeable fraction of MDMA users say they read such information on the web, that they view the information as credible, and that their health and safety matter to them. So it is possible that for every anonymous sample provider who is helped, there are many more potential users who are also helped. But again, I am not aware of direct evidence of the harms averted by safety testing. As with use testing, there may be other, less direct consequences, some of which may be undesirable. There may be a substitution from one type of drug to another; for example, users may come to distrust MDMA and seek out other substances. Some of those substances are arguably more benign; others may be more harmful. In theory, widespread safety testing could improve the quality of illicit drugs in the marketplace. This provides a stark illustration of the tension between harm reduction and use reduction, because better drug quality should increase demand. But it is difficult to make firm predictions here. In an ordinary market, sellers should charge more for higher quality goods, and buyers should be willing to pay more. In the long run, sellers of low-quality goods can expect to lose customers to sellers offering higher quality goods at the same price. But illicit drugs are not an ordinary market. Professors Jonathan Caulkins and Rema Padman found that prices rose with purity for white and brown heroin and powder cocaine, but surprisingly, they were unable to detect an effect of purity on the prices of crack, methamphetamine, or black tar heroin. To help explain this puzzle, Professors Peter Reuter and Jonathan Caulkins detail a number of distinctive features of illicit drug markets, including the multistage distribution networks connecting producers and consumers, uncertainty about quality, turnover of buyers and sellers, and a limited ability to signal quality through consistent branding. Many of these features produce the kind of informational problems discussed in Professor George Akerlof's classic paper on "the market for lemons." A lemons market occurs when there is an informational asymmetry such that sellers know more than buyers about a good's quality. This asymmetry increases the supply of low-quality goods, and can even collapse the market if potential buyers refuse to make new purchases. One major difference from the classic lemons model is the higher likelihood of repeat buyer-seller transactions; in drug markets, the retail seller also has imperfect knowledge of and control over quality. From a use-reduction standpoint, the highly variable quality of drugs probably reduces the demand for illicit drugs. But from a harm-reduction standpoint, this feature of illicit markets is quite troubling. First, it creates a high risk of overdose and illness, because adulterants can have toxic effects and because customers have difficulty calibrating their dosage. Second, it encourages disputes between sellers and buyers, and given the illicit nature of their transactions, these disputes cannot be taken to legal authorities and thus frequently result in violence.

Over 35 million people worldwide live with human immunodeficiency virus (HIV), and 1.2 million of these people live in the United States. Since the development of combination antiretroviral therapy (cART), HIV-associated mortality has decreased in the United States, such that the lifespan of people living with HIV with reliable access to cART is comparable to that of those without HIV. Despite these advances in the medical management of HIV disease, the central nervous system (CNS) remains vulnerable. In fact, HIV targets the CNS within days after infection, leading to neurological, behavioral, and cognitive complications. Even in the current cART era, mild neurocognitive deficits are observed in about 45% of PLWH, particularly in the domains of executive function, learning, and memory. Neuroimaging studies suggest that functional and structural abnormalities in subcortical regions underlie these cognitive deficits. Neurocognitive impairment among PLWH is clinically meaningful because it is known to adversely affect daily functioning, conferring an increased risk of poor medication management, impaired driving ability, problems in employment, and early mortality. As the HIV+ population ages, understanding and addressing HIV-associated comorbidities that impact cognitive performance and everyday functioning is critical to overall healthcare for PLWH.

Multiple adverse experiences such as childhood trauma, sexual abuse, physical violence, unemployment, and poverty are highly prevalent among PLWH and have known CNS consequences. For example, estimates of sexual and/or physical abuse in PLWH range from 30% to over 50%. Whereas the physiological response to acute stress is typically adaptive, chronically elevated stress exposure can disturb brain development and function and increase risk of psychiatric disease. Chronic exposure to stress and stress hormones such as glucocorticoids can hinder immune mechanisms, amplify inflammation in the CNS, and, furthermore, exacerbate injury-induced neuronal death. Chronic stress in healthy adults is linked to structural and functional alterations in the hippocampus and prefrontal cortex, and to poorer memory recall ability. Due to the overlap in the inflammatory and immune mechanisms shown to be affected by stress and HIV, traumatic and stressful experiences may contribute to or compound the likelihood of CNS injury via this pathway in PLWH. Thus, PLWH with a history of trauma and adversity may be at increased risk for neurocognitive impairment and decreased functional capacity. Among men living with HIV, a previous study found that stressful life events were related to worse executive functioning, attention, and processing speed. In women living with HIV, high levels of self-reported stress were associated with verbal memory deficits, as well as prefrontal cortex structural and functional deficits. Conversely, high stress was not associated with verbal memory performance in women without HIV, suggesting that stress may be particularly deleterious to cognitive function in the context of HIV. Another recent study found that PLWH with higher levels of social adversity showed reduced volumes of subcortical structures and worse learning/memory performance, and these findings did not extend to the HIV- group. Stress, emotional reactivity, and avoidant coping behaviors are related to important daily functioning behaviors such as medication nonadherence among PLWH.
Although multiple studies have examined the effects of stress on cognitive function within cohorts of PLWH or individuals without HIV, few have directly compared the effects between serostatus groups while examining the combined effects of multiple traumatic and stressful experiences, or included standardized measures of daily functional abilities. In the present study, we investigated whether a composite measure of multiple adverse experiences including trauma, economic hardship, and stress exerts a negative impact on cognitive and everyday function in a cohort of adults living with and without HIV. We hypothesized that PLWH would experience more trauma, economic hardship, and stress than their HIV- counterparts. Furthermore, we hypothesized that elevated TES would relate to worse cognitive function and everyday function in both serostatus groups, but that the magnitude of the association would be greater for PLWH compared to their HIV- counterparts, after controlling for established predictors of cognitive and functional status.

Participants were 122 PLWH and 95 adults without HIV from the Multi-Dimensional Successful Aging among Adults living with HIV study conducted at the University of California San Diego. This study utilized cross-sectional data from the first study visit. The UCSD Institutional Review Board approved this study, and all participants provided written, informed consent. Exclusion criteria were minimal in order to enroll a representative cohort of PLWH and HIV- adults, and included: diagnosis of a psychotic disorder or mood disorder with psychotic features; presence of a significant neurological condition known to impact cognitive functioning; and positive urine toxicology on the day of testing. An HIV/HCV finger-stick point-of-care test was used to test all participants for HIV infection. Of the participants who reported they were HIV- at screening, none tested positive for HIV. Study visits consisted of detailed neuromedical, psychosocial, and cognitive assessments, and specimen collection. Our TES composite variable was derived to capture three components of adversity: traumatic events; economic hardship (food insecurity and low socioeconomic status); and perceived stress. Traumatic events were assessed by the self-report Women's Health Initiative Life Events Scale, which assesses traumatic events over the past year. For our trauma variable, we included the following five items from this scale: death of a spouse or partner; major problems with money; a major accident, disaster, mugging, unwanted sexual experience, or robbery; physical abuse by a family member or close friend; and verbal abuse by a family member or close friend, for which the participant rated the event as moderately or very upsetting. In the overall cohort, the number of traumatic life events ranged from zero to five.
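The paper does not spell out the scoring formula for the TES composite, but composites of this kind are often built by standardizing each component and averaging the z-scores. A minimal sketch under that assumption (the component values and equal weighting are hypothetical):

    from statistics import mean, stdev

    def z_scores(values):
        m, s = mean(values), stdev(values)
        return [(v - m) / s for v in values]

    # Hypothetical component scores for five participants:
    trauma = [0, 2, 5, 1, 3]                # upsetting traumatic events (0-5)
    hardship = [0, 1, 2, 0, 2]              # food insecurity + low SES indicators
    perceived_stress = [4, 10, 16, 6, 12]   # perceived stress scale total

    components = [z_scores(trauma), z_scores(hardship), z_scores(perceived_stress)]
    tes = [mean(vals) for vals in zip(*components)]  # per-participant mean of z-scores
    print([round(t, 2) for t in tes])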


This difference highlights the importance of age in relation to depressive symptoms

For example, rates of elevated depression among PLWH were similarly high in all age groups. In contrast, only the youngest HIV− group had relatively low rates of elevated depressive symptomology, with higher rates in older cohorts. This is consistent with research estimating high prevalence of subsyndromal depression among middle-aged to older adults, especially those with greater medical burden, disability, and lower social support. Overall, the H+/D+ group reported the lowest physical and mental HRQoL; however, the relationships between the four groups differed depending on age cohort. While depressive symptoms in PLWH consistently related to lower mental HRQoL across ages, elevated depressive symptoms most prominently impacted physical HRQoL in the oldest H+/D+ group. These findings are consistent with prior studies that have reported a correlation between worse HRQoL and depression among PLWH. However, our novel findings highlight that the relationship between depression, age, and HRQoL differs for mental components compared to physical components. Importantly, there were no differences on HRQoL or positive psychological factors between the two non-elevated depressive symptom groups. Similar to prior research, the H+/D− group reported comparable grittiness, resilience, and successful aging to the H−/D− group, which indicates that in the absence of elevated depressive symptoms PLWH rate themselves as having favorable positive psychological factors. In the oldest age decade, the H+/D− group had the highest positive psychological factors, suggesting an important relationship between these positive psychological factors and being able to live a relatively long, non-depressed life as a person living with HIV. Hence, positive psychological factors may be protective for PLWH. Individuals' subjective health ratings may provide valuable insight into their overall well-being, as previous studies have shown an association between worse reported health ratings and an increased risk of mortality.

This finding may also reflect a potential "survivor effect," given that these older individuals have had HIV for longer and, as long-term survivors, may view living with HIV more positively compared to prior expectations. This study has strengths in its multi-cohort design methodology, which allows us to examine the combined effects of HIV and depression on HRQoL across age cohorts; there are also some limitations, however. For example, we were not able to address questions regarding the onset of depressive symptoms in relation to HRQoL or the positive psychological factors. The cross-sectional nature of the current data analyses prevents any causal attributions. For instance, depression may lead to less resilience and grit, or vice versa. Like prior studies, we found a higher proportion of elevated depressive symptoms among PLWH, and individuals with elevated depressive symptoms reported lower HRQoL and positive psychological factors. There may be other factors related to depression and acquiring HIV not captured by our present variables that may account for the difference in depressive symptoms by HIV status. Another limitation is the small sample size per group, especially within the H−/D+ group. Furthermore, the sample, particularly within the PLWH groups, was predominantly male, and these results may not be generalizable to females. However, within the United States the majority of middle-aged to older PLWH are male; thus, our study cohort is similar to the broader characteristics of PLWH in the U.S. Given the negative consequences of depression in PLWH, it is important to identify those in greatest need of treatment. Prior work has highlighted the usefulness of cognitive behavioral therapy for depression treatment among PLWH, even in those with advanced HIV disease. Furthermore, meta-analytic work has shown that psychotherapeutic interventions reduce depressive symptoms in PLWH, which in turn may lead to improved psychiatric and medical outcomes. With this said, older PLWH are less likely to be engaged in behavioral health treatment for depression than younger PLWH, highlighting the need to address underlying factors contributing to the lack of adequate mental health treatment among older PLWH. However, increasing or improving positive psychological factors may provide one potential avenue to mitigate depressive symptoms.

How bad were things in 2020?

Perhaps because of a relatively strict state policy, Oregon has actually weathered the COVID pandemic quite well public-health-wise so far. As of January 27, 2022, the Beaver State had the fourth-lowest rate of infections in the country at 14,353 cases per 100,000 residents and in deaths at 143 per 100,000, compared to national averages of 22,233 and 263 respectively. However, Oregon's employment dropped more than the national rate between February 2020 and December 2020, with the state losing 7.8% of its jobs in that period, compared to a national loss of 6.5%. The state's GDP declined 3.1% in the first three quarters of 2020, very close to the national average decline of 3.4%. In the summer of 2020, Oregon also faced forest fires that killed nine people, burned 1.2 million acres, and destroyed a record-breaking 4,000-plus homes. In February 2021, northwestern Oregon also suffered an ice and snow storm that caused at least 279,000 homes to lose power, many of them for multiple days and some for weeks. Beyond the acts of nature, Oregon also faced the consequences of a highly polarized political context similar to that of the entire nation. Even with Democratic dominance in the legislature and all statewide offices, the sharp split between Democrats and Republicans and the rising militancy of conservatives roiled the state. As happened in 2019, Republican legislators walked out of the regular 2020 legislative session, but this time the Democrats simply ended the session early, since they needed at least two Republicans present in both the House and Senate to make the membership quorum. According to the Oregon constitution, "Two thirds of each house shall constitute a quorum to do business." In the Senate, there are 30 members, which means a quorum of 20 is needed. In the House of Representatives, there are 60 members, with a quorum of 40 needed. Twenty-two of the 60 Representatives are Republicans, and in the Senate, 12 of the 30 members are Republicans. Thus, the Democratic supermajorities are meaningless if the Republicans decide not to participate, which happened during the Spring 2020 session. The Republican walk-out concerned climate change legislation, and the early end meant over 100 bills were never even considered. While the legislature had successful special sessions later in 2020 in response to the COVID and forest fire crises, the 2021 regular session had an early walk-out over Governor Brown's decision to continue her emergency powers in response to COVID. This ominous action and the Oregon State Republican Party's selection of the very conservative State Senator Dallas Heard as its chair threatened the success of the 2021 regular session.
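The quorum arithmetic can be verified directly; a small Python sketch using the chamber sizes quoted above:

    def quorum(members):
        # "Two thirds of each house shall constitute a quorum to do business."
        # Integer ceiling of 2/3 * members, avoiding floating-point rounding.
        return -(-2 * members // 3)

    senate, house = 30, 60
    print(quorum(senate), quorum(house))   # 20 and 40, as stated in the text
    # Democrats hold 18 of 30 Senate seats and 38 of 60 House seats,
    # short of quorum in both chambers without Republican participation.
    print(senate - 12, house - 22)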

As US News and World Report noted: "During a Jan. 6 demonstration by Trump supporters outside the Oregon Capitol, he pointed at the building and shouted through a megaphone: 'Don't let any of these punks from that stone temple over there ever tell you that they are any better than you. Trust me, I work with these fools.' He followed that with, 'Don't be violent, take action, trust in God and take down these fools in 2022.'" All across Oregon, rural activists are pushing the idea of their communities leaving Oregon and joining Idaho. You would think the state budget would be collapsing under such circumstances, but paradoxically, and by many measures, the session finished very positively, and the state budget was record-breaking in size. This overview of budget developments in Oregon centers on three aspects of 2020 and early 2021: 1) the fiscal implications of the state's economic conditions, 2) the existence of strong reserves and changed revenue sources, and 3) the COVID-related cash from the federal government. In June 2020, the Oregon Office of Economic Analysis (OEA) made a grim assessment: "The sudden stop in economic activity has led to the largest downward revision to the quarterly forecast that our office has ever had to make. In the baseline scenario, General Fund and other major revenues have been reduced relative to the March forecast by $2.7 billion in the current biennium and $4.4 billion in the 2021-23 budget period." As noted above, however, this gap was eclipsed by increased business income and positive developments in the inherently volatile and unpredictable personal income tax revenues due to federal COVID stimulus spending. Four other revenue factors distinguish the budget picture in 2021: 1) the existence of significant "rainy day funds"; 2) the creation in 2019 of a new Corporate Activity Tax to significantly supplement K-12 funding; 3) loss of Oregon Lottery revenues; and 4) complex growth in the small but always noteworthy cannabis tax. By June 2021, when the 2021-2023 budget was finally approved, the "rainy day funds" did not seem so important, but their existence provided the legislature many options for responding to the COVID recession when it appeared to be laying the state budget to waste.

The size of the reserves made it uncontroversial to consider tapping them, although positive developments meant the legislature did not need to make full use of them. Oregon has two reserve funds. The Oregon Education Stability Fund was projected to grow to $801 million by the end of the 2019-2021 budget, but the Legislature spent about $400 million during an August 2020 special session called in response to the COVID situation. Besides money reserved for education alone, there is the Oregon Rainy Day Fund, which has not been tapped since 2010 and was projected to have a balance of $949.3 million at the end of the 2019-2021 budget term. For a state so heavily dependent on personal income taxes, this buffer gave legislators some confidence that radical cuts might be forestalled, even without the major rescue operations by the federal government. The June 2021 OEA report projects that at the end of the 2021-2023 budget Oregon will have a total of $1.94 billion in reserves, about 8.3% of revenues. This is the highest reserve the state has ever had.

The 2019-2021 state budget also featured a key new source of revenue: the Corporate Activity Tax (CAT). For 2019-2021, General Fund spending on K-12 Education was approved at $8.25 billion, or almost 35% of state General Funds. The General Fund budget itself rose overall by 12.2% from 2017-2019, while the K-12 component rose only 1.6% over 2017-2019's $8.12 billion. General Fund allocation to K-12 funding was actually 5% short of current service level funding between the two biennia. In Oregon, the education lobby is very powerful, as in most Democratic states, and such a budget cut would be nearly unthinkable in the good times of 2019. But the Democratic majority compensated for the General Fund shortfall for K-12 Education by passing HB 3427, the "Oregon Student Success Act," which created the CAT, a relatively large new revenue source dedicated entirely to K-12 programs. The Act features spending requirements that Democrats say ensure the money will not be siphoned off to support non-productive educational spending, such as payments into Oregon's expensive Public Employee Retirement System. Besides creating the new CAT, the Student Success Act cut the income tax rate for the three lowest tax brackets by 0.25 percentage points, to 4.75%, 6.75%, and 8.75%. The top income tax rate, which kicks in at individual incomes of $125,000, was left unchanged at 9.9%. The first allocations of CAT funds actually went to the General Fund to fill in the $423 million lost due to tax provisions included in HB 3427, plus $20 million to fund the High Cost Disability program and $200 million to be allocated according to the existing school revenue formula. The real significance of the CAT is demonstrated by its projected future revenues, which are predicted to increase steadily: for 2021-2023 it is projected to bring in $2.29 billion, and in 2023-2025, $2.60 billion. Although there are many moving parts in the state's K-12 total budget, the addition of the CAT revenues, which started in 2020, helped raise total K-12 spending by 9.9% over the 2017-2019 amount. Incidentally, the state's share of total K-12 state/local spending increased slightly to 67.8%. COVID seems to have affected two smaller revenue streams in interesting ways. First, Lottery revenues for 2019-2021 were down greatly.


The classic hippocampal circuit is a trisynaptic circuit utilizing glutamatergic neurotransmission

By follow-up in June 2021, on average, there were no significant differences from pre-pandemic patterns of alcohol and nicotine use. Findings are consistent with previous short-term studies showing a pandemic-related increase in the number of days drinking. In our data, this change reflected a different distribution of drinking across the population: compared to pre-pandemic, fewer young adults were drinking, but those who did drank more frequently. While two previous studies found decreases in binge drinking, we did not find a statistically significant change in the number of days of binge drinking at any timepoint in the current study. However, the non-significant reduction we observed in binge drinking in June and December 2020 was directionally consistent with these previous studies. In addition, the time frame of measurement may explain the discrepancy: those two previous studies focused on changes earlier during the pandemic, in March and April 2020, whereas another study focusing on changes in June and July 2020 also found no significant change in binge drinking. As in one previous study, we did not find an average effect of the pandemic on nicotine use. However, this appeared to obscure opposing changes among those who did vs. did not experience impacts on their financial security. Relative to pre-pandemic, in June 2020, those with past-month nicotine use had increased their number of days using if they experienced financial impact, and had a stable or decreased number of days using if they denied experiencing financial impact. Loss of a job or reduction in work hours could increase smoking during periods of boredom at home or to cope with the attendant stress. This pattern is consistent with the larger literature documenting how the pandemic may exacerbate health disparities based on pre-existing socioeconomic advantage.

However, moderation of multiple outcomes was tested, so the current findings should be regarded as preliminary and await replication. This study had limitations. First, findings may not generalize beyond emerging adults ages 18-22 years old. Second, for nicotine use, we did not measure the quantity used each day, which could have changed. Third, we did not consider other substances such as cannabis. Fourth, the mode of assessment differed from the pre-pandemic to during-pandemic assessments, potentially introducing differences. Fifth, secular changes in the rates of alcohol or nicotine use among young adults between 2016 and 2021 could be confounding the effect of the pandemic, potentially introducing bias. Sixth, pre-pandemic responses on a free-response scale had to be mapped onto the discrete response options, potentially limiting precision. Seventh, we assessed the degree to which the pandemic impacted individuals' financial security but not the form of this impact. Eighth, pre-pandemic observations were not anchored to the months of June and December, so seasonal effects could explain part of the observed differences.

We reported here the most extended follow-up to date of pandemic-related changes in drinking and nicotine use in emerging adults. The study had several further strengths. We used seven years of pre-pandemic assessments and a rigorous age-based design to identify the pandemic's impact over and above typical developmental changes. We incorporated three assessments spanning the first 15 months of the pandemic to study whether early changes in drinking and nicotine use persisted. Participants spanned five sites across the U.S. and multiple racial and ethnic backgrounds. Finally, we focused on a critical developmental period associated with elevated risk for problematic use. In summary, in a heterogeneous group of young adults, pandemic-related changes in drinking patterns were no longer detectable in June 2021. Pandemic-related increases in nicotine use occurred only for participants who reported greater impact of the pandemic on their financial security—these subgroup effects were no longer statistically significant in June 2021, though a large effect size for past-month nicotine use remained. Thus, those whose financial security has been adversely impacted by the pandemic may reflect a vulnerable group worth targeting for support in managing drinking and nicotine use.

Continued follow-up beyond summer 2021 is necessary to verify that the pandemic's effects on drinking and nicotine use have indeed faded and to understand the pandemic's long-run impacts on substance use trajectories into adulthood.

Parkinson's Disease (PD) treatment has been based on dopamine (DA) replacement therapy for 35 years. Yet side effects resulting from long-term use of DA agonists, namely dyskinesias and on-off responses, are prompting investigations of alternative neurotransmitter manipulations to modulate basal ganglia function and normalize motor activity. Dyskinesias often result from a lesion or disturbance affecting the transcortical loop or indirect pathway, with disruption of the balance between excitation and inhibition in the globus pallidus pars externa-subthalamic nucleus-globus pallidus pars interna circuit. Thus, dyskinesias reflect altered patterns of neuronal firing in this circuit, which result in the improper selection of specific motor programs and, eventually, in the development of hyperkinetic movements. Endocannabinoids, the endogenous ligands of cannabinoid receptors, are synthesized on demand by neurons in response to depolarization and, once released, diffuse backwards across synapses to suppress presynaptic GABA or glutamate release. Because of these properties, the endocannabinoid system may offer new pharmacological targets for the treatment of neurologic conditions characterized by abnormal firing patterns. One application of cannabinoid-based therapeutics would be for dyskinetic syndromes, hyperkinetic disorders characterized by changes in pattern, synchronization, mean discharge rates, and somatosensory responsiveness of neurons in the direct and indirect extrapyramidal motor circuits. Further applications of cannabinoid-based therapeutics may extend to treatment of seizure disorders, changes in behavioral or cognitive state resulting from hypersynchronous excessive neuronal discharges in other circuits, for example limbic, cortical, or thalamic. To test the hypothesis that endocannabinoids act as endogenous antidyskinetic agents with modulatory effects on abnormal basal ganglia circuits, we examined endocannabinoid production in specific areas of the basal ganglia of rats infected with Borna disease virus and how cannabinoid agonists and antagonists affect their motor behaviors. Borna disease virus is a negative-strand RNA virus epidemiologically linked to patients with neuropsychiatric disorders and Parkinson's-plus syndromes.

After infection, BD rats develop an extrapyramidal disorder with spontaneous dyskinesias, hyperactivity, stereotypic behaviors, partial DA deafferentation, DA agonist hypersensitivity, and Huntington's-type striatal neuropathology. Our investigations revealed elevations in the endocannabinoid anandamide in the subthalamic nucleus (STN) of BD rats, associated with increased metabolic activity in this key basal ganglia relay nucleus. As pharmacological antagonism of CB1 receptors caused BD rats to seize, we also evaluated the relationship between changes in anandamide levels and seizure phenomena. Our results suggest that anandamide acts as both an endogenous antidyskinetic and anticonvulsive compound, in part via interactions with the opioid system. Our results are consistent with a functional role for anandamide signaling as a natural mechanism to buffer abnormal firing patterns in various neural circuits. Using BD rats, a rodent model of virus-induced neurodegenerative syndrome and spontaneous dyskinesias, we showed significant anandamide elevations in the STN, a critical basal ganglia relay nucleus in which abnormal firing has been linked to dyskinesias, dystonia, and hemiballismus. The characteristics of STN neurons, such as fast firing kinetics, short membrane refractory periods, and the ability to modify their firing pattern after small changes in impinging synaptic input, render the STN well-suited to regulation by activity-dependent modulators such as endocannabinoids. In keeping with this hypothesis, CB1 receptor protein and functional CB1 receptors have been found in the STN of rats. However, neither WIN 55,212-2 nor AM404 had robust antidyskinetic effects in BD rats, which we attribute to loss of CB1 receptors along with GABA neurons in other nuclei of the basal ganglia circuit, as indicated by striking loss of GAD immunoreactivity in BD rats. STN hyperactivity is a recognized feature of PD, but may also signify abnormal patterns of firing, as in dystonia. Thus, in BD rats, which display mixed Parkinsonian and Huntington's lesions, the elevations in anandamide and metabolic activity observed in the STN may represent an important compensatory or modulatory reaction to abnormal input to the STN, with the net effect of reducing pathologic or dyskinetic movements.

The convulsant effect of the CB1 receptor antagonist SR141716A was an unexpected result. Since abrupt reduction of endocannabinoid tone produced hippocampal seizures in BD rats, we suggest that anandamide, in addition to its compensatory function in the basal ganglia, may have a role in maintaining homeostatic or balanced activity in limbic networks. To further evaluate the role of anandamide in convulsive phenomena, we used a seizure paradigm already developed in BD rats. In these rats, degenerative changes extend to the hippocampus and amygdala, and self-limited limbic seizures can be consistently produced within 5 to 10 min of administration of the general opiate antagonist naloxone. We found that administration of naloxone did not change anandamide levels in the hippocampus and amygdala of BD rats that seized, while naloxone did cause significant anandamide elevation in the same brain areas of normal rats that did not seize. The failure of BD rats to increase limbic region levels of anandamide in response to the opiate antagonist naloxone is consistent with the idea that decreased availability of anandamide on demand contributed to seizures induced by a chemoconvulsant. Dynamic neurotransmitter buffering in rapid response to excitatory stimuli may be a general principle of endogenous anticonvulsants, applying to classic inhibitory neurotransmitters such as GABA and to neuromodulators such as opioids or endocannabinoids. For example, when opioid tone was reduced by naloxone, the result was increased EEG activity in both normal and BD rats. When BD rats developed increased or hypersynchronous EEG activity but could not increase anandamide levels, it was the anandamide transport blocker AM404 that limited or reversed naloxone excitability and rescued the animal from seizures. Anticonvulsant efficacy was most likely via elevation of anandamide or other endocannabinoid tone, an interpretation consistent with anecdotal reports by patients of improvement in seizure frequency or severity with marijuana use. However, at this time, we cannot exclude a vanilloid-mediated effect of AM404, since this drug binds to the TRPV1 receptor. Our study widens the role of potential cannabinoid-opioid interactions beyond substance abuse, tolerance and dependence phenomena, analgesia, hypothermia, and inflammation, and suggests a reciprocal relation between these two systems with respect to convulsive phenomena. So far, cannabinoids and opioids have been implicated separately in seizures. While CNS opioid dysregulation has been considered a substrate for the interictal personality disorder, changes in CNS endocannabinoid signaling, given their association with schizophrenia and psychotic symptoms, might also contribute to interictal cognitive or personality syndromes. Endocannabinoid upregulation during opiate withdrawal could explain the absence of seizures during opiate withdrawal. The opioid system includes several families of related neuropeptides and the μ-, δ-, and κ-opioid receptors. In other work, kappa opioid receptors (KORs) have been identified as a major contributor to anticonvulsant efficacy. When KOR and CB1 receptors are compared, they are found to have convergent biochemical mechanisms. Both are members of the Gi/o protein-coupled receptor family and signal through cAMP-protein kinase A, inwardly rectifying K+ channels, and N-, P/Q-, and R-type Ca2+ channels. KORs and CB1 receptors exhibit overlapping neuroanatomic distribution in the hippocampus.
CB1 receptors are found on CCK-expressing interneurons, while KORs are found on mossy fiber terminals, principal neurons, perforant path and supramammillary afferents, and GABA/SOM/NPY-containing interneurons. At each step, excitatory tone is modulated by a diverse group of inhibitory and excitatory neurons. KOR or CB1 stimulation of selective interneurons could desynchronize GABA inputs to a postsynaptic network. Desynchronization of signals from GABA-containing interneurons to their networks of pyramidal cells is one mechanism of enhanced inhibition of principal neurons. KOR stimulation at other sites, producing pre- or postsynaptic inhibitory effects on large pyramidal neurons, would also modulate excitability of principal neurons, with the net effect of modulating hippocampal outflow pathways. Further studies in our model will investigate the effect of AM404 on endocannabinoid production in the limbic system. Greater understanding of the conditions for interaction between the endocannabinoid and opioid systems will enhance our knowledge of neural circuits that serve fundamental or broad homeostatic functions, and this will be the goal of future studies. In conclusion, knowledge of endocannabinoid distribution and function throughout basal ganglia circuits could lead to the identification of non-dopamine pharmacologic targets for dyskinetic disorders and a greater understanding of the role of output pathways in the genesis of motor behaviors and involuntary movements.


Further study into the individual effects of AVF on compression strategy is warranted

This event rate is, however, higher than the reported adverse event rate when using ketamine as a single agent. This discrepancy may be due to the fact that children in this study may have had more than one adverse event documented during a single sedation, such as apnea, oxygen desaturation, and BMV. Previous studies have shown that ketamine has a low side-effect profile, with the most common adverse events being those related to respiratory compromise and emesis. In fact, the odds of respiratory adverse events associated with ketamine use increase when it is administered intramuscularly instead of intravenously. In addition, ketamine-associated emesis can be reduced by administering ondansetron prior to the start of PSA. However, neither of the two patients who had emesis during PSA in this study received ondansetron as a premedication. Moreover, while the authors did not evaluate NPO status and how this relates to emesis, previous studies have shown that NPO time does not affect the rate of major adverse events during PSA.

Despite advances in the field of resuscitation science and modest improvement in outcomes, mortality from in-hospital cardiopulmonary arrest (CPA) remains relatively high. However, a common denominator in recent reports of modest outcome improvements in CPA resuscitation has been the link to quality of cardiopulmonary resuscitation. In particular, high-quality chest compressions have been described as the foundation that all additional, "downstream" resuscitative efforts are built upon, and are highly associated with improved survival and favorable neurological outcomes. Most recently, high-quality chest compressions have been defined by the updated 2015 American Heart Association (AHA) adult guidelines as a depth of 2-2.4 in, full chest recoil, a rate between 100-120 compressions per minute, and a chest compression fraction of at least 60%. Even when delivered according to guidelines, external manual chest compressions are inherently inefficient, providing only 30% to 40% of normal blood flow to the brain and less than one third of normal blood flow to the heart.
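To restate those 2015 adult guideline targets concretely, the following sketch flags whether a two-minute compression cycle meets each criterion (the function and input format are illustrative, not from the study):

    def meets_2015_aha_targets(depth_in, rate_per_min, compression_fraction, full_recoil):
        # 2015 AHA adult targets quoted above: depth 2-2.4 in, rate 100-120/min,
        # chest compression fraction of at least 60%, and full chest recoil.
        return {
            "depth": 2.0 <= depth_in <= 2.4,
            "rate": 100 <= rate_per_min <= 120,
            "fraction": compression_fraction >= 0.60,
            "recoil": bool(full_recoil),
        }

    checks = meets_2015_aha_targets(2.1, 126, 0.72, True)
    print(checks)                # the rate criterion fails in this example
    print(all(checks.values()))  # "high quality" only if every target is met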

This inefficiency highlights the need for rescuers to deliver the highest-quality chest compressions in a timely and consistent manner. Although the relationship between high-quality chest compressions and improved survival has been well described, concern remains over reports of trained rescuers performing suboptimal compression depth, rate, and hands-off fraction time. Rescuer overestimation of depth and underestimation of rate, as well as increased performance fatigue in prolonged situations, may be primary forces in the relatively poor adherence to current guidelines. Real-time CPR-performer feedback via defibrillator is a relatively recent approach to maintaining chest compression performance and is associated with continuous high-quality chest compression. Currently, there are no studies investigating the ability to maintain high-quality chest compressions within the current 2015 AHA guidelines with and without the influence of real-time audiovisual feedback (AVF), which may assist in maintaining high-quality chest compression. The goal of this study was to assess the ability to maintain high-quality chest compressions per the 2015 updated guidelines, both with and without AVF, in a simulated arrest scenario.

This was a randomized, prospective, observational study conducted within a community hospital with over 22,000 annual inpatient admissions. All participants were voluntary emergency department and medical-surgery nursing staff with both Basic and Advanced Cardiac Life Support certification. We obtained institutional review board approval, and written consent was required prior to participation. We defined CPR providers as a two-person team consisting of one participant performing chest compressions while the second administered ventilations via bag-valve mask. Chest compressions and ventilations were performed on a Little Anne CPR Training Manikin. AVF on chest compression rate and depth was provided to participants through ZOLL See-Thru CPR® on R Series® defibrillators. In a "mock code" scenario, 98 teams were randomly assigned to perform CPR with or without AVF on chest compressions.

Participants were further randomly assigned to perform either standard chest compressions (SC) with a compression-to-ventilation ratio of 30:2, to simulate CPR without an advanced airway, or continuous chest compressions (CCC), to simulate CPR with an advanced airway, for a total of four distinct groups. Chest compressions were performed for two minutes, representing a standard cycle interposed between rhythm/pulse checks and/or compressor switches. Defibrillator data for analysis included chest compression rate, depth, and compression fraction over the entire two minutes. The primary outcome measured was the ability to maintain high-quality chest compressions as defined by the current 2015 AHA guidelines. Secondary outcomes included group differences in chest compression depth, rate, and fraction time. Based on recent findings by Wutzler et al. on the ability to maintain effective chest compressions, we estimated a sample size of at least 68 teams to maintain a two-sided alpha of 0.05 and a power of 80%. Data are presented as means and standard deviations. We compared CPR variables between respective groups by Mann-Whitney U test for continuous variables and by chi-squared test for categorical variables. Only participants with technically adequate data available were used in this comparison. We considered p values < 0.05 statistically significant. No participants were excluded.

Previous iterations of the AHA's CPR and Emergency Cardiovascular Care guidelines recommended a chest compression rate ≥ 100 compressions/min; however, the 2015 updates have called for a chest compression-rate upper limit of 120/min. The recommendation appears to be based on both animal studies and recent clinical observations from large out-of-hospital cardiac arrest registries describing an association between chest compression rates, return of spontaneous circulation (ROSC), and survival to hospital discharge. This makes sense, as observations in animal studies have described anterograde coronary blood flow as positively correlated with diastolic aortic pressures and subsequently compression rate. However, at rates greater than 120 compressions/min, this relationship weakens as diastolic coronary perfusion time decreases. Regarding human data, recent observations from the Resuscitation Outcomes Consortium registry suggest an optimum target of between 100 and 120 compressions per minute. In this randomized, controlled study we report that overall, AVF is associated with a greater ability to provide simultaneously guideline-recommended rate and depth.
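A minimal sketch of the between-group comparison described in the methods above, using SciPy's Mann-Whitney U test (the per-team rates below are invented for demonstration):

    from scipy.stats import mannwhitneyu

    # Hypothetical mean compression rates per team (compressions/min)
    avf_rates = [112, 108, 115, 110, 118, 109, 113]
    no_avf_rates = [124, 119, 127, 121, 130, 118, 125]

    stat, p = mannwhitneyu(avf_rates, no_avf_rates, alternative="two-sided")
    print(f"U = {stat}, p = {p:.4f}")  # p < 0.05 would be considered significant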

This is important, as previous studies have focused on the proportion of correct chest compression rate and depth; however, it has been shown that despite adequate individual mean values, the actual proportion of chest compressions that fell within guideline criteria simultaneously for rate and depth was low. Overall comparisons between the SC and CCC cohorts were without significant differences in compression dynamics. AVF appeared to have an effect regardless of chest compression strategy, with isolated analysis of both compression strategy groups notable for differences. Within the SC group, significant differences were noted in both average rate and proportion of compressions within current guideline recommendations. Analysis of the CCC cohort was notable for an association between AVF and both a greater proportion of compression depths within current guidelines and a greater proportion of time with ideal compressions. One potential explanation for the association between AVF and the ability to perform "high quality" chest compressions on a more consistent basis is that feedback may help avoid early fatigue by "pacing" an individual through the early periods of a highly stressful cardiac arrest situation, where one could understandably want to push as fast and hard as possible, which in turn may lead to early fatigue and subsequently "poor quality." Finally, similar to the overall analysis, comparisons between compression strategies without AVF did not result in any significant compression differences. The isolated effect of AVF on compression dynamics overall appears to be related to compression strategy. Within the CCC cohort, the effect appears to be on the ability to maintain ideal depth, while in the SC cohort, the effect appears to be related to rate control. We do note that within this cohort, although a statistically significant difference is noted in average rate of compressions, both averages are within current guidelines. However, it should be noted that the non-AVF cohort demonstrated an average rate at the uppermost level of current recommendations and, more importantly, was associated with a lower proportion of compressions with rates within guideline recommendations over the testing period. This is important, as recent studies have reported an inverse association between compression rates and depth, with rates above 120/min having the greatest impact on reducing compression depth. Recent reports have called this upper rate limit into question and suggest that faster rates may actually be associated with a higher likelihood of ROSC in in-hospital cardiac arrest. Unfortunately, in that study compression depth was not reported, leaving optimal rates in in-hospital arrest up to continued debate. Interestingly, within the AVF cohort, chest compression depth appeared to be both deeper on average and beyond guideline-recommended depth for the SC cohort. Yet again, these differences did not translate to overall differences in the proportion of time within recommended depth between compression groups. Chest compression strategy and its relationship with AVF may be related to the nature of the strategy.
That is, with continuous compressions, fatigue may become an issue and feedback on depth may be of greater importance over time, while bursts of activity after brief pauses with standard compressions may require greater mindfulness of compression rate. Finally, we note that although the presence of AVF appears to have improved the quality of chest compressions, proportions of high-quality compressions were surprisingly low across all groups, with a high of 25% and a nadir of 3.3%.

However, our findings are consistent with reported "effective compressions," i.e., a trial period with mean compression rate and depth within guidelines and CCF ≥ 80%, per Wutzler et al. In their simulation-based study, there was an "effective compression" rate of 25.4% with feedback vs. 12.7% without. These findings warrant further investigation into possible influencing factors and sources of variation, including fatigue, critical care experience, and time since last training update.

The translation of preclinical theories of alcoholism etiology to clinical samples is fundamental to understanding alcohol use disorders and developing efficacious treatments. Human subjects research is fundamentally limited in neurobiological precision and experimental control, whereas preclinical models permit fine-grained measurement of biological function. However, the concordance between preclinical models and human psychopathology is often evidenced by face validity alone. The aim of this study, therefore, is to test the degree to which one prominent preclinical model of alcoholism etiology, the Allostatic Model, predicts the behavior and affective responses of human subjects in an experimental pharmacology design. The Allostatic Model was selected for translational investigation due to its focus on reward and reinforcement mechanisms in early vs. late stages of addiction. In this study, we advance a novel translational human laboratory approach to assessing the relationship between alcohol-induced reward and motivated alcohol consumption. A key prediction of the Allostatic Model is that chronic alcohol consumption results in a cascade of neuroadaptations, which ultimately blunt drinking-related hedonic reward and positive reinforcement, while simultaneously leading to the emergence of persistent elevations in negative affect, termed allostasis. Consequently, the model predicts that drinking in late-stage dependence should be motivated by the relief of withdrawal-related negative affect, and hence, by negative reinforcement mechanisms. In other words, the Allostatic Model suggests a transition from reward to relief craving in drug dependence. The Allostatic Model is supported by studies utilizing ethanol vapor paradigms in rodents, which can lead to severe withdrawal symptoms, escalated ethanol self-administration, high motivation to consume the drug as revealed by progressive-ratio breakpoints, enhanced reinstatement, and reduced sensitivity to punishment. Diminished positive reinforcement in this model is inferred through examination of reward thresholds in an intracranial self-stimulation protocol. Critically, these allostatic neuroadaptations are hypothesized to persist beyond acute withdrawal, producing state changes in negative emotionality in protracted abstinence. Supporting this hypothesis, exposure to chronic ethanol vapor produces substantial increases in ethanol consumption during both acute and protracted abstinence periods. Despite strong preclinical support, the Allostatic Model has not been validated in human populations with AUD. Decades of human alcohol challenge research have demonstrated that individual differences in subjective responses (SR) to alcohol predict alcoholism risk. The Low Level of Response Model suggests that globally decreased sensitivity to alcohol predicts AUD. Critically, however, research has demonstrated that SR is multi-dimensional.
The Differentiator Model, as refined by King et al., suggests that stimulatory and sedative dimensions of SR differentially predict alcoholism risk and binge drinking behavior. Specifically, an enhanced stimulatory and rewarding SR, particularly at peak BrAC, is associated with heavier drinking and more severe AUD prospectively.


Pain scores and injury severity scores may have differed and were not studied

This may reflect random variation or a purposeful decline in opioid prescribing influenced by the significant attention recently brought on by the "opioid epidemic." The providers were not notified of the removal of the default quantity; therefore, it is less likely that the intervention itself influenced the decrease in the number of prescriptions. The data on prescribing patterns from the ED in recent years are limited, and it is unknown whether there has been a widespread decline in prescribing over this same time period. As a retrospective analysis, unmeasured confounders may have influenced our analysis. Factors that were not studied may have influenced opioid prescribing patterns. These include the physician's perception of pain intensity, the age of the patient, the provider's experience level, and the diagnosis at the time of discharge. Furthermore, it is unknown whether the increased variation post-intervention really represents true individual prescribing variation. Further evaluation would be required to analyze each individual provider's prescribing patterns before and after the intervention to determine whether they each exhibited the same increase in variability as the entire group or whether, after removal of the default quantity, each provider relied on his/her own individual default quantity for each patient regardless of painful condition. Other potential explanations for the findings observed were not studied directly. One potential confounder is a change in the patient population or ED providers during the study period, which may have influenced prescribing habits. Comparing patient acuity in the periods before and after the intervention demonstrates similar Emergency Severity Index scores and admission rates. This suggests similar patient characteristics in the pre- and post-intervention periods. The total number of Level I and II trauma activations and ED visits for adult patients was lower in the post-intervention period, as expected, given that the post-intervention period was shorter.
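The provider-level follow-up suggested above could be sketched as follows: compute each provider's dispersion in prescribed quantity before and after the intervention and compare the two (pandas; the column names and values are hypothetical, not the study's data):

    import pandas as pd

    # Hypothetical prescription-level data: one row per opioid prescription
    df = pd.DataFrame({
        "provider": ["A", "A", "A", "B", "B", "B", "A", "A", "B", "B"],
        "period":   ["pre"] * 6 + ["post"] * 4,
        "quantity": [20, 20, 20, 20, 15, 20, 10, 30, 12, 24],
    })

    # Standard deviation of prescribed quantity per provider and period.
    # A rise from pre to post for each provider would indicate that individual
    # variability grew, rather than providers merely diverging from one another.
    variation = df.groupby(["provider", "period"])["quantity"].std().unstack("period")
    print(variation)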

Although it appears that prescribing patterns may have been more appropriate after elimination of the default quantity, this assumption was not directly tested. Changes in provider mix may also account for differences in opioid prescribing during the post-intervention period. Although this was not studied directly, there was minimal turnover among the provider group during the study period, with a total of one hire and two departures of full-time faculty during the combined time periods. Further studies would be needed to determine which factors influence physician prescribing patterns of opioid analgesics for specific painful conditions, including analysis of pain scores.

Frequent users of emergency departments have been the subject of substantial research, given the implications for resource utilization, healthcare costs, and ED crowding. A unique subset of frequent ED users are those who present to the ED repeatedly for acute alcohol intoxication. As ED visits for acute alcohol intoxication are increasing,9 the burden of alcohol-related frequent users will be important to explore. Existing studies describing frequent ED users often cite alcohol-use disorders as a common comorbidity and a precipitant for their disproportionate utilization of emergency services. Despite this established association, there is a paucity of data describing the encounters and individuals who frequently use the ED for alcohol intoxication, or the extent to which they use the ED for other reasons. The purpose of this study was to describe this population and their ED encounters.

This was a retrospective, observational, cohort study of ED patients presenting for acute alcohol intoxication from 2012 to 2016. It was approved by the institutional review board. The study hospital is a county ED with an annual volume of 100,000 visits, including 7,000 visits for alcohol intoxication. The ED has a 16-bed area within the department that clusters all intoxication encounters. The purpose of this area is to treat patients who are in the department for uncomplicated alcohol intoxication, as opposed to complicated medical or trauma patients who also happen to be intoxicated.

Patients are selected for treatment in this area at the discretion of triage nurses, paramedics, and emergency physicians. All alcohol intoxication encounters are seen in this particular area of the ED, but there is occasional overflow to other parts of the ED if these rooms are full. All patients who are treated in one of these rooms are entered into the electronic medical record (EMR) using the chief complaint “altered mental status.” We included adults if they presented to the ED for alcohol intoxication during the study period. These patients were identified by querying the EMR for all visits where the chief complaint was “altered mental status” and the initial ED room was within the intoxication section of the ED. Patients were excluded if their breath alcohol concentration was zero. The variables for analyses were chosen a priori. We selected them if they were hypothesized to be relevant to the study population and if they were readily available in the EMR. A data analyst who was blinded to the purpose of the study obtained the following variables without any manual chart abstraction: age, gender, race/ethnicity, insurance status, primary care physician, medical/psychiatric comorbidities, breath alcohol concentration, testing obtained, chemical sedation administered, ED disposition, and length of stay. Additional data for each frequent user were manually abstracted from the chart by another investigator; these included counts of ED visits that were not for alcohol intoxication, hospital admissions, and visits to a separate psychiatric services ED. Multiple definitions of ED frequent users exist in the literature, ranging from 3 to 20 visits per 12-month period. For this study, we elected to use the upper limit of this range and categorize an alcohol-related frequent user as a patient with greater than 20 visits for acute alcohol intoxication in the previous 12 months, in order to describe the highest-user cohort possible. Non-frequent users were those who did not meet this criterion. After we identified the frequent-user cohort, we analyzed encounter characteristics for those with a frequent-user designation during that visit compared to those without. For analysis of patient characteristics and demographics, duplicate observations were excluded; the encounter retained for demographic analysis was the most recent encounter during the study period.
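To make the frequent-user definition concrete, below is a minimal sketch of how encounters could be flagged against a trailing 12-month window. The DataFrame, column names, and sample rows are hypothetical illustrations, not the study’s actual EMR query.

import pandas as pd

# Hypothetical encounter-level table: one row per ED visit for acute
# alcohol intoxication. Column names and rows are illustrative only.
visits = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "visit_date": pd.to_datetime(
        ["2015-01-02", "2015-03-10", "2015-06-01", "2015-02-14", "2016-04-01"]
    ),
})
visits = visits.sort_values(["patient_id", "visit_date"]).reset_index(drop=True)
visits["one"] = 1

# For each encounter, count that patient's intoxication visits in the
# trailing 365 days (including the current visit).
trailing = (
    visits.groupby("patient_id")
          .rolling("365D", on="visit_date")["one"]
          .sum()
)
visits["visits_prior_12mo"] = trailing.to_numpy()

# Study definition: a visit carries the frequent-user designation when
# the patient has >20 intoxication visits in the previous 12 months.
visits["frequent_user_visit"] = visits["visits_prior_12mo"] > 20
print(visits)

This sketch flags individual encounters, mirroring the study’s encounter-level analysis in which the same patient can contribute both frequent-user and non-frequent-user visits over time.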

For all comparisons, we calculated differences in means or proportions with associated 95% confidence intervals. We checked a subset of 20 charts to confirm the accuracy of data abstraction.

Frequent users for alcohol intoxication are a unique subset of frequent ED users who merit attention given increasing numbers of alcohol-related visits nationally.9 In this study, we identified 325 patients with 11,370 encounters for alcohol intoxication over a five-year period, with some individuals using the ED for alcohol intoxication more than 100 times in a year. We identified several variables that differed between frequent and non-frequent users. First, there were comparatively higher rates of medical and psychiatric comorbidities among alcohol-related frequent users. This finding reiterates the complexity of this population and the fact that any of these “routine” visits has the potential for clinical decompensation and may require resources beyond the scope of simple observation for intoxication. We also identified differences in demographics, as well as differences regarding health insurance status. In contrast, several variables did not differ between the two groups; namely, diagnostic workups were similar, but interpretation of this finding is limited by practice patterns at our institution, where workups tend to be minimal for most alcohol intoxication encounters. Another important finding in this study was the low admission rate among frequent users. While it is not unexpected that presentations for alcohol intoxication would result in low admission rates, it does illustrate a potential barrier in caring for this population. In other studies describing frequent users with other general medical complaints, admission rates are reported to be as high as 40%.3 In those cases, interventions can be implemented and resources initiated during admissions. In the population we describe, since admissions are so uncommon, the responsibility may fall on ED personnel to identify these patients, as they will not be addressed by an inpatient team. In our cohort of alcohol-related frequent users, we identified some concerning features regarding primary care access and utilization. Less than half of the frequent-user population had primary care physicians, and only 4% were participants in a coordinated primary care program intended for the hospital’s greatest utilizers. We believe this is an important gap in coverage for a very high-needs population. This finding also contrasts with the general ED frequent-user literature, where most studies describe primary care access as over 90%. Our institution does not appear to be identifying alcohol-related frequent users for primary care services as effectively as those who use the ED for other problems. Possible explanations for this gap in coverage include a lack of readiness for healthcare accountability, or a struggle to maintain primary care relationships in the setting of ongoing substance abuse. We were unable to determine the prevalence of important social stressors such as homelessness, employment status, or government assistance in this cohort, but addressing these stressors in the future will play an important role in assisting this population.
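The comparison method described above, differences in proportions with 95% confidence intervals, reduces to a short calculation. Here is a minimal sketch using a standard Wald interval; the counts shown are illustrative, not the study’s actual cell values.

import math

def diff_in_proportions(x1, n1, x2, n2, z=1.96):
    """Difference between two proportions with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Illustrative counts only: e.g., a comorbidity present in 60/100
# frequent users vs. 80/225 non-frequent users.
diff, ci = diff_in_proportions(60, 100, 80, 225)
print(f"difference = {diff:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")

A difference is conventionally read as statistically significant at the 0.05 level when the 95% interval excludes zero.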

Multiple social services interventions have been proposed for frequent ED users, such as case management and referral programs, but these have shown variable rates of success. One study conducted in our community investigated the use of case management and demographic-specific housing referrals among 92 chronic inebriates. While that study found that healthcare costs decreased pre- vs. post-intervention, ED visits did not decrease.

Oxygen desaturation below 70% puts patients at risk for dysrhythmia, hemodynamic decompensation, hypoxic brain injury, and death. The challenge for emergency physicians is to secure an endotracheal tube rapidly without critical hypoxia or aspiration.1 Preoxygenation prior to intubation extends the duration of “safe apnea,” the time before oxygen saturation falls to approximately 88–90%, to allow for placement of a definitive airway. Below that level, oxygen offloading from hemoglobin enters the steeper portion of the oxyhemoglobin dissociation curve, and oxygen saturation can fall to critical levels within seconds. Alveoli will continue to take up oxygen even without diaphragmatic movements or lung expansion. Within some of the larger airways, turbulent flow can generate a cascade of turbulent vortex flows extending into smaller airways. Denitrogenation involves using oxygen to wash out the nitrogen contained in the lungs after breathing room air, resulting in a larger alveolar oxygen reservoir. When breathing room air, approximately 450 mL of oxygen is present in the lungs of an average healthy adult. When a patient breathes 100% oxygen, the nitrogen is washed out, increasing the oxygen in the lungs to approximately 3,000 mL. EPs and emergency medical services use several devices to deliver oxygen or increased airflow to patients in respiratory distress. Nasal cannula is used primarily for apneic oxygenation rather than preoxygenation. Previous recommendations were to place a high-flow nasal cannula at an initial oxygen flow rate of 4 L/min, then increase to 15 L/min to provide apneic oxygenation once the patient is sedated. A nasal cannula can be placed above the face mask until just prior to attempting laryngoscopy, at which point it is placed in the nares to facilitate apneic oxygenation. The standard non-rebreather mask (NRB) delivers only 60% to 70% inspired oxygen at oxygen flow rates of 15 L/min. The FiO2 can be improved by increasing the oxygen flow rate to the NRB from 15 L/min to 30–60 L/min. The use of NRBs is limited in patients with high inspiratory flow rates, as FiO2 may be decreased due to NRB design; some devices with effective seals and valves will collapse onto the patient’s face at high inspiratory flow rates, causing transient airway obstruction. A bag-valve mask (BVM) may approximate an anesthesia circuit for preoxygenation. BVMs vary in performance according to the type of BVM device, spontaneous ventilation vs. positive pressure ventilation, and the presence of a positive end-expiratory pressure (PEEP) valve. During spontaneous ventilation, the patient must produce sufficient negative inspiratory pressure to activate the inspiratory valve. The negative pressures generated within the mask may lead to entrainment of room air and lower FiO2 during preoxygenation. A BVM’s performance during spontaneous breathing improves with administration of high-flow oxygen, use of a PEEP valve, and assistance of spontaneous ventilation with positive pressure breaths delivered in synchrony with the patient’s inspiratory efforts.
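The 450 mL and 3,000 mL reservoir figures quoted above follow from multiplying the functional residual capacity by the alveolar oxygen fraction. A back-of-the-envelope sketch under assumed textbook values (both constants below are our illustrative assumptions, not figures from the study):

# Approximate alveolar oxygen reservoir = FRC x alveolar O2 fraction.
# Constants are illustrative textbook assumptions.
FRC_ML = 3000  # functional residual capacity of an average adult (mL)

# Alveolar O2 fraction: roughly 0.15 on room air (after accounting for
# CO2 and water vapor), approaching ~0.95 after full denitrogenation.
for label, fao2 in [("room air", 0.15), ("100% oxygen", 0.95)]:
    reservoir_ml = FRC_ML * fao2
    print(f"{label}: ~{reservoir_ml:.0f} mL O2 in the lungs")
# room air: ~450 mL; 100% oxygen: ~2850 mL (close to the 3,000 mL above)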
Continuous positive airway pressure improves oxygenation by increasing functional residual capacity and by reversing pulmonary shunting through the recruitment of poorly ventilated lung units.


EPs are poorly equipped to determine the burden of AF or the origin of the arrhythmia

Atrial fibrillation (AF) and atrial flutter are pervasive diseases affecting 6.1 million people in the United States. Each year they are responsible for more than 750,000 hospitalizations and 130,000 deaths. In contrast to overall declining death rates for cardiovascular disease,4 AF as the “primary or contributing cause of death has been rising for more than two decades.” The annual economic burden of AF is six billion dollars; medical costs per AF patient are about $8,707 higher than for non-AF individuals. Thrombotic embolism of the cerebral circulation, or stroke, is the principal risk of AF, with annual risk ranging from less than 2% to greater than 10%. AF is the cause of 100,000–125,000 embolic strokes each year, of which 20% are fatal. Anticoagulation to prevent these embolic events is the standard of care unless contraindicated. However, it is not without risk, as even minor trauma can cause substantial and potentially life-threatening bleeding. Given that AF is the most common arrhythmia among the elderly, balancing these competing risks is challenging. Anticoagulation for AF is most commonly accomplished with a vitamin K antagonist, warfarin. However, its use requires patient education, medication compliance, dietary consistency, and close monitoring. CHA2DS2-VASc, ATRIA, HAS-BLED, ORBIT, and HEMORR2HAGES are just some of the decision-support tools available to objectively weigh the risk of stroke against that of life-threatening bleeding from therapy. Newer, novel oral anticoagulant agents provide a benefit/risk profile that may surpass warfarin, especially when considering initiation in the emergency department.16-18 In this issue of WestJEM, Smith and colleagues present a prospective observational evaluation of anticoagulation prescribing practices in non-valvular AF. Patients presenting to one of seven Northern California EDs with AF at high risk for stroke were eligible unless they were admitted, were not part of Kaiser Permanente of Northern California (KPNC), or were already prescribed anticoagulation. During the 14-month study there were no departmental policies governing the initiation of anticoagulation in AF patients.
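As an illustration of the decision-support tools named above, here is a minimal sketch of the CHA2DS2-VASc stroke-risk score using its standard published point values; the function and argument names are ours, not from any particular library or from the study.

def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia,
                 vascular_disease, female):
    """CHA2DS2-VASc stroke-risk score (standard point values)."""
    score = 0
    score += 1 if chf else 0                              # C: congestive heart failure
    score += 1 if hypertension else 0                     # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2/A: age bands
    score += 1 if diabetes else 0                         # D: diabetes mellitus
    score += 2 if stroke_tia else 0                       # S2: prior stroke/TIA/embolism
    score += 1 if vascular_disease else 0                 # V: vascular disease
    score += 1 if female else 0                           # Sc: sex category (female)
    return score

# Example: a 78-year-old woman with hypertension scores 4.
print(cha2ds2_vasc(chf=False, hypertension=True, age=78, diabetes=False,
                   stroke_tia=False, vascular_disease=False, female=True))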

The authors report that 27.2% of the 312 patients at high risk for stroke received a new anticoagulant at ED discharge, and only 40% were prescribed oral anticoagulation within 30 days of the index ED visit. Anticoagulation was more likely to be initiated in the ED if the patient was younger, had persistent AF at discharge, or when cardiology was consulted during the index visit. Furthermore, only 60.3% of patients were given patient education material on AF in their discharge instructions. Critics of Smith et al. will take issue with their inclusion criteria requiring participation in KPNC. By definition, all members of KPNC are insured; they also have guaranteed access to timely primary care follow-up and are of higher socioeconomic means than the general population. Many of the factors that contribute to successful anticoagulation therapy – diet stability, monitoring of renal function, education and intervention on modifiable risk factors, smoking cessation, and fall risk – can all be assessed by a primary care physician and addressed with the shared decision-making ensured in the KPNC system. While these limitations are acknowledged by the authors and narrow the generalizability of the findings, Smith and colleagues demonstrate the challenges of addressing ongoing chronic disease in the ED and highlight the complex decision-making required. AF patients without insurance in the U.S. lack reliable access to primary care, and emergency physicians likely under-prescribe anticoagulation therapy out of an abundance of caution. Lacking the objective data to quantify the thromboembolic risk factors of AF, EPs are reluctant to initiate thromboprophylaxis, despite its known benefits, in light of the well-demonstrated risk of life-threatening bleeding. However, that risk is largely misperceived. Recent findings from the Spanish EMERG-AF trial demonstrate that initiating this therapy in the ED is at least as safe as in other settings and has a clear mortality benefit at one year. Furthermore, that benefit does not come at the expense of reduced effectiveness over the course of one-year follow-up. In addition to highlighting the challenges of prescribing anticoagulation in the ED setting, Smith et al. also illustrate the opportunity for EPs to prevent future strokes in the setting of known AF. This opportunity is likely larger than reported, considering the limitations of this investigation. Thankfully, there are clear guidelines to assist EPs based upon validated methods of risk stratification.

Furthermore, of those patients receiving anticoagulation therapy in the first 30 days, more than half had it initiated in the ED. While these subjects likely represent the least complex decision-making, these results also suggest some prescribing inertia: anticoagulation was continued by the primary care physician because it had already been initiated in the ED. Despite these limitations, Smith and colleagues demonstrate an immense target for EPs to reduce stroke risk for at least 60% of AF patients discharged from the ED. Coupled with other evidence demonstrating that such practice is efficacious, safe, and cost effective, Smith makes a compelling case that thromboprophylaxis should be initiated in all but the most complex AF patients, who will likely be admitted. EDs should develop policies to ensure that AF patients can receive anticoagulation therapy on discharge. These local policies could include decision pathways that rely on guidelines and decision-support tools and account for insurance status. As EPs, we should embrace the responsibility to provide thromboprophylaxis regardless of the likelihood of primary care follow-up. To defer that decision ignores the role emergency medicine plays in providing for the public health in the U.S., and frankly misses the mark.

Arterial lines are important for monitoring and providing care to critically ill patients. Not only do they allow for rapid access to blood, but they also give the provider continuous access to the patient’s blood pressure, which enables precise titration of vasoactive medications. Traditionally there are two locations for arterial line placement: the femoral and radial arteries. The choice between sites is often made according to the provider’s preference, with very little evidence guiding this decision. Although initial beliefs that arterial lines are immune to infection are certainly unfounded, there is evidence that the infection risk by location is proportionally similar to that of their central venous counterparts. It has also been shown repeatedly that central arterial monitoring provides different information from both peripheral and noninvasive monitoring. Older studies have shown that the femoral artery is superior to the radial artery for blood pressure monitoring, but these results come from a different era of medicine, when placement technique was different and the landscape of monitoring was not what it is today.

It is therefore important to reinvestigate femoral artery access in today’s environment. Line failure adds significant and unnecessary costs to the treatment of critically ill patients, including financial costs, time, and health. In the present study, we attempt to determine whether one site is more prone to failure. We performed an ambispective, observational, cohort study to determine the variance in failure rates between femoral and radial arterial lines. This study took place at a single center, a county teaching hospital with 12 adult ICU beds, and was approved by our institutional review board. Any patient with an arterial line placed anywhere in our hospital met our inclusion criteria. Providers at our site were not routinely using ultrasound for arterial line placement, so this metric was not evaluated. Although the specific indication for arterial line placement was not captured in our study, it is customary at our institution to place arterial lines for either ongoing titration of vasopressor agents or expected repeated evaluation of the management of patients with ventilatory support. Our institution uses the Arrow RA-04020 quick kit for radial arterial lines, which is a 20-gauge, 4.25 cm catheter. The Arrow select kit is used for femoral lines, which is also a 20-gauge catheter, though 12 cm in length. All patients in our study were admitted to an ICU bed and were therefore of high acuity. Exclusion criteria were patient age < 18 years and line removal before 24 hours. We performed the retrospective arm of this study using the hospital billing database. Records from every patient who received and was successfully billed for an arterial line between January 2012 and June 2015 in our hospital were included. Research assistants (RAs), who were blinded to the study hypothesis, were given a training presentation on how to extract relevant information from the electronic health record (EHR), including the patient’s age, line insertion time, line removal time, and whether line removal was due to failure. We compiled their results into a database, and a pilot quality improvement study was initially performed on every 20th patient in the study. The two principal investigators then reviewed the data to ensure that data acquisition was accurate across all RAs, demonstrating reliable inter-observer agreement regarding insertion and removal dates and the classification of line failure. After confirming that our proposed method of data acquisition was precise, the RAs performed the complete review on the total cohort, and the acquired data were kept in a spreadsheet without analysis until the prospective portion of the study was completed. The prospective arm of the study took place from June 2015 to March 2016. RAs obtained information on every adult patient in whom an arterial line was placed in our hospital during the enrollment period. To ensure capture of all patients, RAs observed each ICU bed and ED resuscitation bay for new arterial lines three times daily. They compiled an ongoing list of known lines, noting the time of insertion, location of the line, patient age, and patient comorbidities. If an arterial line was found to have been removed, the RAs documented the time of removal and determined why the line had been removed, noting whether it was considered a failure and whether it was replaced. The RAs obtained this information from nursing flow sheets or nursing interviews at the time of their evaluation.
Causes of failure included the following: 1) inaccuracy, 2) blockage, 3) site issue, and 4) accidental removal. We hypothesized a twofold greater failure rate for radial arterial lines compared to femoral lines, amounting to a 50% reduction in failure rate when the line was placed in the femoral artery. We postulated a 60% radial and 40% femoral distribution of line placement, based on observation of local practice. We calculated that 128 patients would provide sufficient power to detect the hypothesized failure rate if lines were split evenly between the two sites. We therefore planned to enroll 200 patients, as the actual distribution was not known a priori. We chose an ambispective design because the EHR made retrospective data acquisition easy, allowing for greater statistical power. We subsequently used the prospective data to help validate our retrospective findings. In total, we evaluated 272 arterial lines over both the prospective and retrospective arms of our study, with 58 lines failing, for a combined failure rate of 21.32%. Comorbidities between the two cohorts were similar, as shown in the Table. Our retrospective arm screened 304 arterial lines; however, only 196 met criteria for analysis over the three-and-a-half years. The radial cohort had 43 failures and the femoral cohort had three failures, for an absolute risk reduction in failure of 25.4% when the femoral site was chosen. The prospective arm had 76 total lines, comprising 39 radial and 37 femoral. The radial cohort had 10 failures and the femoral cohort had two failures, an absolute risk reduction of 20.2% in failure rate when a femoral line was placed instead of a radial arterial line. This outcome was consistent between the retrospective and prospective arms of the trial and led to a number needed to treat of 4.1 patients to prevent one line failure. Secondary outcomes evaluated included time to failure and cause of failure. Combined data showed a median time to failure of two days for radial lines compared with four days for femoral lines. In the prospective data, the primary causes of failure for radial lines were accidental removal, a line not drawing, and inaccurate readings. No radial lines were removed due to “site issue” in our prospective arm; however, such issues were responsible for 15% of radial removals in our retrospective arm. Conversely, accidental removal accounted for only 5% of all removals in the retrospective cohort of radial lines but 40% of failures in the prospective arm.
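The absolute risk reduction and number-needed-to-treat arithmetic above can be reproduced directly from the reported prospective-arm counts; note that the prospective numbers alone give an NNT near five, whereas the reported 4.1 reflects the combined retrospective and prospective data.

# Failure-rate comparison using the prospective-arm counts reported
# above (10/39 radial failures vs. 2/37 femoral failures).
radial_failures, radial_n = 10, 39
femoral_failures, femoral_n = 2, 37

radial_rate = radial_failures / radial_n      # ~25.6%
femoral_rate = femoral_failures / femoral_n   # ~5.4%

arr = radial_rate - femoral_rate              # absolute risk reduction
nnt = 1 / arr                                 # number needed to treat

print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")    # ARR = 20.2%, NNT = 4.9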
