Studying the effect of naturally-occurring stressors on HPA axis function is methodologically complex

One recent 20-year comparison of daily tobacco versus marijuana smokers showed only increased risk for periodontal disease with cannabis, whereas tobacco users had the expected increases in lung, cardiac, and metabolic risk factors. Cannabis appears to have some anti-inflammatory properties; according to the National Academies report, there is insufficient evidence to support any conclusions about the impacts of cannabis on other immune functions. Interesting public health data suggest that there may be significant trends toward decreases in opioid overdose deaths in states with legalized cannabis, fewer overall traffic deaths after legalization, and reductions in pre- versus post-legalization Medicare expenditures on prescription analgesics, sedative-hypnotics, anxiolytics, and other agents.

A separate but important consideration is whether a standard risk-benefit analysis makes sense for palliative care patients contemplating cannabis. Risks of overgeneralization aside, most palliative care patients are NOT young people with unlimited life prospects who are early in their school years or social developmental trajectories; they are NOT climbing career ladders, NOT parenting small dependent children, and NOT operating heavy industrial machinery. Thus, I would argue that this brief summary of safety risks should, in the palliative care clinical setting, be balanced against the exigencies of attempting to help patients achieve symptom relief in the context of serious illness, particularly if they are facing difficult symptoms not responsive to conventional treatments.

In addition to the scientific and public policy matters outlined above, there are many other uncertainties facing the palliative care clinician and his/her patient contemplating cannabis.
Chief among them is what the patient actually receives when he/she purchases medical marijuana at a dispensary: a recent small study of marijuana edibles showed accurate labeling in only 17% of products. The majority of products were "overlabeled," while 23% were "underlabeled." Geographic differences were noted as well, with Los Angeles dispensaries showing a significant inclination to underlabel.

The FDA has recently published a report of its analysis of CBD products purchased over the internet, which showed most of the products to contain little or no active ingredient. These findings undermine a fundamental element of physician practice, namely the ability to identify and recommend specific, reliable doses of compounds. It should also be noted that under most of the state laws, physicians are not prescribing medical marijuana at all. Instead, they are asked to endorse, attest, or certify that in their professional judgment the patient has a disorder for which medical marijuana may have efficacy. This, too, is unfamiliar territory for many of us.

Originally formulated over twenty years ago, and recently updated, the neural diathesis-stress model proposes that the hypothalamic-pituitary-adrenal (HPA) axis is the central physiological mechanism linking psychosocial stress to the onset and exacerbation of schizophrenia and related psychotic disorders. A central tenet of this model is that individuals with increased vulnerability to psychosis are more sensitive to the effects of psychosocial stressors due to abnormalities within the HPA axis, which in turn contribute to dopaminergic and glutamatergic abnormalities that eventually trigger expression of psychotic illness. In support of the model, accumulated evidence indicates that patients with psychosis exhibit elevated basal cortisol relative to healthy controls, but a blunted cortisol awakening response (CAR), the latter thought to represent a distinct HPA axis component, independent of stress-induced cortisol secretion. More recently, these features have been reported among individuals who are at increased risk for psychosis due to clinical features and/or genetic liability. Moreover, at-risk individuals who later develop full psychosis show even greater increases in basal cortisol and pituitary volume, suggesting that increased HPA axis activity may signal risk for worsening illness.
In parallel with this research, studies show that at-risk individuals report greater exposure and sensitivity to a range of psychosocial stressors, including major life events, childhood trauma, and minor daily stressors . However, there has been a paucity of studies examining the concordance between psychosocial stressor exposure/distress and HPA axis function; as such, the extent to which individuals on the psychosis spectrum exhibit ‘abnormal’ HPA axis responses to psychosocial stressors is unclear.

That is, the increases in basal cortisol observed in those with, and at risk for, psychosis may represent either a 'normal/adaptive' response to the high levels of psychosocial stressors reported in these populations, or hyperresponsivity of the HPA axis, characterised by an increase in cortisol greater than that expected in a healthy individual. Alternatively, the elevated basal cortisol levels observed may be partially independent of psychosocial stress exposure/distress, and instead reflect individual-level factors such as genetic predisposition to HPA axis hyperactivity or metabolic abnormalities, the latter being more common among individuals at clinical high risk (CHR) for psychosis, who present features consistent with the prodromal phase of illness. Two recent studies of at-risk individuals support the 'increased concordance' hypothesis: using the experience sampling method, siblings of psychosis patients showed more pronounced increases in salivary cortisol in response to unpleasant events relative to controls, whilst a further study reported a stronger association between retrospectively-reported stressful life events and basal cortisol in CHR youth compared to controls. In contrast, lower cortisol responses during psychosocial stressor tasks have been observed in CHR individuals and young adults with high schizotypy traits relative to controls; a pattern consistent with that observed in patients with chronic schizophrenia. Together, these findings tentatively suggest that naturally-occurring psychosocial stressors are associated with greater cortisol increases in at-risk individuals compared to healthy controls, whereas the response to experimentally-induced psychosocial stressors is blunted.
However, the degree to which HPA axis responses to laboratory-based stressor tasks are relevant to psychosis aetiology is unclear. Unlike studies using experimentally-induced stressor tasks, studies of naturally-occurring stressors may involve a considerable lapse of time between stressor exposure and cortisol measurement. Whilst elevations in cortisol levels following stressor exposure appear to decrease over time, early life events and trauma exposure are associated with HPA dysregulation later in life, suggesting long-term effects of stress exposure. A related issue is that stress measures and cortisol samples may not be collected on the same day, particularly when studies have large assessment batteries spanning several days.

It is possible that day-to-day variations in perceived stress might influence both retrospective reporting of stressful events and cortisol levels, such that greater concordance is observed when measures are collected on the same day. However, to our knowledge, this has yet to be investigated. Determining the extent to which HPA axis responsivity in at-risk youth predicts clinical outcome is important, as such work might ultimately help to identify individuals at increased risk of illness progression by virtue of being more sensitive to the effects of psychosocial stress, enabling targeted interventions. Utilising data from the North American Prodrome Longitudinal Study 2 [NAPLS 2], we investigated whether psychosocial stressors, basal cortisol levels, and stressor-cortisol concordance at the baseline assessment differed across healthy controls and CHR subgroups defined on the basis of their clinical presentation at the two-year follow-up. Based on previous studies, we hypothesised that CHR youth who later converted to psychosis would show greater exposure and distress in relation to psychosocial stressors, elevated basal cortisol, and higher stressor-cortisol concordance relative to healthy controls; we also anticipated that CHR non-converters would be intermediate to the CHR converter subgroup and healthy controls on these measures. In all analyses we controlled for a range of potential confounders, and additionally explored the effect of lapse of time between assessments on stressor-cortisol concordance.

NAPLS 2 is a consortium of eight research sites examining CHR youth, the aims and recruitment methods for which are detailed elsewhere.
Briefly, CHR subjects were help-seeking individuals who met criteria for one or more prodromal syndromes: attenuated psychotic symptoms; brief intermittent psychotic symptoms; or substantial functional decline combined with a first-degree relative with a psychotic disorder, or schizotypal personality disorder in individuals younger than 18 years. Prodromal syndromes were assessed using the Criteria of Prodromal Syndromes, based on the Structured Interview for Prodromal Syndromes (SIPS), conducted by clinically-trained interviewers; psychiatric diagnoses were determined via the Structured Clinical Interview for DSM-IV. CHR individuals who had met criteria for an Axis I psychotic disorder were not eligible for inclusion; treatment with antipsychotic medication was permitted provided that full psychotic symptoms were not present at the time the medication was commenced. Healthy controls were recruited from the community, had no personal history of psychosis and no first-degree relative with psychosis, and did not meet criteria for any prodromal syndrome. All participants were aged between 12 and 35 years at recruitment. Exclusion criteria for both groups included substance dependence in the past six months, neurological disorder, or full-scale IQ < 70. Non-psychotic psychiatric disorders were permitted in both the CHR and healthy control groups.

Ethical approval was provided by Institutional Review Boards at each NAPLS site, and all participants provided informed consent or assent. The current sample includes 662 participants for whom variables of interest at baseline and clinical status at follow-up were available. At baseline, participants provided information on sociodemographic factors and potential confounders, completed stress measures, and collected saliva samples.

Baseline assessments were completed over two or more visits. Where possible, saliva was collected on the same day as the daily stressor, life event, and childhood trauma measures. However, in some cases, the baseline assessment was interrupted, leading to a substantial delay in the completion of all measures. In such instances, the remaining baseline measures were collected when the participant was able to return and complete the schedule, with clinical assessments repeated to confirm CHR status. All participants were included in the analyses that accounted for the time lapse between assessments. Prodromal symptoms were assessed via the SIPS at 12- and 24-month follow-up assessments and used to categorise CHR subgroups [see Table 1 for details].

Participant date of birth, sex, and ethnicity were assessed via self-report; the latter was subsequently collapsed to a four-level variable. Cannabis use was assessed via a structured interview. For the purposes of the current investigation we created a binary variable indexing current use. Details of all prescribed psychotropic medications were obtained at the baseline assessment via self-report, pharmacy records, and/or medical records. Binary variables were created for current antipsychotic use and current psychotropic use, irrespective of type, dose, or data source.

The 58-item Daily Stress Inventory was used to determine the presence of minor stressors occurring within the past 24 hours. Participants indicated whether they experienced each stressor and the level of distress elicited by each endorsed stressor. Total distress scores were then divided by the total exposure score to obtain an average distress-per-item score. Life events were assessed via the Psychiatric Epidemiology Research Interview Life Events Scale, modified to exclude life events of lesser relevance to youth. The 59 events can be classified as independent or dependent.
Interviewers recorded how often each of the 59 events had occurred in the participant's lifetime and the associated level of distress; participants could report multiple exposures to the same event, where the maximum occurrence for any single life event in the NAPLS cohort was four. An average life event distress score was derived by dividing the total distress score by the total exposure score. Participants additionally completed the Childhood Trauma and Abuse Scale, a semi-structured interview examining experiences of physical, sexual, and psychological abuse, and emotional neglect, occurring prior to age 16. Each trauma type was scored as absent/present, with a binary variable indexing any form of trauma derived.

At the research session, participants provided three saliva samples, with a mean salivary cortisol value subsequently derived when two or more samples were available. The median times of collection for the three samples were 1107 h, 1207 h, and 1300 h, respectively. The mean cortisol value, which is highly correlated with area-under-the-curve values, was computed to provide consistency with previous publications. Participants were instructed to avoid consumption of caffeine, alcohol, or dairy products after 1900 h on the day before sampling; individuals who reported non-compliance with these instructions were not excluded, as previous analyses performed on a subset of the cohort found no association between these variables and cortisol levels.
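The two derived measures described above (average distress per endorsed item, and mean cortisol across valid samples) can be sketched as simple functions. This is an illustrative sketch only; variable names and the example values are assumptions, not NAPLS 2 data.

```python
# Hypothetical sketch of the derived stress and cortisol measures described
# above; values and names are illustrative, not taken from the NAPLS 2 data.

def average_distress_per_item(distress_ratings):
    """Total distress divided by the number of endorsed stressors."""
    endorsed = [r for r in distress_ratings if r is not None]
    if not endorsed:
        return None
    return sum(endorsed) / len(endorsed)

def mean_cortisol(samples, minimum=2):
    """Mean salivary cortisol, computed only when >= `minimum` samples exist."""
    valid = [s for s in samples if s is not None]
    if len(valid) < minimum:
        return None
    return sum(valid) / len(valid)

# A participant endorsed 4 daily stressors with these distress ratings:
print(average_distress_per_item([3, 5, 2, 6]))  # 4.0

# Three saliva samples (units arbitrary), one missing:
print(mean_cortisol([12.4, None, 9.6]))         # 11.0
```

The `minimum=2` guard mirrors the rule that a mean cortisol value was derived only when two or more samples were available.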


Diagnoses can be assigned by physicians or any other qualified health care provider who is directly evaluating a patient

Utilization of emergency department (ED) resources is 50% to 100% higher for patients with substance use disorder (SUD) than for patients without SUD. In addition to acute medical emergencies, ED use may be indicative of poor health, unmet service need, or inappropriate use of health care. To date, studies have found that most SUD-related ED visits are associated with alcohol, and frequently documented ED-based treatments have focused on alcohol to the exclusion of other drugs. Yet, ED visits associated with the misuse of opioids and marijuana are common, and a considerable share of SUD-related ED visits involve concurrent or other drug use. In addition, alcohol and opioid use disorders are among the most severe SUD diagnoses in terms of their negative impact on health, and evidence continues to emerge about the adverse health effects associated with marijuana use disorder. High rates of SUD-related clinical emergencies and associated ED visits are a persistent barrier to improving health outcomes in this population. Thus, a study that seeks to identify how patients with alcohol, marijuana, and opioid use disorders use ED resources is important, as it may inform more specific ED-based treatment efforts. This study examined ED trends across patients with alcohol, marijuana, and opioid use disorders, and controls, over time in a large integrated health care system in which all patients have insurance coverage to access health care.
Using electronic health record (EHR) data, we aimed to determine the odds of having an ED visit each year from 2010 to 2014 for patients with alcohol, marijuana, and opioid use disorders relative to controls without these conditions; evaluate differences in ED use between controls and those with alcohol, marijuana, and opioid use disorders over 5 years; and explore sub-samples in which patients with SUD may have a greater impact on ED resources. We used secondary EHR data for this database-only study.
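The first aim, the yearly odds of an ED visit for an SUD group relative to controls, reduces to an odds ratio over a 2x2 table of visit vs. no visit. A minimal sketch follows; the counts are invented for illustration and are not the study's data.

```python
# Odds ratio of having >= 1 ED visit in a year for an SUD group vs. controls.
# The 2x2 counts below are hypothetical, not from the KPNC study.

def odds_ratio(cases_visit, cases_no_visit, ctrl_visit, ctrl_no_visit):
    """OR = (a/b) / (c/d) for a 2x2 table of visit vs. no visit."""
    return (cases_visit / cases_no_visit) / (ctrl_visit / ctrl_no_visit)

# Hypothetical counts for one study year: SUD patients vs. matched controls
print(round(odds_ratio(300, 700, 150, 850), 2))  # 2.43
```

An OR above 1 indicates the SUD group had greater odds of an ED visit that year; the study's models additionally adjust for covariates, which this unadjusted sketch omits.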

These data were used to identify all health plan members who were aged 18 or older, had a visit to a KPNC facility in 2010, and had a recorded ICD-9 diagnosis of alcohol, marijuana, or opioid abuse or dependence in 2010. The first mention of each ICD-9 diagnosis of alcohol, marijuana, or opioid use disorder recorded from January 1, 2010, to December 31, 2010, was included; patients in the sample could have multiple diagnoses. We also included all current or existing SUD diagnoses that were additionally documented for patients with alcohol, marijuana, or opioid use disorder during health plan visits in 2010. Within KPNC, SUD and other behavioral health diagnoses can be assigned to patients in any clinic setting, e.g., primary care or any specialty care clinic. All diagnoses are captured through ICD-9 codes. EHR data were used to identify control patients who did not have current or existing SUDs or other behavioral health diagnoses. Control patients were selected for all unique patients with alcohol, marijuana, and opioid use disorders and matched one-to-one on gender, age, and medical home facility. This accounted for differences in services, types of behavioral health conditions, or unobservable differences by geographic location. To control for varying lengths of membership, participants were required to be KPNC members for at least 80% of the study period. The final analytical sample consisted of 35,148 patients: 12,411 with alcohol use disorder, 2752 with marijuana use disorder, 2411 with opioid use disorder, and 17,574 controls. Institutional review board approval was obtained from the Kaiser Foundation Research Institute. Overall, the sample was 35.5% women, 60.0% white, 16.1% Hispanic, 11.0% Asian, 8.6% black, and 4.0% other race/ethnicity. Patients were 37 years old on average. Differences in the characteristics of patients with alcohol, marijuana, and opioid use disorders and the controls are reported in Table 1.
Compared with controls, more patients with alcohol, marijuana, or opioid use disorder were white or black; with few exceptions, more controls were Asian, Hispanic, or had a race/ethnicity categorized as "other" compared with those with alcohol, marijuana, and opioid use disorder. In addition, compared with controls, patients with alcohol, marijuana, and opioid use disorders had greater medical comorbidities, and co-occurring mental health and substance use conditions were common.
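The one-to-one matching of controls on gender, age, and medical home facility described above can be sketched as a greedy exact-match pairing. The data structures and field names here are assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative greedy one-to-one exact matching of controls to SUD patients
# on gender, age, and facility; records and field names are hypothetical.

def match_controls(patients, control_pool):
    """Pair each patient with an unused control sharing the same gender,
    age, and facility; returns a list of (patient_id, control_id) pairs."""
    used = set()
    pairs = []
    for p in patients:
        key = (p["gender"], p["age"], p["facility"])
        for c in control_pool:
            if c["id"] not in used and (c["gender"], c["age"], c["facility"]) == key:
                used.add(c["id"])
                pairs.append((p["id"], c["id"]))
                break  # one control per patient
    return pairs

patients = [{"id": 1, "gender": "F", "age": 37, "facility": "A"},
            {"id": 2, "gender": "M", "age": 42, "facility": "B"}]
controls = [{"id": 10, "gender": "M", "age": 42, "facility": "B"},
            {"id": 11, "gender": "F", "age": 37, "facility": "A"}]
print(match_controls(patients, controls))  # [(1, 11), (2, 10)]
```

In practice, age is often matched within a band rather than exactly; the exact-match version above is the simplest form consistent with the description.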

Alcohol, marijuana, and opioids frequently take center stage in public policy and debate as concerns remain focused around opioid misuse and overdose, ongoing drinking problems, and liberalization of marijuana use policies. Persons who excessively use these substances face the risk of developing an associated SUD, which can have considerable implications for patient health and health systems, in part by contributing to high use of ED services. Thus, we examined how patients with alcohol, marijuana, and opioid use disorders, and controls, used ED resources over time in a large health care system. Similar to studies conducted in the general population and other health systems, alcohol use disorder was diagnosed the most frequently, followed by marijuana use disorder and opioid use disorder, and the rates of co-occurring medical, psychiatric, and substance use conditions were substantial in each group. Because these conditions worsen prognosis, lead to high morbidity, and can contribute to inappropriate service use, it is not surprising that patients with these disorders consistently had a greater likelihood of ED use relative to controls. ED visits were highest among patients with opioid use disorder, followed by those with marijuana and alcohol use disorders, which is contrary to prior work documenting that most SUD-related ED visits are associated with alcohol use disorder. This difference could reflect the effects of changing marijuana use disorder patterns and an overall high morbidity among patients with opioid use disorder, which may have large effects on health system resources. Most ED-based treatments focus on alcohol to the exclusion of other drugs, and since our data suggest that ED visits are also frequent among patients with marijuana and opioid use disorders, these patients may be at risk for having unmet or unidentified treatment needs.
Consequently, building on ED-based treatments for patients with alcohol use disorder, it will be important for future studies to extend these treatments to patients with opioid and marijuana use disorders, to reduce medical emergencies and improve patient health in this population. Patients with opioid use disorder constituted a modest proportion of the sample, yet these patients consistently had high odds of ED use. Similarly, previous studies report that patients with opioid use disorder are overrepresented in ED settings. This could be due to the individual or combined effects of complex medical conditions, injury, or overdose, which have a large impact on the burden of disease and are some of the more persistent barriers to improving overall health outcomes among patients with opioid use disorder.

Consequently, ED settings offer important opportunities to identify patients with opioid use disorder and initiate treatment. Recent evidence suggests that ED-initiated buprenorphine increases subsequent engagement in addiction treatment and reduces illicit opioid use. Devoting more health resources to initiating evidence-based ED treatments for patients with opioid use disorder in health systems, including ED-initiated buprenorphine and referral to SUD treatment, may be a step toward improving health outcomes and reducing high rates of SUD-related ED visits. Over time, all patients had fewer ED visits, and a greater decrease in ED use was observed for patients with SUDs compared with controls, although those with SUDs continued to have more ED visits. These ED utilization patterns are consistent with general population studies, which show decreasing ED visits involving alcohol and opioid use disorders. At the same time, our ED utilization patterns regarding marijuana use disorder are inconsistent with national data, which suggest increasing ED visits involving marijuana-related problems. This national increase could be due to the combined effects of increasing marijuana potency, liberalizing views of the drug, and increasing trends toward its legalization. Notably, however, we found a decrease in ED use over time among patients with marijuana use disorder as well as those with alcohol and opioid use disorders, which may suggest that some patients' health status improves more quickly. Another possibility is that the observed decrease in ED use may be specific to those who receive care within integrated health systems in which specialty services are provided internally.
For example, prior studies conducted within KPNC found that patients with SUD who had ongoing primary care and addiction treatment were less likely to have subsequent ED visits. It will be important for future studies in other systems to investigate the potential impact of specialty and primary care on reducing subsequent acute services among those with alcohol, marijuana, and opioid use disorders. Our results confirm the work of prior studies showing that patients with alcohol and opioid use disorders, and to a lesser degree patients with marijuana use disorder, have frequent and increasing ED visits over time associated with poor health or complex medical conditions. Since our medical comorbidity measure combined acute and chronic conditions, it will be important for future work to identify which individual medical conditions contribute most strongly to ED admission. Other characteristics that were not measured may also influence ED use rates in patients with SUD, and understanding these factors may further help improve service planning efforts and ED-based treatments for this population. In addition, comorbid conditions were common among patients with SUD, and these individuals may have ED visits that require a range of medical treatments, psychiatric symptom stabilization, or detoxification from alcohol or drugs. Limitations should be noted. Our use of provider-assigned diagnoses restricted the sample to patients with at least 1 of the 3 most common SUD diagnoses in 2010. As with other studies that have used claims-based data, our study captures patients with SUD through ICD-9 codes noted in health plan visits during the study period. This methodology is vulnerable to diagnostic underestimation. Therefore, the SUD prevalence data in our study may underestimate the prevalence in the general ED patient population.

Although not available for this study, future database studies could examine whether adding pharmacy-based prescription data to ICD-9 diagnoses improves prevalence estimates. Another potential limitation of the methods we used to select our SUD sample is that we required only a single mention of an ICD-9 code for SUD during the study period to link the patient with that diagnosis. Although the single-mention methodology is well established, it could result in an overestimation of the true diagnostic rates if diagnoses mentioned only once in the EHR are more likely to be inaccurate. Patients were insured members of an integrated health system, and thus our results may not be generalizable to uninsured populations or other types of health systems. Our findings of SUD-related ED trends are somewhat inconsistent with prior work, which suggests a need for replication. All patients were required to have a health system visit in 2010 to enter the study, but they were not required to have a health system visit to remain in the study. These criteria may explain the steep decline in ED visits between 2010 and 2011 and the subsequent leveling of ED use. We cannot identify the reason why patients had an ED visit, which will be an important focus of future work. ED utilization that KPNC did not pay for is not captured, although we capture external, paid-for ED utilization through claims. Consequently, ED use may be higher than we report. Low base rates of SUDs other than alcohol, marijuana, and opioid use disorders precluded our ability to examine the effect of these conditions on ED visits.

The empirical literature documents statistically and clinically significant relations between HIV/AIDS and anxiety and depressive symptoms and disorders. Rates of anxiety disorders among HIV+ individuals have been estimated as high as 43%.
Likewise, depressive symptoms and disorders commonly co-occur with HIV/AIDS, with some studies finding over a 50% base rate of clinical depression among adults with HIV/AIDS. Although the underlying directionality between anxiety and depressive symptoms and disorders and HIV/AIDS is presently unclear, research has nonetheless found that these negative emotional states tend to contribute to non-adherence to HIV medications, lesser quality of life, greater health-care utilization, and greater risky sexual behaviors. Scholars have begun to focus greater energy on identifying the explanatory processes that may underlie such anxiety/depression-HIV/AIDS associations. The most well-developed aspect of this literature has focused on coping with the HIV/AIDS illness and other life stressors. Yet, there has been little investigation of other cognitive-affective factors related to these negative emotional states.


The use of these devices for the ingestion of cannabis extracts has become increasingly popular

We also were unable to assess clinical data, such as HIV viral loads and CD4 cell counts among PLWH. On the other hand, the BRS and GAD-7 are highly validated and reliable instruments. In a review of resilience measures, Windle et al. identified the BRS as one of three scales with the highest psychometric properties for resilience. The generalizability of resilience among PLWH may be limited by survivorship bias. Nevertheless, resilience among PLWH was protective against adverse mental and behavioral health outcomes. There were also considerable losses to follow-up; however, a sensitivity analysis using cross-sectional data from all participants during the first survey wave also supported our hypotheses: PLWH had higher odds of high resilience than HIV-uninfected peers, and high resilience was associated with lower risk of substance misuse. The study is strengthened by the use of six C3PNO cohorts across the USA, consisting of marginalized and minority groups with a diverse population of PWUD, many living with or at risk of HIV.

Expansion of the use of cannabis for purported medical benefits stimulates interest in the possible opioid-sparing effects of cannabis constituents, including the primary psychoactive compound Δ9-tetrahydrocannabinol (THC). While opioids are effective medications, they have many limitations, including the development of tolerance, dependence, diversion for non-medical use, and addiction. Any drugs that may act in an additive or synergistic fashion with opioids therefore have the potential to reduce opioid-related concerns. There are known interactions by which endogenous cannabinoid receptor 1 ligands may enhance signaling of mu opioid receptors, thus generating a reasonable mechanistic hypothesis for evaluating opioid-sparing effects of cannabis constituents.
Cannabis co-use triples the risk of dependence on heroin in those diagnosed with a substance use disorder, and cannabis users over 50 years of age exhibit an odds ratio of 6.3 for heroin use.

Adolescents in one urban setting who used cannabis regularly were at twice the risk for opioid misuse compared with occasional users of cannabis, and onset of frequent cannabis use at age 14 increased the risk of opioid use at age 19. Almost two-thirds of individuals in one sample first used heroin while co-using cannabis, and 50–60% of individuals in heroin- and methadone-maintenance treatment for opioid use disorder were co-using cannabis. On a day-by-day basis, cannabis use doubles the risk of non-medical opioid use in adults with problematic substance use. Thus, although cannabis may have the potential to reduce opioid use in medical patients, there is also a clear risk for cannabis to increase the non-medical use of heroin from adolescence into middle age. Consideration of these phenomena spurs interest in determining the interactive effects of cannabinoids and opioids across multiple behavioral and physiological endpoints to lend greater context for "opioid-sparing" recommendations for cannabis use. We recently presented evidence that THC enhances the effects of oxycodone in an anti-nociception assay in rats and that it enhances the effects of a unit dose of oxycodone or heroin when self-administered. Maguire and France have shown that the nature of the anti-nociceptive interaction between cannabinoids and opioids may depend on the specific drugs involved, which cautions against making generalizations across either class of substances before specific data are available. In the case of both nociception and drug self-administration, the effects of opioids and cannabinoids are often in the same direction, i.e., antinociceptive and rewarding. This can make it difficult to determine whether the outcome of co-administration is due to the additive effects of independent mechanisms or to interacting signaling within the same mechanistic pathways.
Determining heroin/THC interactions for in vivo endpoints that are expected to change in opposite directions when each drug is administered alone can help to further parse the specificity of any apparent additive effects.

We have identified conditions under which either inhaled or injected heroin can increase body temperature and spontaneous locomotor activity, and conditions under which THC can decrease body temperature and locomotor activity. We have further shown that the locomotor effects of nicotine and THC can oppose each other when co-administered; the effects of each drug in decreasing body temperature were also dissociated across time after administration. This study was conducted to determine whether heroin and THC interact in an additive or independent manner to alter thermal nociception, body temperature, or spontaneous locomotor activity when inhaled or injected. The recent broad availability of e-cigarette-style Electronic Drug Delivery Systems (EDDS) supports the possibility of delivering a range of drugs other than nicotine, including opioids, via vapor inhalation. The EDDS can be used to deliver active doses of a wide range of drugs to rats and mice, including amphetamine and cathinone-derivative psychomotor stimulants, opioids, cannabinoids, and nicotine. Generally speaking, the effects of drugs after inhalation persist for a shorter time than when injected, and, as we have recently shown, this is certainly true for heroin and THC. This difference may impact the combined effects, and therefore the inhalation route was contrasted with the traditional injection routes used in rodent models. A preliminary dose-ranging experiment was conducted in a group of male Sprague–Dawley rats used previously in investigations of the effects of inhalation of cannabidiol, nicotine, and THC. The first goal was to evaluate whether heroin by vapor inhalation would alter body temperature and spontaneous locomotion, as it does when injected. These were our first attempts, conducted prior to the subsequent investigations reported here and in prior publications, and were critical to establishing the efficacy of drug delivery by this method.
The second goal was to identify sub-maximal exposure conditions such that interactions with another drug, such as THC, might be detected. All animals were habituated to the procedure with one session of 30-min vapor exposure to PG followed by a 60-min recording interval. To determine the effect of heroin vapor on body temperature and locomotor activity, an initial 30-min inhalation exposure to PG or heroin was designed. Animals were randomized to treatment conditions and exposed in pairs. Subsequently, the animals were evaluated after a 15-min interval of exposure to PG, heroin, THC, or heroin + THC.

The treatment conditions were again counter-balanced across exposure pairs, and THC was experienced no more frequently than once per week for a given rat. Groups of male and female Sprague–Dawley rats used in previously reported studies were recorded for a baseline interval and then exposed to inhalation of vapor from PG, heroin, THC, or the combination for 30 min in groups of 2–3 per exposure. These studies were conducted subsequent to a series of similar challenges with vapor inhalation of nicotine and THC, and with an injection of THC in additional studies not previously reported. The tail-withdrawal assay was conducted immediately after the vapor session, and then rats were returned to their individual recording chambers for telemetry assessment. The four inhalation conditions were evaluated in a counterbalanced order across pairs/triplets, no more frequently than every 3–4 days. An opioid receptor antagonist study was included because we had not included this evidence in a prior study reporting heroin vapor effects on similar endpoints, and because we have reported an unusual lack of effect of cannabinoid receptor 1 antagonist/inverse agonist pre-treatment on THC vapor-induced hypothermia. The latter was observed despite efficacy against the anti-nociceptive effects of inhaled THC and the thermoregulatory effects of injected THC; thus, it was of interest to determine whether a nonselective opioid receptor antagonist was effective against the effects of inhaled heroin on temperature, activity, and anti-nociceptive responses. The group of male Sprague–Dawley rats used in experiments 2–4 was recorded for a baseline interval and then injected with either saline or naloxone 15 min prior to the start of a 30-min inhalation session of either PG or heroin vapor. Post-inhalation, a tail-withdrawal assay was conducted, and then animals were returned to their recording chambers for telemetric assessment.
Inhalation was conducted in two pairs and one triplet, with the groupings changed on each test day. Pre-treatments were varied within the inhalation groupings to provide further randomization of conditions. Sessions were conducted two times per week in a counterbalanced treatment order. The female group examined in parallel with these animals in prior studies was not included because the maximum number of treatment days under the approved protocol had been used in prior experiments.

Groups of female and male Wistar rats were used in these studies. Rats had been exposed to twice-daily sessions of vapor from the propylene glycol vehicle or heroin from PND 36 to PND 45. These antinociception experiments were conducted between PND 253 and PND 275. Four treatment conditions were evaluated in counter-balanced order. Each session started with a tail-withdrawal assessment before any injections. This was followed immediately by an injection of THC or the cannabinoid vehicle. Another tail-withdrawal assessment was conducted 30 min post-injection, followed by a second injection of either heroin or saline, and then a final tail-withdrawal 30 min later. We have shown previously that the anti-nociceptive effects of THC injected at 5 or 10 mg/kg, i.p., persist essentially unchanged from 30 to 90 min after injection, justifying this sequential assessment approach. The design resulted in four replications of the pretreatment baselines, two replications of the THC or vehicle condition, and then four final conditions: vehicle-saline, vehicle-heroin, THC-saline, and THC-heroin. The telemeterized body temperature and activity rate were collected on a 5-min schedule in telemetry studies but are expressed as 15-min or 30-min averages for primary analysis. The time courses for data collection are expressed relative to the initiation of vapor inhalation, and times in the figures refer to the end of the interval. Due to transfer time after vapor sessions, the "60-min" interval reflects the average of collections from ~40 to 60 min. Any missing temperature values were interpolated from the values before and after the lost time point. Activity rate values were not interpolated because values can change dramatically from one 5-min bin to the next; thus, there is no justification for interpolating. Data were generally analyzed with two-way analysis of variance, including repeated-measures factors for the drug treatment condition and the time after vapor initiation or injection.
A third factor was used for pre-treatment, adolescent exposure group, or sex in some experiments, as described in the "Results" section. A mixed-effects model was used instead of ANOVA wherever there were missing datapoints. Any significant main effects were followed with post hoc analysis using Tukey or Sidak procedures. The criterion for inferring a significant difference was set at p < 0.05. All analyses used Prism 8 or 9 for Windows. The activity of the female rats was increased after heroin inhalation and decreased after inhalation of the THC + heroin combination. The post hoc test confirmed that activity was decreased relative to the baseline and all other inhalation conditions 60 min after the start of THC + heroin inhalation, and elevated relative to the baseline and all other inhalation conditions 60 min after the start of THC. Activity was elevated compared with all other conditions 90–120 min after the start of heroin inhalation. Similarly, the post hoc test confirmed that temperature was significantly higher after inhalation of heroin 50 mg/mL and lower after inhalation of THC 50 mg/mL or the combination, compared with the PG condition. Temperature following the inhalation of both drugs in combination was significantly lower compared with heroin and higher compared with THC. The post hoc test confirmed that activity was elevated 90–120 min after the start of heroin inhalation compared with all other conditions, as well as at individual time points relative to the combination, THC, and PG. Activity differed between THC and the combined inhalation, and was higher than the baseline 60 min after initiation of heroin and lower than the baseline in the PG and THC inhalation conditions. The body temperature of the male rats was significantly elevated by heroin and decreased by THC. The post hoc test further confirmed a significant difference between saline injection and all three heroin doses, and between 0.32 and 0.56 mg/kg, within the THC pre-treatment condition.
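As an aside on the multiple-comparison procedures mentioned above (Tukey and Sidak), the per-comparison threshold implied by a Sidak correction can be computed directly. This is a generic illustrative sketch, not the authors' analysis code (the actual analyses were run in Prism):

```python
import math

def sidak_alpha(family_alpha: float, n_comparisons: int) -> float:
    """Per-comparison alpha that keeps the familywise error rate at family_alpha."""
    return 1.0 - (1.0 - family_alpha) ** (1.0 / n_comparisons)

def bonferroni_alpha(family_alpha: float, n_comparisons: int) -> float:
    """Simpler, slightly more conservative Bonferroni equivalent."""
    return family_alpha / n_comparisons

# With the study's familywise criterion of p < 0.05 and, e.g., 6 pairwise comparisons:
print(sidak_alpha(0.05, 6))       # per-comparison threshold, ~0.0085
print(bonferroni_alpha(0.05, 6))  # slightly stricter, ~0.0083
```

The Sidak threshold is always a little less strict than Bonferroni's, which is why software such as Prism offers it as a marginally more powerful alternative.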


We calculated the effective volumetric air content at each site based on the VWC data of each sensor

After the air-injection period, all sites except NSL showed a similar recovery in soil aeration status in all flooded treatments, which is another indication of the local drainage issue at NSL. The average impact of air injection varies among sites. The impact was small at CT compared with the flooded treatment without air injection, whereas no impact, and even a negative impact, was observed at KARE and NSL. However, the wide variability in O2 data in the air-injection treatments at KARE and NSL implies that air injection may improve the aeration status for a limited time. The fast increase in soil O2 at the end of the experiment at NSL is the result of the CaO2 preliminary test, which demonstrates the potential of this technique for improving soil O2 during Ag-MAR. The soil sensor measurements presented in Figures 3–5 provide continuous high-resolution temporal data, but with limited spatial resolution. A complementary spatial description of the soil aeration status was obtained by the manual measurements of soil O2. At NSL, these measurements show results that differed from the continuous measurements, indicating that air injection improves average O2 levels by ∼1% compared with the flooded treatment without air injection at 15-cm and 30-cm depths, whereas no impact was observed at 50-cm depth. At CT, the manual soil O2 measurements are consistent with the continuous measurements, showing improvement of up to ∼2% O2 at all depths. For the experiments at the almond orchards, gross yield was measured during the harvest season after the Ag-MAR experiments. At KARE, Nonpareil was the only variety that showed a higher average yield in the air-injection treatment than in the no-air-injection treatment, whereas all other varieties showed no statistical difference between treatments. At NSL, the average yield of the air-injection treatment was lower than that of the no-air-injection treatment, but this yield reduction is correlated with annual precipitation, which implies a drainage issue in the specific row of the air treatment. At CT, plant indices measured before and after the experiment showed an average decrease for the no-air-injection treatment, whereas an increase was observed for the air-injection and control treatments.

Changes were not statistically different based on a significance level of α = .05, but the difference between the no-air-injection and control treatments after flooding is significant at a significance level of α = .1. The water application results demonstrate the degree of suitability of the selected sites for Ag-MAR projects in terms of infiltration rates. Both the KARE and CT sites are suitable for Ag-MAR, as their infiltration rate is greater than 1 m in a few days, whereas the low infiltration rate measured at NSL makes that site unsuitable for Ag-MAR. Indeed, the NSL site was selected in this study to demonstrate the difference among Ag-MAR sites with different soil texture, SAGBI class, and hydraulic properties. The soil at NSL has a SAGBI rating of moderately good to poor, which illustrates the role that SAGBI has as a first-approximation planning tool for locating potential Ag-MAR sites. However, the final selection of an Ag-MAR site should also consider the growers' experience. Apart from soil suitability for Ag-MAR, technical constraints and agronomic constraints should all be considered when designing an Ag-MAR project. Naturally, well-drained soils, which are more suitable for Ag-MAR, are also well aerated. This characteristic is manifested in the flood-drainage intervals that allow reaeration of the soil during drainage, even without the use of forced aeration. Conversely, poorly drained soils are unsuitable for Ag-MAR, even if forced aeration can maintain adequate soil aeration status, due to the low infiltration rates and insufficient deep percolation that these soils achieve, which do not promote recharge of large amounts of water to the groundwater. Ideally, O2 and Eh measurements are complementary soil aeration quantifiers for aerobic and anaerobic conditions, respectively. In this study, the duration of soil saturation was shorter than 48 h at all sites, and therefore root-zone residence time was relatively short and aerobic conditions prevailed most of the time. Besides the duration of saturation, soil aeration status is also affected by soil respiration, which differed among the sites. Based on the O2 measurements of the control treatments, the lowest soil respiration rates were observed in the almond experiment during the winter, followed by higher rates in the cover crop during the spring, whereas the highest rates were observed in the almonds during the summer.

The increase in soil respiration with increasing temperature is well documented; generally, soil respiration increases by a factor of two to three for a temperature increase of 10 °C. The relationship between O2 and Eh measurements is complex, as high O2 levels may sometimes coexist with low Eh levels, and vice versa. Some of this complexity can be explained by the lag of Eh behind O2 in response to flooding. Data showing this Eh–O2 lag can be found in a handful of studies, but Blackwell was the only one who explicitly related this lag to the soil volume over which the measurement is integrated. Blackwell concluded that O2 measurements are more affected by large pores compared with redox measurements, which are made with a small electrode. Some of our data support this explanation, as indicated by the Eh minima, which lag up to 20 h behind the O2 minima; this might correspond to the lag in draining small pores vs. larger pores during soil drainage. Another explanation for the lag in Eh readings is related to slow reaction kinetics and mixed potentials, where the latter is inherent in most measurements done with redox electrodes. Hence, to obtain an unbiased result of the soil aeration status during Ag-MAR, a paired measurement of O2 and Eh is needed even in relatively well-drained soils with short flooding events. Based on our O2 and Eh measurements and the results of previous studies, denitrification will probably occur during Ag-MAR even in well-drained soils. This may promote N2O emissions as well as prevent NO3− leaching to groundwater. The Eh results from KARE demonstrate that during an Ag-MAR event the soil may reach anaerobic conditions even in well-drained soils. At KARE, moderately reducing conditions were observed for up to 2 d, which can promote sequential reduction reactions according to thermodynamic theory. However, in oxic terrestrial soils, some of these reactions may occur simultaneously; for example, reduction of NO3− followed by concurrent reduction of Mn4+, SO42−, and Fe3+. Although these chemical species are stable under aerobic conditions, upon flooding they might undergo reduction and be released from the solid phase into the soil solution, thereby changing the biogeochemical state of the soil and increasing the risk of groundwater contamination.
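The two-to-threefold increase in respiration per 10 °C noted above is the familiar Q10 relationship. A minimal sketch follows; the temperatures and rates are illustrative placeholders, not measurements from these sites:

```python
def q10_respiration(r_ref: float, t: float, t_ref: float = 20.0, q10: float = 2.5) -> float:
    """Scale a reference respiration rate r_ref (measured at t_ref, in deg C)
    to temperature t using the Q10 model: R(t) = r_ref * q10 ** ((t - t_ref) / 10)."""
    return r_ref * q10 ** ((t - t_ref) / 10.0)

# Illustrative: a winter rate of 1.0 (arbitrary units) at 10 deg C,
# compared with the same soil at 30 deg C in summer (q10 = 2.5 assumed)
winter = q10_respiration(1.0, 10.0, t_ref=10.0)
summer = q10_respiration(1.0, 30.0, t_ref=10.0)
print(summer / winter)  # 2.5 ** 2 = 6.25-fold higher summer respiration
```

This pattern is consistent with the observation that the control-treatment O2 drawdown was weakest in winter and strongest in the summer almond experiment.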

Soil O2 is negatively correlated with soil water content, as high water content reduces soil O2 diffusivity and air permeability. At the same time, O2 demand by microbial respiration increases, as it depends strongly on water content. A similar negative correlation was observed in this study, but in the air-injection treatment, higher O2 levels were observed at relatively higher water contents, which indicates the positive impact of air injection. At the same time, a negative impact of air injection was observed for some trees at KARE. This negative impact can be explained by the injected air pushing pore water toward the O2 sensors. In this case, the readings of the galvanic O2 sensors might differ from the actual DO concentration of the displaced pore water. Under the flooded conditions expected in Ag-MAR, O2 levels will decrease sharply after the water content reaches some critical value. Bachand et al. suggested a critical value of 74% degree of saturation in their report on Ag-MAR and soil O2, although a critical value based on air content is a better quantifier, as it represents an absolute value that can be compared across soils with different porosities. The term effective volumetric air content is used here because we assume it represents only the conducting gas-phase fraction. Omitting the data of the air-injection and drainage stages, we set the critical air content upper bound as θa,eff ≈ 0.09 based on the inflection points of O2 levels, identified using a smoothed moving-average procedure. It is noted that identifying the critical air content based on this method can be somewhat subjective and that the critical value is subject to the accuracy of the water content sensors. As noted by Bachand et al., reaching the critical air content is inevitable during Ag-MAR flooding, and therefore attaining hypoxic/anoxic conditions will depend on the soil O2 depletion/recovery rates and on the flooding duration. We calculated the average soil O2 depletion/recovery rates for sites KARE and CT, as each represents a potential site for Ag-MAR operation. The average depletion rate is lower than 0.3% O2 h−1; hence an O2 reduction from atmospheric conditions to 5% O2 will take about 2 d for the soils tested here. Average recovery rates, excluding the active injection periods, are on the same order as the depletion rates, and usually the recovery rates are higher at the beginning of the reaeration stage and then level off over time. Our observed average depletion rates are higher than the maximum depletion rates reported by Bachand et al. by up to ∼0.1% O2 h−1. This is probably because their water application was more conservative compared with this study. Under waterlogged conditions, the galvanic gas-phase soil O2 sensors might malfunction, returning low or zero O2 readings, whereas the actual DO of the pore water might be higher. An example of this condition was observed at NSL, where we sporadically measured DO in water samples extracted from the air samplers at depths of 15, 30, and 50 cm. In addition, we also measured a few DO samples of ponded and irrigation water.
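The quantities above combine into a simple back-of-the-envelope check: effective air content from porosity and VWC, and the time for O2 to fall from atmospheric (~20.9%) to a 5% threshold at the reported depletion rate. This is an illustrative sketch with made-up sensor values, not the authors' processing code:

```python
ATM_O2 = 20.9          # % O2 in the atmosphere
DEPLETION_RATE = 0.3   # % O2 per hour, upper bound reported for KARE and CT
CRITICAL_AIR = 0.09    # theta_a,eff upper bound identified in this study

def effective_air_content(porosity: float, vwc: float) -> float:
    """Effective volumetric air content: total porosity minus volumetric water content."""
    return max(porosity - vwc, 0.0)

def hours_to_threshold(threshold_o2: float = 5.0, rate: float = DEPLETION_RATE) -> float:
    """Hours for O2 to fall linearly from atmospheric levels to threshold_o2."""
    return (ATM_O2 - threshold_o2) / rate

# Hypothetical sensor reading: porosity 0.45, VWC 0.38 during flooding
theta_a = effective_air_content(0.45, 0.38)
print(theta_a < CRITICAL_AIR)       # ~0.07, i.e., below the critical air content
print(hours_to_threshold() / 24.0)  # ~2.2 d, consistent with the "about 2 d" estimate
```

The linear-depletion assumption is the same simplification the text uses; in practice the depletion rate varies with temperature and respiration as discussed above.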

Although the total sampling amount was relatively small, it better represents the aeration status for some trees. For example, at Tree 3W, 15-cm depth, the DO is always higher than 0.5 of O2 saturation, compared with the 0% O2 obtained with the galvanic soil O2 sensor. These measurements also demonstrate the impact of air injection on DO, as the trend of increasing DO during air-injection periods is evident for all measured trees. At CT, we were able to extract more DO samples using dedicated pore-water samplers. Dissolved O2 levels fluctuated according to water application events and, for the air-injection treatment, also with the duration of air injection. Under flooded conditions, it is difficult to monitor the gas phase in the soil, and therefore DO measurements might be more suitable as a soil aeration quantifier in Ag-MAR. Reliable in situ DO measurements can be achieved using optical sensors; however, their spatial resolution is still limited due to the relatively high cost of these sensors. The almond yield at KARE showed no statistical difference between the treatments, except for the Nonpareil variety, which showed higher yield for the air treatment compared with the no-air treatment. This result is encouraging, but it should be taken with caution, as it is based on only a few trees and might not correlate directly with the impact of Ag-MAR flooding, because yield in almond trees is commonly determined by the nutrient and water management practices of the previous growing seasons. The impact of flooding on almond yield was not studied systematically before, although a few studies on deficit irrigation showed yield reduction from either insufficient or excess irrigation. Still, these reported yield reductions are due to over-irrigation throughout the entire growing season and not due to occasional flooding. In this study, the yield results of the air treatment at NSL demonstrate the potential negative impact that Ag-MAR may have on almond yield in poorly aerated soils. Previous studies on flooding in almond seedlings and 1-yr-old almond trees demonstrated sensitivity to waterlogging, which can lead to tree mortality. This sensitivity, also related to root diseases such as Phytophthora, varies among different almond varieties and Prunus rootstocks. At CT, the decrease in plant indices for the no-air-injection treatment compared with the other treatments was due to a reduction in the dry root weight of the cover crop after flooding. Although these changes were not statistically significant, they agree with several studies that showed a reduction of root weight in bell beans in waterlogged soils. This reduction is related to the formation of adventitious roots and aerenchyma with higher root porosity, which helps the plant transport O2 from the soil surface and the stem in order to reduce flooding injury.


The FAAH inhibitor that has been studied most intensively is URB597

Findings suggest a need to thoroughly and accurately assess and address cannabis use among HIV-positive individuals who are prescribed antiretroviral medication. Indeed, it may not be enough to assess whether an individual simply uses cannabis; the frequency and duration of use, as well as problems associated with use, tolerance, and withdrawal, should also be assessed so as to accurately determine dependence. Second, and relatedly, findings suggest the potential need for specific interventions for HIV-positive individuals with cannabis dependence. Cannabis-dependent individuals in the present study had between 74% and 93% adherence and evidence of poorer viral suppression. Though pill count data revealed higher adherence for non-dependent cannabis users, both pill count and self-report data on adherence were also lower than 95%, on average, for this group as well. Existing brief interventions that can be used in primary care and specialty care settings may be useful for reducing problematic cannabis use and increasing antiretroviral adherence among these individuals. Finally, as some data suggest that even low levels of cannabis use are associated with psychological problems, the lack of differences between cannabis non-users and non-dependent users provides some initial doubt as to the clinical utility of recommending cannabis for those with HIV. Though the present study provides a more detailed picture of the relations between cannabis use and antiretroviral adherence and HIV symptoms/ART side effects, it is not without limitations. First, the present study was cross-sectional and thus unable to determine the prospective association between varying degrees of cannabis use and adherence or symptoms. Indeed, it remains unclear whether cannabis dependence led to poor adherence and symptoms or whether more severe HIV symptoms resulting from poor adherence led to cannabis dependence via coping-oriented use.
Relatedly, though the employed measurement design was a study strength, we were not able to assess pill count or self-reported adherence as well as would have been possible with multiple assessments establishing a stable baseline level of adherence.

Future studies would benefit from examining the studied associations longitudinally and conducting blood draws at the same time as the other assessments. Such improvements in design and measurement would also warrant more advanced methods of statistical analysis, where multiple observed variables could serve as indicators of an overarching latent "adherence" variable. Though the present study was quite ethnically diverse, almost 80% of the sample was male. Though the present study had a higher percentage of females with HIV than is represented in the San Francisco Bay Area, future work would benefit from recruiting a more gender-diverse sample from different geographic areas. Finally, though cannabis dependence was assessed using the most current and rigorous criteria, the collection of additional contextual information related to cannabis use may help improve our understanding of the differences observed in the present study. Relatedly, the collection of additional information on factors such as substance use motivation, emotion regulation, and cognitive functioning in future prospective studies would allow for the determination of malleable mechanisms that may underlie the observed relations, providing a more nuanced understanding of the association between cannabis use and antiretroviral medication adherence and HIV symptoms/ART side effects. Cannabis and synthetic cannabinoid agonists can produce certain therapeutic effects, but they can also produce adverse side effects including dependence and memory impairment. They produce these effects by activating cannabinoid CB1 receptors, mimicking the effects of endogenous cannabinoid substances. The two main endocannabinoids, anandamide and 2-arachidonoylglycerol (2-AG), are produced on demand and are rapidly degraded by fatty acid amide hydrolase (FAAH) and monoacylglycerol lipase (MGL), respectively.
Since CB1 receptors have two separate endogenous ligands, it is likely that the brain circuits involving anandamide and 2-AG underlie distinct sets of neurobehavioral processes that can be selectively targeted for therapeutic purposes. This can be accomplished by administering inhibitors of FAAH or MGL, thereby increasing the effects of anandamide or 2-AG when and where they are released. This amplification of natural endocannabinoid signaling could potentially produce beneficial effects without the adverse side effects associated with exogenous cannabinoid agonists, which directly activate CB1 receptors throughout the brain.

In preclinical testing, URB597 does not produce classical THC-like effects such as catalepsy, hypothermia, and hyperphagia. URB597 also shows no signs of abuse potential in animal models of cannabis abuse; it does not have THC-like effects in rats trained to detect the interoceptive effects of THC, and it is not self-administered by squirrel monkeys with extensive experience self-administering anandamide and other cannabinoid agonists. However, other FAAH inhibitors, including URB694, PF-04457845, and AM3506, have shown moderate to strong reinforcing effects when offered as an intravenous solution to squirrel monkeys. These findings indicate that FAAH inhibitors can vary considerably in their effect profiles and should be evaluated individually for specific therapeutic and adverse effects. Delta-9-tetrahydrocannabinol (THC) impairs learning and memory in humans and animals, with working memory being particularly sensitive. In rodents, memory has also been shown to be impaired by administration of exogenous anandamide, but only when its degradation by FAAH is prevented. Surprisingly, inhibition or genetic deletion of FAAH, which substantially increases endogenous levels of anandamide, has been found to enhance rather than impair memory in rodents trained with procedures involving aversively motivated behavior. However, memory-related studies with appetitively motivated procedures have mostly shown impairment rather than enhancement after treatment with a FAAH inhibitor. There have been fewer studies involving MGL inhibition. The MGL inhibitor JZL184 did not affect memory in an object-recognition procedure, but JZL184 and a dual FAAH-MGL inhibitor both impaired memory in a repeated-acquisition water-maze procedure in mice. In the present study, we focused on the effects of FAAH inhibitors on working memory in rats, using a food-based procedure known to be sensitive to impairment by THC.
We tested five different FAAH inhibitors at doses sufficient to substantially increase levels of anandamide. We found that only one of these compounds, the FAAH inhibitor AM3506, impaired working memory at the doses tested.

Since pharmacological doses of anandamide may activate alpha-type peroxisome proliferator-activated receptors (PPAR-alpha) and vanilloid transient receptor potential cation channels (TRPV1), and since FAAH inhibition increases endogenous levels of not only anandamide but also other fatty acid amides that are ligands for PPAR-alpha and TRPV1, we explored the mechanism of AM3506's effects by giving AM3506 in combination with a CB1 antagonist, a PPAR-alpha antagonist, or a TRPV1 antagonist. These tests indicated that the memory impairment induced by AM3506 was mediated by CB1 receptors. Twelve experimentally naive male Sprague–Dawley rats were maintained in individual cages on a 12-h light/dark cycle with lights on starting at 0645 hours. Procedures were conducted Monday through Friday between 1000 and 1400 hours. Rats were fed approximately 15 g of food per day to maintain stable body weights. The facilities were fully accredited by the Association for Assessment and Accreditation of Laboratory Animal Care, and all experiments were conducted in accordance with the guidelines of the Animal Care and Use Committee of the National Institute on Drug Abuse Intramural Research Program and the Guidelines for the Care and Use of Mammals in Neuroscience and Behavioral Research. The preliminary training procedures, including magazine training with food, shaping of nosepoke responding, and response-chain training, were described in detail previously.
Under the nonmatching-to-position task used for baseline and test sessions in the present study, there were repeated trials in which either the left or right nosepoke hole was illuminated as a sample; two responses in the sample hole extinguished the sample-hole light and turned on the center-hole light, starting the delay period; after a delay of 0, 7, 14, 21, or 28 s, the next response in the center hole extinguished the center-hole light and illuminated both side holes, starting the choice phase of the trial. During the choice phase, a response in the side hole opposite the sample constituted a correct response and produced a food pellet, extinguished the hole lights, and started a 15-s intertrial period with only the house light on; alternatively, a response in the same hole in which the sample had been presented constituted an incorrect response and did not produce a food pellet, but extinguished the hole lights and caused the house light to flash at 5 Hz for 5 s, followed by a 15-s intertrial period with only the house light on. Regardless of whether the choice response had been correct or incorrect, the house light was extinguished and a sample hole was illuminated after the intertrial period, starting a new trial. The side of the sample hole in each trial was drawn without replacement from a list in which each side appeared twice. Similarly, the value of the delay was drawn without replacement from a list in which each of the five possible values appeared once. When either list was depleted, it was replenished before the next trial.
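The without-replacement list sampling described above (sample side drawn from a list with each side twice, delay drawn from a list with each of the five values once, each list replenished when depleted) can be sketched as follows. This is a generic reimplementation for illustration, not the original task-control code:

```python
import random

class ReplenishingList:
    """Draw items without replacement; refill from the template when depleted."""
    def __init__(self, template, rng=None):
        self.template = list(template)
        self.rng = rng or random.Random()
        self.pool = []

    def draw(self):
        if not self.pool:  # list depleted -> replenish before the next trial
            self.pool = list(self.template)
            self.rng.shuffle(self.pool)
        return self.pool.pop()

sides = ReplenishingList(["left", "left", "right", "right"])
delays = ReplenishingList([0, 7, 14, 21, 28])

# The first 4 draws (one full list) contain each side exactly twice,
# and the first 5 delay draws contain each delay value exactly once.
trial_sides = [sides.draw() for _ in range(4)]
trial_delays = [delays.draw() for _ in range(5)]
print(sorted(trial_sides), sorted(trial_delays))
```

This scheme guarantees balanced sampling within each block while keeping the trial-to-trial order unpredictable to the subject.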
Sessions were conducted Monday through Friday and lasted for 90 min or until 100 food pellets had been delivered. Tests were conducted up to two times per week, usually on Tuesday and Friday, if the accuracy of choice responding was over 90% correct at the 0-s delay and there was <10 percentage points difference in accuracy at a given delay over the two previous baseline sessions. The FAAH inhibitors were first tested in the following order: URB694, AM3506, URB597, PF-04457845, ARN14633. The monoacylglycerol lipase inhibitor, JZL184, was tested after ARN14633. For each test drug, the vehicle and two doses were tested in a counterbalanced order across subjects. This counterbalancing was intended to avoid artifacts due to potential confounding of shifts in baseline performance with the order in which the drugs were tested, by allowing each drug treatment to be compared with a contemporaneous vehicle control session. After this single-drug testing, the effects of treatment with AM3506 and its vehicle were tested in combination with a pretreatment injection of rimonabant, MK886, capsazepine, or vehicle, with the order of combinations counterbalanced across subjects. Analyses were performed with Proc Mixed, using the Tukey–Kramer procedure to maintain a 0.05 significance level for paired comparisons. For figures showing delay curves, simultaneous confidence intervals with a Bonferroni-corrected 95% confidence level were determined for all points within each experiment, and gray bands were included in the figures such that points falling outside the band were significantly higher than 50%.
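The baseline-stability criterion for test eligibility described above can be expressed as a simple check. This is a sketch under my own assumptions about how session data might be stored (a dict mapping delay to percent correct); the criterion as worded is ambiguous about whether the 90% threshold applies to one or both baseline sessions, so I apply it to both here.

```python
def eligible_for_test(baseline1, baseline2):
    """Return True if a rat meets the stated testing criterion:
    over 90% correct at the 0-s delay (applied here to both of the two
    previous baseline sessions) and <10 percentage points difference in
    accuracy at every delay across those two sessions.
    Sessions are dicts mapping delay (s) -> percent correct."""
    if baseline1[0] <= 90 or baseline2[0] <= 90:
        return False
    return all(abs(baseline1[d] - baseline2[d]) < 10 for d in baseline1)
```

For example, two sessions that both exceed 90% at the 0-s delay but differ by 15 percentage points at the 14-s delay would fail the criterion and postpone testing.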

The percentage of trials with a correct response was analyzed as a function of the pretreatment dose, the treatment dose, and the delay value. All percentage measures were arcsine root transformed for analysis. Responding during the delay period was also analyzed using procedures to assess the role of mediating behavior in performance of the nonmatching task. Briefly, logistic regression was used for each subject to determine whether responding in either the to-be-correct hole or the to-be-incorrect hole during the delay period influenced the accuracy of the choice response; based on this regression, each rat was categorized according to whether responding in the to-be-correct hole or the to-be-incorrect hole was ‘‘appropriate’’; each trial from each test session was then categorized according to whether side-hole responding occurred during the delay period only in the appropriate hole, only in the inappropriate hole, both, or neither. To obtain sufficient samples for the logistic regression used to categorize each rat, data were combined from all the baseline sessions that preceded treatment sessions.

Accuracy under the nonmatching-to-position task was high at the 0-s delay and decreased monotonically as a function of delay under baseline conditions and after treatment with vehicle. Even at the longest delay, accuracy was well above chance level after treatment with vehicle. During drug testing, the accuracy curves continued to show a general downward slope. The data in each frame in Fig. 2 were analyzed separately, and in each case, the main effect of delay was highly significant [F ranging from 24.0 to 46.0, all p values <.0001]. The main effect of AM3506 on accuracy was significant [F=30.2, p<.0001], but the other treatment drugs had no significant main effects or interaction effects.
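The arcsine root transform applied to the percentage measures above is the standard variance-stabilizing transform for proportions; a minimal sketch (the per-subject logistic-regression categorization step is not reproduced here):

```python
import math

def arcsine_root(pct):
    """Arcsine square-root transform of a percentage (0-100).
    Stabilizes the variance of proportion data before ANOVA-style analysis:
    maps 0% -> 0, 50% -> pi/4, 100% -> pi/2."""
    return math.asin(math.sqrt(pct / 100.0))
```

The transform stretches the ends of the percentage scale, where proportion variance is compressed, which is why it is routinely applied before parametric analysis of percent-correct data.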

The FAAH inhibitor that has been studied most intensively is URB597

The effect of WS on patient recovery and prolonged ICU stay is unclear

As expected, they found that illicit dealers were most often victimized and in response mobilized the law least often and retaliated most often. But unexpectedly, the fully licit café operators reported roughly double the instances of victimization as semi-licit coffee shop operators, and neither mobilized the law nor retaliated often. In the following discussion, I add a few points to the overview of these findings to suggest possible future research.

That cafés selling alcohol experienced more crime than coffee shops selling cannabis would not shock American police. Pharmacology matters, as Jacques et al. suggest, albeit always mediated by culture. At the macro level, there is a well-known correlation between drinking and crime, although it varies significantly across cultures, and different cultures have specific repertoires of intoxication. Culture matters, too, in the micro sense; the normative architecture of the settings of drinking or drug use interacts with user expectation sets to affect behavior under the influence. Dutch cafés, like American bars, tend to be spaces of spirited disinhibition; Dutch coffee shops tend to aspire to a more contemplative ethos, disinhibition in a mellow tone. Future research might usefully extend Jacques et al.’s work by varying pharmacology, type of setting, and culture.

The flip side of prohibition creating “zones of statelessness” where law is unavailable is that decriminalization can expand the regulatory capacity of the state. This happened in the Netherlands as its cannabis policy evolved from informal toleration of “house dealers” inside some clubs into formally licensed coffee shops and into subsequent refinements that gave officials greater control, for example, tightening license requirements, raising the minimum age for purchase, and banning advertising. As more U.S. states and other nations legalize cannabis, some are concerned that greater availability could cause greater abuse.
The Dutch experience does not support this hypothesis; instead, it supports the counterintuitive argument that legalization can provide more, rather than less, social control. Street dealers generally do not check IDs, but as Jacques et al. suggest, Dutch coffee shop operators do because their licenses and incomes are contingent on following the rules. In the United States, by contrast, criminalized cannabis is easier for many high-school students to obtain than tobacco, alcohol, or prescription drugs, which are legal but regulated.

Criminologists well understand that criminalization can amplify inequality. In describing their interviewees, Jacques et al. report that although two thirds of their coffee shop and café operators are White, three fourths of their street dealers are Black, the latter also more often immigrants who reported lower levels of education and a higher frequency of criminal records. Rational choice theory suggests that if criminalization laws are designed to make illicit drug selling as dangerous as possible to deter would-be dealers, we should not be surprised when those who enter that line of work are more desperate. Choices are always made under the constraints of context. Although the Netherlands has substantially less inequality than the United States, immigrants and ethnic minorities there still have fewer licit opportunities. The hypothesis would follow that the marginalized are more likely to find their way into the illicit crevices created by prohibition, where there is often a lower cost of entry, higher income, and greater autonomy and dignity than in the legal economy. Moreover, in the United States, well-documented patterns of racially discriminatory drug law enforcement have made minor drug arrests a key gateway to mass incarceration, with all the negative consequences that flow from that. More research is needed to see whether this is the case in other comparable democracies. Future studies would perform a great service if they investigated the degree to which prohibition laws function as an adjunct mechanism of marginalization in other societies. If they do not, it would be even more important to learn how this tendency was avoided.

Jacques et al. observe that Dutch decriminalization of cannabis does “not appear to have increased cannabis use by natives.” Indeed, in 2009, the latest year for which national data are available, 25.7% of the Dutch population reported lifetime prevalence of cannabis use, whereas 7% reported last-year prevalence.
In the United States, by contrast, where roughly 700,000 citizens are arrested for marijuana possession each year, the latest data available show that 44.2% of the population reported lifetime prevalence of cannabis use, whereas 13.2% reported last-year prevalence . It is worth noting, too, that despite hundreds of coffee shops and decades of claims about cannabis serving as a “gateway” to harder drugs, the Netherlands has lower prevalence of other illicit drug use than the United States and many other European societies.

The Dutch evidence runs counter to the foundational claim of cannabis criminalization; prevalence data indicate that availability is not destiny after all. Although governments committed to criminalization are unlikely to fund such studies, much more research is needed on the relationship between drug policy and drug use prevalence and problems. Jacques et al. rightly argue that the “best way to adjudicate competing claims about the consequences of drug law reform is to conduct research in the settings where the reforms have taken hold.” Their argument centers on the effects of decriminalization on crime and violence in illicit markets. Their findings can be read as mixed. Future researchers will likely generate new findings that support, complicate, and qualify those reported here, showing variation across time, space, cultures, and the complex conjunctures of conditions that shape drug use patterns. But in one sense, the key policy significance of Jacques et al.’s study is simply that it was conducted at all, because its core question rests on a consequentialist conceptualization of drug policy: that drug policies must be evaluated on the basis of their actual consequences, not on their intent. Dutch drug policy has opened to empirical examination what has until recently too often remained unquestioned drug war orthodoxy. The Dutch case is complicated, and there is no guarantee that their model could simply be exported to other nations with the same relatively benign results. But the Netherlands provides as good a window as we have on what an alternative drug policy future may look like. As cannabis becomes legalized in more places, its commercialization may yet cause the sky to fall. But the evidence to date, both from the Netherlands and U.S. states, suggests no need to duck for cover just yet. Jacques et al. note that reducing crime and violence in illicit drug markets is not the only objective of Dutch drug policy nor, I would add, the most important.
The “other objectives” their study does not directly address include avoiding or reducing the harms of stigma, marginalization, and other negative consequences of criminal punishment. Two odd metaphors catch at the difference between Dutch and U.S. drug policy in this regard. President Lyndon Johnson once famously said of FBI Director J. Edgar Hoover, “better to have him inside the tent pissing out than outside the tent pissing in.” For a century, the United States has pursued drug policies designed to deter use by stigmatizing, punishing, and ostracizing users. In effect, we push them out of the societal tent and then are perplexed when they cause problems, so we pass tougher laws, and so on. Since 1976, drug policy in the Netherlands has been designed to keep illicit drug users inside the societal tent. Compared with the United States, the Netherlands has a stronger welfare state, more social housing, national health care, and greater accessibility of treatment, which result in less poverty, homelessness, addiction, and crime.

In thinking about U.S. drug policy, my Dutch colleagues often use a “stopped-up sink” metaphor: “Americans keep feverishly mopping the floor, but the faucet is still running.” The day I was finishing this article, two stories appeared simultaneously in the New York Times. The first was about an extraordinary letter to UN Secretary General Ban Ki-moon on the eve of the UN General Assembly Special Session on Drugs. The letter urged an end to the war on drugs as a failed public health policy and a human rights disaster. It attracted more than 1,000 signatures, including those of former UN Secretary General Kofi Annan; former President Jimmy Carter; Hillary Clinton; senators Bernie Sanders, Elizabeth Warren, and Cory Booker; legendary business leaders like Warren Buffett, George Soros, and Richard Branson; former presidents of Switzerland, Brazil, Ireland, and ten other former heads of state; former Federal Reserve Chair Paul Volcker; hundreds of legislators and cabinet ministers from around the world; Nobel Prize winners; university professors; and numerous celebrities. All attendees at the Special Session were given copies of the letter. The UN ordered all copies confiscated. The second article provided vivid testimony as to why such a letter was necessary: the U.S. Supreme Court refused to hear the appeal of a 75-year-old disabled veteran serving a mandatory sentence of life without parole for growing two pounds of cannabis for his own medical use, a fact uncontested by the prosecutor. Such grave injustices have allowed the Drug Policy Alliance and a growing number of other nongovernmental organizations to mount a drug policy reform movement of unprecedented scale. Stopping the drug war and the mass incarceration it helped spawn has become a top priority for the civil rights movement, from the NAACP to Black Lives Matter. Voters in the United States and elsewhere are slowly taking matters into their own hands.
Medical marijuana laws have been passed in 24 states, and cannabis has been legalized under state law in Colorado, Washington, Alaska, Oregon, and Washington, DC. Voters in California, Arizona, Massachusetts, and perhaps other states are set to vote on cannabis legalization initiatives in November 2016. Most European countries have embraced at least some harm reduction policies. Portugal, Uruguay, Australia, the Czech Republic, Italy, Germany, and Switzerland have moved toward decriminalization of cannabis in one form or another. Former drug war allies across Latin America are in revolt against U.S.-style prohibition. These are the sounds of the American drug war consensus collapsing. Global drug policy is at an historic inflection point, and it is trending Dutch.

ICU patients frequently receive opioid and benzodiazepine medications to treat the pain, anxiety, and agitation experienced during a critical illness. Trauma ICU (TICU) patients may require high and/or prolonged doses of opioids to manage pain associated with multiple open wounds, fractures, painful procedures, and/or surgery. They may also require benzodiazepines to prevent or manage anxiety and agitation and to facilitate effective mechanical ventilation. Although the effect of different pain and sedative medication regimens on TICU patients is unclear, prior evidence suggests that administration of opioid and benzodiazepine medications in the ICU setting is associated with the development of many complications, including delirium, and with poor patient outcomes. Exposure to high or prolonged doses of opioids and benzodiazepines may also contribute to both drug tolerance and physical drug dependence.
Once drug dependence has developed, patients are at risk for withdrawal syndrome (WS), a group of serious physical and psychologic symptoms that occur upon the abrupt discontinuation of these medications.

Unlike in the PICU patient population, physical dependence during drug weaning of adult ICU patients exposed to prolonged doses of opioids and benzodiazepines has received little study. Indeed, there is a large discrepancy in the amount of literature regarding WS in the adult versus PICU populations. There are two descriptive studies with retrospective chart review designs and small samples, in adult surgical-trauma ICU patients and in mechanically ventilated burn ICU patients. Cammarano et al found that 32% of their sample developed WS after prolonged exposure to high doses of analgesics and sedatives. Brown et al found that all mechanically ventilated burn patients who received opioids and benzodiazepines for more than 7 days developed WS. In a prospective experimental study of major abdominal and cardiothoracic postsurgical ICU patients, 35% who received a combination of opioids and benzodiazepines developed marked withdrawal, compared with 28% who received a combination of opioids and propofol. These three studies were reported more than 1 decade ago, prior to the currently recommended change in sedative management. A recent prospective study of 54 TICU patients showed a lower occurrence of iatrogenic opioid WS than in previous studies.


Age is one such confounder that is well known to associate with coagulation and risk for disease outcomes

Increasing evidence indicates multiple physiologic regulatory systems are dysregulated in frailty, with a greater number of dysregulated systems increasing the odds of frailty. Frailty appears to be the result of a breakdown in the homeostasis that is required for an organism to remain resilient. Despite the growing body of research concerning frailty, questions remain, particularly for older PWH. There is evidence of an interplay between frailty and disease, especially catabolic diseases. The specific system drivers that start the syndrome of frailty, such as inflammation, need to be better understood. It also is unclear if there are shared etiologic factors between frailty and disease and how these interact with one another, especially in the case of HIV. The implications for the prevention and treatment of frailty are complicated by multiple co-occurring events: treatments that can alter multiple pathways, such as physical activity, will be most effective in reducing or preventing frailty. Ultimately, successful frailty prevention will involve interventions that target the underlying physiology and biology, which might be unique for older PWH. 
Frailty provides a window into the biology of vulnerability by allowing us to examine the resources and resilience of PWH who can rebound in the face of stressors to the system.

A key contributor to the current spectrum of end-organ disease risk among older PWH entails persistent abnormalities in coagulation activation, despite effective ART. Older PWH are well known to be at excess risk for venous thromboembolism, and atherosclerotic cardiovascular disease is now a leading cause of morbidity and mortality among PWH. Still, beyond these classic manifestations of macrolevel venous or arterial thrombosis, elevations in circulating D-dimer levels also are associated with increased risk for end-stage liver or renal disease, the frailty phenotype, all-cause mortality, and other grade 4 adverse events related to end-organ injury among PWH.

In this context, a central underlying question is how low-level persistent hyperactivation of the coagulation system contributes to excess risk across a wide spectrum of disease, beyond macrolevel thrombosis. When studying potential causal associations between HIV-associated coagulopathy and end-organ disease risk, there are several important considerations and limitations when interpreting data from observational studies. Two examples are the potential influence of confounders and mediators on associations between host factors and clinical risk. Confounders are not on the causal pathway but are associated with both the outcome and the biomarker or exposure of interest. Mediators, however, are more informative in this context, as they are directly or partially on the causal pathway, such that they may account, at least in part, for the association between HIV-associated coagulopathy and end-organ disease risk. Mediators specific to HIV disease that may contribute to coagulopathy include direct effects from viral replication as well as persistent immune depletion and loss of a protective barrier at the level of mucosal surfaces. We have previously shown that HIV viremia increases procoagulant factors and concurrently decreases anticoagulant factors, with the resulting alterations in coagulation factor composition then associated with greater predicted thrombin generation and mortality risk. These HIV-associated changes are very similar to the changes in coagulation profiles that occur with advancing age. HIV disease is also characterized by immunologic depletion at effector sites in the gastrointestinal tract, and other secondary lymphatic tissues contribute to the loss of mucosal integrity, which largely persists despite ART. This loss of mucosal integrity contributes to ongoing immune activation and low-level hypercoagulation due, in part, to microbial antigens translocating across mucosal surfaces, resulting in endotoxin-mediated activation of tissue factor pathways. Pathologic alterations to coagulation profiles and low-level endotoxemia then represent potential mediators on the causal pathway from HIV disease to coagulopathy. If an HIV-associated coagulopathy increases risk for thrombosis, it follows that risk for ischemic cardiovascular disease and mortality may be increased in this context.
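The confounder-versus-mediator distinction can be illustrated with a toy simulation (all values invented, not the study's data): when a marker lies on the causal pathway between exposure and outcome, the crude exposure–outcome association largely disappears after conditioning on the marker, whereas adjusting for a true confounder would instead remove a spurious association.

```python
import math
import random

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def partial_corr(x, y, given):
    """Correlation of x and y after removing the linear effect of `given`."""
    rxy, rxg, ryg = corr(x, y), corr(x, given), corr(y, given)
    return (rxy - rxg * ryg) / math.sqrt((1 - rxg**2) * (1 - ryg**2))

# Toy causal chain, exposure -> mediator -> outcome (labels are hypothetical,
# e.g., viremia -> coagulation marker -> end-organ injury).
rng = random.Random(1)
exposure = [rng.gauss(0, 1) for _ in range(5000)]
mediator = [e + rng.gauss(0, 0.5) for e in exposure]
outcome = [m + rng.gauss(0, 0.5) for m in mediator]

crude = corr(exposure, outcome)                       # strong association
adjusted = partial_corr(exposure, outcome, mediator)  # near zero: pathway blocked
```

This is only a linear sketch of the idea; formal mediation analysis in observational HIV cohorts requires stronger assumptions and methods than a partial correlation.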

However, other end-organ diseases cannot be explained by macrovessel thrombosis, and other explanations must be sought. Additional pathways have therefore been explored, whereby HIV coagulopathy could cause end-organ disease by driving inflammation-associated tissue injury. This hypothesis entails a cross-talk between coagulation and inflammation that is mediated by clotting factors activating protease-activated receptors (PARs), which are expressed on leukocytes and on vascular surfaces. Activation of PAR-1 and/or PAR-2 signaling, in part, drives inflammation and injury within end-organ tissues. However, this hypothesis has been tested and not supported in two proof-of-concept randomized trials, studying the direct-acting oral anticoagulant edoxaban and the PAR-1 antagonist vorapaxar. Alternatively, it is possible that a disease state may contribute to alterations in coagulation, such that the development of end-organ dysfunction, whether HIV related or not, would itself further contribute to a coagulopathy. In summary, current data support that chronic HIV disease contributes to a coagulopathy, but further research is needed to better understand the mechanisms by which this coagulopathy contributes to end-organ pathology and to clinical manifestations resembling those of aging among older PWH.

There is a long history of medicinal use of cannabis, going back millennia. Political shifts in the early 20th century resulted in the criminalization of cannabis. In the 1990s, there was persistent anecdotal evidence that cannabis mitigated HIV-related symptoms, such as nausea, vomiting, and wasting, alongside simultaneous political shifts favoring access to medical cannabis. In 1996, the Compassionate Use Act was passed in California, which allowed the use of medicinal cannabis and spurred a call for greater research about the positive and negative effects of cannabis.
In 1999, the Medical Marijuana Research Act was passed in California, which provided resources and pathways to conduct rigorous studies on cannabis, including development of the Center for Medicinal Cannabis Research at the University of California, San Diego. In the decade that followed, cannabis researchers identified over 100 different cannabinoids in cannabis, the main two being tetrahydrocannabinol (THC), which is psychoactive, and cannabidiol (CBD), which is nonintoxicating.

These are the two compounds most often discussed and studied in cannabis research, including studies examining possible medicinal effects. In addition, the plant contains terpenoids, which contribute to the aroma and may act on serotonin, dopamine, and other receptors, and flavonoids, which contribute to the color of the plant and might have antioxidant and anti-inflammatory properties. To date, two primary cannabinoid receptors have been noted: CB1, which is highly prevalent in the brain as well as other body systems, and CB2. The body also has an endocannabinoid system, with two primary constituents being anandamide and 2-arachidonylglycerol. These endocannabinoids, unlike some other neurotransmitters, can be synthesized on demand and serve as signaling messengers to promote homeostasis. In 2017, The Health Effects of Cannabis and Cannabinoids National Academies report was released; it stated that in humans there is conclusive evidence that cannabis benefits chronic pain, spasticity associated with multiple sclerosis, and control of nausea; moderate evidence that cannabis helps improve sleep in those with chronic conditions; limited evidence that cannabis helps anxiety disorders and posttraumatic stress disorder; and no evidence that cannabis is effective as a treatment for diseases such as cancer, epilepsy, or schizophrenia. In most cases, the lack of evidence reflected a lack of substantive research having been completed, rather than necessarily negative findings. There are a number of studies, typically small, which provide data on the potential benefits of cannabis in PWH. A study by Abrams et al. found evidence that smoking cannabis reduces HIV neuropathic pain, and another by Wilsey et al. showed that both low and medium doses of vaporized cannabis were equally effective for neuropathy in other conditions. In a population without HIV, Wallace et al.
proposed a ‘‘window’’ for pain relief, such that too low or too high a dose of THC may have no effect, or even exacerbate pain, indicating that dosage is very important. Based on a retrospective observational study of PWH, there is a possibility of neuroprotective effects among moderate cannabis users compared with infrequent and frequent users. If true, this is likely a benefit seen in individuals with an inflammatory condition, such as HIV, in which the anti-inflammatory effects may be helpful, rather than in individuals without such conditions.
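The dosing ‘‘window’’ idea, that intermediate doses relieve pain while doses outside the window do nothing or make pain worse, can be sketched as a purely hypothetical inverted-U curve. Every parameter here is invented for illustration; none comes from Wallace et al. or any clinical estimate.

```python
def predicted_relief(dose, peak=0.5, width=0.25):
    """Hypothetical inverted-U dose-response: predicted relief is maximal at
    `peak` and declines away from it, eventually turning negative (i.e., the
    dose worsens pain). Parameters are invented for illustration only."""
    return 1.0 - ((dose - peak) / width) ** 2

# Scan a grid of candidate doses to locate the most effective point in the window.
doses = [d / 100 for d in range(0, 101)]
best_dose = max(doses, key=predicted_relief)
```

The design point is simply that, under such a curve, titrating upward past the peak is predicted to reduce rather than increase benefit, which is the clinical caution the ‘‘window’’ framing conveys.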

In one study of PWH, having a diagnosis of cannabis use disorder was predictive of higher odds of being a ‘‘superager,’’ an adult who performs better than his or her peers, and on par with younger individuals, on cognitive tests. While there are data supporting positive effects of cannabis use, several challenges to conducting cannabis research remain. First, smoking as a delivery method is a challenge due to concerns regarding the safety of using combustible materials, secondhand smoke as an irritant, and difficulties in the standardization of dosing, to name a few. Second, the Drug Enforcement Administration (DEA) scheduling criteria and access regulations limit access to cannabis for research. Currently, plant-based THC is Schedule I, and the scheduling of synthetic THC and of synthetic and plant-based CBD varies widely. Furthermore, the University of Mississippi currently remains the sole source of plant-based cannabis for research in the United States. There is promise that the DEA will propose new regulations to expand the policy to allow additional sources. Finally, there has been a proliferation of CBD products; however, these are not available to researchers because they are federally illegal, and therefore no study can assess their efficacy or safety.
Questions regarding HIV, aging, and cannabis remain unanswered and are of particular importance due to the alterations in body composition, reductions in hepatic and renal drug clearance, cardiovascular and pulmonary changes, and so on, which can occur with both aging and HIV and interact with the effects of cannabis. Method of administration will also play a role in evaluating the risks and benefits of cannabis, as there are indications that long-term smoking may result in increased risk of lung-related diseases in PWH. More studies on the potential anti-inflammatory and neuroprotective qualities of cannabis are also needed to inform their potential uses among older PWH.

Loneliness, or the discrepancy between one’s preferred and actual social relationships, is different from social isolation and is hypothesized to serve as an evolutionary cue. Like hunger, the discomfort of loneliness encourages persons to seek out meaningful relationships, ultimately to enhance survival. Loneliness and social isolation are common in the United States, with estimates suggesting that nearly half of Americans report sometimes or always feeling alone and 40% report that their social relationships are sometimes or always not meaningful. Older PWH may experience slightly higher rates of loneliness than HIV-seronegative persons, with estimates ranging from 39% to 58% depending on the population evaluated. Older PWH who report loneliness are more likely to smoke cigarettes, use alcohol or other substances, and have low social support, depressive symptoms, and poor to fair quality of life. Loneliness and social isolation increase the odds of an early death by 26%–45%, an impact similar to that of smoking 15 cigarettes a day. The effect of loneliness and social isolation on health appears secondary to stress-induced cortisol dysregulation.
Persons who are lonely demonstrate higher total peripheral vascular resistance and lower cardiac contractility. Immunologically, persons who are lonely display less natural killer cell activity, poorer immune responses to influenza vaccination, and increased circulating levels of cortisol. It is now apparent that chronically higher than usual levels of cortisol mediate the transcriptional response of glucocorticoid receptor pathways. Clinically, these conserved transcriptional responses to adversity result in a mixed picture of excess inflammation and immunosuppression that moderates the association between loneliness and social isolation and health. Protective factors that mitigate the impact of loneliness also exist and include wisdom, resilience, nostalgia, and eudaimonia, although the prevalence and impact of these factors in older PWH have not been studied. Overall, evaluation of loneliness in older PWH remains a significantly understudied topic. Further work that enhances our understanding of the true impact of loneliness and social isolation on the quality of life, health, and function of older PWH is needed to ultimately develop effective interventions for this potentially modifiable condition.

Finding a cure for HIV is an important consideration for all PWH. However, there are many unique challenges for cure research within the aging HIV population, including the impact of immunosenescence, increased rates of medical comorbidities, polypharmacy, and frailty. The potential benefits of cure for older PWH are also numerous: curing HIV could reduce stigma, improve psychosocial outcomes, and reduce harm associated with long-term ART toxicity and polypharmacy. An HIV cure could reduce inflammation, immune dysfunction, and tissue fibrosis, resulting in a significant reduction in morbidity for older PWH. At present, PWH older than 65 years are routinely excluded from HIV cure research, potentially limiting advances in the field.


They leave understandings of causality and attribution to their interpreters

This dark figure exists for two main reasons: victims fail to report crimes, and law enforcement agents are unable to detect crimes. There have been many attempts by law enforcement and criminologists to better estimate crime and diminish this dark figure through improved and new types of surveillance, anonymous reporting systems, and victimization surveys, like the National Crime Survey. More recently, law enforcement at international, national, and regional levels has attempted to detect crime by using remote sensing technologies. Using imagery collected remotely, from sensors onboard aircraft, unmanned aerial vehicles, and satellites, law enforcement agents have been able to assess where and when certain kinds of crimes have taken place. The use of remote sensing, the “observation of earth’s land and water surfaces by means of reflected or emitted electromagnetic energy” or, more simply, a method of “acquiring data about an object without touching it,” for surveillance and analysis has obvious benefits for law enforcement agencies. It greatly expands the supervision of agents of the law in often remote or inaccessible places, reduces the exposure of these agents to dangerous circumstances on the ground, and may make up for a lack of manpower. At the same time, using remote sensing has at least three serious limitations. First, and perhaps most obviously, remotely sensed images that are gathered from overflying helicopters, aircraft, or satellites can only detect crimes, or crime’s impacts, that are visible from above and for sustained periods of time. For example, remote sensors can identify illegal logging, large-scale drug production, and trails in the desert, but they would be much less likely to detect murder, assault, homicide, robbery, or other small-scale, undercover, rapid actions, though some attempts have been made to capture the lasting effects of these things; for examples, see Pringle and others.
Second, remote sensing cannot record the social, political, economic and historical context of landscapes and the actions that take place within them. Crime and criminals are subjective, spatially delineated and historically contingent categories.

They are not, nor have they ever been, pre-determined or natural classifications. As laws, land use regulations, as well as national and local power relations shift, so do the definitions of crimes and criminals. Thus, remote sensing cannot detect crime as it might detect a stand of a certain tree species: crimes, their perpetrators and their forms are defined by the dominant forces in society rather than by spectral signatures or texture patterns. Because remotely sensed images are collected at a distance, they lack detailed or nuanced definitions of crime drawn from the context of the landscapes they seek to analyze; they do not tell us why certain things happened or by whom, specifically. Despite the serious imbalances and problems that may arise from the remote sensing of crime, it continues apace, as we have seen from increasing discussions in the popular press and academic journals about the use of unmanned aircraft systems, the increasing availability of micro-satellites and the use of Google Earth images in the detection of crime. The continued and increasing use of remote sensing for these purposes brings us to the third limitation that we will mention here: the issue of validation. As remote sensing scholars, such as Jensen, Congalton and Foody, note, validation is a critical part of any remote sensing exercise, and these scholars and others have laid out strict protocols for validation exercises. Validating that crimes are actually occurring in the places that remote sensing algorithms say they are is not a simple task, however. On-the-ground verification of potential illicit drug production, arms and drug smuggling or even illegal logging, activities which are often protected by, or associated with, armed guards or agents, is often dangerous.
The lack of validation in the remote sensing of crime is troubling, however, because drastic military or police actions are often used to intervene where crimes are detected with lasting ecological, economic and social impacts: lives, security and livelihoods can be at stake, not to mention law enforcement credibility and resources. In short, classifying an action as a crime or a person as a criminal may have much higher costs than other classification mistakes. Thus, we must be doubly sure of what we classify as crime using remotely sensed images before we act. Further, such validation may add nuance and greater contextual understanding of the images used for analysis, which may allow for a more fair and balanced law enforcement response. Although all three of the above limitations are important to consider, this paper will take a methodological approach to engage with the issue of the validation of remotely sensed crime.
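Validation of this kind is usually reported through standard map-accuracy statistics. As a minimal sketch (the confusion matrix below is hypothetical, not data from any study cited here), overall accuracy, user's and producer's accuracy, and Cohen's kappa can be computed as:

```python
# Standard map-accuracy metrics from a confusion matrix.
# Rows = classes predicted by the classifier; columns = classes
# observed on the ground during validation. Counts are invented.

def accuracy_metrics(matrix):
    """Return overall accuracy, user's and producer's accuracy, and kappa."""
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    diag = sum(matrix[i][i] for i in range(k))
    row_tot = [sum(matrix[i]) for i in range(k)]
    col_tot = [sum(matrix[i][j] for i in range(k)) for j in range(k)]
    overall = diag / n
    users = [matrix[i][i] / row_tot[i] for i in range(k)]      # commission error view
    producers = [matrix[j][j] / col_tot[j] for j in range(k)]  # omission error view
    expected = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)
    kappa = (overall - expected) / (1 - expected)
    return overall, users, producers, kappa

# Hypothetical counts for "illicit crop" vs "other land cover"
cm = [[40, 10],   # predicted illicit: 40 truly illicit, 10 other
      [5, 45]]    # predicted other:    5 truly illicit, 45 other
overall, users, producers, kappa = accuracy_metrics(cm)
print(overall, users, producers, round(kappa, 3))  # kappa of 0.7 here
```

The asymmetry the text describes shows up in these numbers: an overall accuracy of 85% may still hide a commission error rate (here, 20% of "illicit" predictions are wrong) that is intolerable when each false positive can trigger a police or military response.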

We believe a focus on validation is critical, because as remotely sensed products become increasingly available on our desktops and smartphones, a rising trend of validation-free analysis is emerging. In these circumstances, products like Google Earth are used with the assumption that their images portray “the truth”, which should be acted upon. Despite the ease with which these data now flow to us, validation of our findings based on these images remains critical; competing sensors, processing methodologies and the familiarity of analysts with the limitations of the data they are using can present very real challenges to the ethical and accurate use of remote sensing in law enforcement and/or litigation. In this paper, we will first analyze how remote sensing technologies have been used to aid in the detection of crimes that might otherwise go undetected. As other authors have shown, “satellite imagery highlights the spatial footprint of human actors in very real and compelling ways”. Here, we review the literature that discusses how satellite and airborne technologies have been used in the active detection of felony cases of drug production, smuggling and extra-legal migrations. We use the term “extra-legal” here, rather than “illegal,” in order to highlight the fact that though these acts are prohibited by USA or international law, the prohibition of these actions is often highly political and may not be deemed illegal in all cultures or by all groups. Forensic remote sensing has also been critical in the detection of environmental crimes, such as extra-legal mining and timber extraction, as well as in detecting oil spills and hazardous waste dumping.
While the use of remote sensing in environmental forensics of this kind is important, many of the articles on these topics are embedded in larger land-clearance, deforestation and oceanographic literatures that deal with licit, illicit and accidental extraction or pollution, making the attribution of legality associated with the event difficult. Forensic remote sensing can also be used to identify the location of single and mass grave sites, but because most of these studies are experimental or historically oriented, we excluded them from our review. Remote sensing has also been used to find bodies, munitions and toxic waste that may have drifted, based on water-current analysis.

While our scope is narrower than that of forensic remote sensing, we do draw upon the advances in crime detection and validation that these studies have made in our analysis. Second, building on this literature review, we consider what kinds of validation protocols for the remote sensing of crime have been attempted and what the limitations of these protocols are, geographically and financially, as well as in terms of personnel and time. Third, we seek to generate a discussion of new and less traditional ways that crime may be sensed remotely or validated. While “first order” validation protocols, such as the collection of ground reference data, overflights and the use of higher spectral or spatial resolution images, are critical to assessing the accuracy of remotely sensed processes, they may not always be useful, possible or sufficient in the context of criminal investigations. Here, we propose going beyond the “first order” validation protocols that are standard in remote sensing to ensure that accurate assessments of remotely sensed crime occur in ethical and contextually-situated ways. We define the remote sensing of crime as the use of airborne and satellite imagery to detect crimes that have heretofore gone unreported or undetected. Lein describes forensic remote sensing as considering “the investigative use of image processing technology to support policy decisions regarding the environment and the regulation of human activities that interact with environmental process and amenities.” In this definition, the term “forensic” refers to detailed investigation rather than a criminological one. As Lein points out, forensic remote sensing seeks to generate information pertaining to a specific event rather than “provide a broad thematic explanation”. As we note above, however, not all crimes are well suited to detection by remote sensing.
Those crimes that have been most successfully detected using remote sensing technologies generally have the following three characteristics: first, they occur over relatively large geographic areas, so that their patterns may be easily detected, even with moderate or low spatial resolution imagery, like Landsat or MODIS; second, the crimes or their evidence are generally visible for extended periods of time, allowing for their detection by satellites or airborne sensors over the length of a day, week or month; and third, they generally have characteristic spatial or spectral patterns that can be recognized from above using object-based analysis or spectral analysis. This paper focuses on the utility of remote sensing in detecting crimes that are deemed a felony offense under U.S. federal law and are recognized as crimes internationally: arms, drug and human trafficking, repeat extra-legal migration and drug production/possession. While there exists a plethora of academic papers that test methods that could theoretically be used for the remote sensing of crime—testing algorithms, detection techniques or spectral reflectances of illicit crops and smuggling trails—there are relatively few studies that document the use of remote sensing in the active reconnaissance of criminal activities. In this section, we review studies of active reconnaissance that exist in peer-reviewed journals, as well as in the gray literature, in relation to drug production, smuggling and extra-legal migrations. The characteristics of these activities fit those described above: they often occur at large geographic and temporal scales and may be uniquely identifiable from the surrounding landscape using aerial images. Because of these attributes, they represent the most common examples in papers regarding remote sensing used in the active detection of crimes.
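The third characteristic, a recognizable spectral pattern, can be illustrated with a toy nearest-signature classifier. The reference spectra and pixel values below are invented for illustration and do not correspond to any real sensor, band set, or crop:

```python
# Toy spectral-signature matching: assign a pixel to the reference class
# whose mean spectrum is closest in Euclidean distance. All reflectance
# values are hypothetical (four made-up bands), for illustration only.
import math

references = {
    "illicit_crop": [0.05, 0.08, 0.45, 0.30],
    "forest":       [0.03, 0.04, 0.50, 0.20],
    "bare_soil":    [0.20, 0.25, 0.30, 0.35],
}

def classify(pixel):
    """Return the reference class with the smallest spectral distance."""
    def dist(ref):
        return math.sqrt(sum((p - r) ** 2 for p, r in zip(pixel, ref)))
    return min(references, key=lambda label: dist(references[label]))

print(classify([0.06, 0.09, 0.44, 0.29]))  # prints "illicit_crop"
```

Real operational systems use far richer features (texture, phenology, object shape), but the core logic is the same: a pixel or object is labeled "crime" because its signature matches a template, which is exactly why the validation discussed above is indispensable before acting on such a label.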

We reviewed 61 papers, reports from the United Nations Office on Drugs and Crime and master’s theses on these topics that were found through searches in the Google Scholar, Web of Science and JSTOR databases using a number of combined words and phrases. Some of these reports involved multiple case studies. Though, as Figure 1 shows, there were thousands of results returned by these combinations of search terms, very few of them dealt with the active reconnaissance of crimes using remote sensing. We do acknowledge that there are probably many more reports and papers available on this topic in the law enforcement literature that are not available to the public. Government agencies, like Homeland Security, the Federal Bureau of Investigation and the Central Intelligence Agency, as well as international law enforcement agencies, like Interpol, may have extensive documentation on these topics that we were unable to access. Most prevalent in the literature on the remote sensing of crime were studies on the detection of the cultivation of illicit substances. While the criminalization of each of these plants and their use is fraught with important political, cultural, economic and militaristic implications, an in-depth discussion of the reasoning behind these criminalizations and their ethics is beyond the purview of this article. Rather, we narrow our focus to the application of remote sensing products to actively detect “crime”, as it is construed by international or national governing powers. The use of remote sensing to detect the cultivation of illicit crops is a trend that has increased over time, perhaps because of the opening of the Landsat archives in 2008, and perhaps because of interest in opium growing in Afghanistan and South East Asia.
We gathered the publicly available literature on the remote sensing of drug production in Afghanistan, Myanmar, Thailand, Laos, Bolivia, Colombia and Peru, countries targeted for drug production monitoring both by the UN’s Office on Drugs and Crime and by academic researchers, due to these countries’ historically high exports of illicit substances.


Cigarette smokers have elevated rates of both caffeine and marijuana use

Once a patient is deemed stable for discharge from the ED by the trauma service, the rest of the patient’s care is up to the discretion of the emergency physician, which includes any and all medication prescriptions and ultimate disposition decisions. Lastly, as a supplementary analysis to look more specifically into potential associations with THC use, we compared opioid prescriptions across three separate groups: patients with negative toxicology screens for THC, patients with positive screens for THC, and patients without a toxicology screen. The study population was divided into five subgroups: negative urine and serum toxicology screen; depressants; stimulants; mixed; and no toxicology screens. The median total MME for the five separate subgroups was as follows: none; depressant; stimulants; mixed; and no toxicology screens. The median total number of pills for the five separate subgroups was as follows: none; depressant; stimulants; mixed; and no toxicology screen. When comparing the 103 patients from whom toxicology screens were obtained to the 255 patients without toxicology screens, we found no statistically significant differences in the total prescribed MME or in the number of pills prescribed. Notably, none of the 103 patients who had toxicology screens were prescribed naloxone upon discharge. We also looked into whether the type of injury had any association with opioid prescriptions. Our data, shown in Table 2 below, indicate there was no statistically significant difference in total prescribed MME or number of pills prescribed when comparing patients with fractures, dislocations, or amputations. As a supplementary analysis, we aimed to determine whether or not the presence of THC on urine toxicology screens was associated with an increase or decrease in the number of pills and total MME prescribed. The median total prescribed MME for patients with urine toxicology screens positive for THC was 87.5.

The median for patients with urine toxicology screens negative for THC was 75.0, and there was no statistically significant difference between the two groups. The median total number of pills for patients with urine toxicology screens positive for THC was 15.0. The median total number of pills for patients with urine toxicology screens negative for THC was 15.0, and there was no statistically significant difference between the two groups. At our Level I trauma center it is routine to obtain urine and serum toxicology screens for trauma activations. Most often, the results of these toxicology screens are not pertinent and will not significantly affect the patient’s disposition. However, previous reports have suggested that in some circumstances the urine drug screen is of utility in improving patient care by identifying patients who are at risk for diversion and mismanagement of controlled substances.33 Our results did not substantiate these reports. For context, providers in California must consult the Controlled Substance Utilization Review and Evaluation System (CURES), the state’s prescription drug monitoring program, prior to prescribing Schedules II-IV controlled substances for the first time and at least once every four months thereafter if the patient continues to use the controlled substances.34 However, if prescribed in the ED, providers do not have to consult CURES if the quantity of controlled substance does not exceed a nonrefillable seven-day supply. In fact, it is common practice to prescribe less than one week’s supply and to consult CURES only if the prescriber has suspicion of diversion, misuse, or abuse. For these reasons we suspect CURES reports likely had limited to no effect on prescribing habits. A large-scale study based upon Medicaid State Drug Utilization Data found an associated decrease in the number of opioid prescriptions, dosages, and Medicaid spending in states that have legalized medical cannabis.30
A similar study found that in states that have legalized recreational marijuana, there was a notable decrease in opioid prescriptions of about 6.38%.35 Since then, several studies have failed to demonstrate similar findings in actual clinical practice, and many have actually found that cannabis use was associated with an increased risk of opioid use disorder and opioid misuse.36-39 In our study, we found no statistically significant difference in opioid prescriptions, in terms of either total MME or number of pills prescribed, between groups. Thus, we do not see that emergency physicians reduce or significantly change the quantity of prescribed opioids when urine toxicology screens are noted to be positive for THC. This was consistently true even when our study population was divided into different classes of toxicology results. There was also no difference in opioid prescriptions between these four separate groups. Thus, physician knowledge of prior drug use was not associated with a decrease in the total quantity of opioid prescriptions.
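Comparisons of median MME between groups like these are typically made with a nonparametric rank test. A minimal sketch of the Mann-Whitney U statistic follows, with invented prescription values rather than the study's data (a full test would also convert U to a p-value, e.g. via the normal approximation):

```python
# Mann-Whitney U statistic, counting pairwise "wins" of one sample over
# the other (ties count 0.5). All MME values below are hypothetical.

def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

thc_pos = [90, 75, 100, 87.5, 60]     # hypothetical total MME, THC-positive
thc_neg = [75, 80, 70, 75, 90, 65]    # hypothetical total MME, THC-negative

u = mann_whitney_u(thc_pos, thc_neg)
expected_u = len(thc_pos) * len(thc_neg) / 2   # expected U under the null
print(u, expected_u)  # 19.5 vs 15.0: close, consistent with "no difference"
```

A U close to its null expectation (n1·n2/2) is what "no statistically significant difference between the two groups" looks like at the level of the raw statistic.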

This may be explained in part by the legal status of cannabis in the state of California and may portend an overall reduction in the stigma previously endured by patients who used cannabis medicinally or recreationally. Another salient finding within these data was the absence of naloxone prescriptions for any patient in this study. In the state of California, Assembly Bill No. 2760 was passed on September 10, 2018, and took effect January 1, 2019. This bill mandates that opioid prescribers must offer a prescription of naloxone hydrochloride when the prescription dosage is 90 MME or more per day, when an opioid is prescribed concurrently with a benzodiazepine, and when the patient is at increased risk for overdose, which includes patients with a history of overdose, patients with substance use disorder, or patients at risk for returning to a high dose of opioid medications. We collected the data for our study prior to the enactment of this law. However, it is prudent to recognize that even within this law, there is no clear mandate on prescribing naloxone based upon toxicology results that imply higher risk of illicit drug use, such as urine drug screens that are positive for both opioids and benzodiazepines. We also found that, of the 103 patients who had toxicology screens performed, some were prescribed a total MME <90, and 46 were prescribed a total MME >90. Thus, had the law been in effect, 44.7% of these patients should have received a prescription for naloxone regardless of their drug screens, strictly due to the total MME prescribed. While this study was performed at an academic tertiary care center, if it were repeated at other community-based institutions, we could see similar patterns regarding the lack of naloxone prescriptions.
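The bill's criteria, as summarized above, amount to a simple disjunction, which can be sketched as follows (the function and field names are illustrative, not from any real EHR system or the study's data):

```python
# Sketch of the AB 2760 naloxone-offer criteria as summarized in the text:
# offer naloxone when the daily dosage is >= 90 MME, when an opioid is
# co-prescribed with a benzodiazepine, or when the patient is at
# increased risk for overdose. Inputs are hypothetical.

def naloxone_offer_required(daily_mme, benzo_coprescribed, increased_risk):
    """True if any AB 2760 criterion for offering naloxone is met."""
    return daily_mme >= 90 or benzo_coprescribed or increased_risk

print(naloxone_offer_required(100, False, False))  # True: dosage criterion
print(naloxone_offer_required(40, True, False))    # True: benzodiazepine
print(naloxone_offer_required(40, False, False))   # False: no criterion met
```

Note that, as the text points out, a positive urine drug screen by itself triggers none of these branches, which is the gap the authors highlight.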
Furthermore, we undertook this study in Orange County, California, a densely populated setting in Southern California that was ranked 17th out of 58 counties in the state for rates of prescription opioid deaths and unintentional injuries. Drug overdose was the largest contributor and the number one cause of death in patients between the ages of 15 and 44. One study that surveyed emergency providers at an academic, urban, Level I trauma center found that the factors most commonly influencing providers’ willingness to prescribe naloxone were the prevalence of prescribing these medications in their institution, or whether there was a strong mortality benefit. Sixty-two percent of prescribers endorsed that lack of training was a barrier to prescribing, and 52% cited lack of knowledge as a barrier. Thus, it is pertinent that, as a medical community, we focus on methods to improve research and education on naloxone so that prescribing can become a more common practice.

Several initiatives have been developed and described in the literature aimed at improving naloxone prescription rates. Some examples include screening questionnaires for patients, pharmacy-led opioid overdose risk assessments, and multidisciplinary teams with clinical nurse specialists for overdose education and naloxone distribution. In one study, a program was implemented within the electronic health record (EHR) system to search for keywords within nursing assessment notes to identify patients who were at high risk for opioid overdose. This then prompted the physician to consider naloxone prescriptions. Overall, the study found that since implementation of this integrated EHR programming, there was an associated increase in the rate of take-home naloxone prescriptions. Implementation of similar programming in EHRs could be used to flag patients with toxicology results positive for high-risk drug use such as benzodiazepines, other opiates, and alcohol. These flagged patients could then trigger a prompt to consider prescribing naloxone if the clinician attempts to prescribe an opioid. Given that some states have implemented mandates requiring the prescription of naloxone when prescribing opioid regimens greater than 90 MME, an additional prompt from the EHR recommending naloxone in these situations may prove useful to ensure compliance with local laws and practice guidelines.39

Despite the health risks and societal costs of cigarette smoking, the prevalence of smoking in the USA remains high at ∼19 %. Roughly 44 % of cigarettes are used by smokers with substance abuse/dependence and/or mental illness, and people with almost all substance abuse and mental illness diagnoses have elevated rates of cigarette smoking. Roughly half of smokers drink coffee and report drinking almost twice as much coffee per day as nonsmokers.
Similarly, among smokers, 57.9 % have ever used marijuana, and smokers are about 8 times more likely than non-smokers to have a marijuana use disorder, with cigarette smoking and marijuana use being associated even after controlling for potential confounding variables, such as depression, alcohol use, and stressful life events. Given the high comorbidity of smoking with both caffeine and marijuana use, it is important to better understand biological factors that may be associated with these co-occurrences. One of the most well-established effects of chronic cigarette smoking on the human brain is widespread upregulation of α4β2* nicotinic acetylcholine receptors (nAChRs). Recent studies using single-photon emission computed tomography and positron emission tomography have consistently demonstrated significant upregulation of these receptors in smokers compared to nonsmokers. These in vivo studies were an extension of much prior research, including human postmortem brain tissue studies demonstrating that chronic smokers have increased nAChR density compared to non-smokers and former smokers. Additionally, many studies of laboratory animals have demonstrated upregulation of markers of nAChR density in response to chronic nicotine administration. In a previous study by our group comparing nAChR availability between smokers and nonsmokers, we explored the effect of many variables, including caffeine and marijuana use. Both heavy caffeine and marijuana use were exclusionary, such that participants drank an average of 1.3 coffee cup equivalents per day and only 12 % of the study sample reported occasional marijuana use. PET results indicated that caffeine and marijuana use had significant relationships with α4β2* nAChR availability in this group with low levels of usage. Based on these preliminary findings, we undertook a study of the effect of heavy caffeine or marijuana usage on α4β2* nAChR density in cigarette smokers.

One hundred and one otherwise healthy male adults completed the study and had usable data. Participants were recruited and screened using the same methodology as in our prior reports, with the exception that this study only included Veterans. For smokers, the central inclusion criteria were current nicotine dependence and smoking 10 to 40 cigarettes per day, while for non-smokers, the central inclusion criterion was no cigarette usage within the past year. Heavy caffeine use was defined as the equivalent of ≥3 cups of coffee per day, and heavy marijuana use was defined as ≥4 uses of at least 1 marijuana cigarette per week. Exclusion criteria for all participants were as follows: use of a medication or history of a medical condition that might affect the central nervous system at the time of scanning, any history of mental illness, or any substance abuse/dependence diagnosis within the past year other than caffeine or marijuana diagnoses. Occasional use of alcohol or illicit drugs was not exclusionary. There was no overlap between this study and prior research by our group. During an initial visit, screening data were obtained to verify participant reports and characterize smoking history. Rating scales obtained were as follows: the Smoker’s Profile Form, Fagerström Test for Nicotine Dependence, Beck Depression Inventory, Hamilton Depression Rating Scale, and Hamilton Anxiety Rating Scale. An exhaled carbon monoxide level was determined using a Micro Smokerlyzer to verify smoking status. A breathalyzer test and urine toxicology screen were obtained at the screening visit to support the participant’s report of no current alcohol abuse or other drug dependencies. This study was approved by the local institutional review board, and participants provided written informed consent.


Another well-known issue in lipidomics is that various sources of contamination can give rise to artifacts

It is particularly surprising that we see no effect on narcotics, considering most medical marijuana patients specifically use cannabis as a substitute for narcotics. One explanation is that some medical marijuana users do not use for medical reasons: many of the MMIC holders in this particular database may use only for recreational purposes. To observe any further substitution effects, I used Equation 5.7 to regress alcohol-induced crude rates, drug-induced crude rates, and all other crude rates on MMICs and unemployment, still controlling for county and year fixed effects. Unlike with the arrest rate data, no substitution effects were found. Referring to the regression output in Table 5.9 for alcohol-induced deaths, MMICs actually had a statistically significant positive effect on alcohol-related deaths. The interpretation is that for every new medical marijuana user, the alcohol crude rate increases by 0.0068 deaths per 100,000. However, observing that zero is in the confidence interval and that the t-statistic is borderline significant, it is likely that there is no effect at all. While this is still a positive number, its suggested effect is so small that it becomes negligible. This can be determined by looking at the average crude rate for alcohol-related deaths, which is 15.8. There would have to be an additional 147 MMICs per 100,000 to increase this crude rate by 1 death per 100,000. This is a highly unlikely scenario, and it could therefore be dismissed. By applying this same model to drug-related deaths, we again get a statistically significant positive effect on the crude rate, shown in Table 5.10.
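The back-of-the-envelope calculation above can be reproduced directly, taking the reported coefficient of 0.0068 deaths per 100,000 for each additional MMIC holder at face value:

```python
# How many new MMIC holders would it take to raise the alcohol-induced
# crude rate by one death per 100,000, given the reported coefficient?
# Values are taken from the regression summary in the text.

coef_alcohol = 0.0068   # deaths per 100,000 per additional MMIC holder
mean_rate = 15.8        # average alcohol-induced crude rate per 100,000

mmics_per_extra_death = 1 / coef_alcohol
print(round(mmics_per_extra_death))   # 147 additional MMICs per extra death
print(mean_rate)                      # for scale: the mean crude rate itself
```

The same reciprocal logic applies to the drug-induced result, where the text reports 142 additional cardholders per extra death against a mean rate of 13.4.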

While this would typically suggest that marijuana is a complement to other drugs, the effect is, again, minuscule. With the average drug-induced crude rate of 13.4 deaths per 100,000, the number of medical marijuana cardholders would have to increase by 142 to cause 1 drug-related death. Similar to the effect on alcohol-induced mortality rates, this is a very unlikely event and can be disregarded. While drug- and alcohol-related deaths were affected slightly by medical marijuana, all other crude rates were not: there was no statistically significant effect when applying Equation 5.7 to all other crude rates.

Fatty acid ethanolamides (FAEs) are a family of endogenous lipid mediators whose chemical structures consist of a fatty acid moiety bound to ethanolamine by an amide linkage. These compounds are synthesized by cells throughout the body and control inflammation, appetite and food intake, learning and memory, and pain, among other functions.1 Palmitoylethanolamide (PEA) and oleoylethanolamide (OEA) suppress inflammation by activating the ligand-operated transcription factor peroxisome proliferator-activated receptor-α (PPAR-α). Anandamide acts as a partial agonist at cannabinoid receptor type 1 and 2 receptors and, therefore, belongs to the diverse family of lipid signaling molecules called endocannabinoids (ECBs). Due to their similar physicochemical properties, FAEs and other ECBs, such as 2-arachidonoyl-sn-glycerol, are usually coextracted from biological samples. The procedure for their analysis includes extraction with organic solvents followed by purification through solid-phase extraction and subsequent quantitation by liquid chromatography–mass spectrometry (LC/MS) or gas chromatography–mass spectrometry (GC/MS). FAEs are present in blood serum or plasma at the pmol per mL scale and in biological tissues in concentrations ranging from the pmol to nmol per gram scale.
A review of the literature, however, reveals that data from different laboratories, reporting concentration of FAEs in human serum from healthy subjects, often do not corroborate one another. In particular, reported levels of PEA and OEA in serum or plasma of healthy human subjects differ by up to two orders of magnitude, from 5 to 30 pmol per mL6–17 up to 200 pmol per mL of serum or plasma.

During the validation process of a new method for LC/MS analysis of FAEs and ECBs in human serum extracts, we observed unexpectedly high levels of PEA compared with data previously obtained in our laboratory. We suspected that these abnormal levels could be due to a recurrent contamination. We found that 5” Pasteur pipettes of most, if not all, commercial brands contain multiple contaminants detectable by LC/MS, including readily detectable quantities of a compound indistinguishable from PEA. A contaminant that is indistinguishable from PEA is present in glass Pasteur pipettes in amounts that are sufficient to interfere with analysis of biological samples. The contaminant was identified based on its LC retention time, accurate mass, and MS/MS fragmentation pattern, which were identical to those of authentic PEA. By contrast, only a negligible PEA contamination was found in 9” Pasteur pipettes. Furthermore, we traced the PEA contamination to the polyurethane foam used to package the pipettes, which transfers it to the glass pipettes by contact. In line with this finding, Oddi et al.22 recently reported that FAEs can be absorbed by plastic materials during laboratory assays. It is therefore conceivable that FAEs incidentally absorbed by plastics during industrial processes can be released later in organic solvents. Lastly, no other commonly analyzed FAEs or monoacylglycerols were found to be present in the pipettes. We have published GC/MS23 and LC/MS5 analytical methods for the quantitation of ECBs and other related FAEs and monoacylglycerols in biological samples, including human serum. Prompted by the need for a novel quantitative LC/MS method to analyze ECBs in blood, we reviewed the literature and noticed discrepancies in the reported concentrations of FAEs and ECBs in human blood serum and plasma.
The EC50 values for anandamide and 2-AG vary depending upon assay and tissue; however, it is important to note that the levels reported in Table 1 for both compounds in plasma/serum are below the apparent biologically active concentrations required to activate CB receptors. Regarding relative levels of PEA and OEA, a number of studies reported very similar concentrations for both compounds, whereas others reported PEA approximately twice as high as OEA. Regarding absolute values, two separate laboratories reported levels of PEA and OEA in plasma that were excessively high, reaching or exceeding the concentrations needed by these ligands to engage PPAR-α as agonists. PEA and OEA are, in fact, considered high-potency ligands of PPAR-α; in heterologous expression systems, these FAEs engage the receptor with median effective concentration values of 0.12 μM for OEA and 3 μM for PEA. In the abovementioned reports, although PEA levels did not exceed the EC50, levels of PEA were high relative to other reports. The steady-state concentrations of FAEs in plasma/serum of healthy individuals possibly reflect an equilibrium between ECBs released by peripheral tissues and their enzymatic degradation in the blood stream. In animal tissues, levels of PEA and OEA are present in the same order of magnitude; therefore, it was predictable to find a similar pattern in human serum or plasma, as also shown by the literature reports in Table 1. Surprisingly, in our preliminary experiments, the measured level of OEA was in agreement with most literature reports, whereas PEA was one order of magnitude higher than expected. This finding prompted us to carefully screen all possible sources of contamination, including solvents, reagents, and glassware used for lipid extraction and quantitative analysis. In this study, we identify glass Pasteur pipettes used to transfer solvents and lipid extracts as the source of PEA contamination.
The contaminant was identified as PEA by its exact mass and retention time in three similar but distinct chromatographic systems, as well as by its MS2 fragmentation pattern, all of which were identical to those of standard PEA. Furthermore, we show that PEA is present in the polyurethane foam that manufacturers use to wrap the pipettes before packing, from which it leaches onto the glass. Moreover, accurate exact-mass measurements with a deviation below 5 ppm unambiguously confirmed the identity of the contaminant as PEA. Quantitative assessment showed a PEA content of 33.4 ± 4.02 pmol per pipette. Unfortunately, none of the manufacturers whose pipettes were tested provides 5¾″ glass Pasteur pipettes that are contaminant free. Only 9″ pipettes from one vendor were free of PEA traces, allowing the use of these consumables in the overall procedure. The field of lipidomics is developing rapidly, but reproducible standard procedures across laboratories have not yet been established; it is therefore not uncommon for lipidomics data to differ among independent laboratories. These discrepancies are generally thought to result from the use of different instruments for lipid analysis, as well as from differing extraction and separation protocols. In this study, however, all results were confirmed by two independent laboratories using different LC systems and QQQ mass spectrometers. Furthermore, accurate mass data were acquired on a third instrument, a Shimadzu IT-TOF high-resolution mass spectrometer, for definitive confirmation that the contaminant was indeed PEA. Lipids, especially fatty acids, are common contaminants in detergents, mineral oils, greases, and plasticizers; hence, they are often present in laboratory equipment, including glassware and solvents. As shown in this study, the assessment of FAEs, a group of lipids with diverse signaling properties, is not sheltered from this pitfall.
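A rough calculation shows why 33.4 pmol per pipette matters in practice. Assuming, purely for illustration, a workflow with three pipette transfers of a 1 mL serum extract and an endogenous PEA level of 5 pmol/mL (both hypothetical figures, not from this study), the contaminant can dwarf the true signal:

```python
# Illustrative estimate of how pipette-borne PEA inflates a serum measurement.
PEA_PER_PIPETTE_PMOL = 33.4  # per-pipette contamination reported in this study

def apparent_fold_error(n_pipettes: int,
                        serum_volume_ml: float,
                        true_pea_pmol_per_ml: float) -> float:
    """Ratio of (endogenous + contaminant) PEA to endogenous PEA alone."""
    endogenous = true_pea_pmol_per_ml * serum_volume_ml
    contaminant = n_pipettes * PEA_PER_PIPETTE_PMOL
    return (endogenous + contaminant) / endogenous

# Hypothetical workflow: 3 transfers, 1 mL serum, 5 pmol/mL endogenous PEA.
print(f"apparent/true PEA = {apparent_fold_error(3, 1.0, 5.0):.1f}x")
```

Under these assumptions the apparent level is roughly 21-fold too high, which is consistent with the order-of-magnitude overestimate of PEA noted earlier, and illustrates why fractionation steps that concentrate the extract amplify the artifact.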
We have shown that glass Pasteur pipettes, commonly used in lipidomics laboratories to transfer lipid extracts and organic solvents, can contain PEA as a contaminant. This contamination gives rise to artifacts in the measurement of PEA in biological samples, especially when sample preparation includes fractionation of the lipid extract, which concentrates the contaminant. The scope of this study is to alert the ECB and FAE scientific community to possible PEA analytical artifacts; great care is therefore needed to exclude contamination when analyzing endogenous PEA levels in biological tissues.

Heavy and problematic alcohol use is highly prevalent among HIV-infected individuals, with estimates of use documented as high as 63% among HIV clinic patients. Indeed, the rate of alcohol use disorders among HIV-infected individuals is markedly higher than that observed in the general population. Importantly, both HIV and problematic alcohol use have significant implications for memory functioning, which is vital to successful adherence to complicated medication regimens and to the effectiveness of cognitive and behavioral interventions for HIV. Although previous research has shown that heavy alcohol use is associated with a host of negative HIV health outcomes, including poor medication adherence, increased immune suppression, reduced effectiveness of therapeutic regimens, faster HIV disease progression, lower survival rates, and worse health-related quality of life, less is known about its impact on self-reported memory functioning and HIV symptom severity. Further, greater HIV symptom burden is associated with reduced health-related quality of life, an outcome that has gained increased significance as treatments for HIV infection have improved.
Thus, to provide a richer clinical conceptualization to inform intervention and treatment efforts, it is important to determine how problematic alcohol use affects these domains in this already vulnerable population. HIV disease progression poses significant risk for compromised cognitive efficiency and memory functioning, and although antiretroviral therapy can reduce neurocognitive impairment, mild forms still persist in a large proportion of individuals with HIV. Of note, cognitive impairment, and memory dysfunction more specifically, is associated with worse treatment outcomes among HIV-infected individuals and is known to reduce the effectiveness of interventions aimed at optimizing adherence and reducing risk behavior. Memory dysfunction has also been observed among individuals with heavy drinking and alcohol use disorders, and these changes can persist even after an extended period of abstinence. The effects of alcohol on memory vary substantially between social drinkers and chronic alcoholics, and mild neurocognitive deficits are more notable at heavier drinking levels. These independent bodies of research suggest that the combination of problematic alcohol use and HIV may exert a negative additive or synergistic effect on neuropsychological indices of memory functioning. However, the literature speaking to these associations is mixed. Compared with healthy controls and participants with a single diagnosis, individuals with co-occurring HIV and alcohol dependence or abuse have been shown to perform worse on measures of immediate and delayed memory (WMS-R) and on selective memory processes. In contrast, Rothlind et al. did not observe differences on measures of verbal and visual learning and memory in a comparison of light/non-drinking and heavy-drinking HIV-infected individuals.
Similarly, no differences in verbal or non-verbal memory emerged in a comparison of HIV-infected and HIV-uninfected African Americans across non-drinking, light, moderate, and heavy drinking groups. Finally, no differences in learning and memory were observed in a comparison of HIV-positive and HIV-negative males with and without a history of alcohol abuse.
