The endocannabinoid system has been implicated in the control of nausea and vomiting.

We then use two approaches for computing the semantic similarity between them: cosine similarity computed between average token embeddings, and BERTSCORE, which performs a computation over BERT token embeddings of the tweet and misconception to obtain an F1-score-like measurement that we use as a similarity score. For the cosine similarity approach, we experiment with both non-contextualized and contextualized word embeddings. For non-contextualized word embeddings we use 300D GloVe embeddings trained on 2014 Wikipedia and Gigaword. For contextualized embeddings we use a pretrained BERT-LARGE model. However, since BERT is not trained on COVID-19-related text, we also use COVID-Twitter-BERT, which uses domain-adaptive pretraining on 160M tweets about COVID-19. For the sake of brevity, we append a suffix to models that use COVID-Twitter-BERT instead of spelling out the full model name.

We present the performance of the similarity models in Table 4.1. Average embeddings, with both GloVe and BERT, perform the worst. Although information-retrieval-based approaches, TF-IDF and BM25, considerably outperform the average embedding techniques, BERTSCORE captures the similarity comparably well. Domain adaptation, however, further improves the embedding-based similarity techniques, making average BERT embeddings as good as the other approaches and making BERTSCORE much more accurate than all other techniques. Thus, both domain adaptation and BERTSCORE are important for accurate misconception retrieval.

We illustrate the differences between the similarity models using example predictions in Table 4.2. The first example provides a challenging case of retrieval that requires taking both COVID-19 knowledge and contextual information into account, and thus only the BERTSCORE model is able to retrieve the correct misconception.
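To make the two retrieval scores concrete, here is a minimal runnable sketch. The 3-D vectors are made up, standing in for the 300D GloVe or BERT token embeddings used in our experiments, and the greedy-matching F1 is an unweighted stand-in for the full BERTSCORE computation, not its exact implementation:

```python
import numpy as np

# Illustrative 3-D vectors standing in for real 300-D GloVe / BERT
# token embeddings; all numbers here are made up.
EMB = {
    "salt":        np.array([0.1, 0.8, 0.3]),
    "water":       np.array([0.2, 0.7, 0.1]),
    "cures":       np.array([0.9, 0.1, 0.4]),
    "kills":       np.array([0.8, 0.2, 0.5]),
    "coronavirus": np.array([0.3, 0.3, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def avg_embedding_sim(tokens_a, tokens_b):
    """Cosine similarity between the mean token embeddings."""
    avg = lambda toks: np.mean([EMB[t] for t in toks if t in EMB], axis=0)
    return cosine(avg(tokens_a), avg(tokens_b))

def greedy_match_f1(tokens_a, tokens_b):
    """BERTSCORE-style F1: each token is greedily matched to its most
    similar token on the other side (no idf weighting, for illustration)."""
    recall = np.mean([max(cosine(EMB[t], EMB[s]) for s in tokens_a)
                      for t in tokens_b])
    precision = np.mean([max(cosine(EMB[t], EMB[s]) for s in tokens_b)
                         for t in tokens_a])
    return 2 * precision * recall / (precision + recall)

tweet = "salt water kills coronavirus".split()
misconception = "salt water cures coronavirus".split()
s1 = avg_embedding_sim(tweet, misconception)   # averaging blurs token detail
s2 = greedy_match_f1(tweet, misconception)     # token-level matching keeps it
```

Both scores are high for this near-paraphrase pair; the token-level matching in the second score is what lets a BERTSCORE-style measure separate pairs that differ in only one decisive token.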
The second example primarily requires the domain knowledge that ‘coronavirus’ and ‘SARS-CoV-2’ are very similar, and only domain-adapted models are able to score the correct misconception highest.

The last example shows a case where contextual embeddings outperform non-contextual embeddings.

Due to the lack of adequately large datasets for stance detection over pairs of sentences, we cannot use existing datasets to train models for our setup. However, since classes in misinformation detection correspond to those in natural language inference and fact verification, tasks with much larger training datasets, we instead experiment with zero-shot learning on these tasks. The COVID-19 Health Risk Assessment task, released after the pandemic started, also allowed us to experiment with few-shot learning by combining it with COVID-19 tweet-misconception pairs annotated for stance by researchers at the UCI School of Medicine.

Standard evaluation metrics like those we have used above can overestimate the real-world performance of NLP models, and do not reveal enough about the situations where models fail or how to fix them. To evaluate our stance detection models more rigorously, we use the matrix of linguistic capabilities and test types provided by CheckList for behavioural testing of NLP models. We evaluate our best zero- and few-shot models: BERTSCORE + SBERT trained on MNLI and CH+PA, respectively. We test the following linguistic capabilities: Robustness, Negation, Vocabulary, NER, Temporal, and SRL. We perform three types of tests: Minimum Functionality Tests (MFTs), which are simple ‘sanity checks’ of targeted capabilities; Invariance Tests (INV), consisting of perturbations that should not change model output; and Directional Expectation Tests (DIR), consisting of perturbations that should change model output in a specific way. For all tests, we use the COVID-19 misconceptions from COVIDLIES. For MFTs we construct simple tweets based on perturbations of the misconceptions, e.g., introducing a simple typo (“water” → “wtaer”): “Salt wtaer protects from coronavirus.” From the results of the tests in Table 4.5 we see that there are linguistic capabilities at which the zero-shot model is more competent than the few-shot model, and vice versa.
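The typo MFT above can be sketched as a simple perturbation function. This is our own illustration in the spirit of CheckList, not the CheckList library's implementation; the sentence and seed are arbitrary:

```python
import random

def typo_perturb(sentence, seed=0):
    """Swap two adjacent characters in one randomly chosen word,
    mimicking the typo MFT described above (illustrative sketch)."""
    rng = random.Random(seed)
    words = sentence.split()
    # pick a word long enough to contain a visible character swap
    idx = rng.choice([i for i, w in enumerate(words) if len(w) >= 4])
    w = words[idx]
    pos = rng.randrange(len(w) - 1)
    words[idx] = w[:pos] + w[pos + 1] + w[pos] + w[pos + 2:]
    return " ".join(words)

misconception = "Salt water protects from coronavirus"
tweet = typo_perturb(misconception)
# The MFT then asserts that a stance model still predicts Agree for
# (tweet, misconception): a single typo should not change the stance.
```

The same scaffold extends to the other capabilities: an INV test checks that the prediction is unchanged after the perturbation, while a DIR test checks that it moves to a specific target label.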
Notably, the few-shot model is more robust to typos and positive paraphrases, possibly because it is trained on informally written content. However, an alarming incompetency of the few-shot model arises when the constructed tweet is identical to the misconception: the output should obviously be Agree, which the zero-shot model correctly predicts 100% of the time, while the few-shot model does so only 91.9% of the time.

A possible explanation for why this happens is that the tweets in COVID-HeRA tend to repeat the headlines or claims they are paired with verbatim, even in the refute/rebuts category. Both models failed spectacularly at the NER tests, in which country names and numbers in misconceptions were perturbed to construct tweets: they always fail to predict No Stance in these cases. For INV and DIR tests we randomly sample 45 misconception-tweet pairs per class from COVIDLIES that both models label correctly. For INV tests we perturbed tweets in ways that should not lead to output changes, and for DIR tests perturbations were intended to induce a specific label switch. INV tests were more successful than DIR tests for both models.

In the social sciences, there have been recent efforts to quantify COVID-19 misinformation on social media, as well as experimental efforts to prevent the propagation of misinformation. At the same time, members of the NLP community have been developing tools for the automatic detection of COVID-19-related misinformation online. Serrano et al. detect YouTube videos spreading conspiracy theories using features of user comments, and Dharawat et al. classify tweets by the severity of the health risks associated with them. McQuillan et al. study the behaviour of COVID-19 misinformation networks on Twitter using mapping, topic modeling, bridging centrality, and divergence. Penn Medicine launched a chatbot to provide patients with accurate information about the virus, and a crowdsourced chatbot, Jennifer, is also available to answer questions about the pandemic.
We are the first to frame COVID-19 misinformation detection as a two-stage task of misconception retrieval and pairwise stance classification, and we add to this body of work by providing a dataset and benchmark models for the automated identification of misinformation.

There are several datasets for misinformation detection with binary veracity labels, for example, FakeNewsNet, consisting of news articles; Some Like It Hoax, consisting of Facebook posts; and PHEME, containing Twitter threads. Misinformation detection is also closely related to fact-checking, since both tasks aim to assess the veracity of claims. FEVER is a dataset of claim-evidence pairs with Supported, Refuted, or Not Enough Info labels to facilitate research in automated fact checking.

This is similar to Emergent, a stance classification dataset consisting of rumored claims and associated news articles with labels of For, Against, or Observing the claim. Stance detection is also the focus of the Fake News Challenge 1, consisting of pairs of news article headlines and body texts with Agrees, Disagrees, Discusses, and Unrelated labels. Our proposed models for detecting misinformation using classifiers fall within the framework of detecting misinformation using content features. Other approaches include using crowd behaviour, reliability of the source, knowledge graphs, or a combination of these approaches. Adapting these techniques to COVID-19 misinformation is a promising direction for future work.

The ongoing COVID-19 pandemic has been accompanied by a corresponding ‘infodemic’ of misinformation about the virus. It is important to develop tools to automatically detect misinformation online, especially on social media sites where the volume and speed of the spread are high. However, rapidly evolving information and novel language make existing misinformation detection datasets and models ineffective for detecting COVID-19 misinformation. In this work, to initiate research on this important and timely topic, we introduced COVIDLIES, a benchmark dataset containing known COVID-19 misconceptions accompanied by tweets that Agree, Disagree, or express No Stance towards each misconception, annotated by experts. We evaluate a number of approaches for this task, including common semantic similarity models for retrieval, accurate models trained on a variety of NLI datasets, and domain adaptation by pretraining language models on a corpus of COVID-19 tweets. We demonstrate that domain adaptation significantly improves results for both sub-tasks of misinformation detection. We also show that it is feasible to detect the stance of tweets towards misconceptions in both zero-shot and few-shot settings.
We showed that few-shot learning slightly improves stance detection when evaluated using standard aggregate performance metrics. However, further behaviour testing using CheckList leaves an unclear picture of which setting is better; both have considerable scope for improvement. Future work will involve improving performance on stance detection, preferably without reliance on methods that require a large amount of data collection, since such data are not quickly available in an emerging crisis. We plan to continually expand our annotated dataset by including posts from other domains such as news articles and Reddit, and misconceptions from sources beyond Wikipedia, such as Poynter. We invite researchers to build COVID-19 misinformation detection systems and evaluate their performance using the presented dataset.

Chemotherapy-induced nausea and vomiting can be classified into three categories: acute onset, occurring within 24 h of the initial chemotherapy administration; delayed onset, occurring 24 h to several days after the initial treatment; and anticipatory nausea and vomiting. Anticipatory nausea (AN) develops in response to chemotherapy treatments in which cytotoxic drugs are delivered in the presence of a novel context. Developing in approximately 30% of patients by their fourth treatment, AN has traditionally been understood in terms of classical conditioning. After one or more treatment sessions, a conditional association develops between the distinctive contextual cues of the treatment environment and the unconditioned stimulus of chemotherapy treatment, which results in the unconditioned response of post-treatment illness experienced by the patient. Subsequent exposure to the treatment environment then results in the patient experiencing a conditioned response of nausea and/or vomiting before initiation of chemotherapy treatment. Once it develops, AN has been reported to be especially refractory to anti-emetic treatment.
The evaluation of potential treatments for AN would be accelerated by the establishment of a reliable rodent model of nausea.

Although rats are incapable of vomiting, they display characteristic gaping reactions when exposed to a flavoured solution previously paired with lithium-induced nausea. In fact, this gaping reaction in the rat requires the same orofacial musculature as that required for vomiting in other species. Only drugs that produce emesis in species capable of vomiting produce conditioned gaping in rats, although many non-emetic drugs produce conditioned taste avoidance. Furthermore, anti-emetic drugs interfere with the establishment of conditioned gaping reactions elicited by a nausea-paired flavour, presumably by interfering with the nausea. Conditioned gaping in rats therefore appears to be a selective index of conditioned nausea.

Not only are flavour cues capable of eliciting conditioned gaping reactions when paired with lithium chloride (LiCl)-induced nausea in rats, but recently Limebeer et al. demonstrated that re-exposure to LiCl-paired contextual cues also elicits conditioned gaping reactions in rats. This paradigm more closely resembles that reported to produce AN in chemotherapy patients. Rats were injected with LiCl or saline before placement in a vanilla-odour-laced chamber, with lights and texture different from their home cage, on each of four trials separated by 72 h. To equate both groups for experience with illness, the rats in group unpaired were injected with LiCl, and those in group paired were injected with saline, 24 h after each conditioning trial, but were then simply returned to their home cage. When the rats were returned to the conditioning context 72 h after the final conditioning trial, rats in group paired showed the conditioned gaping reaction, as a measure of AN.

Although classical anti-emetic treatments such as the 5-hydroxytryptamine-3 antagonist ondansetron (OND) effectively reduce unconditioned nausea and vomiting, they are ineffective in alleviating conditioned nausea once it develops in humans.
Indeed, OND also did not suppress the conditioned gaping reactions displayed during re-exposure to the LiCl-paired context. Furthermore, using the emetic species Suncus murinus as an animal model of AN, pre-treatment with a dose of OND that was shown to alleviate acute vomiting did not reduce the display of conditioned retching reactions during re-exposure to a nausea-paired context. Thus, although OND has proven effective in reducing acute post-treatment nausea and vomiting, it does not appear to relieve conditioned nausea once it develops.

The psychoactive component in marijuana, delta-9-tetrahydrocannabinol, has been shown to interfere with the expression of vomiting in shrews and ferrets, and with conditioned gaping reactions elicited by a lithium-paired flavour in rats.
