Rice relative cover (RC) was significantly greater in the WS-Control and WS-AWD than in the DS-AWD system

If growers are to adopt alternative irrigation systems, understanding potential shifts in weed species composition will be critical to weed management. It is well documented that weed community composition can affect yields. The critical period of watergrass competition in California rice is the first 30 d after planting, and yields can be reduced by as much as 59% when watergrass is uncontrolled. However, critical periods of competition for other weed species are not known, and differences in composition between early- and late-season weed communities, as well as the impacts of late-season competition on rice yields, remain poorly characterized. Using two alternative irrigation systems adapted for California rice, the primary objectives of this research were: to determine weed community composition in rice under different irrigation systems; to determine whether there are differences between early and late weed communities within a system; and to quantify differences in yields between irrigation systems in both the presence and absence of weed competition.

Field preparation was standard for the California rice-growing region and consisted of chiseling twice, followed by disking twice, to prepare a level seedbed. In the water-seeded alternate wet and dry (WS-AWD) and water-seeded control (WS-Control) treatments, fertilizer was banded in strips by drill before seeding. Fertilizer applications in the drill-seeded alternate wet and dry (DS-AWD) treatment were broadcast approximately 1 mo after planting. In all years, nitrogen was applied at a rate of 171 kg ha−1. Drilled nitrogen was applied as urea, and broadcast N was applied as ammonium sulfate.

Phosphorus was applied as triple superphosphate at a rate of 86 kg ha−1 in 2012 and at a rate of 45 kg ha−1 in 2013 and 2014. Potassium was applied as potassium chloride at a rate of 25 kg ha−1 in 2013 and 2014 only. The WS-AWD and WS-Control fields were broadcast seeded onto dry soil at a seeding rate of 168 kg ha−1. The DS-AWD field was drill seeded to a depth of 2 cm at a rate of 112 kg ha−1 into dry soil. Rice seed for all treatments was pretreated with a 1-h soak in a 2.5% NaClO solution to prevent infection with Bakanae disease [Gibberella fujikuroi Wollenw.]. Plots in all irrigation treatments across all years were seeded with M-206, a Calrose medium-grain rice variety widely grown in the region. The three main-plot irrigation treatments were the DS-AWD, WS-AWD, and WS-Control. The DS-AWD treatment was initially flush irrigated for rice emergence and then flush irrigated once more when soil volumetric water content (VWC) reached 35%.

Immediately after the N fertilizer application at approximately 1 mo after planting, the DS-AWD was flooded to 10 cm above the soil surface, and water was held at that level to allow for N uptake. The WS-AWD and WS-Control plots were flooded to 10 cm above the soil surface within 24 h of broadcast seeding. The WS-AWD plot remained flooded until canopy closure of the rice, at which point water flowing into the system was shut off and the standing water was allowed to recede into the soil. Canopy closure of the rice was defined as the point at which photosynthetic photon flux density (PPFD) below the canopy reached or fell below 800 μmol m−2 s−1, which is approximately where subcanopy PPFD stabilized. PPFD was measured every other day using a line quantum sensor at 15.2 cm above the soil surface, below the rice canopy.
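As a rough illustration of the two management thresholds described above, the sketch below encodes the canopy-closure and flush-irrigation triggers. The function names, reading format, and the example values are assumptions for illustration only; the study used a line quantum sensor read every other day and hourly soil moisture readings, not this code.

    # Minimal sketch (assumed structure, not the authors' code) of the two triggers:
    #   - canopy closure: subcanopy PPFD <= 800 umol m-2 s-1
    #   - flush irrigation (DS-AWD, and WS-AWD after drain): soil VWC <= 35%

    PPFD_CLOSURE_THRESHOLD = 800.0   # umol m-2 s-1, measured 15.2 cm above the soil
    VWC_FLUSH_THRESHOLD = 0.35       # volumetric water content (fraction)

    def canopy_closed(subcanopy_ppfd: float) -> bool:
        """True once subcanopy light falls to or below the closure threshold."""
        return subcanopy_ppfd <= PPFD_CLOSURE_THRESHOLD

    def needs_flush(soil_vwc: float) -> bool:
        """True when the soil has dried to the flush-irrigation trigger."""
        return soil_vwc <= VWC_FLUSH_THRESHOLD

    # Hypothetical readings for one WS-AWD plot
    if canopy_closed(subcanopy_ppfd=760.0):
        print("Shut off inflow; let standing water recede into the soil.")
    if needs_flush(soil_vwc=0.34):
        print("Apply flush irrigation.")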

Canopy closure occurred at 47, 49, and 54 d after seeding (DAS) in 2012, 2013, and 2014, respectively. After being drained at canopy closure, both the WS-AWD and the DS-AWD treatment plots were flush irrigated again when soil VWC reached 35%. The WS-Control plot remained flooded until 1 mo before harvest, when it was drained to allow harvesting equipment onto the field. Soil VWC for irrigation purposes was measured at hourly intervals in each plot using EM5B data loggers and 10HS soil moisture sensors. The 35% VWC trigger was determined using the average of the three replicates for each treatment. Further management details can be found in LaHue et al.

In 2012 there were only minor differences between irrigation systems in the weed counts taken at 20, 40, and 60 DAS. There were no significant differences in population densities of watergrass species, small flower umbrella sedge, or ricefield bulrush between irrigation systems across all counts. Our results confirm previous research showing the plasticity of watergrass and its ability to germinate and emerge under both aerobic and anaerobic soil environments. Three weed species differed among irrigation treatments: redstem, ducksalad, and sprangletop. For redstem, there was an interaction between irrigation system and count timing. Redstem was not present in any system at 20 DAS, but at both 40 and 60 DAS, redstem density was greater in the WS-AWD than in the other two irrigation systems, and greater in the WS-Control system than in the DS-AWD system. The high redstem population in the water-seeded systems is consistent with earlier research showing redstem emergence under water-seeded but not dry-seeded systems. Ducksalad density was greatest in the WS-Control and WS-AWD systems, irrespective of count timing. Sprangletop density was greatest in the DS-AWD system across all counts, though it was significantly greater only than the density in the WS-AWD system. These results are not surprising, since sprangletop emergence is reported to occur only under aerobic conditions in California.

Since sprangletop emerged in both the WS-AWD and WS-Control systems, further investigation of the species is warranted to elucidate whether water depth may affect emergence under flooded conditions, allowing the species to emerge under a shallow flood. Both species of sprangletop found in California, bearded sprangletop and Mexican sprangletop [Leptochloa fusca Kunth ssp. uninervia N. Snow], emerged in rice flooded to a depth of 5 cm in Valencia, Spain. In Turkey, bearded sprangletop emerges in greater numbers and at a faster rate under flooded conditions than under dry conditions. Differences between weed counts at 20, 40, and 60 DAS indicate that certain species emerge at different times throughout the rice-growing season. Redstem did not emerge until 40 DAS in any irrigation system. Sprangletop emerged by 20 DAS in the DS-AWD system, but did not emerge in the two water-seeded systems until 40 DAS. All other weed species emerged in significant numbers by 20 DAS, after which plant density declined by 40 and 60 DAS, presumably through competition for light as the canopy closed.

Relative cover (RC) and relative dry weight (RDW). There were no significant interactions between irrigation system and year for either RC at canopy closure or RDW at harvest for any weed species or rice; therefore, only main effects are presented. RC of small flower umbrella sedge, watergrass species, and ricefield bulrush increased across systems from 2013 to 2014, though the increase in ricefield bulrush was not highly significant. The RC of rice also increased across all systems from 2013 to 2014. This increase may be due to the decrease in RC of ducksalad in 2014, since all other weed species increased in RC in 2014. In water-seeded Arkansas rice, ducksalad decreased yields by about 21% when germinating with rice. The decrease in RC of ducksalad in 2014 may be due to competition with other weed species, particularly watergrass, which had the greatest increase in RC of all weed species. There was a negative correlation between watergrass RC and ducksalad RC in 2013, but the relationship did not hold in 2014. Thus, it is difficult to say with certainty why ducksalad cover decreased in 2014. Redstem and sprangletop RC did not differ across years. At canopy closure the WS-Control and WS-AWD were dominated primarily by ducksalad and watergrass species, but both sedges were also present in small quantities. Sprangletop and redstem were present, but differences between systems were not significant. The only difference in weed composition between the two water-seeded systems at canopy closure was in small flower umbrella sedge cover, which was significantly greater in the WS-AWD than in the WS-Control. The weed species composition of the DS-AWD at canopy closure differed significantly from that of the water-seeded systems: it was dominated by watergrass species, and the only other species present was sprangletop. RDW of all weed species did not vary across years. Only two species differed significantly across irrigation systems: small flower umbrella sedge and watergrass species.
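RC and RDW are relative (percentage-of-total) measures, so a brief sketch may help make the comparisons concrete. The function below is a generic, assumed calculation of a species' share of total cover or dry weight in a plot; it is not taken from the study's analysis, and the example numbers are hypothetical.

    def relative_measure(values_by_species: dict[str, float]) -> dict[str, float]:
        """Convert absolute cover or dry-weight values into percentages of the plot total."""
        total = sum(values_by_species.values())
        if total == 0:
            return {species: 0.0 for species in values_by_species}
        return {species: 100.0 * v / total for species, v in values_by_species.items()}

    # Hypothetical dry weights (g per quadrat) at harvest in one plot
    harvest_weights = {"rice": 720.0, "watergrass": 180.0,
                       "small flower umbrella sedge": 60.0, "sprangletop": 40.0}
    rdw = relative_measure(harvest_weights)   # e.g. rice -> 72.0 (% of total biomass)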

The RDW of small flower umbrella sedge was greatest in the WS-AWD, consistent with its RC at canopy closure. Ducksalad was not present at harvest, presumably because it had completed its life cycle and decomposed, although no information on the longevity of this species is recorded in the literature. In Arkansas wet-seeded rice, Smith found that ducksalad matured by approximately 8 wk after seeding. In the DS-AWD system at harvest, the RDW of rice was 3%. In comparison, the WS-Control and WS-AWD systems had rice RDW values of 72% and 77%, respectively. The differences in frequencies of weed species between the DS-AWD and the water-seeded systems corresponded to the differences in RC and RDW. Frequency of small flower umbrella sedge varied between the WS-AWD and WS-Control. The percentage contribution of small flower umbrella sedge to the dissimilarity between the irrigation systems was the greatest of all weed species at every measurement point except the canopy closure assessment in 2013. Analysis of the two systems over time showed that although the frequency of small flower umbrella sedge was similar in the WS-AWD and WS-Control at canopy closure in 2013, the frequency of the species was consistently greater in the WS-AWD at all other assessment points. Small flower umbrella sedge cover was greatest in the WS-AWD treatment, and the relative cover of small flower umbrella sedge increased in 2014 over 2013. The relative dry weight of small flower umbrella sedge was greater in the WS-AWD than in the other treatments in both 2013 and 2014. Both the initial germinable seedbank assessment in 2012 and the plant density counts at 20 DAS in 2012 indicate similar germinable populations of small flower umbrella sedge in the WS-Control and WS-AWD irrigation systems. The increased density in the WS-AWD system at 40 and 60 DAS and the increased cover and biomass in both 2013 and 2014 may indicate that the drain at canopy closure affects small flower umbrella sedge germination or competitive ability. Small flower umbrella sedge germination is best under flooded conditions, though it appears to germinate well under saturated soil conditions as well. Preliminary evidence suggests that small flower umbrella sedge has a biphasic emergence pattern, and the relative growth rate of plants emerging in the second germination flush may be greater under the drier conditions of the WS-AWD. In the WS-AWD system, inflow was shut off and the water allowed to recede into the soil beginning at 47 DAS in 2012; in 2013 and 2014 this occurred at 49 and 54 DAS, respectively. Weed density counts were taken 1 wk before the irrigation shutoff in 2012, and weed relative cover ratings were taken 1 d before irrigation shutoff in both 2013 and 2014. Thus, it is possible that the increase in small flower umbrella sedge in the WS-AWD system is unrelated to the irrigation system and was instead an artifact of the greater population density in 2012. This could be related to the lower ducksalad density in the WS-AWD system that same year. Ducksalad may have a suppressive effect on small flower umbrella sedge, given that it quickly covers the canopy, blocking out light, which small flower umbrella sedge requires for germination.
The two weeds had similar densities at the beginning of the experiment, but small flower umbrella sedge increased as the experiment continued.

Rice relative cover increased from 2013 to 2014 across all treatments, yet the increase at canopy closure in 2014 did not correspond to an increase in rice biomass at harvest in 2014. This response confirms earlier research in California showing that competition with late watergrass after the critical period of competition further decreases rice yields. It is notable that despite statistically similar initial populations of watergrass species in all fields, rice cover and biomass were lowest in the DS-AWD compared with the water-seeded treatments, indicating either that watergrass species are more competitive against rice under aerobic conditions or that rice is less competitive with weeds in aerobic environments.


Plasticity could facilitate establishment in novel environments through several mechanisms

Baker highlighted high competitive ability as a characteristic of invasive species, although his focus was on competition through “special means” such as allelopathy and choking growth. In practice, invasive plant species may coexist with or outcompete natives through a variety of mechanisms. Niche differentiation, where species possess different strategies of resource use, may allow for coexistence of native and non-native invasive species. In this case, invasive species are functionally different from the natives, either by possessing novel traits or by using resources in different ways or at different times. For example, in many Mediterranean-climate systems, invasive annual species display different phenology and function compared with the largely perennial or woody native communities. Alternatively, invasive species may succeed by possessing highly competitive traits. As an example, functional similarity did not predict competitive outcomes between native species and a focal invader in a California grassland; instead, competitive natives possessed trait values consistent with high rates of belowground resource acquisition and allocation to aboveground tissue. Other studies have found that both niche and fitness differences operate within a given community. For example, Fried et al. found that native species with flowering phenology similar to a focal invader were adversely impacted by the presence of the invader. At the same time, native species with larger seeds and higher rates of resource acquisition were more competitive with the invader. As the relative importance of competition mechanisms is likely to change at fine scales across resource gradients, experiments that manipulate resource availability and directly measure competition outcomes are likely to elucidate the mechanisms by which non-native invasive species can coexist with or competitively exclude native species.

Baker hypothesized that species possessing more ‘ideal weed’ traits would be more invasive: “probably no existing plant has them all; if such a plant should evolve it would be a formidable weed, indeed”. Trade-offs likely limit the capacity for any species to possess all ‘ideal weed’ traits, but particular trait combinations may act synergistically. Thus, focusing on a single trait or a small handful of traits may not accurately characterize invasiveness; rather, exploring multidimensional functional differences between invasive and non-invasive species may yield greater insight into mechanisms of invasion. Traits may act in non-additive ways, as certain combinations of traits lead to success in particular conditions. For example, species with high rates of resource uptake and poorly defended tissues have the most to gain from enemy escape. Finally, different traits can result in similar fitness, highlighting the need to consider multiple traits. For example, prostrate plants with strong lateral spread may shade out native plants just as effectively as tall plants. Thus, a multi-trait approach that accurately characterizes light use would be more meaningful than comparisons of mean height among invasive and non-invasive species. Many researchers have emphasized that traits or suites of traits interact with other processes, such as habitat suitability and socioeconomic factors, to influence invasion. In an effort to identify patterns of species-ecosystem interactions leading to invasion, Kueffer et al. coined the term ‘invasion syndrome’, which Novoa et al. redefined as ‘‘a combination of pathways, alien species traits, and characteristics of the recipient ecosystem which collectively result in predictable dynamics and impacts, and that can be managed effectively using specific policy and management actions’’. This synthetic approach involves an iterative process of identifying similar invasion events and their associated syndromes.

As an example, Novoa et al. point to invasive plant species in high-elevation areas, which tend to share a broad environmental tolerance and a similar pathway of introduction along transportation corridors from low- and mid-elevation areas. Thus, managing for invasive plant species in high-elevation areas entails limiting the spread of introduced species along corridors. However, as our review highlights, traits and species interactions within communities are dynamic, so an invasion syndrome approach would have to be flexible, potentially weakening the value of this framework.

Phenotypic plasticity, or the ability of a plant to adjust its phenotype in response to environmental variation, was a defining feature of Baker’s ‘ideal weed’. First, Baker and others hypothesized that plasticity could lead to success in a wide range of novel environments. Consistent with this hypothesis, plasticity is associated with increased species range size. Second, plasticity could lead to high success in certain environments. For example, invaders may be particularly adept at capitalizing on high-resource conditions, opening ‘invasion windows’ when resources become abundant that allow for explosive population growth. Third, as we discuss in the “Evolutionary considerations” section, plasticity can facilitate rapid evolution. Empirical evidence for the role of plasticity in invasions is mixed, however. While several large multi-species studies or meta-analyses find that invaders are more plastic than natives or non-invasive non-natives, others find that, on average, invasive and non-invasive species do not differ in plasticity. Interestingly, heightened plasticity is adaptive and helps maintain fitness only in a subset of species and only in response to resource increases; non-invasive plant taxa were better able to maintain fitness homeostasis in low-resource conditions.

One possibility for these conflicting empirical observations is that plasticity, like other traits, may only be advantageous during certain invasion stages. A large, phylogenetically controlled study investigating phenological plasticity in response to warming found that, on average, invasive species show strong phenological shifts in response to warming, while native species do not. These phenological shifts were strongest for species characterized as invasive and much weaker for non-invasive non-native species, and phenological plasticity was stronger for species that had invaded long ago, suggesting that phenological plasticity may be most important during the spread and impact stages and may increase over time through evolution.

Baker and G. Ledyard Stebbins brought together evolutionary biologists and ecologists to consider the problem of invasive species and, in doing so, inserted an evolutionary perspective into the field of invasion biology. Evolutionary studies of invasive species were relatively slow to take off compared to the rapid increase in ecological studies following Elton’s seminal work and the SCOPE series that followed several decades later. However, we now recognize that prior adaptation and rapid evolution during or after invasion can allow for establishment and promote the spread of invasive species. Evolutionary history reflects challenges a population has experienced in the past, and overcoming particular challenges may make it more likely for a species to be transported to, establish in, and successfully invade new areas. Post-introduction, rapid evolutionary responses to novel aspects of the invaded environment may be necessary for the invasive species to establish and spread. Because a population’s evolutionary history determines its traits, incorporating evolution into invasion biology may help explain why certain biogeographic regions produce so many invasive species. Using quantitative genetics approaches that link traits to fitness may help inform which traits promote success in particular environments. Such studies could help explain the context dependency so frequently observed in ecological studies linking traits to invasions. Interestingly, only a few of Baker’s traits have been well investigated from an evolutionary perspective. One study explicitly focusing on Baker’s ‘ideal weed’ traits found evidence for genetic variation in traits related to competitive ability and seed production, indicating that such traits have the potential to evolve pre- or post-introduction, but growth rate exhibited little genetic variation. Furthermore, these traits were often genetically correlated, although not always in the same direction across the two populations studied, suggesting that genetic constraints may sometimes limit and other times accelerate the evolution of ‘ideal weed’ traits.

The idea that evolutionary history determines invasion success has a long, but relatively sparse, history going back to at least Darwin’s seminal works.

Much of this work has investigated Darwin’s Naturalization Hypothesis, which proposes that species lacking close relatives in the community are more likely to invade. This hypothesis assumes that because close relatives are likely to be functionally similar, competition may strongly limit closely related invaders compared to more distantly related invaders. The counter argument is that closely related species may have similar environmental tolerances and species interactions, leading to an increased likelihood of invasion by close relatives in the introduced range. Support for these competing hypotheses is decidedly mixed, but Ma et al. suggest that this may result from different processes acting across scales and invasion stages. For example, Darwin’s Naturalization Hypothesis specifically invoked competition, which occurs at very local scales. In contrast, the Pre-Adaptation Hypothesis more likely applies to the climatic factors more prevalent at regional scales. Across invasion stages, Darwin’s Naturalization Hypothesis most likely applies to the species interactions that come into play at later invasion stages post-establishment, while the Pre-Adaptation Hypothesis is more likely to pertain to the filtering processes that occur earlier in invasion. Darwin’s Naturalization Hypothesis and the Pre-Adaptation Hypothesis are both less focused on a general role for specific traits and more on the match between traits and the invaded environment. More recently, Fridley and Sax proposed the Evolutionary Imbalance Hypothesis, predicting that species from richer biotas with more stable environments and larger habitat sizes are more likely to be ecologically optimized, with better solutions to ecological challenges. Essentially, these biogeographic regions have had a larger number of ‘evolutionary experiments.’ Because ecological conditions repeat across the world, better solutions in the native range are likely to lead to better solutions elsewhere too. In support of this hypothesis, phylogenetic diversity in the native range predicts invasiveness. While this hypothesis does not focus on particular traits underlying this success, it does point to a strong role for traits promoting competitive ability, like allelopathy and other mechanisms highlighted by Baker, and suggests that the traits that have evolved in the native range determine success in the invaded range. Evolutionary responses to human-modified environments also have the potential to promote invasion. The Anthropogenically Induced Adaptation to Invade hypothesis posits that prior adaptation to human-disturbed environments in the native range facilitates invasion into similarly disturbed environments across the globe, because human-disturbed environments share many similarities regardless of location. Adaptation to disturbed environments will also lead to increased abundance in areas frequented by humans, potentially contributing to increased dispersal. In this way, adaptation to disturbed environments increases the likelihood of transport and the probability of establishment once transported. While this hypothesis does not strongly focus on specific traits, instead generally focusing on adaptation to a particular environment, many traits highlighted by Baker are also thought to be adaptive in disturbed environments, including rapid growth rates, a propensity for selfing or vegetative reproduction, and high and continuous seed production. While challenging to definitively test, three types of evidence support the hypothesis.
First, European taxa associated with human-altered environments are much more likely to invade other continents than taxa found only in natural habitats, although it is less clear whether this advantage results from adaptation to those disturbed environments, from species sorting, or from an increased likelihood of transport given their abundance in human-visited habitats. Second, in animal systems, association with human-altered habitats appears to allow for expansion of the climatic niche in the invaded range, suggesting that adaptation to human disturbance may facilitate invasion and range expansion. Finally, laboratory studies suggest that pre-adaptation to novel environments rivals the effects of propagule pressure on introduction success. While the Anthropogenically Induced Adaptation to Invade hypothesis focuses more on adaptation to cultivated habitats, invasive species are also adapting to urban environments. This urban adaptation could lead to further trait-matching and colonization of geographically distant but environmentally similar habitats, particularly given the high abundance of invasives in cities and the high likelihood of human transport. Interestingly, some traits favored by urban environmental conditions may further facilitate invasiveness in other areas; for example, the reduced pollinator abundance in urban ecosystems is predicted to select for increased selfing and clonality, two traits characterizing Baker’s ‘ideal weed’. However, urban conditions also have the potential to select for traits that inhibit invasion. For example, increased fragmentation in city landscapes can select for reduced dispersal, which is likely to reduce the spread of invasive species at larger spatial scales.

Over the past three decades, increasing evidence suggests that many invaders rapidly adapt to the novel environments they encounter post-introduction.


A corollary to the renewal provision is the ability to assign biomass production to another farmer

These standards have less meaning with the production of a new crop type, and thus create uncertainty and potential for conflict between tenants, landlords, and end-users seeking control over the production process. Landowners may want to ensure the crops or the producers’ cultural practices will not cause long-term harm to the land, creating another moral hazard problem and requiring landowners to increase control or monitor producer behavior. One possible solution could be the establishment of bonding requirements for remediation, similar to those imposed on biomass plantings in Florida larger than two acres. Bonding provisions could be incorporated into both the rental lease and the biomass production contract.

If producers, in spite of these concerns, are able to secure leases for an extended length of time, they remain highly exposed to termination or default by the landlord; if the landlord defaults, the producer remains bound to a biomass production contract without sufficient land upon which to grow the crops. On the other hand, if a producer desires to exit the biomass industry, or becomes unable to continue production for any reason, he faces the risk of being locked into an undesirable long-term lease. Likewise, landowners, due to high asset specificity and the nascent character of the bio-energy industry, face a relatively higher risk of default by both tenants and end-users. The issues discussed above illustrate the importance of specifically considering land tenure within the biomass supply contract and linking the provisions to specially tailored farmland leases for biomass production. Moreover, biomass supply contract duration should align with crop life cycles, which should align with land lease terms.

Access to land, while the most important consideration in negotiating biomass supply contracts, is not the only issue warranting attention.

Control of germplasm, whether conventionally bred or developed through advanced genetic engineering technologies, is an essential element of intellectual property rights protection. Contractual agreements embedded within intellectual property licenses can impose restrictions on the grower. Many of these restrictions currently used in the agrobiotech industry go far beyond mere protection of intellectual property rights and dictate specific agronomic practices of the farmer. Germplasm contracts could be structured to specify inputs, farming and harvesting practices, post-harvest disposition, and post-contract actions. From the producer’s perspective, growers may wish to expand their own production by harvesting rhizomes from their fields. This practice is especially likely in the early stages of industry maturity, when rhizomes or specialized seeds may be hard to procure. Biomass supply contracts, therefore, should specifically address intellectual property rights in germplasm and ensure compatibility with germplasm agreements. A second ancillary issue relates to the positive externalities derived from certain agronomic practices associated with perennial biomass cultivation. Planting Miscanthus or other bio-energy crops may control erosion, improve water quality, sequester carbon, and increase wildlife habitat. In the future, ecosystem service markets may reward these practices. Accordingly, the biomass supply contract and, if applicable, the farmland lease should specify which party may participate in, and thus receive the benefits of, ecosystem service markets.

The duration of the biomass production contract has serious consequences for producers, but will likely be driven by the end-user’s perspective. This is because end-users must secure a stable biomass supply for the duration of the investment cycle of the conversion facility, likely at least 20 years.

Offering contracts for less than the optimal investment cycle creates supply risk for the end-user and potential holdup issues. Long-term contracts are somewhat less critical for producers, as dedicated energy crops can be destroyed and the land returned to traditional cropping methods at comparatively lower cost. Nonetheless, in electing to produce perennial crops, producers also make long-term commitments by establishing a crop with a production cycle that could reach 15 years. Moreover, producers may wish to renew contracts, particularly if the life cycle of the established crop outlasts the initial contract term. To address these concerns, contract length should correspond with crop life cycle to ensure producers can recover establishment costs and obtain an adequate return on investment. Shorter durations, due to asset specificity, give rise to holdup risks. In situations in which the life cycle of the crop outlasts the duration of the contract, the producer can reduce the risk of holdup by negotiating renewal options. As the end-user’s primary concern is securing a stable supply of biomass, incorporating assignment clauses in the initial agreement can provide a seamless escape hatch for farmers no longer interested in producing biomass as part of a long-term contract. Assignment clauses may minimize potential supply disruptions and serve as a “next best” strategy compared to attaching production contracts to land title. However, due to the vertically coordinated nature of the bio-energy industry, the extent to which individual producers may negotiate the contract provisions discussed in this section remains to be seen. Nonetheless, the authors recommend that end-users seeking a stable, long-term biomass supply chain at a low overall cost consider the issues identified above, as biomass production agreements that incorporate the sociocompatibility perspective, along with risk and cost minimization, are more likely to result in secure supply chain relationships.

Incorporating a combination of the solutions detailed above into biomass production contracts will substantially address the costs, risks, and sociological concerns of producers and end-users. This should improve contract negotiation processes and supply chain stability. Moreover, as the biomass industry matures and follow-on issues arise, the proposed Biomass Contracting Framework can serve as an important point of departure in obtaining negotiated solutions. In addition to the framework described above, the development of sustainability standards tailored to the biomass industry, such as those of the Council on Sustainable Biomass Production or the Roundtable on Sustainable Biofuels (RSB), can provide further support for improved biomass contract design. By focusing on long-term sustainability, these standards can use market forces to provide additional incentives for end-users to approach contractual relationships beyond the archetypal cost- and risk-minimization perspectives. For example, the RSB’s socioeconomic principle requires skill training that is culturally sensitive and respectful of existing social structures. Although the intent of this provision is to apply within the context of impoverished regions, most likely in the developing world, the underlying sustainability benefits of cultural sensitivity in skills training certainly would hold true in domestic biomass contracts between end-users and producers. In the current climate of adhesion-type contracts presented by biomass end-users, producers could reference the internationally accepted RSB standards within their limited contract negotiations as support for professional development, formation of peer groups, and even feedback mechanisms, such as fieldmen services. Sustainability standards for environmental criteria, such as biomass residue removal, compaction, erosion, soil carbon maintenance, and restrictions on the introduction of potentially invasive energy crops, also may have positive cross-over effects on biomass contract design. Incorporating environmentally based sustainability standards into biomass contracts signals to the producer the perceived environmental credibility of the practice and lessens producer concerns regarding land stewardship and conversion from familiar cropping systems. Moreover, many of the producer autonomy concerns and cultural risk management practices identified in the social compatibility discussion in Part I.A find resonance within these environmental standards. On the other hand, unduly restrictive practices embedded in a sustainability standard could discourage producer acceptance if these criteria sacrifice traditional agricultural risk management practices, such as pesticide application. Nonetheless, the incorporation of sustainability standards within the biomass contract may provide a novel means to bring together divergent views of risk management, cost minimization, and social compatibility to create a more stable, and ultimately profitable, biomass supply chain. In the future, end-users may be able to use contractual mechanisms to coordinate efforts within their “fuel shed” to achieve greater economic and environmental sustainability.

Biomass crops in the United States are projected to yield 136 billion liters of biomass-derived liquid fuels by 2022. The expectation is that this will require cultivation of between 54 and 150 million acres of bio-energy crops. Furthermore, state and federal greenhouse-gas reduction initiatives have incentivized widespread cultivation of bio-fuel crops. Of the crops under consideration, perennial non-food grasses are the leading candidates. To be successful in this role, these bio-energy grasses will need to possess many agronomically desirable traits, including broad climatic tolerance, rapid growth rates, high yields, few natural enemies, and resistance to periodic or seasonal soil moisture stress.
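For a rough sense of the scale these projections imply, the back-of-envelope calculation below divides the projected fuel volume by the projected acreage range. Only the figures quoted above are used; the implied per-acre yields are an illustrative inference, not values reported by the cited projections.

    # Back-of-envelope check of the projection quoted above (illustrative only).
    target_liters = 136e9                  # projected biomass-derived liquid fuel by 2022
    acres_low, acres_high = 54e6, 150e6    # projected cultivated area range

    yield_high = target_liters / acres_low    # ~2,519 L of fuel per acre per year
    yield_low = target_liters / acres_high    # ~907 L of fuel per acre per year
    print(f"Implied fuel yield: {yield_low:,.0f}-{yield_high:,.0f} L per acre")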

One of the leading candidates among bio-energy grasses is switch grass. Switch grass is a perennial warm-season bunch grass native to most of North America east of the Rocky Mountains, where it was historically a major component of the tall grass prairie. It was included in the initial screening for bio-fuel crops in the United States in the 1970s and was determined to be the model bio-energy species by the Department of Energy, primarily because of its broad adaptability and genetic variability. Over the past three decades, breeding efforts have developed several cultivars, many of which produce dense stands, tolerate infertile soils, and readily regenerate from vegetative fragments. These cultivars are often separated into upland and lowland ecotypes. Switch grass is not native to California and was, in fact, included for a brief time on the California Department of Food and Agriculture (CDFA) Noxious Weed List due to concerns about its potential invasiveness. Although there was one documented report of an escape of switch grass from cultivation in Orange County, California, there are no known records of its escaping elsewhere or causing any ecological or economic damage, despite its long-time use as a forage and conservation species. Since its removal from the CDFA Noxious Weed List, it has been the focus of yield trials throughout California. Because of the state’s Mediterranean climate, the yield potential is high; however, the crop will require significant water and nitrogen inputs.

In an ideal system, bio-fuel crops would be cultivated in a highly managed agricultural setting similar to that of most major food crops, such that the crop could not survive outside of cultivation. Under such conditions, the likelihood of escape and invasion into other managed or natural systems would be very small. Unlike bio-fuel species, most food crops have been selected for high harvestable fruit or grain yield. This nearly always results in a loss of competitive ability, typically compensated for by increased additions of nutrients and often pesticides. When a bio-fuel crop is grown for cellulose-based energy, the harvestable product is the entire aboveground biomass. To be economically competitive, such perennial crops should be highly competitive with other plant species, harbor few pests and diseases, grow and establish rapidly, produce large annual yields, and have a broad range of environmental tolerance, while also requiring few inputs of water, nutrients, pesticides, and fossil fuels per unit area. Few species fit these requirements better than rhizomatous perennial grasses, primarily non-native species. However, these qualities and traits are nearly identical to those found in harmful invasive species. For example, species such as johnsongrass and kudzu were introduced as livestock forage or for horticultural use but have escaped cultivation to become serious weeds in many areas of the United States. In selecting bio-fuel crops, a balance must be struck between high productivity with minimal inputs, on the one hand, and the risk of establishment and survival outside the cultivated environment on the other. Johnsongrass, like switch grass, was first cultivated as forage, but it subsequently escaped and has become one of the world’s most expensive weeds in terms of control costs. It is currently listed as a noxious weed in 19 U.S. states. When comparing switch grass to johnsongrass and to corn, a typical agronomic grass crop, it is clear that switch grass possesses many growth traits similar to those of weedy johnsongrass and only a few similar to those of corn. While this is not direct evidence that switch grass will be a significant invasive or weedy species, it does suggest that the risk may be greater than for more typical agronomic crops. Although cultivation of switch grass and other bio-fuel crop species may ultimately prove a net benefit to society, the environmental risks associated with their potential escape into natural and managed systems should be assessed before the crops are commercialized and introduced into new regions.


Cost-minimization and the sociological-compatibility perspectives thus are not inherently in conflict

Moreover, fieldmen visits can provide a source of information and a familiar contact through which producers could “negotiate contract terms, share technical information, estimate expected yields, and maintain a presence to ensure that the contract will be renewed.” The use of fieldmen to monitor also allows for more flexibility over time and creates a “shared understanding of what constitutes standards of good professional practice.” Thus, working in a cooperative spirit allows for expectation adjustments without costly negotiations or conflicts. It must be mentioned, however, that although the fieldmen monitoring model has many benefits, several costs are involved, including the cost of hiring, training, and employing a staff of specialists to serve in this role. The previous discussion of adverse selection problems stemming from information asymmetry and the moral hazard problems associated with unobserved action offers several potential contract-based solutions, including rationing, screening, signaling, and auctioning, as well as measurement and monitoring strategies. But much of the economic contract theory discussed above assumes that parties are able and willing to write “complete” contracts—contracts that specify each party’s obligations for all possible contingencies. In practice, however, parties often are unable or unwilling to write and enforce complete contracts. Accordingly, in the following section, we introduce a second important source of transaction costs, contract incompleteness, along with remedial strategies in the biomass supply chain context.

Consider the situation where the end-user and producer negotiate and execute ex ante a biomass production agreement that specifies a time and amount for delivery, but fails to specify a delivery location in the contract.

Assume the end-user has two facilities, one ten miles from the producer and another, larger facility one hundred miles from the producer. The lack of a specified delivery location is a source of incompleteness in the contract. The contracts literature contains several theories explaining why parties sign incomplete contracts. In extreme cases, complete contracts may not be necessary, such as in a transaction in an environment where all contingencies and variables are observable and verifiable, allowing perfect information to eliminate the risk of adverse selection or moral hazard. But this is a rare situation. Parties may end up signing incomplete contracts because of the bounded rationality of the parties, the presence of uncertainty in the transaction, or the inability of the parties to objectively measure and evaluate relevant variables. A third explanation, closely related to the bounded rationality of the parties, is based on Williamson’s transaction cost theory. Williamson argues that complete contracts are unattainable because the transaction costs of writing and enforcing them outweigh the benefits of obtaining perfection. The marginal cost of additional completeness increases, while the marginal benefit of completeness decreases; thus, parties choose to write contracts with an optimal level of incompleteness, where the marginal cost equals the marginal benefit of additional completeness. As a bottom line, the general consensus is that contracts are necessarily incomplete; it is impossible to cover every possible contingency sufficiently well that neither party will be able to take advantage of a loophole or ambiguity and act opportunistically. Thus, incompleteness gives rise to the risk of ex post opportunistic behavior, which in turn creates transaction costs.
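The marginal-cost/marginal-benefit argument above can be stated compactly. The notation below (a completeness level c, a drafting-and-enforcement cost C(c), and a benefit B(c)) is introduced here only for illustration and does not appear in the cited literature.

    \[
    \max_{c \in [0,1]} \; B(c) - C(c), \qquad B'(c) > 0,\; B''(c) < 0, \qquad C'(c) > 0,\; C''(c) > 0,
    \]
    so the chosen level of completeness \(c^{*}\) satisfies the first-order condition
    \[
    B'(c^{*}) = C'(c^{*}),
    \]

That is, parties stop adding contractual detail once the marginal benefit of further completeness no longer exceeds its marginal drafting and enforcement cost.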

In the complete contract literature, renegotiation serves as an ex ante constraint, incentivizing the parties to remain with the original contract, but incompleteness creates the need for ex post renegotiation. Renegotiation can be a beneficial tool where a contingency occurs that leaves both parties worse off under the terms of the original contract; this flexibility allows the parties to adjust to changes in their environment. This flexibility may even make incomplete contracts preferable to complete contracts in some scenarios. However, when certain transacting environments are present, renegotiation may be detrimental to one party, as it reduces commitment and may lead to strategic behavior. Accordingly, a party may take advantage of any ambiguity or contingency not explicitly addressed in the contract to improve its ex post payoff through renegotiation. When incompleteness exists, the future returns on a party’s ex ante investment will depend on the bargaining position of the party ex post. Within incomplete contracts, the economic contract literature has identified at least two factors in a transaction that influence a party’s exposure to ex post opportunistic behavior: asset specificity and the allocation of property rights. Both of these factors may create holdup, a form of opportunism. Williamson defines the condition of asset specificity as “investments in which the full productive values are realized only in the context of an ongoing relation between the original parties to a transaction … such assets cannot be transferred to alternative uses or users without loss of productive value.” Legal scholars refer to specific assets as reliance investments. Asset specificity creates a bilateral dependence between the parties and a quasi-rent, or “surplus over opportunity cost,” that increases the potential for opportunistic behavior.
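To make the quasi-rent idea concrete, the short sketch below computes the surplus at stake in a stylized version of the delivery-location example discussed in this section. All of the numbers and the payoff split are hypothetical and are not drawn from the article.

    # Stylized holdup arithmetic (hypothetical numbers, for illustration only).
    value_in_contract = 100.0   # $/ton the harvested Miscanthus is worth to the contracted end-user
    next_best_value = 5.0       # $/ton salvage value with no alternative biomass market
    quasi_rent = value_in_contract - next_best_value   # surplus over opportunity cost = 95.0

    # In ex post renegotiation the buyer only needs to offer slightly more than the
    # seller's next best alternative, capturing nearly the whole quasi-rent.
    epsilon = 1.0
    renegotiated_price = next_best_value + epsilon        # 6.0 $/ton
    seller_share = renegotiated_price - next_best_value   # 1.0 of the 95.0 quasi-rent
    print(quasi_rent, renegotiated_price, seller_share)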

Several types of asset specificity have been defined beyond physical asset specificity, including “value-added specificity,” time specificity, and site specificity. When a party makes ex ante investments with high asset specificity, the seller is especially vulnerable in renegotiation, as the buyer knows that the next best value for the seller is substantially lower. In renegotiation contexts, the buyer will offer to pay only just above the next best offer, leaving the seller with no rents. This opportunistic behavior on the part of the buyer is called the “hold-up problem.” A party considering ex ante whether to make an investment with high asset specificity can perceive the threat of holdup; realizing that the investment will earn no rents, the party has little incentive to invest and therefore will underinvest. This inefficient level of investment creates transaction costs and barriers to entry. Again, consider our example of the biomass production contract. The biomass producer may choose ex ante to produce a crop of Miscanthus and make a corresponding investment. Upon harvest, the parties must determine the delivery location. The harvested crop of Miscanthus has a high level of asset specificity; because the farmer has no alternative market for the energy crop, the next highest value is near zero. The biomass conversion facility understands this and, consequently, has significant bargaining power. The end-user may assert that delivery was meant to be at the larger, more efficient plant 100 miles away. The level of asset specificity puts the farmer in a weak ex post bargaining position, as he is dependent on the contract with the end-user and must satisfy the end-user to obtain revenue. Thus the farmer, even though he will incur higher transportation costs, would rather accept the added costs of transportation to a distant market than forgo contract payments. In addition to this holdup, other producers who observe this scenario may decline to invest, perceiving uncertainty and weaker incentives. Thus, one can see that asset specificity may create the risk of opportunism and holdup. Several fields of literature have identified different strategies for addressing holdup, which we discuss below. However, these theoretical strategies—when placed within the context of biomass production for renewable energy products—may conflict, requiring a balancing approach as well as careful analysis of specific issues to determine optimal strategies.

The preceding deconstruction of the sociological, risk-minimizing, and cost-minimizing perspectives yields several theoretical insights for an optimal biomass contracting framework, including key elements of contract design and opportunities for trade-offs in the negotiation process. From the sociological perspective, sensitivity to non-economic factors tends to dominate decision making in the innovation context. The ability to maintain existing agricultural practices and social networks throughout the education, field trial, and commercial production stages minimizes farmer disincentives to enter into production contracts for novel biomass crops. Trialability, information sharing, and education also have strong influences on the sociological-compatibility perspective of contracts.
The risk-minimizing framework shares with the sociological perspective elements of information sharing, educational experience, and the use of existing agricultural risk management tools, but it also incorporates the concept of risk-incentive tradeoffs and the minimization of common risk.

Likewise, the cost-minimizing perspective incorporates aspects of the risk-incentive framework. But cost minimization also includes unique attributes of controlling for moral hazard and adverse selection, as well as the intentional design of incomplete contracts to incorporate renegotiation opportunities. Table 1, below, summarizes these results. Accordingly, a trans-disciplinary approach to optimal biomass contract design would incorporate, to the extent possible, each of the contract attributes identified in Table 1. As discussed below, where perspectives overlap, contract design should be able to accommodate the differing frameworks, or at the least identify specific issues for negotiated bargaining. The more difficult proposition is when these principles are in conflict. For example, information sharing is a fundamental aspect of the sociological-compatibility perspective, but it is absent from, or even discouraged under, the cost-minimization perspective. The following section, therefore, analyzes the tools and implications of a Biomass Contracting Framework from a trans-disciplinary perspective.

Economic contract theory posits that parties to a contract must optimize the tradeoff between costs and risk, such that each party's aversion to risk is balanced against the additional cost of minimizing that risk. As producers have different levels of risk tolerance, the appropriate amount of risk minimization will differ; risk-averse producers will be more costly to incentivize to participate than their risk-neutral colleagues. Moreover, identifying and addressing the risk tolerance of producers can be a key factor in adverse selection problems. On the other hand, perhaps the most exacting lesson from the sociological literature is that producers have multiple and varied non-economic goals and barriers that must be addressed in order to facilitate adoption of energy crops. What the sociology perspective implies, however, is that many of these non-economic goals cannot be adequately compensated by greater monetary incentives; in order to overcome these constraints, contracting parties must incorporate other strategies to align the goals and incentives of the contract with non-economic considerations, such as the impact on producer autonomy, lifestyle, current farming operation, and core values. At first glance, the absence of monetary incentives complements the cost-minimization perspective, but upon careful consideration it creates unique problems due to information asymmetry. Determining the underlying non-economic goals and barriers can be costly, especially for entities without extensive experience in the agricultural sector. For example, where a multinational oil company seeks entry to the bio-fuels market as a result of the RFS2 blending mandate, or where an electric utility previously reliant on coal and natural gas seeks a biomass supply for co-firing a power plant to comply with a state renewable portfolio standard, both actors may lack the institutional capacity to identify fundamental, non-economic barriers to farmer adoption. The adverse selection problem discussed in the context of cost minimization becomes more complex, as the end-user cannot confine information-seeking activities to differentiating true high- and low-cost producers; the end-user must also consider producers with divergent and variable non-economic goals that are not satisfied merely through financial means.
As a result, theoretical methods of eliminating information asymmetry through rationing, screening, and auctions may not produce the desired results. On the other hand, the process of signaling can enable end-users to identify particular non-economic barriers, along with the traditional high- or low-cost production structure. Moreover, cooperation and information-sharing requirements embedded within a contract can enhance education and training elements while also reducing information asymmetry. The problem of information asymmetry and moral hazard is illustrative. As discussed above, one method for the Principal to manage moral hazard is monitoring, and one potential model is the creation of a network of fieldmen to periodically visit producers. Fieldmen can identify opportunistic or suboptimal behavior, while also providing networked producers with not only technical production information but also financial information that lowers future transaction costs. The use of monitoring strategies also implicates the risk-minimization perspective. Although incentives provide one method of allocating the endogenous risk of opportunistic behavior, incentive payments alone cannot differentiate the endogenous risk of lack of producer effort from exogenous factors, such as poor weather. Moreover, incentive payments may not provide adequate compensation for the non-economic considerations described in the sociological-compatibility perspective. Alternative policing mechanisms, such as monitoring and collaboration through fieldmen, however, could address the endogenous moral hazard problems and minimize risk premiums. Similarly, relative performance contracts, such as tournament contracts, base producer performance incentives on comparisons with similar producers, rather than on absolute measures that depend on common risks.
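A relative performance payment of the kind described above can be sketched in a few lines. The payment rule, the base and bonus rates, and the producer yields below are all hypothetical; the point is only that benchmarking each producer against the group average nets out shocks (such as regional weather) that hit all producers alike.

    # Hypothetical tournament-style payment: reward deviation from the group mean
    # so that common shocks (e.g., a bad-weather year) cancel out of the comparison.
    def tournament_payment(own_yield: float, peer_yields: list[float],
                           base_pay: float = 50.0, bonus_per_ton: float = 10.0) -> float:
        group_mean = sum(peer_yields) / len(peer_yields)
        return base_pay + bonus_per_ton * (own_yield - group_mean)

    # A drought lowers every producer's yield, but the payment is unchanged
    # because the benchmark falls along with the individual result.
    normal_year = [9.0, 10.0, 11.0]     # tons/acre for three producers
    drought_year = [6.0, 7.0, 8.0]      # same effort, common 3 ton/acre shock
    print(tournament_payment(11.0, normal_year))   # 60.0
    print(tournament_payment(8.0, drought_year))   # 60.0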


Personal risks include the risk of producer injury or death

It will be crucial to see whether the benefits of NURTURE that I saw evidence for in my interviews carry over now that NURTURE is a module within Global G.A.P., or whether the character of those interactions and the high-quality environmental and food safety “license to operate” that they helped to shape are significantly changed under Global G.A.P. management. It will also be interesting to see how this merger of standards affects market penetration of the Global G.A.P. standard. Global G.A.P. certifications active between 2014 and 2015 were not followed exclusively by any farmer interviewed during my study, but did show up as an additional standard followed concurrently by several farmers in my sample. If the merger with NURTURE allows Global G.A.P. to extend its market impact over a larger number of suppliers without fundamentally changing the structure of the NURTURE standard or its potential for positive landscape-level impacts, this merger could be one sign of positive evolution within the harmonization of international food safety standards.

Since I began this research, the FDA Food Safety Modernization Act (FSMA) in the United States has gone from a proposed rulemaking, to a final rule awaiting implementation, to the early stages of implementation within the US agricultural economy. Collection of the California data examined in this dissertation began while the proposed rulemaking was undergoing public comment and concluded soon after the initial publication of the final rule. At the time of publication of this dissertation, the Produce Rule is in effect. The first inspections called for under the FSMA Produce Rule were delayed at the beginning of 2018 by budget concerns around training new FDA inspectors, issues around the feasibility and definitions of certain requirements, and the complexities of developing state-level inspection protocols for the new rule.

Although the new requirements are now in full effect as of September 2018 for all food businesses at all levels, FDA has reported that requirements for some types of produce farms will see a delay in enforcement reaching into 2019, or even 2020. For many larger farms, changes have amounted to only minor refinements to existing food safety measures, because these producers already adhere to the most stringent rules in production due to their articulation with international trade networks and their deeper financial capacity to pay for audits and absorb additional costs. Conversely, for small and medium-sized farms without large financial resources, and for alternative producers following ecological farming methods that fit less readily into current HACCP risk-assessment frameworks without necessarily being less safe, the enforcement of FSMA's newest provisions will present significant hurdles for day-to-day operations. Small farmers commented during the initial notice-and-comment cycle for FSMA that the new rules would likely put some smaller and alternative producers out of business due to the high cost of additional adaptation measures and inspections. The two iterations of the new USDA Harmonized GAP standard that I included in my analysis prior to their full implementation have also now gone into effect and have been adjusted for the enforcement of FSMA. This standard in its two iterative versions formed my category 2: Prescriptive Safety Plus, reflecting its attempts to incorporate clauses with a broader view of combined environmental and food safety concerns, and process-oriented controls in addition to prescriptive clauses. With my results from the previous chapter in mind, the entry onto the state regulatory stage of standards that include even slightly more of these elements, which I found to be positively correlated with environmentally friendly farming practices and improved farmer experience, could be a positive sign for the regulatory landscape. However, it will be necessary to see whether harmonization efforts are durable over the longer term, or if the landscape of standards remains subject to fragmentation. Hybrid food safety controls in the United States have also changed to accommodate new requirements at the national level. According to the LGMA board, as of August 2017, the LGMA standards applicable in both California and Arizona have been updated to reflect the new regulatory baseline represented by the FSMA Produce Rule.

This update was responsible for the increase in prescriptive safety controls I observed in my comparison of standards, which saw the 2018 LGMA standard move back into my category 1: Prescriptive Safety. In comparison, the previous 2015 LGMA revision had contained enough process-oriented controls to place it within my category 3: Flexible Safety, representing a significant transition toward process-oriented controls that has now been reversed as private standards have intersected with changing public regulation. Producers entering into or maintaining certification to the LGMA in 2018 and beyond are now considered to be in full regulatory compliance with FSMA through the audits they already undergo. With the ability to signal full compliance through an already established public-private partnership with USDA backing, the LGMA standard appears to have enhanced value and durability in a post-FSMA leafy greens market. This may complicate harmonization efforts, while also demonstrating an important back-tracking of the prior trend toward greater reliance on process-oriented food safety controls. On June 23rd, 2016, residents of the United Kingdom voted in a historic referendum over the future of their membership in the European Union. The campaign to leave the EU won the vote by a margin of 51.9% to 48.1%, providing the popular basis for initiating the political process of exiting the economic and political partnership represented by the EU. Under the terms of Article 50 of the Treaty on European Union, the UK has until March 29th, 2019 to officially leave the EU with a negotiated agreement and its mutual parliamentary ratification. As of the time of writing, negotiations over the agreement are still underway, and although agreement has been reached on many specifics, the full terms have not yet been permanently established. Among many pressing economic questions for the nation, questions remain over what the UK's exit will do to build upon or alter the functioning of the existing food safety and environmental regulatory landscape in which fresh produce is currently grown. No matter what the precise terms of the final agreement, the UK's exit will almost certainly change the face of British agriculture. The current UK leafy greens market benefits from a high degree of migrant labor drawn from other member countries within the EU under free migration rules.

If it becomes significantly harder for migrant laborers to gain access to the UK to work in agriculture, the UK produce sector could face steep competition from nearby EU countries where low-wage field labor is more easily attainable. Additionally, current environmental requirements and income support programs that some UK farmers presently enjoy under the EU's Common Agricultural Policy will end by claim year 2020 under the terms provisionally agreed by EU and UK negotiators as of November 14th, 2018. These direct payments and the regulatory mechanisms and requirements that underlie them provide the basis for UK environmental regulation in the farming sector, ensuring minimum standards of environmental protection and stewardship. Direct income support payments to farmers are slated to continue in the form of internal payments from the UK treasury until the next Parliamentary elections, due in 2022. Commentary from the UK parliament cites reasons that this exit and reorganization could be favorable for UK farmers, including relief from the more onerous parts of EU bureaucratic control and standardization of farming methods, record keeping, and crop timing. The UK's DEFRA has announced plans to replace the direct payments with a new system designed to fix certain problems within the CAP payments, which some UK critics say do too much to reward large landholdings and too little to support real environmental stewardship. In a speech given to the Oxford Farming Conference in January 2018, Michael Gove, the current UK Secretary of State for Environment, Food and Rural Affairs, explained that DEFRA plans to move the nation away from CAP and its "resource-inefficient" methods of production, moving from "subsidies for inefficiency to public money for public goods" such as natural capital and environmental land stewardship. Such statements contain important clues to the basic framings of the values of food and farming which will soon decide which paths are considered by policy makers. Public expressions such as these pay service to the goals of environmental conservation and public goods such as food safety, but with uncertain guarantees during this ongoing time of negotiation. In February of 2018, ahead of major EU-UK negotiations, Britain's National Farmers Union released a collaborative statement from 37 organizations representing the UK food and farming industries, in which they detailed their shared vision for what a successful British exit from the EU should aim to establish. Among other trade-related goals, the statement calls on UK policy makers to ensure that new UK agricultural standards after the separation continue existing commitments to high environmental, health, and animal welfare standards. However, recent evidence from the ongoing Brexit negotiations suggests that food standards may not be on track to ensure ideal outcomes for public health. With a new program titled "Regulating Our Future," the UK Food Standards Agency has announced that its plans for food standards after Brexit will strongly favor private regulation over public regulation. This has some industry watchers worried that the future may hold a return to the scandals and failures of food industry self-regulation that plagued UK food production in the 1980s and 90s.
Critics have warned that the new program undermines the publicly accountable enforcement provisions of the UK's 1990 Food Safety Act by placing responsibility for food inspections on private commercial assurance providers instead of local and central government. Those wary of such a shift warn that this would be a misstep because private assurance firms would have a commercial incentive to serve their food industry clients over the interests of the public. Moreover, critics point out that undue influence by corporate food actors over regulation was exactly the problem that the FSA, as a central independent watchdog agency, was famously created to solve. Non-state and hybrid standards can be effective tools for ensuring desired outcomes for food safety and the environment, but are most effective when paired with strong state regulation. First and foremost, I suggest that ensuring optimal outcomes for food safety and environmental management will require additional efforts to combat the division of food safety and environmental concerns into separate administrative and industry silos. Currently, my data show that these goals are being pursued separately by public and private standards in the United States, and through imperfect overlapping agencies within the UK government. In many cases, private standards push the frontier of regulation farther than public mechanisms, and form the most immediate point of contact uniting regulatory goals with buyer requirements and farmer practice. Private standards in both nations are currently facing changes and updates in the wake of fluctuating regulatory climates, leaving them in a state of uncertainty and flux. I suggest that it is now especially important for private food safety standards in both nations to explicitly consider environmental goals alongside food safety goals. My results indicate that more balanced standards of this nature are correlated with more positive farmer experience, improved attitudes toward the natural environment, and higher use of conservation-oriented practices, goals which should be among those established during the current reorganization of regulatory priorities at public and private levels. Private standards are increasingly becoming international or global in scope, extending the reach of private regulation far beyond that of public regulation. Delivering effective food safety guarantees in global supply chains will require a shift toward more complete, internationally benchmarked or otherwise harmonized standards, and nimble governance frameworks with a view of safety that does not ignore sustainability. However, market forces are currently still encouraging standards to proliferate and stay focused on food safety to the exclusion of other concerns. Additionally, regime transitions in both UK and US government make it less likely that the most balanced food safety standards currently available will be able to maintain their environmental completeness and broad conception of food safety going forward. It will be important for areas of the developed world that have rigorous and effective standards to maintain them in the face of incentives to race to the bottom, converging around standards which aim to deliver safety instead of, rather than in addition to, other goods.

Enforcement of federal statutes is accomplished through partnerships between state and federal agencies

At the federal level, the US Food and Drug Administration establishes rules and sets general regulations that apply to all producers in all US states. In California, the California Department of Public Health and the California Department of Food and Agriculture undertake guidance, inspection, and enforcement of these federal rules at the state and local level. The US regulatory framework for food safety in fresh produce begins with the 1938 Federal Food, Drug, and Cosmetic Act, which established the United States Food and Drug Administration as the controller of food and drug safety at the federal level. The FD&C Act and its amendments prohibit "the introduction or delivery for introduction into interstate commerce of any food, drug, device, tobacco product, or cosmetic that is adulterated or misbranded," in which adulteration may include the presence in food of "any poisonous or deleterious substance which may render it injurious to health." In 2011, a significant change to the FD&C Act came in the form of the Food Safety Modernization Act, Pub. L. 111-353. FSMA amended the FD&C Act to expand the power of FDA to regulate how foods, including fresh produce, are grown, harvested, and processed. FSMA's Section 105: Standards for Produce Safety called for the creation of a rule providing sufficient flexibility to be applicable to various types of entities engaged in the production and harvesting of fruits and vegetables that are raw agricultural commodities. In 2015, pursuant to the Food Safety Modernization Act of 2011, FDA released the final version of its Standards for the Growing, Harvesting, Packing, and Holding of Produce for Human Consumption. The rule represented the first time that federal regulations had included a field-level safety standard for fresh produce, marking an extremely hands-on and top-down way of controlling how food is grown.

In response to FDA analysis of CDC data indicating that many food safety problems enter the supply chain during primary production and early handling and storage of fresh produce items, the Produce Rule was designed by FDA to refocus food safety mitigation efforts on prevention rather than reaction, through the establishment of science-based minimum standards for produce safety. Key practices required by the Produce Rule include testing requirements for irrigation water, record-keeping requirements, standards for the timing of compost application, worker health and hygiene requirements, equipment and facilities requirements, and specific limits for E. coli levels in irrigation water, with zero detectable E. coli in water that washes or contacts edible portions of crops. At the state level, CDFA has released guidance documents and electronic resources which interpret relevant FSMA subsections, providing guidance through a range of public-facing programs which aim to "educate then regulate" toward full compliance with the law. Structurally, these two approaches to regulating food safety through state regulation are broadly similar. Both UK law and US law include regulatory goals at multiple levels, with the highest level setting out general objectives and definitions in the pursuit of safe food, and lower levels providing specifics and interpretation of statutes for on-the-ground implementation and enforcement. Both approaches also contain similar language setting out the targets of food safety regulation in absolute terms such as "safe food," which explicitly seek to avoid threats, rather than reducing them to an acceptable level. However, the UK's legal framework under EU law contains an explicit focus on use of the precautionary principle in regulation, and specific requirements relating to traceability, neither of which is emphasized to a comparable degree in US law. Differences also exist in the overall style of the regulations. The EU level contributes a strong emphasis on cooperative process and stakeholder education, and stresses the importance of food businesses as the actor group closest to the problem of food safety and therefore best situated to solve it.

The regulatory model evidenced by the UK's due diligence framework acts to protect food businesses from threats beyond the realm of reasonable caution. In this sense, the UK is relying on a liability framework for risk management in fresh food products, something that sets UK food safety management apart from the US approach of management through direct regulatory standards. FSA operates through an arm's-length relationship with food businesses, in which it is tasked with regulating their actions through audits and inspections but does not have the power to directly enforce or prosecute. Rather, FSA oversees the activity of local city and district governments, which act as food safety enforcement bodies. These Local Authorities employ auditors and inspectors who visit food producers to ensure that production standards are being met and hygiene requirements are being followed by food businesses. The central FSA as a result tends to communicate in the form of guidance, with control of deployment devolved entirely to the local level. This arm's-length arrangement can make it challenging for FSA to direct activities at the local level, but it also allows for general principles to be more effectively tailored to local or regional concerns. In comparison, the US model contains similar efforts toward cooperative education-based regulation, but at a lower level. Collaborative efforts to educate at the state level are still backgrounded by the context of inspection and punitive sanctions from the federal level. From my research, the UK "due diligence" defense framework embedded in the UK's 1990 Food Safety Act appears to structure responses to food safety concerns differently from the US focus on regulatory mechanisms for holding producers accountable. Based on my data, the effect of this provision has been to motivate a deep and broad care for food safety risks, first at the level of the largest retail corporations, and then downward along the supply chain. It is deep because suppliers and retailers know that they are legally responsible for being circumspect in their attention to risks, and broad because attention is focused on a wide suite of potential sources of risk, rather than zeroing in on only one source.

This breadth was evident in my interviews when suppliers and retailers spoke of risks from pesticides in the same breath as they mentioned risks coming from pathogenic agents in food. Food safety risk in the UK is thought of as coming from a wide variety of sources. In addition, many UK interviewees at both the policy and industry level commented to me that food safety and environmental health are seen as linked concepts, in that a healthy environment is perceived as generating safer food. The kind of public criminal codes and legal mechanisms underpinning the due diligence approach do exist in US law, and they may be experiencing a resurgence in active enforcement which is beginning to shift thinking among industry leaders. However, due to differences in legal structures, criminal justice systems, and underlying societal attitudes, the nascent criminal liability approach may be taking a much more punitive form in the US than it currently takes in the UK. In 2015, five employees of the Peanut Corporation of America were sentenced to federal prison terms for their roles in distributing salmonella-contaminated peanut butter products to institutional buyers across the United States in 2008-2009, causing nine deaths and 714 illnesses across 46 US states. This case is an example of a recent application of the Park Doctrine in US law, whereby an individual in a position of responsibility within a corporate entity can be held personally criminally liable for harms caused by that corporation's activities, even if it cannot be proven that the individual in question acted personally or knowingly as part of the wrongdoing. Although the executives sentenced in the case of the Peanut Corporation of America had knowingly sold contaminated products, the statute invoked in their trial could equally be used to convict an executive who should have known, but did not. This difference in the protections afforded to food producers signals a harsher and more punitive framework, with fewer protections in place to allow retailers any measure of security should food safety violations happen despite rigorous controls. My research revealed structural differences between the two industries that create stark contrasts in how food safety risk is handled. In contrast to US farmers, who most commonly sell to packer/shippers, UK producers most commonly sell directly to a grocery retailer. My farmer surveys illustrate this trend: none of the UK producers interviewed during my research reported selling their leafy greens to a packer/shipper. Instead, the majority sold directly to a grocery retailer, accomplishing many of the intermediate steps such as trimming and bagging while still in the field. California leafy greens farmers, by contrast, are typically growers and harvesters only. They sell their products to intermediaries such as processors, pack houses, packer-shippers, and wholesalers, with products changing hands multiple times between initial harvest and final sale to the consumer.

Farmers in both geographic regions complain that there are too many standards they must follow simultaneously, and that there is a challenging lack of harmonization between different sets of requirements. My research reveals that in both locations, farmers feel negatively about the overlap and proliferation of private standards, but in CA, the structure of the leafy greens industry creates more opportunities for overlapping standards to disagree. In the UK, the larger number of private standards still creates this possibility, but the structure of the industry eliminates the potential for industry buyers to impose requirements different from the prevailing private standards. In CA, the Leafy Greens Marketing Agreement produce standard functions as its own private entity, not owned by government, by retailers, or by farmers. As a marketing agreement under US law, it invokes the inspection powers of the California Department of Food and Agriculture and the USDA to back its standard but sets and promulgates its own rules. Because the standard is not operated by the downstream buyers and retailers in the supply chain, there is sometimes conflict between the requirements that buyers impose and the requirements of the LGMA certification standards. Research indicates that CA farmers feel their buyers often impose requirements on them that are not part of private standards like the LGMA, let alone the background regulatory process. For example, some buyers in California went beyond the LGMA requirements, adding their own seemingly arbitrary requirements for leafy greens growers around allowed soil amendments and appropriate time intervals to be followed when harvesting land that had been exposed to floodwater during weather events. In the UK system, the setters of private standards and the downstream buyers are the same entity; grocery retailers are by far the biggest buyers, and many have their own standards which suppliers must follow in addition to public regulatory rules. But because the retailers own and operate the standards, the retailer as downstream buyer never specifies measures different from what the relevant produce safety standard (in this case also created by the retailer) requires. Echoing results at the policy level, the subset of standards that I examined indicated that food safety is managed in a more holistic way by private standards active in the UK than by those operating in CA. UK standards range more widely, often including additional values such as environmental health, social justice concerns, animal welfare, and chemical safety alongside food safety requirements. By contrast, US food safety controls follow a narrower focus on specific foodborne pathogens, and may contain few if any mentions of other concerns. This separated, piecemeal approach to ensuring diverse public goods encourages management of food safety goals in a vacuum, and my research supports the idea that this may pit different goals against one another rather than encouraging their mutual achievement. Each of the eleven standards for which an audit checklist could be obtained was evaluated and categorized clause-by-clause according to two parameters: Style and Focus. Style was measured by how many of the audit clauses in each standard were prescriptive in nature vs. how many were process-oriented. Focus was measured by how many of the audit clauses in each standard were focused on achieving increased food safety vs. how many were focused on achieving better environmental health. This section of my analysis also takes into account that many farmers carry multiple certifications at the same time, and aim to meet multiple overlapping sets of requirements and standards at once. For example, the BRC Global Food Safety Standard and the Red Tractor Assured Fresh Produce Standard are used in the UK as pre-regulatory risk assessments that help prioritize state regulatory enforcement measures, meaning that all producers must go through these audit processes in order to be able to sell their products through grocery retailers.
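To illustrate how a clause-by-clause categorization of this kind can be tallied, the sketch below scores a single standard on the two parameters described above. The example clauses, labels, and counting scheme are hypothetical and are meant only to show the bookkeeping, not the actual coding rules applied in this analysis.

# Illustrative sketch: tallying audit clauses by Style and Focus.
# The clause codings below are hypothetical examples, not drawn from any
# actual standard examined in this research.
from collections import Counter

# Each clause is coded on two parameters:
#   style: "prescriptive" or "process-oriented"
#   focus: "food_safety" or "environmental_health"
clauses = [
    {"style": "prescriptive",     "focus": "food_safety"},
    {"style": "prescriptive",     "focus": "food_safety"},
    {"style": "process-oriented", "focus": "food_safety"},
    {"style": "process-oriented", "focus": "environmental_health"},
]

style_counts = Counter(c["style"] for c in clauses)
focus_counts = Counter(c["focus"] for c in clauses)
total = len(clauses)

print("Style:", {k: f"{v}/{total}" for k, v in style_counts.items()})
print("Focus:", {k: f"{v}/{total}" for k, v in focus_counts.items()})
# A standard could then be binned into a category such as "Prescriptive Safety"
# or "Flexible Safety" based on the relative proportions of each coding.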

Food providers must demonstrate due diligence or face criminal consequences

In the UK, high-profile food safety concerns and a new focus on food safety began with the emergence of Bovine Spongiform Encephalopathy, or "mad cow" disease, in British cattle from 1986 onward. The BSE crisis, along with later scares such as the appearance of foot-and-mouth disease in cattle and sheep and the contamination of Belgian animal feed with dioxin in 1999, drew attention to production practices in livestock supply chains and to pathogenic food safety as a large-scale societal risk embedded in the industrial food system. The epidemiological identification of BSE and the subsequent confirmation of its link to new variant Creutzfeldt-Jakob Disease in humans sparked a massive controversy within British and European food production and regulation, shining a light on government mismanagement of the crisis and the role of food safety in international trade. These concerns left deep scars in the UK regulatory framework for food safety, eroding public perceptions of the authority of scientific experts and members of government to provide guidance in the face of salient public safety threats. Ultimately, this breakdown of public regulation led to the eventual creation of the European Food Safety Authority as an oversight body and a reorganization of European and UK food safety controls to focus on "farm to fork" management of agricultural safety and quality. These combined issues in modern animal husbandry called into question the basic safety of intensive agricultural production systems, and the British government's handling of the BSE crisis ultimately shook consumer confidence in the fundamental ability of public food regulation to adequately protect consumers from harm. This constellation of food safety failures prompted the UK government to adopt its seminal 1990 Food Safety Act.

In reaction to BSE and other scandals, and as part of an attempt to incorporate EU language and priorities at the regulatory level, the Act placed a high value on achieving traceability in food supply chains. Importantly, the 1990 Food Safety Act also established a new "due diligence defence" for assigning criminal responsibility in cases of food safety violations, an element that has played a strong role in driving much of the UK retail industry's focus on food assurance schemes and production standards. The language of the defense states that a food provider is responsible for any food safety risk that they could "reasonably" have been expected to know about or have acted to solve. During the same period, the United States government responded to domestic and international foodborne illness outbreaks by formally adopting a new food safety management protocol first developed by NASA to ensure the safety of food sent into space for American astronauts, and later broadened to minimize risk in a range of product supply chains. Known as Hazard Analysis and Critical Control Points (HACCP), this protocol directs producers to make themselves aware of all potential sources of risk in their operations and place controls at specific steps to neutralize each hazard. Beginning in the early 1990s, HACCP-style risk management controls were formalized and US legislation began to mandate their use. Over the next decade, HACCP models were adopted across global supply chains and incorporated into EU food law as part of the multinational response to BSE. In 1997, the Codex Alimentarius Commission, a joint body of the UN Food and Agriculture Organization and the World Health Organization, adopted HACCP into its international collection of food production standards for the protection of consumer health and the regulation of international food trade.

Given heightened awareness of risks throughout the food supply chain, food safety risk management across the industrialized nations has moved from testing the safety of final products on the retail shelf or the consumer's table to monitoring the riskiness of field-level agricultural production methods. Even in this heightened climate of food safety attention, some threats evaded early notice. The high-profile food safety failures of the 1980s had come primarily from the meat and poultry industries, focusing early regulatory responses on both sides of the Atlantic on livestock supply chains. Although botulism and other foodborne illness outbreaks had been seen in canned and preserved vegetable products in the US and elsewhere, outbreaks of animal origin gained more visibility during this time of increased focus on food safety. As a result, vegetable food safety threats largely escaped public attention and regulatory pressure through the late 1990s even as global consciousness of food safety gained momentum. The US Food and Drug Administration's first public acknowledgement of food safety risks in fresh produce came in 1998 with the FDA "Guide to Minimize Microbial Food Safety Hazards for Fresh Fruits and Vegetables," and the first EU Regulation establishing general food hygiene law was created in 2002. In the search for improved food safety assurances in global agricultural value chains, a new balance of power began to develop between regulators and the food retail industry that would transform the food regulatory landscape. Public regulatory responses from due diligence clauses to HACCP requirements represent examples of regulation at a distance, extending traditional public regulation to the realm of enforced self-regulation by private firms. This change in the landscape of food regulation reflected the rising power and centrality of non-state governance mechanisms, creating new architectures of authority in global agri-food chains. Lacking confidence after food scares, consumers sought new ways of ensuring product quality.

Food retailers responded to this "turn" to quality by introducing an array of non-governmental safety and quality assurance schemes designed to exceed the requirements of public regulation in an effort to win back consumer trust. Quality assurance standards soon appeared across the developed nations, focusing on topics such as enhanced animal welfare, fair labor practices, food safety, and environmental protection. Private or non-governmental food standards can be divided into three categories based on formation and participation. Individual firm standards are those created by a single private food corporation and intended to apply only to that firm's products and suppliers, e.g., Tesco's own-brand 'Nurture' program founded in 1992 and required for suppliers of Tesco's UK retail stores. Collective national standards represent standards developed jointly by multiple private food corporations which are designed to apply across multiple firms or industry sectors, as when the British Retail Consortium formed and created its Food Technical Standard in 1998. Lastly, collective international standards are those formed by geographically diverse coalitions of food firms and designed to apply at an international level to facilitate the movement of foods through global supply chains that cross many national regulatory settings. The International Featured Standards – Food, created in 2003 by a coalition of Dutch, French, and Italian retailers, is an example of one such international standard currently followed by agricultural producers in many countries worldwide. Whether individual or collective, national or international, these and other retailer-driven private standards have restructured power relationships within agricultural supply chains in the name of achieving heightened food safety and quality. Officially, private standards such as these depend on voluntary market relationships and do not have the authority to mandate adoption or participation. However, research has shown that voluntary private standards of this sort can become de facto mandatory if they become widely adopted in the market and companies require compliance from producers. Such standards may then be used by both public and private actors as a recognized governance mechanism, comparable in power to public regulation. The ascendance of non-governmental food standards has increased retailers' control over all stages of food production. By ostensibly guaranteeing the consumer public a higher level of safety or quality than that presumed to be delivered by the background public regulatory process, private food standards assert the authority of food retailers as a legitimate rule-making force within broader food governance. Food retailers in both the UK and US markets now hold a position of regulatory authority and legitimacy within agri-food supply chains. By becoming the architects of the leading food regulations affecting both their own commercial activities and the growing practices of their producer suppliers, retailers have effectively made themselves the gatekeepers of food safety. With industry in control of defining, measuring, and managing food safety risks, government regulators in many parts of the industrialized world have focused more on the administrative task of evaluating industry's efforts. Regulation of the food system achieved via non-state actors has shown certain benefits over pure public regulation, along with notable drawbacks.
Private regulation can address market failures that might otherwise go unregulated by public entities, providing solutions to collective action problems that might persist in food supply chains without intervention. Examples include the establishment of privately operated organic food certifications in the 1970s in the United States and Europe, designed to solve ecological externalities in conventional food production, and fair trade certifications born in the late 1980s and early 1990s with the aim of correcting exploitative labor conditions in developing countries exporting commodities for international trade. By offering solutions for these and other market failures, private standards and other forms of self-regulation can enhance the efficiency of supply chain management, reduce transaction costs, standardize industry responses to problems, and reduce liability for both retailers and producers. Instead of competing solely on factors like price or convenience, private quality standards can allow desired public goods such as improved food safety, animal welfare, labor standards, or environmental sustainability to be measured and managed directly within the supply chain, making them into attributes that can fuel retail competition as part of brand and product differentiation.

Private regulation can also be especially helpful for addressing problems that involve transnational trade and globalized supply chains of the sort now commonplace in food production, because it can be much more easily enforced across an extended supply chain. Private regulation can hold to account industry actors who operate across and outside national boundaries, and who might otherwise escape or supersede traditional state regulation. Furthermore, in part because private standards that control market access can become effectively mandatory for the producers they apply to, they can be powerful vehicles for advancing a desired outcome quickly through a supply chain that crosses multiple public jurisdictions. However, research suggests that private food standards do not necessarily provide equivalent results compared with traditional public regulation. Examinations of quality assurance schemes in UK agriculture have concluded that private food governance is unlikely to provide the outcomes sought by either consumers or governments, and that many private standards are critically flawed because guiding objectives fail to include adequately broad coverage of environmental threats and because definitions of key metrics and indicators are not clear enough to be well assessed and enforced. Similarly, a recent UK study of private and hybrid environmental standards found that few standards exist which genuinely have the potential to achieve public environmental outcomes. Because goals such as food safety assurance and environmental protection are inherently linked to profit motives, food retailers often create proprietary quality standards as part of corporate branding efforts, wielding them more as tools of market competition than as tools of public good. Even when private standards are built by broad coalitions of stakeholders or include independent third-party accreditation and auditing mechanisms to increase consumer trust in objectivity, retailers often implement their own proprietary standards as an additional step even after collective standards are put in place. In this marketing space, proprietary private safety and quality standards represent a black-box style of food regulation where decisions are not necessarily openly accountable to the public or subject to robust monitoring and enforcement. Research suggests that food safety and quality controls under these circumstances may function to promote corporate profit and industry dominance rather than to achieve public regulatory goals. In light of the drawbacks of pure non-state regulation and the costs and compliance challenges of pure state regulation, co-regulatory approaches have become common in food safety. Co-regulation in the food system purports to decrease both the high administrative costs of achieving compliance and the adversarial climate that coercive regulatory tactics can create, and combinations of public and private regulation can achieve better outcomes than either strategy can deliver alone. However, scholars have observed that co-regulation of food supply chains may work most effectively when the motives of the public and private parties involved are aligned, a situation that is far from common in food regulation.

Food is increasingly a manufactured good about which we know progressively less and less as time goes on

Groups of thought and practice as disparate as the California organic movement, the Italian slow food movement, the eat-local phenomenon, low-carbon diets, geographically protected traditional denominations of origin like Champagne and Parma ham reflecting ideals of quality, and proponents of the French concept of terroir have all attempted in different ways to bring food back from nowhere, to celebrate that which makes it locally specific, non-industrial, non-standard, and anti-capitalist. Julie Guthman has observed that while eco-labels and place-based certifications putatively re-embed lost social values in the market economy and protect them from being further eroded by neoliberal market development, such forms of resistance also reinforce the power of market dynamics through their reaction to them, and allow further penetration of capitalism into new markets. Moreover, although these efforts continue to make important gains, they remain in their truest form at the periphery of the dominant global food system. The psychological and physical divide between mainstream food production and food consumption continues to grow. Food luminaries ranging from journalist and professor Michael Pollan to celebrity chef Jamie Oliver have noted that we as a society no longer seem to understand what "food" truly is. The less we know about how our food comes to arrive on our tables (how it is farmed, harvested, processed, and transported, by whom, under what conditions), the more disconnected we become from the politics associated with our food. A consumer may have opinions about how farm laborers should be treated, what amount of carbon emissions should be generated by a meal, or how fresh a salad should be, but if the farm landscape and its people and technologies are out of sight and out of mind, that consumer is unable to make a link between those values and the agricultural system.

The farther away food production feels, the less empowered consumers and even regulators are to engage with food production and its challenges. Lack of attention at many levels threatens the ability of the food system to deliver to us the food we need and the deeper values that make up our society. 'Food from nowhere' affords the consumer an unprecedented degree of choice, freed from many constraints. Modern grocery store shelves offer consumers high degrees of cosmetic quality, standardization, multi-seasonal availability, and low price. But these benefits come with a wide array of costs, the result of producing food within a corporate capitalist structure of distribution. The stable supply, large volume, and dependable quality that must be present to create food from nowhere in turn require longer supply chains and more industrialized production to deliver them. To reduce competition, to accomplish effective transportation of fresh products across long distances, and to insulate themselves from financial shocks, food companies have grown larger and more diversified. Corporate food giants like PepsiCo, General Mills, Cargill, Coca-Cola, Unilever, ConAgra, and Nestle grew to their present size primarily through mergers and acquisitions of smaller brands. A handful of major corporate food firms now represent hundreds of smaller brands that are still sold under their original brand names, hiding the consolidation of the market. Some of these firms have also widened their influence into adjacent market spaces such as genetic engineering, energy, seed and pesticide manufacture, and agricultural securities, while corporations such as Monsanto and Syngenta originally based in those sectors have also begun to add food to their portfolios. During the 20th century, food-producing corporations also began to actively influence both regulations and public opinion through their lobbying and advertising campaigns, focused on increasing sales and not necessarily on increasing human wellbeing. The result is a cluster of agri-food conglomerates that wield immense financial and political control in the global marketplace. Soon after the American colonial period closed, market forces operating in the American free market for agricultural goods incentivized the consolidation of agri-food producers into corporate conglomerates, and spurred a pattern of vertical and horizontal integration of supply chains and financialization of the agriculture industry.

The result today is that the American food system is dominated by a highly vertically integrated, highly consolidated set of agricultural markets in which corporate food companies now control the entire life cycle of production of a plant or animal in the food system. Farmers in this type of production are increasingly corporate employees rather than free operators, constrained by production contracts, and both legally and financially controlled by the vertically integrated corporations for whom they grow food products. Fresh salad greens demonstrate this rise of corporate consolidation, vertical integration, and the shift to contract farming. As with many foods, lettuce production systems exist along a spectrum from small-scale farms to immense industrialized corporate farms. Salad mixes were initially developed in California as a luxury commodity for restaurants that made their name by offering small-scale, organically grown food and emphasizing a close connection between farm and table. This packaging innovation was then copied as part of the portfolio of large corporate farms serving audiences with less concern over organic methods or farm-to-table cuisine, and the production systems behind this new commodity began to industrialize. Scholars have described the resulting bifurcated market, in which some small alternative producers still deliver fresh salads directly from farm to table, but the bulk of the market is dominated by large corporate farms run under contract that now operate under increasingly industrial and conventional supply chain models. Rural sociologist Thomas Lyson describes the impact of these changes thus: "The development of supply chains means that on-farm decisions will no longer be made to benefit the long-term sustainability of the farm, the good of the community, or the health of the natural resources that sustain the farm." In such a system, the profits of agriculture are owned by the corporate conglomerate, not the farmer, and farm management decisions are also made by the corporate conglomerate, not the farmer. Distancing of human consumers from our food is thus taken to a mind-boggling extreme: even the dwindling percentage of the population who wake up each morning on farms or whose livelihood comes directly from growing food are not those who are in control of the way farming takes place. At the same time, the psychological and geographical distancing of consumers results in a packaged food purchasing landscape where consumers cannot tell which products come from which of the large firms, or how those products and firms may reflect their values. In both fresh and packaged foods, corporate consolidation and control of all levels of food and agriculture presents growing regulatory challenges for ensuring that food delivers on desired outcomes such as healthy environments, fair labor practices, and safe food. The 20th century's population explosion, increasing urbanism, and new technologies for agriculture, food manufacture, and food marketing gave rise to corporate food culture in the Global North. Grocery retailers and food brands consolidated, vying for control of emerging consumer markets. Capitalist industrial development over the following half century then allowed mega-corporations like Cargill, Nestle, and Unilever to amass unprecedented financial and political resources.

Vertical integration makes these corporations some of the largest agricultural producers worldwide, managing multinational supply chains and posting yearly profits larger than many smaller countries' GDPs. These goliath corporate entities also wield considerable political power, through their size and financial importance, as well as through lobbying and direct political involvement. As the power held by these large corporations grew toward the end of the 20th century, scholars from many disciplines began pointing to a decrease in the power and influence held by nation-states. In this globally linked world, where movements of capital, goods, and information now happen faster and at a scale never before seen, transnational corporate interests have gained economic and political clout in a way that crosses and almost erases borders, allowing them to escape or influence regulation of their activities. Scholars have also noted that "globalization is developing in the context of a new international division of labor" as a result of the dynamics of agrarian change and the rise of capitalist farming that have occurred in the industrialized world. Different areas of the world have come to specialize in, or be trapped within, different types of goods or services within the new global market, with differential degrees of political and societal bargaining power, much of which is now exerted within supply chains by means of private standards. It has even been suggested that the rise of transnational corporations may make the concept of the nation-state economically irrelevant. Using their concept of food regimes, Friedmann and McMichael posit that transnational corporations now hold geopolitical power in the emerging present-day Corporate Food Regime in the same way that national governments held it in the food regimes prior to 1970. These trends have resulted in a weakening of public regulatory power in favor of an increasingly diffuse array of non-state actors exerting power over food and agriculture. When foods are produced and consumed in locations so far apart that they fall under different cultures, languages, political divisions, and social norms, little can be relied upon without being codified into standards that follow supply chains rather than borders. A corporation that sources a food crop from a country that has fewer or less effective environmental and labor regulations may operate there in a way that its home country would not allow. Both governments and consumers of goods in the many countries where this product eventually ends up for sale are then left unable to influence or fully regulate the activities of the transnational corporation in its broader supply chain. There are no formal public governmental entities in this interstitial space, necessitating alternative forms of non-state governance to achieve a degree of regulatory oversight where traditional state regulation cannot penetrate. In addition to, and sometimes in place of, traditional state-led public regulation, what has begun to take shape in the modern marketplace is a devolution of regulatory power from the state to an array of non-state actors. This development is not inherently problematic, and may in many cases reflect attempts to make regulation more flexible, more responsive, and less costly to pursue.
However, when regulation by state regulatory authorities is replaced by quasi-governmental regulation designed and promulgated by authorities that are not publicly accountable, and for whom there are no established codes of conduct, important questions surround the resulting regulation. Private corporations, activist NGOs, and independent certification bodies have spearheaded the emergence of an array of privately held production and quality standards, designed to provide assurances where public regulation does not suffice or does not exist. Thus, the new landscape is marked by a network of standards and certifications maintained by private entities, some separate from industry, some industry-led or industry-influenced. Examples include certifications managed by non-profit groups like the Forest Stewardship Council, industry-led governance bodies like the Roundtable on Sustainable Palm Oil, and advocacy-based campaigns to inform the consumer public through "naming and shaming" corporations, as in popular campaigns to hold Nike or Apple accountable to consumers for their environmentally and socially questionable overseas activities. The global rise of transnational corporations and the rise of private standards and certifications in the global marketplace coincided with a shift in how states exerted power over food and agriculture. Prior to this shift, states had sole power to control food and agriculture production within their borders. But since the late 1980s and 1990s, this power has become constrained, and shared with private industry. Transnational trade has made it harder to regulate the activities of food and agriculture because products cross borders and regulatory systems between production and sale, and neoliberal reforms in many global markets have led to public regulatory rollbacks. Against this backdrop, markets have developed and firms have sought to differentiate their products in ways that do not rely exclusively on state regulation, perhaps reflecting a new emerging economic order. Capitalist market development under this framework can focus on product differentiation through "a turn to quality," in which products are marketed based on qualities such as environmentally friendly production practices, high food safety assurances, or other attributes verified by non-state certifiers. The result of these changes has been described as a shift from government to governance, in which states retain a degree of basic rulemaking authority and supervisory control, but private systems of certification hold much of the responsibility for defining controls and ensuring compliance.

Backward and forward stepping algorithms were used to identify multivariate regression models

In the current study, the majority of Enterococcus spp. isolates that were resistant to macrolides were found in hospital cows, similar to previous work demonstrating that Enterococcus resistance to macrolides was found in isolates from clinical animals. Moreover, all MDR Enterococcus isolates were from hospital and fresh cows, indicating that macrolide resistance genes might originate from hospital cows that are being treated for a variety of medical conditions and then spread to fresh cows. Among Enterococcus isolates, a negative association was noted between the occurrence of tetracycline and macrolide genes, indicating that the presence of the tetracycline resistance genes was associated with a reduced risk of simultaneously finding the macrolide resistance gene in these fecal bacteria. This finding is interesting given the previous observation that resistance to tetracycline and the macrolide–lincosamide–streptogramin group has been observed to spread through transposable elements. In dairy production, lincosamide is used to treat mastitis on conventional farms; however, lincosamide resistance genes were found in hutch calves based on the CARD database. This may indicate the transfer of resistance genes along the production line and that calves can acquire resistance genes at this early age. Similarly, it was reported that calves at 1–2 weeks of age acquired tetracycline resistance genes, likely due to colonization with resistant bacteria from their mothers and/or the dairy farm environment, given the ubiquity of manure. According to genes identified from the CARD database, genes conferring resistance to three antimicrobial classes were commonly observed among E. coli and Enterococcus spp.
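A negative association of the kind described above can be examined, for example, with a 2x2 contingency table of gene co-occurrence across isolates. The sketch below is illustrative only; the counts are hypothetical placeholders, and it does not reproduce the specific statistical models used in the study (the section title refers to backward and forward stepwise multivariate regression).

# Illustrative sketch: testing whether tetracycline and macrolide resistance
# genes co-occur less often than expected among Enterococcus isolates.
# The counts below are hypothetical placeholders, not study data.
from scipy.stats import fisher_exact

#                       macrolide gene +   macrolide gene -
# tetracycline gene +          3                 20
# tetracycline gene -         12                 14
table = [[3, 20],
         [12, 14]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# An odds ratio well below 1 with a small p-value would be consistent with a
# negative association: isolates carrying tetracycline resistance genes are
# less likely to also carry the macrolide resistance gene.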

No significant links between tetracycline and fluoroquinolone resistance were observed in this study, which may be because resistance to fluoroquinolones is frequently related to chromosomal mutations, while resistance to tetracycline can arise through genetic mobility. For this study, we identified genes in E. coli by evaluating two publicly available databases: sulphonamide, trimethoprim, and beta-lactamase resistance genes from ResFinder, and tetracycline and aminoglycoside resistance genes from both ResFinder and CARD. These resistance genotypes were in concordance with resistance phenotypes we characterized previously. For Enterococcus spp., high levels of agreement between resistance genotypes and phenotypes were only found for tetracycline resistance genes from ResFinder. Similarly, a previous study observed a lower concordance between streptomycin resistance phenotypes and genotypes in Salmonella isolates. In contrast, high correlations between the presence of resistance genotypes and observed phenotypes have been reported in nontyphoidal Salmonella from retail meat specimens and human cases. Another study reported 67.9–100% concordance between resistance phenotypes and genotypes and 98.0–99.6% concordance between susceptible phenotypes and genotypes in Campylobacter from retail poultry. Although relatively few studies have been performed on Gram-positive organisms using WGS to study AMR, a high correlation between resistance genotypes and phenotypes in Enterococcus isolates has been reported. The lower correlations between resistance genotypes and phenotypes of Enterococcus in the current study could be due to the small numbers of bacterial isolates tested, the availability of drugs for antimicrobial susceptibility testing in the commercial kits, and the different method used to analyze correlations between genotypes and phenotypes. In addition, the lower correlations could also be due to discrepancies between genotype and phenotype resistance that vary with bacterial species and antimicrobials.
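As a simple illustration of how genotype-phenotype concordance of the kind reported above can be computed, the sketch below compares WGS-predicted resistance calls with phenotypic calls for a single antimicrobial. The isolate records are hypothetical, and the calculation is plain percent agreement rather than the specific concordance metrics used in the studies cited.

# Illustrative sketch: percent concordance between WGS-predicted resistance
# genotypes and phenotypic resistance calls for one antimicrobial.
# The isolate records below are hypothetical placeholders, not study data.

# Each tuple: (isolate_id, genotype_resistant, phenotype_resistant)
isolates = [
    ("ENT-01", True,  True),
    ("ENT-02", True,  False),
    ("ENT-03", False, False),
    ("ENT-04", False, False),
    ("ENT-05", True,  True),
]

concordant_resistant = sum(1 for _, g, p in isolates if g and p)
concordant_susceptible = sum(1 for _, g, p in isolates if not g and not p)
total = len(isolates)

resistant_total = sum(1 for _, _, p in isolates if p)
susceptible_total = total - resistant_total
overall = (concordant_resistant + concordant_susceptible) / total

print(f"overall agreement: {overall:.1%}")
print(f"concordance among phenotypically resistant isolates: "
      f"{concordant_resistant / resistant_total:.1%}")
print(f"concordance among phenotypically susceptible isolates: "
      f"{concordant_susceptible / susceptible_total:.1%}")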

Therefore, a combination of genotype-based resistance prediction with phenotypes determined by antimicrobial susceptibility testing would provide a more accurate assessment of resistance across bacterial species, sample types, and antimicrobials. The genotypes in the current work, together with the phenotypes from our previous work on the same bacterial strains, allowed us to better understand the resistance of E. coli and Enterococcus spp. on dairy farms.

Based on phylogenetic analysis of resistance genes in E. coli detected from the ResFinder database, a quarter of the isolates in cluster 2A were from hutch calves. Phylogenetic analysis of resistance genes of Enterococcus detected from ResFinder also indicated a unique cluster of MDR genes mainly from hutch calves. Similarly, phylogenetic analysis of genes detected from the CARD database found distinct clusters of genes in E. coli and Enterococcus from hutch calves. These results therefore indicate that bacteria from hutch calves had AMR characteristics distinct from those of isolates from cattle at other stages of dairy production. Most E. coli isolates from hutch calves were MDR to aminoglycosides, phenicols, sulphonamides, and tetracyclines, which is consistent with other studies reporting that E. coli from calves are frequently resistant to multiple antimicrobials. For example, MDR bacteria were very common in integrated veal calves. A review article indicated that young dairy calves often carry high levels of AMR in their fecal E. coli and Salmonella enterica, which could provide a potential reservoir of AMR genes for the greater dairy farm environment, depending on how calf manure is managed or mixed into the general manure stream on the dairy. Our results, in addition to these prior studies and reviews, suggest that monitoring of MDR bacteria in hutch calves may be important for reducing the spread of AMR bacterial genes to other production stages in dairy farm settings. On the other hand, heat maps and phylogenetic analyses indicated a wide distribution of multiple resistance genes among multiple adult cattle production stages for fecal E. coli based on the CARD database. Given that one adult dairy cow can produce 20 to 30 kg of feces per day, that conventional dairy herd sizes in California often exceed 1,000 adult cows, and that the concentration of fecal E. coli in dairy manure typically exceeds 10⁶ cfu/g (on the order of 2.5 × 10¹³ cfu shed per day by a 1,000-cow herd producing roughly 25 kg of feces per animal), one can expect MDR fecal bacteria to be widely distributed throughout the greater dairy farm environment, and likely in relatively high concentrations. A previous study reported that AMR gene profiles varied between farms and sample types but that a large proportion of genes were common to all sample types, suggesting horizontal transfer of common resistance genes among production stages. Samples in this study were collected on one farm at one point in time, and the sample size from each production stage was small due to the cost of WGS and available funding; these constraints may limit the representativeness of our results. Nevertheless, our study warrants further investigation of the relationship between AMR clusters in different cattle groups and in different types of farm sample matrices, to support efforts to better control the spread of AMR within modern conventional dairy farms.

In our previous work, we characterized the antimicrobial resistance phenotypes of E. coli and Enterococcus spp. from cattle at different production stages on a commercial dairy farm in Central California, USA.
Briefly, using convenience sampling, fecal samples were collected from the rectum of dairy cattle at twelve different production stages on a commercial farm in the San Joaquin Valley, the major dairy production region of California. The antimicrobial susceptibility of E. coli and Enterococcus strains was determined from the minimum inhibitory concentrations of the tested antimicrobials using a microbroth dilution method. Antimicrobials tested for E. coli were cefoxitin, azithromycin, chloramphenicol, tetracycline, ceftriaxone, amoxicillin/clavulanic acid, ciprofloxacin, gentamicin, nalidixic acid, ceftiofur, sulfisoxazole, trimethoprim–sulfamethoxazole, ampicillin, and streptomycin. Antimicrobials tested for Enterococcus were tigecycline, tetracycline, chloramphenicol, daptomycin, streptomycin, tylosin tartrate, quinupristin/dalfopristin, linezolid, nitrofurantoin, penicillin, kanamycin, erythromycin, ciprofloxacin, vancomycin, lincomycin, and gentamicin. Resistance phenotypes of E. coli and Enterococcus from the previous work were used for the analysis of associations with genotypes in the current work. In the current study, based on the availability of strains from cattle at different production stages whose resistance phenotypes had been determined in our previous work, 40 strains of E. coli and 49 strains of Enterococcus from our culture collections were selected for genotype characterization using whole-genome sequencing.
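To illustrate how these phenotypes are later compared with the WGS-detected genotypes, the sketch below computes a Cohen's kappa value for one antimicrobial class from paired genotype/phenotype calls, anticipating the Kappa agreement analysis described in the statistical methods that follow. The isolate calls are invented for illustration; the study itself computed kappa in Stata from its own genotype and phenotype tables.

```python
# Minimal sketch of the genotype-phenotype agreement (Cohen's kappa) analysis.
# The isolate calls below are invented for illustration only.

def cohen_kappa(x, y):
    """Cohen's kappa for two paired binary call lists (1 = resistant / gene present)."""
    assert len(x) == len(y)
    n = len(x)
    # observed agreement
    po = sum(a == b for a, b in zip(x, y)) / n
    # expected agreement by chance, from the marginal frequencies
    px1, py1 = sum(x) / n, sum(y) / n
    pe = px1 * py1 + (1 - px1) * (1 - py1)
    return (po - pe) / (1 - pe)

# Hypothetical tetracycline calls for 10 isolates:
genotype  = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # tet gene detected (ResFinder)
phenotype = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]  # phenotypically resistant (MIC)
print(f"kappa = {cohen_kappa(genotype, phenotype) * 100:.0f}%")  # ~78% for these calls
```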

Descriptive statistics were used to examine the distribution of AMR and MDR genes in E. coli and Enterococcus detected from the CARD and ResFinder databases. Logistic regression analysis was used to identify relationships between the presence of various resistance genes. The production stages and bacterial species were also included as independent variables in the regression models. All independent variables were screened for potential significance with univariate regression, and a p-value threshold of 0.05 was used as the inclusion criterion for the model. Kappa coefficient analysis was used to assess the level of agreement between an isolate carrying a specific resistance genotype and the same isolate expressing the corresponding resistance phenotype. The resistance genotypes used in the Kappa analysis were the genes detected from the CARD and ResFinder databases, while the resistance phenotypes were determined in our previously published work on the same set of samples. For the Kappa analysis, each bacterial isolate's phenotypic resistance was compared with its corresponding pattern of resistance genes by antimicrobial drug class. The percentage generated by the Kappa analysis indicated the degree of agreement between a resistance phenotype and its corresponding genotype class: a Kappa value of 100% indicated perfect agreement, while a Kappa value of 0% indicated no agreement between the presence of an AMR genotype class and its associated phenotype. All statistical analyses were performed using Stata version 14. A p-value < 0.05 was considered statistically significant.

Absorption and scattering of shortwave solar irradiance in the Earth's atmosphere is balanced by absorption, emission, and scattering of long wave radiation. This balance between shortwave and long wave radiation determines the temperature structure of the atmosphere and local temperatures on Earth's surface. The SW and LW balance is essential not only for understanding climate change but also for the thermal design of radiant cooling systems, cooling towers, solar power plants, and the built environment in general. Current interest in the optical design of passive cooling devices that take advantage of atmospheric windows to reject heat to outer space requires a detailed balance between the incoming thermal radiation from the atmosphere and outer space and the outgoing emissive power from the coolers in order to calculate equilibrium temperatures and cooling efficiencies. These figures of merit depend on the local atmospheric conditions, which include the convective environment around the device and the downwelling radiative flux from the atmosphere. The convective contribution can be minimized by design, but the thermal radiation from the atmosphere is geometrically constrained by the ability of passive cooling devices to radiate directly to outer space. Absorption bands of water vapor dominate the absorption and emission of infrared radiation in the atmosphere when conditions are wet. When the relative humidity is low, other constituents such as CO2 and aerosols contribute non-negligibly, through specific bands of the spectrum, to the overall thermal balance of passive cooling devices. Therefore, a detailed spectral model of long wave radiative transfer in the atmosphere is needed to calculate the thermal balances of such optically selective devices.
Two distinct solar power technologies have emerged as the most competitive in the renewable energy market for utility-scale solar plants: direct photovoltaic conversion and concentrated solar power, which uses heliostat fields to direct solar radiation to a central boiler. In addition to offsetting greenhouse gas emissions, large-scale solar farms also interact with the atmosphere through surface albedo replacement. While both PV and CSP technologies affect the local environment, the extent to which they do so has not been studied in detail. Thus a spectrally resolved shortwave radiative model is needed to quantitatively evaluate the effects of albedo replacement on the local shortwave radiative exchange between the ground and the atmosphere. Therefore, to better understand the thermal balances of the Earth-atmosphere system, passive cooling devices, and solar power farms, the objective of this research is to develop a detailed radiative model that simulates shortwave solar and long wave atmospheric radiative transfer in the atmosphere, with and without the presence of clouds. The model is validated against ground measurements and other radiative models for various meteorological conditions. By comparing the modeling results with ground telemetry, representative cloud characteristics for given surface conditions are proposed. The result is a complete spectral model that allows for the determination of long wave and shortwave irradiance and can be used for a wide range of meteorological conditions. With the developed radiative model, the albedo replacement effects of large-scale PV and CSP farms can be quantified for various conditions. The model also serves as a valuable tool for analyzing the contribution of each atmospheric constituent to the thermal balance of the Earth-atmosphere system across seven critical bands of the infrared spectrum. Furthermore, the determination of thermal equilibrium temperatures for radiative cooling devices also requires knowledge of the spectral atmospheric solar and long wave radiation.
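For context on the sky-side boundary condition such a model must supply, the sketch below estimates clear-sky downwelling long wave irradiance from screen-level air temperature and vapor pressure using a Brunt-type gray-sky emissivity. This broadband shortcut is not the spectrally resolved model described above; the coefficients are commonly quoted Brunt values and should be treated as illustrative.

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def downwelling_lw(t_air_k: float, e_hpa: float) -> float:
    """Clear-sky downwelling long wave irradiance (W/m^2).

    Uses a Brunt-type effective sky emissivity, eps = a + b*sqrt(e), with e the
    screen-level vapor pressure in hPa. The coefficients below are commonly
    quoted values and are illustrative only; the spectrally resolved model in
    the text does not rely on this shortcut.
    """
    a, b = 0.52, 0.065
    eps_sky = min(a + b * math.sqrt(e_hpa), 1.0)
    return eps_sky * SIGMA * t_air_k ** 4

# Example: 20 degC air, 10 hPa vapor pressure (roughly 43% relative humidity)
print(f"{downwelling_lw(293.15, 10.0):.0f} W/m^2")  # ~304 W/m^2
```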


These differences are clear evidence that different organisms can behave differently in wetlands

Key practices such as no-till farming, optimal use of biomass, groundwater recharge, and substitution of natural processes for chemical inputs require further place-based research in order to develop and disseminate "best practices" for large-scale operations through farmer-to-farmer and extension networks.

Inspired by Dr. Timothy Wise's book Eating Tomorrow: Agribusiness, Family Farmers, and the Battle for the Future of Food and Dr. Molly Anderson's recent paper "The importance of vision in food system transformation," I aim to contribute to building and implementing a shared vision for tomorrow's food system, one that is climate mitigating, ecologically restorative, land based, and empowering of small farmers and historically marginalized groups in food system politics. A vision "is a beginning for transformation, but it requires policy that enables it to be enacted, ideally through democratic processes. The vision, buttressed by policy and democratic governance, is what determines where people are able to buy food, how much they pay, whether farmers earn decent incomes, and whether the food is healthy". Lopez Island food system actors have made incremental progress articulating a vision since 1989, starting with the mission statement of the Lopez Community Land Trust. The East Bay region of the San Francisco Bay Area is building a vision for increasing food security via urban agriculture through the work of Food Policy Councils in Berkeley, Richmond, and Oakland. Small farms in both Washington and California are starting to put forth a vision for how regenerative agriculture and farm-based education can aid in the battle against climate change. Bringing these visions together under the polycentric governance model, policy recommendations must be targeted at the appropriate level: county governance for zoning code updates and land use designations, state governance for climate and environmental education standards and funding, and national-level policy to revamp the Farm Bill into an incentive package for smaller-scale, regenerative, relocalized agricultural operations.

Building on the body of research presented in this dissertation, one of my future goals is to establish a Climate Farm School, where young people can come to a demonstration farm and deepen their understanding of the climate crisis while engaging in climate solutions through producing food. The purpose would be threefold: 1) establish a demonstration farm that models climate-friendly agricultural practices while producing and distributing food, 2) teach young people and aspiring farmers how to implement and improve climate-friendly practices, and 3) engage with local universities in research projects to explore and scale agricultural climate change mitigation/adaptation. My vision is that this farm school could arise through partnership with an existing farm, or through the right opportunity of land acquisition and fundraising. While I seek to engage first with the youth education sector, I can imagine a parallel "Climate Farm School" for policymakers to better understand and connect with climate-friendly farming operations in their areas of jurisdiction to inform and direct their policy proposals. Bringing young people and policymakers into the sustainable food system transition process is a critical step for food system researchers to take in order to realize positive change.

Dendritic wetland designs, which consist of a sinuous network of water-filled channels and small, vegetated uplands, can help reduce water turbulence associated with high winds. Vegetative cover has been shown to decrease sediment re-suspension. For example, Braskerud found that an increase in vegetative cover from less than 20 percent up to 50 percent reduced the rate of sediment re-suspension from 40 percent down to near zero. Wetland depth may also have an indirect effect on sediment retention.

The water should be deep enough to mitigate the effect of wind velocity on the underlying soil surface, but if the water is too deep, vegetation will not be able to establish and a significant increase in re-suspension of sediment will result. Water depths between 10 and 20 inches optimize conditions for plant establishment, decreased water velocity, well-anchored soil, and a short distance for particles to fall before they can settle. An excess of vegetation can significantly reduce a wetland's capacity to retain E. coli. Maximum removal of E. coli occurs under high solar radiation and high temperature conditions, and vegetation provides shading that can greatly reduce both UV radiation and water temperatures. While vegetation can provide favorable attachment sites for E. coli, a dense foliage canopy can hinder the free exchange of oxygen between the wetland and the atmosphere. This vegetation-induced barrier limits dissolved oxygen levels, which in turn reduces predaceous zooplankton, further decreasing removal of microbial pathogens from the wetland environment. Vegetation plays an important role in filtering contaminants in wetlands. Plant uptake of pollutants, including metals and nutrients, is an important process, but it is not considered a true removal mechanism unless the vegetation is harvested and physically removed from the wetland. Wetland vegetation also increases the surface area of the substrate available for microbial attachment and for the biofilm communities that are responsible for many contaminant transformation processes. Shading from vegetation also helps reduce algae growth. However, certain types of vegetation can attract wildlife such as migrating waterfowl, which may then become a source of additional pathogens. Vegetation that serves as a food source or as roosting or nesting habitat for waterfowl may need to be reduced in some settings. Other important considerations for vegetation coverage in wetlands include total biomass and water depth.

Vegetation should provide enough biomass for nutrient uptake and adsorptive surface area, but it must also be managed to allow sufficient light penetration to enable natural photodegradative processes and to prevent the accumulation of excessive plant residues, which can lead to the export of dissolved organic carbon. One way to promote this balance is to create areas of deeper water intermixed with the shallower areas. Plants will establish more readily in the shallow areas and less so where the water is deeper. In an agricultural setting, it may be hard to establish plantings of native species within wetlands due to the large seed bank of exotic species that may be present in input waters. You can also manage the type and amount of vegetation by manipulating the timing and duration of periods of standing water in the system. In extreme instances, you can actually harvest excess biomass.

In addition to managing vegetation and water depth to maximize sedimentation and pathogen photodegradation, you can also manipulate hydrology to maximize the removal of microbial pollutants in wetlands. The importance of hydrologic residence time is apparent when you recognize that a longer HRT increases the exposure of bacteria to removal processes such as sedimentation, adsorption, predation, toxins from microorganisms or plants, and degradation by UV radiation. E. coli concentrations have been shown to increase in runoff from irrigated pastureland when the volume of runoff is increased. High runoff rates increase the mobility of contaminants from fields and decrease the HRT within the wetland, thus reducing the opportunity for filtering pathogens. Despite variations in several characteristics among the four flow-through wetlands in the case study described earlier, HRT was a consistently good predictor of E. coli removal efficiency. Mean removal efficiency was 69, 79, 82, and 95 percent for wetlands having mean HRTs of 0.9, 1.6, 2.5, and 11.6 days, respectively. Remarkably, an HRT of less than a day can allow for considerable E. coli retention, which means that a relatively small wetland area can treat runoff from a relatively large agricultural area. The relationship between removal and HRT was not so clear for enterococci. In this case, W-1, with an HRT of 2.5 days, demonstrated a lower removal rate than W-2 or W-3, which had HRTs of 0.9 and 1.6 days, respectively. As discussed above, there are many parameters that can influence the environmental fate of pathogens in wetlands, including vegetation density, design, age, size, contributing area, and depth. A number of these wetland characteristics can doubtless be altered to increase bacteria removal efficiency. The efficiency with which contaminants can be reduced in agricultural water as it passes through a wetland depends largely on the extent to which water is evenly distributed across the wetland area. A wetland's retention capacity is diminished if its design results in stagnant zones that either reduce the effective treatment area or short-circuit longer flow paths, decreasing the HRT. Efficient wetlands come in a variety of shapes and sizes.
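To make these residence time figures concrete, the sketch below does two things: it computes a nominal HRT as effective water volume divided by flow rate, and it back-calculates the first-order E. coli decay constant implied by each of the case-study removal efficiencies under the common simplification that removal = 1 − exp(−k × HRT). The sizing inputs are hypothetical, nominal HRT ignores stagnant zones and short-circuiting, and the first-order assumption is an illustration rather than the analysis used in the case study; the way the implied k falls as HRT grows is itself a hint that a single rate constant rarely captures real wetland behavior.

```python
import math

def nominal_hrt_days(area_m2, depth_m, flow_m3_per_day, void_fraction=0.9):
    """Nominal HRT = effective water volume / flow rate, in days.

    void_fraction roughly discounts volume occupied by vegetation; stagnant
    zones and short-circuiting would make the real residence time shorter.
    """
    return area_m2 * depth_m * void_fraction / flow_m3_per_day

# Hypothetical sizing: 0.5 ha wetland, 0.4 m deep (~16 in), 1500 m^3/day inflow
print(f"nominal HRT ~ {nominal_hrt_days(5000, 0.4, 1500):.1f} days")  # ~1.2 days

# Mean HRT (days) and E. coli removal efficiency for the four flow-through
# wetlands in the case study above; assume removal = 1 - exp(-k * HRT).
case_study = [(0.9, 0.69), (1.6, 0.79), (2.5, 0.82), (11.6, 0.95)]
for hrt, removal in case_study:
    k = -math.log(1.0 - removal) / hrt  # implied first-order rate constant
    print(f"HRT {hrt:>4.1f} d  removal {removal:.0%}  implied k = {k:.2f} /day")
```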

A wetland should be wide enough to allow sufficient trapping of sediment and other particulate materials and long enough to permit sufficient residence time for nutrient removal. Most researchers agree that the surface area of a wetland should be as large as possible in order to maximize its HRT and storage capacity. The even dispersion of water across the wetland, termed hydraulic efficiency, is largely defined by the wetland's dimensions and the relative locations of its input and output channels. High hydraulic efficiency maximizes the removal of contaminants. Designs with good hydraulic efficiency have a shape that facilitates complete mixing throughout the wetland without the persistence of stagnant zones, or they may incorporate barriers that achieve the same effects. All designs with good hydraulic efficiency have their input and output channels positioned on opposite ends of the wetland.

The sediment trap is an important design feature in settings where the input water has a high level of suspended solids. Sediment traps are essentially small swales or ponds positioned between the source of the agricultural water and the main wetland to promote the settling of coarse particles before the water is distributed across the wetland. Sediment traps should be located in easily accessible areas where sediment can conveniently be removed on a regular basis. Incorporating sediment traps in your design will decrease the amount of sedimentation within the wetland, lengthening the time you can go between dredgings. They also prevent the burial of germinating seedlings in the wetland and help limit channelization and short-circuiting of flow paths.

The amount of microbial pollutants in wetland soils is significantly higher than in the standing water. Bacteria survive longer in soil than in water. Fecal coliforms can persist in sediments for as long as 6 weeks, so the degree to which sediments are deposited in a wetland has a significant effect on the degree to which bacteria are exported in effluent waters, post-wetland. The survival time for pathogens varies widely in agricultural settings, probably as a result of local differences in environmental conditions. If conditions are conducive to pathogen survival, any of a number of wetland conditions that cause the re-suspension and entrainment of sediment (e.g., high water flow pulses into wetlands, wave action, or channelization) may lead to the release of waters that contain microbial pollutants. If you manage wetlands to allow for alternating episodes of flooding and drying, you may be able to decrease the survival of microbes in the wetland soil. In addition to the desiccation associated with episodes of dry wetland soil, fluctuations in wetted surface area and depth can foster a diversity of biological and biogeochemical conditions that optimize wetland function and minimize the duration of pathogen survival.

There are two general options for reducing non-point source pollution from agriculture: on-site farm management practices that control the pollution source or limit the application of excess materials and their subsequent loss from farmlands, and off-site practices that intercept non-point source pollutants before they reach downstream waters. Wetlands can be used within a farmscape as either an on-site farm practice or an off-site tool, where downstream flood plains are converted to wetlands to mitigate pollution at the watershed scale.
In settings where the attraction of wildlife is of concern, you may want to consider placing the wetland off-site, at a location where it will intercept the runoff before it enters a natural water body. This may also require re-routing the agricultural runoff into the off-site wetland.

Johne's disease is a chronic, infectious gastrointestinal disease of domestic and wild ruminants caused by Mycobacterium avium subsp. paratuberculosis (MAP). The disease occurs worldwide and was first observed in dairy cows in 1895. Environmental viability studies found that MAP can survive for 8 months in feces at ambient conditions and for 19 months in water at 38 °C. MAP remains viable in a desiccated state for up to 47 months.
