
Uncertainty in when, how and where water is being used, however, threatens the security of water rights — particularly when water is substantially over-allocated relative to natural supplies. During the 2012–2016 drought, for example, the SWRCB issued notices of curtailment to water rights holders to protect endangered fish species within priority watersheds. Less controversial targeted cutbacks to individuals might have been sufficient if the agency had more accurate information on how water rights were being exercised. As the 2012–2016 drought progressed, flaws in the state's accounting system for tracking water rights became more apparent. This study, together with other policy reports, articulated the need for water accounting reforms, raised public awareness and helped to mobilize support for new legislation in 2015, which significantly increased water-use monitoring and reporting requirements for water rights holders. The new regulations also extended reporting requirements to senior water rights holders, which are among the largest individual water users in the state.

The legalization of recreational cannabis in 2016 with passage of State Proposition 64 prompted state agencies to develop new policies to regulate the production, distribution and use of the plant. For example, California Senate Bill 837 directed the SWRCB to establish a new regulatory program to address potential water quality and quantity issues related to cannabis cultivation. The subsequently enacted California Water Code Section 13149 in 2016 obliged the SWRCB, in consultation with the California Department of Fish and Wildlife, to develop both interim and long-term principles and guidelines for water diversion and water quality in cannabis cultivation.

As a result, in 2017, the SWRCB adopted the Cannabis Cultivation Policy: Principles and Guidelines for Cannabis Cultivation. The Cannabis Cultivation Policy's goal is to provide a framework to regulate the diversion of water and waste discharge associated with cannabis cultivation such that it does not negatively affect freshwater habitats and water quality. A key element of the Cannabis Cultivation Policy is the establishment of environmental flow thresholds, below which diversions for cannabis irrigation are prohibited. During the dry season, no surface water diversions are permitted for cannabis cultivation. Diversions from surface water sources to off-stream storage are allowed between Nov. 1 and March 31. However, water may only be extracted from streams when flow exceeds the amount needed to maintain adult salmon passage and spawning and winter rearing conditions for juvenile salmon. Environmental flow requirements for the winter diversion season were determined by an approach known as the Tessmann Method, which uses proportions of historical mean annual and mean monthly natural flows to set protective thresholds. Because flows are not measured continuously in most streams in California, including at most points of diversion, the Cannabis Cultivation Policy instead relies on the predictions of natural flows from the models described above. Predicted natural mean monthly and annual flows are used by the SWRCB at compliance gauge points to calculate the Tessmann thresholds. Cannabis cultivators seeking a Cannabis Small Irrigation Use Registration permit from the SWRCB are assigned a compliance gauge near their operation and can legally divert water only when flows recorded at the gauge meet or exceed the Tessmann thresholds during the diversion season. The motivation for developing natural stream flow models and data rests on the premise that rivers and streams can be managed to preserve features of natural stream flow patterns critical to biological systems while still providing benefits to human society. For any stream of interest, balancing the needs of humans and nature requires an understanding of its natural flows, whether observed conditions are modified relative to natural patterns, and what degree of modification harms its health. As noted in the examples above, this work has both direct and indirect implications for policy and decision-making.
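As an illustration of how such thresholds are derived, the sketch below computes Tessmann-style monthly thresholds from modeled natural flows. It follows the commonly cited Tessmann (1980) formulation; the function name, example values and units are illustrative assumptions, not the SWRCB's actual compliance calculation.

```python
def tessmann_threshold(mean_monthly_flow, mean_annual_flow):
    """Protective monthly flow threshold per the commonly cited
    Tessmann (1980) rules; illustrative only."""
    mmf, maf = mean_monthly_flow, mean_annual_flow
    if mmf < 0.4 * maf:
        # Naturally dry month: the full mean monthly flow is protected.
        return mmf
    if 0.4 * mmf < 0.4 * maf:
        # Moderate month: protect 40% of the mean annual flow.
        return 0.4 * maf
    # Wet month: protect 40% of the mean monthly flow.
    return 0.4 * mmf


# Hypothetical modeled natural flows (cfs) at a compliance gauge for the
# Nov. 1 - March 31 diversion season.
mean_annual = 120.0
mean_monthly = {"Nov": 95.0, "Dec": 180.0, "Jan": 240.0, "Feb": 210.0, "Mar": 150.0}
thresholds = {m: tessmann_threshold(q, mean_annual) for m, q in mean_monthly.items()}
# Diversion to off-stream storage would only be allowable in months when gauged
# flow meets or exceeds the corresponding threshold.
print(thresholds)
```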

A database of natural stream flows developed by machine-learning models was used to help define cannabis policy to set minimum flow targets — a direct application of the technique. However, this work also influenced policy and decision-making in more subtle ways, including building awareness of shortcomings in the state's water rights accounting system. This form of engagement with government agencies and the broader public helps define the agenda early in the policy-making process, although quantifying the degree to which our research contributed to policy outcomes such as SB 88 is difficult. The future impact of our work on environmental flow management remains unclear, but early engagement with state and federal agencies through the Environmental Flows Workgroup suggests that our flow modeling tools and data will have an important role in future policy development. Recognizing that there are likely other applications for our modeling tools, we have been working to make the data available to the public. Model predictions have now been generated for every stream in California, including values of mean monthly, maximum and minimum monthly flows and confidence intervals for California's 139,912 stream segments in the National Hydrography Dataset. A more dynamic spatial mapping tool has been developed to explore the data in individual rivers, watersheds or regions. An online interactive visualization tool is also available that allows a user to select one or several stream gauges and generate the corresponding hydrograph of observed and expected monthly flows. An immediate next step for this project is to expand the natural flows dataset to include predictions of additional stream flow attributes that are relevant to environmental water management. This will support the Environmental Flows Workgroup's goal of defining ecological flow criteria in all rivers and streams of the state and can help inform a variety of programs including, for example, water transactions and stream flow enhancement programs.
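The kind of comparison the visualization tool supports can be sketched as follows, assuming the gauge observations and the model's natural-flow predictions are available as monthly time series; the file names and column names are hypothetical placeholders, not the published dataset's schema.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical inputs: observed monthly flows at a gauge and the model's
# predicted natural flows (with a prediction interval) for the same segment.
obs = pd.read_csv("gauge_observed_monthly.csv", parse_dates=["month"])
nat = pd.read_csv("predicted_natural_monthly.csv", parse_dates=["month"])
df = obs.merge(nat, on="month")  # columns: month, flow_obs, flow_nat, flow_nat_lo, flow_nat_hi

fig, ax = plt.subplots(figsize=(9, 4))
ax.plot(df["month"], df["flow_obs"], label="Observed flow")
ax.plot(df["month"], df["flow_nat"], label="Predicted natural flow")
ax.fill_between(df["month"], df["flow_nat_lo"], df["flow_nat_hi"],
                alpha=0.3, label="Prediction interval")
ax.set_xlabel("Month")
ax.set_ylabel("Mean monthly flow (cfs)")
ax.legend()
plt.tight_layout()
plt.show()
```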

Other direct applications of the natural flows data may be in hydropower project relicensing, which requires consideration of environmental flow needs. In addition, under the Sustainable Groundwater Management Act, groundwater sustainability agencies are required to avoid undesirable results, including depletions of interconnected surface water that have significant and unreasonable adverse impacts on beneficial uses of the surface water. Because environmental flow criteria have not been established for most streams in California, GSAs are rightfully confused as to the standards they are expected to meet. Statewide environmental flow criteria may help to define management targets required for SGMA implementation. Looking to the future, society will continue to face challenges in balancing environmental protections with the demands of a growing population. Tools that make use of long-term monitoring data and modern computing power, such as the models described here, can help inform policy and management intended to achieve this balance.

Future high-performance computing systems are driven toward heterogeneity of compute and memory resources in response to the expected halt of traditional technology scaling, combined with continuous demands for increased performance and the wide landscape of HPC applications. In the long term, many HPC systems are expected to feature a variety of graphics processing units (GPUs), partially programmable accelerators, fixed-function accelerators, reconfigurable accelerators such as field-programmable gate arrays, and new classes of memory that blur the line between memory and storage technology. If we preserve our current method of allocating resources to applications in units of statically configured nodes where every node is identical, then future systems risk substantially underutilizing expensive resources. This is because not every application will be able to profitably use specialized hardware resources, as the value of a given accelerator can be very application-dependent. The potential for wasted resources when a given application does not use them grows with the number of new heterogeneous technologies and accelerators that might be co-integrated into future nodes. This observation, combined with the desire to increase utilization even of “traditional” resources, has led to research on systems that can pool and compose resources of different types in a fine-grain manner to match application requirements. This capability is referred to as resource disaggregation. In datacenters, resource disaggregation has increased the utilization of GPUs and memory. Such approaches usually employ a full-system solution where resources can be pooled from across the system. While this approach maximizes the flexibility and range of resource disaggregation, it also increases the overhead to implement resource disaggregation, for instance by requiring long-range communication that stresses bandwidth and increases latency. As a result, some work focuses on intra-rack disaggregation. While resource disaggregation is regarded as a promising approach in HPC in addition to datacenters, there is currently no solid understanding of what range or flexibility of disaggregation HPC applications require and what the expected improvement of resource utilization through this approach is.
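To make the cost of static node allocation concrete, the toy calculation below compares the memory stranded inside identically configured nodes with an idealized pool that hands each job exactly what it uses; the node size and job mix are invented for illustration and do not describe any particular system.

```python
# Toy comparison of static node allocation vs. idealized fine-grain pooling.
# Node size and job requests are invented for illustration only.
NODE_MEM_GIB = 128

# (nodes allocated, memory actually used per node in GiB) for a hypothetical job mix.
jobs = [(64, 40), (16, 100), (256, 25), (8, 120)]

provisioned = sum(nodes * NODE_MEM_GIB for nodes, _ in jobs)
used = sum(nodes * mem for nodes, mem in jobs)
stranded = provisioned - used  # memory locked inside allocated nodes but never used

print(f"Static nodes: {used}/{provisioned} GiB in use "
      f"({100 * used / provisioned:.1f}% utilization, {stranded} GiB stranded)")
# An ideal disaggregated pool would allocate only the memory actually used,
# leaving the remainder free for other jobs or accelerator-heavy workloads.
```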

Without any data-driven analysis of the workload, we risk overdesigning resource disaggregation, which would make it not only unnecessarily expensive but could also overly penalize application performance due to high latencies and limited communication bandwidth. To that end, we study and quantify what level of resource disaggregation is sufficient for typical HPC workloads and what the efficiency-increase opportunity is if HPC embraces this approach, to guide future research into specific technological solutions. We perform a detailed, data-driven analysis on an exemplar open-science, high-ranked, production HPC system with a diverse scientific workload and complement our analysis with profiling of key machine learning applications. For our system analysis, we sample key system-wide and per-job metrics that indicate how efficiently resources are used, once per second for three weeks on NERSC's Cori. Cori is a top-20, open-science HPC system that supports thousands of projects and multiple thousands of users, and executes a diverse set of HPC workloads from fusion energy, material science, climate research, physics, computer science, and many other science domains. Because Cori has no GPUs, we also study machine learning applications executing on NVIDIA GPUs. For these applications, we examine a range of scales, training, and inference while analyzing utilization of key resources.

Based on our analysis, we find that for a system configuration similar to Cori, intra-rack disaggregation suffices the vast majority of the time, even after reducing overall resources. In particular, in a rack configuration similar to Cori but with ideal intra-rack resource disaggregation, where network interface controllers (NICs) and memory resources can be allocated to jobs in a fine-grain manner but only within racks, we show that a central processing unit has a 99.5% probability of finding all the resources it requires inside its rack. Focusing on jobs, with 20% fewer memory modules and 20% less NIC bandwidth per rack, a job has an 11% probability of having to span more racks than its minimum possible in Cori. In addition, in our sampling time range and at worst across Haswell and KNL nodes, we could reduce memory bandwidth by 69.01%, memory capacity by 5.36%, and NIC bandwidth by 43.35% in Cori while still satisfying the worst-case average rack utilization. This quantifies how far intra-rack disaggregation can reduce resources at best; a sketch of this style of analysis appears below.

Future HPC systems are expected to have a variety of compute and memory resources as a means to reduce cost and preserve performance scaling. The onset of this trend is evident in recent HPC systems that feature partially programmable compute accelerators, with GPUs quickly gaining traction. For instance, approximately a third of the computational throughput of today's top 500 HPC systems is attributed to accelerators. At the same time, recent literature proposes fixed-function accelerators, such as for artificial intelligence.
Consequently, past work has examined how job scheduling should consider heterogeneous resource requests, how the operating system and runtime should adapt, how to write applications for heterogeneous systems, how to partition data-parallel applications onto heterogeneous compute resources, how to consider the different fault tolerances of heterogeneous resources, how to fairly compare the performance of different heterogeneous systems, and what the impact of heterogeneous resources is on application performance.

Resource disaggregation refers to the ability of a system to pool and compose resources in a fine-grain manner and thus be capable of allocating exactly the resources an application requests. This is in contrast to many systems today where nodes are allocated to applications as a unit with identical, fixed-size resources; any resources inside nodes that the application does not use have no choice but to idle. Following the trend toward hardware specialization and the desire to better utilize resources as systems scale up, resource disaggregation across the system or a group of racks has been actively researched and deployed in commercial hyperscale datacenters at Google, Facebook, and others. In addition, many studies focus on disaggregation of GPUs and memory capacity. Resource disaggregation is slowly coming into focus for HPC in addition to the existing hyperscale datacenter deployments.
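A sketch of the intra-rack analysis summarized earlier is shown below: given per-second samples of aggregate per-rack resource demand, it estimates how often a rack with reduced resources would still satisfy demand, and how far each resource could shrink while covering the worst-case average rack utilization. The input schema, per-rack provisioning values and 20% reduction factor are stand-in assumptions, not Cori's actual configuration or the authors' pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical per-rack, per-second samples of aggregate resource demand.
# Expected columns: rack_id, timestamp, mem_capacity_gib, mem_bw_gbs, nic_bw_gbs.
samples = pd.read_csv("rack_samples.csv")

# Stand-in per-rack provisioning for a Cori-like baseline configuration.
provisioned = {"mem_capacity_gib": 12288.0, "mem_bw_gbs": 8200.0, "nic_bw_gbs": 1600.0}
reduction = 0.20  # evaluate a rack with 20% fewer resources of each type
reduced = {k: v * (1.0 - reduction) for k, v in provisioned.items()}

# Fraction of samples in which a rack's aggregate demand still fits within the
# reduced provision, i.e., intra-rack disaggregation would have sufficed.
fits = np.ones(len(samples), dtype=bool)
for resource, limit in reduced.items():
    fits &= samples[resource].to_numpy() <= limit
print("P(demand fits within reduced rack) =", fits.mean())

# Best-case shrink per resource: the largest reduction that still covers the
# worst-case average utilization observed for any rack.
for resource, limit in provisioned.items():
    worst_avg = samples.groupby("rack_id")[resource].mean().max()
    print(f"{resource}: could shrink by {100 * (1 - worst_avg / limit):.1f}%")
```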
