Perturbation theory instead of large scale shell model calculations
International Nuclear Information System (INIS)
Feldmeier, H.; Mankos, P.
1977-01-01
Results of large-scale shell model calculations for (sd)-shell nuclei are compared with perturbation theory, which provides an excellent approximation when the SU(3) basis is used as a starting point. The results indicate that a perturbation-theory treatment in an SU(3) basis including 2ħω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.)
Structure of exotic nuclei by large-scale shell model calculations
International Nuclear Information System (INIS)
Utsuno, Yutaka; Otsuka, Takaharu; Mizusaki, Takahiro; Honma, Michio
2006-01-01
An extensive large-scale shell-model study is conducted for unstable nuclei around N = 20 and N = 28, aiming to investigate how the shell structure evolves from stable to unstable nuclei and affects the nuclear structure. The structure around N = 20, including the disappearance of the magic number, is reproduced systematically, exemplified by the systematics of the electromagnetic moments in the Na isotope chain. As a key ingredient dominating the structure/shell evolution in exotic nuclei, we pay attention to the tensor force. Including a proper strength of the tensor force in the effective interaction, we successfully reproduce the proton shell evolution ranging from N = 20 to 28 without any arbitrary modifications of the interaction, and predict the ground state of 42Si to contain a large deformed component.
Shell model in large spaces and statistical spectroscopy
International Nuclear Information System (INIS)
Kota, V.K.B.
1996-01-01
For many nuclear structure problems of current interest it is essential to deal with the shell model in large spaces. Three different approaches are now in use for this. Two of them are: (i) the conventional shell-model diagonalization approach, taking into account new advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large-space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: the statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos; they are described in some detail and substantiated by large-scale shell model calculations. (author)
Large-scale shell model calculations for the N=126 isotones Po-Pu
International Nuclear Information System (INIS)
Caurier, E.; Rejmund, M.; Grawe, H.
2003-04-01
Large-scale shell model calculations were performed in the full Z=82-126 proton model space π(0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, 2p1/2) employing the code NATHAN. The modified Kuo-Herling interaction was used; no truncation was applied up to protactinium (Z=91), and seniority truncation beyond. The results are compared to experimental data including binding energies, level schemes and electromagnetic transition rates. An overall excellent agreement is obtained for states that can be described in this model space. Limitations of the approach with respect to excitations across the Z=82 and N=126 shells and deficiencies of the interaction are discussed. (orig.)
Approximate symmetries in atomic nuclei from a large-scale shell-model perspective
Launey, K. D.; Draayer, J. P.; Dytrych, T.; Sun, G.-H.; Dong, S.-H.
2015-05-01
In this paper, we review recent developments that aim at a further understanding of the structure of atomic nuclei, capitalizing on exact symmetries as well as approximate symmetries found to dominate low-lying nuclear states. The findings confirm the essential role played by the Sp(3, ℝ) symplectic symmetry in informing the interaction and the relevant model spaces in nuclear modeling. The significance of the Sp(3, ℝ) symmetry for a description of a quantum system of strongly interacting particles emerges naturally from the physical relevance of its generators, which directly relate to particle momentum and position coordinates and represent important observables, such as the many-particle kinetic energy, the monopole operator, the quadrupole moment and the angular momentum. We show that it is imperative that shell-model spaces be expanded well beyond the current limits to accommodate particle excitations that appear critical to enhanced collectivity in heavier systems and to highly deformed spatial structures, exemplified by the second 0+ state in 12C (the challenging Hoyle state) and 8Be. While such states are presently inaccessible to large-scale no-core shell models, symmetry-based considerations are found to be essential.
Symmetry-guided large-scale shell-model theory
Czech Academy of Sciences Publication Activity Database
Launey, K. D.; Dytrych, Tomáš; Draayer, J. P.
2016-01-01
Roč. 89, JUL (2016), s. 101-136 ISSN 0146-6410 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: Ab initio shell-model theory * Symplectic symmetry * Collectivity * Clusters * Hoyle state * Orderly patterns in nuclei from first principles Subject RIV: BE - Theoretical Physics Impact factor: 11.229, year: 2016
Nuclear spectroscopy in large shell model spaces: recent advances
International Nuclear Information System (INIS)
Kota, V.K.B.
1995-01-01
Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs
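The moment methods behind spectral averaging can be sketched in a few lines: the centroid and width of a Hamiltonian's eigenvalue distribution follow from traces alone, with no diagonalization. The toy Hamiltonian below is a random symmetric matrix, not a nuclear interaction; it only illustrates the trace identities.

```python
import numpy as np

# Toy illustration of the moment methods behind spectral averaging
# (a sketch, not the method of the paper): the centroid and width of a
# Hamiltonian's eigenvalue distribution follow from tr(H) and tr(H^2),
# with no diagonalization required.

rng = np.random.default_rng(0)
d = 500                                  # dimension of a model space
A = rng.normal(size=(d, d))
H = (A + A.T) / 2                        # random real symmetric "Hamiltonian"

centroid = np.trace(H) / d                               # first moment <H>
width = np.sqrt(np.trace(H @ H) / d - centroid**2)       # spectral width

# Cross-check against the explicit spectrum
eigs = np.linalg.eigvalsh(H)
print(np.isclose(centroid, eigs.mean()), np.isclose(width, eigs.std()))
# → True True
```

In statistical spectroscopy these low moments, computed for realistic interactions in huge spaces where diagonalization is impossible, parametrize the (near-Gaussian) level density.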
Experimental and numerical modelling of ductile crack propagation in large-scale shell structures
DEFF Research Database (Denmark)
Simonsen, Bo Cerup; Törnquist, R.
2004-01-01
This paper presents a combined experimental-numerical procedure for development and calibration of macroscopic crack propagation criteria in large-scale shell structures. A novel experimental set-up is described in which a mode-I crack can be driven 400 mm through a 20(+) mm thick plate under fully plastic and controlled conditions. The test specimen can be deformed either in combined in-plane bending and extension or in pure extension. Experimental results are described for 5 and 10 mm thick aluminium and steel plates. By performing an inverse finite-element analysis of the experimental results, crack propagation criteria are derived for steel and aluminium plates, mainly as curves showing the critical element deformation versus the shell element size. These derived crack propagation criteria are then validated against a separate set of experiments considering centre crack specimens (CCS) which have a different crack-tip constraint.
Recent shell-model results for exotic nuclei
Directory of Open Access Journals (Sweden)
Utsuno Yusuke
2014-03-01
We report on our recent advancement in the shell model and its applications to exotic nuclei, focusing on the shell evolution and large-scale calculations with the Monte Carlo shell model (MCSM). First, we test the validity of the monopole-based universal interaction (VMU) as a shell-model interaction by performing large-scale shell-model calculations in two different mass regions using effective interactions which partly comprise VMU. Those calculations are successful and provide a deeper insight into the shell evolution beyond the single-particle model, in particular showing that the evolution of the spin-orbit splitting due to the tensor force plays a decisive role in the structure of the neutron-rich N ∼ 28 region and antimony isotopes. Next, we give a brief overview of recent developments in the MCSM, and show that it is applicable to exotic nuclei that involve many valence orbits. As an example of its applications to exotic nuclei, shape coexistence in 32Mg is examined.
Shell model and spectroscopic factors
International Nuclear Information System (INIS)
Poves, P.
2007-01-01
In these lectures, I introduce the notion of spectroscopic factor in the shell model context. A brief review is given of the present status of the large scale applications of the Interacting Shell Model. The spectroscopic factors and the spectroscopic strength are discussed for nuclei in the vicinity of magic closures and for deformed nuclei. (author)
Decaying and kicked turbulence in a shell model
DEFF Research Database (Denmark)
Hooghoudt, Jan Otto; Lohse, Detlef; Toschi, Federico
2001-01-01
Decaying and periodically kicked turbulence are analyzed within the Gledzer–Ohkitani–Yamada shell model, to allow for sufficiently large scaling regimes. Energy is transferred towards the small scales in intermittent bursts. Nevertheless, mean field arguments are sufficient to account for the ens...
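For readers unfamiliar with shell models of turbulence, a minimal decaying Gledzer–Ohkitani–Yamada (GOY) integration can be sketched as follows; the shell count, viscosity, initial spectrum, and time step are illustrative assumptions, not the parameters of the paper.

```python
import numpy as np

# Minimal sketch of the decaying (unforced) GOY shell model. Each complex
# shell velocity u_n lives on a logarithmically spaced wavenumber
# k_n = k0 * lam**n and couples only to nearest and next-nearest shells.

N, lam, k0 = 12, 2.0, 1.0
nu, delta = 1e-3, 0.5          # viscosity; delta = 1/2 conserves energy
k = k0 * lam ** np.arange(N)

def rhs(u):
    # Pad with two zero shells at each end so boundary terms vanish.
    up = np.concatenate(([0, 0], u, [0, 0]))
    n = np.arange(2, N + 2)
    nonlin = 1j * np.conj(
        k * up[n + 1] * up[n + 2]
        - (delta * k / lam) * up[n - 1] * up[n + 1]
        - ((1 - delta) * k / lam**2) * up[n - 1] * up[n - 2]
    )
    return nonlin - nu * k**2 * u

rng = np.random.default_rng(1)
u = k ** (-1.0 / 3.0) * np.exp(2j * np.pi * rng.random(N))  # rough K41 start

dt, steps = 1e-4, 2000
for _ in range(steps):                    # classical RK4 time stepping
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

energy = np.sum(np.abs(u) ** 2)           # decays: viscosity drains the shells
```

The nonlinear term conserves the total shell energy, so in the unforced run the energy can only decrease through the viscous term, mimicking decaying turbulence.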
Large-scale micromagnetic simulation of Nd-Fe-B sintered magnets with Dy-rich shell structures
Directory of Open Access Journals (Sweden)
T. Oikawa
2016-05-01
Large-scale micromagnetic simulations have been performed using the energy minimization method on a model with structural features similar to those of Dy grain-boundary-diffusion (GBD) processed sintered magnets. Coercivity increases as a linear function of the anisotropy field of the Dy-rich shell, independent of the Dy composition in the core as long as the shell thickness is greater than about 15 nm. This result shows that the Dy contained in the initial sintered magnets prior to the GBD process is not essential for enhancing coercivity. Magnetization reversal patterns indicate that coercivity is strongly influenced by domain wall pinning at the grain boundary. This observation is found to be consistent with the one-dimensional pinning theory.
Matsui, H.; Buffett, B. A.
2017-12-01
The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to capture the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale similarity model. Four terms are introduced, for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used in the convection-driven dynamo in a rotating plane layer and spherical shell using the finite element method. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, a numerical dynamo model based on a spherical harmonics expansion. To obtain the SGS terms, the spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference. We also perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification by comparing these simulations, and the role played by the small-scale fields in the large-scale dynamics through the SGS terms in the LES.
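The scale-similarity idea can be illustrated on a 1-D periodic domain, far simpler than the spherical-shell implementation in Calypso: apply a test filter to the product of fields and to the product of filtered fields, and take the difference as the SGS stress. The field and filter width below are assumptions for illustration.

```python
import numpy as np

# Sketch of a scale-similarity subgrid-scale (SGS) term on a 1-D periodic
# domain (an illustration of the modeling idea, not the spherical-shell
# implementation in Calypso). A Gaussian test filter is applied in spectral
# space; the SGS flux is tau = filter(u*u) - filter(u)*filter(u).

Nx, L = 256, 2 * np.pi
x = np.linspace(0, L, Nx, endpoint=False)
kx = np.fft.fftfreq(Nx, d=L / Nx) * 2 * np.pi   # integer wavenumbers here

def gaussian_filter(f, width):
    # Common LES Gaussian filter transfer function exp(-k^2 * width^2 / 24).
    return np.fft.ifft(np.fft.fft(f) * np.exp(-(kx * width) ** 2 / 24)).real

u = np.sin(3 * x) + 0.3 * np.sin(25 * x + 0.7)  # large- plus small-scale motion
width = L / 16                                   # test-filter width (assumed)

tau = gaussian_filter(u * u, width) - gaussian_filter(u, width) ** 2
```

A purely large-scale (here, constant) field produces a vanishing `tau`; the small-scale component is what feeds the SGS stress, which is the premise the similarity model builds on.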
New-generation Monte Carlo shell model for the K computer era
International Nuclear Information System (INIS)
Shimizu, Noritaka; Abe, Takashi; Yoshida, Tooru; Otsuka, Takaharu; Tsunoda, Yusuke; Utsuno, Yutaka; Mizusaki, Takahiro; Honma, Michio
2012-01-01
We present a newly enhanced version of the Monte Carlo shell-model (MCSM) method by incorporating the conjugate gradient method and energy-variance extrapolation. This new method enables us to perform large-scale shell-model calculations that the direct diagonalization method cannot reach. This new-generation framework of the MCSM provides us with a powerful tool to perform very advanced large-scale shell-model calculations on current massively parallel computers such as the K computer. We discuss the validity of this method in ab initio calculations of light nuclei, and propose a new method to describe the intrinsic wave function in terms of the shell-model picture. We also apply this new MCSM to the study of neutron-rich Cr and Ni isotopes using conventional shell-model calculations with an inert 40Ca core and discuss how the magicity of N = 28, 40, 50 remains or is broken. (author)
Large scale shell model calculations: the physics in and the physics out
International Nuclear Information System (INIS)
Zuker, A.P.
1997-01-01
After giving a few examples of recent results of the (SM)² collaboration, the monopole-modified realistic interactions to be used in shell model calculations are described and analyzed. Rotational motion is discussed in some detail, and some introductory remarks on level densities are made. (orig.)
Energy transfers in large-scale and small-scale dynamos
Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra
2015-11-01
We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024³ grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.
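The shell decomposition underlying such flux diagnostics amounts to binning Fourier energy by wavenumber magnitude. A minimal 2-D scalar sketch (not the authors' MHD code; the field and resolution are assumptions):

```python
import numpy as np

# Sketch of binning spectral energy into wavenumber shells, the bookkeeping
# that underlies flux and shell-to-shell transfer diagnostics.

N = 64
rng = np.random.default_rng(2)
u = rng.normal(size=(N, N))              # stand-in for one velocity component

uh = np.fft.fft2(u) / N**2               # normalized Fourier coefficients
kx = np.fft.fftfreq(N) * N               # integer wavenumbers -N/2..N/2-1
KX, KY = np.meshgrid(kx, kx, indexing="ij")
kmag = np.sqrt(KX**2 + KY**2)

# Unit-width shells [s, s+1); extend far enough to cover the grid corners.
shells = np.arange(0, int(kmag.max()) + 2)
E_shell = np.array([
    0.5 * np.sum(np.abs(uh[(kmag >= s) & (kmag < s + 1)]) ** 2)
    for s in shells
])
```

Every mode falls in exactly one shell, so the shell energies sum to the total energy (Parseval); the transfer functions in the paper then track how this shell budget is fed by triadic interactions.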
Large scale nuclear structure studies
International Nuclear Information System (INIS)
Faessler, A.
1985-01-01
Results of large-scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model, for number- and spin-projected two-quasiparticle excitations with realistic forces, yields results in sd-shell nuclei as good as the 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus 46Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean-field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to 130Ce and 128Ba using the same effective nucleon-nucleon interaction. (Auth.)
Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics
Kumar, Rohit
2017-08-11
It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of large-scale dynamos. In this paper, we demonstrate that helicity is not essential for the amplification of the large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale one-tenth of the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.
Managing large-scale models: DBS
International Nuclear Information System (INIS)
1981-05-01
A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Experience in operating and developing such a system indicates that the only reasonable way to gain strong management control is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application to large-scale models and data bases.
Large-Scale Trade in Legally Protected Marine Mollusc Shells from Java and Bali, Indonesia.
Nijman, Vincent; Spaan, Denise; Nekaris, K Anne-Isola
2015-01-01
Tropical marine molluscs are traded globally. Larger species with slow life histories are under threat from over-exploitation. We report on the trade in protected marine mollusc shells in and from Java and Bali, Indonesia. Since 1987 twelve species of marine molluscs have been protected under Indonesian law to shield them from overexploitation. Despite this protection they are traded openly in large volumes. We collected data on species composition, origins, volumes and prices at two large open markets (2013), collected data from wholesale traders (2013), and compiled seizure data by the Indonesian authorities (2008-2013). All twelve protected species were observed in trade. Smaller species were traded in Java and Bali, but the trade involves networks stretching hundreds of kilometres throughout Indonesia. Wholesale traders offer protected marine mollusc shells for the export market by the container or by the metric ton. Data from 20 confiscated shipments show an on-going trade in these molluscs. Over 42,000 shells were seized over a 5-year period, with a retail value of USD 700,000 within Indonesia; horned helmet (Cassis cornuta) (>32,000 shells valued at USD 500,000), chambered nautilus (Nautilus pompilius) (>3,000 shells, USD 60,000) and giant clams (Tridacna spp.) (>2,000 shells, USD 45,000) were traded in the largest volumes. Two-thirds of this trade was destined for international markets, including the USA and the Asia-Pacific region. We demonstrated that the trade in protected marine mollusc shells in Indonesia is neither controlled nor monitored, that it involves large volumes, and that networks of shell collectors, traders, middlemen and exporters span the globe. This impedes protection of these species on the ground and calls into question the effectiveness of protected species management in Indonesia; solutions are unlikely to be found only in Indonesia and must involve the cooperation of importing countries.
Finite element model for nonlinear shells of revolution
International Nuclear Information System (INIS)
Cook, W.A.
1979-01-01
Nuclear material shipping containers have shells of revolution as basic structural components. Analytically modeling the response of these containers to severe accident impact conditions requires a nonlinear shell-of-revolution model that accounts for both geometric and material nonlinearities. Existing models are limited to large displacements, small rotations, and nonlinear materials. The paper presents a finite element model for a nonlinear shell of revolution that will account for large displacements, large strains, large rotations, and nonlinear materials
Chaotic behaviour of the nuclear shell-model hamiltonian
International Nuclear Information System (INIS)
Dias, H.; Hussein, M.S.; Oliveira, N.A. de; Wildenthal, B.H.
1987-11-01
Large-scale nuclear shell-model calculations for several nuclear systems are discussed. In particular, the statistical behaviour of the energy eigenvalues and eigenstates is discussed. The chaotic behaviour of the nuclear shell-model Hamiltonian is then shown to be quite useful in calculating the spreading width of the highly collective multipole giant resonances. (author)
Large Scale Computations in Air Pollution Modelling
DEFF Research Database (Denmark)
Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998
Large-scale multimedia modeling applications
International Nuclear Information System (INIS)
Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.
1995-08-01
Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications
Large-area super-resolution optical imaging by using core-shell microfibers
Liu, Cheng-Yang; Lo, Wei-Chieh
2017-09-01
We first numerically and experimentally report large-area super-resolution optical imaging achieved by using core-shell microfibers. The particular spatial electromagnetic waves for different core-shell microfibers are studied by using finite-difference time-domain and ray tracing calculations. The focusing properties of photonic nanojets are evaluated in terms of intensity profile and full width at half-maximum along propagation and transversal directions. In experiment, the general optical fiber is chemically etched down to 6 μm diameter and coated with different metallic thin films by using glancing angle deposition. The direct imaging of photonic nanojets for different core-shell microfibers is performed with a scanning optical microscope system. We show that the intensity distribution of a photonic nanojet is highly related to the metallic shell due to the surface plasmon polaritons. Furthermore, large-area super-resolution optical imaging is performed by using different core-shell microfibers placed over the nano-scale grating with 150 nm line width. The core-shell microfiber-assisted imaging is achieved with super-resolution and hundreds of times the field-of-view in contrast to microspheres. The possible applications of these core-shell optical microfibers include real-time large-area micro-fluidics and nano-structure inspections.
Experimental Damage Identification of a Model Reticulated Shell
Directory of Open Access Journals (Sweden)
Jing Xu
2017-04-01
The damage identification of a reticulated shell is a challenging task, facing various difficulties such as the large number of degrees of freedom (DOFs), the phenomenon of modal localization and transition, and low modeling accuracy. Based on structural vibration responses, the damage identification of a reticulated shell was studied. First, an auto-regressive (AR) time series model was established based on the acceleration responses of the reticulated shell. According to the changes in the coefficients of the AR model between the damaged conditions and the undamaged condition, damage to the reticulated shell can be detected. In addition, damage-sensitive factors were determined based on the coefficients of the AR model. With the damage-sensitive factors as the inputs and the damage positions as the outputs, back-propagation neural networks (BPNNs) were then established and trained using the Levenberg–Marquardt (L–M) algorithm. The locations of the damage can be predicted by the back-propagation neural networks. Finally, according to the experimental scheme of single-point excitation and multi-point responses, impact experiments on a K6 shell model at a scale of 1/10 were conducted. The experimental results verified the efficiency of the proposed damage identification method based on the AR time series model and back-propagation neural networks. The proposed damage identification method can help ensure the safety of practical engineering structures to some extent.
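The AR-coefficient damage feature can be sketched with synthetic signals (hypothetical AR(2) responses, not the paper's measured data): fit AR coefficients by least squares and use the distance between coefficient vectors as the damage-sensitive indicator.

```python
import numpy as np

# Sketch of an AR-coefficient damage indicator on synthetic signals
# (assumed AR(2) "acceleration responses", not the paper's test data).

rng = np.random.default_rng(3)

def simulate_ar2(a1, a2, n=5000):
    """Synthetic response from an AR(2) recursion driven by white noise."""
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]
    return x

def fit_ar(x, p=2):
    """Least-squares fit of AR(p) coefficients from lagged samples."""
    X = np.column_stack([x[p - 1 - i: len(x) - 1 - i] for i in range(p)])
    y = x[p:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

intact = fit_ar(simulate_ar2(1.5, -0.9))
intact_repeat = fit_ar(simulate_ar2(1.5, -0.9))     # second healthy run
damaged = fit_ar(simulate_ar2(1.3, -0.9))           # stiffness loss shifts poles

d_healthy = np.linalg.norm(intact - intact_repeat)  # estimation scatter only
d_damaged = np.linalg.norm(intact - damaged)        # scatter + real change
print(d_healthy < d_damaged)
# → True
```

In the paper these coefficient-based factors are then fed to BPNNs to localize the damage; the sketch only shows why the coefficients are damage-sensitive in the first place.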
International Nuclear Information System (INIS)
Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.
1989-01-01
Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from tests of the material resistance to non-ductile fracture, covering both the base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs
Cosmological streaming velocities and large-scale density maxima
International Nuclear Information System (INIS)
Peacock, J.A.; Lumsden, S.L.; Heavens, A.F.
1987-01-01
The statistical testing of models for galaxy formation against the observed peculiar velocities on 10-100 Mpc scales is considered. If it is assumed that observers are likely to be sited near maxima in the primordial field of density perturbations, then the observed filtered velocity field will be biased to low values by comparison with a point selected at random. This helps to explain how the peculiar velocities (relative to the microwave background) of the local supercluster and the Rubin-Ford shell can be so similar in magnitude. Using this assumption to predict peculiar velocities on two scales, we test models with large-scale damping (i.e. adiabatic perturbations). Allowed models have a damping length close to the Rubin-Ford scale and are mildly non-linear. Both purely baryonic universes and universes dominated by massive neutrinos can account for the observed velocities, provided 0.1 ≤ Ω ≤ 1. (author)
Comparison Between Overtopping Discharge in Small and Large Scale Models
DEFF Research Database (Denmark)
Helgason, Einar; Burcharth, Hans F.
2006-01-01
The present paper presents overtopping measurements from small-scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark, and large-scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small- and large-scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large-scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects...
Isogeometric shell formulation based on a classical shell model
Niemi, Antti; Collier, Nathan; Dalcín, Lisandro D.; Ghommem, Mehdi; Calo, Victor M.
2012-01-01
The authors' future work is concerned with building an isogeometric finite element method for modelling the nonlinear structural response of thin-walled shells undergoing large rigid-body motions. The aim is to use the model in an aeroelastic framework for the simulation of flapping wings.
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.
1996-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground-state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
Shell Model Far From Stability: Island of Inversion Mergers
Nowacki, F.; Poves, A.
2018-02-01
In this study we propose a common mechanism for the disappearance of shell closures far from stability. Using Large Scale Shell Model calculations (SM-CI), we predict that the region of deformation which comprises the heaviest Chromium and Iron isotopes at and beyond N=40 will merge with a new one at N=50, in an astonishing parallel to the N=20 and N=28 case in the Neon and Magnesium isotopes. We propose a valence space including the full pf shell for the protons and the full sdg shell for the neutrons, which represents a comeback of the harmonic oscillator shells in the very neutron-rich regime. Our calculations preserve the doubly magic nature of the ground state of 78Ni, which, however, exhibits a well-deformed prolate band at low excitation energy, providing a striking example of shape coexistence far from stability. This new Island of Inversion (IoI) adds to the four well-documented ones at N=8, 20, 28 and 40.
Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei
Energy Technology Data Exchange (ETDEWEB)
Dytrych, T. [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic); Louisiana State Univ., Baton Rouge, LA (United States); Maris, Pieter [Iowa State Univ., Ames, IA (United States); Launey, K. D. [Louisiana State Univ., Baton Rouge, LA (United States); Draayer, J. P. [Louisiana State Univ., Baton Rouge, LA (United States); Vary, James [Iowa State Univ., Ames, IA (United States); Langr, D. [Czech Technical Univ., Prague (Czech Republic); Aerospace Research and Test Establishment, Prague (Czech Republic); Saule, E. [Univ. of North Carolina, Charlotte, NC (United States); Caprio, M. A. [Univ. of Notre Dame, IN (United States); Catalyurek, U. [The Ohio State Univ., Columbus, OH (United States). Dept. of Electrical and Computer Engineering; Sosonkina, M. [Old Dominion Univ., Norfolk, VA (United States)
2016-06-09
We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for ^{6}Li and ^{12}C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations, and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.
Energy Technology Data Exchange (ETDEWEB)
Lee, Chung-Che; Chen, Dong-Hwang [Department of Chemical Engineering, National Cheng Kung University, Tainan, Taiwan 701, Taiwan (China)
2006-07-14
The large-scale synthesis and characterization of Ni-core/Ag-shell (Ni@Ag) nanoparticles by the successive hydrazine reduction of nickel chloride and silver nitrate in ethylene glycol, using polyethyleneimine (PEI) as a protective agent, are described. The resultant Ni@Ag nanoparticles had a mean core diameter of 6.2 nm and a shell thickness of 0.85 nm, without significant change over the nickel concentration range of 0.25-25 mM used for the Ag coating. Both the Ni cores and the Ag nanoshells had an fcc structure, and PEI was capped on the particle surface. X-ray photoelectron spectroscopy analysis confirmed that the Ni cores were fully covered by the Ag nanoshells. In addition, the Ni@Ag nanoparticles exhibited a characteristic absorption band at 430 nm and were nearly superparamagnetic. Based on the weight of the Ni cores, the saturation magnetization (Ms), remanent magnetization (Mr) and coercivity (Hc) were obtained as 17.2 emu g^-1, 4.0 emu g^-1 and 81 Oe, respectively. Furthermore, the resultant Ni@Ag nanoparticles exhibited better anti-oxidation properties than Ni nanoparticles due to the protection of the Ag nanoshells.
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.; Dean, D.J.; Langanke, K.
1997-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small-scale (10-100 m) habitat variability on large-scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
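To give a flavor of what such averaging does, the sketch below contrasts arithmetic and harmonic means of a patchy motility field. The harmonic-mean form and all numerical values are illustrative assumptions for this demo, not formulas or data from the paper; the point is only that slow patches, where animals accumulate, can dominate the effective large-scale coefficient.

```python
import numpy as np

# Hypothetical small-scale motility values mu(x) for a patchy landscape
# (illustrative units and numbers, not taken from the study).
mu = np.array([0.05, 2.0, 0.05, 1.5, 0.1, 2.5])   # alternating poor/good habitat
weights = np.full(mu.size, 1.0 / mu.size)          # equal-area patches

arithmetic_mean = np.sum(weights * mu)
# Residence-time-weighted (harmonic) average: an assumed stand-in for the
# homogenized coefficient of ecological diffusion.
harmonic_mean = 1.0 / np.sum(weights / mu)

# Slow patches dominate the harmonic mean, so effective large-scale spread
# is far slower than the arithmetic mean would suggest.
print(f"arithmetic: {arithmetic_mean:.3f}, harmonic: {harmonic_mean:.3f}")
```

The gap between the two averages is exactly the kind of small-scale effect that direct large-scale parameter fitting would miss.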
Isogeometric shell formulation based on a classical shell model
Niemi, Antti
2012-09-04
This paper constitutes the first steps in our work concerning isogeometric shell analysis. An isogeometric shell model of the Reissner-Mindlin type is introduced and a study of its accuracy in the classical pinched cylinder benchmark problem is presented. In contrast to earlier works [1,2,3,4], the formulation is based on a shell model where the displacement, strain and stress fields are defined in terms of a curvilinear coordinate system arising from the NURBS description of the shell middle surface. The isogeometric shell formulation is implemented using the PetIGA and igakit software packages developed by the authors. The igakit package is a Python package used to generate NURBS representations of geometries that can be utilised by the PetIGA finite element framework. The latter utilises data structures and routines of the portable, extensible toolkit for scientific computation (PETSc) [5,6]. The current shell implementation is valid for static, linear problems only, but the software package is well suited for future extensions to the geometrically and materially nonlinear regime as well as to dynamic problems. We assess the accuracy of the approach in the pinched cylinder benchmark problem and present comparisons against the h-version of the finite element method with bilinear elements. Quadratic, cubic and quartic NURBS discretizations are compared against the isoparametric bilinear discretization introduced in [7]. The results show that the quadratic and cubic NURBS approximations exhibit notably slower convergence under uniform mesh refinement as the thickness decreases, but the quartic approximation converges relatively quickly within the standard variational framework. The authors' future work is concerned with building an isogeometric finite element method for modelling the nonlinear structural response of thin-walled shells undergoing large rigid-body motions. The aim is to use the model in an aeroelastic framework for the simulation of flapping wings.
Large-scale modeling of rain fields from a rain cell deterministic model
Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia
2006-04-01
A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
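The Gaussian-to-binary step described above, turning a correlated Gaussian field into a rain/no-rain mask with a prescribed occupation rate, can be sketched as follows. The field size, smoothing scales, and target rate are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)

# White noise smoothed with an anisotropic Gaussian kernel gives a Gaussian
# field with anisotropic covariance (illustrative correlation scales).
field = gaussian_filter(rng.standard_normal((512, 512)), sigma=(8, 24))

# Threshold the Gaussian field so the fraction of raining pixels matches a
# target large-scale rain occupation rate.
rain_rate = 0.15
threshold = np.quantile(field, 1.0 - rain_rate)
raining = field > threshold

print(f"raining fraction: {raining.mean():.3f}")  # close to the 0.15 target
```

In the methodology above, the resulting binary mask marks the large-scale locations over which the midscale cellular rain model is then applied.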
SUPERGIANT SHELLS AND MOLECULAR CLOUD FORMATION IN THE LARGE MAGELLANIC CLOUD
Energy Technology Data Exchange (ETDEWEB)
Dawson, J. R.; Dickey, John M. [School of Mathematics and Physics, University of Tasmania, Sandy Bay Campus, Churchill Avenue, Sandy Bay, TAS 7005 (Australia); McClure-Griffiths, N. M. [Australia Telescope National Facility, CSIRO Astronomy and Space Science, Marsfield NSW 2122 (Australia); Wong, T. [Astronomy Department, University of Illinois, Urbana, IL 61801 (United States); Hughes, A. [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, D-69117, Heidelberg (Germany); Fukui, Y. [Department of Physics and Astrophysics, Nagoya University, Chikusa-ku, Nagoya (Japan); Kawamura, A., E-mail: joanne.dawson@utas.edu.au [National Astronomical Observatory of Japan, Tokyo 181-8588 (Japan)
2013-01-20
We investigate the influence of large-scale stellar feedback on the formation of molecular clouds in the Large Magellanic Cloud (LMC). Examining the relationship between H I and ¹²CO(J = 1-0) in supergiant shells (SGSs), we find that the molecular fraction in the total volume occupied by SGSs is not enhanced with respect to the rest of the LMC disk. However, the majority of objects (~70% by mass) are more molecular than their local surroundings, implying that the presence of a supergiant shell does on average have a positive effect on the molecular gas fraction. Averaged over the full SGS sample, our results suggest that ~12%-25% of the molecular mass in supergiant shell systems was formed as a direct result of the stellar feedback that created the shells. This corresponds to ~4%-11% of the total molecular mass of the galaxy. These figures are an approximate lower limit to the total contribution of stellar feedback to molecular cloud formation in the LMC, and constitute one of the first quantitative measurements of feedback-triggered molecular cloud formation in a galactic system.
Directory of Open Access Journals (Sweden)
Ana Tomé
2018-02-01
A research and development project has been conducted aiming to design and produce ultra-thin concrete shells. In this paper, the first part of the project is described, consisting of an innovative method for shape generation and the consequent production of reduced-scale models of the selected geometries. First, the shape generation is explained, consisting of a geometrically nonlinear analysis based on the Finite Element Method (FEM) to define the antifunicular of the shell's deadweight. Next, the scale model production is described, consisting of 3D printing, specifically developed to evaluate the aesthetics and visual impact, as well as to study the aerodynamic behaviour of the concrete shells in a wind tunnel. The goals and constraints of the method are identified and step-by-step guidelines are presented, aiming to be used as a reference in future studies. The printed geometry is validated by high-resolution assessment achieved by photogrammetry. The results are compared with the geometry computed through geometrically nonlinear finite-element-based analysis, and no significant differences are recorded. The method is revealed to be an important tool for automatic shape generation and for building scale models of shells. The latter enables wind tunnel tests to obtain pressure coefficients, essential for the structural analysis of this type of structure.
History and future perspectives of the Monte Carlo shell model -from Alphleet to K computer-
International Nuclear Information System (INIS)
Shimizu, Noritaka; Otsuka, Takaharu; Utsuno, Yutaka; Mizusaki, Takahiro; Honma, Michio; Abe, Takashi
2013-01-01
We report on the history of the developments of the Monte Carlo shell model (MCSM). The MCSM was proposed in order to perform large-scale shell-model calculations which the direct diagonalization method cannot reach. Since 1999, PC clusters have been used for parallel computation of the MCSM. Since 2011 we have participated in the High Performance Computing Infrastructure Strategic Program and developed a new MCSM code for current massively parallel computers such as the K computer. We discuss future perspectives concerning a new framework and parallel computation of the MCSM, incorporating the conjugate gradient method and energy-variance extrapolation.
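Energy-variance extrapolation, mentioned above, estimates the exact eigenvalue by extrapolating a sequence of approximate energies to zero energy variance. A minimal numerical sketch with synthetic, noise-free data and a linear fit follows; the energy scale and slope are assumptions for the demo (practical applications often use quadratic fits in the variance).

```python
import numpy as np

# Synthetic sequence of approximate energies: as the energy variance
# <H^2> - <H>^2 of the trial state shrinks, the energy approaches the
# exact eigenvalue E0 (assumed value, MeV).
E0 = -120.5
variances = np.array([4.0, 2.5, 1.5, 0.8, 0.3])  # MeV^2
energies = E0 + 0.8 * variances                  # assumed linear model

slope, intercept = np.polyfit(variances, energies, 1)
print(f"extrapolated energy at zero variance: {intercept:.2f} MeV")
```

The intercept of the fit at zero variance recovers the exact energy, which is the idea behind the extrapolation used in truncated MCSM bases.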
Importance-truncated shell model for multi-shell valence spaces
Energy Technology Data Exchange (ETDEWEB)
Stumpf, Christina; Vobig, Klaus; Roth, Robert [Institut fuer Kernphysik, TU Darmstadt (Germany)
2016-07-01
The valence-space shell model is one of the workhorses of nuclear structure theory. In traditional applications, shell-model calculations are carried out using effective interactions constructed in a phenomenological framework for rather small valence spaces, typically spanned by one major shell. We improve on this traditional approach by addressing two main aspects. First, we use new effective interactions derived in an ab initio approach and, thus, establish a connection to the underlying nuclear interaction, providing access to single- and multi-shell valence spaces. Second, we extend the shell model to larger valence spaces by applying an importance-truncation scheme based on a perturbative importance measure. In this way, we reduce the model space to the basis states relevant for the description of a few target eigenstates and solve the eigenvalue problem in this physics-driven truncated model space. Multi-shell valence spaces in particular are not tractable otherwise. We combine the importance-truncated shell model with refined extrapolation schemes to approximately recover the exact result. We present first results obtained in the importance-truncated shell model with the newly derived ab initio effective interactions for multi-shell valence spaces, e.g., the sdpf shell.
Type I Shell Galaxies as a Test of Gravity Models
Energy Technology Data Exchange (ETDEWEB)
Vakili, Hajar; Rahvar, Sohrab [Department of Physics, Sharif University of Technology, P.O. Box 11365-9161, Tehran (Iran, Islamic Republic of); Kroupa, Pavel, E-mail: vakili@physics.sharif.edu [Helmholtz-Institut für Strahlen-und Kernphysik, Universität Bonn, Nussallee 14-16, D-53115 Bonn (Germany)
2017-10-10
Shell galaxies are understood to form through the collision of a dwarf galaxy with an elliptical galaxy. Shell structures and kinematics have been noted to be independent tools for measuring the gravitational potential of shell galaxies. In this work we theoretically compare the formation of shells in Type I shell galaxies in different gravity theories, a comparison so far missing from the literature. We include Newtonian plus dark halo gravity, and two non-Newtonian gravity models, MOG and MOND, in identical initial systems. We investigate the effect of dynamical friction, which, by slowing down the dwarf galaxy in the dark halo models, limits the range of shell radii to low values. Under the same initial conditions, shells appear on a shorter timescale and over a smaller range of distances in the presence of dark matter than in the corresponding non-Newtonian gravity models. If galaxies are embedded in a dark matter halo, then the merging time may be too rapid to allow multi-generation shell formation as required by observed systems, because of the large dynamical friction effect. Starting from the same initial state, the observation of small bright shells in the dark halo model should be accompanied by large faint ones, while for MOG, the next shell generation patterns iterate with a specific time delay. The first shell generation pattern shows a degeneracy between the age of the shells and the gravity theory, but the relative distance of the shells and the shell expansion velocity can break this degeneracy.
Large-scale exact diagonalizations reveal low-momentum scales of nuclei
Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.
2018-03-01
Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for 6Li in model spaces up to Nmax = 22 and to reveal the 4He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
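The exponential IR extrapolation discussed above is commonly written as E(L) ≈ E∞ + a exp(-2 k∞ L), with L the effective box size of the oscillator basis. A sketch of the fit on synthetic data follows; the parameter values and the range of L are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_model(L, E_inf, a, k):
    # Exponential infrared correction to the energy in a finite basis.
    return E_inf + a * np.exp(-2.0 * k * L)

# Synthetic "computed" energies at several effective box sizes L (fm),
# generated from assumed true parameters (-32.0 MeV, 40.0, 0.5 fm^-1).
L = np.linspace(4.0, 10.0, 8)
E = ir_model(L, -32.0, 40.0, 0.5)

popt, _ = curve_fit(ir_model, L, E, p0=(-30.0, 10.0, 0.3), maxfev=10000)
print(f"extrapolated E_inf = {popt[0]:.3f} MeV")
```

Fitting energies at several basis sizes and reading off E∞ is how finite-basis NCSM results are pushed toward the infinite-basis limit.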
Jiao, C. F.; Engel, J.; Holt, J. D.
2017-11-01
We use the generator-coordinate method (GCM) with realistic shell-model interactions to closely approximate full shell-model calculations of the matrix elements for the neutrinoless double-β decay of 48Ca, 76Ge, and 82Se. We work in one major shell for the first isotope, in the f5/2 p g9/2 space for the second and third, and finally in two major shells for all three. Our coordinates include not only the usual axial deformation parameter β, but also the triaxiality angle γ and neutron-proton pairing amplitudes. In the smaller model spaces our matrix elements agree well with those of full shell-model diagonalization, suggesting that our Hamiltonian-based GCM captures most of the important valence-space correlations. In two major shells, where exact diagonalization is not currently possible, our matrix elements are only slightly different from those in a single shell.
Large-scale hydrology in Europe : observed patterns and model performance
Energy Technology Data Exchange (ETDEWEB)
Gudmundsson, Lukas
2011-06-15
In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff, therefore, provides a foundation to approach European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also enable the detection of shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale
An interactive display system for large-scale 3D models
Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman
2018-04-01
With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and increasing in complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
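The core of any view-dependent multi-resolution scheme like the one described is picking a level of detail from the viewing distance. The sketch below shows one common heuristic; the base distance, level count, and doubling rule are illustrative assumptions, not details of the paper's system.

```python
import math

def select_lod(distance_m: float, base_m: float = 10.0, num_levels: int = 5) -> int:
    """Pick a level of detail: 0 = finest mesh, num_levels - 1 = coarsest.

    Each doubling of viewing distance beyond base_m drops one level,
    a common stand-in for screen-space-error-driven LOD selection.
    """
    if distance_m <= base_m:
        return 0
    level = int(math.log2(distance_m / base_m))
    return min(level, num_levels - 1)

for d in (5, 15, 45, 200, 5000):
    print(d, select_lod(d))
```

In an out-of-core renderer, the selected level also decides which mesh chunks are paged into RAM, which is how memory use stays bounded for scenes far larger than main memory.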
Optimization of large-scale heterogeneous system-of-systems models.
Energy Technology Data Exchange (ETDEWEB)
Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)
2012-01-01
Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.
Disinformative data in large-scale hydrological modelling
Directory of Open Access Journals (Sweden)
A. Kauffeldt
2013-07-01
Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
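A minimal version of the water-balance screening described above can be written in a few lines: flag basins where runoff exceeds precipitation (a hint of precipitation undercatch) or where long-term losses exceed the potential-evaporation limit. Variable names and all values are illustrative assumptions.

```python
import numpy as np

# Long-term basin averages (mm/yr): precipitation P, discharge-derived
# runoff Q, and potential evaporation PET; toy values for four basins.
P   = np.array([800.0, 600.0, 1200.0, 500.0])
Q   = np.array([300.0, 650.0,  400.0,  20.0])
PET = np.array([700.0, 500.0,  600.0, 300.0])

runoff_coeff = Q / P
losses = P - Q

# Runoff coefficients above 1 suggest forcing-data problems such as snow
# undercatch; losses above PET violate long-term water-balance closure.
too_high_runoff = runoff_coeff > 1.0
losses_exceed_pet = losses > PET

print("suspect (runoff > precipitation):", np.flatnonzero(too_high_runoff))
print("suspect (losses > PET):", np.flatnonzero(losses_exceed_pet))
```

Running such checks before any model calibration is the "pre-modelling analysis" the study argues for: disinformative basins are identified from the data alone, without invoking any particular hydrological model.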
Aslam, Umar; Linic, Suljo
2017-12-13
Bimetallic nanoparticles in which a metal is coated with an ultrathin (∼1 nm) layer of a second metal are often desired for their unique chemical and physical properties. Current synthesis methods for producing such core-shell nanostructures often require incremental addition of a shell metal precursor which is rapidly reduced onto metal cores. A major shortcoming of this approach is that it necessitates precise concentrations of chemical reagents, making it difficult to perform at large scales. To address this issue, we considered an approach whereby the reduction of the shell metal precursor was controlled through in situ chemical modification of the precursor. We used this approach to develop a highly scalable synthesis for coating atomic layers of Pt onto Ag nanocubes. We show that Ag-Pt core-shell nanostructures are synthesized in high yields and that these structures effectively combine the optical properties of the plasmonic Ag nanocube core with the surface properties of the thin Pt shell. Additionally, we demonstrate the scalability of the synthesis by performing a 10 times scale-up.
Evaluation of drought propagation in an ensemble mean of large-scale hydrological models
Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.
2012-01-01
Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological
Deriving the nuclear shell model from first principles
Barrett, Bruce R.; Dikmen, Erdal; Vary, James P.; Maris, Pieter; Shirokov, Andrey M.; Lisetskiy, Alexander F.
2014-09-01
The results of an 18-nucleon No Core Shell Model calculation, performed in a large basis space using a bare, soft NN interaction, can be projected into the 0 ℏω space, i.e., the sd-shell. Because the 16 nucleons in the 16O core are frozen in the 0 ℏω space, all the correlations of the 18-nucleon system are captured by the two valence, sd-shell nucleons. By the projection, we obtain microscopically the sd-shell 2-body effective interactions, the core energy and the sd-shell s.p. energies. Thus, the input for standard shell-model calculations can be determined microscopically by this approach. If the same procedure is then applied to 19-nucleon systems, the sd-shell 3-body effective interactions can also be obtained, indicating the importance of these 3-body effective interactions relative to the 2-body effective interactions. Applications to A = 19 and heavier nuclei with different intrinsic NN interactions will be presented and discussed. Supported by the US NSF under Grant No. 0854912, the US DOE under
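The idea of projecting a large-space solution into a small model space can be illustrated with a generic Lee-Suzuki-style construction on a toy matrix (this is a textbook-style sketch, not the actual 18-nucleon calculation): build an effective Hamiltonian in the P space whose eigenvalues are exactly a chosen subset of the full-space eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "full-space" Hamiltonian: a random real symmetric matrix.
dim, p_dim = 8, 3                       # full-space and P (model) space sizes
A = rng.standard_normal((dim, dim))
H = (A + A.T) / 2.0

evals, evecs = np.linalg.eigh(H)

# Keep the lowest p_dim eigenstates and take their components in the P
# space (here simply the first p_dim basis states).
W = evecs[:p_dim, :p_dim]

# Similarity construction: H_eff is generally non-Hermitian but reproduces
# the kept exact eigenvalues by construction.
H_eff = W @ np.diag(evals[:p_dim]) @ np.linalg.inv(W)

print(np.sort(np.linalg.eigvals(H_eff).real))
print(evals[:p_dim])
```

The real calculation extracts 2-body (and, for A = 19, 3-body) effective interactions from such a projection; the toy above only shows the spectral property that makes the projected Hamiltonian "effective".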
Evaluation of drought propagation in an ensemble mean of large-scale hydrological models
Directory of Open Access Journals (Sweden)
A. F. Van Loon
2012-11-01
Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).
Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an
Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation
DEFF Research Database (Denmark)
Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.
2015-01-01
This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale … of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.
Multibody dynamic analysis using a rotation-free shell element with corotational frame
Shi, Jiabei; Liu, Zhuyong; Hong, Jiazhen
2018-03-01
Rotation-free shell formulation is a simple and effective method to model a shell with large deformation. Moreover, it is compatible with the existing theories of the finite element method. However, a rotation-free shell is seldom employed in multibody systems. Using a derivative of rigid body motion, an efficient nonlinear shell model is proposed based on the rotation-free shell element and a corotational frame. The bending and membrane strains of the shell have been simplified by isolating deformational displacements from the detailed description of rigid body motion. The consistent stiffness matrix can be obtained easily in this form of shell model. To model a multibody system consisting of the presented shells, joint kinematic constraints including translational and rotational constraints are deduced in the context of the geometrically nonlinear rotation-free element. A simple node-to-surface contact discretization and penalty method are adopted for contacts between shells. A series of analyses of multibody system dynamics is presented to validate the proposed formulation. Furthermore, the deployment of a large-scale solar array is presented to verify the comprehensive performance of the nonlinear shell model.
Large transverse momentum processes in a non-scaling parton model
International Nuclear Information System (INIS)
Stirling, W.J.
1977-01-01
The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)
Collapse analysis of toroidal shell
International Nuclear Information System (INIS)
Pomares, R.J.
1990-01-01
This paper describes a study performed to determine the collapse characteristics of a toroidal shell using finite element method (FEM) analysis. The study also included free-drop testing of a quarter-scale prototype to verify the analytical results. The full-sized toroidal shell has a 24-inch toroidal diameter with a 24-inch tubal diameter. The shell material is type 304 stainless steel. The toroidal shell is part of the GE Model 2000 transportation packaging and acts as an energy absorbing device. The analyses were performed on full-sized and quarter-scale models. The finite element program used in all analyses was the LIBRA code. The analytical procedure used both the elasto-plastic and large displacement options within the code. The loading applied in the analyses corresponded to an impact with an infinite rigid plane oriented normal to the drop direction vector. The application of the loading continued incrementally until the work performed by the deforming structure equalled the kinetic energy developed in the free fall. The comparison of analysis and test results showed a good correlation.
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
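The regularization idea behind such large-scale covariance estimators can be sketched by shrinking the ill-conditioned sample covariance toward a structured target. This is a generic illustration; the function name and the weight `alpha` (which plays the role of a prior strength) are invented for the sketch, not the authors' hierarchical model:

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.5):
    """Shrink the sample covariance toward a diagonal target.

    A minimal sketch of regularized covariance estimation; `alpha`
    acts like a prior strength in Bayesian/hierarchical formulations.
    """
    S = np.cov(X, rowvar=False)       # p x p sample covariance
    target = np.diag(np.diag(S))      # diagonal shrinkage target
    return alpha * target + (1.0 - alpha) * S

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))         # n=20 samples, p=50 variables: n < p
S_hat = shrinkage_covariance(X, alpha=0.7)

# The raw sample covariance is singular here (rank at most n-1 < p),
# while the shrunk estimate is positive definite and invertible.
print(np.linalg.matrix_rank(np.cov(X, rowvar=False)) < 50)
print(np.all(np.linalg.eigvalsh(S_hat) > 0))
```

The diagonal target is the simplest choice; richer targets (block structure, known pathway groupings) move the sketch closer to models that impose dependency between covariance parameters.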
The Hamburg large scale geostrophic ocean general circulation model. Cycle 1
International Nuclear Information System (INIS)
Maier-Reimer, E.; Mikolajewicz, U.
1992-02-01
The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)
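The stability gain from implicit time integration described above can be illustrated on a single stiff mode; the decay rate and step size below are arbitrary toy values chosen to exceed the explicit stability limit, not LSG-OGCM parameters:

```python
# Toy illustration (not the LSG model itself): implicit stepping stays
# stable at time steps far beyond the explicit limit, which is the idea
# behind filtering out fast gravity/Rossby modes in the LSG-OGCM.
lam = -100.0          # a "fast" decaying mode: dy/dt = lam * y
dt = 0.1              # step size; explicit Euler requires |1 + dt*lam| <= 1
y_exp, y_imp = 1.0, 1.0
for _ in range(50):
    y_exp = y_exp + dt * lam * y_exp      # explicit Euler: factor 1 + dt*lam = -9
    y_imp = y_imp / (1.0 - dt * lam)      # implicit Euler: factor 1/11 per step

print(abs(y_exp) > 1e10)   # the explicit scheme has blown up
print(abs(y_imp) < 1e-10)  # the implicit scheme has damped the fast mode
```

The implicit amplification factor has magnitude below one for any step size when the mode decays, so the fast modes are damped rather than destabilizing the integration.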
International Nuclear Information System (INIS)
Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B
2013-01-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)
Multiresolution comparison of precipitation datasets for large-scale models
Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.
2014-12-01
Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
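A minimal sketch of the kind of gauge-versus-grid verification scores such comparisons rest on; the daily totals below are hypothetical numbers, not data from the study:

```python
import math

def bias_rmse(product, gauges):
    """Mean bias and RMSE of a gridded product against co-located
    gauge observations (illustrative sketch, not the study's code)."""
    n = len(gauges)
    errs = [p - g for p, g in zip(product, gauges)]
    bias = sum(errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    return bias, rmse

gauge = [2.0, 0.0, 5.5, 1.2]   # hypothetical daily gauge totals (mm)
grid  = [1.8, 0.1, 5.0, 1.5]   # hypothetical gridded estimates (mm)
b, r = bias_rmse(grid, gauge)
print(round(b, 3), round(r, 3))
```

In practice such scores would be aggregated over many stations and time scales (daily, monthly, annual) to build the multiresolution comparison the abstract describes.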
Utilization of Large Scale Surface Models for Detailed Visibility Analyses
Caha, J.; Kačmařík, M.
2017-11-01
This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
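The classic Boolean visibility test that such analyses build on can be sketched for a one-dimensional terrain profile by tracking the maximum view angle from the observer; this is a generic illustration (the profile values and observer height are invented), not the article's software:

```python
def visible(profile, observer_height=1.7):
    """Boolean visibility of each cell along a terrain profile, seen
    from cell 0, via the running-maximum view-angle test used in
    viewshed algorithms (illustrative sketch)."""
    eye = profile[0] + observer_height
    out = [True]                        # the observer's own cell
    max_slope = float("-inf")
    for d, z in enumerate(profile[1:], start=1):
        slope = (z - eye) / d           # tangent of the view angle
        out.append(slope >= max_slope)  # visible iff not below the horizon so far
        max_slope = max(max_slope, slope)
    return out

# A ridge at index 2 hides the lower ground behind it:
print(visible([100, 100, 110, 100, 104]))  # [True, True, True, False, False]
```

Extended viewsheds such as the angle difference above the local horizon fall out of the same loop: instead of the Boolean test, one would record `slope - max_slope` for each target cell.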
International Nuclear Information System (INIS)
Saha, S.; Palit, R.; Sethi, J.
2012-01-01
The excited states of nuclei near the N = 50 closed shell provide a suitable laboratory for testing the interactions of shell model states and the possible presence of high spin isomers, and help in understanding the shape transition as the higher orbitals are occupied. In particular, the structure of the N = 49 isotones (Z = 32 to 46) with one hole in the N = 50 shell gap has been investigated using different reactions. Interestingly, the high spin states in these isotones have contributions from particle excitations across the respective proton and neutron shell gaps and provide a suitable testing ground for the predictions of shell model interactions describing these excitations across the shell gap. In the literature, extensive studies of the high spin states of the heavier N = 49 isotones, from 91Mo up to 95Pd, are available. Limited information existed on the high spin states of the lighter isotones. Therefore, the motivation of the present work is to extend the high spin structure of 89Zr and to characterize the structure of these levels through comparison with large scale shell model calculations based on two new residual interactions in the f5/2 p g9/2 model space.
Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows
Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel
2017-11-01
We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
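The dissipative eddy-viscosity baseline that the proposed nonlinear term augments can be sketched with the standard Smagorinsky closure; the constant and filter width below are conventional illustrative values, and the function name is invented, not the authors' code:

```python
import numpy as np

def smagorinsky_viscosity(grad_u, delta, Cs=0.17):
    """Eddy viscosity nu_t = (Cs*delta)^2 * |S| from a velocity-gradient
    tensor, the purely dissipative part of a subgrid-scale model
    (illustrative sketch of the Smagorinsky closure)."""
    S = 0.5 * (grad_u + grad_u.T)            # strain-rate tensor S_ij
    S_mag = np.sqrt(2.0 * np.sum(S * S))     # |S| = sqrt(2 S_ij S_ij)
    return (Cs * delta) ** 2 * S_mag

grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])         # simple shear, du/dy = 1
nu_t = smagorinsky_viscosity(grad_u, delta=0.1)
print(round(nu_t, 6))  # (0.17 * 0.1)**2 * 1 = 0.000289
```

A model of the kind the abstract proposes would add to this eddy-viscosity stress a nondissipative term that is nonlinear in the velocity gradient, so that energy transport (e.g., by rotation) is represented and not just drained.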
Penalized Estimation in Large-Scale Generalized Linear Array Models
DEFF Research Database (Denmark)
Lund, Adam; Vincent, Martin; Hansen, Niels Richard
2017-01-01
Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
Photorealistic large-scale urban city model reconstruction.
Poullis, Charalambos; You, Suya
2009-01-01
The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).
Non-gut baryogenesis and large scale structure of the universe
International Nuclear Information System (INIS)
Kirilova, D.P.; Chizhov, M.V.
1995-07-01
We discuss a mechanism for generating baryon density perturbations and study the evolution of the baryon charge density distribution in the framework of the low temperature baryogenesis scenario. This mechanism may be important for the large scale structure formation of the Universe and particularly, may be essential for understanding the existence of a characteristic scale of 130 h⁻¹ Mpc in the distribution of the visible matter. The detailed analysis showed that both the observed very large scale of the visible matter distribution in the Universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate, formed at the inflationary stage. Moreover, according to our model, at present the visible part of the Universe may consist of baryonic and antibaryonic shells, sufficiently separated, so that annihilation radiation is not observed. This is an interesting possibility as far as the observational data of antiparticles in cosmic rays do not rule out the possibility of antimatter superclusters in the Universe. (author). 16 refs, 3 figs
Hydrogen combustion modelling in large-scale geometries
International Nuclear Information System (INIS)
Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.
2014-01-01
Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Thus, combustion modelling in large-scale geometry is one of the remaining severe accident safety issues. At present, no combustion model exists that can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore, the major attention in model development has to be paid to the adaptation of existing approaches or the creation of new ones capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussions and conclusions. (authors)
The three-point function as a probe of models for large-scale structure
International Nuclear Information System (INIS)
Frieman, J.A.; Gaztanaga, E.
1993-01-01
The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes QJ at large scales, r ≳ Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
Sizing and scaling requirements of a large-scale physical model for code validation
International Nuclear Information System (INIS)
Khaleel, R.; Legore, T.
1990-01-01
Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated
Shell-model-based deformation analysis of light cadmium isotopes
Schmidt, T.; Heyde, K. L. G.; Blazhev, A.; Jolie, J.
2017-07-01
Large-scale shell-model calculations for the even-even cadmium isotopes 98Cd-108Cd have been performed with the ANTOINE code in the π(2p1/2, 1g9/2) ⊗ ν(2d5/2, 3s1/2, 2d3/2, 1g7/2, 1h11/2) model space without further truncation. Known experimental energy levels and B(E2) values could be well reproduced. Taking these calculations as a starting point, we analyze the deformation parameters predicted for the Cd isotopes as a function of neutron number N and spin J, using the method of model-independent invariants introduced by Kumar [Phys. Rev. Lett. 28, 249 (1972), 10.1103/PhysRevLett.28.249] and Cline [Annu. Rev. Nucl. Part. Sci. 36, 683 (1986), 10.1146/annurev.ns.36.120186.003343].
Research on large-scale wind farm modeling
Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng
2017-01-01
Due to the intermittent and fluctuating properties of wind energy, a large-scale wind farm connected to the grid has an impact on the power system that differs from that of traditional power plants. Therefore it is necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. However, we must first establish an effective WTG model. As the doubly-fed VSCF wind turbine is currently the mainstream wind turbine type, this article first reviews the research progress on the doubly-fed VSCF wind turbine and then describes the detailed building process of the model. It then examines the common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on the output characteristics of the wind farm becomes possible; the focus is on a new identification-based approach to modeling large wind farms, which can be realized by two concrete methods.
Small scale models equal large scale savings
International Nuclear Information System (INIS)
Lee, R.; Segroves, R.
1994-01-01
A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)
Extending SME to Handle Large-Scale Cognitive Modeling.
Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre
2017-07-01
Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
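The greedy merging of technique (a) can be sketched as score-ordered selection of non-conflicting local match hypotheses into a one-to-one mapping; the toy solar-system/atom hypotheses and scores below are invented for illustration and are not SME's actual data structures:

```python
def greedy_merge(matches):
    """Greedy merging in the spirit of SME: sort local match hypotheses
    by score, then admit each into the interpretation unless it
    conflicts with an already-chosen correspondence (one-to-one
    mapping constraint). Illustrative sketch, not the SME code."""
    mapping, score = {}, 0.0
    used_targets = set()
    for base, target, s in sorted(matches, key=lambda m: -m[2]):
        if base not in mapping and target not in used_targets:
            mapping[base] = target
            used_targets.add(target)
            score += s
    return mapping, score

hypotheses = [("sun", "nucleus", 0.9), ("planet", "electron", 0.8),
              ("sun", "electron", 0.4), ("planet", "nucleus", 0.3)]
m, s = greedy_merge(hypotheses)
print(m, round(s, 2))  # {'sun': 'nucleus', 'planet': 'electron'} 1.7
```

The sort dominates the cost, which is where the polynomial bound comes from; the full algorithm additionally respects structural consistency between parent and child correspondences, which this sketch omits.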
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ either computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory, using clusters; these clusters provide high-performance computation to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Traffic assignment models in large-scale applications
DEFF Research Database (Denmark)
Rasmussen, Thomas Kjær
… the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times … of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real …, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo…
Energy Technology Data Exchange (ETDEWEB)
Joy, Lija K.; Sooraj, V.; Sethulakshmi, N.; Anantharaman, M. R., E-mail: mraiyer@yahoo.com [Department of Physics, Cochin University of Science and Technology, Cochin-682022, Kerala (India); Sajeev, U. S. [Department of Physics, Government College, Kottayam-686613, Kerala (India); Nair, Swapna S. [Department of Physics, School of Mathematical and Physical Sciences, Central University of Kerala, Kasargode-671123, Kerala (India); Narayanan, T. N. [CSIR-Central Electrochemical Research Institute, Karaikkudi-630006, Tamil Nadu (India); Ajayan, P. M. [Department of Material Science and Nano Engineering, Rice University, 6100 Main Street, Houston, Texas 7700 (United States)
2014-03-24
Commercial samples of magnetite with sizes ranging from 25 to 30 nm were coated with polyaniline using radio frequency plasma polymerization to achieve a core-shell structure of magnetic nanoparticle (core)-polyaniline (shell). High resolution transmission electron microscopy images confirm the core-shell architecture of the polyaniline-coated iron oxide. The dielectric properties of the material were studied before and after plasma treatment. The polymer-coated magnetite particles exhibited a large dielectric permittivity with respect to the uncoated samples. The dielectric behavior was modeled using a Maxwell-Wagner capacitor model. A plausible mechanism for the enhancement of the dielectric permittivity is proposed.
Large-scale building energy efficiency retrofit: Concept, model and control
International Nuclear Information System (INIS)
Wu, Zhou; Wang, Bo; Xia, Xiaohua
2016-01-01
BEER (Building energy efficiency retrofit) projects are initiated in many nations and regions over the world. Existing studies of BEER focus on modeling and planning based on one building and one year period of retrofitting, which cannot be applied to certain large BEER projects with multiple buildings and multi-year retrofit. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits essential requirements of real-world projects. The large-scale BEER is newly studied in the control approach rather than the optimization approach commonly used before. Optimal control is proposed to design optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy is dynamically changing on dimensions of time, building and technology. The TBT framework and the optimal control approach are verified in a large BEER project, and results indicate that promising performance of energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.
REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS
Directory of Open Access Journals (Sweden)
Kadir Alpaslan DEMIR
2015-10-01
Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars on the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today’s challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today’s and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for systems development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.
Modeling the impact of large-scale energy conversion systems on global climate
International Nuclear Information System (INIS)
Williams, J.
There are three energy options which could satisfy a projected energy requirement of about 30 TW: the solar, nuclear and (to a lesser extent) coal options. Climate models can be used to assess the impact of large-scale deployment of these options. The impact of waste heat has been assessed using energy balance models and general circulation models (GCMs). Results suggest that the impacts are significant when the heat input is very high, and studies of more realistic scenarios are required. Energy balance models, radiative-convective models and a GCM have been used to study the impact of doubling the atmospheric CO2 concentration. State-of-the-art models estimate a surface temperature increase of 1.5–3.0 °C with large amplification near the poles, but much uncertainty remains. Very few model studies have been made of the impact of particles on global climate; more information on the characteristics of particle input is required. The impact of large-scale deployment of solar energy conversion systems has received little attention, but model studies suggest that large-scale changes in surface characteristics associated with such systems (surface heat balance, roughness and hydrological characteristics and ocean surface temperature) could have significant global climatic effects. (Auth.)
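The simplest member of the model hierarchy mentioned above, a zero-dimensional energy balance model, can be sketched in a few lines. The effective emissivity below is a tuning assumption, and the no-feedback response to an assumed 3.7 W/m² doubled-CO2 forcing comes out near 1.1 °C, below the quoted 1.5–3.0 °C range precisely because feedbacks are omitted.

```python
# Zero-dimensional energy-balance sketch: S0*(1-a)/4 = eps*sigma*T^4.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0, ALBEDO = 1361.0, 0.30
EPS_EFF = 0.612    # effective emissivity, tuned to give ~288 K (assumption)

def equilibrium_T(forcing=0.0):
    """Surface temperature balancing absorbed solar plus a forcing."""
    absorbed = S0 * (1 - ALBEDO) / 4 + forcing
    return (absorbed / (EPS_EFF * SIGMA)) ** 0.25

T0 = equilibrium_T()
# ~3.7 W/m^2 is a commonly quoted radiative forcing for doubled CO2.
dT = equilibrium_T(forcing=3.7) - T0
print(round(T0, 1), round(dT, 2))
```

The gap between this no-feedback estimate and the GCM range illustrates why the abstract stresses that much uncertainty remains: the spread comes mostly from feedbacks, not from the forcing itself.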
DeBenedictis, Andrew; Atherton, Timothy J.; Rodarte, Andrea L.; Hirst, Linda S.
2018-03-01
A micrometer-scale elastic shell immersed in a nematic liquid crystal may be deformed by the host if the cost of deformation is comparable to the cost of elastic deformation of the nematic. Moreover, such inclusions interact and form chains due to quadrupolar distortions induced in the host. A continuum theory model using finite elements is developed for this system, using mesh regularization and dynamic refinement to ensure quality of the numerical representation even for large deformations. From this model, we determine the influence of the shell elasticity, nematic elasticity, and anchoring condition on the shape of the shell and hence extract parameter values from an experimental realization. Extending the model to multibody interactions, we predict the alignment angle of the chain with respect to the host nematic as a function of aspect ratio, which is found to be in excellent agreement with experiments.
On the shell model connection of the cluster model
International Nuclear Information System (INIS)
Cseh, J.; Levai, G.; Kato, K.
2000-01-01
Complete text of publication follows. The interrelation of basic nuclear structure models is a longstanding problem. The connection between the spherical shell model and the quadrupole collective model has been studied extensively, and symmetry considerations proved to be especially useful in this respect. A collective band was interpreted in the shell model language long ago as a set of states (of the valence nucleons) with a specific SU(3) symmetry. Furthermore, the energies of these rotational states are obtained to a good approximation as eigenvalues of an SU(3) dynamically symmetric shell model Hamiltonian. On the other hand, the relation of the shell model and the cluster model is less well explored. The connection of the harmonic oscillator (i.e. SU(3)) bases of the two approaches is known, but it was established only for the unrealistic harmonic oscillator interactions. Here we investigate the question: can an SU(3) dynamically symmetric interaction provide a similar connection between the spherical shell model and the cluster model, like the one between the shell and collective models? In other words: whether or not the energies of the states of the cluster bands, defined by specific SU(3) symmetries, can be obtained from a shell model Hamiltonian (with SU(3) dynamical symmetry). We carried out calculations within the framework of the semimicroscopic algebraic cluster model, in which not only is the cluster model space obtained from the full shell model space by an SU(3) symmetry-dictated truncation, but SU(3) dynamically symmetric interactions are also applied. Actually, Hamiltonians of this kind proved to be successful in describing the gross features of cluster states in a wide energy range. The novel feature of the present work is that we apply exclusively shell model interactions. The energies obtained from such a Hamiltonian for several bands of the (12C, 14C, 16O, 20Ne, 40Ca) + α systems turn out to be in good agreement with the experimental values.
On the shell-model-connection of the cluster model
International Nuclear Information System (INIS)
Cseh, J.
2000-01-01
Complete text of publication follows. The interrelation of basic nuclear structure models is a longstanding problem. The connection between the spherical shell model and the quadrupole collective model has been studied extensively, and symmetry considerations proved to be especially useful in this respect. A collective band was interpreted in the shell model language long ago [1] as a set of states (of the valence nucleons) with a specific SU(3) symmetry. Furthermore, the energies of these rotational states are obtained to a good approximation as eigenvalues of an SU(3) dynamically symmetric shell model Hamiltonian. On the other hand, the relation of the shell model and the cluster model is less well explored. The connection of the harmonic oscillator (i.e. SU(3)) bases of the two approaches is known [2], but it was established only for the unrealistic harmonic oscillator interactions. Here we investigate the question: can an SU(3) dynamically symmetric interaction provide a similar connection between the spherical shell model and the cluster model, like the one between the shell and collective models? In other words: whether or not the energies of the states of the cluster bands, defined by specific SU(3) symmetries, can be obtained from a shell model Hamiltonian (with SU(3) dynamical symmetry). We carried out calculations within the framework of the semimicroscopic algebraic cluster model [3,4] in order to find an answer to this question, which seems to be affirmative. In particular, the energies obtained from such a Hamiltonian for several bands of the (12C, 14C, 16O, 20Ne, 40Ca) + α systems turn out to be in good agreement with the experimental values. The present results show that the simple and transparent SU(3) connection between the spherical shell model and the cluster model is valid not only for the harmonic oscillator interactions, but for much more general (SU(3) dynamically symmetric) Hamiltonians as well, which result in realistic energy spectra.
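A schematic sketch of what an SU(3) dynamically symmetric Hamiltonian implies for band energies: E = hw·n + a·C2(λ,μ) + b·L(L+1), with the second-order Casimir eigenvalue C2(λ,μ) = λ² + μ² + λμ + 3(λ + μ). The functional form is the generic SU(3) dynamical-symmetry pattern; the parameter values below are illustrative, not those of the cited calculations.

```python
def casimir2(lmbda, mu):
    """Eigenvalue of the second-order SU(3) Casimir operator."""
    return lmbda ** 2 + mu ** 2 + lmbda * mu + 3 * (lmbda + mu)

def band_energy(n, lmbda, mu, L, hw=13.2, a=-0.05, b=0.02):
    """Schematic SU(3) dynamically symmetric Hamiltonian:
    E = hw*n + a*C2(lambda, mu) + b*L*(L+1).
    hw, a, b are illustrative fit parameters (assumptions), not the
    values used in the cited calculations."""
    return hw * n + a * casimir2(lmbda, mu) + b * L * (L + 1)

# A (lambda, mu) = (8, 0) ground-band irrep: the L(L+1) term generates
# a rotational band on top of the SU(3)-determined band-head energy.
for L in (0, 2, 4):
    print(L, round(band_energy(n=0, lmbda=8, mu=0, L=L), 2))
```

Because every term is a function of SU(3) and SO(3) labels only, the whole band structure follows analytically from the quantum numbers, which is what makes the dynamical-symmetry connection between the shell and cluster pictures so transparent.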
Detonation and fragmentation modeling for the description of large scale vapor explosions
International Nuclear Information System (INIS)
Buerger, M.; Carachalios, C.; Unger, H.
1985-01-01
The thermal detonation modeling of large-scale vapor explosions is shown to be indispensable for realistic safety evaluations. Steady-state as well as transient detonation models have been developed, including detailed descriptions of the dynamics as well as the fragmentation processes inside a detonation wave. Strong restrictions for large-scale vapor explosions are obtained from this modeling, and they indicate that the reactor pressure vessel would even withstand explosions with unrealistically high masses of corium involved. The modeling is supported by comparisons with a detonation experiment and - concerning its key part - hydrodynamic fragmentation experiments. (orig.) [de
Temporal structures in shell models
DEFF Research Database (Denmark)
Okkels, F.
2001-01-01
The intermittent dynamics of the turbulent Gledzer, Ohkitani, and Yamada shell-model is completely characterized by a single type of burstlike structure, which moves through the shells like a front. This temporal structure is described by the dynamics of the instantaneous configuration of the shell...
Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries
DEFF Research Database (Denmark)
Prunescu, Remus Mihail
with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors, capturing tank profiles and delays due to plug flows. This work publishes demonstration-scale real data for validation for the first time, showing that the model library is suitable...
Dynamic subgrid scale model of large eddy simulation of cross bundle flows
International Nuclear Information System (INIS)
Hassan, Y.A.; Barsamian, H.R.
1996-01-01
The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient. The model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (that is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization.
Investigation on the integral output power model of a large-scale wind farm
Institute of Scientific and Technical Information of China (English)
BAO Nengsheng; MA Xiuqian; NI Weidou
2007-01-01
The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multi-effects among the wind turbine units and certain assumptions, the incoming wind flow model of multi-units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed by using the practical measurement wind speed data. The characteristics of a large-scale wind farm are also discussed.
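A minimal sketch of an integral output model along these lines: each unit follows a simplified power curve, and downstream rows see wake-reduced wind speeds. The constant fractional wake deficit and all turbine parameters are assumptions for illustration, a crude stand-in for the multi-unit incoming-flow model of the paper.

```python
import numpy as np

def power_curve(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2.0):
    """Simplified turbine power curve (MW): cubic ramp between cut-in
    and rated speed, flat at rated power, zero outside [cut_in, cut_out)."""
    v = np.asarray(v, dtype=float)
    p = np.where((v >= cut_in) & (v < rated_v),
                 rated_p * ((v - cut_in) / (rated_v - cut_in)) ** 3, 0.0)
    return np.where((v >= rated_v) & (v < cut_out), rated_p, p)

def farm_output(v_free, n_rows, wake_deficit=0.08):
    """Integral farm output: each downstream row sees a wind speed
    reduced by a constant fractional wake deficit (an assumption,
    not the paper's multi-unit flow model)."""
    speeds = v_free * (1 - wake_deficit) ** np.arange(n_rows)
    return power_curve(speeds).sum()

# Hourly free-stream wind speed series (hypothetical measurements).
wind = np.array([5.0, 8.0, 11.0, 13.0, 6.5])
energy = sum(farm_output(v, n_rows=4) for v in wind)  # MWh over 5 h
print(round(energy, 2))
```

Summing the per-unit curve over wake-adjusted speeds, then over the wind speed time series, is the basic structure of any integral output estimate; a real model replaces the constant deficit with the measured multi-unit flow field.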
Avissar, Roni; Chen, Fei
1993-01-01
Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as Ẽ = 0.5⟨u′_i u′_i⟩, where u′_i represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes.
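The MKE definition can be made concrete: subtract the grid-box average from each velocity component and average half the squared perturbations. The sea-breeze-like toy field below is an invented example, not data from the study.

```python
import numpy as np

def mesoscale_ke(u, v, w):
    """Mean mesoscale kinetic energy per unit mass,
    E = 0.5 * <u_i' u_i'>, where the prime denotes the departure of a
    mesoscale field from its large-scale (grid-box) average and < >
    is the horizontal averaging operator over the grid box."""
    e = 0.0
    for comp in (u, v, w):
        comp = np.asarray(comp, dtype=float)
        pert = comp - comp.mean()        # mesoscale perturbation u_i'
        e += 0.5 * np.mean(pert ** 2)
    return e

# Toy sea-breeze-like field across one large-scale grid box: a standing
# horizontal circulation of 2 m/s amplitude riding on a 5 m/s mean wind.
x = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
u = 5.0 + 2.0 * np.sin(x)        # mean wind + mesoscale circulation
v = np.zeros_like(x)
w = 0.5 * np.cos(x)              # weak compensating vertical motion
print(round(mesoscale_ke(u, v, w), 3))
```

Note that the uniform 5 m/s mean wind contributes nothing: MKE measures only the organized sub-grid circulation, which is exactly why it can serve as the bridge between landscape discontinuities and mesoscale fluxes.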
Modeling and control of a large nuclear reactor. A three-time-scale approach
Energy Technology Data Exchange (ETDEWEB)
Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering
2013-07-01
This monograph, written by leading experts in the field, presents recent research on the modeling and control of a large nuclear reactor using a three-time-scale approach. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting the multi-time-scale property, with emphasis on three-time-scale systems.
Nonlinear Finite Element Analysis of Shells with Large Aspect Ratio
Chang, T. Y.; Sawamiphakdi, K.
1984-01-01
A higher order degenerated shell element with nine nodes was selected for large deformation and post-buckling analysis of thick or thin shells. Elastic-plastic material properties are also included. The post-buckling analysis algorithm is given. Using a square plate, it was demonstrated that the nine-node element does not exhibit shear locking even when its aspect ratio is increased to the order of 10^8. Two sample problems are given to illustrate the analysis capability of the shell element.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small-scale or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
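The l1 sparse-deconvolution model (though not the PDIPM solver itself) can be sketched with ISTA, a simpler iterative method for min 0.5||Hf - y||_2^2 + lam*||f||_1. The exponential impulse response, the spike locations and all numbers below are assumptions for a synthetic demonstration, not the paper's experimental setup.

```python
import numpy as np

def ista_deconvolve(H, y, lam, n_iter=500):
    """Sparse deconvolution: min 0.5*||H f - y||_2^2 + lam*||f||_1,
    solved with ISTA (a simpler method than the PDIPM proposed in the
    paper, but targeting the same l1 model)."""
    L = np.linalg.norm(H, 2) ** 2              # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = H.T @ (H @ f - y)                  # gradient of the data term
        z = f - g / L                          # gradient step
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return f

# Synthetic convolution system: assumed exponential impulse response.
n = 120
t = np.arange(n)
h = np.exp(-t / 5.0)
H = np.tril(h[np.abs(t[:, None] - t[None, :])])  # lower-triangular Toeplitz

f_true = np.zeros(n)
f_true[[20, 60]] = [5.0, 3.0]                    # two sparse impacts
rng = np.random.default_rng(0)
y = H @ f_true + 0.01 * rng.standard_normal(n)   # noisy response

f_hat = ista_deconvolve(H, y, lam=0.05)
print(int(np.argmax(f_hat)))  # index of the largest reconstructed impact
```

The l1 penalty drives most coefficients exactly to zero, so the reconstruction recovers the impact instants as isolated spikes rather than the smeared profile an l2 penalty would produce.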
Folding and unfolding of large-size shell construction for application in Earth orbit
Kondyurin, Alexey; Pestrenina, Irena; Pestrenin, Valery; Rusakov, Sergey
2016-07-01
Future exploration of space requires a technology of large modules for biological, technological, logistic and other applications in Earth orbits [1-3]. This report describes the possibility of using large-sized shell structures deployable in space. The structure is delivered to orbit in the spaceship container, with the shell folded for transportation. The shell material is either rigid plastic or multilayer prepreg comprising rigid reinforcements (such as reinforcing fibers). The unfolding process (bringing a construction to the unfolded state by loading the internal pressure) needs to be considered in the presence of both stretching and bending deformations. An analysis of the deployment conditions (the minimum internal pressure bringing a construction from the folded state to the unfolded state) of large laminated CFRP shell structures is formulated in this report. Solution of this mechanics of deformable solids (MDS) problem of the shell structure is based on the following assumptions: the shell is made of components whose median surface has a reamer; in the relaxed state of a separate structural element (not stressed and not deformed) its median surface coincides with its reamer (this assumption allows the relaxed state of the structure to be chosen correctly); structural elements are joined (sewn together) by a seam that does not resist rotation around the tangent to the seam line. Ways of folding large shell structures whose median surface has a reamer are suggested. Unfolding of cylindrical, conical (full and truncated cones), and large-size composite shells (cylinder-cones, cones-cones) is considered. These results show that the unfolding pressure of such large-size structures (0.01-0.2 atm.) is comparable to the deploying pressure of pneumatic parts (0.001-0.1 atm.) [3]. It would be possible to extend this approach to investigate the unfolding process of large-sized shells with a ruled median surface or non-developable surfaces. This research was
Royer, J.; Brandon, V.
2011-12-01
The large-scale deformation observed in the Indo-Australian plate seems to challenge tenets of plate tectonics: plate rigidity and narrow oceanic plate boundaries. Its distribution, along with kinematic data inversions, however suggests that the Indo-Australian plate can be viewed as a composite plate made of three rigid component plates - India, Capricorn, Australia - separated by wide and diffuse boundaries, either extensional or compressional. We tested this model using the SHELLS numerical code (Kong & Bird, 1995), where the Indo-Australian plate was meshed into 5281 spherical triangular finite elements. Model boundary conditions are defined only by the plate velocities of the rigid parts of the Indo-Australian plate relative to their neighboring plates. Different plate velocity models were tested. From these boundary conditions, and taking into account the age of the lithosphere, seafloor topography, and assumptions on the rheology of the oceanic lithosphere, SHELLS predicts strain rates within the plate. We also tested the role of fossil fracture zones as potential lithospheric weaknesses. In a first step, we considered different component plate pairs (India/Capricorn, Capricorn/Australia, India/Australia). Since the limits of their respective diffuse boundaries (i.e. the limits of the rigid component plates) are not known, we left the corresponding edge free. In a second step, we merged the previous meshes to consider the whole Indo-Australian plate. In this case, the velocities on the model boundaries are all fully defined and were set relative to the Capricorn plate. Our models predict deformation patterns very consistent with those observed. Pre-existing structures of the lithosphere play an important role in the intraplate deformation and its distribution. The Chagos Bank focuses the extensional deformation between the Indian and Capricorn plates. Reactivation of fossil fracture zones may accommodate a large part of the deformation both in extensional areas, off
Challenges of Modeling Flood Risk at Large Scales
Guin, J.; Simic, M.; Rowe, J.
2009-04-01
Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimation that forms the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing
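The model output the abstract centers on, an annual probability distribution of portfolio losses, can be sketched with a toy Monte Carlo event simulation. The Poisson event frequency and lognormal per-event severity below are placeholder assumptions, not outputs of the physically based Great Britain model.

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's multiplication algorithm for a Poisson sample."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def simulate_annual_losses(n_years, freq=1.2, seed=7):
    """Toy event-simulation loss model: Poisson number of flood events
    per year, lognormal per-event portfolio loss. The frequency and
    severity parameters are illustrative placeholders."""
    rng = random.Random(seed)
    years = []
    for _ in range(n_years):
        n_events = poisson_draw(rng, freq)
        years.append(sum(rng.lognormvariate(0.0, 1.0)
                         for _ in range(n_events)))
    return sorted(years)

losses = simulate_annual_losses(10_000)
aal = sum(losses) / len(losses)              # average annual loss
pml_200 = losses[int(0.995 * len(losses))]   # ~1-in-200-year annual loss
print(round(aal, 2), round(pml_200, 2))
```

Sorting the simulated years gives the annual aggregate loss distribution directly; metrics such as the average annual loss and high return-period losses, the quantities that drive risk pricing and transfer, are then simple functionals of that distribution.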
The alpha-particle and shell models of the nucleus
International Nuclear Information System (INIS)
Perring, J.K.; Skyrme, T.H.R.
1994-01-01
It is shown that it is possible to write down α-particle wave functions for the ground states of 8Be, 12C and 16O, which become, when antisymmetrized, identical with shell-model wave functions. The α-particle functions are used to obtain potentials which can then be used to derive wave functions and energies of excited states. Most of the low-lying states of 16O are obtained in this way, qualitative agreement with experiment being found. The shell structure of the 0+ level at 6.06 MeV is analyzed, and is found to consist largely of single-particle excitations. The lifetime for pair-production is calculated, and found to be comparable with the experimental value. The validity of the method is discussed, and comparison made with shell-model calculations. (author). 5 refs, 1 tab
Gasda, Sarah E.
2012-07-01
Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.
Extensions to a nonlinear finite-element axisymmetric shell model based on Reissner's shell theory
International Nuclear Information System (INIS)
Cook, W.A.
1981-01-01
Extensions to shell analysis not usually associated with shell theory are described in this paper. These extensions involve thick shells, nonlinear materials, a linear normal stress approximation, and a changing shell thickness. A finite element shell-of-revolution model has been developed to analyze nuclear material shipping containers under severe impact conditions. To establish the limits for this shell model, the basic assumptions used in its development were studied; these are listed in this paper. Several extensions were evident from the study of these limits: a thick shell, a plastic hinge, and a linear normal stress
Conventional shell model: some issues
International Nuclear Information System (INIS)
Vallieres, M.; Pan, X.W.; Feng, D.H.; Novoselsky, A.
1997-01-01
We discuss some important issues in shell-model calculations related to the effective interactions used in different regions of the periodic table; in particular the quality of different interactions is discussed, as well as the mass dependence of the interactions. Mention is made of the recently developed Drexel University shell-model (DUSM). (orig.)
International Nuclear Information System (INIS)
Furuhashi, Ichiro; Kasahara, Naoto
2002-01-01
Eigenvalues of hot-leg pipelines of a large-scale sodium reactor were analyzed with two types of finite element models. One is a beam element model, which is usual for pipe analyses. The other is a shell element model to evaluate particular modes in thin pipes with large diameters. Summary of analysis results: (1) A beam element model and a shell element model gave nearly the same first order natural frequency; a beam element model is available to get the first order vibration mode. (2) The maximum difference ratio of beam mode natural frequencies was 14% between a beam element model with no shear deformations and a shell element model. However, this difference becomes very small when shear deformations are considered in the beam element. (3) In the first order horizontal mode, the Y-piece acts like a pendulum, and the elbow acts like the hinge. The natural frequency is strongly affected by the bending and shear rigidities of the outer supporting pipe. (4) In the first order vertical mode, the vertical sections of the outer and inner pipes move in the axial-directional piston mode, the horizontal section of the inner pipe behaves like a cantilever, and the elbow acts like the hinge. The natural frequency is strongly affected by the axial rigidity of the outer supporting pipe. (5) Both effective masses and participation factors were small for particular shell modes. (author)
Design and optimization of the large span dry-coal-shed latticed shell in Liyuan of Henan province
Directory of Open Access Journals (Sweden)
Du Wenfeng
2017-01-01
Full Text Available The design and optimization of the large span dry-coal-shed latticed shell in Liyuan of Henan province were studied. On the basis of the structural scheme of a double-layer cylindrical reticulated shell, an optimization scheme using a folding double-layer cylindrical reticulated shell was proposed. Through the analysis of multiple calculation models, the optimal geometric parameters were obtained after discussing the influence of different slopes of folding lines and shell thickness on the structural bearing capacity and the amount of steel. The research results show that, for the same amount of steel, the ultimate bearing capacity of the double-layer folding cylindrical reticulated shell whose folding line slope is 9% and whose shell thickness is about 4.4 m can be increased by 27.3% compared with the original design scheme.
Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang
2013-01-01
Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...
The large-scale peculiar velocity field in flat models of the universe
International Nuclear Information System (INIS)
Vittorio, N.; Turner, M.S.
1986-10-01
The inflationary Universe scenario predicts a flat Universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Flat models are examined with two components of mass density, where one of the components is smoothly distributed, and the large-scale (≥10 h⁻¹ Mpc) peculiar velocity field for these models is considered. For the smooth component, relativistic particles, a relic cosmological term, and light strings are considered. At present the observational situation is unsettled; but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 61 refs
Energy Technology Data Exchange (ETDEWEB)
Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)
2015-02-26
This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.
Klos, P.; Menéndez, J.; Gazit, D.; Schwenk, A.
2013-01-01
We perform state-of-the-art large-scale shell-model calculations of the structure factors for elastic spin-dependent WIMP scattering off 129,131Xe, 127I, 73Ge, 19F, 23Na, 27Al, and 29Si. This comprehensive survey covers the non-zero-spin nuclei relevant to direct dark matter detection. We include a pedagogical presentation of the formalism necessary to describe elastic and inelastic WIMP-nucleus scattering. The valence spaces and nuclear interactions employed have been previously used in nucl...
Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin
Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.
2011-01-01
The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in
Proceedings of a symposium on the occasion of the 40th anniversary of the nuclear shell model
International Nuclear Information System (INIS)
Lee, T.S.H.; Wiringa, R.B.
1990-03-01
This report contains papers on the following topics: excitation of 1p-1h stretched states with the (p,n) reaction as a test of shell-model calculations; on the Z=64 shell closure and some high spin states of 149Gd and 159Ho; saturating interactions in 4He with density dependence; are short-range correlations visible in very large-basis shell-model calculations?; recent and future applications of the shell model in the continuum; shell model truncation schemes for rotational nuclei; the particle-hole interaction and high-spin states near A = 16; magnetic moment of the doubly closed shell +1 nucleon nucleus 41Sc(Iπ = 7/2⁻); the new magic nucleus 96Zr; comparing several boson mappings with the shell model; high spin band structures in 165Lu; optical potential with two-nucleon correlations; generalized valley approximation applied to a schematic model of the monopole excitation; pair approximation in the nuclear shell model; and many-particle, many-hole deformed states
International Nuclear Information System (INIS)
Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.
2013-01-01
We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
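The bias-filtering step described in this abstract can be sketched as a Fourier-space filter applied to a density field. The functional form b(k) = b0/(1 + k/k0)^α and the parameter values below are illustrative placeholders, not the fitted values from the paper:

```python
import numpy as np

def bias(k, b0=0.59, k0=0.19, alpha=0.56):
    """Parametric scale-dependent bias; parameter values are placeholders."""
    return b0 / (1.0 + k / k0) ** alpha

def reionization_redshift_field(delta, box_size, z_mean):
    """Filter a 3D overdensity field with b(k) to obtain z_re(x)."""
    n = delta.shape[0]
    kfreq = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)  # angular wavenumbers
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    k[0, 0, 0] = 1.0  # avoid k = 0 in the filter; mean mode barely matters here
    delta_z = np.fft.ifftn(bias(k) * np.fft.fftn(delta)).real
    return z_mean * (1.0 + delta_z)

rng = np.random.default_rng(1)
delta = rng.standard_normal((16, 16, 16)) * 0.1   # toy density field
z_re = reionization_redshift_field(delta, box_size=100.0, z_mean=8.0)
print(z_re.mean())   # close to z_mean for a near-mean-zero density field
```

The filter suppresses small-scale (high-k) fluctuations, so the derived reionization redshift field is smoother than the input density field, as expected for a bias that decreases with k.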
Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling
Her, Y. G.
2017-12-01
Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to be common in the near future as global-scale remotely sensed data are emerging, and computing resources have been advanced rapidly. There are several spatially distributed models available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM), which require excessive computing resources (implicit scheme) to manipulate large matrices or small simulation time intervals (explicit scheme) to maintain the stability of the solution, to describe two-dimensional overland processes. Others make unrealistic assumptions such as constant overland flow velocity to reduce the computational loads of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds in different landscapes and sizes from 3.5 km² to 2,800 km² at the spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs, including the maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges were discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved by employing advanced computing techniques and hydrological understandings, by using remotely sensed hydrological
Numerical Modeling of Large-Scale Rocky Coastline Evolution
Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.
2008-12-01
Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment
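The headland-erosion and bay-filling behaviour described in this abstract is, at its simplest, alongshore diffusion of the shoreline position. A minimal sketch of that diffusive smoothing, not the authors' model, with arbitrary coefficients:

```python
import numpy as np

def smooth_coastline(y, diffusivity, dx, dt, steps):
    """Explicit diffusion of shoreline position y(x): headlands erode and
    bays fill, mimicking wave-driven alongshore sediment transport."""
    y = y.copy()
    for _ in range(steps):
        # Second derivative on a periodic alongshore domain
        curvature = (np.roll(y, -1) - 2 * y + np.roll(y, 1)) / dx**2
        y += dt * diffusivity * curvature
    return y

x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
y0 = np.sin(3 * x) + 0.3 * np.sin(11 * x)       # irregular initial coast
y1 = smooth_coastline(y0, diffusivity=1.0, dx=x[1] - x[0], dt=1e-4, steps=2000)
print(y0.std(), y1.std())   # roughness decreases under diffusion
```

Note the explicit scheme is stable only when dt * D / dx² < 0.5; the values above satisfy that. A submarine canyon sink, as in the abstract, would be an additional loss term at one alongshore position.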
Noor, A. K.; Andersen, C. M.; Tanner, J. A.
1984-01-01
An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.
Large scale solar district heating. Evaluation, modelling and designing - Appendices
Energy Technology Data Exchange (ETDEWEB)
Heller, A.
2000-07-01
The appendices present the following: A) CAD drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)
Towards a 'standard model' of large scale structure formation
International Nuclear Information System (INIS)
Shafi, Q.
1994-01-01
We explore constraints on inflationary models employing data on large scale structure, mainly from COBE temperature anisotropies and IRAS-selected galaxy surveys. In models where the tensor contribution to the COBE signal is negligible, we find that the spectral index of density fluctuations n must exceed 0.7. Furthermore, the COBE signal cannot be dominated by the tensor component, implying n > 0.85 in such models. The data favor cold plus hot dark matter models with n equal or close to unity and Ω_HDM ∼ 0.2-0.35. Realistic grand unified theories, including supersymmetric versions, which produce inflation with these properties are presented. (author). 46 refs, 8 figs
Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin
DEFF Research Database (Denmark)
Finsen, F.; Milzow, Christian; Smith, R.
2014-01-01
Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches...... of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements...... improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet....
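The Nash-Sutcliffe model efficiency quoted above (0.77 to 0.83) can be computed as follows; the discharge values here are invented purely for illustration:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [120.0, 150.0, 310.0, 240.0, 180.0]   # hypothetical discharges (m^3/s)
sim = [110.0, 160.0, 290.0, 250.0, 170.0]   # hypothetical model output
print(round(nash_sutcliffe(obs, sim), 3))
```

In an updating scheme like the one described, the altimetry-based correction shifts the simulated series toward the observations, which raises this score.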
Divergence of perturbation theory in large scale structures
Pajer, Enrico; van der Woude, Drian
2018-05-01
We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.
Energy Technology Data Exchange (ETDEWEB)
1978-01-01
This paper notes the necessity of developing an international coal trade on a very large scale. The role of Shell in the coal industry is examined; the regions in which Shell companies are most active are Australia, Southern Africa, Indonesia; Europe and North America. Research is being carried out on marketing and transportation, especially via slurry pipelines; coal-oil emulsions; briquets; fluidized-bed combustion; recovery of coal from potential waste material; upgrading of low-rank coals; unconventional forms of mining; coal conversion (the Shell/Koppers high-pressure coal gasification process). Techniques for cleaning flue gas (the Shell Flue Gas Desulfurization process) are being examined.
Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme
Veljović, K.; Rajković, B.; Mesinger, F.
2009-04-01
Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification in that forecast spatial wind speed distribution is verified against analyses by calculating bias adjusted equitable threat scores and bias scores for wind speeds greater than chosen wind speed thresholds. In this way, focusing on a high wind speed value in the upper troposphere, verification of large scale features we suggest can be done in a manner that may be more physically meaningful than verifications via spectral decomposition that are a standard RCM verification method. The results we have at this point are somewhat
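The threshold-based verification described above is built from a standard 2 x 2 contingency table. A minimal sketch of the unadjusted scores (the bias-adjustment step used in the study is omitted here):

```python
def contingency(forecast, analysis, threshold):
    """Count hits, misses, false alarms, correct negatives at a threshold."""
    hits = misses = false_alarms = correct_negatives = 0
    for f, a in zip(forecast, analysis):
        f_event, a_event = f >= threshold, a >= threshold
        if f_event and a_event:
            hits += 1
        elif a_event:
            misses += 1
        elif f_event:
            false_alarms += 1
        else:
            correct_negatives += 1
    return hits, misses, false_alarms, correct_negatives

def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS: threat score corrected for hits expected by random chance."""
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

def bias_score(hits, misses, false_alarms):
    """Frequency bias: forecast event count over observed event count."""
    return (hits + false_alarms) / (hits + misses)

# Hypothetical gridpoint wind speeds (m/s), threshold 30 m/s
h, m, fa, cn = contingency([35, 10, 32, 5], [33, 31, 12, 4], 30)
print(equitable_threat_score(h, m, fa, cn), bias_score(h, m, fa))
```

Applying these scores to wind speeds above a high threshold, rather than to precipitation, is the novelty the abstract describes.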
Fixed J spectral distributions in large shell model spaces. Pt. 3
International Nuclear Information System (INIS)
Jacquemin, C.; Auger, G.; Quesne, C.
1982-01-01
A method is developed to exactly calculate the fixed-J quasiparticle centroid energies and partial widths. Some results obtained in the even-mass lead isotopes with various interactions are analysed. Fixed-J quasiparticle distributions are used to predict an upper limit for the deviations between the quasiparticle approximation and the shell model results for the low-energy levels. The influence of states with a high quasiparticle number in the low-energy region is seen to depend strongly upon the interaction. The importance of the dimensionalities and the internal widths in explaining the admixtures is stressed. (orig.)
Dynamo Scaling Laws for Uranus and Neptune: The Role of Convective Shell Thickness on Dipolarity
Stanley, Sabine; Yunsheng Tian, Bob
2017-10-01
Previous dynamo scaling law studies (Christensen and Aubert, 2006) have demonstrated that the morphology of a planet’s magnetic field is determined by the local Rossby number (Ro_l): a non-dimensional diagnostic variable that quantifies the ratio of inertial forces to Coriolis forces on the average length scale of the flow. Dynamos with Ro_l ≳ 0.1 produce multipolar magnetic fields, while those with smaller Ro_l produce dipole-dominated fields. Scaling studies have also determined the dependence of the local Rossby number on non-dimensional parameters governing the system - specifically the Ekman, Prandtl, magnetic Prandtl and flux-based Rayleigh numbers (Olson and Christensen, 2006). When these scaling laws are applied to the planets, it appears that Uranus and Neptune should have dipole-dominated fields, contrary to observations. However, those scaling laws were derived using the specific convective shell thickness of the Earth’s core. Here we investigate the role of convective shell thickness on dynamo scaling laws. We find that the local Rossby number depends exponentially on the convective shell thickness. Including this new dependence on convective shell thickness, we find that the dynamo scaling laws now predict that Uranus and Neptune reside deep in the multipolar regime, thereby resolving the previous contradiction with observations.
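The local Rossby number diagnostic used above is a simple ratio; a sketch with purely illustrative numbers, not actual planetary values:

```python
def local_rossby(u_rms, rotation_rate, length_scale):
    """Ro_l = U / (Omega * l): ratio of inertial to Coriolis forces
    at the average length scale l of the flow."""
    return u_rms / (rotation_rate * length_scale)

omega = 1.0e-4   # planetary rotation rate (rad/s), hypothetical
u = 5.0e-4       # rms convective velocity (m/s), hypothetical
l = 1.0e5        # dominant flow length scale (m), hypothetical

ro = local_rossby(u, omega, l)
# Ro_l ~ 0.1 marks the dipolar/multipolar transition discussed above
regime = "multipolar" if ro > 0.1 else "dipole-dominated"
print(ro, regime)
```

Faster flows or smaller flow scales raise Ro_l and push a dynamo toward the multipolar regime, which is the direction the shell-thickness dependence acts for Uranus and Neptune.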
Trends in large-scale testing of reactor structures
International Nuclear Information System (INIS)
Blejwas, T.E.
2003-01-01
Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size, e.g. in the number of degrees of freedom, the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)
Protein homology model refinement by large-scale energy optimization.
Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David
2018-03-20
Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
Deformed shell model studies of spectroscopic properties of Zn and ...
Indian Academy of Sciences (India)
2014-04-05
Apr 5, 2014 ... April 2014 physics pp. 757-767. Deformed shell model studies of ... experiments without isotopic enrichment, thereby reducing the cost considerably. By taking a large mass of the sample because of its low cost, one can ...
Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge
Park, Heon-Joon; Lee, Changyeol
2017-04-01
Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, the gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies, and only a few studies considered larger gravitational accelerations by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down and accelerated deformation driven by density differences, such as salt diapirs, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows a large scale-model surface area of up to 70 by 70 cm under a maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of a back-arc basin. Acknowledgement This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
DEFF Research Database (Denmark)
Lavancier, Frédéric; Møller, Jesper
We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...
Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities
Directory of Open Access Journals (Sweden)
Danwen Bao
2017-01-01
Full Text Available This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimizing conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter, but the road saturation distribution is more uneven; thus, the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the optimization effect of the bilevel planning model is more effective compared to the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for the traffic-induced evacuation decision making of large-scale activities.
Large-scale inverse model analyses employing fast randomized data reduction
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
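The "sketching" idea, compressing many observations through a random matrix before solving, can be illustrated on an ordinary least-squares problem. This is a toy stand-in under invented dimensions, not the RGA algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-determined toy inverse problem: m observations, n parameters
m, n, k = 5000, 20, 200          # k << m is the sketched dimension
A = rng.standard_normal((m, n))  # forward operator (toy)
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy observations

# Gaussian sketching matrix compresses the observation space from m to k
S = rng.standard_normal((k, m)) / np.sqrt(k)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# Compare against the full (unsketched) solve
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_full))   # small: sketch preserves the solution
```

The sketched solve works with a k x n system instead of m x n, so cost scales with the retained information (k) rather than the raw number of observations (m), mirroring the scaling claim in the abstract.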
Open source integrated modeling environment Delta Shell
Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.
2012-04-01
In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps solve problems that are difficult to address with a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remains a challenging task. The integrated modelling environment Delta Shell simplifies this task. The software components of Delta Shell are easy to reuse separately from each other as well as within an integrated environment that can run in command-line or graphical user interface mode. Most components of Delta Shell are developed using the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models from the end-user and developer perspectives. The first example shows the coupling of a rainfall-runoff, a river flow and a run-time control model. The second example shows how a coastal morphological database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is also available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.
Including investment risk in large-scale power market models
DEFF Research Database (Denmark)
Lemming, Jørgen Kjærgaard; Meibom, P.
2003-01-01
Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...... can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...
Understanding nuclei in the upper sd - shell
Energy Technology Data Exchange (ETDEWEB)
Sarkar, M. Saha; Bisoi, Abhijit; Ray, Sudatta [Nuclear Physics Division, Saha Institute of Nuclear Physics, Kolkata 700064 (India); Kshetri, Ritesh [Nuclear Physics Division, Saha Institute of Nuclear Physics, Kolkata 700064, India and Sidho-Kanho-Birsha University, Purulia - 723101 (India); Sarkar, S. [Indian Institute of Engineering Science and Technology, Shibpur, Howrah - 711103 (India)
2014-08-14
Nuclei in the upper sd shell usually exhibit characteristics of spherical single-particle excitations. In recent years, the employment of sophisticated gamma-spectroscopy techniques has led to the observation of high-spin states in several nuclei near A ≃ 40. In a few of them, multiparticle-multihole rotational states coexist with states of single-particle nature. We have studied a few nuclei in this mass region experimentally, using various campaigns of the Indian National Gamma Array setup. We have compared and combined our empirical observations with large-scale shell model results to interpret the structure of these nuclei. Indications of the population of states with large deformation have been found in our data. This gives us an opportunity to investigate the interplay of single-particle and collective degrees of freedom in this mass region.
Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System
He, Qing; Li, Hong
The belt conveyor is one of the most important devices for transporting bulk-solid material over long distances. Dynamic analysis is the key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible. It is very important to study dynamic properties in order to improve efficiency and productivity and to guarantee safe, reliable and stable running of the conveyor. The dynamic research on, and applications of, large-scale belt conveyors are discussed. The main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work should focus on dynamic analysis, modeling and simulation of the main components and of the whole system, and on nonlinear modeling, simulation and vibration analysis of large-scale conveyor systems.
THE LAST MINUTES OF OXYGEN SHELL BURNING IN A MASSIVE STAR
Energy Technology Data Exchange (ETDEWEB)
Müller, Bernhard [Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, Belfast, BT7 1NN (United Kingdom); Viallet, Maxime; Janka, Hans-Thomas [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany); Heger, Alexander, E-mail: b.mueller@qub.ac.uk [Monash Centre for Astrophysics, School of Physics and Astronomy, Monash University, Victoria 3800 (Australia)
2016-12-10
We present the first 4π three-dimensional (3D) simulation of the last minutes of oxygen shell burning in an 18 M⊙ supernova progenitor up to the onset of core collapse. A moving inner boundary is used to accurately model the contraction of the silicon and iron core according to a one-dimensional stellar evolution model with a self-consistent treatment of core deleptonization and nuclear quasi-equilibrium. The simulation covers the full solid angle to allow the emergence of large-scale convective modes. Due to core contraction and the concomitant acceleration of nuclear burning, the convective Mach number increases to ∼0.1 at collapse, and an ℓ = 2 mode emerges shortly before the end of the simulation. Aside from a growth of the oxygen shell from 0.51 M⊙ to 0.56 M⊙ due to entrainment from the carbon shell, the convective flow is reasonably well described by mixing-length theory, and the dominant scales are compatible with estimates from linear stability analysis. We deduce that artificial changes in the physics, such as accelerated core contraction, can have precarious consequences for the state of convection at collapse. We argue that scaling laws for the convective velocities and eddy sizes furnish good estimates for the state of shell convection at collapse and develop a simple analytic theory for the impact of convective seed perturbations on shock revival in the ensuing supernova. We predict a reduction of the critical luminosity for explosion by 12%–24% due to seed asphericities for our 3D progenitor model relative to the case without large seed perturbations.
Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.
2015-05-01
Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential...
The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit
Artem Ganiyev; Jan Vitasek
2010-01-01
This article describes a method for evaluating the faultless function of large-scale integration circuits (LSI) and very large-scale integration circuits (VLSI). The article presents a comparative analysis of the factors that determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless function of LSI and VLSI. The main part describes a proposed algorithm and a program for the analysis of fault rates in LSI and VLSI circuits.
Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets
Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.
2014-01-01
...of dataset, and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov...
Kaplan, David; Lee, Chansoon
2018-01-01
This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
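The model-averaging scheme summarized in this abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical toy example, not the authors' code: it fits three nested linear submodels by ordinary least squares, approximates each posterior model probability (PMP) with the standard BIC weight exp(-ΔBIC/2) under equal prior model probabilities, and forms the PMP-weighted prediction. The data, submodel set, and BIC approximation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 3))
# true model uses only the first two predictors; X[:, 2] is irrelevant
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_normal(n)

def fit(X_sub, y):
    """OLS fit; returns in-sample predictions and the BIC of the submodel."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ beta
    rss = np.sum((y - pred) ** 2)
    k = A.shape[1]
    bic = len(y) * np.log(rss / len(y)) + k * np.log(len(y))
    return pred, bic

# candidate submodels: nested subsets of predictors
models = [(0,), (0, 1), (0, 1, 2)]
preds, bics = zip(*(fit(X[:, list(m)], y) for m in models))
bics = np.array(bics)

# posterior model probabilities via the BIC approximation,
# assuming equal prior probability for each submodel
w = np.exp(-0.5 * (bics - bics.min()))
pmp = w / w.sum()

# model-averaged prediction: PMP-weighted combination of submodel predictions
y_bma = sum(p * pred for p, pred in zip(pmp, preds))
print(pmp.round(3))
```

Under this approximation the PMP concentrates on the submodel that best trades fit against complexity, and the averaged prediction down-weights both the underfit and the overfit candidates.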
Extensions to a nonlinear finite element axisymmetric shell model based on Reissner's shell theory
International Nuclear Information System (INIS)
Cook, W.A.
1981-01-01
A finite element shell-of-revolution model has been developed to analyze shipping containers under severe impact conditions. To establish the limits for this shell model, I studied the basic assumptions used in its development; these are listed in this paper. Several extensions were evident from the study of these limits: a thick shell, a plastic hinge, and a linear normal stress. (orig./HP)
Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin
Directory of Open Access Journals (Sweden)
E. H. Sutanudjaja
2011-09-01
The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, which contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are performed separately). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reproduce the observed groundwater head time series reasonably well. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. The current sensitivity analysis also ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including in data-poor environments and at the global scale.
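The offline (one-way) coupling structure this abstract describes can be sketched schematically. The toy below is purely illustrative and assumes nothing about the actual land surface or MODFLOW codes: a bucket-type "land surface" model is run first over all time steps, and its stored recharge series then forces a separate linear-reservoir "groundwater" model with no feedback, which is exactly the simplification the authors flag as a limitation. All parameter names and values are invented.

```python
import numpy as np

def land_surface_step(precip, soil, cap=50.0, frac=0.1):
    """Toy bucket model: soil storage fills with precipitation; overflow
    plus a fixed fraction of storage becomes groundwater recharge."""
    soil = soil + precip
    overflow = max(soil - cap, 0.0)
    soil = min(soil, cap)
    recharge = overflow + frac * soil
    soil -= frac * soil
    return soil, recharge

def groundwater_step(head, recharge, sy=0.2, decay=0.05):
    """Toy linear-reservoir aquifer: head rises with recharge scaled by
    specific yield and declines through baseflow drainage."""
    return head + recharge / sy - decay * head

rng = np.random.default_rng(2)
precip = rng.exponential(2.0, size=120)  # 120 "months" of synthetic forcing

# Stage 1: run the land surface model alone, storing its recharge output
soil, recharge_series = 10.0, []
for p in precip:
    soil, r = land_surface_step(p, soil)
    recharge_series.append(r)

# Stage 2: force the groundwater model with the stored recharge (offline,
# i.e. no feedback from heads to soil moisture or surface water levels)
head, heads = 0.0, []
for r in recharge_series:
    head = groundwater_step(head, r)
    heads.append(head)

print(len(heads), round(heads[-1], 2))
```

An online coupling would instead interleave the two steps inside one time loop and pass the head back into the soil moisture update; the two-stage layout above is what makes the feedbacks impossible to represent.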
Monte Carlo evaluation of path integral for the nuclear shell model
International Nuclear Information System (INIS)
Lang, G.H.
1993-01-01
The authors present a path-integral formulation of the nuclear shell model using auxiliary fields; the path integral is evaluated by Monte Carlo methods. The method scales favorably with valence-nucleon number and shell-model basis: full-basis calculations are demonstrated up to the rare-earth region, which cannot be treated by other methods. Observables are calculated for the ground state and in a thermal ensemble. Dynamical correlations are obtained, from which strength functions are extracted through the Maximum Entropy method. Examples in the s-d shell, where exact diagonalization can be carried out, compare well with exact results. The "sign problem" generic to quantum Monte Carlo calculations is found to be absent for attractive pairing-plus-multipole interactions. The formulation is general for interacting fermion systems and is well suited for parallel computation. The authors have implemented it on the Intel Touchstone Delta System, achieving better than 99% parallelization.
Shell model calculations for exotic nuclei
International Nuclear Information System (INIS)
Brown, B.A.; Wildenthal, B.H.
1991-01-01
A review of the shell-model approach to understanding the properties of light exotic nuclei is given. Discussed are: binding energies in the p and p-sd model spaces and in the sd and sd-pf model spaces; cross-shell excitations around 32Mg, including weak-coupling aspects and mechanisms for lowering the nℏω excitations; beta-decay properties of neutron-rich nuclei in the sd, p-sd and sd-pf model spaces and of proton-rich nuclei in the sd model space; and Coulomb break-up cross sections. (G.P.) 76 refs.; 12 figs
Development of a transverse mixing model for large scale impulsion phenomenon in tight lattice
International Nuclear Information System (INIS)
Liu, Xiaojing; Ren, Shuo; Cheng, Xu
2017-01-01
Highlights: • Experimental data of Krauss are used to validate the feasibility of the CFD simulation method. • CFD simulations are performed to study the large-scale impulsion phenomenon for a tight-lattice bundle. • A mixing model for the large-scale impulsion phenomenon is proposed based on fits to the CFD results. • The newly developed mixing model has been added to the subchannel code. - Abstract: The tight lattice is widely adopted in innovative reactor fuel bundle designs since it can increase the conversion ratio and improve the heat transfer between fuel bundles and coolant. It has been noticed that a large-scale impulsion of the cross-velocity exists in the gap region, which plays an important role in transverse mixing flow and heat transfer. Although many experiments and numerical simulations have been carried out to study this impulsion of the velocity, a model describing the wavelength, amplitude and frequency of the mixing coefficient is still missing. This work uses the CFD method to simulate the experiment of Krauss and compares the experimental data with the simulation results in order to demonstrate the feasibility of the simulation method and turbulence model. Then, based on this verified method and model, several simulations are performed with different Reynolds numbers and different pitch-to-diameter ratios. By fitting the CFD results, a mixing model for the large-scale impulsion phenomenon is proposed and adopted in the current subchannel code. When the new mixing model is applied to fuel assembly analysis by subchannel calculation, it can be noticed that it reduces the hot channel factor and contributes to a uniform distribution of the outlet temperature.
Noorman, Henk
2011-08-01
For industrial bioreactor design, operation, control and optimization, the scale-down approach is often advocated to efficiently generate data on a small scale and effectively apply suggested improvements to the industrial scale. In all cases it is important to ensure that the scale-down conditions are representative of the real large-scale bioprocess. Progress is hampered by limited detailed and local information from large-scale bioprocesses. Complementary to real fermentation studies, physical aspects of model fluids such as air-water in large bioreactors provide useful information with limited effort and cost. Still, in industrial practice, investments of time, capital and resources often prohibit systematic work, although, in the end, savings obtained in this way are trivial compared to the expenses that result from real process disturbances, batch failures, and non-flyers with loss of business opportunity. Here we try to highlight what can be learned from real large-scale bioprocesses in combination with model fluid studies, and to provide suitable computational tools to overcome data restrictions. Focus is on a specific well-documented case for a 30-m(3) bioreactor. Areas for further research from an industrial perspective are also indicated. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gravitational waves during inflation from a 5D large-scale repulsive gravity model
International Nuclear Information System (INIS)
Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio
2012-01-01
We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves, generated during the early inflationary stage, on the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We found that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, which is a natural length scale of the model, that indicates when gravity becomes repulsive in nature.
Gravitational waves during inflation from a 5D large-scale repulsive gravity model
Energy Technology Data Exchange (ETDEWEB)
Reyes, Luz M., E-mail: luzmarinareyes@gmail.com [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Moreno, Claudia, E-mail: claudia.moreno@cucei.udg.mx [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Madriz Aguilar, Jose Edgar, E-mail: edgar.madriz@red.cucei.udg.mx [Departamento de Matematicas, Centro Universitario de Ciencias Exactas e ingenierias (CUCEI), Universidad de Guadalajara (UdG), Av. Revolucion 1500, S.R. 44430, Guadalajara, Jalisco (Mexico); Bellini, Mauricio, E-mail: mbellini@mdp.edu.ar [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata (UNMdP), Funes 3350, C.P. 7600, Mar del Plata (Argentina); Instituto de Investigaciones Fisicas de Mar del Plata (IFIMAR) - Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina)
2012-10-22
We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves, generated during the early inflationary stage, on the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We found that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, which is a natural length scale of the model, that indicates when gravity becomes repulsive in nature.
International Nuclear Information System (INIS)
B Bello; M Junker
2006-01-01
Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large-scale hydrogen production plants will be needed. In this context, the development of low-cost large-scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and centre of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large-scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state-of-the-art survey of the electrolysis modules currently available was made. A review of the large-scale electrolysis plants that have been installed around the world was also carried out, and the main projects related to large-scale electrolysis were listed. The economics of large-scale electrolysers are discussed, and the influence of energy prices on the cost of hydrogen production by large-scale electrolysis was evaluated. (authors)
Corrections to the neutrinoless double-β-decay operator in the shell model
Engel, Jonathan; Hagen, Gaute
2009-06-01
We use diagrammatic perturbation theory to construct an effective shell-model operator for the neutrinoless double-β decay of 82Se. The starting point is the same Bonn-C nucleon-nucleon interaction that is used to generate the Hamiltonian for recent shell-model calculations of double-β decay. After first summing high-energy ladder diagrams that account for short-range correlations and then adding diagrams of low order in the G matrix to account for longer-range correlations, we fold the two-body matrix elements of the resulting effective operator with transition densities from the recent shell-model calculation to obtain the overall nuclear matrix element that governs the decay. Although the high-energy ladder diagrams suppress this matrix element at very short distances as expected, they enhance it at distances between one and two fermis, so that their overall effect is small. The corrections due to longer-range physics are large, but cancel one another so that the fully corrected matrix element is comparable to that produced by the bare operator. This cancellation between large and physically distinct low-order terms indicates the importance of a reliable nonperturbative calculation.
Shell model calculations for stoichiometric Na β-alumina
International Nuclear Information System (INIS)
Wang, J.C.
1985-01-01
Walker and Catlow recently reported the results of their shell model calculations for the structure and transport of Na β-alumina (Naβ). The main computer programs used by Walker and Catlow for their calculations are PLUTO and HADES III. The latter, a recent version of HADES II written for cubic crystals, is believed to be applicable to defects in crystals of both cubic and hexagonal symmetry. PLUTO is usually used in calculating properties of perfect crystals before defects are introduced into the structure. Walker and Catlow claim that, in some respects, their models are superior to those of Wang et al. Yet their results are quite different from those observed experimentally. In this work these differences are investigated by using a computer program designed to calculate lattice energies for stoichiometric Naβ using the same shell model parameters adopted by Walker and Catlow. The core and shell positions of all ions, as well as the lattice parameters, were fully relaxed. The calculated energy difference between aBR and BR sites (0.33 eV) is about twice as large as that reported by Walker and Catlow. The present results also show that the relaxed oxygen-ion positions next to the conduction plane are in this case displaced from their reported observed sites. When the core-shell spring constant of the oxygen ion was adjusted to minimize these displacements, the above-mentioned energy difference increased to about 0.56 eV. These results cast doubt on the fluid conduction-plane structure suggested by Walker and Catlow and on the defect structure and activation energy obtained from their calculations.
Shell model description of Ge isotopes
International Nuclear Information System (INIS)
Hirsch, J G; Srivastava, P C
2012-01-01
A shell model study of the low-energy region of the spectra of Ge isotopes for 38 ≤ N ≤ 50 is presented, analyzing excitation energies, quadrupole moments, B(E2) values and occupation numbers. The theoretical results have been compared with the available experimental data. The shell model calculations have been performed employing three different effective interactions and valence spaces. We have used two effective shell model interactions, JUN45 and jj44b, for the valence space f5/2 p g9/2 without truncation. To include the proton subshell f7/2 in the valence space we have employed the fpg effective interaction due to Sorlin et al., with 48Ca as a core and a truncation in the number of excited particles.
Isospin invariant boson models for fp-shell nuclei
International Nuclear Information System (INIS)
Van Isacker, P.
1994-01-01
Isospin invariant boson models, IBM-3 and IBM-4, applicable to nuclei with neutrons and protons in the same valence shell, are reviewed. Some basic results related to these models are discussed: the mapping onto the shell model, the relation to Wigner's supermultiplet scheme, the boson-number and isospin dependence of parameters, etc. These results are examined for simple single-j shell situations (e.g. f7/2) and their extension to the fp shell is investigated. Other extensions discussed here concern the treatment of odd-mass nuclei and the classification of particle-hole excitations in light nuclei. The possibility of a pseudo-SU(4) supermultiplet scheme in fp-shell nuclei is discussed. (author) 4 figs., 3 tabs., 23 refs
Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David
2015-04-01
In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, and provide future scenarios of water resources. With the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent the trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach...
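The wavelet multiresolution decomposition underlying such an ESD approach can be sketched with a simple additive, non-decimated Haar-type variant. This is an illustrative stand-in, not the authors' method: the synthetic "monthly" series, the moving-average filter, and the number of levels are all invented, but the key property is real — the signal splits exactly into scale-by-scale detail components plus a smooth, each of which can then be related to its own large-scale predictor.

```python
import numpy as np

def haar_mra(x, levels=3):
    """Additive multiresolution decomposition of a 1-D series.

    Each smooth s_j is a moving average over windows of 2**j points
    (a simple non-decimated Haar-type filter); the details d_j capture
    the variability between successive scales, so x = s_J + sum_j d_j.
    """
    smooth = x.astype(float)
    details = []
    for j in range(1, levels + 1):
        w = 2 ** j
        kernel = np.ones(w) / w
        # same-length smoothing with edge padding keeps the series aligned
        padded = np.pad(smooth, (w // 2, w - w // 2 - 1), mode="edge")
        new_smooth = np.convolve(padded, kernel, mode="valid")
        details.append(smooth - new_smooth)
        smooth = new_smooth
    return details, smooth

# toy "monthly" series: trend + annual cycle + noise, 20 years long
rng = np.random.default_rng(0)
t = np.arange(240)
x = 0.01 * t + np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(240)

details, smooth = haar_mra(x, levels=4)
recon = smooth + sum(details)
print(np.allclose(recon, x))
```

Because each detail is the difference of successive smooths, the sum telescopes and the reconstruction is exact; in an ESD setting one would regress each component (rather than the raw series) on the corresponding component of the large-scale predictor.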
A large-scale multi-species spatial depletion model for overwintering waterfowl
Baveco, J.M.; Kuipers, H.; Nolet, B.A.
2011-01-01
In this paper, we develop a model to evaluate the capacity of accommodation areas for overwintering waterfowl, at a large spatial scale. Each day geese are distributed over roosting sites. Based on the energy minimization principle, the birds daily decide which surrounding fields to exploit within
Modeling of microencapsulated polymer shell solidification
International Nuclear Information System (INIS)
Boone, T.; Cheung, L.; Nelson, D.; Soane, D.; Wilemski, G.; Cook, R.
1995-01-01
A finite element transport model has been developed and implemented to complement experimental efforts to improve the quality of ICF target shells produced via controlled-mass microencapsulation. The model provides an efficient means to explore the effect of processing variables on the dynamics of shell dimensions, concentricity, and phase behavior. Comparisons with experiments showed that the model successfully predicts the evolution of wall thinning and core/wall density differences. The model was used to efficiently explore and identify initial wall compositions and processing temperatures which resulted in concentricity improvements from 65 to 99%. The evolution of trace amounts of water entering the shell wall was also tracked in the simulations. Comparisons with phase-envelope estimates from modified UNIFAC calculations suggest that the water content trajectory approaches the two-phase region where vacuole formation via microphase separation may occur.
Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations
Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara
2018-05-01
Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
The creep analysis of shell structures using generalised models
International Nuclear Information System (INIS)
Boyle, J.T.; Spence, J.
1981-01-01
In this paper a new, more complete estimate of the accuracy of the stationary creep model is given for the general case through the evaluation of exact and approximate energy surfaces. In addition, the stationary model is extended to include more general non-stationary (combined elastic-creep) behaviour and to include the possibility of material deterioration through damage. The resulting models are then compared to existing exact solutions for several shell structures, e.g. a thin pressurised cylinder, a curved pipe in bending, and an S-bellows under axial extension with large deflections. In each case very good agreement is obtained. Although similar computing effort is required, so that the same solution techniques can be utilised, the calculation times are shown to be significantly reduced using the generalised approach. In conclusion, it has been demonstrated that a new simple mechanical model of a thin shell in creep, with or without material deterioration, can be constructed; the model is assessed in detail and successfully compared to existing solutions. (orig./HP)
Neggers, Roel
2016-04-01
Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary-layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary-layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach
Energy Technology Data Exchange (ETDEWEB)
Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics
1998-12-31
In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)
Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow
Directory of Open Access Journals (Sweden)
Sam Ali Al
2015-01-01
Full Text Available The performance of the Sub-Grid Scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained by using Large-Eddy Simulations at different mesh resolutions are compared with Direct Numerical Simulation data. The effectiveness of modeling the wall stresses by using a local log-law is then tested on a relatively coarse grid. The results exhibit good agreement between highly resolved Large-Eddy Simulations and Direct Numerical Simulation data regardless of the Sub-Grid Scale model. However, the agreement is less satisfactory on the relatively coarse grid without using any wall model, and the differences between Sub-Grid Scale models are distinguishable. Using the local wall model recovered the basic flow topology and significantly reduced the differences between the coarse-mesh Large-Eddy Simulations and Direct Numerical Simulation data. The results show that the ability of the local wall model to predict the separation zone depends strongly on the way it is implemented.
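A local log-law wall model closes the wall shear stress from the resolved velocity at the first off-wall grid point, by solving the log-law for the friction velocity. A minimal sketch, assuming the usual constants κ ≈ 0.41 and B ≈ 5.2 and illustrative flow values (not the paper's configuration):

```python
import math

def wall_shear_velocity(u, y, nu, kappa=0.41, B=5.2, iters=50):
    """Solve u/u_tau = (1/kappa) * ln(y * u_tau / nu) + B for u_tau
    by fixed-point iteration -- the essence of a local log-law wall model.
    u  : resolved velocity at the first off-wall point [m/s]
    y  : wall distance of that point [m]
    nu : kinematic viscosity [m^2/s]"""
    u_tau = math.sqrt(nu * u / y)          # initial guess from the linear law
    for _ in range(iters):
        u_tau = u / ((1.0 / kappa) * math.log(y * u_tau / nu) + B)
    return u_tau

# Example: first LES grid point at y = 1 mm, u = 10 m/s, air-like viscosity
u_tau = wall_shear_velocity(10.0, 1e-3, 1.5e-5)
tau_w = 1.2 * u_tau**2                     # wall stress with rho = 1.2 kg/m^3
```

The fixed-point iteration converges quickly here because the log-law is only weakly sensitive to `u_tau` inside the logarithm.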
Final Report Fermionic Symmetries and Self consistent Shell Model
International Nuclear Information System (INIS)
Zamick, Larry
2008-01-01
In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to a large extent to explain them. The importance of a self-consistent shell model was emphasized.
Wu, Xingfu; Taylor, Valerie
2013-01-01
In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
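The additive structure of such a performance model (compute time plus memory-bandwidth-contention time plus communication time) can be sketched as follows. All rates and counts below are invented placeholders, not the paper's measured STREAM/IMB values or its GTC validation setup:

```python
def predict_runtime(flops, flop_rate,
                    mem_bytes, sustained_bw,
                    n_msgs, latency, msg_bytes, link_bw):
    """Toy additive performance model: compute + memory-contention + MPI time.
    sustained_bw stands in for a measured per-core STREAM bandwidth under
    contention; latency/link_bw stand in for MPI benchmark numbers."""
    t_comp = flops / flop_rate
    t_mem = mem_bytes / sustained_bw
    t_comm = n_msgs * (latency + msg_bytes / link_bw)
    return t_comp + t_mem + t_comm

# Per-core workload for a weak-scaling run (illustrative numbers only)
t = predict_runtime(flops=5e9, flop_rate=2e9,          # 5 GFLOP at 2 GFLOP/s
                    mem_bytes=4e9, sustained_bw=2.5e9,  # 4 GB at 2.5 GB/s
                    n_msgs=1000, latency=5e-6, msg_bytes=8e3, link_bw=1e9)
```

Under weak scaling the per-core terms stay fixed while the communication term typically grows with core count, which is where such a model earns its keep.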
Many-body forces in nuclear shell-model
International Nuclear Information System (INIS)
Rath, P.K.
1985-01-01
In the microscopic derivation of the effective Hamiltonian for the nuclear shell model, many-body forces between the valence nucleons occur. These many-body forces can be divided into 'real' many-body forces, which can be related to mesonic and internal degrees of freedom of the nucleons, and 'effective' many-body forces, which arise from the confinement of the nucleonic Hilbert space to the finite-dimensional shell-model space. In the present thesis the influence of such three-body forces on the spectra of sd-shell nuclei is studied. For this purpose the two common techniques for shell-model calculations (the Oak Ridge-Rochester and Glasgow representations) are extended in such a way that a general three-body term in the Hamiltonian can be taken into account. The studies show that the repulsive contributions of the considered three-nucleon forces become more important with increasing number of valence nucleons. This qualitatively explains the particle-number dependence of empirical two-nucleon forces. A special kind of effective many-body force occurs in the folded-diagram expansion of the energy-dependent effective Hamiltonian for the shell model. It is shown that the contributions of the folded diagrams with three nucleons are just as important as those with two nucleons. One may therefore suspect that the folded-diagram expansion contains many-particle terms of arbitrary particle number. The present studies show, however, that four-nucleon effects are negligible, so that the folded-diagram expansion can be confined to two- and three-particle terms. In shell-model calculations which extend over several major shells the influence of the spurious center-of-mass motion must be taken into account. A procedure is discussed by which these spurious degrees of freedom can be exactly separated. (orig.) [de
Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.
2016-09-01
Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here we examine, within the column quasi-geostrophic framework, the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.
Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi
2015-01-01
Three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component of the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
Large-scale model-based assessment of deer-vehicle collision risk.
Directory of Open Access Journals (Sweden)
Torsten Hothorn
Full Text Available Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and, as browsers of palatable trees, have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high effort and cost associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated with browsing intensity and with harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining
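Collision counts of this kind are typically modeled with log-linear (Poisson-type) regressions. Below is a deliberately minimal stand-in, a plain Poisson GLM fitted by iteratively reweighted least squares on simulated data; the paper's actual model is far richer (nonlinear covariate effects, spatial heterogeneity):

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Fit log(E[y]) = X @ beta by iteratively reweighted least squares,
    the standard algorithm for a Poisson GLM with log link."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        W = mu                               # Poisson variance equals the mean
        z = X @ beta + (y - mu) / mu         # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])  # intercept + a covariate
y = rng.poisson(np.exp(1.0 + 0.5 * X[:, 1]))               # simulated counts
beta = poisson_irls(X, y)                                  # roughly (1.0, 0.5)
```

On simulated data the fit recovers the generating coefficients up to sampling noise, which is a useful correctness check before adding spatial structure.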
TWO-DIMENSIONAL APPROXIMATION OF EIGENVALUE PROBLEMS IN SHELL THEORY: FLEXURAL SHELLS
Institute of Scientific and Technical Information of China (English)
(anonymous)
2000-01-01
The eigenvalue problem for a thin linearly elastic shell, of thickness 2ε, clamped along its lateral surface is considered. Under the geometric assumption on the middle surface of the shell that the space of inextensional displacements is non-trivial, the authors obtain, as ε→0, the eigenvalue problem for the two-dimensional "flexural shell" model if the dimension of the space is infinite. If the space is finite-dimensional, the limits of the eigenvalues could belong to the spectra of both flexural and membrane shells. The method consists of rescaling the variables and studying the problem over a fixed domain. The principal difficulty lies in obtaining suitable a priori estimates for the scaled eigenvalues.
Probes of large-scale structure in the Universe
International Nuclear Information System (INIS)
Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.
1988-01-01
Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)
Cancio, Antonio C.; Redd, Jeremy J.
2017-03-01
The scaling of neutral atoms to large Z, combining periodicity with a gradual trend to homogeneity, is a fundamental probe of density functional theory, one that has driven recent advances in understanding both the kinetic and exchange-correlation energies. Although research focus is normally upon the scaling of integrated energies, insights can also be gained from energy densities. We visualise the scaling of the positive-definite kinetic energy density (KED) in closed-shell atoms, in comparison to invariant quantities based upon the gradient and Laplacian of the density. We notice a striking fit of the KED within the core of any atom to a gradient expansion using both the gradient and the Laplacian, appearing as an asymptotic limit around which the KED oscillates. The gradient expansion is qualitatively different from that derived from first principles for a slowly varying electron gas and is correlated with a nonzero Pauli contribution to the KED near the nucleus. We propose and explore orbital-free meta-GGA models for the kinetic energy to describe these features, with some success, but the effects of quantum oscillations in the inner shells of atoms make a complete parametrisation difficult. We discuss implications for improved orbital-free description of molecular properties.
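For reference, the conventional second-order gradient expansion that the abstract contrasts with its empirical core fit can be written as follows (Hartree atomic units; this is a sketch of the textbook slowly-varying-gas result, not the authors' modified expansion):

```latex
% Thomas-Fermi KED and the reduced gradient p and reduced Laplacian q
\tau_{\mathrm{TF}} = \tfrac{3}{10}\,(3\pi^2)^{2/3}\, n^{5/3}, \qquad
p = \frac{|\nabla n|^2}{4\,(3\pi^2)^{2/3}\, n^{8/3}}, \qquad
q = \frac{\nabla^2 n}{4\,(3\pi^2)^{2/3}\, n^{5/3}}
% Second-order gradient expansion of the positive-definite KED
\tau_{\mathrm{GEA}} = \tau_{\mathrm{TF}}\Bigl(1 + \tfrac{5}{27}\,p + \tfrac{20}{9}\,q\Bigr)
% equivalently: tau_GEA = tau_TF + |nabla n|^2 / (72 n) + (1/6) nabla^2 n
```

The abstract's observation is that the KED in atomic cores fits a gradient expansion of this general form but with coefficients that differ from the 5/27 and 20/9 derived for the slowly varying gas.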
Unified description of pf-shell nuclei by the Monte Carlo shell model calculations
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1998-03-01
Attempts to solve the shell model by new methods are briefly reviewed. The shell-model calculation by quantum Monte Carlo diagonalization, which was proposed by the authors, is a more practical method, and it has been shown to solve the problem with sufficiently good accuracy. As to the treatment of angular momentum, the method of the authors uses deformed Slater determinants as the basis, and a projection operator is therefore used to obtain angular-momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the many-body energies in the basis formed as a result are evaluated and the basis states selectively adopted. The symmetry is discussed, and a method was devised for decomposing the shell-model space into a dynamically determined space and the product of spin and isospin spaces. The calculation process is illustrated with the example of {sup 50}Mn nuclei. The level structure of {sup 48}Cr, for which the exact energy is known, can be calculated with the absolute energy accurate to within 200 keV. {sup 56}Ni is a self-conjugate nucleus with Z=N=28. The results of shell-model calculations of the {sup 56}Ni nucleus structure using the interactions of nuclear models are reported. (K.I.)
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
Numerically modelling the large scale coronal magnetic field
Panja, Mayukh; Nandi, Dibyendu
2016-07-01
The solar corona spews out vast amounts of magnetized plasma into the heliosphere which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With our present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magneto-frictional model, which is relatively simple and computationally more economical. We have developed a magneto-frictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions, in response to photospheric motions.
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the paper divides the large-scale oceansat remote sensing image into small sub-blocks, and 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectrum of the sub-blocks is studied, and a clear difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low miss rate.
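A 2-D STFT over an image sub-block can be sketched as a windowed 2-D FFT evaluated on overlapping patches; the window size and hop below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def sliding_stft2d(image, win=16, hop=8):
    """2-D short-time Fourier transform: Hann-windowed 2-D FFT magnitude
    on overlapping patches of a (sub-block of an) image."""
    w = np.hanning(win)
    window = np.outer(w, w)                 # separable 2-D Hann window
    rows = range(0, image.shape[0] - win + 1, hop)
    cols = range(0, image.shape[1] - win + 1, hop)
    spectra = np.array([[np.abs(np.fft.fft2(image[r:r + win, c:c + win] * window))
                         for c in cols] for r in rows])
    return spectra   # shape: (n_row_patches, n_col_patches, win, win)

sea = np.random.default_rng(1).normal(size=(64, 64))   # stand-in "sea" sub-block
S = sliding_stft2d(sea)
```

Statistics of these per-patch magnitude spectra are the kind of quantity the paper's sea-background model would be fitted to.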
Model Predictive Control for Flexible Power Consumption of Large-Scale Refrigeration Systems
DEFF Research Database (Denmark)
Shafiei, Seyed Ehsan; Stoustrup, Jakob; Rasmussen, Henrik
2014-01-01
A model predictive control (MPC) scheme is introduced to directly control the electrical power consumption of large-scale refrigeration systems. Deviation from the baseline of the consumption corresponds to the storing and delivering of thermal energy. By virtue of such correspondence...
Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS
Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.
2015-12-01
Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
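As one concrete example of the kind of rainfall-runoff relation such a GIS toolset parameterizes, the standard SCS curve-number method estimates direct runoff from rainfall depth and a land-cover-dependent curve number. This is the textbook method, offered as an illustration, not necessarily what the authors' workflow uses:

```python
def scs_runoff(rain_in, cn):
    """SCS curve-number direct runoff (inches) from rainfall depth (inches)
    and curve number CN (higher CN = more impervious surface)."""
    s = 1000.0 / cn - 10.0               # potential maximum retention
    ia = 0.2 * s                         # initial abstraction
    if rain_in <= ia:
        return 0.0                       # all rainfall is absorbed
    return (rain_in - ia) ** 2 / (rain_in + 0.8 * s)

# 3 inches of rain on a fairly impervious urban surface (CN = 85)
q = scs_runoff(3.0, 85)                  # roughly 1.6 inches of runoff
```

Raising the curve number (more impervious cover) increases the runoff fraction, which is precisely the effect green infrastructure aims to reverse.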
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
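GRESS and ADGEN compute analytic model derivatives via computer calculus. As a loosely related, minimal illustration of deterministic (non-sampling) sensitivity computation, the complex-step method below yields derivatives without finite-difference cancellation; the model function is a stand-in, not anything from the paper:

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-20):
    """Deterministic sensitivity dF/dx: evaluate f at x + i*h and take the
    imaginary part. No subtractive cancellation, so h can be tiny and the
    result is accurate to machine precision."""
    return f(complex(x, h)).imag / h

model = lambda x: cmath.exp(x) * cmath.sin(x)   # stand-in "computer model"
s = complex_step_derivative(model, 1.0)
# analytic derivative for comparison: e^x (sin x + cos x)
```

Unlike a forward difference, no step-size tuning is needed, which is one reason derivative-based (rather than sampling-based) sensitivity systems are attractive for large models.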
Structural Acoustic Physics Based Modeling of Curved Composite Shells
2017-09-19
NUWC-NPT Technical Report 12,236, 19 September 2017. Rachel E. Hesse. [Standard report-documentation-page fields omitted.] The objective of this study was to use physics-based modeling (PBM) to investigate wave propagation through curved shells that are subjected to acoustic excitation.
Spherical-shell boundaries for two-dimensional compressible convection in a star
Pratt, J.; Baraffe, I.; Goffrey, T.; Geroux, C.; Viallet, M.; Folini, D.; Constantino, T.; Popov, M.; Walder, R.
2016-10-01
Context. Studies of stellar convection typically use a spherical-shell geometry. The radial extent of the shell and the boundary conditions applied are based on the model of the star investigated. We study the impact of different two-dimensional spherical shells on compressible convection. Realistic profiles for density and temperature from an established one-dimensional stellar evolution code are used to produce a model of a large stellar convection zone representative of a young low-mass star, like our sun at 10⁶ years of age. Aims: We analyze how the radial extent of the spherical shell changes the convective dynamics that result in the deep interior of the young sun model, far from the surface. In the near-surface layers, simple small-scale convection develops from the profiles of temperature and density. A central radiative zone below the convection zone provides a lower boundary on the convection zone. The inclusion of either of these physically distinct layers in the spherical shell can potentially affect the characteristics of deep convection. Methods: We perform hydrodynamic implicit large eddy simulations of compressible convection using the MUltidimensional Stellar Implicit Code (MUSIC). Because MUSIC has been designed to use realistic stellar models produced from one-dimensional stellar evolution calculations, MUSIC simulations are capable of seamlessly modeling a whole star. Simulations in two-dimensional spherical shells that have different radial extents are performed over tens or even hundreds of convective turnover times, permitting the collection of well-converged statistics. Results: To measure the impact of the spherical-shell geometry and our treatment of boundaries, we evaluate basic statistics of the convective turnover time, the convective velocity, and the overshooting layer. These quantities are selected for their relevance to one-dimensional stellar evolution calculations, so that our results are focused toward studies exploiting the so
de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.
2010-01-01
We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…
Large scale structure and baryogenesis
International Nuclear Information System (INIS)
Kirilova, D.P.; Chizhov, M.V.
2001-08-01
We discuss a possible connection between large-scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low-temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some values of the model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)
Note on off-shell relations in nonlinear sigma model
International Nuclear Information System (INIS)
Chen, Gang; Du, Yi-Jian; Li, Shuyi; Liu, Hanqing
2015-01-01
In this note, we investigate relations between tree-level off-shell currents in the nonlinear sigma model. Under the Cayley parametrization, all odd-point currents vanish. We propose and prove a generalized U(1) identity for even-point currents. The off-shell U(1) identity given in http://dx.doi.org/10.1007/JHEP01(2014)061 is a special case of the generalized identity studied in this note. The on-shell limit of this identity is equivalent to the on-shell KK relation. Thus this relation provides the full off-shell correspondence of the tree-level KK relation in the nonlinear sigma model.
Tang, G.; Bartlein, P. J.
2012-01-01
Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduced development of the "LPJ-Hydrology" (LH) model by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of dynamically simulating them. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variation of monthly actual ET (R² = 0.61), soil moisture (R² > 0.46) and surface runoff (R² > 0.52) with observed values over the years 1982-2006, respectively. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH simulates better monthly stream flow in winter and early spring by incorporating effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. LH developed in this study should be a useful tool for studying effects of climate and land cover change on land surface hydrology at large spatial scales.
Statistical Mechanics of Thin Spherical Shells
Directory of Open Access Journals (Sweden)
Andrej Košmrlj
2017-01-01
Full Text Available We explore how thermal fluctuations affect the mechanics of thin amorphous spherical shells. In flat membranes with a shear modulus, thermal fluctuations increase the bending rigidity and reduce the in-plane elastic moduli in a scale-dependent fashion. This is still true for spherical shells. However, the additional coupling between the shell curvature, the local in-plane stretching modes, and the local out-of-plane undulations leads to novel phenomena. In spherical shells, thermal fluctuations produce a radius-dependent negative effective surface tension, equivalent to applying an inward external pressure. By adapting renormalization group calculations to allow for a spherical background curvature, we show that while small spherical shells are stable, sufficiently large shells are crushed by this thermally generated “pressure.” Such shells can be stabilized by an outward osmotic pressure, but the effective shell size grows nonlinearly with increasing outward pressure, with the same universal power-law exponent that characterizes the response of fluctuating flat membranes to a uniform tension.
Slush Fund: The Multiphase Nature of Oceanic Ices and Its Role in Shaping Europa's Icy Shell
Buffo, J.; Schmidt, B. E.; Huber, C.
2017-12-01
The role of Europa's ice shell in mediating ocean-surface interaction, constraining the potential habitability of the underlying hydrosphere, and dictating the surface morphology of the moon is discussed extensively in the literature, yet the dynamics and characteristics of the shell itself remain largely unconstrained. Some of the largest unknowns arise from underrepresented physics and varying a priori assumptions built into current ice shell models. Here we modify and apply a validated one-dimensional reactive transport model, designed to simulate the formation and evolution of terrestrial sea ice, to the Europa environment. The top-down freezing of sea ice due to conductive heat loss to the atmosphere is akin to the formation of the Jovian moon's outer ice shell, albeit on a different temporal and spatial scale. Nevertheless, the microscale physics that govern the formation of sea ice on Earth (heterogeneous solidification leading to brine pockets and channels, multiphase reactive transport phenomena, gravity drainage) likely operate in a similar manner at the ice-ocean interface of Europa, dictating the thermal, chemical, and mechanical properties of the ice shell. Simulations of the Europan ice-ocean interface at different stages during the ice shell's evolution are interpolated to produce vertical profiles of temperature, salinity, solid fraction, and eutectic points throughout the entire shell. Additionally, the model is coupled to the equilibrium chemistry package FREZCHEM to investigate the impact a diverse range of putative Europan ocean chemistries has on ice shell properties. This method removes the need for a priori assumptions about impurity entrainment rates and ice shell properties, thus providing a first-principles constraint on the stratigraphic characteristics of a simulated Europan ice shell. These insights have the potential to improve existing estimates for the onset of solid state convection, melt lens formation due to eutectic melting, ice
RELAP5 choked flow model and application to a large scale flow test
International Nuclear Information System (INIS)
Ransom, V.H.; Trapp, J.A.
1980-01-01
The RELAP5 code was used to simulate a large scale choked flow test. The fluid system used in the test was modeled in RELAP5 using a uniform, but coarse, nodalization. The choked mass discharge rate was calculated using the RELAP5 choked flow model. The calculations were in good agreement with the test data, and the flow was calculated to be near thermal equilibrium.
Directory of Open Access Journals (Sweden)
Noritaka Shimizu
2016-02-01
Full Text Available We introduce a novel method to obtain level densities in large-scale shell-model calculations. Our method is a stochastic estimation of eigenvalue count based on a shifted Krylov-subspace method, which enables us to obtain level densities of huge Hamiltonian matrices. This framework leads to a successful description of both low-lying spectroscopy and the experimentally observed equilibration of Jπ=2+ and 2− states in 58Ni in a unified manner.
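The contour-integral idea behind such stochastic eigenvalue counting can be illustrated with a toy sketch (this is not the authors' implementation, which applies shifted Krylov solvers to huge sparse Hamiltonians; the function name, the dense solves, and the diagonal test matrix here are illustrative assumptions):

```python
import numpy as np

def estimate_eigenvalue_count(H, a, b, n_quad=64, n_vec=20, seed=0):
    """Estimate the number of eigenvalues of symmetric H in (a, b) via a
    circular contour around the interval, N = (1/2*pi*i) * oint tr[(zI-H)^-1] dz,
    with the trace at each quadrature point estimated stochastically
    (Hutchinson probes) from shifted linear solves."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    c, r = 0.5 * (a + b), 0.5 * (b - a)                # contour centre and radius
    count = 0.0 + 0.0j
    for k in range(n_quad):
        theta = 2.0 * np.pi * (k + 0.5) / n_quad
        z = c + r * np.exp(1j * theta)
        V = rng.choice([-1.0, 1.0], size=(n, n_vec))   # Rademacher probe vectors
        X = np.linalg.solve(z * np.eye(n) - H, V)      # one shifted solve per probe
        tr = (V * X).sum(axis=0).mean()                # E[v^T (zI-H)^-1 v] = trace
        count += r * np.exp(1j * theta) * tr / n_quad
    return count.real

# toy "Hamiltonian" with a known spectrum: eigenvalues 1, 2, ..., 100
H = np.diag(np.arange(1.0, 101.0))
n_in = estimate_eigenvalue_count(H, 10.5, 20.5)
print(round(n_in))  # 10 eigenvalues (11..20) lie inside the contour
```

Each quadrature point costs only linear solves with the shifted matrix, which is what makes the approach viable when the matrix is far too large to diagonalize.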
Black, R. X.
2017-12-01
We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.
Large Scale Cosmological Anomalies and Inhomogeneous Dark Energy
Directory of Open Access Journals (Sweden)
Leandros Perivolaropoulos
2014-01-01
Full Text Available A wide range of large scale observations hint towards possible modifications of the standard cosmological model, which is based on a homogeneous and isotropic universe with a small cosmological constant and matter. These observations, also known as “cosmic anomalies”, include unexpected Cosmic Microwave Background perturbations on large angular scales, large dipolar peculiar velocity flows of galaxies (“bulk flows”), the measurement of inhomogeneous values of the fine structure constant on cosmological scales (“alpha dipole”), and other effects. The presence of the observational anomalies could either be a large statistical fluctuation in the context of ΛCDM or it could indicate a non-trivial departure from the cosmological principle on Hubble scales. Such a departure is very much constrained by cosmological observations for matter. For dark energy, however, there are no significant observational constraints for Hubble scale inhomogeneities. In this brief review I discuss some of the theoretical models that can naturally lead to inhomogeneous dark energy, their observational constraints, and their potential to explain the large scale cosmic anomalies.
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The methods are illustrated with an application to low-level radioactive waste disposal system performance assessment
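The derivative-based propagation step can be sketched in a few lines (the response function and uncertainty values are invented for illustration; a computer-calculus system such as GRESS/ADGEN would supply exact derivatives by automated differentiation rather than the finite differences used here):

```python
import numpy as np

def model(x):
    # stand-in for a large simulation code's scalar response
    a, b = x
    return a**2 * np.sin(b) + 3.0 * a

x0 = np.array([2.0, 0.5])     # nominal parameter values
sx = np.array([0.1, 0.05])    # parameter standard deviations

# sensitivities by forward differences (automated differentiation in DUA)
h = 1e-6
grad = np.array([(model(x0 + h * np.eye(2)[i]) - model(x0)) / h
                 for i in range(2)])

# first-order propagation of input variances to the result variance
sigma_dua = np.sqrt(np.sum((grad * sx)**2))

# brute-force Monte Carlo check of the propagated uncertainty
rng = np.random.default_rng(1)
xs = x0[:, None] + sx[:, None] * rng.standard_normal((2, 200_000))
sigma_mc = model(xs).std()
print(sigma_dua, sigma_mc)
```

For mildly nonlinear responses the two estimates agree closely, while the derivative route needs only a handful of model evaluations instead of hundreds of thousands.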
Comparing several boson mappings with the shell model
International Nuclear Information System (INIS)
Menezes, D.P.; Yoshinaga, Naotaka; Bonatsos, D.
1990-01-01
Boson mappings are an essential step in establishing a connection between the successful phenomenological interacting boson model and the shell model. The boson mapping developed by Bonatsos, Klein and Li is applied to a single j-shell and the resulting energy levels and E2 transitions are shown for a pairing plus quadrupole-quadrupole Hamiltonian. The results are compared to the exact shell model calculation, as well as to those obtained through use of the Otsuka-Arima-Iachello mapping and the Zirnbauer-Brink mapping. Good results are obtained in all the spherical and near-vibrational cases
How uncertainty in socio-economic variables affects large-scale transport model forecasts
DEFF Research Database (Denmark)
Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo
2015-01-01
A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, because they model complex systems, transport models have an inherent uncertainty which increases over time. As a consequence, the longer...... the period forecasted the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature only a few studies analyze uncertainty propagation patterns over...
Wu, Xingfu; Taylor, Valerie
2011-01-01
In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters, because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) used in magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than a 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.
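The bandwidth-contention idea can be caricatured in a few lines (all coefficients below are invented round numbers; the paper's model is fitted per application and per platform and includes a communication term omitted here):

```python
def predicted_time(cores, flops_per_core, bytes_per_core,
                   peak_flops=8e9, socket_bw=25.6e9, cores_per_socket=8):
    """Weak-scaling runtime estimate: per-core work is fixed, but cores on
    a socket contend for the shared memory bus, stretching the memory
    phase as more cores are used."""
    sharing = min(cores, cores_per_socket)
    eff_bw_per_core = socket_bw / sharing     # contended share of bandwidth
    t_compute = flops_per_core / peak_flops
    t_memory = bytes_per_core / eff_bw_per_core
    return t_compute + t_memory

t1 = predicted_time(1, 1e9, 1e9)
t8 = predicted_time(8, 1e9, 1e9)
print(t8 / t1)  # weak-scaling slowdown caused purely by bandwidth contention
```

Even this crude model reproduces the qualitative observation that memory-bound codes stop scaling once a socket's bandwidth is saturated, which is why the STREAM measurements anchor the framework.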
Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach
Shimjith, S R; Bandyopadhyay, B
2013-01-01
Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of the several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear, and of complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting the multi-time-scale property,...
Laboratory astrophysics. Model experiments of astrophysics with large-scale lasers
International Nuclear Information System (INIS)
Takabe, Hideaki
2012-01-01
I would like to review the model experiments of astrophysics performed with the high-power, large-scale lasers constructed mainly for laser nuclear fusion research. The four research directions of this new field, named 'Laser Astrophysics', are described with four examples mainly promoted in our institute. The description is in magazine style so as to be easily understood by non-specialists. A new theory and its model experiment on the collisionless shock and particle acceleration observed in supernova remnants (SNRs) are explained in detail, and the results and coming research directions are outlined. In addition, the vacuum breakdown experiment to be realized with the near-future ultra-intense lasers is also introduced. (author)
Multilevel method for modeling large-scale networks.
Energy Technology Data Exchange (ETDEWEB)
Safro, I. M. (Mathematics and Computer Science)
2012-02-24
Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that randomize and replicate the finest relationships between network nodes, and modeling that only aims to preserve a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data after having been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish topological attributes that have been undefined or hidden from
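The R-MAT/Kronecker style of generator mentioned above can be sketched as follows (the quadrant probabilities are the commonly quoted illustrative defaults, not values fitted to any real network):

```python
import numpy as np

def rmat_edges(scale, n_edges, probs=(0.57, 0.19, 0.19, 0.05), seed=0):
    """Sample edges of a 2**scale-node graph by recursively descending
    into one of the four quadrants of the adjacency matrix (R-MAT /
    stochastic Kronecker generation)."""
    rng = np.random.default_rng(seed)
    edges = []
    for _ in range(n_edges):
        row = col = 0
        for _ in range(scale):
            q = rng.choice(4, p=probs)   # 0: TL, 1: TR, 2: BL, 3: BR quadrant
            row = 2 * row + (q >= 2)     # bottom half sets the next row bit
            col = 2 * col + (q % 2)      # right half sets the next column bit
        edges.append((int(row), int(col)))
    return edges

edges = rmat_edges(scale=10, n_edges=2000)
print(len(edges), max(r for r, _ in edges) < 2**10)
```

The skewed quadrant probabilities are what produce heavy-tailed degree distributions; making all four equal to 0.25 would instead yield an Erdős-Rényi-like graph.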
Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.
2015-05-01
A combination of driving forces is increasing the pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, have been severely degraded, and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economy of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability, calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.
Indentation of Ellipsoidal and Cylindrical Elastic Shells
Vella, Dominic
2012-10-01
Thin shells are found in nature at scales ranging from viruses to hens' eggs; the stiffness of such shells is essential for their function. We present the results of numerical simulations and theoretical analyses for the indentation of ellipsoidal and cylindrical elastic shells, considering both pressurized and unpressurized shells. We provide a theoretical foundation for the experimental findings of Lazarus et al. [following paper, Phys. Rev. Lett. 109, 144301 (2012)] and for previous work inferring the turgor pressure of bacteria from measurements of their indentation stiffness; we also identify a new regime at large indentation. We show that the indentation stiffness of convex shells is dominated by either the mean or Gaussian curvature of the shell depending on the pressurization and indentation depth. Our results reveal how geometry rules the rigidity of shells. © 2012 American Physical Society.
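For the unpressurized spherical case, the small-indentation stiffness is the classical Reissner result, which already exhibits the geometry dependence this work generalizes (a background textbook fact, not the paper's pressurized or large-indentation results; the function name and numbers are illustrative):

```python
import math

def point_load_stiffness(E, t, R, nu=0.3):
    """Classical (Reissner) small-indentation stiffness of an unpressurized
    spherical shell under a point load: k = 8*sqrt(B*S)/R, with bending
    modulus B = E*t**3/(12*(1 - nu**2)) and stretching modulus S = E*t,
    so that k scales as E*t**2/R."""
    B = E * t**3 / (12.0 * (1.0 - nu**2))
    S = E * t
    return 8.0 * math.sqrt(B * S) / R

k = point_load_stiffness(E=1e9, t=1e-3, R=0.1)
ratio_t = point_load_stiffness(1e9, 2e-3, 0.1) / k   # doubling thickness
ratio_R = point_load_stiffness(1e9, 1e-3, 0.2) / k   # doubling radius
print(ratio_t, ratio_R)  # 4.0 and 0.5: stiffness goes as t^2 and 1/R
```

Inverting this scaling is what allows turgor pressure and shell stiffness to be inferred from indentation measurements.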
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of the individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
International Nuclear Information System (INIS)
Robinson, R.A.; Hadden, J.A.; Basham, S.J.
1978-01-01
Preliminary experimental studies of the dynamic impact response of scale models of lead-shielded radioactive material shipping containers are presented. The objective of these studies is to provide DOE/ECT with a data base to allow the prediction of a rational margin of confidence in overviewing and assessing the adequacy of the safety and environmental control provided by these shipping containers. Replica scale modeling techniques were employed to predict full scale response with 1/8, 1/4, and 1/2 scale models of shipping containers that are used in the shipment of spent nuclear fuel and high level wastes. Free fall impact experiments are described for scale models of plain cylindrical stainless steel shells, stainless steel shells filled with lead, and replica scale models of radioactive material shipping containers. Dynamically induced strain and acceleration measurements were obtained at several critical locations on the models. The models were dropped from various heights and attitudes to the impact surface, with and without impact limiters, and at uniform temperatures between -40 and 175 °C. In addition, thermal expansion and thermal gradient induced strains were measured at -40 and 175 °C. The frequency content of the strain signals and the effect of different drop pad compositions and stiffness were examined. Appropriate scale modeling laws were developed and scaling techniques were substantiated for predicting full scale response by comparison of dynamic strain data for 1/8, 1/4, and 1/2 scale models with stainless steel shells and lead shielding
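The replica scaling laws referred to can be summarized as simple factors (a textbook dimensional-analysis sketch assuming identical model and prototype materials and equal drop heights; the study's substantiation of these laws is experimental):

```python
def replica_scale_factors(lam):
    """Replica-model scaling factors (model quantity / full-scale quantity)
    for a geometric scale lam = L_model / L_full with identical materials,
    as in 1/8-, 1/4- and 1/2-scale container drop tests."""
    return {
        "length": lam,
        "velocity": 1.0,          # equal drop height gives equal impact speed
        "stress": 1.0,            # so stresses replicate directly
        "strain": 1.0,            # strains measured on the model apply as-is
        "time": lam,              # impact events unfold faster in the model
        "acceleration": 1.0 / lam,
        "force": lam**2,          # stress times contact area
        "mass": lam**3,
    }

f = replica_scale_factors(0.25)       # quarter-scale model
print(f["acceleration"], f["force"])  # 4.0, 0.0625
```

The invariance of stress and strain is what makes the model strain gauges directly predictive of full-scale response, while time and acceleration must be rescaled.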
Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale
Directory of Open Access Journals (Sweden)
Husin Alatas
2015-01-01
Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity at large scale. We use a Shuttle Radar Topography Mission-based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion for a body falling along an inclined plane. We assume an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
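The junction rule amounts to momentum-conserving (perfectly inelastic) mixing of the two branch flows; a minimal sketch with invented discharges:

```python
def merge_flow(m1, v1, m2, v2):
    """Velocity after a perfectly inelastic merge of two river branches:
    momentum is conserved, kinetic energy is not (the model's assumption
    at every junction)."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# a fast, small tributary joining a slow, large main stem
v_merged = merge_flow(m1=1000.0, v1=0.5, m2=200.0, v2=2.0)
print(v_merged)  # 0.75, the mass-weighted average of the two velocities
```

Because the merged velocity is a mass-weighted average, a small fast tributary only modestly accelerates a large main stem, which matches intuition about confluences.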
International Nuclear Information System (INIS)
Tosic, P.T.
2011-01-01
We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects in order to become a relevant and sufficiently general model for large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate the failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
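The divergence between interleaved (sequential) and perfectly synchronous updates is easy to exhibit on a tiny 1-D totalistic rule (a toy illustration of the phenomenon, not the authors' specific rule class):

```python
def parallel_step(cells, rule):
    """Classical CA step: every site reads the old configuration."""
    n = len(cells)
    return [rule(cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n])
            for i in range(n)]

def sequential_step(cells, rule):
    """Interleaving semantics: sites update left to right in place, so
    each update is immediately visible to the sites after it."""
    cells = list(cells)
    n = len(cells)
    for i in range(n):
        cells[i] = rule(cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n])
    return cells

# totalistic rule on a ring: a cell becomes 1 iff its 3-cell
# neighborhood sum is exactly 1
rule = lambda s: 1 if s == 1 else 0
state = [0, 1, 0, 0, 1, 0]
sync_next = parallel_step(state, rule)
seq_next = sequential_step(state, rule)
print(sync_next)  # [1, 1, 1, 1, 1, 1]
print(seq_next)   # [1, 0, 0, 1, 0, 1], a different configuration
```

Starting from the same configuration with the same rule, the two update disciplines reach different successors, which is the kind of gap the analytical non-simulation results formalize.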
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Shuai Li
2008-03-01
Full Text Available A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and are connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from their randomly set initial positions toward the true node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale. The time consumption has also been proven to remain almost constant, since the number of calculation steps is almost unrelated to the network scale.
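The spring relaxation at the heart of the algorithm can be sketched for a single blind node (a simplified illustration with four invented anchor positions and noise-free distances; the full LASM also handles masses, many blind nodes, and the three patches described above):

```python
import math, random

def lasm_localize(anchors, dists, iters=2000, step=0.05, seed=0):
    """Estimate one blind node's position by relaxing virtual springs whose
    rest lengths are the measured distances to neighbor anchor nodes."""
    rng = random.Random(seed)
    x, y = rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)  # random start
    for _ in range(iters):
        fx = fy = 0.0
        for (ax, ay), d in zip(anchors, dists):
            dx, dy = x - ax, y - ay
            r = math.hypot(dx, dy) or 1e-12
            f = -(r - d)          # Hooke force, proportional to length error
            fx += f * dx / r
            fy += f * dy / r
        x += step * fx            # move the particle along the net force
        y += step * fy
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
est = lasm_localize(anchors, dists)
print(est)  # relaxes to approximately (3.0, 4.0)
```

Each iteration touches only the node's own neighbor list, which is why the per-node cost stays O(1) as the network grows.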
Cloud-enabled large-scale land surface model simulations with the NASA Land Information System
Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.
2017-12-01
Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists that are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and
The contribution of Skyrme Hartree-Fock calculations to the understanding of the shell model
International Nuclear Information System (INIS)
Zamick, L.
1984-01-01
The authors present a detailed comparison of Skyrme Hartree-Fock and the shell model. The H-F calculations are sensitive to the parameters that are chosen. The H-F results justify the use of effective charges in restricted model space calculations by showing that the core contribution can be large. Further, the H-F results roughly justify the use of a constant E2 effective charge, but seem to yield nucleus-dependent E4 effective charges. H-F can yield results for E6 and higher multipoles, which would be zero in s-d model space calculations. On the other side of the coin, in H-F the authors can easily consider only the lowest rotational band, whereas in the shell model one can calculate the energies and properties of many more states. In the comparison some apparent problems remain, in particular E4 transitions in the upper half of the s-d shell
Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins
Serinaldi, F.; Kilsby, C. G.
2012-04-01
While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, developing new rainfall simulation frameworks is a challenging task: the drawbacks of the existing modelling schemes (devised for smaller spatial scales) must be overcome while their desirable properties are kept. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale
International Nuclear Information System (INIS)
Barsamian, H.R.; Hassan, Y.A.
1996-01-01
Turbulence is one of the most commonly occurring phenomena of engineering interest in the field of fluid mechanics. Since most flows are turbulent, there is a significant payoff for improved predictive models of turbulence. One area of concern is the turbulent buffeting forces experienced by the tubes in steam generators of nuclear power plants. Although the Navier-Stokes equations are able to describe turbulent flow fields, the large number of scales of turbulence limits practical flow field calculations with current computing power. The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is that it requires no input model coefficient: the coefficient is evaluated dynamically at each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (Smagorinsky, 1963), which is used as the base model for the dynamic procedure, and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with the experimental data. Satisfactory turbulence characteristics are observed through flow visualization.
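The base Smagorinsky closure referenced above can be sketched compactly. The following Python function (not from the paper; the 2-D restriction and the default coefficient C_s = 0.17 are illustrative assumptions) computes the eddy viscosity nu_t = (C_s * Delta)^2 * |S| from resolved velocity gradients:

```python
import math

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*delta)**2 * |S|, where
    |S| = sqrt(2 S_ij S_ij) is the resolved strain-rate magnitude (2-D)."""
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)               # symmetric off-diagonal component
    s_mag = math.sqrt(2.0 * (s11 ** 2 + s22 ** 2 + 2.0 * s12 ** 2))
    return (cs * delta) ** 2 * s_mag

# pure shear du/dy = 1: |S| = 1, so nu_t = (cs*delta)**2
nu_t = smagorinsky_nu_t(dudx=0.0, dudy=1.0, dvdx=0.0, dvdy=0.0, delta=0.1)
```

The dynamic procedure of Germano et al. replaces the fixed `cs` by a coefficient computed from a test-filter identity at each node, which is what removes the input coefficient mentioned in the abstract.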
Seidl, R.; Schelhaas, M.J.; Lindner, M.; Lexer, M.J.
2009-01-01
To study potential consequences of climate-induced changes in the biotic disturbance regime at regional to national scale we integrated a model of Ips typographus (L. Scol. Col.) damages into the large-scale forest scenario model EFISCEN. A two-stage multivariate statistical meta-model was used to
Helman, E. Udi
This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
Real-time simulation of large-scale floods
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
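As a rough illustration of the Godunov-type finite-volume approach described above (not the authors' code; the HLL flux, reflective walls, and the simple wet/dry guard are generic textbook choices), here is a minimal 1-D shallow water step in Python:

```python
import math

def hll_flux(hL, huL, hR, huR, g=9.81):
    """HLL approximate Riemann flux for the 1-D shallow water equations."""
    uL = huL / hL if hL > 1e-10 else 0.0     # simple wet/dry guard
    uR = huR / hR if hR > 1e-10 else 0.0
    cL, cR = math.sqrt(g * hL), math.sqrt(g * hR)
    sL = min(uL - cL, uR - cR)               # leftmost wave-speed estimate
    sR = max(uL + cL, uR + cR)               # rightmost wave-speed estimate
    fL = (huL, huL * uL + 0.5 * g * hL * hL)
    fR = (huR, huR * uR + 0.5 * g * hR * hR)
    if sL >= 0.0:
        return fL
    if sR <= 0.0:
        return fR
    return tuple((sR * a - sL * b + sL * sR * (qr - ql)) / (sR - sL)
                 for a, b, ql, qr in zip(fL, fR, (hL, huL), (hR, huR)))

def step(h, hu, dx, dt, g=9.81):
    """One explicit Godunov-type finite-volume update with reflective walls."""
    n = len(h)
    F = [hll_flux(h[i - 1], hu[i - 1], h[i], hu[i], g) for i in range(1, n)]
    # wall fluxes: zero mass flux, hydrostatic pressure only
    F = [(0.0, 0.5 * g * h[0] ** 2)] + F + [(0.0, 0.5 * g * h[-1] ** 2)]
    h_new = [h[i] - dt / dx * (F[i + 1][0] - F[i][0]) for i in range(n)]
    hu_new = [hu[i] - dt / dx * (F[i + 1][1] - F[i][1]) for i in range(n)]
    return h_new, hu_new

# idealized dam break: depth 2 m upstream, 1 m downstream
h = [2.0] * 50 + [1.0] * 50
hu = [0.0] * 100
for _ in range(150):
    h, hu = step(h, hu, dx=1.0, dt=0.05)
```

Because the update is in flux form with zero mass flux at the walls, total water volume is conserved to round-off, which is a quick sanity check for schemes of this type.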
Similitude and scaling of large structural elements: Case study
Directory of Open Access Journals (Sweden)
M. Shehadeh
2015-06-01
Scaled-down models are widely used for experimental investigations of large structures due to the limited capacities of testing facilities and the expense of experimentation. The modeling accuracy depends upon the model material properties, fabrication accuracy and loading techniques. In the present work the Buckingham π theorem is used to develop the relations (i.e. geometry, loading and properties) between the model and a large structural element such as those found in existing petroleum oil drilling rigs. The model is designed, loaded and treated according to a set of similitude requirements that relate it to the large structural element. Three independent scale factors, representing the three fundamental dimensions of mass, length and time, need to be selected for designing the scaled-down model. Numerical predictions of the stress distribution within the model and its elastic deformation under steady loading are made, and the results are compared with those obtained from full-scale structure numerical computations. The effect of scaled-down model size and material on the accuracy of the modeling technique is thoroughly examined.
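The similitude relations obtained from the Buckingham π theorem can be illustrated with a small helper that derives common scale factors from the chosen independent ones (a generic sketch for elastic structural similitude; this particular set of factors is an assumption, not the paper's exact π-groups):

```python
def similitude_factors(length_scale, modulus_scale=1.0, density_scale=1.0):
    """Derived model/prototype scale factors from dimensional analysis for
    an elastically scaled structural model. Same material implies modulus
    and density scales of 1."""
    stress = modulus_scale                  # equal strains => stress ~ E
    force = stress * length_scale ** 2      # [F] = [sigma][L]^2
    moment = stress * length_scale ** 3     # [M] = [sigma][L]^3
    deflection = length_scale               # equal strains => geometric scaling
    # elastic wave travel time: t ~ L / sqrt(E/rho)
    time = length_scale * (density_scale / modulus_scale) ** 0.5
    return {"stress": stress, "force": force, "moment": moment,
            "deflection": deflection, "time": time}

# 1:10 model of the same material: forces shrink by 100, times by 10
f = similitude_factors(0.1)
```

A practical consequence, consistent with the abstract: with the same material, stresses in model and prototype are equal, so measured model stresses map directly to the full-scale element.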
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
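The underlying parameter estimation problem can be made concrete with a toy example (a deliberately simple sketch: a one-state kinetic model fitted by plain random global search, not the saCeSS algorithm; all names and values are illustrative):

```python
import random

def simulate(k1, k2, x0=1.0, dt=0.01, steps=200):
    """Euler integration of a toy one-state kinetic model dx/dt = -k1*x + k2."""
    x, traj = x0, []
    for _ in range(steps):
        x += dt * (-k1 * x + k2)
        traj.append(x)
    return traj

def sse(params, data):
    """Sum-of-squared-errors cost between simulation and data."""
    return sum((a - b) ** 2 for a, b in zip(simulate(*params), data))

def random_search(data, bounds, iters=3000, seed=0):
    """Naive global search: sample parameter vectors uniformly, keep the best."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        cand = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        c = sse(cand, data)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

# synthetic "measurements" generated with known parameters k1=2.0, k2=0.5
data = simulate(2.0, 0.5)
(k1, k2), cost = random_search(data, [(0.1, 5.0), (0.0, 2.0)])
```

Methods like saCeSS address exactly what this sketch does badly: the cost of each `simulate` call is large for realistic models, so cooperating parallel searches with self-tuning replace brute-force sampling.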
Large-scale modelling of neuronal systems
International Nuclear Information System (INIS)
Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.
2009-01-01
The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
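The single-neuron BCM rule mentioned above can be sketched as follows (a minimal illustration with an assumed learning rate and sliding-threshold time constant; this is the classic single-neuron rule, not the authors' whole-network extension):

```python
import random

def bcm_train(patterns, eta=0.01, tau_theta=10.0, steps=20000, seed=1):
    """Single-neuron BCM plasticity: delta_w = eta * y * (y - theta) * x,
    with a sliding modification threshold theta tracking a running mean
    of y**2 (this is the source of bistability and selectivity)."""
    rng = random.Random(seed)
    n = len(patterns[0])
    w = [0.5 + 0.1 * rng.random() for _ in range(n)]   # small random asymmetry
    theta = 0.1
    for _ in range(steps):
        x = rng.choice(patterns)
        y = sum(wi * xi for wi, xi in zip(w, x))        # linear response
        for i in range(n):
            w[i] = max(w[i] + eta * y * (y - theta) * x[i], 0.0)
        theta += (y * y - theta) / tau_theta            # sliding threshold
    return w, theta

# two orthogonal inputs: BCM typically becomes selective for one of them
patterns = [(1.0, 0.0), (0.0, 1.0)]
w, theta = bcm_train(patterns)
responses = sorted(sum(wi * xi for wi, xi in zip(w, x)) for x in patterns)
```

After training, the response to one pattern is large while the other is suppressed, which is the selectivity property the abstract builds on when turning the rule into a network-generation mechanism.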
An IBM-3 hamiltonian from a multi-j-shell model
International Nuclear Information System (INIS)
Evans, J.A.; Elliott, J.P.; Lac, V.S.; Long, G.L.
1995-01-01
The number and isospin dependence of the hamiltonian in the isospin invariant form (IBM-3) of the boson model is deduced from a seniority mapping onto a shell-model system of several shells. The numerical results are compared with earlier work for a single j-shell. (orig.)
Shell model for time-correlated random advection of passive scalars
DEFF Research Database (Denmark)
Andersen, Ken Haste; Muratore-Ginanneschi, P.
1999-01-01
We study a minimal shell model for the advection of a passive scalar by a Gaussian time-correlated velocity field. The anomalous scaling properties of the white noise limit are studied analytically. The effect of the time correlations is investigated using perturbation theory around the white noise limit and nonperturbatively by numerical integration. The time correlation of the velocity field is seen to enhance the intermittency of the passive scalar. [S1063-651X(99)07711-9]
Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling
Huber, I.; Archontoulis, S.
2017-12-01
In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far the majority of biochar research has concentrated on lab-to-field studies to advance scientific knowledge; regional-scale assessments are highly needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that gain the most environmentally from biochar application, as well as areas for which our model predicts a notable yield increase due to the addition of biochar. We present the modifications in both the APSIM biochar and pSIMS components that were necessary to facilitate these large-scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for creating its simulations, as well as local management operations for maize and soybean cropping systems and different biochar application rates. The regional-scale simulation analysis is in progress. Preliminary results show that the model predicts that high-quality soils (particularly those common to Iowa cropping systems) do not receive much, if any, production benefit from biochar. However, soils with low soil organic matter (below 0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific: they increase in some areas and decrease in others due to biochar application. In contrast, we found increases in soil organic carbon and plant available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%) and also dependent on biochar
Li, Qian; Matula, Thomas J; Tu, Juan; Guo, Xiasheng; Zhang, Dong
2013-02-21
It has been accepted that the dynamic responses of ultrasound contrast agent (UCA) microbubbles will be significantly affected by the encapsulating shell properties (e.g., shell elasticity and viscosity). In this work, a new model is proposed to describe the complicated rheological behaviors in an encapsulating shell of UCA microbubbles by applying the nonlinear 'Cross law' to the shell viscous term in the Marmottant model. The proposed new model was verified by fitting the dynamic responses of UCAs measured with either a high-speed optical imaging system or a light scattering system. The comparison results between the measured radius-time curves and the numerical simulations demonstrate that the 'compression-only' behavior of UCAs can be successfully simulated with the new model. Then, the shell elastic and viscous coefficients of SonoVue microbubbles were evaluated based on the new model simulations, and compared to the results obtained from some existing UCA models. The results confirm the capability of the current model for reducing the dependence of bubble shell parameters on the initial bubble radius, which indicates that the current model might be more comprehensive to describe the complex rheological nature (e.g., 'shear-thinning' and 'strain-softening') in encapsulating shells of UCA microbubbles by taking into account the nonlinear changes of both shell elasticity and shell viscosity.
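The Cross law applied to the shell viscous term can be sketched as a standalone function (a generic form of the Cross model; the parameter names and values below are illustrative assumptions, not the paper's calibration for SonoVue):

```python
def cross_shell_viscosity(shear_rate, kappa_0, kappa_inf, alpha, n=1.0):
    """Shear-thinning shell viscosity via a Cross-type law:
        kappa = kappa_inf + (kappa_0 - kappa_inf) / (1 + (alpha*|rate|)**n)
    Low shear rates recover the Newtonian plateau kappa_0; high shear
    rates approach the infinite-shear viscosity kappa_inf."""
    return kappa_inf + (kappa_0 - kappa_inf) / (1.0 + (alpha * abs(shear_rate)) ** n)

# in a Marmottant-type bubble model the relevant rate is of order Rdot/R
kappa_low = cross_shell_viscosity(0.0, kappa_0=1e-8, kappa_inf=1e-9, alpha=1e-6)
kappa_high = cross_shell_viscosity(1e9, kappa_0=1e-8, kappa_inf=1e-9, alpha=1e-6)
```

Substituting such a rate-dependent kappa for the constant shell viscosity in the Marmottant damping term is what lets a single parameter set describe bubbles of different initial radii, as the abstract reports.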
Large eddy simulation of new subgrid scale model for three-dimensional bundle flows
International Nuclear Information System (INIS)
Barsamian, H.R.; Hassan, Y.A.
2004-01-01
Fluid-flow-induced vibrations within heat exchangers, which have led to increased inefficiencies and power plant shutdowns, are of great concern due to tube fretting-wear or fatigue failures. Historically, scaling-law and measurement-accuracy problems were encountered in experimental analyses, at considerable effort and expense. However, supercomputers and accurate numerical methods have provided reliable results and a substantial decrease in cost. In this investigation, Large Eddy Simulation has been successfully used to simulate turbulent flow through the numerical solution of the incompressible, isothermal, single-phase Navier-Stokes equations. The eddy viscosity model and a new subgrid scale model have been utilized to model the smaller eddies in the flow domain. A triangular-array flow field was considered, numerical simulations were performed in two- and three-dimensional fields, and the results were compared to experimental findings. Results show good agreement of the numerical findings with the experimental ones, and solutions obtained with the new subgrid scale model represent better energy dissipation for the smaller eddies. (author)
Large Scale Computing for the Modelling of Whole Brain Connectivity
DEFF Research Database (Denmark)
Albers, Kristoffer Jon
organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference, however, poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome these computational limitations and apply Bayesian stochastic block models for unsupervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic block modelling with MCMC sampling on large complex networks.
Feng, S.; Li, Z.; Liu, Y.; Lin, W.; Toto, T.; Vogelmann, A. M.; Fridlind, A. M.
2013-12-01
We present an approach to derive large-scale forcing that is used to drive single-column models (SCMs) and cloud resolving models (CRMs)/large eddy simulations (LES) for evaluating fast physics parameterizations in climate models. The forcing fields are derived by use of a newly developed multi-scale data assimilation (MS-DA) system. This DA system is built on top of the NCEP Gridpoint Statistical Interpolation (GSI) system and is implemented in the Weather Research and Forecasting (WRF) model at a cloud resolving resolution of 2 km. This approach has been applied to the generation of large-scale forcing for a set of Intensive Operation Periods (IOPs) over the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) site. The dense ARM in-situ observations and high-resolution satellite data effectively constrain the WRF model. The evaluation shows that the derived forcing displays accuracies comparable to the existing continuous forcing product and, overall, better dynamic consistency with observed cloud and precipitation. One important application of this approach is to derive large-scale hydrometeor forcing and multi-scale forcing, which is not provided in the existing continuous forcing product. It is shown that the hydrometeor forcing has an appreciable impact on cloud and precipitation fields in the single-column model simulations. The large-scale forcing exhibits a significant dependence on the domain size, which represents the SCM grid size. Subgrid processes often contribute a significant component to the large-scale forcing, and this contribution is sensitive to the grid size and cloud regime.
Study of the tensor correlation in oxygen isotopes using mean-field-type and shell model methods
International Nuclear Information System (INIS)
Sugimoto, Satoru
2007-01-01
The tensor force plays important roles in nuclear structure. Recently, we have developed a mean-field-type model which can treat the two-particle-two-hole correlation induced by the tensor force. We applied the model to sub-closed-shell oxygen isotopes and found that a sizable attractive energy comes from the tensor force. We also studied the tensor correlation in 16O using a shell model including two-particle-two-hole configurations. In this case, quite a large attractive energy is obtained for the correlation energy from the tensor force.
Tang, Shuaiqi
Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to the precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analyses/reanalyses, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observations specified, we compare SCM results and observations and find that models have large biases on cloud properties which could not be fully explained by the uncertainty from the large-scale forcing
Coupled climate model simulations of Mediterranean winter cyclones and large-scale flow patterns
Directory of Open Access Journals (Sweden)
B. Ziv
2013-03-01
The study aims to evaluate the ability of global coupled climate models to reproduce the synoptic regime of the Mediterranean Basin. The output of simulations of the 9 models included in the IPCC CMIP3 effort is compared to the NCEP-NCAR reanalyzed data for the period 1961-1990. The study examines the spatial distribution of cyclone occurrence, the mean Mediterranean upper- and lower-level troughs, the inter-annual variation and trend in the occurrence of the Mediterranean cyclones, and the main large-scale circulation patterns, represented by rotated EOFs of 500 hPa and sea level pressure. The models successfully reproduce the two maxima in cyclone density in the Mediterranean and their locations, the location of the average upper- and lower-level troughs, the relative inter-annual variation in cyclone occurrences and the structure of the four leading large-scale EOFs. The main discrepancy is the models' underestimation of the cyclone density in the Mediterranean, especially in its western part. The models' skill in reproducing the cyclone distribution is found to be correlated with their spatial resolution, especially in the vertical. The ongoing improvement in model spatial resolution suggests that their ability to reproduce the Mediterranean cyclones will improve as well.
Can limited area NWP and/or RCM models improve on large scales inside their domain?
Mesinger, Fedor; Veljovic, Katarina
2017-04-01
In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in an NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain. Note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable the development of large scales improved compared to those of the driver model. This would typically include higher resolution, but does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. The ensemble members referred to are run using the Eta model, and are driven by ECMWF 32-day ensemble members initialized at 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography, a problem that seemed to many to suggest that the eta coordinate is ill suited for high-resolution models. The "sloping steps" in fact represent a simple version of the cut-cell scheme. Accuracy in forecasting the position of jet stream winds, chosen as those with speeds greater than 45 m/s at 250 hPa and expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa), was taken to show the skill at large scales
Technology for the large-scale production of multi-crystalline silicon solar cells and modules
International Nuclear Information System (INIS)
Weeber, A.W.; De Moor, H.H.C.
1997-06-01
In cooperation with Shell Solar Energy (formerly R and S Renewable Energy Systems) and the Research Institute for Materials of the Catholic University Nijmegen, the Netherlands Energy Research Foundation (ECN) plans to develop a competitive technology for the large-scale manufacturing of solar cells and solar modules on the basis of multi-crystalline silicon. The project will be carried out within the framework of the Economy, Ecology and Technology (EET) program of the Dutch Ministry of Economic Affairs and the Dutch Ministry of Education, Culture and Sciences. The aim of the EET project is to reduce the cost of a solar module by 50%, by means of increasing the conversion efficiency as well as developing cheap processes for large-scale production
Modeling and experiments of biomass combustion in a large-scale grate boiler
DEFF Research Database (Denmark)
Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen
2007-01-01
is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling and experiments are both done for the grate boiler. The comparison between them shows an overall acceptable agreement in tendency. However, at some measuring ports big discrepancies between the modeling and the experiments are observed, mainly because the modeling-based boundary conditions (BCs) could differ...
Oscillating shells: A model for a variable cosmic object
Nunez, Dario
1997-01-01
A model for a possible variable cosmic object is presented. The model consists of a massive shell surrounding a compact object. The gravitational and self-gravitational forces tend to collapse the shell, but the internal tangential stresses oppose the collapse. The combined action of the two types of forces is studied and several cases are presented. In particular, we investigate the spherically symmetric case in which the shell oscillates radially around a central compact object.
Ab Initio Symmetry-Adapted No-Core Shell Model
International Nuclear Information System (INIS)
Draayer, J P; Dytrych, T; Launey, K D
2011-01-01
A multi-shell extension of the Elliott SU(3) model, the SU(3) symmetry-adapted version of the no-core shell model (SA-NCSM), is described. The significance of this SA-NCSM emerges from the physical relevance of its SU(3)-coupled basis, which, while naturally managing center-of-mass spuriosity, provides a microscopic description of nuclei in terms of mixed shape configurations. Since configurations of maximum spatial deformation typically dominate, only a small part of the model space suffices to reproduce the low-energy nuclear dynamics; the approach hence offers an effective symmetry-guided framework for winnowing the model space. This is based on our recent findings of low-spin and high-deformation dominance in realistic NCSM results and, in turn, holds promise to significantly enhance the reach of ab initio shell models.
A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows
International Nuclear Information System (INIS)
Singh, Satbir; You, Donghyun
2013-01-01
Highlights: ► A new SGS model is developed for LES of turbulent flows in complex geometries. ► A dynamic global-coefficient SGS model is coupled with a scale-similarity model. ► It overcomes some of the difficulties associated with eddy-viscosity closures. ► It does not require averaging or clipping of the model coefficient for stabilization. ► The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations.
Large scale solar district heating. Evaluation, modelling and designing
Energy Technology Data Exchange (ETDEWEB)
Heller, A.
2000-07-01
The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data set, the Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expected variation in the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with the simulation tool, design studies are carried out, ranging from parameter analyses and energy planning for a new settlement to a proposal for combining plane solar collectors with high-performance solar collectors, exemplified by a trough collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the need to develop computer models for the more advanced solar collector designs and especially for the control operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to assess the relevance of the application
Modeling of large-scale oxy-fuel combustion processes
DEFF Research Database (Denmark)
Yin, Chungen
2012-01-01
Quite a few studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as part of the effort toward carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the non-gray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in a higher degree of incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...
Yang, T.; Welles, E.
2017-12-01
In this paper, we introduce a flood forecasting and decision-making platform, named Delft-FEWS, which has been developed over the years at Delft Hydraulics and now at Deltares. The philosophy of Delft-FEWS is to provide water managers and operators with an open shell tool, which allows the integration of a variety of hydrological, hydraulic, river routing, and reservoir models with hydrometeorological forecast data. Delft-FEWS serves as a powerful tool for both basin-scale and national-scale water resources management. The essential novelty of Delft-FEWS is to change flood forecasting and water resources management from a single-model or agency-centric paradigm to an integrated framework, in which different models, data, algorithms and stakeholders are strongly linked together. The paper starts with the challenges in water resources management, and the concept and philosophy of Delft-FEWS. Then, the details of data handling and of the linkages of Delft-FEWS with different hydrological, hydraulic, and reservoir models are presented. Last, several case studies and applications of Delft-FEWS are demonstrated, including those for the National Weather Service and the Bonneville Power Administration in the USA, and a national application for a water board in the Netherlands.
Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets
Zhang, Bohai
2014-01-01
Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity of model fitting and prediction grows at a cubic order with the size of the dataset, so the application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach of Sang and Huang (2012) to the spatio-temporal context to reduce the computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and an application to an ozone measurement dataset.
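The full-scale approximation combines a low-rank (predictive-process) part built on knots with a tapered short-range residual. A minimal spatial sketch of the idea, with an illustrative exponential kernel and Wendland-type taper (not the paper's spatio-temporal covariance or RJMCMC knot selection):

```python
import numpy as np

def exp_cov(x, y, length_scale=0.3):
    """Exponential covariance between point sets x (n,d) and y (m,d)."""
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return np.exp(-d / length_scale)

def fsa_cov(x, knots, taper_range=0.5, jitter=1e-10):
    """Full-scale approximation in the spirit of Sang and Huang (2012):
    low-rank part C_nk C_kk^{-1} C_kn plus a compactly supported
    taper applied to the residual C - low_rank."""
    full = exp_cov(x, x)
    c_nk = exp_cov(x, knots)
    c_kk = exp_cov(knots, knots) + jitter * np.eye(len(knots))
    low_rank = c_nk @ np.linalg.solve(c_kk, c_nk.T)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    # Wendland-type taper: exactly 1 at d = 0, zero beyond taper_range
    taper = np.maximum(1 - d / taper_range, 0.0)**4 * (4 * d / taper_range + 1)
    return low_rank + taper * (full - low_rank)
```

Because the taper equals 1 at zero distance, the approximation preserves the variances exactly while the residual matrix becomes sparse, which is where the computational savings come from.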
An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling
Directory of Open Access Journals (Sweden)
Theodore W. Manikas
2011-02-01
Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.
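The flavor of the result can be illustrated with a toy reliability model (hypothetical failure probabilities; not the data communications network from the paper): simulating two subsystems independently and combining their availabilities approximates a full-system simulation, with each run operating on a much smaller state space.

```python
import random

def availability(n_links, p_fail, trials=50000, seed=1):
    """Monte Carlo estimate of the probability that all n_links
    independent links survive (a series system)."""
    rng = random.Random(seed)
    up = sum(
        all(rng.random() > p_fail for _ in range(n_links))
        for _ in range(trials)
    )
    return up / trials

# Full-system simulation vs. decomposition into independent subsystems
full = availability(10, 0.02)
decomposed = availability(6, 0.02, seed=2) * availability(4, 0.02, seed=3)
# both estimate (1 - 0.02)**10, but each subsystem run is smaller
```

The decomposition is exact here only because the toy subsystems are independent; the axiomatic analysis in the paper is what justifies an analogous decomposition for realistically coupled systems.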
Large-scale model of flow in heterogeneous and hierarchical porous media
Chabanon, Morgan; Valdés-Parada, Francisco J.; Ochoa-Tapia, J. Alberto; Goyeau, Benoît
2017-11-01
Heterogeneous porous structures are very often encountered in natural environments and in engineered systems such as bioremediation processes, among many others. Reliable models for momentum transport are crucial whenever mass transport or convective heat transfer occurs in these systems. In this work, we derive a large-scale average model for incompressible single-phase flow in heterogeneous and hierarchical soil porous media composed of two distinct porous regions embedding a solid impermeable structure. The model, based on the local mechanical equilibrium assumption between the porous regions, results in a unique momentum transport equation where the global effective permeability naturally depends on the permeabilities at the intermediate mesoscopic scales and therefore includes the complex hierarchical structure of the soil. The associated closure problem is numerically solved for various configurations and properties of the heterogeneous medium. The results clearly show that the effective permeability increases with the volume fraction of the most permeable porous region. It is also shown that the effective permeability is sensitive to the dimensionality and spatial arrangement of the porous regions and, in particular, depends on the contact between the impermeable solid and the two porous regions.
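The sensitivity to spatial arrangement reported above can already be seen in the classical bounds for a two-region medium. The following is a simplified illustration of that dependence, not the closure-problem solution of the paper:

```python
def k_parallel(k1, k2, phi1):
    """Effective permeability for flow parallel to a two-region
    layering: volume-weighted arithmetic mean."""
    return phi1 * k1 + (1.0 - phi1) * k2

def k_series(k1, k2, phi1):
    """Effective permeability for flow across the layering:
    volume-weighted harmonic mean."""
    return 1.0 / (phi1 / k1 + (1.0 - phi1) / k2)

# With k1 > k2, both means increase with the volume fraction phi1 of
# the more permeable region, but the same fractions give very
# different effective permeabilities depending on arrangement.
```

Any realistic hierarchical medium falls between these two bounds, which is why the full closure problem is needed to pin down the effective permeability for a given microstructure.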
Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations
Directory of Open Access Journals (Sweden)
José Gaite
2013-05-01
Full Text Available Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure or to define their size in terms of small-scale baryonic physics.
Scale breaking effects in the quark-parton model for large P perpendicular phenomena
International Nuclear Information System (INIS)
Baier, R.; Petersson, B.
1977-01-01
We discuss how the scaling violations suggested by an asymptotically free parton model, i.e., the Q^2-dependence of the transverse momentum of partons within hadrons, may affect the parton model description of large p-perpendicular phenomena. We show that such a mechanism can provide an explanation for the magnitude of the opposite-side correlations and their dependence on the trigger momentum. (author)
Lasota, Rafal; Pierscieniak, Karolina; Garcia, Pascale; Simon-Bouhet, Benoit; Wolowicz, Maciej
2016-11-01
The aim of the study was to determine genetic diversity in the soft-shell clam Mya arenaria on a wide geographical scale using mtDNA COI gene sequences. Low levels of genetic diversity were found, which can most likely be explained by a bottleneck effect during Pleistocene glaciations and/or selection. The geographical genetic structuring of the studied populations was also very low. The star-like phylogeny of the haplotypes indicates a relatively recent, rapid population expansion following the glaciation period and repeated expansion following the founder effect(s) after the initial introduction of the soft-shell clam to Europe. North American populations are characterized by the largest number of haplotypes, including rare ones, as expected for native populations. Because of the founder effect connected with the initial and repeated expansion events, European populations have significantly lower numbers of haplotypes in comparison with those of North America. We also observed subtle differentiation among populations from the North and Baltic Seas. The recently founded soft-shell clam population in the Black Sea exhibited the highest genetic similarity to Baltic populations, which confirmed the hypothesis that M. arenaria was introduced to the Gulf of Odessa from the Baltic Sea. The most enigmatic results were obtained for populations from the White Sea, which were characterized by high genetic affinity with American populations.
International Nuclear Information System (INIS)
Caurier, E.; Nowacki, F.; Menendez, J.; Poves, A.
2007-02-01
Large scale shell model calculations, with dimensions reaching 10 9 , are carried out to describe the recently observed deformed (ND) and superdeformed (SD) bands based on the first and second excited 0 + states of 40 Ca at 3.35 MeV and 5.21 MeV respectively. A valence space comprising two major oscillator shells, sd and pf, can accommodate most of the relevant degrees of freedom of this problem. The ND band is dominated by configurations with four particles promoted to the pf-shell (4p-4h in short), the SD band by 8p-8h configurations. The ground state of 40 Ca is strongly correlated, but the closed shell still amounts to 65%. The energies of the bands are very well reproduced by the calculations. The out-band transitions connecting the SD band with other states are very small and depend on the details of the mixing among the different np-nh configurations; in spite of that, the calculation describes them reasonably. For the in-band transition probabilities along the SD band, we predict a fairly constant transition quadrupole moment Q 0 (t) ∼ 170 e fm 2 up to J=10, that decreases toward the higher spins. We submit also that the J=8 states of the deformed and superdeformed bands are maximally mixed. (authors)
International Nuclear Information System (INIS)
Caurier, E.; Nowacki, F.; Menendez, J.; Poves, A.
2007-01-01
Large-scale shell-model calculations, with dimensions reaching 10 9 , are carried out to describe the recently observed deformed (ND) and superdeformed (SD) bands based on the first and second excited 0 + states of 40 Ca at 3.35 and 5.21 MeV, respectively. A valence space comprising two major oscillator shells, sd and pf, can accommodate most of the relevant degrees of freedom of this problem. The ND band is dominated by configurations with four particles promoted to the pf shell (4p-4h in short). The SD band by 8p-8h configurations. The ground state of 40 Ca is strongly correlated, but the closed shell still amounts to 65%. The energies of the bands are very well reproduced by the calculations. The out-band transitions connecting the SD band with other states are very small and depend on the details of the mixing among the different np-nh configurations; in spite of that, the calculation describes them reasonably. For the in-band transition probabilities along the SD band, we predict a fairly constant transition quadrupole moment Q 0 (t)∼170 e fm 2 up to J=10 that decreases toward the higher spins. We submit also that the J=8 states of the deformed and superdeformed bands are maximally mixed
Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail
2016-01-01
With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we describe an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self-generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
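A minimal sketch of such a homeostatic growth rule (a toy model, not the NEST implementation; here a neuron's activity is simply proxied by its synapse count):

```python
import random

def grow_homeostatic(n_neurons=20, target=5, steps=200, seed=0):
    """Neurons whose activity proxy (here: synapse count) is below the
    target offer a free synaptic element; free elements are then paired
    at random into new synapses, as in homeostatic structural
    plasticity. Growth stops once every neuron reaches the target."""
    rng = random.Random(seed)
    degree = [0] * n_neurons
    synapses = set()
    for _ in range(steps):
        free = [i for i in range(n_neurons) if degree[i] < target]
        rng.shuffle(free)
        while len(free) >= 2:
            a, b = free.pop(), free.pop()
            edge = (min(a, b), max(a, b))
            if a != b and edge not in synapses:
                synapses.add(edge)
                degree[a] += 1
                degree[b] += 1
    return degree, synapses
```

The real model additionally deletes elements when activity overshoots the target and uses a calcium-based activity trace rather than the degree, but the create-and-pair mechanism above is the core loop.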
Dednam, W.; Botha, A. E.
2015-01-01
Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solutes, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
International Nuclear Information System (INIS)
Dednam, W; Botha, A E
2015-01-01
Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solutes, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
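The two routes to the Kirkwood-Buff integrals compared in the paper can be sketched as follows (a schematic implementation; the ideal-gas sanity check at the end, where both routes must give G = 0, is illustrative):

```python
import numpy as np

def kb_from_rdf(r, g):
    """Traditional route: running integral over the radial distribution
    function, G = 4*pi * Int (g(r) - 1) r^2 dr (trapezoidal rule)."""
    f = (g - 1.0) * r**2
    return 4.0 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

def kb_from_fluctuations(n_i, n_j, volume, same_species=False):
    """Finite-size-scaling route: particle-number fluctuations in an
    open subvolume, G_ij = V*(<NiNj> - <Ni><Nj> - d_ij<Ni>)/(<Ni><Nj>)."""
    mi, mj = n_i.mean(), n_j.mean()
    cov = (n_i * n_j).mean() - mi * mj
    if same_species:
        cov -= mi  # subtract the self term for i == j
    return volume * cov / (mi * mj)

# Ideal-gas check: g(r) = 1 everywhere, counts are Poisson distributed
r = np.linspace(0.01, 5.0, 500)
g_ideal = np.ones_like(r)
counts = np.random.default_rng(0).poisson(50.0, 100000)
```

For an ideal gas the Poisson variance exactly cancels the self term, so the fluctuation route recovers G = 0 without ever computing a radial distribution function, which is the efficiency argument made in the abstract.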
The linearly scaling 3D fragment method for large scale electronic structure calculations
Energy Technology Data Exchange (ETDEWEB)
Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)
2009-07-01
The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
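The essence of the divide-and-conquer patching can be illustrated in a 2D analogue (a schematic of the cancellation idea only, not the actual LS3DF code): at every block corner, overlapping fragments of 2x2, 2x1, 1x2 and 1x1 blocks are combined with signs +, -, -, +, and the signed fragments cover each grid cell exactly once, so contributions from artificial fragment boundaries cancel.

```python
import numpy as np

def ls3df_coverage_2d(n_blocks=4, block=4):
    """2D analogue of the LS3DF patching scheme on a periodic grid.
    Returns the signed coverage count per grid cell; the scheme is
    consistent iff every cell is covered exactly once."""
    n = n_blocks * block
    cover = np.zeros((n, n))
    for bi in range(n_blocks):
        for bj in range(n_blocks):
            x, y = bi * block, bj * block
            for (sx, sy), sign in {(2, 2): +1, (2, 1): -1,
                                   (1, 2): -1, (1, 1): +1}.items():
                xs = np.arange(x, x + sx * block) % n
                ys = np.arange(y, y + sy * block) % n
                frag = np.zeros((n, n))
                frag[np.ix_(xs, ys)] = sign
                cover += frag
    return cover
```

Each cell sits inside four 2x2 fragments, two 2x1, two 1x2 and one 1x1 fragment, giving 4 - 2 - 2 + 1 = 1; the 3D method uses the analogous eight fragment shapes per corner.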
Linear velocity fields in non-Gaussian models for large-scale structure
Scherrer, Robert J.
1992-01-01
Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.
Prototype Vector Machine for Large Scale Semi-Supervised Learning
Energy Technology Data Exchange (ETDEWEB)
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
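The low-rank kernel approximation underlying the prototype idea can be sketched as a Nystrom-type factorization (illustrative code only; the published PVM adds the model-representation criterion and the graph regularizer on top of this):

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between row-wise point sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def prototype_kernel(X, prototypes, jitter=1e-10):
    """Nystrom-style low-rank approximation of the full kernel matrix
    using prototype points: K ~= K_np K_pp^{-1} K_pn. With m << n
    prototypes this costs O(n m^2) instead of O(n^3)."""
    k_np = rbf(X, prototypes)
    k_pp = rbf(prototypes, prototypes) + jitter * np.eye(len(prototypes))
    return k_np @ np.linalg.solve(k_pp, k_np.T)
```

In the limiting case where every data point is a prototype, the factorization reproduces the full kernel matrix; the interesting regime is a small, well-chosen prototype set that keeps the approximation error low.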
International Nuclear Information System (INIS)
Jacques, D.; Perko, J.; Seetharam, S.; Mallants, D.
2012-01-01
This paper presents a methodology to assess the spatial-temporal evolution of chemical degradation fronts in real-size concrete structures typical of a near-surface radioactive waste disposal facility. The methodology consists of the abstraction of a so-called full (complicated) model accounting for the multi-component, multi-scale nature of concrete to an abstracted (simplified) model which simulates chemical concrete degradation based on a single component in the aqueous and solid phase. The abstracted model is verified against chemical degradation fronts simulated with the full model under both diffusive and advective transport conditions. Implementation in the multi-physics simulation tool COMSOL allows simulation of the spatial-temporal evolution of chemical degradation fronts in large-scale concrete structures. (authors)
Large scale hydrogeological modelling of a low-lying complex coastal aquifer system
DEFF Research Database (Denmark)
Meyer, Rena
2018-01-01
intrusion. In this thesis a new methodological approach was developed to combine 3D numerical groundwater modelling with a detailed geological description and hydrological, geochemical and geophysical data. It was applied to a regional scale saltwater intrusion in order to analyse and quantify...... the groundwater flow dynamics, identify the driving mechanisms that formed the saltwater intrusion to its present extent and to predict its progression in the future. The study area is located in the transboundary region between Southern Denmark and Northern Germany, adjacent to the Wadden Sea. Here, a large-scale...... parametrization schemes that accommodate hydrogeological heterogeneities. Subsequently, density-dependent flow and transport modelling of multiple salt sources was successfully applied to simulate the formation of the saltwater intrusion during the last 4200 years, accounting for historic changes in the hydraulic...
Large scale and big data processing and management
Sakr, Sherif
2014-01-01
Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments. The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas
Shell model studies in the N = 54 isotones 99Rh, 100Pd
International Nuclear Information System (INIS)
Ghugre, S.S.; Sarkar, S.; Chintalapudi, S.N.
1996-01-01
The shell model is used to investigate the observed level sequences in 99 Rh and 100 Pd within the spherical shell model framework. Shell model calculations have been performed using the code OXBASH
Canuto, V. M.
1994-01-01
The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ~ 10^8 for the planetary boundary layer and Re ~ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the required number of spatial grid points, N ~ Re^(9/4), exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The
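The Smagorinsky closure mentioned above is compact enough to state directly: the subgrid eddy viscosity is nu_t = (C_s * Delta)^2 * |S|, built from the resolved strain rate only. A minimal 2D sketch using central differences on a uniform grid (the value of the constant C_s is a typical literature choice, not a universal one):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Smagorinsky SGS eddy viscosity on a uniform 2D grid:
    nu_t = (cs*dx)**2 * |S|, with |S| = sqrt(2 S_ij S_ij) and S_ij the
    resolved strain-rate tensor. Axis 0 is x and axis 1 is y."""
    du_dx, du_dy = np.gradient(u, dx)
    dv_dx, dv_dy = np.gradient(v, dx)
    s11, s22 = du_dx, dv_dy
    s12 = 0.5 * (du_dy + dv_dx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag

# Pure shear u = gamma*y gives |S| = gamma exactly (linear field)
dx, gamma = 0.1, 2.0
x, y = np.meshgrid(np.arange(0, 1, dx), np.arange(0, 1, dx), indexing="ij")
nu_t = smagorinsky_nu_t(gamma * y, np.zeros_like(y), dx)
```

Because nu_t depends only on |S|, the model is blind to buoyancy, anisotropy, rotation and stable stratification, which is exactly the limitation the abstract points out.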
Assessing Human Modifications to Floodplains using Large-Scale Hydrogeomorphic Floodplain Modeling
Morrison, R. R.; Scheel, K.; Nardi, F.; Annis, A.
2017-12-01
Human modifications to floodplains for water resource and flood management purposes have significantly transformed river-floodplain connectivity dynamics in many watersheds. Bridges, levees, reservoirs, shifts in land use, and other hydraulic engineering works have altered flow patterns and caused changes in the timing and extent of floodplain inundation processes. These hydrogeomorphic changes have likely resulted in negative impacts to aquatic habitat and ecological processes. The availability of large-scale topographic datasets at high resolution provides an opportunity for detecting anthropogenic impacts by means of geomorphic mapping. We have developed and are implementing a methodology for comparing a hydrogeomorphic floodplain mapping technique to hydraulically-modeled floodplain boundaries to estimate floodplain loss due to human activities. Our hydrogeomorphic mapping methodology assumes that river valley morphology intrinsically includes information on flood-driven erosion and depositional phenomena. We use a digital elevation model-based algorithm to identify the floodplain as the area of the fluvial corridor lying below water reference levels, which are estimated using a simplified hydrologic model. Results from our hydrogeomorphic method are compared to hydraulically-derived flood zone maps and spatial datasets of levee-protected areas to explore where water management features, such as levees, have changed floodplain dynamics and landscape features. Parameters associated with commonly used F-index functions are quantified and analyzed to better understand how floodplain areas have been reduced within a basin. Preliminary results indicate that the hydrogeomorphic floodplain model is useful for quickly delineating floodplains at large watershed scales, but further analyses are needed to understand the caveats for using the model in determining floodplain loss due to levees. We plan to continue this work by exploring the spatial dependencies of the F
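A stripped-down version of the DEM-based delineation idea reads as follows (a hypothetical helper on a synthetic valley; real applications use flow-routing distances and hydrologically estimated reference levels rather than a brute-force nearest-channel search):

```python
import numpy as np

def hydrogeomorphic_floodplain(dem, channel_mask, stage):
    """Flag cells lying no higher than `stage` above the elevation of
    the nearest channel cell (a simplified detrended-DEM criterion).
    Brute-force nearest-neighbor search, fine for small grids."""
    ch = np.argwhere(channel_mask)
    iy, ix = np.indices(dem.shape)
    flat = np.stack([iy.ravel(), ix.ravel()], axis=1)
    d2 = ((flat[:, None, :] - ch[None, :, :]) ** 2).sum(-1)
    nearest = ch[np.argmin(d2, axis=1)]
    channel_elev = dem[nearest[:, 0], nearest[:, 1]].reshape(dem.shape)
    return dem - channel_elev <= stage
```

On a V-shaped synthetic valley with a channel along its axis, raising `stage` widens the flagged corridor symmetrically, which is the behavior the F-index style analyses quantify basin-wide.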
Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean
Digital Repository Service at National Institute of Oceanography (India)
Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.
In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...
Development of Full-Scale Ultrathin Shell Reflector
Directory of Open Access Journals (Sweden)
Durmuş Türkmen
2012-01-01
Full Text Available It is aimed that a new ultrathin shell composite reflector is developed considering different design options to optimize the stiffness/mass ratio, cost, and manufacturing. The reflector is an offset parabolic reflector with a diameter of 6 m, a focal length of 4.8 m, and an offset of 0.3 m and has the ability of folding and self-deploying. For Ku-band missions a full-scale offset parabolic reflector antenna is designed by considering different concepts of stiffening: (i reflective surface and skirt, (ii reflective surface and radial ribs, and (iii reflective surface, skirt, and radial ribs. In a preliminary study, the options are modeled using ABAQUS finite element program and compared with respect to their mass, fundamental frequency, and thermal surface errors. It is found that the option of reflective surface and skirt is more advantageous. The option is further analyzed to optimize the stiffness/mass ratio considering the design parameters of material thickness, width of the skirt, and ply angles. Using the TOPSIS method is determined the best reflector concept among thirty different designs. Accordingly, new design can be said to have some advantages in terms of mass, natural frequency, number of parts, production, and assembly than both SSBR and AstroMesh reflectors.
State of the Art in Large-Scale Soil Moisture Monitoring
Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.;
2013-01-01
Soil moisture is an essential climate variable influencing land-atmosphere interactions, an essential hydrologic variable impacting rainfall-runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years, creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.
RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system
DEFF Research Database (Denmark)
Jensen, Tue Vissing; Pinson, Pierre
2017-01-01
Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.
Clustering of 1p-shell nuclei in the framework of the shell model
International Nuclear Information System (INIS)
Kwasniewicz, E.
1991-01-01
The two- and three-fragment clustering of the 1p-shell nuclei has been studied in the framework of the shell model. The absolute probabilities of the required types of clustering in a given nucleus have been obtained by projecting its realistic shell-model wavefunction onto the suitable subspace of the orthonormal, completely antisymmetric two- or three-cluster states. With the aid of these data the selectivity in population of final states produced in multinucleon transfer reactions has been discussed. This problem has also been considered in the approach where the exchange of nucleons between clusters has been neglected. This has made it possible to demonstrate the role of the complete antisymmetrization in predicting the intensities of states populated in multinucleon transfer reactions. The compact theory of the multinucleon one- and two-cluster spectroscopic amplitudes has been formulated. Examples of studying the nuclear structure and reactions with the aid of these spectroscopic amplitudes have been presented. (author)
International Nuclear Information System (INIS)
Forssen, C.; Caurier, E.; Navratil, P.
2009-01-01
Recently, charge radii and ground-state electromagnetic moments of Li and Be isotopes were measured precisely. We have performed large-scale ab initio no-core shell model calculations for these isotopes using high-precision nucleon-nucleon potentials. The isotopic trends of our computed charge radii and quadrupole and magnetic-dipole moments are in good agreement with experimental results, with the exception of the 11 Li charge radius. The magnetic moments in particular are well described, whereas the absolute magnitudes of the quadrupole moments are about 10% too small. The small magnitude of the 6 Li quadrupole moment is reproduced, and with the CD-Bonn NN potential, so is its correct sign.
International Nuclear Information System (INIS)
1999-01-01
The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the Task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)
BOWOOSS: bionic optimized wood shells with sustainability
Pohl, Göran
2011-04-01
In architecture, shell construction is used for the most efficient, large spatial structures. Until now the use of wood has played only a marginal role in built examples of such architecture, although this material offers manifold advantages, especially against the background of accelerating resource shortages and increasing requirements concerning the energy balance. For the implementation of shells, nature offers a wide range of suggestions. The focus of the examinations is on the shells of marine plankton, especially of diatoms, whose richness in species promises the discovery of entirely new construction principles. The project aims to transfer advantageous features of these organisms to industrially produced, modular wood shell structures. Currently these structures are being transferred into CAD models, which help to perform stress analysis by computational methods. Micro as well as macro structures are the subject of diverse consideration, allowing the necessary conclusions for an architectural design to be drawn. The insights from these tests are the basis for the development of physical models on different scales, which are used to verify the different approaches. Another important aim promoted in the project is to enhance the competitiveness of timber construction. Downsizing the prefabricated structural elements leads to considerably lower transportation costs, as abnormal loads can largely be avoided and means of transportation can be loaded more efficiently, so that an important contribution to sustainability in the field of architecture can also be made.
Wosnik, Martin; Bachant, Peter
2016-11-01
Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines of diameter D = O(1 m) using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and with body-fitted-mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.
Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations
Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto
2018-04-01
Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10^4, and the radius ratio η = ri/ro is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = -0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of cs = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential of using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
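As an aside, the static Smagorinsky closure benchmarked above can be illustrated with a minimal sketch: the eddy viscosity is ν_t = (c_s Δ)² |S|, and for pure plane shear the strain-rate magnitude reduces to |du/dy|. The shear rate, filter width and grid size below are hypothetical, not values from the simulations.

```python
def smagorinsky_nu_t(dudy, delta, cs=0.1):
    """Static Smagorinsky eddy viscosity nu_t = (cs*delta)**2 * |S|.
    For plane shear u(y) the only nonzero strain-rate components are
    S12 = S21 = 0.5*du/dy, so |S| = sqrt(2*Sij*Sij) = |du/dy|."""
    return [(cs * delta) ** 2 * abs(g) for g in dudy]

# Hypothetical uniform shear du/dy = 100 1/s, filter width delta = 0.01 m
nu_t = smagorinsky_nu_t([100.0] * 8, delta=0.01, cs=0.1)
```

Raising cs, as in the "over-damped" runs, scales the eddy viscosity quadratically, which is why it damps the small scales so strongly.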
A finite element model for nonlinear shells of revolution
International Nuclear Information System (INIS)
Cook, W.A.
1979-01-01
A shell-of-revolution model was developed to analyze impact problems associated with the safety analysis of nuclear material shipping containers. The nonlinear shell theory presented by Eric Reissner in 1972 was used to develop our model. Reissner's approach includes transverse shear deformation and moments turning about the middle surface normal. With these features, this approach is valid for both thin and thick shells. His theory is formulated in terms of strain and stress resultants that refer to the undeformed geometry. This nonlinear shell model is developed using the virtual work principle associated with Reissner's equilibrium equations. First, the virtual work principle is modified for incremental loading; then it is linearized by assuming that the nonlinear portions of the strains are known. By iteration, equilibrium is then approximated for each increment. A benefit of this approach is that this iteration process makes it possible to use nonlinear material properties. (orig.)
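The incremental-loading-plus-iteration strategy described in this abstract can be sketched on a one-degree-of-freedom analogue. The hardening-spring law below is hypothetical and stands in for the linearized shell equations; only the solution strategy (apply the load in increments, iterate to equilibrium with a linearized stiffness) mirrors the approach.

```python
def solve_incremental(p_total, n_inc=10, k0=100.0, k1=5000.0, tol=1e-10):
    """Apply the load in increments; within each increment iterate
    (Newton) on the linearized equations until the residual between the
    external load and the internal force f_int(u) = k0*u + k1*u**3 vanishes."""
    u = 0.0
    for i in range(1, n_inc + 1):
        p = p_total * i / n_inc              # current load level
        for _ in range(50):                  # equilibrium iterations
            residual = p - (k0 * u + k1 * u ** 3)
            if abs(residual) < tol:
                break
            k_tangent = k0 + 3.0 * k1 * u ** 2   # linearized (tangent) stiffness
            u += residual / k_tangent
    return u

u = solve_incremental(p_total=50.0)  # equilibrium displacement under full load
```

Because each increment starts from the converged state of the previous one, the linearization stays close to the true response, which is the same reason the incremental scheme accommodates nonlinear material properties.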
The role of large-scale, extratropical dynamics in climate change
Energy Technology Data Exchange (ETDEWEB)
Shepherd, T.G. [ed.
1994-02-01
The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.
Directory of Open Access Journals (Sweden)
Wenying Liu
2015-03-01
For the interconnected power system with large-scale wind power, the problem of small signal stability has become the bottleneck restricting the sending-out of wind power as well as the security and stability of the whole power system. To address this issue, this paper establishes a small signal stability region boundary model of the interconnected power system with large-scale wind power based on catastrophe theory, providing a new method for analyzing small signal stability. First, we analyzed the typical characteristics and the mathematical model of the interconnected power system with wind power and pointed out that conventional methods cannot directly identify the topological properties of small signal stability region boundaries. Second, adopting catastrophe theory, we established a small signal stability region boundary model of the interconnected power system with large-scale wind power in two-dimensional power injection space and extended it to multiple dimensions to obtain the boundary model in multidimensional power injection space. Third, we analyzed qualitatively the changes in the topological properties of the small signal stability region boundary caused by large-scale wind power integration. Finally, we built simulation models in DIgSILENT/PowerFactory, and the simulation results verified the correctness and effectiveness of the proposed model.
RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system
Jensen, Tue V.; Pinson, Pierre
2017-11-01
Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.
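The scaling of renewable generation to an envisioned penetration level mentioned above amounts to multiplying the weather-driven series by a single factor. A minimal sketch (the series and the 50% target are hypothetical, and the dataset's actual scaling conventions may differ):

```python
def scale_to_penetration(renewable, demand, target_share):
    """Return the factor and scaled series such that the scaled renewable
    energy covers `target_share` of total demand energy over the period."""
    factor = target_share * sum(demand) / sum(renewable)
    return factor, [factor * r for r in renewable]

wind = [10.0, 30.0, 20.0, 40.0]     # hypothetical hourly wind output (MW)
load = [100.0, 120.0, 110.0, 90.0]  # hypothetical hourly demand (MW)
f, wind_scaled = scale_to_penetration(wind, load, target_share=0.5)
```

Note that the factor fixes penetration in energy terms only; instantaneous wind output can still exceed or fall short of demand hour by hour.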
Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey
2014-04-15
In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.
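The Bayesian Maximum Entropy framework used here merges information across scales in a far more general way than can be shown briefly; as a heavily simplified illustration of the underlying idea, combining a coarse CTM estimate with a local LUR estimate by their precisions looks like this (all numbers hypothetical, and this is not the BME method itself):

```python
def fuse_estimates(ctm, lur, var_ctm, var_lur):
    """Precision-weighted fusion of a coarse CTM estimate and a local LUR
    estimate at one location. A drastic simplification of Bayesian Maximum
    Entropy, shown only to illustrate merging two scales of information
    with uncertainty weights."""
    w_ctm = 1.0 / var_ctm
    w_lur = 1.0 / var_lur
    mean = (w_ctm * ctm + w_lur * lur) / (w_ctm + w_lur)
    var = 1.0 / (w_ctm + w_lur)     # fused variance is always smaller
    return mean, var

# Hypothetical yearly NO2 estimates (ug/m3) with their error variances
mean, var = fuse_estimates(ctm=30.0, lur=42.0, var_ctm=16.0, var_lur=4.0)
```

The fused estimate leans toward the more precise LUR value while retaining the large-scale CTM information, mirroring how the framework lets each source dominate at its own scale.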
Shell structure from N=Z (100Sn) to N>>Z (78Ni)
International Nuclear Information System (INIS)
Grawe, H.
2003-01-01
The shell structure of 100 Sn shows striking resemblance to 56 Ni one major shell below. Large-scale shell model calculations employing realistic interactions derived from effective NN potentials and allowing for up to 4p4h excitations of the 100 Sn core account very well for the spectroscopy of key neighbours 102,103 Sn, 98 Cd and 94 Ag, as inferred from level energies, isomerism, E2 strengths and Gamow-Teller (GT) decay of high-spin states. Recent β- decay studies of 101-104 Sn using the sulphurisation ISOL technique open the perspective to study the 100 Sn GT resonance. At N>>Z the persistence of the N=50 and the weakness of the N=40 shells are traced back to the monopole interaction in S=0 proton-neutron (πν) pairs of nucleons, a scenario which can be generalised to account for the new N=6,16(14),34(32) magicity in light neutron-rich nuclei. (orig.)
Model of large scale man-machine systems with an application to vessel traffic control
Wewerinke, P.H.; van der Ent, W.I.; ten Hove, D.
1989-01-01
Mathematical models are discussed to deal with complex large-scale man-machine systems such as vessel (air, road) traffic and process control systems. Only interrelationships between subsystems are assumed. Each subsystem is controlled by a corresponding human operator (HO). Because of the
Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.
2018-04-01
Realistic simulation of large-scale circulation patterns associated with El Niño-Southern Oscillation (ENSO) is vital in coupled models in order to represent teleconnections to different regions of globe. The diversity in representing large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. CMIP5 models have been classified into three groups based on the correlation between Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies, models in group 1 (G1) overestimated El Niño-ISM teleconections and group 3 (G3) models underestimated it, whereas these teleconnections are better represented in group 2 (G2) models. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Anomalous low-level anticyclonic circulation anomalies over the southeastern TIO and western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west to 60° E and 120° E, respectively. This bias in circulation patterns implies dry wind advection from extratropics/midlatitudes to Indian subcontinent. In addition to this, large-scale upper level convergence together with lower level divergence over ISM region corresponding to El Niño are stronger in G1 models than in observations. Thus, unrealistic shift in low-level circulation centers corroborated by upper level circulation changes are responsible for overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models unlike in the observations. Further large-scale circulation anomalies over the Pacific and ISM region are misrepresented during El Niño years in G3 models. Too strong upper-level convergence away from Indian subcontinent and too weak WSNP cyclonic circulation are prominent in most of G3 models in which ENSO-ISM teleconnections are
A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE
Energy Technology Data Exchange (ETDEWEB)
RODRIGUEZ, MARKO A. [Los Alamos National Laboratory; BOLLEN, JOHAN [Los Alamos National Laboratory; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory
2007-01-30
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, to support usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Monte Carlo modelling of large scale NORM sources using MCNP.
Wallace, J D
2013-12-01
The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source to detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source to detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Structural-performance testing of titanium-shell lead-matrix container MM2
Energy Technology Data Exchange (ETDEWEB)
Hosaluk, L. J.; Barrie, J. N.
1992-05-15
This report describes the hydrostatic structural-performance testing of a half-scale, titanium-shell, lead-matrix container (MM2) with a large, simulated volumetric casting defect. Mechanical behaviour of the container is assessed from extensive surface-strain measurements and post-test non-destructive and destructive examinations. Measured strain data are compared briefly with analytical results from a finite-element model of a previous test prototype, MM1, and with data generated by a finite-difference computer code. Finally, procedures are recommended for more detailed analytical modelling. (auth)
Scheduling of power generation: a large-scale mixed-variable model
Prékopa, András; Strazicky, Beáta; Deák, István; Hoffer, János; Németh, Ágoston; Potecz, Béla
2014-01-01
The book contains a description of a real-life application of modern mathematical optimization tools to an important problem for power networks. The objective is the modelling and calculation of the optimal daily scheduling of power generation by thermal power plants to satisfy all demands at minimum cost, in such a way that the generation and transmission capacities as well as the demands at the nodes of the system appear in an integrated form. The physical parameters of the network are also taken into account. The resulting large-scale mixed-variable problem is relaxed in a smart, practical way to allow for fast numerical solution.
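As a toy illustration of the dispatch side of this problem, ignoring the network constraints, start-up decisions and integrated demand treatment that make the book's model a large-scale mixed-variable one, a greedy merit-order dispatch over hypothetical generators:

```python
def merit_order_dispatch(generators, demand):
    """Greedy merit-order dispatch: fill demand from the cheapest marginal
    cost upward, respecting capacities. Returns (dispatch, total_cost).
    Generators are (name, capacity_MW, marginal_cost_per_MWh) tuples."""
    dispatch, cost, remaining = {}, 0.0, demand
    for name, capacity, marginal_cost in sorted(generators, key=lambda g: g[2]):
        q = min(capacity, remaining)
        dispatch[name] = q
        cost += q * marginal_cost
        remaining -= q
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return dispatch, cost

gens = [("coal", 300.0, 25.0), ("gas", 200.0, 40.0), ("oil", 100.0, 80.0)]
plan, total = merit_order_dispatch(gens, demand=450.0)
```

The full problem is much harder precisely because commitment decisions (on/off states) and network limits break this simple cost ordering.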
Spatiotemporal property and predictability of large-scale human mobility
Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin
2018-04-01
Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and high predictability. Furthermore, a scale-free mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models in scenarios of large geographical scales.
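The two ingredients named above, exploration and preferential return, can be sketched as follows. The exploration probability ρ·S^(−γ) and the parameter values are illustrative only, and the Gaussian distribution on the exploration tendency described in the abstract is omitted here for brevity.

```python
import random

def simulate_mobility(steps, rho=0.6, gamma=0.2, seed=42):
    """Exploration vs preferential return: with probability rho*S**(-gamma)
    (S = number of locations visited so far) the walker explores a new
    location; otherwise it returns to a past location with probability
    proportional to that location's visit frequency."""
    rng = random.Random(seed)
    visits = {0: 1}                      # location id -> visit count
    next_id = 1
    for _ in range(steps):
        s = len(visits)
        if rng.random() < rho * s ** (-gamma):
            visits[next_id] = 1          # explore a brand-new location
            next_id += 1
        else:                            # preferential return
            total = sum(visits.values())
            r, acc = rng.random() * total, 0.0
            for loc, count in visits.items():
                acc += count
                if r <= acc:
                    visits[loc] += 1
                    break
    return visits

trace = simulate_mobility(1000)
```

Because the exploration probability decays with the number of visited locations, the walker's set of locations grows sublinearly, one of the hallmarks of real mobility traces.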
Wang, Xujing; Becker, Frederick F.; Gascoyne, Peter R. C.
2010-01-01
The scale-invariant property of the cytoplasmic membrane of biological cells is examined by applying the Minkowski–Bouligand method to digitized scanning electron microscopy images of the cell surface. The membrane is found to exhibit fractal behavior, and the derived fractal dimension gives a good description of its morphological complexity. Furthermore, we found that this fractal dimension correlates well with the specific membrane dielectric capacitance derived from the electrorotation measurements. Based on these findings, we propose a new fractal single-shell model to describe the dielectrics of mammalian cells, and compare it with the conventional single-shell model (SSM). We found that while both models fit with experimental data well, the new model is able to eliminate the discrepancy between the measured dielectric property of cells and that predicted by the SSM. PMID:21198103
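The Minkowski-Bouligand (box-counting) estimate used in this study can be sketched for point sets on an integer grid: the fractal dimension is the slope of log N(s) against log(1/s), where N(s) is the number of occupied boxes of size s. The test sets below (a straight line and a filled square) are illustrative stand-ins, not the cell-membrane images.

```python
import math

def box_count_dimension(points, sizes=(1, 2, 4, 8, 16, 32)):
    """Minkowski-Bouligand (box-counting) dimension: count occupied boxes
    N(s) at each box size s, then least-squares fit log N vs log(1/s)."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

line = [(i, i) for i in range(256)]                      # dimension ~1
square = [(i, j) for i in range(64) for j in range(64)]  # dimension ~2
```

For membrane images the same counting is applied to the digitized surface contour, and the intermediate slope between 1 and 2 quantifies its morphological complexity.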
Delamater, N D; Wilson, D C; Kyrala, G A; Seifter, A; Hoffman, N M; Dodd, E; Singleton, R; Glebov, V; Stoeckl, C; Li, C K; Petrasso, R; Frenje, J
2008-10-01
We present the calculations and preliminary results from experiments on the Omega laser facility using D-3He-filled plastic capsule implosions in gold Hohlraums. These experiments aim to develop a technique to measure shell ρr and capsule unablated mass with proton spectroscopy and will be applied to future National Ignition Facility (NIF) experiments with ignition-scale capsules. The Omega Hohlraums are 1900 μm in length × 1200 μm in diameter and have a 70% laser entrance hole. This is approximately a 0.2 NIF-scale ignition Hohlraum and reaches temperatures of 265-275 eV, similar to those during the peak of the NIF drive. These capsules can be used as a diagnostic of shell ρr, since the D-3He gas fill produces 14.7 MeV protons in the implosion, which escape through the shell and produce a proton spectrum that depends on the integrated ρr of the remaining shell mass. The neutron yield, proton yield, and spectra change with capsule shell thickness as the unablated mass or remaining capsule ρr changes. Proton stopping models are used to infer shell unablated mass and shell ρr from the proton spectra measured with different filter thicknesses. The experiment is well modeled with respect to Hohlraum energetics, neutron yields, and x-ray imploded core image size, but there are discrepancies between the observed and simulated proton spectra.
Equivalence of the spherical and deformed shell-model approach to intruder states
International Nuclear Information System (INIS)
Heyde, K.; Coster, C. de; Ryckebusch, J.; Waroquier, M.
1989-01-01
We point out that the description of intruder states, incorporating particle-hole (p-h) excitation across a closed shell in the spherical shell model or a description starting from the Nilsson model are equivalent. We furthermore indicate that the major part of the nucleon-nucleon interaction, responsible for the low excitation energy of intruder states comes as a two-body proton-neutron quadrupole interaction in the spherical shell model. In the deformed shell model, quadrupole binding energy is gained mainly through the one-body part of the potential. (orig.)
Statistical properties of the nuclear shell-model Hamiltonian
International Nuclear Information System (INIS)
Dias, H.; Hussein, M.S.; Oliveira, N.A. de
1986-01-01
The statistical properties of realistic nuclear shell-model Hamiltonians are investigated in sd-shell nuclei. The probability distribution of the basis-vector amplitude is calculated and compared with the Porter-Thomas distribution. The relevance of the results to the calculation of the giant resonance mixing parameter is pointed out. (Author) [pt
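The Porter-Thomas distribution against which the amplitude distribution is compared is the chi-squared distribution with one degree of freedom: if basis-vector amplitudes x are Gaussian, the normalized strengths y = x² have mean 1 and variance 2. A quick numerical check of those moments (pure illustration with synthetic Gaussian amplitudes, not shell-model data):

```python
import random

def squared_amplitude_moments(n=200000, seed=1):
    """Sample Gaussian amplitudes, square them, and return the mean and
    variance of the strengths; Porter-Thomas predicts mean 1, variance 2."""
    rng = random.Random(seed)
    ys = [rng.gauss(0.0, 1.0) ** 2 for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / n
    return mean, var

m, v = squared_amplitude_moments()
```

The large variance relative to the mean is why individual shell-model strengths fluctuate so strongly about their local average.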
Shell-model predictions for Lambda Lambda hypernuclei
International Nuclear Information System (INIS)
Gal, A.; Millener, D.
2011-01-01
It is shown how the recent shell-model determination of ΛN spin-dependent interaction terms in Λ hypernuclei allows for a reliable deduction of ΛΛ separation energies in ΛΛ hypernuclei across the nuclear p shell. Comparison is made with the available data, highlighting ΛΛ 11Be and ΛΛ 12Be, which have been suggested as possible candidates for the KEK-E373 HIDA event.
Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.
2017-12-01
In recent decades, with rapid economic growth, industrial development and urbanization, expanding pollution of polycyclic aromatic hydrocarbons (PAHs) has become a diversified and complicated phenomenon in China. However, the availability of sufficient monitoring activities for PAHs in multi-compartment and the corresponding multi-interface migration processes are still limited, especially at a large geographic area. In this study, we couple the Multimedia Fate Model (MFM) to the Community Multi-Scale Air Quality (CMAQ) model in order to consider the fugacity and the transient contamination processes. This coupled dynamic contaminant model can evaluate the detailed local variations and mass fluxes of PAHs in different environmental media (e.g., air, surface film, soil, sediment, water and vegetation) across different spatial (a county to country) and temporal (days to years) scales. This model has been applied to a large geographical domain of China at a 36 km by 36 km grid resolution. The model considers response characteristics of typical environmental medium to complex underlying surface. Results suggest that direct emission is the main input pathway of PAHs entering the atmosphere, while advection is the main outward flow of pollutants from the environment. In addition, both soil and sediment act as the main sink of PAHs and have the longest retention time. Importantly, the highest PAHs loadings are found in urbanized and densely populated regions of China, such as Yangtze River Delta and Pearl River Delta. This model can provide a good scientific basis towards a better understanding of the large-scale dynamics of environmental pollutants for land conservation and sustainable development. In a next step, the dynamic contaminant model will be integrated with the continental-scale hydrological and water resources model (i.e., Community Water Model, CWatM) to quantify a more accurate representation and feedbacks between the hydrological cycle and water quality at
Shell model in-water frequencies of the core barrel
International Nuclear Information System (INIS)
Takeuchi, K.; De Santo, D.F.
1980-01-01
Natural frequencies of a 1/24th-scale core barrel/vessel model in air and in water are measured by determining frequency responses to applied forces. The measured data are analyzed with the one-dimensional fluid-structure computer code MULTIFLEX, developed to calculate the hydraulic force. The fluid-structure interaction in the downcomer annulus is computed with a one-dimensional network model constructed to be equivalent to the two-dimensional fluid-structure interaction. The structural model incorporated in MULTIFLEX is substantially simpler than that necessary for structural analyses. For the computation of structural dynamics, the projector method is proposed, which can treat the beam mode by modal analysis and the other shell modes by a direct integration method. Computed in-air and in-water frequencies agree fairly well with the experimental data, verifying the MULTIFLEX technique.
Pair shell model description of collective motions
International Nuclear Information System (INIS)
Chen Hsitseng; Feng Dahsuan
1996-01-01
The shell model in the pair basis is reviewed with a case study of four particles in a spherical single-j shell. By analyzing the wave functions according to their pair components, the concept of optimum pairs was developed, which led to the proposal of a generalized pair mean-field method for solving the many-body problem. The salient feature of the method is its ability to handle, within the framework of the spherical shell model, a rotational system where the usual strong configuration-mixing complexity is so simplified that it becomes possible to obtain analytically the band-head energies and the moments of inertia. We have also examined the effects of pair truncation on rotation and found slow convergence when adding higher-spin pairs. Finally, we found that when the SDI and Q·Q interactions are of equal strength, the optimum pair approximation is still valid. (orig.)
Stability of bubble nuclei through Shell-Effects
Dietrich, Klaus; Pomorski, Krzysztof
1997-01-01
We investigate the shell structure of bubble nuclei in simple phenomenological shell models and study their binding energy as a function of the radii and of the numbers of neutrons and protons using Strutinsky's method. Shell effects come about, on the one hand, through the high degeneracy of levels with large angular momentum and, on the other, through the large energy gaps between states with different numbers of radial nodes. Shell energies down to -40 MeV are shown to occur for certain magic nuclei. E...
Multi-shell model of ion-induced nucleic acid condensation
Energy Technology Data Exchange (ETDEWEB)
Tolokh, Igor S. [Department of Computer Science, Virginia Tech, Blacksburg, Virginia 24061 (United States); Drozdetski, Aleksander V. [Department of Physics, Virginia Tech, Blacksburg, Virginia 24061 (United States); Pollack, Lois [School of Applied and Engineering Physics, Cornell University, Ithaca, New York 14853-3501 (United States); Baker, Nathan A. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Division of Applied Mathematics, Brown University, Providence, Rhode Island 02912 (United States); Onufriev, Alexey V. [Department of Computer Science, Virginia Tech, Blacksburg, Virginia 24061 (United States); Department of Physics, Virginia Tech, Blacksburg, Virginia 24061 (United States)
2016-04-21
We present a semi-quantitative model of the condensation of short nucleic acid (NA) duplexes induced by trivalent cobalt(III) hexammine (CoHex) ions. The model is based on partitioning the bound counterion distribution around a single NA duplex into “external” and “internal” ion binding shells distinguished by their proximity to the duplex helical axis. In the aggregated phase the shells overlap, which leads to significantly increased attraction between CoHex ions in these overlaps and the neighboring duplexes. The duplex aggregation free energy is decomposed into attractive and repulsive components in such a way that they can be represented by simple analytical expressions with parameters derived from molecular dynamics simulations and numerical solutions of the Poisson equation. The attractive term depends on the fractions of bound ions in the overlapping shells and the affinity of CoHex for the “external” shell of the nearly neutralized duplex. The repulsive components of the free energy are the duplex configurational entropy loss upon aggregation and the electrostatic repulsion of the duplexes that remains after neutralization by bound CoHex ions. The estimates of the aggregation free energy are consistent with the experimental range of NA duplex condensation propensities, including the unusually poor condensation of RNA structures and subtle sequence effects upon DNA condensation. The model predicts that, in contrast to DNA, RNA duplexes may condense into more tightly packed aggregates with a higher degree of duplex neutralization. An appreciable CoHex-mediated RNA-RNA attraction requires closer inter-duplex separation to engage CoHex ions (bound mostly in the “internal” shell of RNA) in short-range attractive interactions. The model also predicts that longer NA fragments will condense more readily than shorter ones. The ability of this model to explain experimentally observed trends in NA condensation lends support to the proposed NA condensation picture based on the multivalent
Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.
2012-12-01
Agricultural production utilizes regional resources (e.g. river water and groundwater) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate change and increasing demand due to population growth and economic development will strongly affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, few dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Moreover, irrigation management in response to subseasonal variability in weather and crop growth differs for each region and each crop. To deal with such variations, we used the Markov chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimation. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model is based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoff simulated by the land surface sub-model are input to the river routing sub-model of the H08 model. The part of regional water resources available for agriculture, simulated by the H08 model, is input as irrigation water to the land surface sub-model. The timing and amount of irrigation water are simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of
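The Markov chain Monte Carlo parameter estimation mentioned above can be sketched with a minimal Metropolis sampler. The toy forward model (yield proportional to the parameter), the synthetic observations and the Gaussian likelihood below are stand-ins for the actual coupled crop and irrigation sub-models, chosen only to show the mechanics:

```python
import math
import random

# Minimal Metropolis sampler for one hypothetical crop parameter. The toy
# forward model (yield = 2 * theta) and the synthetic observations replace
# the real crop growth and irrigation sub-models.

def log_likelihood(theta, observations, sigma=1.0):
    predictions = [2.0 * theta] * len(observations)
    return -0.5 * sum(((o - p) / sigma) ** 2
                      for o, p in zip(observations, predictions))

def metropolis(observations, n_iter=5000, step=0.1, seed=0):
    rng = random.Random(seed)
    theta = 0.0
    ll = log_likelihood(theta, observations)
    samples = []
    for _ in range(n_iter):
        proposal = theta + rng.gauss(0.0, step)
        ll_prop = log_likelihood(proposal, observations)
        # Accept with probability min(1, exp(ll_prop - ll)).
        if math.log(rng.random() + 1e-300) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        samples.append(theta)
    return samples

observations = [2.1, 1.9, 2.0]   # hypothetical observed yields
samples = metropolis(observations)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

With this toy likelihood the posterior mean of theta settles near 1.0; in the real application the forward model run replaces the one-line prediction, which is what makes such calibrations computationally demanding.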
Symplectic no-core shell-model approach to intermediate-mass nuclei
Tobin, G. K.; Ferriss, M. C.; Launey, K. D.; Dytrych, T.; Draayer, J. P.; Dreyfuss, A. C.; Bahri, C.
2014-03-01
We present a microscopic description of nuclei in the intermediate-mass region, including the proximity to the proton drip line, based on a no-core shell model with a schematic many-nucleon long-range interaction with no parameter adjustments. The outcome confirms the essential role played by the symplectic symmetry to inform the interaction and the winnowing of shell-model spaces. We show that it is imperative that model spaces be expanded well beyond the current limits up through 15 major shells to accommodate particle excitations, which appear critical to highly deformed spatial structures and the convergence of associated observables.
International Nuclear Information System (INIS)
Wloch, Marta; Gour, Jeffrey R; Piecuch, Piotr; Dean, David J; Hjorth-Jensen, Morten; Papenbrock, Thomas
2005-01-01
We discuss large-scale ab initio calculations of ground and excited states of 16O and preliminary calculations for 15O and 17O using coupled-cluster methods and algorithms developed in quantum chemistry. By using realistic two-body interactions and the renormalized form of the Hamiltonian obtained with a no-core G-matrix approach, we obtain virtually converged results for 16O and promising results for 15O and 17O at the level of two-body interactions. The calculated properties other than binding and excitation energies include the charge radius and the charge form factor. The relatively low cost of coupled-cluster calculations, characterized by low-order polynomial scaling with system size, enables us to probe large model spaces with up to seven or eight major oscillator shells, for which untruncated shell-model calculations for nuclei with A = 15-17 active particles are presently not possible.
The use of COD and plastic instability in crack propagation and arrest in shells
Erdogan, F.; Ratwani, M.
1974-01-01
The initiation, growth, and possible arrest of fracture in cylindrical shells containing initial defects are dealt with. For those defects which may be approximated by a part-through semi-elliptic surface crack which is sufficiently shallow so that part of the net ligament in the plane of the crack is still elastic, the existing flat plate solution is modified to take into account the shell curvature effect as well as the effect of the thickness and the small scale plastic deformations. The problem of large defects is then considered under the assumptions that the defect may be approximated by a relatively deep meridional part-through surface crack and the net ligament through the shell wall is fully yielded. The results given are based on an 8th order bending theory of shallow shells using a conventional plastic strip model to account for the plastic deformations around the crack border.
Stability of large scale interconnected dynamical systems
International Nuclear Information System (INIS)
Akpan, E.P.
1993-07-01
Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs
Ajami, H.; Sharma, A.; Lakshmi, V.
2017-12-01
Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models, offering computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECSs) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulating ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work will focus on assessing the performance of SMART against remotely sensed soil moisture observations using spatially based model evaluation metrics.
Spatial scale separation in regional climate modelling
Energy Technology Data Exchange (ETDEWEB)
Feser, F.
2005-07-01
In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea is that global and regional climate models perform best at different spatial scales, so the regional model should not alter the global model's results at large scales. The concept of nudging of large scales, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the individual simulations compared to the standard approach, whose realisations occasionally show large differences. For climate hindcasts this method leads to results that are on average closer to observed states than the standard approach. The analysis of a regional climate model simulation can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than analysing unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter
Bevilacqua, G; Hartanto, H B; Kraus, M; Worek, M
2016-02-05
We present a complete description of top quark pair production in association with a jet in the dilepton channel. Our calculation is accurate to next-to-leading order (NLO) in QCD and includes all nonresonant diagrams, interferences, and off-shell effects of the top quark. Moreover, nonresonant and off-shell effects due to the finite W gauge boson width are taken into account. This calculation constitutes the first fully realistic NLO computation for top quark pair production with a final state jet in hadronic collisions. Numerical results for differential distributions as well as total cross sections are presented for the Large Hadron Collider at 8 TeV. With our inclusive cuts, NLO predictions reduce the unphysical scale dependence by more than a factor of 3 and lower the total rate by about 13% compared to leading-order QCD predictions. In addition, the size of the top quark off-shell effects is estimated to be below 2%.
Model-based plant-wide optimization of large-scale lignocellulosic bioethanol plants
DEFF Research Database (Denmark)
Prunescu, Remus Mihail; Blanke, Mogens; Jakobsen, Jon Geest
2017-01-01
Second generation biorefineries transform lignocellulosic biomass into chemicals with higher added value following a conversion mechanism that consists of: pretreatment, enzymatic hydrolysis, fermentation and purification. The objective of this study is to identify the optimal operational point … with respect to maximum economic profit of a large scale biorefinery plant using a systematic model-based plantwide optimization methodology. The following key process parameters are identified as decision variables: pretreatment temperature, enzyme dosage in enzymatic hydrolysis, and yeast loading per batch … in fermentation. The plant is treated in an integrated manner taking into account the interactions and trade-offs between the conversion steps. A sensitivity and uncertainty analysis follows at the optimal solution considering both model and feed parameters. It is found that the optimal point is more sensitive …
Model uncertainties of local-thermodynamic-equilibrium K-shell spectroscopy
Nagayama, T.; Bailey, J. E.; Mancini, R. C.; Iglesias, C. A.; Hansen, S. B.; Blancard, C.; Chung, H. K.; Colgan, J.; Cosse, Ph.; Faussurier, G.; Florido, R.; Fontes, C. J.; Gilleron, F.; Golovkin, I. E.; Kilcrease, D. P.; Loisel, G.; MacFarlane, J. J.; Pain, J.-C.; Rochau, G. A.; Sherrill, M. E.; Lee, R. W.
2016-09-01
Local-thermodynamic-equilibrium (LTE) K-shell spectroscopy is a common tool to diagnose electron density, ne, and electron temperature, Te, of high-energy-density (HED) plasmas. Knowing the accuracy of such diagnostics is important to provide quantitative conclusions of many HED-plasma research efforts. For example, Fe opacities were recently measured at multiple conditions at the Sandia National Laboratories Z machine (Bailey et al., 2015), showing significant disagreement with modeled opacities. Since the plasma conditions were measured using K-shell spectroscopy of tracer Mg (Nagayama et al., 2014), one concern is the accuracy of the inferred Fe conditions. In this article, we investigate the K-shell spectroscopy model uncertainties by analyzing the Mg spectra computed with 11 different models at the same conditions. We find that the inferred conditions differ by ±20-30% in ne and ±2-4% in Te depending on the choice of spectral model. Also, we find that half of the Te uncertainty comes from ne uncertainty. To refine the accuracy of the K-shell spectroscopy, it is important to scrutinize and experimentally validate line-shape theory. We investigate the impact of the inferred ne and Te model uncertainty on the Fe opacity measurements. Its impact is small and does not explain the reported discrepancies.
Understanding dynamics of large-scale atmospheric vortices with moist-convective shallow water model
International Nuclear Information System (INIS)
Rostami, M.; Zeitlin, V.
2016-01-01
Atmospheric jets and vortices which, together with inertia-gravity waves, constitute the principal dynamical entities of large-scale atmospheric motions, are well described in the framework of one- or multi-layer rotating shallow water models, which are obtained by vertical averaging of the full “primitive” equations. There is a simple and physically consistent way to include moist convection in these models by adding a relaxational parameterization of precipitation and coupling precipitation with convective fluxes with the help of moist enthalpy conservation. We recall the construction of the moist-convective rotating shallow water (mcRSW) model and give an example of its application to upper-layer atmospheric vortices. (paper)
Double inflation: A possible resolution of the large-scale structure problem
International Nuclear Information System (INIS)
Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.
1986-11-01
A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs
Learning from large scale neural simulations
DEFF Research Database (Denmark)
Serban, Maria
2017-01-01
Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...
Ground state energy fluctuations in the nuclear shell model
International Nuclear Information System (INIS)
Velazquez, Victor; Hirsch, Jorge G.; Frank, Alejandro; Barea, Jose; Zuker, Andres P.
2005-01-01
Statistical fluctuations of the nuclear ground state energies are estimated using shell model calculations in which particles in the valence shells interact through well-defined forces, and are coupled to an upper shell governed by random 2-body interactions. Induced ground-state energy fluctuations are found to be one order of magnitude smaller than those previously associated with chaotic components, in close agreement with independent perturbative estimates based on the spreading widths of excited states
Unified Tractable Model for Large-Scale Networks Using Stochastic Geometry: Analysis and Design
Afify, Laila H.
2016-12-01
The ever-growing demand for wireless technologies necessitates the evolution of next-generation wireless networks that fulfill the diverse requirements of wireless users. However, upscaling existing wireless networks implies upscaling an intrinsic component in the wireless domain: the aggregate network interference. As the main performance-limiting factor, it becomes crucial to develop a rigorous analytical framework that accurately characterizes the out-of-cell interference, in order to reap the benefits of emerging networks. Due to the different network setups and key performance indicators, it is essential to conduct a comprehensive study that unifies the various network configurations together with the different tangible performance metrics. In that regard, the focus of this thesis is to present a unified mathematical paradigm, based on stochastic geometry, for large-scale networks with different antenna/network configurations. By exploiting such a unified study, we propose an efficient automated network design strategy to satisfy the desired network objectives. First, this thesis studies the exact characterization of the aggregate network interference, by accounting for the signal of each interferer in the large-scale network. Second, we show that the information about the interferers' symbols can be approximated via the Gaussian signaling approach. The developed mathematical model unifies the analysis for uplink and downlink cellular networks: it aligns the tangible decoding error probability analysis with the abstract outage probability and ergodic rate analysis, and it covers different antenna configurations, i.e., various multiple-input multiple-output (MIMO) systems. Accordingly, we propose a novel reliable network design strategy that is capable of appropriately adjusting the network parameters to meet desired design criteria. In addition, we discuss the diversity-multiplexing tradeoffs imposed by differently favored
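The aggregate-interference characterization at the core of such stochastic-geometry analyses can be illustrated by brute-force Monte Carlo over a Poisson point process of interferers. The density, transmit power, path-loss exponent and unit guard zone below are arbitrary illustrative values, not taken from the thesis:

```python
import math
import random

# Monte Carlo sketch of aggregate interference at the origin from a Poisson
# field of interferers in a disk. Density, power, path-loss exponent and the
# unit guard zone are illustrative values only.

def sample_poisson(lam, rng):
    # Knuth's method; adequate for the moderate means used here.
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def aggregate_interference(density, radius, tx_power, alpha, rng):
    """One realization of the sum interference (linear scale) at the origin."""
    n = sample_poisson(density * math.pi * radius ** 2, rng)
    total = 0.0
    for _ in range(n):
        r = radius * math.sqrt(rng.random())   # uniform point in the disk
        r = max(r, 1.0)                        # guard zone: avoid r -> 0 blow-up
        total += tx_power * r ** (-alpha)
    return total

rng = random.Random(42)
samples = [aggregate_interference(1e-4, 500.0, 1.0, 4.0, rng)
           for _ in range(2000)]
mean_interference = sum(samples) / len(samples)
```

Closed-form stochastic-geometry results (e.g. via Campbell's theorem or the Laplace transform of the interference) replace such simulation in the analytical framework; the Monte Carlo version is mainly useful for validating them.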
Ruuskanen, Suvi; Laaksonen, Toni; Morales, Judith; Moreno, Juan; Mateo, Rafael; Belskii, Eugen; Bushuev, Andrey; Järvinen, Antero; Kerimov, Anvar; Krams, Indrikis; Morosinotto, Chiara; Mänd, Raivo; Orell, Markku; Qvarnström, Anna; Slate, Fred; Tilgar, Vallo; Visser, Marcel E; Winkel, Wolfgang; Zang, Herwig; Eeva, Tapio
2014-03-01
Birds have been used as bioindicators of pollution, such as toxic metals. Levels of pollutants in eggs are especially interesting, as developing birds are more sensitive to detrimental effects of pollutants than adults. Only very few studies have monitored intraspecific, large-scale variation in metal pollution across a species' breeding range. We studied large-scale geographic variation in metal levels in the eggs of a small passerine, the pied flycatcher (Ficedula hypoleuca), sampled from 15 populations across Europe. We measured 10 eggshell elements (As, Cd, Cr, Cu, Ni, Pb, Zn, Se, Sr, and Ca) and several shell characteristics (mass, thickness, porosity, and color). We found significant variation among populations in eggshell metal levels for all metals except copper. Eggshell lead, zinc, and chromium levels decreased from central Europe to the north, in line with the gradient in pollution levels over Europe, thus suggesting that eggshell can be used as an indicator of pollution levels. Eggshell lead levels were also correlated with soil lead levels and pH. Most of the metals were not correlated with eggshell characteristics, with the exception of shell mass, or with breeding success, which may suggest that birds can cope well with the current background exposure levels across Europe.
Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation
Ogawa, Masatoshi; Ogai, Harutoshi
Recently, attention has been drawn to local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply JIT modeling online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
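The JIT idea behind LOM, querying the database for neighboring data and fitting a local model on demand, can be sketched as follows. The one-dimensional toy database and the distance-weighted local average are illustrative simplifications of LOM's stepwise selection and quantization:

```python
# Sketch of the "Just-In-Time" idea behind LOM: store past (input, output)
# pairs and, for each query, build a local model from nearest neighbours only.
# The 1-D toy database and distance-weighted average are illustrative stand-ins
# for LOM's stepwise selection and quantization.

def jit_predict(database, query, k=3):
    """Predict the output at `query` from the k nearest stored samples."""
    nearest = sorted(database, key=lambda xy: abs(xy[0] - query))[:k]
    weights = [1.0 / (1e-9 + abs(x - query)) for x, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)

# Hypothetical plant history: output is roughly the square of the input.
history = [(x / 10.0, (x / 10.0) ** 2) for x in range(21)]
estimate = jit_predict(history, 1.25)   # true value would be 1.5625
```

The point of the database-side machinery in LOM is that the neighbour search stays cheap even when `history` holds millions of records; the local fit itself remains this simple.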
Planck intermediate results XLII. Large-scale Galactic magnetic fields
DEFF Research Database (Denmark)
Adam, R.; Ade, P. A. R.; Alves, M. I. R.
2016-01-01
Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured ...
Dynamic model of frequency control in Danish power system with large scale integration of wind power
DEFF Research Database (Denmark)
Basit, Abdul; Hansen, Anca Daniela; Sørensen, Poul Ejnar
2013-01-01
This work evaluates the impact of large scale integration of wind power in future power systems when 50% of load demand can be met from wind power. The focus is on active power balance control, where the main source of power imbalance is an inaccurate wind speed forecast. In this study, a Danish … power system model with large scale wind power is developed and a case study for an inaccurate wind power forecast is investigated. The goal of this work is to develop an adequate power system model that depicts relevant dynamic features of the power plants and compensates for load generation … imbalances, caused by inaccurate wind speed forecast, by an appropriate control of the active power production from power plants …
Hindmarsh, Mark
2018-02-16
A model for the acoustic production of gravitational waves at a first-order phase transition is presented. The source of gravitational radiation is the sound waves generated by the explosive growth of bubbles of the stable phase. The model assumes that the sound waves are linear and that their power spectrum is determined by the characteristic form of the sound shell around the expanding bubble. The predicted power spectrum has two length scales, the average bubble separation and the sound shell width when the bubbles collide. The peak of the power spectrum is at wave numbers set by the sound shell width. For a higher wave number k, the power spectrum decreases to k^{-3}. At wave numbers below the inverse bubble separation, the power spectrum goes to k^{5}. For bubble wall speeds near the speed of sound where these two length scales are distinguished, there is an intermediate k^{1} power law. The detailed dependence of the power spectrum on the wall speed and the other parameters of the phase transition raises the possibility of their constraint or measurement at a future space-based gravitational wave observatory such as LISA.
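The three spectral regimes quoted above (k^5 below the inverse bubble separation, an intermediate k^1 range, and k^-3 above the inverse sound shell width) can be captured by a continuous piecewise power law. The break positions and normalisation below are arbitrary placeholders, not the paper's fitted values:

```python
# Piecewise power law with the three regimes quoted above: k^5 below the
# inverse bubble separation, k^1 in between, k^-3 above the inverse shell
# width. Break positions and normalisation are arbitrary placeholders.

def sound_shell_spectrum(k, k_sep=1.0, k_shell=10.0):
    """Continuous broken power-law sketch of the GW power spectrum shape."""
    if k < k_sep:
        return (k / k_sep) ** 5
    if k < k_shell:
        return k / k_sep
    return (k_shell / k_sep) * (k / k_shell) ** -3

low, mid, peak, high = (sound_shell_spectrum(k) for k in (0.5, 5.0, 10.0, 20.0))
# The spectrum rises steeply, grows linearly, peaks near 1/width, then falls.
```

The intermediate k^1 stretch only appears when the two break scales are well separated, i.e. for wall speeds near the speed of sound, exactly as the abstract states.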
Galileon radiation from a spherical collapsing shell
Energy Technology Data Exchange (ETDEWEB)
Martín-García, Javier [Instituto de Física Teórica UAM/CSIC,C/ Nicolás Cabrera 15, E-28049 Madrid (Spain); Vázquez-Mozo, Miguel Á. [Instituto Universitario de Física Fundamental y Matemáticas (IUFFyM),Universidad de Salamanca, Plaza de la Merced s/n, E-37008 Salamanca (Spain)
2017-01-17
Galileon radiation in the collapse of a thin spherical shell of matter is analyzed. In the framework of a cubic Galileon theory, we compute the field profile produced at large distances by a short collapse, finding that the radiated field has two peaks traveling ahead of light fronts. The total energy radiated during the collapse follows a power law scaling with the shell’s physical width and results from two competing effects: a Vainshtein suppression of the emission and an enhancement due to the thinness of the shell.
Economic Model Predictive Control for Large-Scale and Distributed Energy Systems
DEFF Research Database (Denmark)
Standardi, Laura
In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performance of the energy systems and to balance power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables …
Large scale FCI experiments in subassembly geometry. Test facility and model experiments
International Nuclear Information System (INIS)
Beutel, H.; Gast, K.
A program is outlined for the study of fuel/coolant interaction under SNR conditions. The program consists of: a) underwater explosion experiments with full-size models of the SNR core, in which the fuel/coolant system is simulated by a pyrotechnic mixture; b) large-scale fuel/coolant interaction experiments with up to 5 kg of molten UO2 interacting with liquid sodium at 300°C to 600°C in a highly instrumented test facility simulating an SNR subassembly. The experimental results will be compared with theoretical models under development at Karlsruhe. Commencement of the experiments is expected at the beginning of 1975
Operation Modeling of Power Systems Integrated with Large-Scale New Energy Power Sources
Directory of Open Access Journals (Sweden)
Hui Li
2016-10-01
In most current methods of probabilistic power system production simulation, the output characteristics of new energy power generation (NEPG) have not been comprehensively considered. In this paper, the power output characteristics of wind power generation and photovoltaic power generation are first analyzed statistically from their historical operating data. Then the characteristic indexes and the filtering principle of the NEPG historical output scenarios are introduced with the confidence level, and a calculation model for the credible capacity of NEPG is proposed. On this basis, taking the minimum production cost or the best energy-saving and emission-reduction effect as the optimization objective, a power system operation model with large-scale integration of NEPG is established, considering the power balance, the electricity balance and the peak balance. In addition, the constraints of the operating characteristics of different power generation types, the maintenance schedule, the load reserve, the emergency reserve, water abandonment and the transmission capacity between different areas are also considered. With the proposed power system operation model, operation simulations are carried out on the actual Northwest power grid of China, resolving new energy power accommodation under different system operating conditions. The simulation results verify the validity of the proposed operation model for accommodation analysis of power systems penetrated with large-scale NEPG.
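The notion of a credible capacity at a given confidence level can be illustrated as a quantile of historical output scenarios: the output level the plant meets or exceeds with the stated probability. Synthetic data and a bare quantile here; the paper's index-and-filtering scheme over output scenarios is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical historical hourly wind output (MW) for a 100 MW plant.
wind_output = rng.beta(2.0, 5.0, size=8760) * 100.0

def credible_capacity(output, confidence=0.9):
    """Capacity that the plant meets or exceeds with probability
    `confidence`, i.e. the (1 - confidence) quantile of output."""
    return float(np.quantile(output, 1.0 - confidence))

print(credible_capacity(wind_output, 0.9))
```

A higher confidence level gives a lower (more conservative) credible capacity, which is the value a planner would count towards the peak balance.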
Sheffield, J.; He, X.; Wada, Y.; Burek, P.; Kahil, M.; Wood, E. F.; Oppenheimer, M.
2017-12-01
California has endured record-breaking drought since winter 2011 and will likely experience more severe and persistent droughts in the coming decades under a changing climate. At the same time, human water management practices can also affect drought frequency and intensity, which underscores the importance of human behaviour in effective drought adaptation and mitigation. Currently, although a few large-scale hydrological and water resources models (e.g., PCR-GLOBWB) consider human water use and management practices (e.g., irrigation, reservoir operation, groundwater pumping), none of them includes the dynamic feedback between local human behaviours/decisions and the natural hydrological system. It is, therefore, vital to integrate social and behavioural dimensions into current hydrological modelling frameworks. This study applies the agent-based modelling (ABM) approach and couples it with a large-scale hydrological model (the Community Water Model, CWatM) in order to achieve a balanced representation of social, environmental and economic factors and a more realistic representation of the bi-directional interactions and feedbacks in coupled human and natural systems. We focus on drought management in California and consider two types of agents, (groups of) farmers and state management authorities, whose objectives are assumed to be maximizing the net crop profit and maintaining a sufficient water supply, respectively. Farmers' behaviours are linked with local agricultural practices such as cropping patterns and deficit irrigation. More precisely, farmers' decisions are incorporated into CWatM across different time scales, in terms of the daily irrigation amount, seasonal/annual decisions on crop types and irrigated area, as well as long-term investment in irrigation infrastructure. This simulation-based optimization framework is further applied by performing different sets of scenarios to investigate and evaluate the effectiveness
Scale-free models for the structure of business firm networks.
Kitsak, Maksim; Riccaboni, Massimo; Havlin, Shlomo; Pammolli, Fabio; Stanley, H Eugene
2010-03-01
We study firm collaborations in the life sciences and the information and communication technology sectors. We propose an approach to characterize industrial leadership using k-shell decomposition, with top-ranking firms in terms of market value in higher k-shell layers. We find that the life sciences industry network consists of three distinct components: a "nucleus," which is a small well-connected subgraph, "tendrils," which are small subgraphs consisting of small-degree nodes connected exclusively to the nucleus, and a "bulk body," which consists of the majority of nodes. Industrial leaders, i.e., the largest companies in terms of market value, are in the highest k-shells of both networks. The nucleus of the life sciences sector is very stable: once a firm enters the nucleus, it is likely to stay there for a long time. At the same time, we do not observe the above three components in the information and communication technology sector. We also conduct a systematic study of these three components in random scale-free networks. Our results suggest that the sizes of the nucleus and the tendrils in scale-free networks decrease as the exponent of the power-law degree distribution, λ, increases, and that they disappear for λ ≥ 3. We compare the k-shell structure of random scale-free model networks with two real-world business firm networks in the life sciences and in the information and communication technology sectors. We argue that the observed behavior of the k-shell structure in the two industries is consistent with the coexistence of both preferential and random agreements in the evolution of industrial networks.
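k-shell decomposition assigns each node the largest k such that it survives iterative removal of all nodes of degree below k. A minimal pure-Python peeling sketch (real network studies would typically use a graph library's k-core routine):

```python
def k_shell_decomposition(adj):
    """Assign each node its k-shell index by iterative peeling.

    adj: dict mapping node -> set of neighbour nodes (undirected).
    Returns dict node -> shell index.
    """
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    alive = set(adj)
    shell, k = {}, 0
    while alive:
        k = max(k, min(degree[v] for v in alive))
        # Repeatedly remove nodes of degree <= k; they belong to shell k.
        peeled = True
        while peeled:
            peeled = False
            for v in [v for v in alive if degree[v] <= k]:
                shell[v] = k
                alive.discard(v)
                for u in adj[v]:
                    if u in alive:
                        degree[u] -= 1
                peeled = True
    return shell

# Triangle (a 2-shell) with a pendant node attached (1-shell).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
shells = k_shell_decomposition(adj)
print(shells)  # node 3 -> shell 1, triangle nodes -> shell 2
```

In the firm-network setting the nodes are companies, and industrial leaders are the nodes left standing at the highest k.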
Water quality modeling requires across-scale support of combined digital soil elements and simulation parameters. This paper presents the unprecedented development of a large spatial scale (1:250,000) ArcGIS geodatabase coverage designed as a functional repository of soil-parameters for modeling an...
Evaluation of subgrid-scale and local wall models in large-eddy simulations of separated flow
Sam Ali Al; Szasz Robert; Revstedt Johan
2015-01-01
The performance of subgrid-scale models is studied by simulating separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities obtained by large-eddy simulations at different mesh resolutions are compared with Direct Numerical Simulation data. The effectiveness of modeling the wall stresses by using a local log law is then tested on a relatively coarse grid. The results exhibit a good agreement between highly-resolved Large Eddy Simu...
Bioinspired large-scale aligned porous materials assembled with dual temperature gradients.
Bai, Hao; Chen, Yuan; Delattre, Benjamin; Tomsia, Antoni P; Ritchie, Robert O
2015-12-01
Natural materials, such as bone, teeth, shells, and wood, exhibit outstanding properties despite being porous and made of weak constituents. Frequently, they represent a source of inspiration to design strong, tough, and lightweight materials. Although many techniques have been introduced to create such structures, a long-range order of the porosity as well as a precise control of the final architecture remain difficult to achieve. These limitations severely hinder the scale-up fabrication of layered structures aimed for larger applications. We report on a bidirectional freezing technique to successfully assemble ceramic particles into scaffolds with large-scale aligned, lamellar, porous, nacre-like structure and long-range order at the centimeter scale. This is achieved by modifying the cold finger with a polydimethylsiloxane (PDMS) wedge to control the nucleation and growth of ice crystals under dual temperature gradients. Our approach could provide an effective way of manufacturing novel bioinspired structural materials, in particular advanced materials such as composites, where a higher level of control over the structure is required.
Kinematic morphology of large-scale structure: evolution from potential to rotational flow
International Nuclear Information System (INIS)
Wang, Xin; Szalay, Alex; Aragón-Calvo, Miguel A.; Neyrinck, Mark C.; Eyink, Gregory L.
2014-01-01
As an alternative way to describe the cosmological velocity field, we discuss the evolution of rotational invariants constructed from the velocity gradient tensor. Compared with the traditional divergence-vorticity decomposition, these invariants, defined as coefficients of the characteristic equation of the velocity gradient tensor, enable a complete classification of all possible flow patterns in the dark-matter comoving frame, including both potential and vortical flows. We show that this tool, first introduced in turbulence two decades ago, is very useful for understanding the evolution of the cosmic web structure, and in classifying its morphology. Before shell crossing, different categories of potential flow are highly associated with the cosmic web structure because of the coherent evolution of density and velocity. This correspondence is even preserved at some level when vorticity is generated after shell crossing. The evolution from the potential to vortical flow can be traced continuously by these invariants. With the help of this tool, we show that the vorticity is generated in a particular way that is highly correlated with the large-scale structure. This includes a distinct spatial distribution and different types of alignment between the cosmic web and vorticity direction for various vortical flows. Incorporating shell crossing into closed dynamical systems is highly non-trivial, but we propose a possible statistical explanation for some of the phenomena relating to the internal structure of the three-dimensional invariant space.
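The invariants referred to above are the coefficients of the characteristic equation of the velocity gradient tensor. Under the convention common in the turbulence literature (λ³ − Pλ² + Qλ − R = 0 for a 3×3 gradient tensor A), they follow directly from traces and the determinant. A sketch with a pure-rotation example; the convention and example are illustrative, not lifted from the paper:

```python
import numpy as np

def velocity_gradient_invariants(A):
    """Coefficients P, Q, R of the characteristic equation
    lambda^3 - P lambda^2 + Q lambda - R = 0 of a 3x3 tensor A."""
    A = np.asarray(A, dtype=float)
    P = np.trace(A)                         # first invariant (divergence)
    Q = 0.5 * (P**2 - np.trace(A @ A))      # second invariant
    R = np.linalg.det(A)                    # third invariant
    return float(P), float(Q), float(R)

# Pure rotation about z: divergence-free (P = 0) but with vorticity.
omega = 1.0
A = np.array([[0.0, -omega, 0.0],
              [omega, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
inv = velocity_gradient_invariants(A)
print(inv)  # (0.0, 1.0, 0.0)
```

The sign pattern of (P, Q, R) is what classifies a local flow element as potential or vortical, which is how the paper's flow-pattern taxonomy is built.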
Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling
Directory of Open Access Journals (Sweden)
G. Delmonaco
2003-01-01
A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the rainfall event of 19 June 1996. In recent decades, landslide hazard and risk analysis has been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions in large-scale investigations (>1:10,000). In this work, the main results of applying a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (e.g. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return periods, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return periods. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return periods of 10, 50, 75 and 100 years. The model shows a dramatic decrease of safety conditions for the simulation related to the 75-year return period rainfall event, corresponding to an estimated cumulative daily intensity of 280-330 mm. This value can be considered the hydrological triggering
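The infinite slope model in the final step reduces to a closed-form factor of safety: cohesion plus frictional resistance on the slip surface, divided by the driving shear stress. A sketch with illustrative (not the paper's) soil parameters, showing how a rainfall-induced pore pressure pushes the factor of safety below 1:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Factor of safety of the infinite slope model.

    c        effective cohesion (kPa)
    phi_deg  effective friction angle (degrees)
    gamma    soil unit weight (kN/m^3)
    z        soil thickness above the slip surface (m)
    beta_deg slope angle (degrees)
    u        pore water pressure on the slip surface (kPa)
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    tau = gamma * z * math.sin(beta) * math.cos(beta)   # driving shear stress
    sigma_n = gamma * z * math.cos(beta) ** 2           # normal stress
    return (c + (sigma_n - u) * math.tan(phi)) / tau

# Dry slope vs. the same slope with positive pore pressure.
fs_dry = infinite_slope_fs(c=5.0, phi_deg=30.0, gamma=18.0, z=2.0, beta_deg=35.0)
fs_wet = infinite_slope_fs(c=5.0, phi_deg=30.0, gamma=18.0, z=2.0, beta_deg=35.0, u=10.0)
print(fs_dry, fs_wet)  # pore pressure drives the slope below FS = 1
```

In the distributed GIS model this computation is repeated cell by cell, with u inferred from the rainfall event of a given return period.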
Mayer–Jensen Shell Model and Magic Numbers
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 12; Issue 12. Mayer-Jensen Shell Model and Magic Numbers - An Independent Nucleon Model with Spin-Orbit Coupling. R Velusamy. General Article Volume 12 Issue 12 December 2007 pp 12-24 ...
Quark shell model using projection operators
International Nuclear Information System (INIS)
Ullah, N.
1988-01-01
Using projection operators in the quark shell model, the wave function of the proton is calculated, and expressions are derived for calculating the wave function of the neutron and the magnetic moments of the proton and neutron. (M.G.B.)
Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.
2015-12-01
Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. Because of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by whether and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate, and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations, and at using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America, using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid-scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are
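Composite analysis, one of the pattern-identification methods mentioned, amounts to averaging a circulation field over the days on which extremes occur. A minimal sketch with synthetic data (all field names, sizes, and the embedded signal are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, ny, nx = 1000, 10, 12
slp_anom = rng.normal(0.0, 5.0, size=(n_days, ny, nx))  # hPa anomalies
temp = rng.normal(0.0, 1.0, size=n_days)
# Embed a signal: very warm days co-occur with high pressure in one corner.
slp_anom[temp > 1.5, :5, :6] += 8.0

def composite(field, mask):
    """Mean of a (time, y, x) field over the days selected by mask."""
    return field[mask].mean(axis=0)

# Composite the circulation field over the warmest 5% of days.
extreme_days = temp > np.quantile(temp, 0.95)
pattern = composite(slp_anom, extreme_days)
print(pattern[:5, :6].mean() > pattern[5:, 6:].mean())  # True
```

Averaging over extreme days suppresses the unrelated day-to-day noise and leaves the circulation anomaly that systematically accompanies the extremes, which is the target pattern used for model evaluation.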
Transition in x-ray yield, mass scaling observed in the high-wire-number, plasma-shell regime
International Nuclear Information System (INIS)
Whitney, K.G.; Pulsifer, P.E.; Apruzese, J.P.; Thornhill, J.W.; Davis, J.; Sanford, T.W.L.; Mock, R.C.; Nash, T.J.
1999-01-01
Initial calculations, based on classical transport coefficients and carried out to predict the efficiency with which the implosion kinetic energy of aluminum Z pinches could be thermalized and converted into kilovolt x-rays, predicted a sharp transition between m² and m yield scaling, where m is the aluminum array mass. Later, when ad hoc increases in the heat conductivity and artificial viscosity were introduced into these calculations and the densities that were achieved on axis were sharply reduced, the transition from m² to m scaling was found to have shifted, but was otherwise still fairly sharp and well-defined. The location of these breakpoint curves defined the locus of implosion velocities at which the yields would obtain their maximum for different mass arrays. The first such mass breakpoint curve that was calculated is termed hard, while the second is termed soft. Early 24-, 30-, and 42-wire aluminum experiments on the Saturn accelerator at the Sandia National Laboratories confirmed the predictions of the soft breakpoint curve calculations. In this talk, the authors present results from a more recent set of aluminum experiments on Saturn, in which the array mass was varied at a fixed array radius and the radius was varied at a fixed mass. In both sets of experiments, the wire numbers were large: in excess of 92 and generally 136 or 192. In this high-wire-number regime, the wire plasmas are calculated to merge to form a plasma shell prior to significant radial implosion. Large wire number has been found to improve the pinch compressibility, and the analysis of these experiments in the shell regime shows that they come very close to the original predictions of the hard breakpoint curve calculations. A discussion of these detailed comparisons will be presented
A testing facility for large scale models at 100 bar and 300°C to 1000°C
International Nuclear Information System (INIS)
Zemann, H.
1978-07-01
A testing facility for large-scale model tests is under construction with the support of Austrian industry. It will contain a Prestressed Concrete Pressure Vessel (PCPV) with a hot liner (300°C at 100 bar), an electrical heating system (1.2 MW, 1000°C), a gas supply system, and a cooling system for the testing space. The components themselves are models for advanced high-temperature applications. The first main component to be tested successfully was the PCPV. Basic investigations of the building materials, improvements of concrete gauges, large-scale model tests, and measurements within the structural concrete and on the liner have been made from the beginning of construction, through the period of prestressing and the period of stabilization, to the final pressurizing tests. On the basis of these investigations a computer-controlled safety surveillance system for long-term high-pressure, high-temperature tests has been developed. (author)
Directory of Open Access Journals (Sweden)
Guoqi Wei
2016-02-01
According to comprehensive research on forming conditions, including sedimentary facies, reservoirs, source rocks, and palaeo-uplift evolution of the Sinian-Cambrian in the Sichuan Basin, it is concluded that: (1) large-scale inherited palaeo-uplifts, large-scale intracratonic rifts, three widely distributed high-quality source rocks, four widely distributed karst reservoirs, and oil pyrolysis gas were all favorable conditions for large-scale and high-abundance accumulation; (2) diverse accumulation models were developed in different areas of the palaeo-uplift: in the core area of the inherited palaeo-uplift, an "in-situ" pyrolysis accumulation model of the paleo-reservoir was developed, while in the slope area, a pyrolysis accumulation model of dispersed liquid hydrocarbon was developed in the late-stage structural trap; (3) there are different exploration directions in the various areas of the palaeo-uplift: within the core area, the main target is the inherited paleo-structural trap, which is also the foundation of lithologic-stratigraphic gas reservoirs, while in the slope areas, the main target is the giant structural trap formed in the Himalayan Period.
Analysis using large-scale ringing data
Directory of Open Access Journals (Sweden)
Baillie, S. R.
2004-06-01
Birds are highly mobile organisms and there is increasing evidence that studies at large spatial scales are needed if we are to properly understand their population dynamics. While classical metapopulation models have rarely proved useful for birds, more general metapopulation ideas involving collections of populations interacting within spatially structured landscapes are highly relevant (Harrison, 1994). There is increasing interest in understanding patterns of synchrony, or lack of synchrony, between populations and the environmental and dispersal mechanisms that bring about these patterns (Paradis et al., 2000). To investigate these processes we need to measure abundance, demographic rates and dispersal at large spatial scales, in addition to gathering data on relevant environmental variables. There is an increasing realisation that conservation needs to address rapid declines of common and widespread species (they will not remain so if such trends continue) as well as the management of small populations that are at risk of extinction. While the knowledge needed to support the management of small populations can often be obtained from intensive studies in a few restricted areas, conservation of widespread species often requires information on population trends and processes measured at regional, national and continental scales (Baillie, 2001). While management prescriptions for widespread populations may initially be developed from a small number of local studies or experiments, there is an increasing need to understand how such results will scale up when applied across wider areas. There is also a vital role for monitoring at large spatial scales, both in identifying such population declines and in assessing population recovery. Gathering data on avian abundance and demography at large spatial scales usually relies on the efforts of large numbers of skilled volunteers. Volunteer studies based on ringing (for example Constant Effort Sites [CES
Identification of low order models for large scale processes
Wattamwar, S.K.
2010-01-01
Many industrial chemical processes are complex, multi-phase and large-scale in nature. These processes are characterized by various nonlinear physicochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolution. The increasing demand
Large-Scale Optimization for Bayesian Inference in Complex Systems
Energy Technology Data Exchange (ETDEWEB)
Willcox, Karen [MIT; Marzouk, Youssef [MIT
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to
Large-scale derived flood frequency analysis based on continuous simulation
Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km²), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system enables very long discharge series to be derived at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km². 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Danube and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several
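The cross-site correlation structure such a weather generator must respect can be imposed on synthetic Gaussian noise via a Cholesky factor of the target correlation matrix, before a marginal transform maps the noise to a skewed, non-negative variable such as rainfall. A toy sketch under these assumptions (the actual generator also handles autocorrelation and multiple variables):

```python
import numpy as np

rng = np.random.default_rng(42)
# Target cross-site correlation of daily anomalies at 3 stations.
target_corr = np.array([[1.0, 0.7, 0.4],
                        [0.7, 1.0, 0.5],
                        [0.4, 0.5, 1.0]])
L = np.linalg.cholesky(target_corr)

n_days = 100_000
z = rng.standard_normal((n_days, 3)) @ L.T   # correlated Gaussian noise
# Marginal transform to a skewed, non-negative "rainfall" variable.
rain = np.exp(0.5 * z) - 0.5
rain[rain < 0.0] = 0.0

print(np.corrcoef(z, rowvar=False).round(2))  # close to target_corr
```

Driving all catchment models with the same correlated fields is what makes the derived flood quantiles spatially consistent across catchments.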
Directory of Open Access Journals (Sweden)
Aliyeh Kazemi
2016-09-01
Construction projects have always been complex, and with the growing trend of this complexity, implementing large-scale construction projects becomes harder. Hence, evaluating and understanding these complexities is critical. A correct evaluation of a project's complexity can provide executives and managers with a sound basis for decisions. The fuzzy analytic network process (ANP) is a logical and systematic approach to definition, evaluation, and grading that allows for analyzing complex systems and determining their complexity. In this study, taking advantage of fuzzy ANP, the indexes contributing to the complexity of large-scale construction projects in Iran have been determined and prioritized. The results show that the socio-political, project system interdependency, and technological complexity indexes ranked as the top three. Furthermore, in a comparison of three major projects (a commercial-administrative building, a hospital, and a skyscraper), the hospital project was evaluated as the most complex. This model is beneficial for professionals managing large-scale projects.
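A basic building block of (fuzzy) ANP is extracting a priority vector from a pairwise comparison matrix. A crisp AHP-style sketch with hypothetical Saaty-scale comparisons of three complexity indexes (the study itself uses fuzzy judgments and a full network of dependencies):

```python
import numpy as np

def priority_vector(pairwise, iters=100):
    """Principal eigenvector of a pairwise comparison matrix,
    normalized to sum to one (power iteration)."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(A.shape[0])
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

# Hypothetical comparisons: index 1 moderately more important than
# index 2, strongly more important than index 3.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = priority_vector(A)
print(w.round(3))  # weights sum to 1; index 1 ranks first
```

ANP extends this by arranging such local priority vectors into a supermatrix that captures the interdependencies between criteria, which is what allows indexes like "project system interdependencies" to influence each other.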
A mixed-layer model study of the stratocumulus response to changes in large-scale conditions
De Roode, S.R.; Siebesma, A.P.; Dal Gesso, S.; Jonker, H.J.J.; Schalkwijk, J.; Sival, J.
2014-01-01
A mixed-layer model is used to study the response of stratocumulus equilibrium state solutions to perturbations of cloud controlling factors which include the sea surface temperature, the specific humidity and temperature in the free troposphere, as well as the large-scale divergence and horizontal
Gkoulalas-Divanis, Aris
2014-01-01
Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field
Large-scale fracture mechanics testing -- requirements and possibilities
International Nuclear Information System (INIS)
Brumovsky, M.
1993-01-01
Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced under elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradients through the material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses, etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with the planning of such experiments with respect to their limitations and the requirements for a good transfer of the results obtained to an actual vessel. At the same time, an analysis of the possibilities of small-scale model experiments is also given, mostly in connection with the transfer of results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig
Comparison of vibration test results for Atucha II NPP and large scale concrete block models
International Nuclear Information System (INIS)
Iizuka, S.; Konno, T.; Prato, C.A.
2001-01-01
In order to study the soil-structure interaction of a reactor building that could be constructed on Quaternary soil, the soil-structure interaction springs obtained from full-scale vibration tests of Atucha II NPP were compared with those obtained from vibration tests of large-scale concrete block models constructed on Quaternary soil. This comparison provides case data on soil-structure interaction springs on Quaternary soil for different foundation sizes and stiffnesses. (author)
International Nuclear Information System (INIS)
Langdal, Bjoern Inge; Eggen, Arnt Ove
2003-01-01
The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.
Haer, Toon; Aerts, Jeroen
2015-04-01
Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction, and the interaction between governments, insurers, and individuals, has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European member states, insurer and reinsurer markets, and individuals following complex behaviour models. The agent-based modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an agent-based modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
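The interaction the abstract describes (a government subsidy lowering the cost of protective measures, households updating their risk perception after floods) can be sketched as a toy agent-based model. All parameter values and the decision rule below are illustrative assumptions, not those of the European Agent-Based Model itself.

```python
import random

def simulate(n_households=1000, years=20, flood_prob=0.02,
             damage=100000, measure_cost=20000, damage_reduction=0.4,
             subsidy=0.5, seed=42):
    """Toy ABM sketch: households adopt a protective measure when its
    perceived expected benefit exceeds the (possibly subsidized) cost."""
    rng = random.Random(seed)
    adopted = [False] * n_households
    # Heterogeneous agents: each misjudges the true flood probability.
    perceived = [flood_prob * rng.uniform(0.5, 2.0) for _ in range(n_households)]
    for _ in range(years):
        flood = rng.random() < flood_prob
        for i in range(n_households):
            if flood:
                perceived[i] = min(1.0, perceived[i] * 2)  # awareness spikes
            else:
                perceived[i] *= 0.95                        # awareness decays
            expected_benefit = perceived[i] * damage * damage_reduction * years
            if not adopted[i] and expected_benefit > measure_cost * (1 - subsidy):
                adopted[i] = True
    return sum(adopted) / n_households

rate_with_subsidy = simulate(subsidy=0.5)
rate_without = simulate(subsidy=0.0)
```

With the same random seed, the subsidized run only lowers the adoption threshold, so its adoption rate can never fall below the unsubsidized one; this is the kind of government-insurer-household feedback the full model analyses at continental scale.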
Modeling the carbon isotope composition of bivalve shells (Invited)
Romanek, C.
2010-12-01
The stable carbon isotope composition of bivalve shells is a valuable archive of paleobiological and paleoenvironmental information. Previous work has shown that the carbon isotope composition of the shell is related to the carbon isotope composition of dissolved inorganic carbon (DIC) in the ambient water in which a bivalve lives, as well as metabolic carbon derived from bivalve respiration. The contribution of metabolic carbon varies among organisms, but it is generally thought to be relatively low in the shells of aquatic organisms and high (e.g., 90%) in the shells from terrestrial organisms. Because metabolic carbon contains significantly more C-12 than DIC, negative excursions from the expected environmental (DIC) signal are interpreted to reflect an increased contribution of metabolic carbon in the shell. This observation contrasts sharply with modeled carbon isotope compositions for shell layers deposited from the inner extrapallial fluid (EPF). Previous studies have shown that growth lines within the inner shell layer of bivalves are produced during periods of anaerobiosis when acidic metabolic byproducts (e.g., succinic acid) are neutralized (or buffered) by shell dissolution. This requires the pH of EPF to decrease below ambient levels (~7.5) until a state of undersaturation is achieved that promotes shell dissolution. This condition may occur when aquatic bivalves are subjected to external stressors originating from ecological (predation) or environmental (exposure to the atmosphere; low dissolved oxygen; contaminant release) pressures; normal physiological processes will restore the pH of EPF when the pressure is removed. As a consequence of this process, a temporal window should also exist in EPF at relatively low pH where shell carbonate is deposited at a reduced saturation state and precipitation rate. For example, EPF chemistry should remain slightly supersaturated with respect to aragonite given a drop of one pH unit (6.5), but under closed conditions, equilibrium carbon isotope fractionation
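The two-endmember view described above (shell carbon as a mixture of ambient DIC and respired metabolic carbon) is often written as a simple mass balance, δ13C_shell ≈ f_M·δ13C_metabolic + (1 − f_M)·δ13C_DIC + ε. The sketch below uses illustrative δ13C values and an assumed aragonite-DIC fractionation ε, not measured data from this study.

```python
def shell_d13c(d13c_dic, d13c_metabolic, f_metabolic, eps_aragonite=2.7):
    """Two-endmember carbon isotope mass balance for shell carbonate.

    f_metabolic is the fraction of shell carbon from respiration;
    eps_aragonite (permil) is an assumed aragonite-DIC enrichment.
    """
    return (f_metabolic * d13c_metabolic
            + (1 - f_metabolic) * d13c_dic
            + eps_aragonite)

# Illustrative endmembers: DIC near 0 permil, metabolic carbon ~ -25 permil.
aquatic = shell_d13c(d13c_dic=0.0, d13c_metabolic=-25.0, f_metabolic=0.1)
terrestrial = shell_d13c(d13c_dic=-8.0, d13c_metabolic=-25.0, f_metabolic=0.9)
```

Because metabolic carbon is strongly depleted in C-13, increasing f_metabolic drives the modeled shell value negative, which is exactly the "negative excursion" interpretation mentioned above.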
Large scale injection test (LASGIT) modelling
International Nuclear Information System (INIS)
Arnedo, D.; Olivella, S.; Alonso, E.E.
2010-01-01
Document available in extended abstract form only. With the objective of understanding gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. Modelling the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large-scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed on the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates once the gas entry pressure is reached and may produce deformations which in turn lead to permeability increments. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug
Shell stabilization of super- and hyperheavy nuclei without magic gaps
International Nuclear Information System (INIS)
Bender, M.; Nazarewicz, W.; Oak Ridge National Lab., TN; Warsaw Univ.; Reinhard, P.G.; Oak Ridge National Lab., TN
2001-05-01
Quantum stabilization of superheavy elements is quantified in terms of the shell-correction energy. We compute the shell correction using self-consistent nuclear models: the non-relativistic Skyrme-Hartree-Fock approach and the relativistic mean-field model, for a number of parametrizations. All the forces applied predict a broad valley of shell stabilization around Z = 120 and N = 172-184. We also predict two broad regions of shell stabilization in hyperheavy elements with N ∼ 258 and N ∼ 308. Due to the large single-particle level density, shell corrections in the superheavy elements differ markedly from those in lighter nuclei. With increasing proton and neutron numbers, the regions of nuclei stabilized by shell effects become poorly localized in particle number, and the familiar pattern of shells separated by magic gaps is basically gone. (orig.)
Shell model description of band structure in 48Cr
International Nuclear Information System (INIS)
Vargas, Carlos E.; Velazquez, Victor M.
2007-01-01
The band structure of normal and abnormal parity bands in 48Cr is described using the m-scheme shell model. In addition to the full fp shell, two particles in the 1d3/2 orbital are allowed in order to describe intruder states. The interaction includes fp, sd and mixed matrix elements
Adaptive Texture Synthesis for Large Scale City Modeling
Despine, G.; Colleu, T.
2015-02-01
Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but that requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures, but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method that extracts information from aerial images and adapts the texture synthesis to each building. We describe a workflow that allows the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the facades.
Large-scale structure of the Universe
International Nuclear Information System (INIS)
Doroshkevich, A.G.
1978-01-01
The problems discussed at the ''Large-scale Structure of the Universe'' symposium are considered at a popular level. Described are the cell structure of the galaxy distribution in the Universe and the principles of mathematical modelling of the galaxy distribution. Images of cell structures obtained after computer processing are given. Three hypotheses are discussed - vortical, entropic and adiabatic - suggesting various processes for the origin of galaxies and galaxy clusters. A considerable advantage of the adiabatic hypothesis is recognized. The relict radiation is considered as a method of directly studying the processes taking place in the Universe. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the disturbance properties at the pre-galaxy stage. The discussion of problems pertaining to the study of the hot gas contained in galaxy clusters, the interactions within galaxy clusters and the interactions with the inter-galaxy medium is recognized as a notable contribution to the development of theoretical and observational cosmology
Reversible patterning of spherical shells through constrained buckling
Marthelot, J.; Brun, P.-T.; Jiménez, F. López; Reis, P. M.
2017-07-01
Recent advances in active soft structures envision the large deformations resulting from mechanical instabilities as routes for functional shape morphing. Numerous such examples exist for filamentary and plate systems. However, examples with double-curved shells are rarer, with progress hampered by challenges in fabrication and the complexities involved in analyzing their underlying geometrical nonlinearities. We show that on-demand patterning of hemispherical shells can be achieved through constrained buckling. Their postbuckling response is stabilized by an inner rigid mandrel. Through a combination of experiments, simulations, and scaling analyses, our investigation focuses on the nucleation and evolution of the buckling patterns into a reticulated network of sharp ridges. The geometry of the system, namely, the shell radius and the gap between the shell and the mandrel, is found to be the primary ingredient to set the surface morphology. This prominence of geometry suggests a robust, scalable, and tunable mechanism for reversible shape morphing of elastic shells.
Investigation of the large scale regional hydrogeological situation at Ceberg
International Nuclear Information System (INIS)
Boghammar, A.; Grundfelt, B.; Hartley, L.
1997-11-01
The present study forms part of the large-scale groundwater flow studies within the SR 97 project. The site of interest is Ceberg. Within the present study two different regional-scale groundwater models have been constructed: one large regional model with an areal extent of about 300 km² and one semi-regional model with an areal extent of about 50 km². Different types of boundary conditions have been applied to the models: topography-driven pressures, constant infiltration rates, non-linear infiltration combined with specified-pressure boundary conditions, and transfer of groundwater pressures from the larger model to the semi-regional model. The present model has shown that:
- Groundwater flow paths are mainly local. Large-scale groundwater flow paths are only seen below the depth of the hypothetical repository (below 500 metres) and are very slow.
- Locations of recharge and discharge, to and from the site area, are in the close vicinity of the site.
- The low contrast between major structures and the rock mass means that the factor having the major effect on the flow paths is the topography.
- A model sufficiently large to incorporate the recharge and discharge areas of the local site is on the order of kilometres.
- A uniform infiltration-rate boundary condition does not give a good representation of the groundwater movements in the model.
- A local site model may be located to cover the site area and a few kilometres of the surrounding region. In order to incorporate all recharge and discharge areas within the site model, the model will be somewhat larger than site-scale models at other sites. This is caused by the fact that the discharge areas are divided into three distinct areas to the east, south and west of the site.
- Boundary conditions may be supplied to the site model by transferring groundwater pressures obtained with the semi-regional model
Large scale stochastic spatio-temporal modelling with PCRaster
Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.
2013-01-01
PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model
Novel extrapolation method in the Monte Carlo shell model
International Nuclear Information System (INIS)
Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio
2010-01-01
We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of 56Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g9/2-shell calculation of 64Ge.
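The idea behind energy-variance extrapolation can be illustrated with a minimal sketch: compute the energy and its variance for a sequence of approximate wave functions, fit the energy as a function of the variance, and read off the intercept at zero variance, since an exact eigenstate has zero energy variance. The linear fit and synthetic data below are illustrative assumptions; the paper derives the variance for MCSM deformed Slater determinants and uses higher-order fits.

```python
def extrapolate_energy(variances, energies):
    """Least-squares linear fit E(v) = E0 + a*v; return the
    zero-variance intercept E0 (first-order variance extrapolation)."""
    n = len(variances)
    vm = sum(variances) / n
    em = sum(energies) / n
    cov = sum((v - vm) * (e - em) for v, e in zip(variances, energies))
    var = sum((v - vm) ** 2 for v in variances)
    slope = cov / var
    return em - slope * vm

# Synthetic data: a "true" eigenvalue of -205.0 MeV, with approximate
# wave functions biased upward in proportion to their energy variance.
variances = [4.0, 2.5, 1.2, 0.5]
energies = [-205.0 + 0.8 * v for v in variances]
e0 = extrapolate_energy(variances, energies)
```

Each improved trial wave function lowers both the energy and its variance, and the extrapolation recovers the eigenvalue without ever reaching the exact state.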
Tidal-induced large-scale regular bed form patterns in a three-dimensional shallow water model
Hulscher, Suzanne J.M.H.
1996-01-01
The three-dimensional model presented in this paper is used to study how tidal currents form wave-like bottom patterns. Inclusion of vertical flow structure turns out to be necessary to describe the formation, or absence, of all known large-scale regular bottom features. The tide and topography are
Sutanudjaja, E.H.
2012-01-01
In this thesis, the possibilities of using spaceborne remote sensing for large-scale groundwater modeling are explored. We focus on a soil moisture product called European Remote Sensing Soil Water Index (ERS SWI, Wagner et al., 1999) - representing the upper profile soil moisture. As a test-bed, we
Probing cosmology with the homogeneity scale of the Universe through large scale structure surveys
International Nuclear Information System (INIS)
Ntelis, Pierros
2017-01-01
This thesis presents my contribution to the measurement of the homogeneity scale using galaxies, together with the cosmological interpretation of the results. In physics, any model is characterized by a set of principles. Most models in cosmology are based on the Cosmological Principle, which states that the universe is statistically homogeneous and isotropic on large scales. Today, this principle is considered to be true since it is respected by those cosmological models that accurately describe the observations. However, while the isotropy of the universe is now confirmed by many experiments, this is not the case for homogeneity. To study cosmic homogeneity, we propose not only to test a model but to test directly one of the postulates of modern cosmology. Since the 1998 measurements of cosmic distances using type Ia supernovae, we know that the universe is now in a phase of accelerated expansion. This phenomenon can be explained by the addition of an unknown energy component, which is called dark energy. Since dark energy is responsible for the expansion of the universe, we can study this mysterious fluid by measuring the rate of expansion of the universe. The universe has imprinted in its matter distribution a standard ruler, the Baryon Acoustic Oscillation (BAO) scale. By measuring this scale at different times during the evolution of our universe, it is possible to measure the rate of expansion of the universe and thus characterize this dark energy. Alternatively, we can use the homogeneity scale to study this dark energy. Studying homogeneity and the BAO scale requires the statistical study of the matter distribution of the universe on large scales, greater than tens of megaparsecs. Galaxies and quasars form in the vast overdensities of matter and are very luminous: these sources trace the distribution of matter. By measuring the emission spectra of these sources using large spectroscopic surveys, such as BOSS and eBOSS, we can measure their positions
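Operationally, the homogeneity scale is often defined from counts-in-spheres: if the average number of galaxies within radius r grows as N(<r) ∝ r^D2, the distribution is homogeneous where the correlation dimension D2 approaches 3, and a common convention takes the scale at which D2 crosses 2.97. The sketch below uses toy counts rather than survey data; the radii, counts, and threshold are illustrative assumptions.

```python
import math

def correlation_dimension(radii, counts):
    """Log-log slope D2(r) = d ln N(<r) / d ln r between successive radii."""
    return [
        (math.log(counts[i + 1]) - math.log(counts[i]))
        / (math.log(radii[i + 1]) - math.log(radii[i]))
        for i in range(len(radii) - 1)
    ]

def homogeneity_scale(radii, counts, threshold=2.97):
    """Smallest radius at which D2 reaches the threshold
    (2.97 is a common operational choice, not a physical constant)."""
    for r, d in zip(radii[1:], correlation_dimension(radii, counts)):
        if d >= threshold:
            return r
    return None

# Toy counts: clustered (D2 < 3) on small scales, near-homogeneous
# (N roughly proportional to r**3) on large scales. Radii in Mpc/h, illustrative.
radii = [10, 20, 40, 80, 160]
counts = [5, 25, 150, 1180, 9400]
```

In a real analysis the counts come from a galaxy catalogue and are corrected with random catalogues for the survey geometry, but the crossing-scale logic is the same.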
No-Core Shell Model and Reactions
International Nuclear Information System (INIS)
Navratil, P; Ormand, W E; Caurier, E; Bertulani, C
2005-01-01
There has been significant progress in ab initio approaches to the structure of light nuclei. Starting from realistic two- and three-nucleon interactions, the ab initio no-core shell model (NCSM) can predict low-lying levels in p-shell nuclei. It is a challenging task to extend ab initio methods to describe nuclear reactions. In this contribution, we present a brief overview of the NCSM with examples of recent applications, as well as the first steps taken toward nuclear reaction applications. In particular, we discuss cross-section calculations of p+6Li and 6He+p scattering, as well as a calculation of the astrophysically important 7Be(p,γ)8B S-factor
Shell model for warm rotating nuclei
Energy Technology Data Exchange (ETDEWEB)
Matsuo, M.; Yoshida, K. [Kyoto Univ. (Japan)]; Dossing, T. [Univ. of Copenhagen (Denmark)]; and others
1996-12-31
Utilizing a shell model which combines the cranked Nilsson mean field and residual surface- and volume-delta two-body forces, the authors discuss the onset of rotational damping in normal- and super-deformed nuclei. A calculation for a typical normal-deformed nucleus, 168Yb, indicates that rotational damping sets in at around 0.8 MeV above the yrast line, and that about 30 rotational bands of various lengths exist at a given rotational frequency, in overall agreement with experimental findings. It is predicted that the onset of rotational damping changes significantly in different superdeformed nuclei due to the variety of shell gaps and single-particle orbits associated with the superdeformed mean field.
Large-scale networks in engineering and life sciences
Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai
2014-01-01
This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines. The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...
Shell Tectonics: A Mechanical Model for Strike-slip Displacement on Europa
Rhoden, Alyssa Rose; Wurman, Gilead; Huff, Eric M.; Manga, Michael; Hurford, Terry A.
2012-01-01
We introduce a new mechanical model for producing tidally-driven strike-slip displacement along preexisting faults on Europa, which we call shell tectonics. This model differs from previous models of strike-slip on icy satellites by incorporating a Coulomb failure criterion, approximating a viscoelastic rheology, determining the slip direction based on the gradient of the tidal shear stress rather than its sign, and quantitatively determining the net offset over many orbits. This model allows us to predict the direction of net displacement along faults and to determine the relative accumulation rate of displacement. To test the shell tectonics model, we generate global predictions of slip direction and compare them with the observed global pattern of strike-slip displacement on Europa, in which left-lateral faults dominate far north of the equator, right-lateral faults dominate in the far south, and near-equatorial regions display a mixture of both types of faults. The shell tectonics model reproduces this global pattern. Incorporating a small obliquity into calculations of tidal stresses, which are used as inputs to the shell tectonics model, can also explain regional differences in strike-slip fault populations. We also discuss implications for fault azimuths, fault depth, and Europa's tectonic history.
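The Coulomb failure criterion at the heart of the shell-tectonics model can be sketched as follows. The friction coefficient, cohesion, and the simplified slip-direction rule below are illustrative assumptions, not the paper's calibrated Europa values; in particular, the paper determines slip sense from the gradient of the tidal shear stress over the orbit, which is here reduced to the sign of its time derivative.

```python
def slips(shear_stress, normal_stress, mu=0.6, cohesion=0.0):
    """Coulomb failure criterion: slip occurs when the resolved shear
    stress magnitude exceeds frictional resistance. Compression is taken
    positive; mu and cohesion are illustrative, not Europa-specific."""
    return abs(shear_stress) > cohesion + mu * max(normal_stress, 0.0)

def slip_direction(dtau_dt):
    """Simplified stand-in for the gradient-based rule: slip sense
    follows how the tidal shear stress is changing, not its sign."""
    return "left-lateral" if dtau_dt > 0 else "right-lateral"
```

Summed over many orbits, failure episodes with a preferred slip sense accumulate into the net left-lateral/right-lateral hemispheric pattern described above.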
Long-term modelling of Carbon Capture and Storage, Nuclear Fusion, and large-scale District Heating
DEFF Research Database (Denmark)
Grohnheit, Poul Erik; Korsholm, Søren Bang; Lüthje, Mikael
2011-01-01
before 2050. The modelling tools developed by the International Energy Agency (IEA) Implementing Agreement ETSAP include both multi-regional global and long-term energy models till 2100, as well as national or regional models with shorter time horizons. Examples are the EFDA-TIMES model, focusing on nuclear fusion, and the Pan European TIMES model, respectively. In the next decades CCS can be a driver for the development and expansion of large-scale district heating systems, which are currently widespread in Europe, Korea and China, and which have large potentials in North America. If fusion replaces fossil fuel power plants with CCS in the second half of the century, the same infrastructure for heat distribution can be used, which will support the penetration of both technologies. This paper will address the issue of infrastructure development and the use of CCS and fusion technologies using...
Altenbach, Holm
2011-01-01
In this volume, scientists and researchers from industry discuss the new trends in the simulation and computation of shell-like structures. The focus is put on the following problems: new theories (based on two-dimensional field equations but describing non-classical effects), new constitutive equations (for materials like sandwiches, foams, etc., which can be combined with the two-dimensional shell equations), complex structures (folded, branching and/or self-intersecting shell structures, etc.) and shell-like structures on different scales (for example, nano-tubes) or very thin structures (similar
Imprint of thawing scalar fields on the large scale galaxy overdensity
Dinda, Bikash R.; Sen, Anjan A.
2018-04-01
We investigate the observed galaxy power spectrum for the thawing class of scalar field models, taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion results in an enhancement of power over ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections results in a suppression of power relative to ΛCDM on large scales. This can be useful to distinguish scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in observed galaxy power spectra on large scales and for smaller redshifts due to different GR effects. But on smaller scales and for larger redshifts, the difference is small and is mainly due to the difference in background expansion.
Robust large-scale parallel nonlinear solvers for simulations.
Energy Technology Data Exchange (ETDEWEB)
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using models other than Newton's method: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
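Broyden's key idea (replace the Jacobian with a secant-type approximation that is updated from successive residuals, so no analytic Jacobian is ever evaluated) reduces in one dimension to the secant method. The sketch below shows that 1-D special case; it is an illustration of the idea, not Sandia's limited-memory implementation.

```python
def broyden_solve(f, x0, tol=1e-10, max_iter=100):
    """Broyden/secant iteration for a scalar equation f(x) = 0: the
    'Jacobian' b is a finite-difference slope updated each step from
    the two most recent iterates, so no derivative of f is needed."""
    x_prev, x = x0, x0 + 1e-4   # two starting points for the first slope
    f_prev = f(x_prev)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        b = (fx - f_prev) / (x - x_prev)   # secant update of df/dx
        x_prev, f_prev = x, fx
        x = x - fx / b                     # quasi-Newton step
    return x

root = broyden_solve(lambda x: x**2 - 2.0, 1.0)  # converges toward sqrt(2)
```

In n dimensions the scalar slope b becomes a rank-one-updated matrix (or, in the limited-memory variant compared in the report, a short history of update vectors), but the structure of the iteration is the same.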
Shell model Monte Carlo investigation of rare earth nuclei
International Nuclear Information System (INIS)
White, J. A.; Koonin, S. E.; Dean, D. J.
2000-01-01
We utilize the shell model Monte Carlo method to study the structure of rare earth nuclei. This work demonstrates the first systematic full-oscillator-shell-with-intruder calculations in such heavy nuclei. Exact solutions of a pairing-plus-quadrupole Hamiltonian are compared with the static path approximation in several dysprosium isotopes from A=152 to 162, including the odd-mass A=153. Some comparisons are also made with Hartree-Fock-Bogoliubov results from Baranger and Kumar. Basic properties of these nuclei at various temperatures and spins are explored. These include energy, deformation, moments of inertia, pairing channel strengths, band crossing, and evolution of shell model occupation numbers. Exact level densities are also calculated and, in the case of 162Dy, compared with experimental data. (c) 2000 The American Physical Society
Theoretical spectroscopy and the fp shell
International Nuclear Information System (INIS)
Poves, A.; Zuker, A.
1980-01-01
The recently developed quasiconfiguration method is applied to fp-shell nuclei. Second-order degenerate perturbation theory is shown to be sufficient to produce, for low-lying states, the same results as large diagonalizations in the full (f7/2 p3/2 p1/2 f5/2)^n space, due to the operation of linked-cluster mechanisms. Realistic interactions with minimal monopole changes are shown to be successful in reproducing spectra, binding energies, quadrupole moments and transition rates. Large shell-model spaces are seen to exhibit typical many-body behaviour. Quasiconfigurations allow insight into the underlying coupling schemes
Arler, Finn
2006-01-01
The subject of this paper is long-term, large-scale change in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed of which kind of attitude is appropriate, from an ethical point of view, when dealing with large-scale changes like these. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...
Evaluation of scaling correlations for mobilization of double-shell tank waste
Energy Technology Data Exchange (ETDEWEB)
Shekarriz, A.; Hammad, K.J.; Powell, M.R.
1997-09-01
In this report, we have examined some of the fundamental mechanisms expected to be at work during mobilization of the waste within the double-shell tanks at Hanford. The motivation stems from the idea that in order to properly apply correlations derived from scaled tests, one would have to ensure that appropriate scaling laws are utilized. Further, in the process of delineating the controlling mechanisms during mobilization, the currently used computational codes are being validated and strengthened based on these findings. Experiments were performed at 1/50-scale, different from those performed in previous fiscal years (i.e., 1/12- and 1/25-scale). It was anticipated that if the current empirical correlations are to work, they should be scale invariant. The current results showed that linear scaling between the 1/25-scale and 1/50-scale correlations does not work well. Several mechanisms were examined in the scaled tests which might have contributed to the discrepancies between the results at these two scales. No deficiencies in the experimental approach or the data were found. Cognizant of these results, it was concluded that the use of the current empirical correlations for ECR should be done cautiously, taking into account the appropriate yield properties of the material.
Wellposedness of a cylindrical shell model
International Nuclear Information System (INIS)
McMillan, C.
1994-01-01
We consider a well-known model of a thin cylindrical shell with dissipative feedback controls on the boundary in the form of forces, shears, and moments. We show that the resulting closed-loop feedback problem generates a strongly continuous semigroup of contractions in the energy space
Study of nickel nuclei by (p,d) and (p,t) reactions. Shell model interpretation
International Nuclear Information System (INIS)
Kong-A-Siou, D.-H.
1975-01-01
The experimental techniques employed at the Nuclear Science Institute (Grenoble) and at Michigan State University are described. The development of the transition amplitude calculation of the one- or two-nucleon transfer reactions is described first, after which the principle of shell model calculations is outlined. The choices of configuration space and two-body interactions are discussed. The DWBA method of analysis is studied in more detail. The effects of different approximations and the influence of the parameters are examined. Special attention is paid to the j-dependence of the form of the angular distributions, an effect not explained in the standard DWBA framework. The results are analysed and a large section is devoted to a comparative study of the experimental results obtained and those from other nuclear reactions. The spectroscopic data obtained are compared with the results of shell model calculations [fr
Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing
Directory of Open Access Journals (Sweden)
Zhaosheng Yang
2014-01-01
Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, which is based on the analysis of the characteristics and defects of the genetic algorithm and the support vector machine. In a cloud computing environment, firstly, SVM parameters are optimized by the parallel genetic algorithm, and then this optimized parallel SVM model is used to predict traffic flow. On the basis of the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
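The GA half of such a GA-SVM pipeline can be sketched with a minimal genetic algorithm searching over the two usual SVM hyperparameters. This is an illustrative skeleton only: the actual SVM cross-validation step is replaced here by a hypothetical quadratic stand-in objective, and all bounds and GA settings are assumptions, not values from the paper.

```python
import random

random.seed(42)

def fitness(params):
    # Stand-in for SVM cross-validation error: a real GA-SVM would train
    # an SVM with (C, gamma) = params and return the validation error.
    # Here a quadratic surrogate with optimum at C=3.0, gamma=0.5 is used.
    c, g = params
    return (c - 3.0) ** 2 + (g - 0.5) ** 2

def evolve(pop_size=20, generations=40, bounds=((0.0, 10.0), (0.0, 1.0))):
    # Minimal GA: tournament selection, blend crossover, Gaussian
    # mutation, with elitism (the best individual always survives).
    pop = [tuple(random.uniform(lo, hi) for lo, hi in bounds)
           for _ in range(pop_size)]
    best = min(pop, key=fitness)
    initial_best = fitness(best)
    for _ in range(generations):
        new_pop = [best]  # elitism
        while len(new_pop) < pop_size:
            p1 = min(random.sample(pop, 3), key=fitness)  # tournament
            p2 = min(random.sample(pop, 3), key=fitness)
            # blend crossover + Gaussian mutation, clipped to bounds
            child = tuple(
                min(hi, max(lo, 0.5 * (a + b)
                            + random.gauss(0, 0.1 * (hi - lo))))
                for (a, b), (lo, hi) in zip(zip(p1, p2), bounds))
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=fitness)
    return best, initial_best

best, initial_best = evolve()
```

In the parallel variants compared in the paper, the fitness evaluations within each generation (the expensive SVM trainings) are the part distributed across MPI ranks or cloud workers.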
Fatigue Analysis of Large-scale Wind turbine
Directory of Open Access Journals (Sweden)
Zhu Yongli
2017-01-01
Full Text Available This paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. A finite element model of the top flange connection system is established with the finite element analysis software MSC.Marc/Mentat and its fatigue strain is analyzed; load simulation of the flange fatigue working condition is implemented with the Bladed software; the flange fatigue load spectrum is acquired with the rain-flow counting method; finally, fatigue analysis of the top flange is carried out with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The results provide a new approach to flange fatigue analysis of large-scale wind turbine generators and have practical engineering value.
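The Palmgren-Miner step named in the abstract above reduces to a simple sum once the rain-flow counted load spectrum is in hand. A minimal sketch follows; the Basquin S-N constants and the load bins are illustrative assumptions, not values from the paper.

```python
def cycles_to_failure(stress_range, C=1.0e12, m=3.0):
    # Basquin-type S-N curve: N = C / S^m (C and m are assumed
    # material constants for illustration only).
    return C / stress_range ** m

def miner_damage(spectrum):
    # Palmgren-Miner linear cumulative damage: D = sum(n_i / N_i).
    # Failure is predicted when D reaches 1. `spectrum` is a list of
    # (stress_range, cycle_count) bins, e.g. from rain-flow counting.
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# Hypothetical rain-flow counted load spectrum (MPa, cycles)
spectrum = [(100.0, 100_000), (200.0, 12_500)]
damage = miner_damage(spectrum)  # 0.1 + 0.1 = 0.2 of the fatigue life
```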
Lai, Changliang; Wang, Junbiao; Liu, Chuang
2014-10-01
Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. Then buckling behavior and structural efficiency of these shells are analyzed under axial compression, pure bending, torsion and transverse bending by finite element (FE) models. The FE models are created by a parametrical FE modeling approach that defines FE models with original natural twisted geometry and orients cross-sections of beam elements exactly. And the approach is parameterized and coded by Patran Command Language (PCL). The demonstrations of FE modeling indicate the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of six typical grid cylindrical shells are determined. The results of these studies indicate that the triangle grid and rotated triangle grid cylindrical shell are more efficient than others under axial compression and pure bending, whereas under torsion and transverse bending, the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared and provide an understanding of composite grid cylindrical shells that is useful in preliminary design of such structures.
Measuring the topology of large-scale structure in the universe
Gott, J. Richard, III
1988-11-01
An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.
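The genus statistic behind these genus curves is closely related to the Euler characteristic of the excursion sets of the smoothed density field. As a toy illustration of how such a topological number is measured on gridded data (in 2D rather than the 3D used in the surveys), the Euler characteristic of a set of filled pixels can be counted directly as vertices minus edges plus faces:

```python
def euler_characteristic(pixels):
    # chi = V - E + F for a union of unit squares, where `pixels` is a
    # set of (i, j) cells. Shared vertices/edges are counted once via sets.
    V, E = set(), set()
    F = len(pixels)
    for (i, j) in pixels:
        for corner in [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]:
            V.add(corner)
        E.add(((i, j), (i + 1, j)))          # bottom edge
        E.add(((i, j), (i, j + 1)))          # left edge
        E.add(((i + 1, j), (i + 1, j + 1)))  # right edge
        E.add(((i, j + 1), (i + 1, j + 1)))  # top edge
    return len(V) - len(E) + F

# One blob: chi = 1; a ring (3x3 block minus its centre): chi = 0,
# the 2D analogue of the "sponge vs meatball" distinction.
blob = {(0, 0)}
ring = {(i, j) for i in range(3) for j in range(3)} - {(1, 1)}
```

In the survey analyses the same idea is applied in 3D to density contours at varying thresholds, giving the genus as a function of threshold (the genus curve).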
Directory of Open Access Journals (Sweden)
D. Bachmann
2004-01-01
Full Text Available Using a new 3-D physical modelling technique we investigated the initiation and evolution of large scale landslides in the presence of pre-existing large scale fractures, taking into account the weakening of the slope material due to alteration/weathering. The modelling technique is based on specially developed, properly scaled analogue materials, as well as on an original vertical accelerator device enabling increases in the 'gravity acceleration' up to a factor of 50. The weathering primarily affects the uppermost layers through water circulation. We simulated the effect of this process by making models of two parts. The shallower part represents the zone subject to homogeneous weathering and is made of a low strength material of compressive strength σl. The deeper (core) part of the model is stronger and simulates intact rocks. Deformation of such a model subjected to the gravity force occurred only in its upper (low strength) layer. In another set of experiments, narrow planar zones of low strength (σw) sub-parallel to the slope surface (σw < σl) were introduced into the model's superficial low strength layer to simulate localized highly weathered zones. In this configuration landslides were initiated much more easily (at lower 'gravity force'), were shallower, and had a smaller horizontal size largely defined by the weak zone size. Pre-existing fractures were introduced into the model by cutting it along a given plane. They proved to have little influence on the slope stability, except when they were associated with highly weathered zones; in this latter case the fractures laterally limited the slides. Deep-seated rockslide initiation is thus directly controlled by the mechanical structure of the hillslope's uppermost levels and especially by the presence of weak zones due to weathering. The large scale fractures play a more passive role and can only influence the shape and the volume of the sliding units.
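The reason enhanced 'gravity acceleration' is needed in such analogue experiments follows from a simple similitude argument: gravity-driven stresses scale as ρgL, so the analogue material strength must be scaled by the same factor. A minimal sketch, with all prototype and model numbers assumed for illustration (they are not taken from the paper):

```python
def required_model_strength(C_proto, rho_ratio, g_factor, length_scale):
    # Similitude for gravity-driven failure: stresses scale as rho*g*L,
    # so strength similarity requires
    #   C_model / C_proto = (rho_m/rho_p) * (g_m/g_p) * (L_m/L_p).
    return C_proto * rho_ratio * g_factor * length_scale

# A hypothetical 1000 m prototype slope modelled at 0.2 m under 50 g,
# with an analogue material of the same density as the rock:
C_model = required_model_strength(C_proto=50e6,        # ~50 MPa rock (assumed)
                                  rho_ratio=1.0,
                                  g_factor=50.0,
                                  length_scale=0.2 / 1000.0)
# C_model = 0.5 MPa: the x50 acceleration lets a much weaker (and thus
# practically manufacturable) analogue material satisfy similitude.
```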
Fires in large scale ventilation systems
International Nuclear Information System (INIS)
Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.
1991-01-01
This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamic and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamic and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)
Harris, B.; McDougall, K.; Barry, M.
2012-07-01
Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, they are rarely developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km2, and a detailed 13 km2 area within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
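Waterway delineation from a DEM is typically built on a flow-direction and flow-accumulation pass over the elevation grid. A minimal sketch of the standard D8 method is shown below (the abstract does not state which routing algorithm the study's GIS tools use, so this is a generic illustration on a tiny synthetic DEM):

```python
def d8_flow(dem):
    # D8 routing: each cell drains to its steepest-descent neighbour;
    # flow accumulation counts upstream cells (including the cell itself).
    n, m = len(dem), len(dem[0])
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    downstream = {}
    for i in range(n):
        for j in range(m):
            best, target = 0.0, None
            for di, dj in nbrs:
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < m:
                    # slope = drop / distance (diagonals are sqrt(2) away)
                    slope = (dem[i][j] - dem[a][b]) / (di * di + dj * dj) ** 0.5
                    if slope > best:
                        best, target = slope, (a, b)
            downstream[(i, j)] = target  # None => sink/outlet
    acc = {cell: 1 for cell in downstream}
    # process cells from highest to lowest so upstream counts are final
    for cell in sorted(downstream, key=lambda c: -dem[c[0]][c[1]]):
        if downstream[cell] is not None:
            acc[downstream[cell]] += acc[cell]
    return acc

# Inclined plane draining toward the (0, 0) corner: all 25 cells
# accumulate at the outlet.
dem = [[i + j for j in range(5)] for i in range(5)]
acc = d8_flow(dem)
```

Stream networks are then extracted by thresholding the accumulation grid, which is exactly where DEM scale and quality (the subject of the comparison above) change the delineated waterways.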
Three-fluid MHD-model of a current shell in Z-pinch
International Nuclear Information System (INIS)
Bazdenkov, S.V.; Vikhrev, V.V.
1975-01-01
Formation and motion of the current shell in a powerful pulsed discharge (Z-pinch) are discussed. The one-dimensional nonstationary problem of a discharge in deuterium is solved in the three-fluid magnetohydrodynamic approximation, allowing for gas ionization and the motion of neutral atoms. It is shown that after the shell detaches, a large quantity of ionized gas remains near the insulating chamber wall, sufficient for a secondary breakdown to take place in it. The moving current shell has a double structure, i.e. a current ''piston'' and a current layer in the shock wave front
Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling
Saksena, S.; Dey, S.; Merwade, V.
2016-12-01
Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of river bathymetry incorporation is more significant in the 2D model than in the 1D model.
Major shell centroids in the symplectic collective model
International Nuclear Information System (INIS)
Draayer, J.P.; Rosensteel, G.; Tulane Univ., New Orleans, LA
1983-01-01
Analytic expressions are given for the major shell centroids of the collective potential V(β, γ) and the shape observable β² in the Sp(3,R) symplectic model. The tools of statistical spectroscopy are shown to be useful, firstly, in translating a requirement that the underlying shell structure be preserved into constraints on the parameters of the collective potential and, secondly, in giving a reasonable estimate for a truncation of the infinite dimensional symplectic model space from experimental B(E2) transition strengths. Results based on the centroid information are shown to compare favorably with results from exact calculations in the case of 20Ne. (orig.)
Large Scale Visual Recommendations From Street Fashion Images
Jagadeesh, Vignesh; Piramuthu, Robinson; Bhardwaj, Anurag; Di, Wei; Sundaresan, Neel
2014-01-01
We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose four data driven models in the form of Complementary Nearest Neighbor Consensus, Gaussian Mixture Models, Texture Agnostic Retrieval and Markov Chain LDA for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive e...
UHPFRC in large span shell structures
Ter Maten, R.N.; Grunewald, S.; Walraven, J.C.
2013-01-01
Ultra-High Performance Fibre-Reinforced Concrete (UHPFRC) is an innovative concrete type with a high compressive strength and a far more durable character compared to conventional concrete. UHPFRC can be applied in structures with aesthetic appearance and high material efficiency. Shell structures
Evolution of scaling emergence in large-scale spatial epidemic spreading.
Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan
2011-01-01
Zipf's law and Heaps' law are two representative scaling concepts that play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has hardly been clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before reaching a stable state in which Heaps' law still holds while strict Zipf's law disappears. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results for pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help to understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies in the early stage of a pandemic disease.
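The joint emergence of Zipf-like rank-frequency statistics and Heaps-like vocabulary growth can be reproduced with a classic preferential-copy process (Simon's model). The sketch below is a generic illustration of that mechanism, not the paper's metapopulation model; the innovation rate alpha and the run length are arbitrary choices.

```python
import random

random.seed(7)

def simon_model(n_steps, alpha=0.1):
    # Simon's process: with probability alpha a new 'word' (e.g. a newly
    # infected location) appears; otherwise an earlier occurrence is
    # copied, so frequent items grow preferentially.
    seq = [0]
    next_id = 1
    vocab_growth = []          # Heaps-style curve: vocabulary vs. time
    for _ in range(1, n_steps):
        if random.random() < alpha:
            seq.append(next_id)
            next_id += 1
        else:
            seq.append(random.choice(seq))
        vocab_growth.append(next_id)
    return seq, vocab_growth

seq, vocab_growth = simon_model(5000)
# Zipf-style rank-frequency profile: counts sorted in decreasing order
counts = sorted((seq.count(w) for w in set(seq)), reverse=True)
```

Plotting `counts` against rank on log-log axes gives the Zipf view, while `vocab_growth` against time gives the Heaps view; the two curves are different projections of the same process, which is the dependence the article examines.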
Gravity settling of Hanford single-shell tank sludges
International Nuclear Information System (INIS)
Brooks, K.P.; Rector, D.R.; Smith, P.A.
1999-01-01
The US Department of Energy plans to use gravity settling in million-gallon storage tanks while pretreating sludge on the Hanford site. To be considered viable in these large tanks, the supernatant must become clear, and the sludge must be concentrated in an acceptable time. These separations must occur over the wide range of conditions associated with sludge pretreatment. In the work reported here, gravity settling was studied with liter quantities of actual single-shell tank sludge from Hanford Tank 241-C-107. Because of limited sludge availability, an approach was developed using the results of these liter-scale tests to predict full-scale operation. Samples were centrifuged at various g-forces to simulate compaction under higher layers of sludge. A semi-empirical settling model was then developed incorporating both the liter-scale settling data and the centrifuge compression results to describe the sludge behavior in a million-gallon tank. The settling model predicted that the compacted sludge solids would exceed 20 wt% in less than 30 days of settling in a 10-m-tall tank for all pretreatment steps
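The kind of scale-up prediction described above can be illustrated with a generic hindered-settling balance: the interface falls at a concentration-dependent velocity while the solids below it are conserved. This is a textbook Richardson-Zaki sketch with assumed parameters, not the paper's semi-empirical model fitted to Tank 241-C-107 data.

```python
def batch_settle(h0=1.0, phi0=0.05, v0=1.0e-5, n=4.65, phi_max=0.30, dt=60.0):
    # Batch gravity settling of a sludge interface:
    #   v = v0 * (1 - phi)**n   (Richardson-Zaki hindered settling)
    #   phi * h = phi0 * h0     (solids balance below the interface)
    # All parameter values are illustrative assumptions.
    h, phi, t = h0, phi0, 0.0
    while phi < phi_max:
        v = v0 * (1.0 - phi) ** n   # hindered interface velocity (m/s)
        h -= v * dt                 # interface falls
        t += dt
        phi = phi0 * h0 / h         # mean solids fraction below interface
    return t, h, phi

t_settle, h_final, phi_final = batch_settle()
```

Scaling such a model from liter-scale columns to a 10 m tank is what required the centrifuge compaction data in the study: at full depth, self-weight consolidation (ignored in this sketch) sets the final solids loading.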
Mean field theory of nuclei and shell model. Present status and future outlook
International Nuclear Information System (INIS)
Nakada, Hitoshi
2003-01-01
Many of the recent topics in nuclear structure concern unstable nuclei. It has been revealed experimentally that nuclear halos and neutron skins, as well as cluster or molecule-like structures, can be present in unstable nuclei, and that magic numbers well established in stable nuclei occasionally disappear while new ones appear. The shell model based on the mean field approximation has been successfully applied to stable nuclei to explain nuclear structure quantitatively as a finite many-body system, and it is at present considered the standard model. Whether unstable nuclei can be understood on the same basis is a question that touches the fundamental principles of nuclear structure theories. In this lecture, the fundamental concepts and framework of nuclear structure theory based on the mean field theory and the shell model are presented, to clarify the problems and to suggest directions for future research. First, fundamental properties of nuclei are described under the subtitles: saturation and magic numbers, nuclear force and effective interactions, nuclear matter, and LS splitting. Then the mean field theory is presented under the subtitles: the potential model, the mean field theory, Hartree-Fock approximation for nuclear matter, density dependent force, semiclassical mean field theory, mean field theory and symmetry, Skyrme interaction and density functional, density matrix expansion, finite range interactions, effective masses, and motion of the center of mass. The subsequent section is devoted to the shell model, with the subtitles: beyond the mean field approximation, core polarization, effective interaction of the shell model, one-particle wave function, nuclear deformation and shell model, and shell model of cross shell. Finally, the structure of unstable nuclei is discussed with the subtitles: general remark on the study of unstable nuclear structure, asymptotic behavior of wave
Dynamical symmetries of the shell model
International Nuclear Information System (INIS)
Van Isacker, P.
2000-01-01
The applications of spectrum generating algebras and of dynamical symmetries in the nuclear shell model are many and varied. They stretch back to Wigner's early work on the supermultiplet model and encompass important landmarks in our understanding of the structure of the atomic nucleus such as Racah's SU(2) pairing model and Elliott's SU(3) rotational model. One of the aims of this contribution has been to show the historical importance of the idea of dynamical symmetry in nuclear physics. Another has been to indicate that, in spite of being old, this idea continues to inspire developments that are at the forefront of today's research in nuclear physics. It has been argued in this contribution that the main driving features of nuclear structure can be represented algebraically, but at the same time the limitations of the symmetry approach must be recognised. It should be clear that such an approach can only account for gross properties and that any detailed description requires more involved numerical calculations, of which we have seen many fine examples during this symposium. In this way symmetry techniques can be used as an appropriate starting point for detailed calculations. A noteworthy example of this approach is the pseudo-SU(3) model, which starting from its initial symmetry Ansatz has grown into an adequate and powerful description of the nucleus in terms of a truncated shell model. (author)
Directory of Open Access Journals (Sweden)
Slaviša M. Ilić
2011-10-01
Full Text Available This paper analyzes the effectiveness of possible models for queuing at gas stations, using a mathematical model from large-scale queuing theory. Based on actual collected data and a statistical analysis of the expected intensity of vehicle arrivals and queuing at gas stations, the real queuing process was modeled mathematically and certain parameters were quantified, identifying the weaknesses of the existing models and the possible benefits of an automated queuing model.
Design and modeling of an additive manufactured thin shell for x-ray astronomy
Feldman, Charlotte; Atkins, Carolyn; Brooks, David; Watson, Stephen; Cochrane, William; Roulet, Melanie; Willingale, Richard; Doel, Peter
2017-09-01
Future X-ray astronomy missions require light-weight thin shells to provide large collecting areas within the weight limits of launch vehicles, whilst still delivering angular resolutions close to that of Chandra (0.5 arc seconds). Additive manufacturing (AM), also known as 3D printing, is a well-established technology with the ability to construct or `print' intricate support structures, which can be both integral and light-weight, and is therefore a candidate technique for producing shells for space-based X-ray telescopes. The work described here is a feasibility study into this technology for precision X-ray optics for astronomy and has been sponsored by the UK Space Agency's National Space Technology Programme. The goal of the project is to use a series of test samples to trial different materials and processes with the aim of developing a viable path for the production of an X-ray reflecting prototype for astronomical applications. The initial design of an AM prototype X-ray shell is presented with ray-trace modelling and analysis of the X-ray performance. The polishing process may cause print-through from the light-weight support structure on to the reflecting surface. Investigations into the effect of the print-through on the X-ray performance of the shell are also presented.
Hierarchical Cantor set in the large scale structure with torus geometry
Energy Technology Data Exchange (ETDEWEB)
Murdzek, R. [Physics Department, ' Al. I. Cuza' University, Blvd. Carol I, Nr. 11, Iassy 700506 (Romania)], E-mail: rmurdzek@yahoo.com
2008-12-15
The formation of large scale structures is considered within a model with a string on a toroidal space-time. Firstly, the space-time geometry is presented. In this geometry, the Universe is represented by a string describing a torus surface. Thereafter, the large scale structure of the Universe is derived from the string oscillations. The results are in agreement with the cellular structure of the large scale distribution and with the theory of a Cantorian space-time.
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
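The DUA idea described above, propagating parameter uncertainties through model derivatives rather than by repeated statistical sampling, reduces at first order to combining gradients with parameter standard deviations. The sketch below applies this to the borehole flow function, a benchmark commonly used for this kind of analysis (the abstract's sample problem is of this type, but the parameter values and standard deviations here are illustrative assumptions, and the finite-difference gradient stands in for the analytic derivatives a GRESS/ADGEN-processed code would supply).

```python
import math

def borehole(x):
    # Borehole flow-rate benchmark (m^3/yr);
    # x = (rw, r, Tu, Hu, Tl, Hl, L, Kw).
    rw, r, Tu, Hu, Tl, Hl, L, Kw = x
    lnr = math.log(r / rw)
    return (2.0 * math.pi * Tu * (Hu - Hl) /
            (lnr * (1.0 + 2.0 * L * Tu / (lnr * rw**2 * Kw) + Tu / Tl)))

def central_gradient(f, x, rel_step=1e-6):
    # Central finite-difference gradient; derivative-enhanced codes
    # would compute these derivatives exactly instead.
    g = []
    for i in range(len(x)):
        h = rel_step * max(1.0, abs(x[i]))
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

# Midpoints of the usual borehole parameter ranges; sigmas are assumed.
nominal = [0.10, 25050.0, 89335.0, 1050.0, 89.55, 760.0, 1400.0, 10950.0]
sigmas = [0.015, 8000.0, 8000.0, 20.0, 9.0, 20.0, 90.0, 350.0]

grad = central_gradient(borehole, nominal)
# First-order (linear) propagation: var(f) ~= sum_i (df/dx_i * sigma_i)^2
std_f = math.sqrt(sum((g * s) ** 2 for g, s in zip(grad, sigmas)))
flow = borehole(nominal)
```

The appeal noted in the abstract is visible here: one gradient evaluation (a handful of model runs, or two with an adjoint/derivative-compiled code) replaces the tens of runs a sampling-based estimate of the output distribution would need.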
The stratigraphic distribution of large marine vertebrates and shell beds in the Pliocene of Tuscany
Dominici, Stefano; Benvenuti, Marco; Danise, Silvia
2015-04-01
, within an otherwise oligotrophic Mediterranean Sea, sustain a rich and diverse cetacean and shark, epipelagic and mesopelagic community. The modern steep bathymetric gradient was displaced towards the East during the Pliocene, before the latest phases of uplift of the Northern Apennines. An open marine, nutrient-rich ecosystem influenced hinterland basins during major transgressive pulses, leading to a higher productivity and the formation of laterally-continuous accumulations of biogenic hard parts. A comparison with the few available studies on the sequence-stratigraphic distribution of large marine vertebrates and shell beds suggests that a model integrating high productivity and sea level rise, favouring bone bed and shell bed formation, can be applied at other settings, and other geologic intervals.
Inflationary tensor fossils in large-scale structure
Energy Technology Data Exchange (ETDEWEB)
Dimastrogiovanni, Emanuela [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Fasiello, Matteo [Department of Physics, Case Western Reserve University, Cleveland, OH 44106 (United States); Jeong, Donghui [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Kamionkowski, Marc, E-mail: ema@physics.umn.edu, E-mail: mrf65@case.edu, E-mail: duj13@psu.edu, E-mail: kamion@jhu.edu [Department of Physics and Astronomy, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218 (United States)
2014-12-01
Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.
Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola
2016-01-01
Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
Large Scale Community Detection Using a Small World Model
Directory of Open Access Journals (Sweden)
Ranjan Kumar Behera
2017-11-01
Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. A real-world social network follows the small-world phenomenon, which indicates that any two social entities are reachable from each other in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task owing to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, but when large-scale social networks are considered, these algorithms are observed to take considerably longer. In this work, with the objective of improving efficiency, a parallel programming framework, Map-Reduce, has been employed to uncover the hidden communities in a social network. The proposed approach has been compared with some standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
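The random-walk intuition the abstract relies on can be illustrated with a toy sketch (not the paper's Map-Reduce algorithm; the graph and walk parameters here are invented): short walks started inside a densely connected group tend to stay within that group.

```python
import random
from collections import defaultdict

def walk_affinity(adj, start, n_walks=2000, length=3, seed=42):
    """Count how often short random walks from `start` visit each node."""
    rng = random.Random(seed)
    visits = defaultdict(int)
    for _ in range(n_walks):
        node = start
        for _ in range(length):
            node = rng.choice(adj[node])
            visits[node] += 1
    return visits

# Two triangles {0,1,2} and {3,4,5} joined by a single bridge edge (2-3).
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
visits = walk_affinity(adj, start=0)
inside = sum(visits[n] for n in (0, 1, 2))
total = sum(visits.values())
# Walks started in the left triangle spend most of their steps there.
print(inside / total > 0.5)  # -> True
```

Community detection methods built on this idea group together nodes whose walks frequently co-visit each other; the Map-Reduce framework in the paper parallelizes that walk bookkeeping across the network.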
Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)
International Nuclear Information System (INIS)
Schroeder, William J.
2011-01-01
This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem
Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)
Energy Technology Data Exchange (ETDEWEB)
William J. Schroeder
2011-11-13
This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally
ADAPTIVE TEXTURE SYNTHESIS FOR LARGE SCALE CITY MODELING
Directory of Open Access Journals (Sweden)
G. Despine
2015-02-01
Full Text Available Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but the image resolution generally does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures from a set of procedural rules and elementary patterns such as bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using purely procedural modelling, we present a method that extracts information from aerial images and adapts the texture synthesis to each building. We describe a workflow that allows the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow the most appropriate patterns to be selected from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. Roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.
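The pattern-selection step can be caricatured as nearest-neighbour matching in colour space (hypothetical catalogue entries and colours, not the paper's system):

```python
# Choose the texture pattern whose principal colour is closest to the
# colour extracted from the aerial image (Euclidean distance in RGB).

def colour_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_pattern(extracted_rgb, catalogue):
    """Return the catalogue entry with the nearest principal colour."""
    return min(catalogue, key=lambda p: colour_distance(extracted_rgb, p["rgb"]))

catalogue = [
    {"name": "red_brick",   "rgb": (150, 60, 50)},
    {"name": "grey_render", "rgb": (170, 170, 165)},
    {"name": "roof_tile",   "rgb": (120, 45, 40)},
]
facade_colour = (160, 165, 160)  # principal colour extracted from the image
print(select_pattern(facade_colour, catalogue)["name"])  # -> grey_render
```

The paper's catalogue additionally carries physical information and semantic attributes; this sketch shows only the colour-matching request.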
Nonlinear evolution of large-scale structure in the universe
International Nuclear Information System (INIS)
Frenk, C.S.; White, S.D.M.; Davis, M.
1983-01-01
Using N-body simulations we study the nonlinear development of primordial density perturbations in an Einstein-de Sitter universe. We compare the evolution of an initial distribution without small-scale density fluctuations to evolution from a random Poisson distribution. These initial conditions mimic the assumptions of the adiabatic and isothermal theories of galaxy formation. The large-scale structures which form in the two cases are markedly dissimilar. In particular, the correlation function ξ(r) and the visual appearance of our adiabatic (or "pancake") models better match the observed distribution of galaxies, which is characterized by large-scale filamentary structure. Because the pancake models do not evolve in a self-similar fashion, the slope of ξ(r) steepens with time; as a result there is a unique epoch at which these models fit the galaxy observations. We find the ratio of cutoff length to correlation length at this time to be λ_min/r_0 = 5.1; its expected value in a neutrino-dominated universe is 4(Ωh)^(-1) (H_0 = 100h km s^(-1) Mpc^(-1)). At early epochs these models predict a negligible amplitude for ξ(r) and could explain the lack of measurable clustering in the Lyα absorption lines of high-redshift quasars. However, large-scale structure in our models collapses after z = 2. If this collapse precedes galaxy formation, as in the usual pancake theory, galaxies formed uncomfortably recently. The extent of this problem may depend on the cosmological model used; the present series of experiments should be extended in the future to include models with Ω < 1
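The correlation function ξ(r) used above can be sketched with the natural pair-counting estimator ξ(r) = DD(r)/RR(r) - 1, where DD and RR are pair counts in distance bins for the data and for a random catalogue (synthetic points here, not the simulation output):

```python
import random

def pair_counts(points, bins):
    """Count point pairs whose separation falls in each distance bin."""
    counts = [0] * (len(bins) - 1)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
            for k in range(len(bins) - 1):
                if bins[k] <= d < bins[k + 1]:
                    counts[k] += 1
                    break
    return counts

def xi(data, randoms, bins):
    dd = pair_counts(data, bins)
    rr = pair_counts(randoms, bins)
    return [d / r - 1 if r else 0.0 for d, r in zip(dd, rr)]

rng = random.Random(0)
cloud = [(rng.random(), rng.random(), rng.random()) for _ in range(200)]
bins = [0.0, 0.1, 0.2, 0.3]
# Sanity check: using the same catalogue as data and randoms gives xi = 0.
print(xi(cloud, cloud, bins))  # -> [0.0, 0.0, 0.0]
```

A clustered (e.g. filamentary) point set would instead give ξ(r) > 0 at small separations relative to the Poisson catalogue.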
ROSAT view of the ISM in the Large Magellanic Cloud
Chu, You-Hua
1996-01-01
ROSAT observations of the Large Magellanic Cloud (LMC) show large-scale unbounded diffuse X-ray emission, as well as enhanced emission within large shell structures. These observations allow the distribution of hot ionized medium in the LMC to be examined. Moreover, the hot interiors of supernova shells and superbubbles, supernova remnants, and the multi-phase structure of the interstellar medium (ISM) can be investigated.
Optical properties of core-shell and multi-shell nanorods
Mokkath, Junais Habeeb; Shehata, Nader
2018-05-01
We report a first-principles time-dependent density functional theory study of the optical response modulations in bimetallic core-shell (Na@Al and Al@Na) and multi-shell (Al@Na@Al@Na and Na@Al@Na@Al: concentric shells of Al and Na alternate) nanorods. All of the core-shell and multi-shell configurations display highly enhanced absorption intensity with respect to the pure Al and Na nanorods, showing sensitivity to both composition and chemical ordering. Remarkably large spectral intensity enhancements were found in a couple of core-shell configurations, indicating that the optical response of bimetallic core-shell nanorods cannot always be treated as the average of the responses of the individual components. We believe that our theoretical results will be useful for applications that depend on aluminum-based plasmonic materials, such as solar cells and sensors.
Mathematical Modeling and Kinematics Analysis of Double Spherical Shell Rotary Docking Skirt
Directory of Open Access Journals (Sweden)
Gong Haixia
2017-01-01
Full Text Available In order to solve the problem of the large trim and heel angles of a wrecked submarine, the double spherical shell rotary docking skirt is studied. Following the working principle of the rotary docking skirt, the fixed skirt, the directional skirt and the angle skirt are simplified as connecting rods. The posture equation and kinematics model of the docking skirt are then derived, and from the kinematics model the rotation angles of the directional skirt and the angle skirt are obtained for different trim and heel angles of the wrecked submarine. By matching the rotations of the directional skirt and the angle skirt, the docking-skirt interface can be rotated anywhere within the range 0° to 2γ, completing the docking between the skirt and the wrecked submarine. The MATLAB software is used to visualize the rotation angles of the fixed skirt and directional skirt, which lays a good foundation for the future development of the control of the double spherical shell rotary docking skirt.
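The kinematic idea of compensating trim and heel by skirt rotations can be illustrated with elementary rotation matrices (a toy sketch with assumed axis conventions, not the paper's posture equations): tilt the docking-interface normal by trim and heel, then apply the inverse rotations to restore alignment.

```python
import math

def rot_x(a):  # roll (heel) about the x axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # pitch (trim) about the y axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

trim, heel = math.radians(12.0), math.radians(7.0)
normal = [0.0, 0.0, 1.0]                       # upright interface normal
tilted = apply(matmul(rot_x(heel), rot_y(trim)), normal)
# Compensate with the opposite angles applied in reverse order.
restored = apply(matmul(rot_y(-trim), rot_x(-heel)), tilted)
print(all(abs(a - b) < 1e-12 for a, b in zip(restored, normal)))  # -> True
```

The paper's model solves the analogous problem with the skirt's own joint angles rather than world-frame inverse rotations, but the underlying composition of rotations is the same.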
The experimental and shell model approach to 100Sn
International Nuclear Information System (INIS)
Grawe, H.; Maier, K.H.; Fitzgerald, J.B.; Heese, J.; Spohr, K.; Schubart, R.; Gorska, M.; Rejmund, M.
1995-01-01
The present status of the experimental approach to 100Sn and its shell-model structure is given. New developments in experimental techniques, such as low-background isomer spectroscopy and charged-particle detection in 4π, are surveyed. Based on recent experimental data, shell-model calculations are used to predict the structure of the single- and two-nucleon neighbours of 100Sn. The results are compared to the systematics of Coulomb energies and spin-orbit splitting and discussed with respect to future experiments. (author). 51 refs, 11 figs, 1 tab
Power suppression at large scales in string inflation
Energy Technology Data Exchange (ETDEWEB)
Cicoli, Michele [Dipartimento di Fisica ed Astronomia, Università di Bologna, via Irnerio 46, Bologna, 40126 (Italy); Downes, Sean; Dutta, Bhaskar, E-mail: mcicoli@ictp.it, E-mail: sddownes@physics.tamu.edu, E-mail: dutta@physics.tamu.edu [Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX, 77843-4242 (United States)
2013-12-01
We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.
Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.
Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S
2016-09-26
In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication.
Some Statistics for Measuring Large-Scale Structure
Brandenberger, Robert H.; Kaplan, David M.; Ramsey, Stephen A.
1993-01-01
Good statistics for measuring large-scale structure in the Universe must be able to distinguish between different models of structure formation. In this paper, two- and three-dimensional "counts in cell" statistics and a new "discrete genus statistic" are applied to toy versions of several popular theories of structure formation: the random phase cold dark matter model, cosmic string models, and the global texture scenario. All three statistics appear quite promising in terms of differentiating betw...
Modeling and simulation of large scale stirred tank
Neuville, John R.
The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process through the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top and bottom regions of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion of internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the
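The Bingham-plastic flow behavior attributed to the slurry can be sketched numerically: shear stress is a yield stress plus a plastic-viscosity term, τ = τ_y + μ_p·γ̇, so the apparent viscosity falls with shear rate (the parameter values below are illustrative, not DWPF data).

```python
# Hedged sketch of the Bingham-plastic constitutive law for a yielded fluid.

def bingham_stress(gamma_dot, tau_y=2.5, mu_p=0.01):
    """Shear stress (Pa) at shear rate gamma_dot (1/s), for flowing material."""
    if gamma_dot == 0.0:
        return 0.0  # below yield: no flow, stress indeterminate up to tau_y
    return tau_y + mu_p * gamma_dot

def apparent_viscosity(gamma_dot, tau_y=2.5, mu_p=0.01):
    """Apparent viscosity tau / gamma_dot, which decreases with shear rate."""
    return bingham_stress(gamma_dot, tau_y, mu_p) / gamma_dot

print(bingham_stress(100.0))                                  # -> 3.5
print(apparent_viscosity(10.0) > apparent_viscosity(100.0))   # -> True
```

This shear-thinning apparent viscosity is why flow near the fast-moving impeller blades behaves very differently from the nearly stagnant regions of the tank that the CFD models must resolve.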
Large-eddy simulation with accurate implicit subgrid-scale diffusion
B. Koren (Barry); C. Beets
1996-01-01
A method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is
Dynamic model of open shell structures buried in poroelastic soils
Bordón, J. D. R.; Aznárez, J. J.; Maeso, O.
2017-08-01
This paper is concerned with a three-dimensional time harmonic model of open shell structures buried in poroelastic soils. It combines the dual boundary element method (DBEM) for treating the soil and shell finite elements for modelling the structure, leading to a simple and efficient representation of buried open shell structures. A new fully regularised hypersingular boundary integral equation (HBIE) has been developed to this aim, which is then used to build the pair of dual BIEs necessary to formulate the DBEM for Biot poroelasticity. The new regularised HBIE is validated against a problem with analytical solution. The model is used in a wave diffraction problem in order to show its effectiveness. It offers excellent agreement for length to thickness ratios greater than 10, and relatively coarse meshes. The model is also applied to the calculation of impedances of bucket foundations. It is found that all impedances except the torsional one depend considerably on hydraulic conductivity within the typical frequency range of interest of offshore wind turbines.
Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing
Qiang Liu; Yi Qin; Guodong Li
2018-01-01
Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...
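A one-dimensional toy version of the Godunov-type finite-volume update described above can be sketched with a Rusanov (local Lax-Friedrichs) flux on a flat bottom (the paper's scheme is two-dimensional and unstructured; this is only the flavour of the method):

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def flux(h, hu):
    """Physical flux of the 1-D shallow water equations."""
    u = hu / h
    return (hu, hu * u + 0.5 * G * h * h)

def rusanov(hL, huL, hR, huR):
    """Rusanov numerical flux at an interface between two cells."""
    fL, fR = flux(hL, huL), flux(hR, huR)
    s = max(abs(huL / hL) + math.sqrt(G * hL),
            abs(huR / hR) + math.sqrt(G * hR))   # max wave speed estimate
    return (0.5 * (fL[0] + fR[0]) - 0.5 * s * (hR - hL),
            0.5 * (fL[1] + fR[1]) - 0.5 * s * (huR - huL))

def step(h, hu, dx, dt):
    """One explicit finite-volume update with transmissive boundaries."""
    n = len(h)
    F = [rusanov(h[max(i - 1, 0)], hu[max(i - 1, 0)],
                 h[min(i, n - 1)], hu[min(i, n - 1)]) for i in range(n + 1)]
    hn = [h[i] - dt / dx * (F[i + 1][0] - F[i][0]) for i in range(n)]
    hun = [hu[i] - dt / dx * (F[i + 1][1] - F[i][1]) for i in range(n)]
    return hn, hun

# Sanity check: a lake at rest (uniform depth, zero velocity) stays at rest.
h0, hu0 = [2.0] * 10, [0.0] * 10
h1, hu1 = step(h0, hu0, dx=1.0, dt=0.01)
print(h1 == h0 and hu1 == hu0)  # -> True
```

In the GPU setting, the per-cell updates above are independent given the interface fluxes, which is exactly the data parallelism the paper exploits.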
Large scale sodium-water reaction tests for Monju steam generators
International Nuclear Information System (INIS)
Sato, M.; Hiroi, H.; Hori, M.
1976-01-01
To demonstrate the safe design of the steam generator system of the prototype fast reactor Monju against a postulated large-leak sodium-water reaction, a large-scale test facility, SWAT-3, was constructed. SWAT-3 is a 1/2.5 scale model of the Monju secondary loop on the basis of iso-velocity modeling. Two tests have been conducted in SWAT-3 since its construction. The test items using SWAT-3 are discussed, and a description of the facility and the test results are presented
A non-local shell model of hydrodynamic and magnetohydrodynamic turbulence
Energy Technology Data Exchange (ETDEWEB)
Plunian, F [Laboratoire de Geophysique Interne et Tectonophysique, CNRS, Universite Joseph Fourier, Maison des Geosciences, BP 53, 38041 Grenoble Cedex 9 (France); Stepanov, R [Institute of Continuous Media Mechanics, Korolyov 1, 614013 Perm (Russian Federation)
2007-08-15
We derive a new shell model of magnetohydrodynamic (MHD) turbulence in which the energy transfers are not necessarily local. Like the original MHD equations, the model conserves the total energy, magnetic helicity, cross-helicity and volume in phase space (Liouville's theorem) apart from the effects of external forcing, viscous dissipation and magnetic diffusion. The model of hydrodynamic (HD) turbulence is derived from the MHD model by setting the magnetic field to zero. In that case the conserved quantities are the kinetic energy and the kinetic helicity. In addition to a statistically stationary state with a Kolmogorov spectrum, the HD model exhibits multiscaling. The anomalous scaling exponents are found to depend on a free parameter α that measures the degree of non-locality of the model. In freely decaying turbulence, the infra-red spectrum also depends on α. Comparison with theory suggests using α = -5/2. In MHD turbulence, we investigate the fully developed turbulent dynamo for a wide range of magnetic Prandtl numbers in both kinematic and dynamic cases. Both local and non-local energy transfers are clearly identified.
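For orientation, the *local* GOY-type shell model that such work generalises can be sketched, together with a check that its nonlinear transfers conserve the total energy E = Σ|u_n|²/2 (the coefficients follow the standard GOY choice with λ = 2, an assumption here, not the paper's non-local model):

```python
import numpy as np

# du_n/dt = i k_n ( u_{n+1} u_{n+2} - (eps/lam) u_{n-1} u_{n+1}
#                   - ((1-eps)/lam^2) u_{n-1} u_{n-2} )^*
# with k_n = k_0 lam^n; the coefficient choice makes each triad's
# contribution to dE/dt cancel exactly.

def goy_nonlinear(u, k, lam=2.0, eps=0.5):
    n = len(u)
    up = np.concatenate([u, [0.0, 0.0]])   # padding for u_{n+1}, u_{n+2}
    um = np.concatenate([[0.0, 0.0], u])   # padding for u_{n-1}, u_{n-2}
    dudt = np.empty(n, dtype=complex)
    for i in range(n):
        term = (up[i + 1] * up[i + 2]
                - (eps / lam) * um[i + 1] * up[i + 1]
                - (1 - eps) / lam**2 * um[i + 1] * um[i])
        dudt[i] = 1j * k[i] * np.conj(term)
    return dudt

rng = np.random.default_rng(1)
n = 16
k = 2.0 ** np.arange(n)                    # geometric shell wavenumbers
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
dEdt = np.sum(np.real(np.conj(u) * goy_nonlinear(u, k)))
print(abs(dEdt) < 1e-8)  # -> True: nonlinear transfers conserve energy
```

The paper's model replaces these nearest-shell triads with interactions between arbitrarily distant shells while preserving the same class of quadratic invariants.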
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2017-08-01
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
Directory of Open Access Journals (Sweden)
B. Harris
2012-07-01
Full Text Available Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for the analysis of larger data sets and also provide a consistent tool for the creation and analysis of waterways over extensive areas. However, they are rarely developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km² to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km², and a detailed 13 km² area within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
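Waterway delineation from a DEM typically rests on the standard D8 flow-direction rule: each cell drains to the neighbour with the steepest downward slope (drop divided by grid distance). A minimal sketch on a toy DEM (not the study's data or GIS toolchain):

```python
# Eight neighbour offsets (row, col) for the D8 rule.
D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_direction(dem, r, c):
    """Return (dr, dc) of the steepest-descent neighbour, or None for a pit."""
    best, best_slope = None, 0.0
    for dr, dc in D8:
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(dem) and 0 <= cc < len(dem[0]):
            dist = (dr * dr + dc * dc) ** 0.5   # 1 or sqrt(2) cell widths
            slope = (dem[r][c] - dem[rr][cc]) / dist
            if slope > best_slope:
                best, best_slope = (dr, dc), slope
    return best

# A plane dipping to the east: every interior cell drains due east.
dem = [[3.0, 2.0, 1.0, 0.0] for _ in range(4)]
print(d8_direction(dem, 1, 1))  # -> (0, 1)
```

Stream networks are then traced by accumulating flow along these directions, which is why DEM cell size and vertical quality directly control the delineated waterways, as the study quantifies.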
Political consultation and large-scale research
International Nuclear Information System (INIS)
Bechmann, G.; Folkers, H.
1977-01-01
Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy, and the political sphere. In this position, large-scale research and policy consulting lack the institutional guarantees and rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under consideration of profitability, nor can it hope for full recognition by the basic-research-oriented scientific community. Policy consulting has neither the political system's competence to make decisions, nor can it, at least in the present situation, judge successfully by the critical standards of the established social sciences. This intermediary position of large-scale research and policy consulting supports, in three respects, the thesis that this is a new form of institutionalization of science: 1) external control, 2) the form of organization, 3) the theoretical conception of large-scale research and policy consulting. (orig.) [de
Large-scale Modeling of Nitrous Oxide Production: Issues of Representing Spatial Heterogeneity
Morris, C. K.; Knighton, J.
2017-12-01
Nitrous oxide is produced by the biological processes of nitrification and denitrification in terrestrial environments and contributes to the greenhouse effect that warms Earth's climate. Large-scale modeling can be used to determine how global rates of nitrous oxide production and consumption will shift under future climates. However, accurate modeling of nitrification and denitrification is made difficult by highly parameterized, nonlinear equations. Here we show that the representation of spatial heterogeneity in inputs, specifically soil moisture, causes inaccuracies in estimating the average nitrous oxide production in soils. We demonstrate that when soil moisture is averaged over a spatially heterogeneous surface, net nitrous oxide production is underpredicted. We apply this general result in a test of a widely used global land surface model, the Community Land Model v4.5 (CLM). The challenges presented by nonlinear controls on nitrous oxide are highlighted here to provide a wider context for the problem of extraordinary denitrification losses in CLM. We hope that these findings will inform future researchers on the possibilities for improving the model of the global nitrogen cycle.
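The under-prediction from averaging can be demonstrated in a few lines: for a nonlinear (here convex) response f of soil moisture, Jensen's inequality gives f(mean(θ)) < mean(f(θ)). The response curve below is hypothetical, not CLM4.5's actual nitrification scheme.

```python
def n2o_response(theta):
    """Hypothetical convex N2O production response to soil moisture theta."""
    return theta ** 2

moisture = [0.2, 0.4, 0.6, 0.8]            # heterogeneous moisture in one cell
mean_theta = sum(moisture) / len(moisture)

from_mean = n2o_response(mean_theta)       # drive the model with the mean
mean_of_f = sum(n2o_response(t) for t in moisture) / len(moisture)

# Averaging the input before applying the nonlinear response
# under-predicts the true average production.
print(from_mean < mean_of_f)  # -> True (0.25 vs 0.30)
```

For a concave response the bias flips sign, which is why the direction and size of the error depend on where the heterogeneous moisture values sit on the response curve.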
Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models
Energy Technology Data Exchange (ETDEWEB)
Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H
2005-12-01
This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set are then quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more than the chosen reference data. In aggregate, the simulations of land-surface latent and
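The validation metric described above can be sketched directly: a root-mean-square error between a simulated field and a reference data set, with the spread between alternative references serving as an estimate of observational uncertainty (the arrays below are toy numbers, not AMIP II output).

```python
import math

def rmse(simulated, reference):
    """Root-mean-square error aggregated over all samples."""
    assert len(simulated) == len(reference)
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(simulated, reference))
                     / len(simulated))

model_t2m = [288.1, 290.4, 285.2, 291.0]   # simulated surface air temps (K)
obs_t2m   = [288.0, 290.0, 286.0, 290.5]   # chosen reference data set (K)
alt_obs   = [288.2, 290.1, 285.8, 290.7]   # alternative validation data (K)

print(round(rmse(model_t2m, obs_t2m), 3))  # model-vs-reference error
# RMSE between the two references estimates observational uncertainty.
print(round(rmse(obs_t2m, alt_obs), 3))
```

A model error comparable to the reference-vs-reference RMSE cannot meaningfully be reduced further, which is the logic behind reporting both numbers.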
International Nuclear Information System (INIS)
Cook, W.A.
1978-10-01
Nuclear Material shipping containers have shells of revolution as a basic structural component. Analytically modeling the response of these containers to severe accident impact conditions requires a nonlinear shell-of-revolution model that accounts for both geometric and material nonlinearities. Present models are limited to large displacements, small rotations, and nonlinear materials. This report discusses a first approach to developing a finite element nonlinear shell-of-revolution model that accounts for these nonlinear geometric effects. The approach uses incremental loads and a linear shell model with equilibrium iterations. Sixteen linear models are developed, eight using the potential energy variational principle and eight using a mixed variational principle. Four of these are suitable for extension to nonlinear shell theory. A nonlinear shell theory is derived, and a computational technique used in its solution is presented.
Testing refined shell-model interactions in the sd shell: Coulomb excitation of Na26
Siebeck, B; Blazhev, A; Reiter, P; Altenkirch, R; Bauer, C; Butler, P A; De Witte, H; Elseviers, J; Gaffney, L P; Hess, H; Huyse, M; Kröll, T; Lutter, R; Pakarinen, J; Pietralla, N; Radeck, F; Scheck, M; Schneiders, D; Sotty, C; Van Duppen, P; Vermeulen, M; Voulot, D; Warr, N; Wenander, F
2015-01-01
Background: Shell-model calculations crucially depend on the residual interaction used to approximate the nucleon-nucleon interaction. Recent improvements to the empirical universal sd interaction (USD) describing nuclei within the sd shell yielded two new interactions, USDA and USDB, causing changes in the theoretical description of these nuclei. Purpose: Transition matrix elements between excited states provide an excellent probe to examine the underlying shell structure. These observables provide a stringent test for the newly derived interactions. The nucleus Na26 with 7 valence neutrons and 3 valence protons outside the doubly-magic 16O core is used as a test case. Method: A radioactive beam experiment with Na26 (T1/2 = 1.07 s) was performed at the REX-ISOLDE facility (CERN) using Coulomb excitation at safe energies below the Coulomb barrier. Scattered particles were detected with an annular Si detector in coincidence with γ rays observed by the segmented MINIBALL array. Coulomb excitation cross sections...
Puttonen, Ana; Harzhauser, Mathias; Puttonen, Eetu; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert
2018-02-01
Shell beds represent a useful source of information on the physical processes that caused the depositional conditions. We present an automated method to calculate the 3D orientations of a large number of elongate and platy objects (fossilized oyster shells) on a sedimentary bedding plane, developed to support the interpretation of possible depositional patterns, imbrications, or the impact of local faults. The study focuses on more than 1900 fossil oyster shells exposed in a densely packed Miocene shell bed. 3D data were acquired by terrestrial laser scanning on an area of 459 m2 with a resolution of 1 mm. Bivalve shells were manually defined as 3D point clouds of a digital surface model and stored in an ArcGIS database. An individual shell coordinate system (ISCS) was virtually embedded into each shell and its orientation was determined relative to the coordinate system of the entire, tectonically tilted shell bed. Orientation is described by the rotation angles roll, pitch, and yaw in a Cartesian coordinate system. This method allows an efficient measurement and analysis of the orientation of thousands of specimens, a major advantage over the traditional 2D approach, which measures only the azimuth (yaw) angle. The resulting data can variously be utilized for taphonomic analyses and the reconstruction of prevailing hydrodynamic regimes and depositional environments. For the first time, the influence of possible post-sedimentary vertical displacements can be quantified with high accuracy. Here, the effect of nearby fault lines, present in the reef, was tested on strongly tilted oyster shells, but it was found that the fault lines did not have a statistically significant effect on the large tilt angles. Aside from its high reproducibility, a further advantage of the method is its non-destructive nature, which makes it especially suitable for geoparks and protected sites such as the studied shell bed.
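Given a rotation matrix relating the individual shell coordinate system to the bed coordinate system, the three angles follow from standard formulas. A minimal sketch (assuming the common Z-Y-X Tait-Bryan convention; the paper's exact convention may differ):

```python
import numpy as np

def roll_pitch_yaw(R):
    """Tait-Bryan angles (roll about x, pitch about y, yaw about z, in
    degrees) of a rotation matrix R mapping the individual shell
    coordinate system into the bed coordinate system.
    Assumes R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([roll, pitch, yaw])
```

For an untilted shell the matrix is the identity and all three angles vanish; a shell rotated in the bedding plane changes only the yaw angle, which is the single quantity the traditional 2D approach records.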
Data-driven process decomposition and robust online distributed modelling for large-scale processes
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by the affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
Large-scale hydrogen production using nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Ryland, D.; Stolberg, L.; Kettner, A.; Gnanapragasam, N.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)
2014-07-01
For many years, Atomic Energy of Canada Limited (AECL) has been studying the feasibility of using nuclear reactors, such as the Supercritical Water-cooled Reactor, as an energy source for large scale hydrogen production processes such as High Temperature Steam Electrolysis and the Copper-Chlorine thermochemical cycle. Recent progress includes the augmentation of AECL's experimental capabilities by the construction of experimental systems to test high temperature steam electrolysis button cells at ambient pressure and temperatures up to 850°C and CuCl/HCl electrolysis cells at pressures up to 7 bar and temperatures up to 100°C. In parallel, detailed models of solid oxide electrolysis cells and the CuCl/HCl electrolysis cell are being refined and validated using experimental data. Process models are also under development to assess options for economic integration of these hydrogen production processes with nuclear reactors. Options for large-scale energy storage, including hydrogen storage, are also under study. (author)
Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing
Directory of Open Access Journals (Sweden)
Qiang Liu
2018-05-01
Computing speed is a significant issue for large-scale flood simulations intended for real-time response to disaster prevention and mitigation. Even today, most large-scale flood simulations are run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based high-performance computing method using OpenACC was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transportation between the GPU and the Central Processing Unit (CPU) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and thus holds promise for dynamic inundation risk identification and disaster assessment.
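The core of such a scheme is compact; below is a one-dimensional, serial sketch with the simple Rusanov (local Lax-Friedrichs) flux (illustrative only; the paper's model is two-dimensional, unstructured, and OpenACC-parallelized):

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def flux(h, hu):
    """Physical flux of the 1D shallow water equations for depth h and
    discharge hu."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov_step(h, hu, dx, dt):
    """Advance (h, hu) by one explicit finite-volume step; boundaries
    use zero-gradient ghost cells."""
    he = np.concatenate([[h[0]], h, [h[-1]]])
    hue = np.concatenate([[hu[0]], hu, [hu[-1]]])
    FL = flux(he[:-1], hue[:-1])
    FR = flux(he[1:], hue[1:])
    # Local maximum wave speed |u| + sqrt(g h) at each interface.
    a = np.maximum(np.abs(hue[:-1] / he[:-1]) + np.sqrt(g * he[:-1]),
                   np.abs(hue[1:] / he[1:]) + np.sqrt(g * he[1:]))
    F = 0.5 * (FL + FR) - 0.5 * a * np.array([he[1:] - he[:-1],
                                              hue[1:] - hue[:-1]])
    return (h - dt / dx * (F[0, 1:] - F[0, :-1]),
            hu - dt / dx * (F[1, 1:] - F[1, :-1]))

# Dam-break test: 2 m of water on the left, 1 m on the right.
h = np.where(np.arange(200) < 100, 2.0, 1.0)
hu = np.zeros(200)
for _ in range(50):
    h, hu = rusanov_step(h, hu, dx=1.0, dt=0.05)
```

In a GPU port, the interface-flux and cell-update loops are the natural targets for OpenACC parallel regions, since each interface and cell is updated independently.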
Large-scale weakly supervised object localization via latent category learning.
Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve
2015-04-01
Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose an online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Class (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
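The latent-category step can be sketched with a truncated SVD, the standard computational form of latent semantic analysis (variable names, data, and the discrimination score are illustrative assumptions, not the paper's exact criterion):

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(1)
# Synthetic bag-of-visual-words matrix: one row per image, one column
# per visual word.
counts = rng.poisson(1.0, size=(50, 300)).astype(float)

# Latent semantic analysis: each latent topic may capture an object,
# an object part, or a recurring background (e.g. sky).
svd = TruncatedSVD(n_components=8, random_state=0)
topic_weights = svd.fit_transform(counts)  # image -> latent-category weights

# Category selection sketch: score each latent category by how well its
# weight discriminates images carrying the class label (a simple
# correlation stands in for the paper's discrimination measure).
labels = rng.integers(0, 2, size=50)
disc = [abs(np.corrcoef(topic_weights[:, k], labels)[0, 1]) for k in range(8)]
best = int(np.argmax(disc))
```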
Obtaining high-resolution stage forecasts by coupling large-scale hydrologic models with sensor data
Fries, K. J.; Kerkez, B.
2017-12-01
We investigate how "big" quantities of distributed sensor data can be coupled with a large-scale hydrologic model, in particular the National Water Model (NWM), to obtain hyper-resolution forecasts. The recent launch of the NWM provides a great example of how growing computational capacity is enabling a new generation of massive hydrologic models. While the NWM spans an unprecedented spatial extent, there remain many questions about how to improve forecasts at the street level, the resolution at which many stakeholders make critical decisions. Further, the NWM runs on supercomputers, so water managers who have access to their own high-resolution measurements may not readily be able to assimilate them into the model. To that end, we ask the question: how can the advances of the large-scale NWM be coupled with new local observations to enable hyper-resolution hydrologic forecasts? A methodology is proposed whereby the flow forecasts of the NWM are directly mapped to high-resolution stream levels using Dynamical System Identification. We apply the methodology across a sensor network of 182 gages in Iowa. Approximately one third of these sites have been shown to perform well in high-resolution flood forecasting when coupled with the outputs of the NWM. The quality of these forecasts is characterized using Principal Component Analysis and Random Forests to identify where the NWM may benefit from new sources of local observations. We also discuss how this approach can help municipalities identify where they should place low-cost sensors to most benefit from flood forecasts of the NWM.
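A low-order ARX model fit by least squares gives the flavor of such a data-driven flow-to-stage mapping (an illustrative stand-in on synthetic data; the paper's Dynamical System Identification procedure is not specified here):

```python
import numpy as np

def fit_arx(stage, flow, na=2, nb=2):
    """Least-squares fit of stage[t] ~ sum_i a_i*stage[t-i]
    + sum_j b_j*flow[t-j], i.e. local stage as a lagged linear
    response to modeled flow."""
    n = max(na, nb)
    rows = []
    for t in range(n, len(stage)):
        rows.append(np.concatenate([stage[t - na:t][::-1],
                                    flow[t - nb:t][::-1]]))
    A = np.array(rows)
    y = stage[n:]
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta

# Synthetic check: a stage series built as a lagged linear response
# to a smooth flow series.
t = np.arange(300)
flow = 10 + 5 * np.sin(t / 20)
stage = 0.3 * np.roll(flow, 1) + 0.1 * np.roll(flow, 2)
theta = fit_arx(stage, flow)
```

Once fitted for a gage, the coefficients turn each new NWM flow forecast into a street-level stage forecast at that site without rerunning the large model.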
Chatterjee, Tanmoy; Peet, Yulia T.
2018-03-01
Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to make a substantial contribution to wind power. Varying dynamics in the intermediate scales (D-10D) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight into the eddies responsible for the power generation is provided by the scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms have been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for the design of wind farm layout and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
Burnout of pulverized biomass particles in large scale boiler - Single particle model approach
Energy Technology Data Exchange (ETDEWEB)
Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero [VTT Technical Research Centre of Finland, Box 1603, 40101 Jyvaeskylae (Finland); Soerensen, Lasse Holst [ReaTech/ReAddit, Frederiksborgsveij 399, Niels Bohr, DK-4000 Roskilde (Denmark); Clausen, Soennik [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Berg, Mogens [ENERGI E2 A/S, A.C. Meyers Vaenge 9, DK-2450 Copenhagen SV (Denmark)
2010-05-15
The burning of coal and biomass particles is studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large scale utility boiler originally planned for coal. A simplified single-particle approach, in which the particle combustion model is coupled with the one-dimensional equation of motion of the particle, is applied for the calculation of the burnout in the boiler. Due to its lower density and greater reactivity, biomass can reach complete burnout at much larger particle sizes than coal. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout. (author)
Large-scale Intelligent Transportation Systems simulation
Energy Technology Data Exchange (ETDEWEB)
Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.
1995-06-01
A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
Shell Models of Superfluid Turbulence
International Nuclear Information System (INIS)
Wacks, Daniel H; Barenghi, Carlo F
2011-01-01
Superfluid helium consists of two inter-penetrating fluids, a viscous normal fluid and an inviscid superfluid, coupled by a mutual friction. We develop a two-fluid shell model to study superfluid turbulence and investigate the energy spectra and the balance of fluxes between the two fluids in a steady state. At sufficiently low temperatures a 'bottle-neck' develops at high wavenumbers suggesting the need for a further dissipative effect, such as the Kelvin wave cascade.
International Nuclear Information System (INIS)
Ababou, R.
1991-08-01
This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a radioactive waste geologic repository. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies toward large-scale flow simulations are assessed, including direct high-resolution simulation, and coarse-scale simulation based on auxiliary hydrodynamic models such as single equivalent continuum and dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs
Long-Term Calculations with Large Air Pollution Models
DEFF Research Database (Denmark)
Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.
1999-01-01
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.
Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point
Energy Technology Data Exchange (ETDEWEB)
Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)
2016-12-15
We study the scaling properties of Higgs-Yukawa models. Using the technique of Finite-Size Scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme allows us to fit the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.
Large scale structure from the Higgs fields of the supersymmetric standard model
International Nuclear Information System (INIS)
Bastero-Gil, M.; Di Clemente, V.; King, S.F.
2003-01-01
We propose an alternative implementation of the curvaton mechanism for generating the curvature perturbations which does not rely on a late decaying scalar decoupled from inflation dynamics. In our mechanism the supersymmetric Higgs scalars are coupled to the inflaton in a hybrid inflation model, and this allows the conversion of the isocurvature perturbations of the Higgs fields to the observed curvature perturbations responsible for large scale structure to take place during reheating. We discuss an explicit model which realizes this mechanism in which the μ term in the Higgs superpotential is generated after inflation by the vacuum expectation value of a singlet field. The main prediction of the model is that the spectral index should deviate significantly from unity, |n-1| ∼ 0.1. We also expect relic isocurvature perturbations in neutralinos and baryons, but no significant departures from Gaussianity and no observable effects of gravity waves in the CMB spectrum
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increasing model resolution of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the
Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H
2014-07-01
In the present study, we performed large eddy simulation (LES) of axisymmetric, and 75% stenosed, eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost point immersed boundary method (IBM) (Mark and Vanwachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries is used for simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), the recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using the open-source Field Operation and Manipulation (OpenFOAM) solver ("OpenFOAM," http://www.openfoam.org/) and the results are in line with those obtained with WenoHemo.
Large-Scale Graph Processing Using Apache Giraph
Sakr, Sherif
2017-01-07
This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.
Large-Scale Graph Processing Using Apache Giraph
Sakr, Sherif; Orakzai, Faisal Moeen; Abdelaziz, Ibrahim; Khayyat, Zuhair
2017-01-01
This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.
Statistical mechanics of microscopically thin thermalized shells
Kosmrlj, Andrej
The recent explosion in the fabrication of microscopically thin free-standing structures made from graphene and other two-dimensional materials has led to a renewed interest in the mechanics of such structures in the presence of thermal fluctuations. Since the late 1980s it has been known that for flat solid sheets thermal fluctuations effectively increase the bending rigidity and reduce the bulk and shear moduli in a scale-dependent fashion. However, much is still unknown about the mechanics of thermalized flat sheets of complex geometries and about the mechanics of thermalized shells with non-zero background curvature. In this talk I will present recent developments in the mechanics of thermalized ribbons, spherical shells and cylindrical tubes. Long ribbons are found to behave like hybrids between flat sheets with renormalized elastic constants and semi-flexible polymers, and these results can be used to predict the mechanics of graphene kirigami structures. Contrary to the anticipated behavior for ribbons, the non-zero background curvature of shells leads to remarkable novel phenomena. In shells, thermal fluctuations effectively generate negative surface tension, which can significantly reduce the critical buckling pressure for spherical shells and the critical axial load for cylindrical tubes. For large shells this thermally generated load becomes large enough to spontaneously crush spherical shells and cylindrical tubes even in the absence of external loads. I will comment on the relevance for crushing of microscopic shells (viral capsids, bacteria, microcapsules) due to osmotic shocks and for crushing of nanotubes.
Decentralized Large-Scale Power Balancing
DEFF Research Database (Denmark)
Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad
2013-01-01
problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...
Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick
2017-04-01
Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large scale monitoring of water resources. Besides these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts increase significantly. A novel thematic science question to be investigated is whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational efforts, this model enables early warnings for large areas. Using as forcings the ERA-Interim public dataset and coupled with the CMEM radiative transfer model
Evaluating neighborhood structures for modeling intercity diffusion of large-scale dengue epidemics.
Wen, Tzai-Hung; Hsu, Ching-Shun; Hu, Ming-Che
2018-05-03
Dengue fever is a vector-borne infectious disease that is transmitted by contact between vector mosquitoes and susceptible hosts. The literature has addressed the issue on quantifying the effect of individual mobility on dengue transmission. However, there are methodological concerns in the spatial regression model configuration for examining the effect of intercity-scale human mobility on dengue diffusion. The purposes of the study are to investigate the influence of neighborhood structures on intercity epidemic progression from pre-epidemic to epidemic periods and to compare definitions of different neighborhood structures for interpreting the spread of dengue epidemics. We proposed a framework for assessing the effect of model configurations on dengue incidence in 2014 and 2015, which were the most severe outbreaks in 70 years in Taiwan. Compared with the conventional model configuration in spatial regression analysis, our proposed model used a radiation model, which reflects population flow between townships, as a spatial weight to capture the structure of human mobility. The results of our model demonstrate better model fitting performance, indicating that the structure of human mobility has better explanatory power in dengue diffusion than the geometric structure of administration boundaries and geographic distance between centroids of cities. We also identified spatial-temporal hierarchy of dengue diffusion: dengue incidence would be influenced by its immediate neighboring townships during pre-epidemic and epidemic periods, and also with more distant neighbors (based on mobility) in pre-epidemic periods. Our findings suggest that the structure of population mobility could more reasonably capture urban-to-urban interactions, which implies that the hub cities could be a "bridge" for large-scale transmission and make townships that immediately connect to hub cities more vulnerable to dengue epidemics.
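The radiation model gives the expected flow between two townships from their populations and the population living within the intervening radius; a sketch of its use as a row-standardized spatial weight matrix (variable names and data are illustrative):

```python
import numpy as np

def radiation_weights(pop, dist):
    """Row-standardized spatial weight matrix based on the radiation
    model: W[i, j] is proportional to
    m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)),
    where s_ij is the population within distance d_ij of i, excluding
    the source i and destination j."""
    k = len(pop)
    W = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i == j:
                continue
            inside = dist[i] < dist[i, j]      # townships closer to i than j is
            s = pop[inside].sum() - pop[i]     # exclude the source itself
            m, n = pop[i], pop[j]
            W[i, j] = m * n / ((m + s) * (m + n + s))
    W /= W.sum(axis=1, keepdims=True)          # row-standardize, as is usual
    return W                                   # for spatial regression weights

pop = np.array([100.0, 200.0, 50.0])           # township populations
dist = np.array([[0.0, 1.0, 2.0],              # pairwise distances
                 [1.0, 0.0, 1.5],
                 [2.0, 1.5, 0.0]])
W = radiation_weights(pop, dist)
```

Such a mobility-based W replaces the conventional contiguity or centroid-distance weights in the spatial regression, which is the configuration change the study evaluates.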
Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram
2017-03-13
A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows, as they demonstrate that such studies can be carried out at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
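The amplitude-modulation diagnostic referred to in (i) and (ii) is commonly computed by correlating the large-scale velocity with the low-passed Hilbert envelope of the small-scale signal. A single-point sketch of that diagnostic (filter choices and the synthetic test signal are illustrative assumptions, not the paper's rake data):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def am_coefficient(u, fs, fc):
    """Amplitude-modulation coefficient: correlation between the
    large-scale velocity and the low-passed envelope of the small
    scales (cutoff fc and 4th-order filter are illustrative choices)."""
    b, a = butter(4, fc / (fs / 2.0), btype="low")
    uL = filtfilt(b, a, u - u.mean())          # large-scale component
    uS = (u - u.mean()) - uL                   # small-scale residual
    env = np.abs(hilbert(uS))                  # small-scale envelope
    envL = filtfilt(b, a, env - env.mean())    # large-scale part of envelope
    return np.corrcoef(uL, envL)[0, 1]
```

On a synthetic signal whose high-frequency carrier is modulated by the low-frequency component, the coefficient approaches one; applied to experimental data it would be evaluated at each wall-normal position.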
Revisiting the EC/CMB model for extragalactic large scale jets
Lucchini, M.; Tavecchio, F.; Ghisellini, G.
2017-04-01
One of the most outstanding results of the Chandra X-ray Observatory was the discovery that AGN jets are bright X-ray emitters on very large scales, up to hundreds of kpc. Of these, the powerful and beamed jets of flat-spectrum radio quasars are particularly interesting, as the X-ray emission cannot be explained by an extrapolation of the lower frequency synchrotron spectrum. Instead, the most common model invokes inverse Compton scattering of photons of the cosmic microwave background (EC/CMB) as the mechanism responsible for the high-energy emission. The EC/CMB model has recently come under criticism, particularly because it should predict a significant steady flux in the MeV-GeV band which has not been detected by the Fermi/LAT telescope for two of the best studied jets (PKS 0637-752 and 3C273). In this work, we revisit some aspects of the EC/CMB model and show that electron cooling plays an important part in shaping the spectrum. This can solve the overproduction of γ-rays by suppressing the high-energy end of the emitting particle population. Furthermore, we show that cooling in the EC/CMB model predicts a new class of extended jets that are bright in X-rays but silent in the radio and optical bands. These jets are more likely to lie at intermediate redshifts and would have been missed in all previous X-ray surveys due to selection effects.
He, Xinhua
2014-01-01
This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in a large-scale disaster-affected area. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367
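The abstract does not give the queuing location-allocation formulation itself; as a hedged stand-in, here is a minimal genetic algorithm for a p-median-style siting objective (each demand point served by its nearest opened depot), with all problem data and GA settings invented for illustration:

```python
import random

def ga_p_median(dist, demand, k, pop_size=40, gens=120, seed=1):
    """Tiny genetic algorithm choosing k depot sites to minimize total
    demand-weighted distance; a simplified proxy for the paper's
    minimal-response-time location-allocation model."""
    rng = random.Random(seed)
    n = len(dist)

    def cost(sites):
        # each demand point is served by its nearest selected depot
        return sum(w * min(dist[i][s] for s in sites)
                   for i, w in enumerate(demand))

    popn = [tuple(sorted(rng.sample(range(n), k))) for _ in range(pop_size)]
    for _ in range(gens):
        popn.sort(key=cost)
        keep = popn[:pop_size // 2]            # elitist selection
        children = []
        while len(keep) + len(children) < pop_size:
            a, b = rng.sample(keep, 2)
            child = set(rng.sample(sorted(set(a) | set(b)), k))  # crossover
            if rng.random() < 0.3:             # mutation: swap one site
                child.discard(rng.choice(sorted(child)))
                child.add(rng.randrange(n))
            while len(child) < k:              # repair duplicates
                child.add(rng.randrange(n))
            children.append(tuple(sorted(child)))
        popn = keep + children
    best = min(popn, key=cost)
    return best, cost(best)
```

A real implementation would replace the distance objective with the queuing response-time expression and add routing constraints.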
Idealised modelling of storm surges in large-scale coastal basins
Chen, Wenlong
2015-01-01
Coastal areas around the world are frequently attacked by various types of storms, threatening human life and property. This study aims to understand storm surge processes in large-scale coastal basins, particularly focusing on the influences of geometry, topography and storm characteristics on the
International Nuclear Information System (INIS)
Caty, O.; Maire, E.; Youssef, S.; Bouchet, R.
2008-01-01
Closed-cell cellular materials exhibit several interesting properties. These properties are, however, very difficult to simulate and understand from knowledge of the cellular microstructure. This difficulty is mostly due to the highly complex organization of the cells and to their very fine walls. X-ray tomography can produce three-dimensional (3-D) images of the structure, enabling one to visualize locally the cell-wall damage that leads to collapse of the structure. These data could be used to mesh the structure with continuum elements for finite element (FE) calculations. But when the density is very low, the walls are fine and meshes based on continuum elements are not suitable to represent the structure accurately while preserving the representativeness of the model in terms of cell size. This paper presents a shell FE model obtained from tomographic 3-D images that allows larger volumes of low-density closed-cell cellular materials to be calculated. The model is enriched by direct thickness measurement on the tomographic images; the measured values are ascribed to the shell elements. To validate and use the model, a structure composed of stainless steel hollow spheres is first compressed and scanned to observe local deformations. The tomographic data are also meshed with shells for an FE calculation. The convergence of the model is checked and its performance is compared with a continuum model. The global behavior is compared with the measurements from the compression test. At the local scale, the model allows the local stress and strain fields to be calculated. The calculated deformed shape is compared with the deformed tomographic images.
Symmetry-dictated truncation: Solutions of the spherical shell model for heavy nuclei
International Nuclear Information System (INIS)
Guidry, M.W.
1992-01-01
Principles of dynamical symmetry are used to simplify the spherical shell model. The resulting symmetry-dictated truncation leads to dynamical symmetry solutions that are often in quantitative agreement with a variety of observables. Numerical calculations, including terms that break the dynamical symmetries, are shown that correspond to shell model calculations for heavy deformed nuclei. The effective residual interaction is simple, well-behaved, and can be determined from basic observables. With this approach, we intend to apply the shell model in a systematic fashion to all nuclei. The implications for nuclear structure far from stability and for nuclear masses and other quantities of interest in astrophysics are discussed.
Stabilization Algorithms for Large-Scale Problems
DEFF Research Database (Denmark)
Jensen, Toke Koldborg
2006-01-01
The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...
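For context on the regularized solutions the thesis studies, a minimal Tikhonov example on a small ill-conditioned system (the thesis itself targets large-scale problems with iterative Krylov methods such as CGLS; this dense sketch is only illustrative):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the equivalent
    augmented least-squares system; a dense stand-in for the iterative
    Krylov solvers used at large scale."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])      # stack A on lam * I
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x
```

On a noisy ill-conditioned problem (e.g. a Hilbert matrix), the regularized solution is far closer to the true solution than the naive least-squares solution, whose error is dominated by amplified noise.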
Large-scale synthesis of onion-like carbon nanoparticles by carbonization of phenolic resin
International Nuclear Information System (INIS)
Zhao Mu; Song Huaihe; Chen Xiaohong; Lian Wentao
2007-01-01
Onion-like carbon nanoparticles have been synthesized on a large scale by carbonization of phenolic-formaldehyde resin at 1000 °C with the aid of ferric nitrate (FN). The effects of FN loading content on the yield, morphology and structure of carbonized products were investigated using transmission electron microscopy (TEM), high-resolution TEM and X-ray diffraction. It was found that the onion-like carbon nanoparticles, which had a narrow size distribution ranging from 30 to 50 nm, were composed mainly of quasi-spherically concentric shells of well-aligned graphene layers with interlayer spacing of 0.336 nm. Based on the results of the investigation, the formation mechanism of onion-like carbon nanoparticles was also discussed.
Wanders, N.; Bierkens, M. F. P.; de Jong, S. M.; de Roo, A.; Karssenberg, D.
2014-08-01
Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system, in particular the unsaturated zone, remains uncalibrated. Soil moisture observations from satellites have the potential to fill this gap. Here we evaluate the added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: (1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? (2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to calibration based only on discharge observations, such that this leads to improved simulations of soil moisture content and discharge? A dual state and parameter Ensemble Kalman Filter is used to calibrate the hydrological model LISFLOOD for the Upper Danube. Calibration is done using discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS, and ASCAT. Calibration with discharge data improves the estimation of groundwater and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate identification of parameters related to land-surface processes. For the Upper Danube upstream area up to 40,000 km2, calibration on both discharge and soil moisture results in a reduction by 10-30% in the RMSE for discharge simulations, compared to calibration on discharge alone. The conclusion is that remotely sensed soil moisture holds potential for calibration of hydrological models, leading to a better simulation of soil moisture content throughout the catchment and a better simulation of discharge in upstream areas. This article was corrected on 15 SEP 2014. See the end of the full text for details.
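The dual state-parameter Ensemble Kalman Filter used for this calibration updates an augmented state-and-parameter vector with each new observation. A minimal analysis-step sketch (stochastic, perturbed-observation variant; dimensions and data are illustrative, not LISFLOOD's):

```python
import numpy as np

def enkf_update(ens, obs, obs_idx, obs_var, rng):
    """One stochastic EnKF analysis step on an augmented ensemble.

    ens: (n_ens, n_aug) array with model states and parameters stacked,
    as in dual state-parameter estimation; obs_idx: indices of observed
    components; obs_var: observation-error variance (all illustrative).
    """
    n_ens, m = ens.shape[0], len(obs_idx)
    Hx = ens[:, obs_idx]                           # predicted observations
    A = ens - ens.mean(axis=0)                     # ensemble anomalies
    HA = Hx - Hx.mean(axis=0)
    P_xy = A.T @ HA / (n_ens - 1)                  # state-obs cross-covariance
    P_yy = HA.T @ HA / (n_ens - 1) + obs_var * np.eye(m)
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    pert = rng.normal(0.0, np.sqrt(obs_var), (n_ens, m))
    return ens + (obs + pert - Hx) @ K.T           # perturbed-observation update
```

Because parameters sit in the same augmented vector as the states, a soil-moisture observation updates land-surface parameters through the sampled cross-covariance.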
Automating large-scale reactor systems
International Nuclear Information System (INIS)
Kisner, R.A.
1985-01-01
This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig
Large-scale linear programs in planning and prediction.
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
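As a toy instance of the planning LPs described (deterministic, before any chance or robust constraints are added; all data invented for illustration):

```python
from scipy.optimize import linprog

# Toy traffic-assignment-style LP: route flows x minimizing total travel
# cost subject to meeting demand on two origin-destination pairs.
c = [4.0, 6.0, 3.0]                 # per-unit route costs
A_eq = [[1, 1, 0],                  # routes 1 and 2 serve OD pair A
        [0, 0, 1]]                  # route 3 serves OD pair B
b_eq = [10.0, 5.0]                  # demands
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 8)] * 3)
```

A chance-constrained or robust variant would replace `b_eq` with uncertain demands, which is where the modeling choices the report discusses come in.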
Shell model test of the Porter-Thomas distribution
International Nuclear Information System (INIS)
Grimes, S.M.; Bloom, S.D.
1981-01-01
Eigenvectors have been calculated for the A=18, 19, 20, 21, and 26 nuclei in an sd shell basis. The decomposition of these states into their shell model components shows, in agreement with other recent work, that this distribution is not a single Gaussian. We find that the largest amplitudes are distributed approximately in a Gaussian fashion. Thus, many experimental measurements should be consistent with the Porter-Thomas predictions. We argue that the non-Gaussian form of the complete distribution can be simply related to the structure of the Hamiltonian
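The Porter-Thomas prediction at issue follows from Gaussian-distributed amplitudes: normalized widths y = c²/⟨c²⟩ are then χ²-distributed with one degree of freedom. A quick numerical check of that baseline (synthetic Gaussian amplitudes, not the actual shell-model eigenvectors):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(42)
c = rng.normal(size=200_000)          # synthetic Gaussian amplitudes
y = c**2 / np.mean(c**2)              # normalized widths

def pt_survival(t):
    """P(y > t) for the Porter-Thomas (chi-squared, 1 dof) distribution."""
    return erfc(sqrt(t / 2.0))

empirical = {t: np.mean(y > t) for t in (0.5, 1.0, 2.0)}
```

The abstract's point is that for realistic shell-model eigenvectors the amplitude distribution is only approximately a single Gaussian, so measured widths can still look Porter-Thomas even when the full distribution is not.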
International Nuclear Information System (INIS)
Piecuch, Piotr; Wloch, Marta; Gour, Jeffrey R.; Dean, David J.; Papenbrock, Thomas; Hjorth-Jensen, Morten
2005-01-01
We review basic elements of the single-reference coupled-cluster theory and discuss large scale ab initio calculations of ground and excited states of 15O, 16O, and 17O using coupled-cluster methods and algorithms developed in quantum chemistry. By using realistic two-body interactions and the renormalized form of the Hamiltonian obtained with a no-core G-matrix approach, we obtain the converged results for 16O and promising preliminary results for 15O and 17O at the level of two-body interactions. The calculated properties other than energies include matter density, charge radius, and charge form factor. The relatively low costs of coupled-cluster calculations, which are characterized by the low-order polynomial scaling with the system size, enable us to probe large model spaces with up to 7 or 8 major oscillator shells, for which non-truncated shell-model calculations for nuclei with A = 15-17 active particles are presently not possible. We argue that the use of coupled-cluster methods and computer algorithms developed by quantum chemists to calculate properties of nuclei is an important step toward the development of accurate and affordable many-body theories that cross the boundaries of various physical sciences.
Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model
Directory of Open Access Journals (Sweden)
Xin Wang
2012-01-01
This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling of a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. The large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.
Test of large-scale specimens and models as applied to NPP equipment materials
International Nuclear Information System (INIS)
Timofeev, B.T.; Karzov, G.P.
1993-01-01
The paper presents test results on low-cycle fatigue, crack growth rate and fracture toughness of large-scale specimens and structures manufactured from steels widely applied in the power engineering industry and used for the production of NPP equipment with VVER-440 and VVER-1000 reactors. The obtained results are compared with available test results for standard specimens and with the calculation relations accepted in the "Calculation Norms on Strength." At the fatigue crack initiation stage, experiments were performed on large-scale specimens of various geometries and configurations, which made it possible to determine the fracture initiation resistance of 15X2MFA steel under elastic-plastic deformation of a large material volume in homogeneous and inhomogeneous stress states. Besides the above-mentioned specimen tests under low-cycle loading, tests of models with nozzles were performed, and good agreement on the fatigue crack initiation criterion was obtained both with calculated data and with standard low-cycle fatigue tests. It was noted that on the Paris part of the fatigue fracture diagram an increase in specimen thickness does not influence fatigue crack growth resistance in tests in air at both 20 and 350 degrees C. The comparability of the results obtained on specimens and models was also assessed for this stage of fracture. At the stage of unstable crack growth under static loading, experiments were conducted on specimens of various thicknesses of 15X2MFA and 15X2NMFA steels and their welded joints, produced by submerged arc welding, in the as-produced state (the beginning of service) and after embrittling heat treatment simulating neutron fluence attack (the end of service). The obtained results demonstrate that brittle fracture of structural elements can be reliably predicted using fracture toughness results from relatively small standard specimens. 35 refs., 23 figs
Accelerating large-scale phase-field simulations with GPU
Directory of Open Access Journals (Sweden)
Xiaoming Shi
2017-10-01
A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using a specific algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interaction were solved with the algorithm running on the GPU to test the performance of the package. Comparing the calculation results between the single-CPU solver and the GPU solver, the GPU version was found to be about 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
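The semi-implicit Fourier scheme at the heart of such packages treats the stiff gradient term implicitly in k-space and the nonlinear term explicitly. A CPU NumPy sketch for the Allen-Cahn case (grid size, time step and mobility are illustrative, and the long-range interactions of the full package are omitted):

```python
import numpy as np

def allen_cahn_step(phi, dt, kappa, k2):
    """One semi-implicit Fourier step of the Allen-Cahn equation
    d(phi)/dt = -(phi**3 - phi) + kappa * laplacian(phi):
    nonlinear term explicit, linear stiff term implicit in k-space."""
    nonlin = phi**3 - phi
    phi_hat = np.fft.fft2(phi) - dt * np.fft.fft2(nonlin)
    phi_hat /= 1.0 + dt * kappa * k2
    return np.real(np.fft.ifft2(phi_hat))

n, L = 64, 64.0
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
k2 = k[:, None]**2 + k[None, :]**2            # squared wavenumbers
rng = np.random.default_rng(0)
phi = 0.01 * rng.standard_normal((n, n))      # small random initial field
for _ in range(200):
    phi = allen_cahn_step(phi, dt=0.1, kappa=1.0, k2=k2)
```

Starting from small noise, the field phase-separates into domains with phi near ±1; a GPU version would replace the FFTs with their CUDA equivalents.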
Havens, Karl E; Harwell, Matthew C; Brady, Mark A; Sharfstein, Bruce; East, Therese L; Rodusky, Andrew J; Anson, Daniel; Maki, Ryan P
2002-04-09
A spatially intensive sampling program was developed for mapping the submerged aquatic vegetation (SAV) over an area of approximately 20,000 ha in a large, shallow lake in Florida, U.S. The sampling program integrates Geographic Information System (GIS) technology with traditional field sampling of SAV and has the capability of producing robust vegetation maps under a wide range of conditions, including high turbidity, variable depth (0 to 2 m), and variable sediment types. Based on sampling carried out in August-September 2000, we measured 1,050 to 4,300 ha of vascular SAV species and approximately 14,000 ha of the macroalga Chara spp. The results were similar to those reported in the early 1990s, when the last large-scale SAV sampling occurred. Occurrence of Chara was strongly associated with peat sediments, and maximal depths of occurrence varied between sediment types (mud, sand, rock, and peat). A simple model of Chara occurrence, based only on water depth, had an accuracy of 55%. It predicted occurrence of Chara over large areas where the plant actually was not found. A model based on sediment type and depth had an accuracy of 75% and produced a spatial map very similar to that based on observations. While this approach needs to be validated with independent data in order to test its general utility, we believe it may have application elsewhere. The simple modeling approach could serve as a coarse-scale tool for evaluating effects of water level management on Chara populations.
On two-dimensionalization of three-dimensional turbulence in shell models
DEFF Research Database (Denmark)
Chakraborty, Sagar; Jensen, Mogens Høgh; Sarkar, A.
2010-01-01
Applying a modified version of the Gledzer-Ohkitani-Yamada (GOY) shell model, the signatures of so-called two-dimensionalization effect of three-dimensional incompressible, homogeneous, isotropic fully developed unforced turbulence have been studied and reproduced. Within the framework of shell m......-similar PDFs for longitudinal velocity differences are also presented for the rotating 3D turbulence case....
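The (unmodified) GOY model evolves complex shell velocities u_n on wavenumbers k_n = k_0 λⁿ; its nonlinear term conserves energy for any ε, which is easy to verify numerically. A sketch with the standard 3D parameters ε = 1/2, λ = 2 (the paper's modified version differs in details not given in the abstract):

```python
import numpy as np

def goy_nonlinear(u, k, eps=0.5, lam=2.0):
    """Nonlinear term of the GOY shell model,
    du_n/dt = i k_n conj(u_{n+1} u_{n+2} - (eps/lam) u_{n-1} u_{n+1}
                          - ((1-eps)/lam**2) u_{n-1} u_{n-2}),
    with zero velocities outside the retained shells."""
    N = len(u)

    def g(m):                       # shell velocity with zero boundaries
        return u[m] if 0 <= m < N else 0.0

    dudt = np.zeros(N, dtype=complex)
    for n in range(N):
        bracket = (g(n + 1) * g(n + 2)
                   - (eps / lam) * g(n - 1) * g(n + 1)
                   - ((1.0 - eps) / lam**2) * g(n - 1) * g(n - 2))
        dudt[n] = 1j * k[n] * np.conj(bracket)
    return dudt
```

Because k_n is exactly geometric, the triad contributions cancel shell by shell and the nonlinear term leaves the total energy Σ|u_n|² unchanged; viscosity and forcing terms would be added on top of this.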
Topology of a large-scale structure as a test of modified gravity
International Nuclear Information System (INIS)
Wang Xin; Chen Xuelei; Park, Changbom
2012-01-01
The genus of the isodensity contours is a robust measure of the topology of a large-scale structure, and it is relatively insensitive to nonlinear gravitational evolution, galaxy bias, and redshift-space distortion. We show that the growth of density fluctuations is scale dependent even in the linear regime in some modified gravity theories, which opens a new possibility of testing the theories observationally. We propose to use the genus of the isodensity contours, an intrinsic measure of the topology of the large-scale structure, as a statistic to be used in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the genus per comoving volume is almost conserved as structures grow homologously, so we expect that the genus-smoothing-scale relation is basically time independent. However, in some modified gravity models where structures grow with different rates on different scales, the genus-smoothing-scale relation should change over time. This can be used to test the gravity models with large-scale structure observations. We study the cases of the f(R) theory, DGP braneworld theory as well as the parameterized post-Friedmann models. We also forecast how the modified gravity models can be constrained with optical/IR or redshifted 21 cm radio surveys in the near future.
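The baseline against which such scale-dependent growth would be measured is the genus curve of a Gaussian random field. For a field smoothed so that its power spectrum has second moment ⟨k²⟩, the genus per unit volume at threshold ν (in units of the standard deviation) is the standard Gaussian-field result (quoted here from the literature, not derived in the abstract):

```latex
g(\nu) \;=\; \frac{1}{(2\pi)^2}\left(\frac{\langle k^2\rangle}{3}\right)^{3/2}
\left(1-\nu^2\right)\, e^{-\nu^2/2}
```

Since the amplitude factors out when structures grow homologously, the shape of g(ν), and hence the genus-smoothing-scale relation, is preserved under scale-independent linear growth; scale-dependent growth in modified gravity distorts it, which is the property the proposed test exploits.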
Solving the nuclear shell model with an algebraic method
International Nuclear Information System (INIS)
Feng, D.H.; Pan, X.W.; Guidry, M.
1997-01-01
We illustrate algebraic methods in the nuclear shell model through a concrete example, the fermion dynamical symmetry model (FDSM). We use this model to introduce important concepts such as dynamical symmetry, symmetry breaking, effective symmetry, and diagonalization within a higher-symmetry basis. (orig.)
Design and analysis of reactor containment of steel-concrete composite laminated shell
International Nuclear Information System (INIS)
Ichikawa, K.
1977-01-01
Reinforced and prestressed concrete containments for reactors have been developed in order to avoid the difficulties of welding of steel containments encountered as their capacities have become large: growing thickness of steel shells gave rise to the requirement of stress relief at the construction sites. However, these concrete vessels also seem to face another difficulty: the lack of shearing resistance capacity. In order to improve the shearing resistance capacity of the containment vessel, while avoiding the difficulty of welding, a new scheme of containment consisting of steel-concrete laminated shell is being developed. In the main part of a cylindrical vessel, the shell consists of two layers of thin steel plates located at the inner and outer surfaces, and a layer of concrete core into which both the steel plates are anchored. In order to validate the feasibility and safety of this new design, the results of analysis on the basis of up-to-date design loads are presented. The results of model tests in 1:30 scale are also reported. (Auth.)
Acoustic modeling of shell-encapsulated gas bubbles
P.J.A. Frinking (Peter); N. de Jong (Nico)
1998-01-01
Existing theoretical models do not adequately describe the scatter and attenuation properties of the ultrasound contrast agents Quantison(TM) and Myomap(TM). An adapted version of the Rayleigh-Plesset equation, in which the shell is described by a viscoelastic solid, is proposed and
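For reference, the unencapsulated Rayleigh-Plesset dynamics that the paper's shell model extends can be integrated directly; the viscoelastic-solid shell terms of the adapted model are omitted here, and all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Bubble and liquid parameters (illustrative values, not the paper's)
R0, p0, rho = 2e-6, 101.3e3, 998.0           # m, Pa, kg/m^3
sigma, mu, gamma = 0.072, 1e-3, 1.4          # N/m, Pa*s, polytropic exponent
pg0 = p0 + 2 * sigma / R0                    # initial gas pressure

def rp_rhs(t, y, pa=20e3, f=2e6):
    """Plain Rayleigh-Plesset right-hand side; the adapted shell model
    would add viscoelastic shell stresses to this pressure balance."""
    R, Rdot = y
    p_gas = pg0 * (R0 / R) ** (3 * gamma)
    p_drive = pa * np.sin(2 * np.pi * f * t)
    acc = (p_gas - p0 - p_drive - 2 * sigma / R - 4 * mu * Rdot / R) / (rho * R) \
          - 1.5 * Rdot**2 / R
    return [Rdot, acc]

sol = solve_ivp(rp_rhs, (0.0, 2e-6), [R0, 0.0],
                max_step=1e-8, rtol=1e-8, atol=1e-12)
```

With a 20 kPa, 2 MHz drive the 2 µm bubble oscillates about its rest radius; adding shell elasticity and friction stiffens and damps this response, which is how the adapted model changes the predicted scatter and attenuation.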
Zone modelling of the thermal performances of a large-scale bloom reheating furnace
International Nuclear Information System (INIS)
Tan, Chee-Keong; Jenkins, Joana; Ward, John; Broughton, Jonathan; Heeley, Andy
2013-01-01
This paper describes the development and comparison of two- (2D) and three-dimensional (3D) mathematical models, based on the zone method of radiation analysis, to simulate the thermal performance of a large bloom reheating furnace. The modelling approach adopted in the current paper differs from previous work since it takes into account the net radiation interchanges between the top and bottom firing sections of the furnace and also allows for enthalpy exchange due to the flows of combustion products between these sections. The models were initially validated at two different furnace throughput rates using experimental data and plant model data supplied by Tata Steel. The results to date demonstrate that the model predictions are in good agreement with measured heating profiles of the blooms encountered in the actual furnace. No significant differences were found between the predictions of the 2D and 3D models. Following the validation, the 2D model was then used to assess the furnace response to changing throughput rate. It was found that the furnace response to a change in throughput rate influences the settling time of the furnace to the next steady-state operation. Overall, the current work demonstrates the feasibility and practicality of zone modelling and its potential for incorporation into a model-based furnace control system. - Highlights: ► 2D and 3D zone models of a large-scale bloom reheating furnace. ► The models were validated with experimental and plant model data. ► The transient furnace response to changing throughput rates is examined. ► No significant differences were found between the predictions of the 2D and 3D models.
Energy Technology Data Exchange (ETDEWEB)
Veljovic, Katarina; Rajkovic, Borivoj [Belgrade Univ. (RS). Inst. of Meteorology; Fennessy, Michael J.; Altshuler, Eric L. [Center for Ocean-Land-Atmosphere Studies, Calverton, MD (United States); Mesinger, Fedor [Maryland Univ., College Park (United States). Earth System Science Interdisciplinary Center; Serbian Academy of Science and Arts, Belgrade (RS)
2010-06-15
A considerable number of authors presented experiments in which degradation of large scale circulation occurred in regional climate integrations when large-scale nudging was not used (e.g., von Storch et al., 2000; Biner et al., 2000; Rockel et al., 2008; Sanchez-Gomez et al., 2008; Alexandru et al., 2009; among others). We here show an earlier 9-member ensemble result of the June-August precipitation difference over the contiguous United States between the "flood year" of 1993 and the "drought year" of 1988, in which the Eta model nested in the COLA AGCM gave a rather accurate depiction of the analyzed difference, even though the driver AGCM failed in doing so to the extent of having a minimum in the area where the maximum ought to be. It is suggested that this could hardly have been possible without an RCM's improvement in the large scales of the driver AGCM. We further revisit the issue by comparing the large scale skill of the Eta RCM against that of a global ECMWF 32-day ensemble forecast used as its driver. Another issue we are looking into is that of the lateral boundary condition (LBC) scheme. The question we ask is whether the almost universally used but somewhat costly relaxation scheme is necessary for desirable RCM performance. We address this by running the Eta in two versions differing in the lateral boundary scheme used. One of these is the traditional relaxation scheme and the other is the Eta model scheme in which information is used at the outermost boundary only and not all variables are prescribed at the outflow boundary. The skills of these two sets of RCM forecasts are compared against each other and also against that of their driver. A novelty in our experiments is the verification used. In order to test the large scale skill we are looking at the forecast position accuracy of the strongest winds at the jet stream level, which we have taken as 250 hPa. We do this by calculating bias adjusted
Steady state model for the thermal regimes of shells of airships and hot air balloons
Luchev, Oleg A.
1992-10-01
A steady-state model of the temperature regime of airship and hot-air balloon shells is developed. The model includes three governing equations: an equation for the temperature field of the shell, an integral equation for the radiative fluxes on the internal surface of the shell, and an integral equation for the natural convective heat exchange between the shell and the internal gas. The model considers the following radiative fluxes on the shell's external surface: direct and earth-reflected solar radiation, diffuse solar radiation, and the infrared radiation of the earth's surface and of the atmosphere. For the calculation of the external infrared radiation, a plane-layer model of the atmosphere is used. Convective heat transfer on the external surface of the shell is considered for the cases of forced and natural convection. To solve this set of equations, a numerical iterative procedure is developed. The model and the numerical procedure are used for a simulation study of the temperature fields of an airship shell under forced and natural convective heat transfer.
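A drastically reduced version of the shell energy balance, for a single element in steady state, equates absorbed radiation to re-emission plus convection; solving it by bisection illustrates the kind of iteration the model's numerical procedure performs (view factors, internal radiative exchange and the gas coupling are omitted; all inputs are illustrative):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def shell_element_temp(q_abs, t_air, h, eps, lo=150.0, hi=500.0):
    """Steady-state balance for one shell element: absorbed radiation
    q_abs (W/m^2) = emission eps*SIGMA*T^4 + convection h*(T - t_air).
    The balance is monotone in T, so bisection converges safely."""
    def f(t):
        return eps * SIGMA * t**4 + h * (t - t_air) - q_abs

    for _ in range(80):             # bisection on the bracketing interval
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The full model couples many such elements through the two integral equations, so the iteration runs over the whole shell rather than element by element.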
Large Scale Frequent Pattern Mining using MPI One-Sided Model
Energy Technology Data Exchange (ETDEWEB)
Vishnu, Abhinav; Agarwal, Khushbu
2015-09-08
In this paper, we propose a work-stealing runtime --- Library for Work Stealing (LibWS) --- using the MPI one-sided model for designing a scalable FP-Growth --- the de facto frequent pattern mining algorithm --- on large-scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.
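For readers unfamiliar with FP-Growth, the serial FP-tree construction that LibWS parallelizes can be sketched as follows; the class and function names are our own illustration, not the LibWS API:

```python
from collections import defaultdict

class FPNode:
    """A node of the FP-tree: an item, its count, and links to parent/children."""
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fp_tree(transactions, min_support):
    # Pass 1: count item frequencies and keep only frequent items.
    freq = defaultdict(int)
    for t in transactions:
        for item in t:
            freq[item] += 1
    frequent = {i for i, c in freq.items() if c >= min_support}
    # Pass 2: insert transactions with items ordered by descending support,
    # so common prefixes share paths (this is what keeps the tree compact).
    root = FPNode(None, None)
    for t in transactions:
        node = root
        for item in sorted((i for i in t if i in frequent),
                           key=lambda i: (-freq[i], i)):
            if item not in node.children:
                node.children[item] = FPNode(item, node)
            node = node.children[item]
            node.count += 1
    return root, freq
```

The distributed version must merge such trees built on different ranks, which is where the O(f + p/f) data exchange phase comes in.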
Modeling of a Large-Scale High Temperature Regenerative Sulfur Removal Process
DEFF Research Database (Denmark)
Konttinen, Jukka T.; Johnsson, Jan Erik
1999-01-01
Regenerable mixed metal oxide sorbents are prime candidates for the removal of hydrogen sulfide from hot gasifier gas in the simplified integrated gasification combined cycle (IGCC) process. As part of the regenerative sulfur removal process development, reactor models are needed for scale-up. Steady-state kinetic reactor models are needed for reactor sizing, and dynamic models can be used for process control design and operator training. The regenerative sulfur removal process studied in this paper consists of two side-by-side fluidized bed reactors operating at temperatures of 400… The reactor model presented does not account for bed hydrodynamics. The pilot-scale test run results, obtained in test runs of the sulfur removal process with real coal gasifier gas, have been used for parameter estimation. The validity of the reactor model for commercial-scale design applications is discussed.
Neutrino nucleosynthesis in supernovae: Shell model predictions
International Nuclear Information System (INIS)
Haxton, W.C.
1989-01-01
Almost all of the 3·10⁵³ ergs liberated in a core collapse supernova is radiated as neutrinos by the cooling neutron star. I will argue that these neutrinos interact with nuclei in the ejected shells of the supernova to produce new elements. It appears that this nucleosynthesis mechanism is responsible for the galactic abundances of ⁷Li, ¹¹B, ¹⁹F, ¹³⁸La, and ¹⁸⁰Ta, and contributes significantly to the abundances of about 15 other light nuclei. I discuss shell model predictions for the charged- and neutral-current allowed and first-forbidden responses of the parent nuclei, as well as the spallation processes that produce the new elements. 18 refs., 1 fig., 1 tab
The Large-Scale Structure of Scientific Method
Kosso, Peter
2009-01-01
The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…
No Large Scale Curvature Perturbations during Waterfall of Hybrid Inflation
Abolhasani, Ali Akbar; Firouzjahi, Hassan
2010-01-01
In this paper the possibility of generating large-scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large-scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back-reactions to terminate inflation. If one considers only the clas...
Extrapolation method in the Monte Carlo Shell Model and its applications
International Nuclear Information System (INIS)
Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio
2011-01-01
We demonstrate how the energy-variance extrapolation method works using the sequence of approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking ⁵⁶Ni in the pf shell as an example. The extrapolation method is shown to work well even in cases where the MCSM shows slow convergence, such as ⁷²Ge in the f5pg9 shell. The structure of ⁷²Se is also studied, including a discussion of the shape-coexistence phenomenon.
Dipolar modulation of Large-Scale Structure
Yoon, Mijin
For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). This observational evidence has led us to a great deal of consensus on the cosmological model, the so-called LambdaCDM model, and to tight constraints on the cosmological parameters that define it. On the other hand, the advancement of cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, quantified by its direction and amplitude. A large dipolar modulation has been measured in the CMB, originating mainly from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a large number of well-identified objects. In this thesis, we explore the measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE sources matched with 2MASS. In Chapter 3 [Yoon & Huterer, 2015], we investigated the requirements for detection of the kinematic dipole in future surveys.
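The basic number-count dipole measurement can be illustrated with a toy least-squares fit. This sketch uses random sky directions instead of a real HEALPix map with masks and shot-noise weights, and all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_dipole(directions, counts):
    """Fit counts ~ m0 + m . n by linear least squares over unit vectors n;
    returns (monopole, dipole vector relative to the monopole)."""
    A = np.column_stack([np.ones(len(directions)), directions])
    coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
    return coef[0], coef[1:] / coef[0]

# Random unit vectors on the sphere and an injected dipole d_true.
v = rng.normal(size=(5000, 3))
n_hat = v / np.linalg.norm(v, axis=1, keepdims=True)
d_true = np.array([0.02, -0.01, 0.03])
counts = 100.0 * (1.0 + n_hat @ d_true)  # N(n) = Nbar * (1 + d . n)
mono, d_est = fit_dipole(n_hat, counts)
```

With noiseless counts the fit recovers the injected monopole and dipole essentially exactly; in a real survey, shot noise and partial sky coverage set the detection threshold.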
Sodium leak detection on large pipes. Heat insulating shells made of silico-aluminate
International Nuclear Information System (INIS)
Antonakas, D.; Blanc, R.; Casselman, C.; Malet, J.C.
1986-05-01
This report presents an equipment installed on the large secondary pipes of fast reactors that fulfils several functions: support and equilibration of static and dynamic loads, heat insulation, preheating, and the detection of possible sodium leaks. The research programs associated with the development of the shells are briefly outlined; the report then deals at greater length with the studies on silico-aluminate ageing and with the detection performance [fr
AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger
2017-01-01
Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal of being accurate with regard to the main operational characteristics. We measure buffer occupancy by counting the number of elements in buffers, resource utilization by measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of the simulation when comparing the results to a large amount of real-world ope...
Lacour, Thomas; Guédra, Matthieu; Valier-Brasier, Tony; Coulouvrat, François
2018-01-01
Nanodroplets have promising medical applications such as contrast imaging, embolotherapy, or targeted drug delivery. Their functions can be mechanically activated by means of focused ultrasound inducing a phase change of the inner liquid, known as the acoustic droplet vaporization (ADV) process. In this context, a four-phase (vapor + liquid + shell + surrounding environment) model of ADV is proposed. Attention is especially devoted to the mechanical properties of the encapsulating shell, incorporating the well-known strain-softening behavior of Mooney-Rivlin materials, adapted to the very large deformations of soft, nearly incompressible materials. Various responses to ultrasound excitation are illustrated, depending on linear and nonlinear mechanical shell properties and acoustical excitation parameters. Different classes of ADV outcomes are exhibited, and a relevant threshold ensuring complete vaporization of the inner liquid layer is defined. The dependence of this threshold on acoustical, geometrical, and mechanical parameters is also provided.
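As a point of reference for the shell rheology mentioned above, the textbook incompressible Mooney-Rivlin law in uniaxial extension gives the Cauchy stress sigma = 2(lambda^2 - 1/lambda)(C1 + C2/lambda). A minimal sketch follows; the parameter values are arbitrary, and the paper's spherical-shell kinematics differ from this uniaxial case:

```python
def mooney_rivlin_uniaxial_stress(lam, c1, c2):
    """Cauchy stress of an incompressible Mooney-Rivlin solid at uniaxial
    stretch lam, with material constants c1 and c2 (units of pressure).
    lam > 1 is extension, lam < 1 is compression, lam = 1 is stress-free."""
    return 2.0 * (lam**2 - 1.0 / lam) * (c1 + c2 / lam)
```

The sign and magnitude of c2 control how the effective stiffness evolves with stretch, which is the strain-softening mechanism the abstract refers to.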
Cask for concrete shells transportation
International Nuclear Information System (INIS)
Labergri, F.
2001-01-01
Nowadays, radioactive waste from nuclear plants is conditioned in situ in concrete shells. Most of these fall into the industrial waste category defined by the regulations for radioactive material transportation. However, the content of a few of them exceeds the limits set for low specific activity substances, so these shells must be transported in type B packagings. To this end, Robatel has undertaken, for EDF (Electricite de France), the development of a container, named ROBATEL TM R68, for further licensing. The particularity of this packaging is that the lid must have a wide opening to allow the usual handling operations of the concrete shells. This leads to a non-conventional design and makes the package more vulnerable to drop-test loads. In order to define a minimal drop-test program on a reduced-scale model, we use a simple method to find the most damaging drop orientation. (author)
BigSUR: large-scale structured urban reconstruction
Kelly, Tom
2017-11-22
The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data sources are typically noisy and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.
Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se
Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit
2015-04-01
Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). This presentation will give an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. The model output is launched as new Open Data at the web site www.hypeweb.smhi.se to be used for (i) Climate change impact assessments on water resources and dynamics; (ii) The European Water Framework Directive (WFD) for characterization and development of measure programs to improve the ecological status of water bodies; (iii) Design variables for infrastructure constructions; (iv) Spatial water-resource mapping; (v) Operational forecasts (1-10 days and seasonal) on floods and droughts; (vi) Input to oceanographic models for operational forecasts and marine status assessments; (vii) Research. The following regional domains have been modelled so far with different resolutions (number of subbasins in brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The HYPE web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover the user can explore and download historical time series of discharge for each basin and explore the performance of the model
Design techniques for large scale linear measurement systems
International Nuclear Information System (INIS)
Candy, J.V.
1979-03-01
Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented
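The "transform, design, transform back" idea can be sketched for a linear model x' = Ax: move to modal (eigenvector) coordinates, then score each candidate sensor by how strongly it sees the weakest mode. This is our own illustration of the general approach, not the specific design procedure of the report:

```python
import numpy as np

def modal_measurement_scores(A, candidates):
    """For each candidate measurement vector c (rows of `candidates`, y = c @ x),
    return the smallest |c @ v_i| over the eigenvectors v_i of A.
    A score of zero means some mode is invisible through that sensor
    (a PBH-style observability check in modal coordinates)."""
    _, V = np.linalg.eig(A)          # columns of V are the modal directions
    scores = []
    for c in candidates:
        comp = np.abs(c @ V)          # how the sensor projects onto each mode
        scores.append(comp.min())
    return np.array(scores)
```

For a diagonal A the modes are the coordinate axes, so a sensor measuring only one state scores zero while a sensor mixing all states scores positively.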
Comparison of void strengthening in fcc and bcc metals: Large-scale atomic-level modelling
International Nuclear Information System (INIS)
Osetsky, Yu.N.; Bacon, D.J.
2005-01-01
Strengthening due to voids can be a significant radiation effect in metals. Treatment of this by the elasticity theory of dislocations is difficult when the atomic structure of the obstacle and the dislocation is influential. In this paper, we report results of large-scale atomic-level modelling of edge dislocation-void interaction in fcc (copper) and bcc (iron) metals. Voids of up to 5 nm diameter were studied over the temperature range from 0 to 600 K. We demonstrate that atomistic modelling is able to reveal important effects that are beyond the continuum approach. Some arise from features of the dislocation core and crystal structure; others involve dislocation climb and temperature effects
Energy Technology Data Exchange (ETDEWEB)
Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Flörke, M.; Huang, S.; Motovilov, Y.; Buda, S.; Yang, T.; Müller, C.; Leng, G.; Tang, Q.; Portmann, F. T.; Hagemann, S.; Gerten, D.; Wada, Y.; Masaki, Y.; Alemayehu, T.; Satoh, Y.; Samaniego, L.
2017-01-04
Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HMs) for 11 large river basins on all continents under reference and scenario conditions. The foci are on model validation runs, the sensitivity of annual discharge to climate variability in the reference period, and the sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models reproduce reference conditions much better. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics, evaluated for HM ensemble medians and spreads, show that the medians are to a certain extent comparable in some cases, with distinct differences in others, and that the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, regional-scale models validated against observed discharge should be used.
Shell structures and chaos in nuclei and large metallic clusters
International Nuclear Information System (INIS)
Heiss, W.D.; University of the Witwatersrand, Johannesburg; Nazmitdinov, R.G.; Radu, S.; University of the Witwatersrand, Johannesburg
1995-01-01
A reflection-asymmetric deformed oscillator potential is analyzed from the classical and quantum mechanical point of view. The connection between occurrence of shell structures and classical periodic orbits is studied using the ''removal of resonances method'' in a classical analysis. In this approximation, the effective single particle potential becomes separable and the frequencies of the classical trajectories are easily determined. It turns out that the winding numbers calculated in this way are in good agreement with the ones found from the corresponding quantum mechanical spectrum using the particle number dependence of the fluctuating part of the total energy. When the octupole term is switched on it is found that prolate shapes are stable against chaos and can exhibit shells where spherical and oblate cases become chaotic. An attempt is made to explain this difference in the quantum mechanical context by looking at the distribution of exceptional points which results from the matrix structure of the respective Hamiltonians. In a similar way we analyze the modified Nilsson model and discuss its consequences for metallic clusters. (orig.)
The Effect of Large Scale Salinity Gradient on Langmuir Turbulence
Fan, Y.; Jarosz, E.; Yu, Z.; Jensen, T.; Sullivan, P. P.; Liang, J.
2017-12-01
Langmuir circulation (LC) is believed to be one of the leading-order causes of turbulent mixing in the upper ocean. It is important for momentum and heat exchange across the mixed layer (ML) and directly impacts the dynamics and thermodynamics of the upper ocean and lower atmosphere, including the vertical distributions of chemical, biological, optical, and acoustic properties. Based on Craik and Leibovich (1976) theory, large eddy simulation (LES) models have been developed to simulate LC in the upper ocean, yielding new insights that could not be obtained from field observations and turbulence closure models. Due to their high computational cost, LES models are usually limited to small domain sizes and cannot resolve large-scale flows. Furthermore, most LES models used in LC simulations apply periodic boundary conditions in the horizontal direction, which assumes that the physical properties (i.e. temperature and salinity) and expected flow patterns in the area of interest are of a periodically repeating nature, so that the limited small LES domain is representative of the larger area. Using periodic boundary conditions can significantly reduce computational effort, and it is a good assumption for isotropic shear turbulence. However, LC is anisotropic (McWilliams et al 1997) and was observed to be modulated by crosswind tidal currents (Kukulka et al 2011). Using symmetrical domains, idealized LES studies also indicate that LC could interact with oceanic fronts (Hamlington et al 2014) and standing internal waves (Chini and Leibovich, 2005). The present study expands our previous LES modeling investigations of Langmuir turbulence to real ocean conditions with large-scale environmental motion that features fresh water inflow into the study region. Large-scale gradient forcing is introduced into the NCAR LES model through scale separation analysis. The model is applied to a field observation in the Gulf of Mexico in July, 2016 when the measurement site was impacted by
Large-scale structure in the universe: Theory vs observations
International Nuclear Information System (INIS)
Kashlinsky, A.; Jones, B.J.T.
1990-01-01
A variety of observations constrain models of the origin of large scale cosmic structures. We review here the elements of current theories and comment in detail on which of the current observational data provide the principal constraints. We point out that enough observational data have accumulated to constrain (and perhaps determine) the power spectrum of primordial density fluctuations over a very large range of scales. We discuss the theories in the light of observational data and focus on the potential of future observations in providing even (and ever) tighter constraints. (orig.)
A DATA-DRIVEN ANALYTIC MODEL FOR PROTON ACCELERATION BY LARGE-SCALE SOLAR CORONAL SHOCKS
Energy Technology Data Exchange (ETDEWEB)
Kozarev, Kamen A. [Smithsonian Astrophysical Observatory (United States); Schwadron, Nathan A. [Institute for the Study of Earth, Oceans, and Space, University of New Hampshire (United States)
2016-11-10
We have recently studied the development of an eruptive filament-driven, large-scale off-limb coronal bright front (OCBF) in the low solar corona, using remote observations from the Solar Dynamics Observatory's Atmospheric Imaging Assembly (AIA) EUV telescopes. In that study, we obtained high-temporal-resolution estimates of the OCBF parameters regulating the efficiency of charged particle acceleration within the theoretical framework of diffusive shock acceleration (DSA). These parameters include the time-dependent front size, speed, and strength, as well as the upstream coronal magnetic field orientations with respect to the front's surface normal direction. Here we present an analytical particle acceleration model, specifically developed to incorporate the coronal shock/compressive front properties described above, derived from remote observations. We verify the model's performance through a grid of idealized case runs using input parameters typical for large-scale coronal shocks, and demonstrate that the results approach the expected DSA steady-state behavior. We then apply the model to the event of 2011 May 11 using the OCBF time-dependent parameters derived by Kozarev et al. We find that the compressive front likely produced energetic particles as low as 1.3 solar radii in the corona. Comparing the modeled and observed fluences near Earth, we also find that the bulk of the acceleration during this event must have occurred above 1.5 solar radii. With this study we have taken a first step in using direct observations of shocks and compressions in the innermost corona to predict the onsets and intensities of solar energetic particle events.
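The "expected DSA steady-state behavior" referred to above is the classic power law set by the shock compression ratio r: the particle distribution approaches f(p) ∝ p^(-q) with q = 3r/(r-1), so a strong adiabatic shock (r = 4) yields q = 4, i.e. dN/dE ∝ E^(-2) for relativistic particles. A one-line helper (our sketch, not code from the paper):

```python
def dsa_power_law_index(compression_ratio):
    """Steady-state DSA momentum spectral index q, where f(p) ~ p**(-q),
    q = 3r / (r - 1) for a shock with compression ratio r > 1.
    Weaker shocks (smaller r) give steeper spectra (larger q)."""
    r = compression_ratio
    if r <= 1.0:
        raise ValueError("compression ratio must exceed 1")
    return 3.0 * r / (r - 1.0)
```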
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
Energy Technology Data Exchange (ETDEWEB)
Ghattas, Omar [The University of Texas at Austin
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Shell-model calculations of beta-decay rates for s- and r-process nucleosyntheses
International Nuclear Information System (INIS)
Takahashi, K.; Mathews, G.J.; Bloom, S.D.
1985-01-01
Examples of large-basis shell-model calculations of Gamow-Teller β-decay properties of specific interest in the astrophysical s- and r-processes are presented. Numerical results are given for: (1) the GT matrix elements for the excited-state decays of the unstable s-process nucleus ⁹⁹Tc; and (2) the GT strength function for the neutron-rich nucleus ¹³⁰Cd, which lies on the r-process path. The results are discussed in conjunction with the astrophysics problems. 23 refs., 3 figs.
Phylogenetic distribution of large-scale genome patchiness
Directory of Open Access Journals (Sweden)
Hackenberg Michael
2008-04-01
Abstract. Background: The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to analyze patchiness directly on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results: The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally identified, isochore-like regions were identified as the biological source of the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion: Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
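A minimal sketch of the Detrended Fluctuation Analysis used in the paper (first-order DFA on a generic numeric series; window handling and the mapping of DNA letters to numbers are simplified assumptions here):

```python
import numpy as np

def dfa_exponent(x, scales):
    """First-order DFA: integrate the mean-removed series into a profile,
    detrend it linearly in non-overlapping windows of each scale n, and
    fit the fluctuation function F(n) ~ n**alpha on a log-log scale."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # the profile
    flucts = []
    for n in scales:
        n_seg = len(y) // n
        t = np.arange(n)
        mse = 0.0
        for k in range(n_seg):
            seg = y[k * n:(k + 1) * n]
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            mse += np.mean((seg - trend) ** 2)
        flucts.append(np.sqrt(mse / n_seg))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha
```

Uncorrelated noise gives alpha near 0.5, while long-range correlated (patchy) sequences give larger exponents; the paper tracks local variations of this exponent along chromosomes.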
Assessing Effects of Joining Common Currency Area with Large-Scale DSGE model: A Case of Poland
Maciej Bukowski; Sebastian Dyrda; Paweł Kowal
2008-01-01
In this paper we present a large-scale dynamic stochastic general equilibrium model in order to analyze and simulate the effects of Euro introduction in Poland. The presented framework is based on a two-country open economy model, where the foreign country acts as the Eurozone and the home country as a candidate country. We have implemented various types of structural frictions in the open economy block that generate empirically observable deviations from the purchasing power parity rule. We consider such mechanisms as a d...