Indian Institute of Tropical Meteorology
An Autonomous Institute of the Ministry of Earth Sciences, Govt. of India
High Performance Computing System
Objectives
To establish a petaflops-scale HPC facility at MoES institutes to cater to the needs of the modeling activities of the Monsoon Mission, Climate Change Research, the National Training Centre and other programs of the Institute, and also to share the facility with other groups in the country.
To establish, update and maintain an extensive database required for modeling and observational studies.
To provide assistance in processing the data.
To provide programming and software support for model improvement.
To maintain the facility by providing the necessary supporting infrastructure, such as UPS, cooling systems, and power and generator backup. Significant investment has to be planned for maintaining the Community Facility.
About Us
The Earth behaves as a single, interlinked and self-regulating system. Its subsystems, viz. the atmosphere, hydrosphere, cryosphere, geosphere and biosphere, function together, and their interactions are significant and complex. The transport of energy and material within and across subsystems occurs from local to global scales over varying space and time. Improved and reliable forecasts of weather and climate require the integration of observations into very-high-resolution dynamical models with realistic representation of all physical processes and their complex non-linear interactions. Since weather prediction is an initial-value problem, the accuracy of the initial condition is as important as the accuracy of the model; data assimilation is therefore a crucial component of weather prediction. Conventional data coverage is spatially and temporally limited, whereas satellite data provide much better coverage in both space and time: about 90% of the data that go into the assimilation of any analysis-forecast system come from satellites, and the rest from in situ platforms.
In addition, adequate computing capacity must be available for the numerical experiments required by the various programs of the Ministry. This involves augmenting the computational power for the training school, where hands-on training is conducted with high-resolution, state-of-the-art weather and climate numerical models, and for research and development aimed at improving forecasts on short, medium and long range scales under the Monsoon Mission. Such work includes sensitivity experiments for various physical processes, impact studies of different physical parameterization schemes, data impact studies, ensemble prediction models with more members, and climate change scenario generation spanning hundreds of years. It is also essential to carry out observing system experiments (OSE), observing system simulation experiments (OSSE) and targeted observation experiments, which can guide planners on the locations and types of observations that are crucial for the numerical models, so that the observation network can be better formulated. This is a highly compute-intensive task: a large number of numerical experiments has to be carried out to identify the crucial locations where the observation network needs to be strengthened.
The entire range of research work thus involves simulation runs of multiple versions of the same high-resolution analysis-forecast model, so the utilization of HPC time as well as storage grows manifold (directly depending on the total number of experiments undertaken by each student). To study effects and impacts on longer temporal scales (from monthly to decadal to hundreds of years), correspondingly long runs have to be undertaken. In addition, understanding micro-scale processes requires extremely high-resolution models that can resolve the scales of clouds and related processes. This entire range of studies therefore requires not only a large amount of storage but also high computational power.
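As a toy illustration of the initial-value point above, the sketch below blends a model background value with a single observation, weighting each by its assumed error variance. The function name, the temperature values and the variances are illustrative assumptions, not part of any operational assimilation system.

```python
# Minimal sketch of blending a model background with one observation,
# weighted by assumed error variances (a scalar optimal-interpolation form).
# All values and names here are illustrative assumptions.

def assimilate(background, observation, var_background, var_observation):
    """Return an analysis value that combines background and observation."""
    gain = var_background / (var_background + var_observation)  # Kalman-type gain
    return background + gain * (observation - background)

# Hypothetical example: model first guess 300.0 K, satellite retrieval 301.2 K.
analysis = assimilate(background=300.0, observation=301.2,
                      var_background=1.0, var_observation=0.5)
print(f"analysis temperature: {analysis:.2f} K")  # pulled toward the less uncertain value
```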
Project Details
AADITYA is a 790+ TeraFlops IBM iDataPlex High Performance Computing cluster, featuring 38,144 Intel Sandy Bridge processor cores and 149 TB of memory.
System Configuration Details:
AADITYA is an IBM iDataPlex supercomputer. The login and compute nodes are each populated with two Intel Sandy Bridge 8-core processors. AADITYA uses an FDR10 InfiniBand interconnect in a fat-tree configuration as its high-speed network for MPI messages and IO traffic, and IBM's General Parallel File System (GPFS) to manage its parallel file system. AADITYA has 2,384 compute nodes that share memory only within a node; memory is not shared across nodes. Each compute node has two 8-core processors (16 cores), runs its own Red Hat Enterprise Linux OS, and shares 64 GBytes of memory among its cores, with no user-accessible swap space. AADITYA is rated at 800 peak TFLOPS and has 6 PBytes (formatted) of disk storage.
AADITYA is intended to be used as a batch-scheduled HPC system. Its login nodes are not to be used for large computational work (heavy memory use, heavy IO, or long executions). All executions that require large amounts of system resources must be sent to the compute nodes through batch job submission.
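As an illustration of the distributed-memory, message-passing style of work that belongs on the compute nodes rather than the login nodes, below is a minimal MPI sketch in Python. It assumes the mpi4py and NumPy packages and a generic MPI launcher; the actual batch submission procedure, scheduler and module environment on AADITYA are site-specific and are not described here.

```python
# Minimal distributed-memory sketch: each MPI rank holds its own data
# (memory is not shared across nodes) and results are combined via messages.
# Assumes mpi4py and NumPy are available; run under the site's MPI launcher
# from inside a batch job, e.g. "mpirun -np 64 python global_mean.py".
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Stand-in for one rank's subdomain of a model field (hypothetical data).
local_field = np.random.default_rng(seed=rank).random(1_000_000)

local_sum = np.array([local_field.sum()])
global_sum = np.zeros(1)
comm.Reduce(local_sum, global_sum, op=MPI.SUM, root=0)  # combine across ranks/nodes

if rank == 0:
    global_mean = global_sum[0] / (size * local_field.size)
    print(f"global mean over {size} ranks: {global_mean:.6f}")
```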
System overview
38,144 Intel Sandy Bridge processor cores in 2,384 compute nodes
AADITYA uses 2.6-GHz Intel Sandy Bridge processors on its login and compute nodes. There are 2 processors per node, each with 8 cores, for a total of 16 cores per node. In addition, these processors have 8x256 KBytes of L2 cache, and 12x6 MBytes of L3 cache.
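For a rough consistency check of the quoted peak figures, the arithmetic below multiplies the node, core and clock numbers above by an assumed 8 double-precision floating-point operations per core per cycle (AVX on Sandy Bridge); the FLOPs-per-cycle value is our assumption, not stated in the text.

```python
# Back-of-the-envelope peak performance from the figures quoted above.
# The 8 DP FLOPs per core per cycle is an assumed AVX figure for Sandy Bridge.
nodes, cores_per_node, clock_ghz, flops_per_cycle = 2384, 16, 2.6, 8
peak_tflops = nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000
print(f"{peak_tflops:.0f} TFLOPS")  # ~793, consistent with "790+" and the 800 peak rating
```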
Details on Memory
AADITYA uses both shared and distributed memory models. Memory is shared among all the cores on a node, but is not shared among the nodes across the cluster.
Each login node contains 128 GBytes of main memory. All memory and cores on the node are shared among all users who are logged in. Therefore, users should not submit memory intensive jobs on login nodes.
Each of the 2,384 compute nodes contains 64 GBytes of memory.
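The per-node memory above is consistent with the system-wide total quoted earlier; a short check, assuming binary (1024-based) units:

```python
# Aggregate compute-node memory from the per-node figure quoted above.
nodes, gbytes_per_node = 2384, 64
total_tbytes = nodes * gbytes_per_node / 1024   # binary units assumed
print(f"{total_tbytes:.0f} TBytes")             # 149 TBytes across all compute nodes
```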
Operating System
The operating system on AADITYA is Red Hat Enterprise Linux Server. The operating system supports 64-bit software.
Utility Node configuration:
There are 6 utility servers available which can be used for post-processing analysis involving large amounts of data. The configuration of each server is as follows:
Publications:
Singh Manmeet, Kumar Bipin, Chattopadhyay R., Amarjyothi K., Sutar A.K., Roy Sukanta, Rao Suryachandra A., Nanjundiah R.S., Artificial intelligence and machine learning in earth system sciences with special reference to climate science and meteorology in South Asia, Current Science, 122, May 2022, DOI:10.18520/cs/v122/i9/1019-1030, 1019-1030 (Impact Factor 1.102)
Gharat J., Kumar Bipin, Ragha L., Barve A., Jeelani S.M., Clyne J., Development of NCL equivalent serial and parallel python routines for meteorological data analysis, International Journal of High Performance Computing Applications, 36, March 2022, DOI:10.1177/10943420221077110, 337-355 (Impact Factor 1.942)
Grabowski W.W., Thomas L., Kumar Bipin, Impact of cloud-base turbulence on CCN activation: Single-size CCN , Journal of the Atmospheric Sciences, 79, February 2022, DOI:10.1175/JAS-D-21-0184.1, 551–566 (Impact Factor 3.184)
Kumar Bipin, Ranjan R., Yau M-K, Bera Sudarsan, Rao Suryachandra A., Impact of high- and low-vorticity turbulence on cloud–environment mixing and cloud microphysics processes, Atmospheric Chemistry and Physics, 21, August 2021, DOI:10.5194/acp-21-12317-2021, 12317-12329 (Impact Factor 6.133)
Kumar Bipin, Rehme M., Suresh N., Cherukuru N., Stanislaw J., Li S., Pearse S., Scheitlin T., Rao Suryachandra A., Nanjundiah R.S., Optimization of DNS code and visualization of entrainment and mixing phenomena at cloud edges, Parallel Computing, 107:102811, August 2021, DOI:10.1016/j.parco.2021.102811 (Impact Factor 0.986)
Conference Presentations:
Kumar, B., Bera, S., Prabhakaran, T. and Grabowski, W. W. (2016) “Investigation of DSD variations in a developing monsoon cloud: Analysis from numerical simulation and field observation”, University of Manchester, July 25-29, Manchester, UK.
Goetzfried, P., Kumar, B., Shaw, R. A., Schumacher, J. (2016), “Mixing at the boundary between a turbulent cloud and non-turbulent environment”, University of Manchester, July 25-29, Manchester, UK.
Konwar, M., Kumar, B., Bera, S., Prabhakaran, T. (2016) “Physical basis for cloud droplet spectral broadening in the downdraft zones”, University of Manchester, July 25-29, Manchester, UK.
Kumar, B. (2014) “Direct Numerical Simulation of Droplet Dynamics at Cloud Edge”, International Conference on Modeling and Simulation of Diffusive Processes and Applications, October 29-31, 2014. Banaras Hindu University, Varanasi, India.
Götzfried, P., Kumar, B., Siebert, H., Shaw, R.A., and Schumacher, J. (2014), “Numerical Studies of Small-Scale Turbulent Mixing and Entrainment”, AMS 14th Conference on Cloud Physics, 7-11 July 2014, Westin Copley Place, Boston, USA
Team
Project: High Performance Computing System
Project Director: Dr. A. Suryachandra Rao, Scientist-G
Dr. A. Suryachandra Rao Scientist-G
surya@tropmet.res.in
Phone No - +91-(0)20-25904245
Shri. S. M. D. Jeelani Scientist-E (Computer Engineer)
jeelani@tropmet.res.in
Phone No - +91-(0)20-25904213
Dr. Bipin Kumar Scientist-E
bipink@tropmet.res.in
Phone No - +91-(0)20-25904522
Shri. Mahesh Dharua Sci-D (Mechanical Engineer)
nilakantha@tropmet.res.in
Phone No - +91-(0)20-25904507
Mr. Jnanesh S.P. Sci-D (Electrical Engineer)
jnanesh@tropmet.res.in
Phone No - +91-(0)20-25904355
Mr. A.K. Saxena Sci-D (Civil Engineer)
anupam@tropmet.res.in
Phone No - +91-(0)20-25904335
Mr. R.M. Bankar Sci-D (Mechanical Engineer)
ravindra@tropmet.res.in
Phone No - +91-(0)20-25904549
Smt. S.U. Athale Sci. Offr Gd II
athale@tropmet.res.in
Phone No - +91-(0)20-25904214
Mr. Vipin Mali Sci. Offr Gd-II
vipin@tropmet.res.in
Phone No - +91-(0)20-25904567