Systems & Networks
The Systems & Networks Division, led by the Sr. Technical Officer (IT) (System Administrator), is responsible for the design, installation, testing, and commissioning (including extension and upgradation projects), as well as the maintenance, of the following:
SYSTEMS | Server Farm, Server Clusters incl. HPC, Storage, Desktops & Workstations, Digital Classroom, Software License Server |
NETWORKS | Campus-Wide Wired and Wireless Network, Campus-Wide Telephony Network, CCTV Network and Server Farm Networks |
SECURITY | Server and Network Security, OS Hardening, Unified Threat Management (UTM) |
ICT SUPPORT INFRA | Power Backup Systems, Cooling Systems, Institute Data Center |
This division is also responsible for ensuring design and equipment standardization, and for maintaining continuity along upgradation paths, for ease of maintenance and seamless integration.
Turnkey Project Management: This division has successfully completed four large turnkey ICT infrastructure projects in the Institute. It has proven its expertise and capability in all phases of turnkey ICT infrastructure project management, including conception and initiation, planning, execution, performance monitoring, and project closure.
This division is responsible for the design and content of the central computer center website.
Name | Designation | Qualification | Email |
Sunil Pandey | Sr. Technical Officer (IT) (System Admin) | M.Tech. | sys_admin@nitrr.ac.in |
Abhishek K Dewangan | Technical Assistant | Diploma in CS | adewanagan.ccc@nitrr.ac.in |
Krishna Kumar | Technical Assistant | M.E. | kkumar.ccc@nitrr.ac.in |
Bhagchand Verma | Trainee Technician I | | |
Ashok Dewangan | Technician | | |
Ishwar Prasad | Trainee Technician I | | |
Lomesh Kumar Sahu | Trainee Technician II | | |
Swati Sharma | Trainee Technician II | | |
Campus Wide Wired Network
Campus Wide Wireless Network
Indoor WiFi hotspots have been provisioned in the corridors of the main building and the architecture buildings, and in the mess halls of all hostels. Limited WiFi connectivity will be available in the student rooms of H-hostel from this month onwards. The Central Computer Center building is fully WiFi enabled.
Information provided by Sunil Pandey, System Administrator & Sr. Technical Officer (IT)
Campus Wide Telephony Network
Campus Network Security
Incoming and outgoing network traffic at the gateway of the institute network is monitored for information security using what is referred to as a next-generation firewall, or Unified Threat Management (UTM) device. This UTM device is a multi-functional network appliance and performs the following standard roles related to the information security of the campus network:
- Network Firewall
- Network Intrusion Detection System
- Network Intrusion Prevention System
- Gateway Antivirus
- Web and Application Content Filtering
- Data Leak Prevention
- Virtual Private Network
The UTM device is configured in a High Availability Active-Active configuration.
Advanced Computing Facility
Campus Hosting Services
Hosting services include:
- Secured access
- Periodic manual monitoring
- Air-conditioned environment
- Rodent repellent system
- Periodic CCTV-based remote monitoring
- Sufficient power for all installed equipment
- Uninterruptible Power Supply (UPS) to protect against power anomalies
- Battery backup and standby generators to maintain normal operations during a utility outage
- Preexisting, standard 19-inch racks with full cable management
- Receiving services
- Installation services
S. No. | Name of Department / Section | Application | Number of Servers |
1 | Central Library | Library Automation | 1 |
2 | Main Office | Admin MIS | 1 |
3 | Academics Section | Academic MIS | 2 |
4 | Computer Science and Engineering | Deep Learning | 2 |
A policy is in place for hosting the servers of other departments in the Central Computer Center. At present, the Central Computer Center does not charge the respective departments / sections for these services.
However, plans are in place to charge for these services through the ICT services bill of the concerned department / section, which will then be required to provision for them in its annual budget.
Campus Wide Software Licensing
The Central Computer Center presently manages the license servers and installation management of the following campus-wide software:
S. No. | Software | Applications |
1 | Mathworks® Matlab® | Multi-paradigm Numerical Computing Environment and Proprietary Programming Language |
2 | IBM® SPSS® | Statistical Analysis Software |
3 | IBM® SPSS® Amos | Structural Equation Modeling |
4 | ADAPT | Structural Analysis Software |
5 | ANSYS | Engineering Simulation and 3D Design Software |
Installation Steps for the above Software have also been provided by the Central Computer Center. Details can be found at the following link
Computer Lab in Central Computer Center
S. No. | Facility | Location | Number of Desktops | Intended Use |
1. | Computer Laboratory | First Floor | 242 | Online tests, multi-purpose personal computing, client access systems for advanced computing facilities of CCC |
The PCs have been chosen to support a myriad of educational and research applications. They are based on state-of-the-art Accelerated Processing Units (APUs) with 8 GB RAM and 1 TB of hard disk space. The heart of each PC is a 3.2 GHz APU with 4 compute and 6 graphics cores, giving the machines entry-level workstation capabilities. They run the GNU/Linux operating system, which supports the latest scientific software applications (such as Scilab, R, and PSPP), numerical libraries, and development tools (compilers, etc.) used in similar settings across the world.
The PCs also serve as client access systems for high-end compute applications running on remote servers. Computing in the CCC therefore follows a very different model (client-server with near real-time remote graphics) from that of the regular departmental labs, and requires a very stable, high-performance network. The CCC network is a state-of-the-art, high-performance, failsafe 1 Gbps network.
Computer Lab Classrooms in Central Computer Center
S. No. | Facility | Location | Number of Desktops | Projector | PA system | Intended Use |
1. | CLC 1 | First Floor | 35 + 1 | Provisioned | Provisioned | Hands-on software training sessions |
2. | CLC 2 | First Floor | 35 + 1 | Provisioned | Provisioned | Hands-on software training sessions |
3. | CLC 3 | Ground Floor | 54 + 1 | Provisioned | Provisioned | Hands-on software training sessions, Language Learning |
4. | CLC 4 | Ground Floor | 36 + 1 | Provisioned | Provisioned | Hands-on software training sessions |
ADVANCED COMPUTING FACILITY (ACF)
The Advanced Computing Facility at NIT Raipur has been established with the aim of providing advanced platforms to support work in compute-intensive disciplines such as numerical modeling and simulation. For more on the kind of work that will benefit, please refer to "Supercomputer and its Applications" under Articles.
Facility 1: HPC Cluster / Supercomputer
The High Performance Computing (HPC) cluster has been set up in the Advanced Computing Facility.
The node configuration of the HPC facility is as follows:
Configuration | Master Node | Compute Node |
CPU | 2 x Intel® Xeon® Processor E5-2620 v4 | 2 x Intel® Xeon® Processor E5-2650 v4 |
Memory | 4x 32GB (128GB) configured for 2400MHz | 4x 32GB (128GB) configured for 2400MHz |
OS | CentOS Linux release 7.5.1804 (Core) | CentOS Linux release 7.5.1804 (Core) |
The HPC cluster comprises one master node and fifty-four compute nodes. The supercomputer uses a 100 Gbps low-latency cluster interconnect.
The Rmax performance of the HPC cluster with CPU-only nodes is 39.5 TFLOPS. Two nodes are additionally equipped with special-purpose cards, such as the NVIDIA Tesla P100 and Quadro M4000, for compute acceleration, graphics visualization, etc.
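As a rough cross-check of the quoted figures, the theoretical peak (Rpeak) of the CPU-only partition can be estimated from the node counts above. This is a hedged sketch in Python: the per-chip numbers (12 cores at a 2.2 GHz base clock for the E5-2650 v4, and 16 double-precision FLOPs per core per cycle with AVX2 FMA on Broadwell) are public Intel specifications assumed here, not figures stated on this page.

```python
# Back-of-the-envelope theoretical peak (Rpeak) for the CPU-only compute nodes.
# Assumptions (not from this page): Intel Xeon E5-2650 v4 = 12 cores at 2.2 GHz
# base clock; Broadwell AVX2 sustains 16 double-precision FLOPs/cycle/core
# (2 FMA units x 4 DP lanes x 2 ops per FMA).
NODES = 54            # compute nodes (from the text)
SOCKETS = 2           # CPUs per node (from the text)
CORES = 12            # cores per E5-2650 v4 (assumed spec)
CLOCK_HZ = 2.2e9      # base clock (assumed spec)
FLOPS_PER_CYCLE = 16  # double-precision FLOPs per core per cycle (assumed)

rpeak = NODES * SOCKETS * CORES * CLOCK_HZ * FLOPS_PER_CYCLE
rmax = 39.5e12        # measured LINPACK figure quoted above

print(f"Rpeak = {rpeak / 1e12:.1f} TFLOPS")        # about 45.6 TFLOPS
print(f"LINPACK efficiency = {rmax / rpeak:.0%}")  # about 87%
```

Against this estimate, the quoted Rmax of 39.5 TFLOPS corresponds to roughly 87% LINPACK efficiency, which is plausible for a CPU cluster with a 100 Gbps low-latency interconnect.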
The significant achievements of this section include the establishment of the infrastructure listed in the table below.
S.No. | Systems & Networks Division Achievements |
1 | Advanced Computing Facility with 32nd Ranked HPC system in Top Supercomputers India List (2019) |
2 | Central Computer Center with 400+ computer systems (2018) |
3 | SDN based, High Performance, Resilient Campus-Wide Wired Network (2014, 2018) |
4 | Campus-Wide Wireless Network (2018)
5 | Campus-Wide IP Telephony Network (2014, 2018) |
6 | IP CCTV Network (2018) |
Supercomputer and its Applications
Until recently, the two traditional pillars of science and technology research were "Theory" and "Experiment". In recent times, "Simulation" has come to be recognized as an important third pillar supporting science and technology research. Supercomputing is the use of supercomputers and parallel processing techniques for solving large and complex computational problems. HPC focuses on the design and development of parallel processing algorithms and their implementation on specialized hardware as parallel codes. The development of codes to run on this kind of specialized hardware is a fairly sophisticated and niche area of computer programming. The codes are "architecture aware" in the sense that they make extensive use of the underlying hardware of the machine.
HPC applications are specifically designed to take advantage of the parallel nature of high-performance computing systems. Being algorithmically designed for high-performance architectures, these applications can be run on compute clusters. The research areas where high-performance computing is a game changer are listed below:
- Astrophysical systems
- Biology and Medicine: protein folding simulations (and other macromolecules), bioinformatics, genomics, computational neurological modeling, modeling of biological systems (e.g., ecological systems), 3D CT ultrasound, MRI imaging, molecular bionetworks, cancer and seizure control
- Chemistry: calculating the structures and properties of chemical compounds/molecules and solids, computational chemistry/cheminformatics, molecular mechanics simulations, computational chemical methods in solid state physics, chemical pollution transport
- Civil Engineering: finite element analysis, structures with random loads, construction engineering, water supply systems, transportation/vehicle modeling
- Computer Engineering, Electrical Engineering, and Telecommunications: VLSI, computational electromagnetics, semiconductor modeling, simulation of microelectronics, energy infrastructure, RF simulation, networks
- Environmental Engineering and Numerical weather prediction: climate research, Computational geophysics (seismic processing), modeling of natural disasters
- Industrial Engineering: discrete event and Monte-Carlo simulations (for logistics and manufacturing systems for example), queueing networks, mathematical optimization
- Material Science: glass manufacturing, polymers, and crystals
- Mechanical Engineering: combustion simulations, structural dynamics, computational fluid dynamics, computational thermodynamics, computational solid mechanics, vehicle crash simulation, biomechanics
- Physics: Computational particle physics, automatic calculation of particle interaction or decay, plasma modeling, cosmological simulations
- Transportation
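As a concrete instance of one of the items above, Monte-Carlo simulation is a typical embarrassingly parallel HPC workload: every sample is independent, so each node can draw its own samples and only the final counts need to be combined. A minimal serial sketch in Python (a parallel version would simply run this once per node with different seeds and average the results):

```python
# Minimal Monte-Carlo sketch: estimate pi by sampling random points in the
# unit square and counting how many land inside the quarter circle. Each
# sample is independent, so the method parallelizes trivially across nodes.
import random

def estimate_pi(samples, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

print(estimate_pi(100_000))  # tends toward 3.14159... as samples grows
```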
With this in mind, we have implemented a 20 Gbps fiber optic primary backbone network in the campus, which connects the campus core network switch with two primary distribution switches in a resilient ring topology. The primary backbone bandwidth of 20 Gbps can be increased to 40 Gbps when required. In the event of a fiber cut, the recovery time of this core ring is less than 50 milliseconds, comparable to carrier-grade synchronous digital hierarchy / synchronous optical network (SDH/SONET) switching standards. This time includes fault detection as well as the actual switching and re-synchronizing of the circuit. With at least 50 ms of data buffering, even a live call is not disconnected: the equipment keeps transmitting the data in its buffer, and by the time the buffer drains, the circuit has switched and new data is arriving. We have field-tested this behavior. The campus core is highly available, with 160 Gbps dynamic failover.
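The buffering argument above can be made concrete with a quick back-of-the-envelope calculation, assuming the full 20 Gbps backbone rate (a single voice call needs only a tiny fraction of this):

```python
# Buffer needed to ride out a 50 ms protection switchover without data loss:
# buffer = link_rate (bits/s) x switchover_time (s), converted to bytes.
LINK_RATE_BPS = 20e9    # 20 Gbps primary backbone (from the text)
SWITCH_TIME_S = 0.050   # < 50 ms ring recovery (from the text)

buffer_bytes = LINK_RATE_BPS * SWITCH_TIME_S / 8
print(f"{buffer_bytes / 1e6:.0f} MB")  # 125 MB at full line rate

# A single 64 kbps voice call over the same 50 ms needs only:
call_bytes = 64e3 * SWITCH_TIME_S / 8
print(f"{call_bytes:.0f} bytes")  # 400 bytes, which is why a live call survives
```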
Two similar 10 Gbps secondary distribution resilient rings have also been implemented and form part of the campus backbone network. One comprises the primary distribution switch and the secondary distribution switches in the boys' hostels; the other comprises a primary distribution switch and the secondary distribution switches in the girls' hostel. The gigabit access switches of a hostel have been stacked with 40 Gbps bandwidth where necessary and are connected to the distribution switch of the respective hostel in a star topology.
In the main building of the institute there are three large floors, each of more than one lakh square feet, comprising the departments, their laboratories, classrooms, the library, the workshop, and departmental and institute staff and administrative offices. On each of these floors, six secondary distribution switches are connected in a 10G resilient fiber optic ring with the aggregation switches at the core. The access switches on these floors are connected to the nearest distribution switch using Category 6 unshielded twisted pair copper cables. In the architecture building, a 10G resilient ring connects four secondary distribution switches, evenly distributed across two floors, to the aggregation switch at the core. Access switches in this building are connected in the same manner as in the main building.
On the residential side and in satellite buildings of the institute, we have run gigabit fiber to individual buildings and blocks. The specialized access devices used in this segment are gateways providing both Ethernet and analog phone connectivity. In the newly constructed G+6 building, a secondary fiber distribution switch has been deployed along with these specialized access devices to provide LAN and telephone connectivity.
The entire wired campus network of the institute is based on a centralized software-defined networking (SDN) controller, with which the institute's switched network devices seamlessly integrate. The controller provides a number of very useful network automation features, such as zero-touch provisioning, scheduled automatic firmware upgrades with rolling reboots, automatic backup, restore, and recovery of devices, and simplified service rollouts, besides allowing the entire network to be managed as a single entity. This greatly simplifies network management tasks and significantly reduces operational costs. Ours has been the first and only SDN-based campus network in an Institute of National Importance in India since 2014.