The Systems & Networks Division, led by the Sr. Technical Officer (IT) (System Administrator), is responsible for the design, installation, testing and commissioning, including extension and upgradation projects, and the maintenance of the following:

SYSTEMS: Server Farm, Server Clusters incl. HPC, Storage, Desktops & Workstations, Digital Classroom, Software License Server
NETWORKS: Campus-Wide Wired and Wireless Network, Campus-Wide Telephony Network, CCTV Network and Server Farm Networks
SECURITY: Server and Network Security, OS Hardening, Unified Threat Management (UTM)
ICT SUPPORT INFRA: Power Backup Systems, Cooling Systems, Institute Data Center

This division is also responsible for ensuring design and equipment standardization and continuity along upgradation paths, for ease of maintenance and seamless integration.

Turnkey Project Management: This division has successfully completed four large turnkey ICT infrastructure projects in the Institute. It has proven its expertise and capabilities in all phases of turnkey ICT infrastructure project management, including conception and initiation, planning, execution, performance monitoring, and project closure.

This division is responsible for the design and content of the central computer center website.



Sunil Pandey | Sr. Technical Officer (IT) (System Admin) | M.Tech. | sys_admin@nitrr.ac.in
Abhishek K Dewangan | Technical Assistant | Diploma in CS | adewanagan.ccc@nitrr.ac.in
Krishna Kumar | Technical Assistant | M.E. | kkumar.ccc@nitrr.ac.in
Bhagchand Verma | Trainee Technician
Ashok Dewangan | Technician

Campus Wide Wired Network


The campus-wide wired local area network encompasses the entire institute and provides a primarily gigabit converged network to all departments, centers and sections of the institute, including their computer laboratories, academic, technical and administrative staff offices, and classrooms. It also extends to the boys' and girls' student hostels and staff residences.

The campus wired LAN has been planned and designed following a systematic approach, with emphasis on performance, longevity and operational continuity, and with future scalability in mind. Patterns have been designed for different segments which can be replicated during any future expansion.

With this in mind, we have implemented a 20 Gbps fiber optic primary backbone which connects the campus core network switch with two primary distribution switches in a resilient ring topology; its bandwidth can be increased to 40 Gbps when required. In the event of a fiber cut, the recovery time of this core ring is less than 50 milliseconds, comparable to carrier-grade synchronous digital hierarchy / synchronous optical network (SDH/SONET) protection switching standards. This time includes fault detection as well as the actual switching and re-synchronizing of the circuit. With at least 50 ms worth of data buffered, even a live call is not disconnected: the equipment keeps transmitting the data in the buffer, and by the time the buffer drains, the circuit has switched and new data is arriving. We have field-tested this concept. The campus core is highly available with 160 Gbps dynamic failover.
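A quick back-of-the-envelope sketch (not taken from the deployment data above) shows why roughly 50 ms of buffering is enough to ride out a protection switch even for modest buffer sizes:

```python
# How much data must be buffered to ride out a sub-50 ms protection switch?

def buffer_bytes(rate_bps: float, holdover_ms: float = 50.0) -> int:
    """Bytes of buffer needed to cover `holdover_ms` at `rate_bps`."""
    return int(rate_bps / 8 * holdover_ms / 1000)

# A G.711 voice call at 64 kbps needs only 400 bytes of buffer:
print(buffer_bytes(64_000))        # 400
# Even a full 1 Gbps stream needs just 6.25 MB:
print(buffer_bytes(1_000_000_000)) # 6250000
```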

Two similar 10 Gbps secondary distribution resilient rings have also been implemented and form part of the campus backbone network. One comprises the primary distribution switch and the secondary distribution switches in the boys' hostels; the other comprises a primary distribution switch and the secondary distribution switches in the girls' hostel. The gigabit access switches of a hostel have been stacked with 40 Gbps bandwidth where necessary and are connected to the distribution switch of the respective hostel in a star topology.

In the main building of the institute, we have three large floors, each greater than one lakh square feet in area, comprising the departments, their laboratories, classrooms, the library, the workshop, and departmental and institute staff and administrative offices. On each of these floors, six secondary distribution switches are connected in a 10G resilient fiber optic ring with the aggregation switches at the core. The access switches on these floors are connected to the distribution switch in their proximity using Category 6 unshielded twisted pair copper cables. In the architecture building, a 10G resilient ring connects four secondary distribution switches, evenly distributed across two floors, to the aggregation switch at the core. Access switches in this building are connected similarly to those in the main building.

On the residential side and in satellite buildings of the institute, we have run gigabit fiber to individual buildings and blocks. The specialized access devices used in this segment are gateways providing both Ethernet and analog phone connectivity. In the newly constructed G+6 building, a secondary fiber distribution switch has been used along with these specialized access devices to provide LAN and telephone connectivity.

The entire wired campus network of the institute is based on a centralized software-defined networking (SDN) controller, with which the switched network devices seamlessly integrate. This controller provides a number of very useful network automation features, such as zero-touch provisioning, scheduled automatic firmware upgrades with rolling reboots, automatic backup, restore and recovery of devices, and simplified service rollouts, besides allowing the entire network to be managed as a single entity. This greatly simplifies network management tasks and significantly reduces operational costs. Ours has been the first and only SDN-based campus network in an Institute of National Importance in India since 2014.
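Automation features such as scheduled firmware upgrades boil down to the controller reconciling the actual state of each device against a desired state. As a purely illustrative sketch (the device records and the `rolling_batches` helper are hypothetical, not the controller's real data model or API), a rolling upgrade can be thought of as batching out-of-date switches so that only a few reboot at a time:

```python
# Hypothetical sketch of the reconciliation idea behind rolling firmware
# upgrades: find devices whose firmware is behind the target version and
# group them into small batches so the network as a whole stays up.

def rolling_batches(devices, target_fw, batch_size=2):
    """Return batches of device names that still need the target firmware."""
    stale = [d["name"] for d in devices if d["firmware"] != target_fw]
    return [stale[i:i + batch_size] for i in range(0, len(stale), batch_size)]

inventory = [
    {"name": "dist-1", "firmware": "9.2"},
    {"name": "dist-2", "firmware": "9.1"},
    {"name": "acc-1",  "firmware": "9.1"},
    {"name": "acc-2",  "firmware": "9.2"},
    {"name": "acc-3",  "firmware": "9.1"},
]
print(rolling_batches(inventory, "9.2"))  # [['dist-2', 'acc-1'], ['acc-3']]
```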

Information provided by Sunil Pandey, System Administrator & Sr. Technical Officer (IT)


Campus Wide Wireless Network


The campus-wide wireless network is based on a centralized WiFi controller and WiFi access points spread across the campus. The access points seamlessly integrate with the controller, which can centrally tune their various parameters. The access points in the institute are mostly dual-band and are based on the IEEE 802.11ac standard.

The WiFi network in the institute comprises hotspots at locations across the campus which typically experience high footfall. Outdoor hotspots have been provisioned in the institute porch and the gardens in front of the main building, as well as in the gardens of the architecture building and the boys' and girls' hostels.

Indoor WiFi hotspots have been provisioned in the corridors of the main building and the architecture building, as well as in the mess halls of all hostels. Limited WiFi connectivity will be available in the student rooms of H-hostel from this month onwards. The Central Computer Center building is completely WiFi enabled.




Campus Wide Telephony Network


The campus telephony is based on an IP-based Private Branch Exchange (IP-PBX). The IP-PBX system provides internal communication within the campus through telephony extensions. Provisions are also in place to connect these extensions to the public switched telephone network (PSTN).

The IP-PBX system makes use of the TCP/IP protocol stack to provide audio, video and instant messaging over IP connections. IP phones have been provisioned in the offices, whereas analog phones have been provisioned in the residences, along with specialized gateway devices that enable two-way communication between the analog phones and the IP-PBX.
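For a sense of scale, the bandwidth of a single uncompressed G.711 call carried over RTP/UDP/IPv4 can be estimated from the standard header sizes (a rough sketch; Ethernet framing overhead is excluded):

```python
# Rough per-call bandwidth estimate for a G.711 stream over RTP/UDP/IPv4.
payload = 160          # bytes of audio per 20 ms packet (64 kbps codec)
headers = 12 + 8 + 20  # RTP + UDP + IPv4 header bytes
pps = 50               # packets per second (1000 ms / 20 ms)

kbps = (payload + headers) * 8 * pps / 1000
print(f"~{kbps:.0f} kbps per direction per call")  # ~80 kbps
```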




Campus Network Security


Computer networks, including campus networks, can come under attack from malicious sources. A passive network attack is one in which the intruder is mainly interested in intercepting the data communication through the network. An active network attack is one in which the intruder is not merely interested in intercepting data communication; rather, the intruder attempts to disrupt the normal operation of the network, or carries out covert network reconnaissance and tries to move laterally to discover and gain access to data, information, software and hardware assets available on the network.

Network security is the term for the entire gamut of physical and software defensive measures and mechanisms undertaken to protect the base network infrastructure from being compromised through illegal access, which can lead to misuse or malfunction of devices and to insertion, deletion, modification, destruction or improper disclosure of data. The objective is to create a robust and secure environment in which authorized systems, users and programs can perform their intended and permitted functions.

Incoming and outgoing network traffic is monitored for information security at the gateway of the institute network using a next-generation firewall, also referred to as a Unified Threat Management (UTM) device. This multi-functional network appliance performs the following standard roles related to the information security of the campus network.

  • Network Firewall
  • Network Intrusion Detection System
  • Network Intrusion Prevention System
  • Gateway Antivirus
  • Web and Application Content Filtering
  • Data Leak Prevention
  • Virtual Private Network

The UTM device is configured in High Availability Active-Active Configuration.
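The firewall role above can be illustrated with a toy first-match packet filter (the rules and semantics here are deliberately simplified and are not the UTM's actual configuration):

```python
# Toy first-match packet filter illustrating the firewall role of the UTM.
# Rules are evaluated top to bottom; the first match wins, and a final
# catch-all rule implements a default-deny policy.

RULES = [
    {"action": "allow", "dst_port": 443},   # HTTPS to anywhere
    {"action": "allow", "dst_port": 80},    # HTTP
    {"action": "deny",  "dst_port": None},  # default deny (None = any port)
]

def decide(dst_port: int) -> str:
    for rule in RULES:
        if rule["dst_port"] in (None, dst_port):
            return rule["action"]
    return "deny"

print(decide(443))  # allow
print(decide(23))   # deny (telnet blocked by default-deny)
```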




Advanced Computing Facility


The Advanced Computing Facility at NIT Raipur has been established with the aim of providing advanced platforms for research and education in advanced disciplines like Computational Science and Engineering and Big Data Analytics. Presently, this facility is equipped with a high performance computing / supercomputing cluster.

The HPC cluster comprises one master and fifty-four compute nodes. The supercomputer uses a fat-tree network with 100 Gbps bandwidth. The Rmax performance of the HPC cluster with CPU-only nodes is 39.5 TFLOPS. Two nodes are equipped with special-purpose cards (NVIDIA Tesla P100 and Quadro M4000) for compute acceleration, graphics visualization, etc.




Campus Hosting Services


The Central Computer Center provides hosting services for campus clients who want to operate their servers in a secure, centrally managed server room.

This controlled environment is run as an institute resource, providing secure and reliable services for institute communications, computing, and data applications.

Hosting services include:
  • Secured Access
  • Periodic manual monitoring
  • Air-conditioned Environment
  • Rodent Repellant System
  • Periodic CCTV based Remote Monitoring
  • Sufficient power for all installed equipment
  • Uninterruptible Power Supply (UPS) to protect against power anomalies
  • Battery backup and standby generators to maintain normal operations during a utility outage
  • Preexisting, standard 19-inch racks with full cable management
  • Receiving services
  • Installation services

S. No. | Name of Department / Section | Application | Number of Servers
1 | Central Library | Library Automation | 1
2 | Main Office | Admin MIS | 1
3 | Academics Section | Academic MIS | 2
4 | Computer Science and Engineering | Deep Learning | 2

A policy is in place for hosting other departments' servers in the Central Computer Center. At present, the Central Computer Center does not charge the respective departments / concerned sections for these services.

However, plans are in place to add these services to the ICT services bill of the concerned department / section, which will then be required to provision for them in its annual budget.




Campus Wide Software Licensing


Campus-wide software licensing gives all faculty, staff and students access to the licensed software on Institute-owned computers. They are also able to install the software on their personally owned computers.

Typically, a campus-wide software license permits a limited number of concurrent users.
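Conceptually, such a floating license behaves like a counting semaphore: a user checks out one of a fixed number of seats and returns it on exit. A minimal sketch of this model (illustrative only, not the actual license server protocol):

```python
# Sketch of how a floating-license pool limits concurrent users.
import threading

class LicensePool:
    def __init__(self, seats: int):
        self._sem = threading.BoundedSemaphore(seats)

    def checkout(self) -> bool:
        """Try to take a seat; return False if all seats are in use."""
        return self._sem.acquire(blocking=False)

    def checkin(self):
        self._sem.release()

pool = LicensePool(seats=2)
print(pool.checkout())  # True  (seat 1 of 2)
print(pool.checkout())  # True  (seat 2 of 2)
print(pool.checkout())  # False (pool exhausted)
pool.checkin()
print(pool.checkout())  # True  (seat freed and re-taken)
```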

The Central Computer Center presently manages the license servers and installation of the following campus-wide software.

S. No. | Software | Application
1 | Mathworks® Matlab® | Multi-paradigm Numerical Computing Environment and Proprietary Programming Language
2 | IBM® SPSS® | Statistical Analysis Software
3 | IBM® SPSS® Amos | Structural Equation Modeling
4 | ADAPT | Structural Analysis Software
5 | ANSYS | Engineering Simulation and 3D Design Software

Installation steps for the above software have also been provided by the Central Computer Center. Details can be found at the following link




Computer Lab in Central Computer Center


The Computer Laboratory is intended for general-purpose individual computing, accessing computational resources and digital learning content available on the intranet and Internet, computer-based testing, etc.

The Computer Laboratory of the Central Computer Center houses 240 desktops for use by students, faculty and staff. The systems are based on a 3.1 GHz quad-core CPU with six GPU cores, 8 GB RAM and a 1 TB hard disk drive, and run an open source Linux operating system. The Computer Center building, including the CLCs, is centrally air conditioned and has UPS and generator power backup.

S. No. | Facility | Location | Number of Desktops | Intended Use
1. | Computer Laboratory | First Floor | 242 | Online tests, multi-purpose personal computing, client access systems for advanced computing facilities of CCC

The PCs have been chosen to support myriad educational and research applications. They are based on state-of-the-art Accelerated Processing Units (APUs) with 8 GB RAM and 1 TB of hard disk space; the heart of each PC is a 3.2 GHz APU with 4 compute and 6 graphics cores, giving it entry-level workstation capabilities. The PCs run the GNU/Linux operating system, which supports the latest scientific software applications such as Scilab, R and PSPP, numerical libraries, and development tools such as compilers used in similar settings across the world.

The PCs also serve as client access systems for high-end compute applications running on remote servers. Computing in the CCC therefore follows a very different model (client-server with near real-time remote graphics) from computing in the regular departmental labs, and requires a very stable, high-performance network. The CCC network is a state-of-the-art, high-performance, failsafe 1 Gbps network.




Computer Lab Classrooms in Central Computer Center


There are four Computer Laboratory Classrooms (CLCs) in the Central Computer Center. The CLCs have been specially designed for instructor-led, hands-on software training sessions involving live demonstrations, and each is furnished with a projector, electronic whiteboard, public address system, etc. to facilitate such training programmes.

The four CLCs in the Center house 160+ desktops for use by the students, faculty and staff. The Computer Center building, including the CLCs, is centrally air conditioned and has UPS and generator power backup. The systems in CLC 3 are based on a 3.5 GHz quad-core CPU, 8 GB RAM and a 1 TB hard disk drive, and run the Windows 10 operating system. The systems in the remaining CLCs are based on a 3.1 GHz quad-core CPU with six GPU cores, 8 GB RAM and a 1 TB hard disk drive, and run an open source Linux operating system.

S. No. | Facility | Location | Number of Desktops | Projector | PA System | Intended Use
1. | CLC 1 | First Floor | 35 + 1 | Provisioned | Provisioned | Hands-on software training sessions
2. | CLC 2 | First Floor | 35 + 1 | Provisioned | Provisioned | Hands-on software training sessions
3. | CLC 3 | Ground Floor | 54 + 1 | Provisioned | Provisioned | Hands-on software training sessions, language learning
4. | CLC 4 | Ground Floor | 36 + 1 | Provisioned | Provisioned | Hands-on software training sessions

The PCs have been chosen to support myriad educational and research applications. They are based on state-of-the-art Accelerated Processing Units (APUs) with 8 GB RAM and 1 TB of hard disk space; the heart of each PC is a 3.2 GHz APU with 4 compute and 6 graphics cores, giving it entry-level workstation capabilities. The PCs run the GNU/Linux operating system, which supports the latest scientific software applications such as Scilab, R and PSPP, numerical libraries, and development tools such as compilers used in similar settings across the world.

The PCs also serve as client access systems for high-end compute applications running on remote servers. Computing in the CCC therefore follows a very different model (client-server with near real-time remote graphics) from computing in the regular departmental labs, and requires a very stable, high-performance network. The CCC network is a state-of-the-art, high-performance, failsafe 1 Gbps network.


ADVANCED COMPUTING FACILITY (ACF)

The Advanced Computing Facility at NIT Raipur has been established with the aim of providing advanced platforms to support work in compute-intensive disciplines like numerical modeling and simulation. For more on the kind of work which will benefit, please refer to "Supercomputer and its Applications" under Articles.

Facility 1: HPC Cluster / Supercomputer

The High Performance Computing (HPC) Cluster has been set up in the Advanced Computing Facility.


The node configuration of the HPC facility is as follows:

Component | Master Node | Compute Nodes
CPU | 2 x Intel® Xeon® Processor E5-2620 v4 | 2 x Intel® Xeon® Processor E5-2650 v4
Memory | 4 x 32 GB (128 GB) configured for 2400 MHz | 4 x 32 GB (128 GB) configured for 2400 MHz
OS | CentOS Linux release 7.5.1804 (Core) | CentOS Linux release 7.5.1804 (Core)

The HPC cluster comprises one master and fifty-four compute nodes. The supercomputer uses a 100 Gbps low-latency cluster interconnect.

The Rmax performance of the HPC cluster with CPU-only nodes is 39.5 TFLOPS. Two nodes are equipped with special-purpose cards (NVIDIA Tesla P100 and Quadro M4000) for compute acceleration, graphics visualization, etc.
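As a rough sanity check of this figure (assuming the published specifications of the compute nodes' E5-2650 v4 CPUs: 12 cores at a 2.2 GHz base clock, and 16 double-precision FLOPs per cycle per core with AVX2 and FMA), the theoretical peak of the 54 CPU-only nodes works out to about 45.6 TFLOPS, putting Rmax/Rpeak efficiency near 87%:

```python
# Rough Rpeak estimate for the 54 CPU-only compute nodes, assuming the
# published E5-2650 v4 specs: 12 cores, 2.2 GHz base clock, and 16
# double-precision FLOPs/cycle/core with AVX2 + FMA.
cores, ghz, flops_per_cycle = 12, 2.2, 16
sockets_per_node, nodes = 2, 54

rpeak_tflops = nodes * sockets_per_node * cores * ghz * flops_per_cycle / 1000
print(f"Rpeak ~ {rpeak_tflops:.1f} TFLOPS")       # ~45.6 TFLOPS
print(f"Efficiency ~ {39.5 / rpeak_tflops:.0%}")  # Rmax/Rpeak ~ 87%
```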

The significant achievements of this section include the establishment of the infrastructure listed in the table below.

S. No. | Systems & Networks Division Achievements
1 | Advanced Computing Facility with the 32nd-ranked HPC system in the Top Supercomputers India list (2019)
2 | Central Computer Center with 400+ computer systems (2018)
3 | SDN-based, High-Performance, Resilient Campus-Wide Wired Network (2014, 2018)
4 | Campus-Wide Wireless Network (2018)
5 | Campus-Wide IP Telephony Network (2014, 2018)
6 | IP CCTV Network (2018)

Supercomputer and its Applications
Until recently, the two traditional pillars of science and technology research have been “Theory” and “Experiment”. In recent times, “Simulation” has come to be recognized as an important third pillar supporting science and technology research.

Supercomputing is typically used for solving advanced problems and performing research activities through computer modelling, simulation and analysis. Supercomputer systems have the capability to deliver sustained performance through the concurrent use of computing resources. Supercomputing evolved to meet increasing demands for processing speed. It brings together several technologies such as computer architecture, algorithms, programs and electronics, and system software under a single canopy to solve advanced problems effectively and quickly. A state-of-the-art highly efficient supercomputing system requires a high-bandwidth, low-latency network to connect multiple nodes and clusters.

Supercomputing is the use of supercomputers and parallel processing techniques to solve large and complex computational problems. HPC focuses on the design and development of parallel processing algorithms and their implementation on specialized hardware as parallel codes. The development of codes to run on this kind of specialized hardware is a fairly sophisticated and niche area of computer programming. The codes are "architecture aware" in the sense that they make extensive use of the underlying hardware of the machine.
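The core idea behind such parallel codes can be sketched in a few lines: decompose a large computation into independent chunks, compute the partial results concurrently, and reduce them. (Real HPC codes do this with MPI ranks across nodes and architecture-specific kernels; the thread-based sketch below is only a toy illustration of the decomposition.)

```python
# Toy data-parallel decomposition: split a large summation into
# independent chunks, evaluate them concurrently, then reduce.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

n = 1_000_000
chunks = [(i, min(i + 250_000, n)) for i in range(0, n, 250_000)]
with ThreadPoolExecutor(max_workers=4) as ex:
    total = sum(ex.map(partial_sum, chunks))
print(total == n * (n - 1) // 2)  # True: matches the closed-form sum
```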

HPC applications are specifically designed to take advantage of the parallel nature of high-performance computing systems. Algorithmically designed for high-performance architectures, these applications run on compute clusters. The research areas where high performance computing is a game changer are listed below:
  • Astrophysical systems

  • Biology and Medicine: protein folding simulations (and other macromolecules), bioinformatics, genomics, computational neurological modeling, modeling of biological systems (e.g., ecological systems), 3D CT ultrasound, MRI imaging, molecular bionetworks, cancer and seizure control

  • Chemistry: calculating the structures and properties of chemical compounds/molecules and solids, computational chemistry/cheminformatics, molecular mechanics simulations, computational chemical methods in solid state physics, chemical pollution transport

  • Civil Engineering: finite element analysis, structures with random loads, construction engineering, water supply systems, transportation/vehicle modeling

  • Computer Engineering, Electrical Engineering, and Telecommunications: VLSI, computational electromagnetics, semiconductor modeling, simulation of microelectronics, energy infrastructure, RF simulation, networks

  • Environmental Engineering and Numerical weather prediction: climate research, Computational geophysics (seismic processing), modeling of natural disasters

  • Industrial Engineering: discrete event and Monte-Carlo simulations (for logistics and manufacturing systems for example), queueing networks, mathematical optimization

  • Material Science: glass manufacturing, polymers, and crystals

  • Mechanical Engineering: combustion simulations, structural dynamics, computational fluid dynamics, computational thermodynamics, computational solid mechanics, vehicle crash simulation, biomechanics

  • Physics: Computational particle physics, automatic calculation of particle interaction or decay, plasma modeling, cosmological simulations

  • Transportation

Sunil Pandey