ITSM Glossary

ITSM terms explained

Anyone dealing with IT in general and IT documentation in particular will inevitably come across numerous technical terms. In our glossary we explain these terms in an easily understandable way, so you can join the conversation even if you are not at home in the world of ITSM.

Are you missing a keyword, or do you have a suggestion for improvement? Please let us know. We are continuously expanding our ITSM glossary and welcome your suggestions.

Active Directory

Active Directory is a directory service from Microsoft, introduced with Windows 2000 Server and also used in Windows Server 2003. With Active Directory, the structure of one or more organizations, including their spatial distribution, can be modelled in the network. Starting with Windows Server 2008, Active Directory is called Active Directory Domain Services (AD DS).

Active Directory can be read out via the LDAP (Lightweight Directory Access Protocol) / LDAPS (Lightweight Directory Access Protocol over SSL) protocol. This makes it possible to obtain information about existing computers, users and groups.
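For illustration, the search filters used in such LDAP queries can be assembled as plain strings. The helper below is a hypothetical sketch; `objectClass`, `sAMAccountName` and `memberOf` are standard Active Directory schema attributes, everything else is invented:

```python
def ldap_user_filter(sam_account_name="*", group_dn=None):
    """Build an LDAP search filter string for Active Directory user objects."""
    clauses = ["(objectClass=user)", f"(sAMAccountName={sam_account_name})"]
    if group_dn:
        # match only users that are direct members of the given group
        clauses.append(f"(memberOf={group_dn})")
    return "(&" + "".join(clauses) + ")"

# Such a filter could then be passed to an LDAP client library or a
# command-line tool like ldapsearch (port 389 for LDAP, 636 for LDAPS).
print(ldap_user_filter("jdoe"))  # (&(objectClass=user)(sAMAccountName=jdoe))
```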

Basic requirements for setting up an Active Directory domain service under Windows Server 201X:

  • DNS
  • SMB / SMBv3 (Server Message Block)
  • Kerberos (user authentication)
  • LDAP (Lightweight Directory Access Protocol)

Alternatives for setting up a directory service on Linux / UNIX based operating systems are OpenLDAP and Samba.

Authorization concept (user and role concept)

An authorization concept describes which rules apply to individual users and user groups when accessing an IT system. An authorization concept is thus an important part of data protection in a company.

User and role concepts have proven useful in software and administrative services. They serve to regulate access to data, functions, resources and information. For this purpose, uniform security groups are often created for the respective departments, external service providers or organizations. Each security group has its own set of access rights. Users can be added to these groups and are then automatically given the group's permissions to access certain parts of a system. Security groups have the advantage that the administrative effort is drastically reduced compared to configuring each user individually. Only individual requirements are implemented as user-specific rights, for example access to a network resource when the employee concerned is to participate in a project.
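A minimal sketch of such a user and role concept; all group names, user names and permission strings are invented for illustration:

```python
# Permissions are attached to security groups; users inherit the union
# of the permissions of all groups they belong to.
GROUP_PERMISSIONS = {
    "accounting":  {"read:invoices", "write:invoices"},
    "it-helpdesk": {"read:tickets", "write:tickets", "reset:passwords"},
    "project-x":   {"read:share-project-x"},  # individual project requirement
}

user_groups = {"alice": {"accounting", "project-x"}, "bob": {"it-helpdesk"}}

def effective_permissions(user):
    """Union of all permissions granted via the user's groups."""
    perms = set()
    for group in user_groups.get(user, set()):
        perms |= GROUP_PERMISSIONS.get(group, set())
    return perms

def may(user, permission):
    return permission in effective_permissions(user)
```

Adding a user to a group is a single dictionary update; no per-user rights have to be configured.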

Business Service Management (BSM)

Business Service Management connects IT services with business processes. Its task is to improve the coordination between IT and business processes. This is achieved by showing the dependencies between business processes and IT. The BSM also examines the impact of IT outages on business processes.
In BSM, there is always a link between at least one business process and an IT service. However, a business service can also consist exclusively of an IT service. This is the case, for example, with an online shop where customers can order goods and products.

Change Management

Change management is closely linked to configuration management and defines processes for documenting and implementing changes to software and systems.

The aim of change management is to make changes only if

  • resources are determined
  • possible consequences have been assessed
  • a point in time is determined, and
  • the release by an authorised body (approval process) is available.
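The four conditions above can be sketched as a simple release check; the field names and the sample change are hypothetical:

```python
def change_approved(change):
    """A change may only be carried out when all four conditions are met."""
    required = ("resources", "impact_assessed", "scheduled_for", "approved_by")
    return all(change.get(field) for field in required)

change = {
    "summary": "Upgrade mail server OS",          # invented example
    "resources": ["admin on duty", "maintenance window"],
    "impact_assessed": True,
    "scheduled_for": "2024-06-01T22:00",
    "approved_by": None,                           # approval still missing
}
```

As long as the approval from an authorised body is missing, `change_approved(change)` stays `False`.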

Configuration Management

Configuration management comprises various processes. These serve to map and control systems, products and software in their overall configuration throughout all phases of their product life cycle. Not only the physical but also the functional properties are considered.
In this way, the configurations can be exactly reproduced in all phases and with every change (version). A CMDB is used to consolidate this large amount of information in one place. It is also used to map change management and its processes.

Configuration Management Database (CMDB)

A CMDB is a database that contains all relevant information about an organization’s IT infrastructure.

Among other things, the CMDB

  • contains objects (CIs, Configuration Items) with their information,
  • shows the relationships between objects (location, integration, assigned / handed over to a specific person),
  • links objects to documents,
  • documents the status of each object (ready for operation / defective / in repair), and
  • contains information about the responsible persons.

The CMDB already contains all relevant existing information about the IT landscape. This information is continuously kept up to date through appropriate processes (so-called configuration management processes).
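As an illustration, CMDB entries of this kind could be modelled as follows; all names, statuses and relations are invented:

```python
# Each CI has a type, a status, a responsible person, relations to other
# CIs, and linked documents.
cmdb = {
    "srv-01": {
        "type": "server",
        "status": "ready for operation",
        "responsible": "j.doe",
        "relations": {"located_in": "rack-b2", "runs": ["vm-web", "vm-db"]},
        "documents": ["install-protocol.pdf"],
    },
    "rack-b2": {
        "type": "rack",
        "status": "ready for operation",
        "responsible": "facilities",
        "relations": {},
        "documents": [],
    },
}

def cis_with_status(status):
    """All CIs currently in the given status, e.g. for an operations report."""
    return [name for name, ci in cmdb.items() if ci["status"] == status]
```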

Some systems or system environments access the data in the CMDB. This is why the ITIL (IT Infrastructure Library) best practices define the CMDB as the central database for IT service management. Other systems exchange data with the CMDB or serve to update the data stock within the framework of configuration management. This applies to the ISMS (Information Security Management System), for example, but also to network monitoring and discovery.

Corrective action

Corrective measures are planned for the case that a risk occurs and are part of risk and emergency management. In the event of a server failure, for example, a specialized IT service company could start restoring the system immediately.

By planning corrective measures, suitable measures are determined in advance in the event of a disruption in order to restore the operation of the impaired systems as quickly as possible.

Data Center Infrastructure Management (DCIM)

The DCIM is used to measure and control the utilization of a data center and the energy consumption of its components.
The DCIM not only records the IT systems themselves, but also parts of the building infrastructure. This includes power connections, sockets and distribution strips as well as air conditioning systems. The DCIM thus deals with everything that is necessary for the administration and control of the systems, the energy supply and heat regulation.

Domain Name System (DNS)

Alongside DHCP, the DNS is one of the most important network services. It is responsible for name resolution in networks.

The DNS basically works like a telephone book. The user usually knows the name, e.g. the address of a website. If a query is sent for this name, the DNS translates it into the corresponding IP address.
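This lookup can be reproduced with the Python standard library. Resolving `localhost` works even without an external DNS server:

```python
import socket

def resolve(name):
    """Translate a host name into an IPv4 address, like the DNS 'telephone book'."""
    return socket.gethostbyname(name)

print(resolve("localhost"))  # typically 127.0.0.1
```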

The DNS can also be a cause of serious network problems. Often an incorrectly configured or unreachable DNS server is the cause of systems, services and programs not functioning properly.

Dynamic Host Configuration Protocol (DHCP)

DHCP is one of the most important server services for integrating connected clients into a network without additional configuration. The client receives various pieces of information from the DHCP server, such as its assigned IP address, the netmask, the responsible (and, if applicable, an alternative) DNS server and the default gateway.
Different areas (scopes) can be defined on a DHCP server. Clients receive their network configuration from these scopes, either dynamically or as a reservation.
DHCP uses UDP ports 67/68 for IPv4 and 546/547 for IPv6.
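The dynamic assignment from such a scope can be sketched in a few lines; the address range and MAC addresses are invented, and a real DHCP server additionally handles lease times, renewals and reservations:

```python
import ipaddress

# A small invented scope: all usable host addresses of 192.168.1.0/29.
scope = [str(ip) for ip in ipaddress.ip_network("192.168.1.0/29").hosts()]
leases = {}  # MAC address -> assigned IP address

def request_lease(mac):
    """Hand out the first free address, or renew the client's existing one."""
    if mac in leases:
        return leases[mac]
    for ip in scope:
        if ip not in leases.values():
            leases[mac] = ip
            return ip
    raise RuntimeError("address pool exhausted")
```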

Failure simulation

Failure simulations serve to examine the IT infrastructure with regard to security, stability and reliability. The results can then be used to plan preventive or corrective measures in case of an impact. 

A failure simulation can be used to simulate the failure of an important server, for example. For this purpose, the dependencies between systems and services must be known. In this way it is possible to trace which secondary systems, locations, persons or networks are impaired in their function in the event of a system failure. Such an investigation is also called impact analysis. 
The resulting information and findings can be used for further planning. For example, a follow-up measure can be the procurement of an additional system that takes over the tasks of the main system in the event of its failure.
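The impact analysis described above boils down to following "depends on" relations transitively. A small sketch with invented systems:

```python
# Invented dependency map: each system lists what it depends on.
depends_on = {
    "webshop":    ["app-server"],
    "app-server": ["db-server", "dns"],
    "mail":       ["dns"],
}

def impacted_by(failed):
    """All systems directly or indirectly impaired when `failed` goes down."""
    impacted = set()
    changed = True
    while changed:
        changed = False
        for system, deps in depends_on.items():
            if system not in impacted and (failed in deps or impacted & set(deps)):
                impacted.add(system)
                changed = True
    return impacted
```

A DNS failure, for example, impairs the mail system directly and the webshop indirectly via the app server.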

Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik / BSI)

The Federal Office for Information Security (BSI) is the contact for all IT security issues. It is a federal authority assigned to the Federal Ministry of the Interior.
The BSI cooperates with companies and public authorities alike. It provides information about important topics in information and communication technology. It also designs and develops IT security applications and carries out certifications of IT systems with regard to their security features.

The BSI makes the IT basic protection catalogues available free of charge. By implementing them, companies can achieve a high level of IT security. Currently, ISO 27001 certification according to IT-Grundschutz is also possible.

The BSI also offers a service for private users. This is called “BSI for citizens”.


IMAC/R/D

IMAC/R/D is a process for service-oriented IT lifecycle management. It consists of the phases Install, Move, Add, Change, Remove and Dispose.

IMAC/R/D was originally developed for the service-oriented management of PC workstations. In the meantime, however, it is also used in many other areas of IT management, such as server administration. The process begins with the installation and commissioning of the system (Install). This is followed by transport and relocation (Move). In the Add phase, all activities involving the addition of new (hardware) components and software are combined. The most extensive phase is Change: here all changes to the system are carried out, for example software updates, replacement of defective components or changes to system settings.

These four phases represent the basic IMAC process. However, since the life cycle of a system does not end with a Change, the phases Remove (removal of components or software) and Dispose (disposal or return of the system) have been added.


Impact

In IT, an impact refers to an effect on a system or infrastructure. A distinction is made between internal and external impacts.

Internal impacts mainly comprise power failures, faulty fuses or fires. External impacts cover all faults caused by third parties, such as malware, theft or social engineering.

Failure simulations can be used to simulate different impacts without endangering the IT infrastructure. The IT security concept should carefully consider possible impacts and their effects. This way, employees can be sensitized and appropriate measures can be taken in advance (preventive and corrective).

IT managers must regularly inform themselves about new possible impacts (e.g. newly occurring malware). They must also update their security concept at regular intervals (PDCA cycle) and in the event of changes.

Information Security Management System (ISMS)

An ISMS describes the process of establishing information security and its continuous improvement.

The basis for IT security management in the DACH region is usually the standards 200-1 and 200-2 defined by the Federal Office for Information Security (BSI).

The BSI standard 200-1 “Management Systems for Information Security” describes how a management system can be set up. The BSI standard 200-2 “IT Basic Protection Methodology”, on the other hand, defines methods for setting up, checking and expanding an ISMS. The ISO 27001 standard is also an important basis in this context.

Necessary points for introducing an ISMS

  • the hazard situation must be analysed,
  • security objectives must be defined,
  • strategies and guidelines must be developed,
  • a suitable organisational structure for information security should be in place, and
  • appropriate security measures must be established for all information processing.

IPAM – IP Address Management

IP Address Management is a management system for the administration of networks. The more hosts there are in a network, the more complex the management and the tasks involved become. The goal of an IPAM system is to provide administrators with all information about the existing networks. This includes all basic information about the networks such as the IP address range, the subnet masks / CIDR used and the default gateway. In addition, a list of the devices in the network is of course also useful to quickly determine which host names and IP addresses the devices use, which VLANs are available in the network and how many IP addresses are still available or already occupied in the network.

Based on this information, administrators and IT managers can plan new networks, identify a shortage of IP addresses, and quickly add or change devices to a network.
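Python's `ipaddress` module is enough to sketch the core of such bookkeeping; the subnet and assignments are invented:

```python
import ipaddress

# Invented subnet and address assignments tracked by the IPAM system.
network = ipaddress.ip_network("10.0.0.0/28")
assigned = {"10.0.0.1": "gateway", "10.0.0.10": "srv-01"}

def free_addresses():
    """All usable host addresses of the subnet that are not yet assigned."""
    return [str(ip) for ip in network.hosts() if str(ip) not in assigned]
```

With this information an administrator can immediately see how many addresses remain before the subnet runs short.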

ISO 27001

ISO 27001 is an international standard and serves as proof of a high information security standard within an organization. In addition to the establishment of an information security management system (ISMS), ISO 27001 also requires the analysis and handling of risks (preventive and corrective measures).

Companies can be certified within the framework of ISO 27001. The preparation for certification is complex and presents organizations with a number of challenges. Often the IT documentation must be brought up to date and all assets of the organization must be identified. Processes and responsibilities must also often be redefined or redesigned.

ISO 27001 certification also offers a number of advantages. First and foremost, these include the minimization of liability and business risks. However, certified companies achieve a competitive advantage above all else, since many tenders mention ISO 27001 certification as a requirement.

Threats are identified in advance through the comprehensive analysis of risks and security gaps from both a technical and organizational point of view. Thus a high level of information security can be achieved. For companies that are subject to the KRITIS regulation, the certification serves as proof of the legally required audit every two years.

IT documentation

IT documentation represents the current state of an organization's IT. All required information (according to the documentation concept) is recorded and stored physically or digitally.

A complete IT documentation provides information about

  • systems in use
  • their version and patch level
  • hardware properties
  • installed and used software and licenses
  • peripherals
  • network devices and
  • everything that is necessary to operate the IT infrastructure.

The primary purpose of IT documentation is to make information useful for the organization.

IT Service Management (ITSM)

IT Service Management encompasses the development, management and improvement of IT (services) performance in order to provide the best possible support for business processes. In addition to pure performance optimization, IT service management primarily considers customer-oriented process quality from an economic perspective.

ITSM systems consolidate information in a central system from

  • risk analysis and management
  • cost structure
  • services and contracts
  • quality and quantity of existing data
  • evaluation of existing data
  • prognosis on future data.

A good ITSM thus replaces a multitude of “isolated applications”. It offers an interface on which the data of different systems are displayed in a summarized form. This creates transparency and ensures that all departments work with the same database. The basis and central element of an ITSM is a CMDB.


JDisc

JDisc is a discovery solution for networks and their inventory. It is equally suitable for the discovery of simple and complex infrastructures. JDisc works agentlessly by using system credentials (e.g. root / domain admin, DHCP admin or similar) to discover the information directly and unaltered. It also uses various protocols such as WMI (Windows Management Instrumentation) or SNMP (Simple Network Management Protocol) to identify the configurations of systems or connections to other network devices. It can also identify virtual machines, port configurations, installed software and licenses in use. All information is stored in a local PostgreSQL database. From there the data can be automatically exported to other systems such as i-doit.

License Management

License management describes the process in the company that ensures the efficient handling of software licenses. It ensures that the existing licenses are available in sufficient quantity and the right quality.

License management affects many other processes in the company. Procurement processes from PC workstations to server and capacity management and cost savings are affected by it. A good license management ensures that only those licenses are in the company that are needed according to the target concept.
It also ensures that processes for procurement or license renewal are initiated or fully automated. The implementation of a license management system makes the handling of a manufacturer audit considerably easier, as over- or under-licensing is ruled out. Discovery solutions such as JDisc can also automatically retrieve information about the licenses used in the company.
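The comparison of owned versus installed licenses at the heart of license management can be sketched as follows; product names and counts are invented:

```python
# Invented license inventory: what the company owns vs. what is installed.
owned = {"OfficeSuite": 100, "CAD-Pro": 10}
installed = {"OfficeSuite": 92, "CAD-Pro": 14}

def license_balance():
    """Positive values mean spare licenses, negative values under-licensing."""
    return {
        product: owned.get(product, 0) - installed.get(product, 0)
        for product in set(owned) | set(installed)
    }
```

A negative balance flags exactly the products that would cause problems in a manufacturer audit.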

Manufacturer Audit

In recent years, manufacturers of hardware and software have increasingly initiated a manufacturer audit or license audit. Based on the contracts concluded, the manufacturers have the possibility to verify the proper use of licenses.

If a possible audit is not part of the license agreement, Section 101 of the German Copyright Act (UrhG) applies. It establishes a right to information.
Before an audit, the manufacturers receive an overview of the manufacturer-specific licenses used in the company. An IT documentation with a license management is therefore a good preparation for an audit.


Monitoring

Monitoring is used to monitor networks, systems and devices. It uses various protocols such as SNMP, WMI or ICMP to query configurations and information from systems. The data received is then displayed in a structured way in the monitoring solution.

Frequently, information from the devices is also determined by user authentication. Another possibility is to install an agent on the target system. Modern monitoring solutions can often be connected to ticket and / or documentation systems. In this way, a troubleshooting process can be initiated automatically and its progress documented.

There are basically two variants of monitoring:

Monitoring with agents
When monitoring with agents, software is installed on each system to be monitored. This software determines the desired status information directly on the system and sends it to the monitoring software. Often a separate user is created on each device in order to have the required authorizations. The agent collects data independently of the availability of the network. This enables it to check the status of the device even if communication with the management system is temporarily not possible. It also delivers information faster. In some scenarios, the agent can collect more information because it is installed directly on the system.

The disadvantage is that the agent requires system resources from the host and must be maintained regularly. In addition, errors that occur in the agent can also affect the host.

Agentless Monitoring
When monitoring without agents, the required information is queried via protocols such as SNMP or WMI, or directly from the system after user authentication has been completed. However, monitoring only works if the target systems can be reached via the network. The performance of the network has a direct influence on how quickly data can be collected. The monitoring solution is sometimes more limited in its functionality than when using an agent.

For large infrastructures, we recommend that you configure multiple monitoring servers to query information from the target systems. If one area of the network fails, information from devices in other networks can still be monitored and a total failure of the monitoring can be prevented. In addition, a higher performance and lower network load is achieved.
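Regardless of how the data is collected, the evaluation step of a monitoring solution typically compares measured values against thresholds. A sketch with invented metric names and thresholds:

```python
# Invented warning/critical thresholds per metric.
THRESHOLDS = {"cpu_percent": (80, 95), "disk_used_percent": (85, 95)}

def evaluate(metric, value):
    """Map a measured value to a status level, as monitoring checks do."""
    warn, crit = THRESHOLDS[metric]
    if value >= crit:
        return "CRITICAL"
    if value >= warn:
        return "WARNING"
    return "OK"
```

A WARNING or CRITICAL result is what would then trigger a notification or, with a connected service desk, an automatic ticket.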

PDCA cycle

PDCA is the abbreviation for “Plan – Do – Check – Act” and describes a four-phase process. With its help, a continuous improvement process (CIP) for systems is to be achieved.
The individual phases are:

Plan: Identification of improvement opportunities and planning of measures to achieve the objectives.

Do: Implementation of the previous planning with prototype or pilot model.

Check: Controlling of the implemented measures and achievement of objectives.

Act: Review of the findings from all previous phases. From these findings, new possibilities for improvement can be identified if necessary. In this case, the process starts again in the “Plan” phase.

Preventive measure

Preventive measures are part of the risk analysis and are taken to reduce the probability of a risk occurring.

Example: A server that provides important services for the network has a relatively high probability of failure. A preventive measure could be the creation of redundancy. A second identical system can compensate for the server failure and thus reduce the risk.
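The effect of such redundancy can be quantified under the simplifying assumption that the two systems fail independently; the 5% failure probability below is invented for illustration:

```python
def combined_failure_probability(p_primary, p_backup):
    """Both systems must fail at once for the service to be unavailable."""
    return p_primary * p_backup

single = 0.05  # assumed failure probability of one server (invented)
redundant = combined_failure_probability(single, single)
print(round(redundant, 4))  # 0.0025, i.e. the risk drops from 5% to 0.25%
```

In practice failures are rarely fully independent (shared power, shared network), so the real improvement is usually smaller.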


Ransomware

Ransomware is malicious software that encrypts data on infected systems and thus blocks access to this data. The name is derived from the English terms “ransom” and “malware”.

Malware of this type usually gets onto a computer via social engineering (e.g. manipulated e-mail attachments). After an attack, the authors of such software often offer a key in return for the payment of a certain amount of money. This key is supposed to release the data again. As a rule, however, the key is not supplied even if the “ransom” has been paid.

The principle of ransomware has existed since the late 1980s. This form of malware first came into the public eye in 2005 with the Trojan “PGPcoder”.

Often, in the event of an infection, connected systems such as storage or cloud storage are also encrypted. The safest measures against an infection with this type of malware are sensitized users and always up-to-date backup copies.


Redundancy

In the IT environment, the term “redundancy” stands for the presence of additional technical resources. These are equal or comparable in function to already existing resources. They serve as replacements that can be put into operation in the event of a failure of the main resource.

The creation of redundancy serves to increase the reliability of systems and data. This can be a second server, for example. In the event of a failure, this server takes over the role and associated services of the actual server, such as DHCP or DNS.

Many redundancy systems can be configured flexibly in their design. This makes it possible to operate them actively as redundant systems, to use them for load balancing of incoming and outgoing data packets, or to run them passively as standby systems that only become active in the event of a failure.

In addition to servers, services or storage systems can also be set up redundantly. Redundancies are always useful when systems are responsible for business-critical processes or when important data must be protected against loss.

Simple Network Management Protocol (SNMP)

SNMP is a network protocol used for the administration, control and monitoring of network elements. These include in particular routers, switches and firewalls. Discovery and monitoring solutions can use SNMP to capture VLANs, switch port configurations and communication, for example. SNMP has become the standard in network management and is therefore supported by a wide range of devices and management systems.

Ticket system / Service Desk

Ticket systems are used to record incidents in a traceable way. For this purpose, the communication between help desk / service desk and customers is collected in tickets. A ticket is the electronic form of a formulated request. This can be a fault report, for example.

The tickets are provided with a unique identification number so that they can be clearly named and located at any time. Incoming communications (e.g. e-mails) are usually automatically added to the respective ticket. Here too, the assignment is made via the unique ID.

By collecting all communication processes within a ticket, even employees who previously had no information about the incident can quickly gain an overview.
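The core mechanics of unique ticket IDs can be sketched in a few lines; a real system would of course persist tickets in a database and parse the ID out of incoming e-mails:

```python
import itertools

_ids = itertools.count(1)  # every ticket gets the next unique ID
tickets = {}

def open_ticket(subject):
    """Create a new ticket and return its unique identification number."""
    ticket_id = next(_ids)
    tickets[ticket_id] = {"subject": subject, "messages": []}
    return ticket_id

def add_message(ticket_id, text):
    """Append an incoming communication to the matching ticket via its ID."""
    tickets[ticket_id]["messages"].append(text)
```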

By connecting a ticket system to a CMDB, further information about the respective systems can be determined and made available. Usually, the tickets are also documented in the CMDB in order to identify frequent system malfunctions. Monitoring systems can detect faults in systems and networks and – if configured accordingly – automatically trigger a ticket.

Well-known open source ticket systems are OTRS, Zammad, iTop and Request Tracker.

Windows Management Instrumentation (WMI)

WMI is Microsoft’s implementation and extension of the Common Information Model (CIM). In Microsoft Windows infrastructures, WMI is an important protocol for retrieving device information from workstations and servers. Various monitoring and discovery solutions use WMI to obtain information about the hardware such as CPU, memory and hard disk capacity of computers. The installed software can also be determined in this way.


Zammad

Zammad is a web-based service desk software that can be used for internal and external processing of cases and as an issue tracker. In addition to industry-standard functions such as user and group administration, assignment of roles within and outside the organization, a calendar, and system-supported time recording, it also offers text modules, the definition of SLAs and its own knowledge base. It also offers various interfaces to exchange information and data with other platforms and systems. These include e-mail services, monitoring solutions (Check_MK, Nagios…), chat applications (Slack), documentation systems (i-doit), VoIP telephone systems (sipgate, placetel…), directory services (Active Directory…) and social networks such as Facebook or Twitter. A REST API is available for connection via user authentication or token. Through the integrated reporting system, tickets can be evaluated by year, month, week, day or in real time.

Zammad is licensed under GNU AGPLv3 and can be freely obtained through GitHub.