Wednesday, January 19, 2022

Monitor Linux Servers Using Prometheus Node Exporter

Setup Node Exporter Binary

1. Download the latest node exporter package. You should check the Prometheus downloads section for the latest version and update this command to get that package.




2. Download the node_exporter using wget and untar the file

[root@server1 ~]# wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
[root@server1 ~]# tar -xvzf node_exporter-1.3.1.linux-amd64.tar.gz 

3. Move the node_exporter binary to /usr/local/bin.

[root@server1 ~]# mv node_exporter-1.3.1.linux-amd64/node_exporter /usr/local/bin/


Create a Custom Node Exporter Service

1. Create a node_exporter user to run the node exporter service.

[root@server1 ~]# useradd -rs /bin/false node_exporter

2. Create a node_exporter service file under systemd.

[root@server1 ~]# vi /etc/systemd/system/node_exporter.service 

3. Add the following service file content to the service file and save it.

[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target 

4. Reload the systemd daemon and start the node exporter service.

[root@server1 ~]# systemctl daemon-reload
[root@server1 ~]# systemctl start node_exporter

5. Enable the node_exporter service and check the status.

[root@server1 ~]# systemctl enable node_exporter
[root@server1 ~]# systemctl status node_exporter

6. Open port 9100/tcp in the firewall.

[root@server1 ~]# firewall-cmd --permanent --add-port=9100/tcp
success
[root@server1 ~]# firewall-cmd --reload
success 

7. You can see all the server metrics by visiting your server URL on /metrics as shown below.

http://<server-IP>:9100/metrics
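As a quick sanity check from the server itself, you can also pull the endpoint with curl (assuming curl is installed) and confirm that metrics are returned:

[root@server1 ~]# curl -s http://localhost:9100/metrics | head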


Configure the Server as Target on Prometheus Server

Now that we have the node exporter up and running on the server, we have to add this server as a target in the Prometheus server configuration.

Note: This configuration should be done on the Prometheus server.

1. Log in to the Prometheus server and open the prometheus.yml file.

[root@prometheus-server ~]# vi /etc/prometheus/prometheus.yml

2. Under the scrape_configs section, add the node exporter target as shown below. Replace 192.168.31.252 with the IP of the server where you have set up node exporter. The job name can be your server's hostname or IP for identification purposes.

- job_name: 'node_exporter_metrics'
  scrape_interval: 5s
  static_configs:
    - targets: ['192.168.31.252:9100']

3. Restart the prometheus service for the configuration changes to take effect.

[root@prometheus-server ~]# systemctl restart prometheus

4. Now, if you check the targets in the Prometheus web UI (http://<prometheus-IP>:9090/targets), you should see the new node exporter target and its scrape status.
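If you prefer the command line, the same target information is available through the Prometheus HTTP API, for example:

[root@prometheus-server ~]# curl -s 'http://localhost:9090/api/v1/targets'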


5. You can also use the Prometheus expression browser to query node-related metrics. The following are a few key node metrics you can use to inspect the server's statistics (a combined example follows the list).

node_memory_MemFree_bytes
node_cpu_seconds_total
node_filesystem_avail_bytes
rate(node_cpu_seconds_total{mode="system"}[1m]) 
rate(node_network_receive_bytes_total[1m])
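These raw metrics are usually combined with PromQL functions. As a rough illustration, the following expressions approximate per-host CPU and memory usage:

CPU usage (%), time not spent idle over the last 5 minutes:
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

Memory usage (%):
100 * (1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes))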



Prometheus Setup on Linux Server

What is Prometheus?

Prometheus is a free software application used for event monitoring and alerting. It records real-time metrics in a time series database built using an HTTP pull model, with flexible queries and real-time alerting.


Installing and Configuring Prometheus

1. Go to the official Prometheus downloads page and get the latest download link for the Linux binary.



2. Download the release tarball using wget, untar it, and rename the extracted folder to prometheus-files.

[root@server1 ~]# wget https://github.com/prometheus/prometheus/releases/download/v2.32.1/prometheus-2.32.1.linux-amd64.tar.gz
[root@server1 ~]# tar -xvzf prometheus-2.32.1.linux-amd64.tar.gz
[root@server1 ~]# mv prometheus-2.32.1.linux-amd64 prometheus-files

3. Create a prometheus user, create the required directories, and make the prometheus user the owner of those directories.

[root@server1 ~]# useradd --no-create-home --shell /bin/false prometheus
[root@server1 ~]# mkdir /etc/prometheus
[root@server1 ~]# mkdir /var/lib/prometheus
[root@server1 ~]# chown prometheus:prometheus /etc/prometheus
[root@server1 ~]# chown prometheus:prometheus /var/lib/prometheus

4. Copy the prometheus and promtool binaries from the prometheus-files folder to /usr/local/bin and change their ownership to the prometheus user.

[root@server1 ~]# cp prometheus-files/prometheus /usr/local/bin/
[root@server1 ~]# cp prometheus-files/promtool /usr/local/bin/
[root@server1 ~]# chown prometheus:prometheus /usr/local/bin/prometheus
[root@server1 ~]# chown prometheus:prometheus /usr/local/bin/promtool

5. Copy the consoles and console_libraries directories from prometheus-files to the /etc/prometheus folder and change their ownership to the prometheus user.

[root@server1 ~]# cp -r prometheus-files/consoles /etc/prometheus
[root@server1 ~]# cp -r prometheus-files/console_libraries /etc/prometheus
[root@server1 ~]# chown -R prometheus:prometheus /etc/prometheus/consoles
[root@server1 ~]# chown -R prometheus:prometheus /etc/prometheus/console_libraries


Setup Prometheus Configuration

All of the Prometheus configuration should be present in the /etc/prometheus/prometheus.yml file.

1. Create the prometheus.yml file.

[root@server1 ~]# vi /etc/prometheus/prometheus.yml

2. Copy the following contents to the prometheus.yml file.

global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

3. Change the ownership of the file to prometheus user.

[root@server1 ~]# chown prometheus:prometheus /etc/prometheus/prometheus.yml
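
Optionally, you can validate the configuration syntax with promtool before starting the service:

[root@server1 ~]# promtool check config /etc/prometheus/prometheus.yml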

Setup Prometheus Service File

1. Create a prometheus service file.

[root@server1 ~]# vi /etc/systemd/system/prometheus.service

2. Copy the following content to the file.

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target

3. Reload systemd to register the prometheus service, and start the prometheus service.

[root@server1 ~]#  systemctl daemon-reload
[root@server1 ~]#  systemctl start prometheus

4. Check the prometheus service status using the following command.

[root@server1 ~]# systemctl status prometheus.service
● prometheus.service - Prometheus
   Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-01-19 05:02:15 EST; 35min ago
 Main PID: 1220 (prometheus)
    Tasks: 8 (limit: 4304)
   Memory: 124.4M
   CGroup: /system.slice/prometheus.service
           └─1220 /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml

5. Open firewall port 9090/tcp.

[root@server1 ~]# firewall-cmd --permanent --add-port=9090/tcp
success
[root@server1 ~]# firewall-cmd --reload
success 


Access Prometheus Web UI

Now you will be able to access the Prometheus UI on port 9090 of the Prometheus server.

http://<prometheus-ServerIp>:9090/graph 

Right now, we have just configured the Prometheus server. You need to register the target in the prometheus.yml file to get the metrics from the source systems.

For example, if you want to monitor ten servers, the IP address of these servers should be added as a target in the Prometheus configuration to scrape the metrics.
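A single scrape job can list several node exporter endpoints; the job name and addresses in the following sketch are only placeholders:

- job_name: 'linux_servers'
  static_configs:
    - targets: ['192.168.31.10:9100', '192.168.31.11:9100', '192.168.31.12:9100']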

Each target server should have Node Exporter installed to collect the system metrics and make them available for Prometheus to scrape.

Tuesday, December 7, 2021

Configuring the iSCSI Server and Initiator in RHEL 7 / RHEL 8

Configure the iSCSI Server 


1. Install targetcli on the iSCSI server.

[root@iscsiserver ~]# yum install targetcli

2. Enable and start the target service
        
[root@iscsiserver ~]# systemctl enable target; systemctl start target

3. Open the iSCSI server port (3260/tcp) in the firewall as a permanent change and reload the configuration for immediate effect.

[root@iscsiserver ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@iscsiserver ~]# firewall-cmd --reload


4. We have a disk /dev/sdb (2GB), and we are going to add it as a backstore block device using the targetcli tool.

[root@iscsiserver ~]# targetcli
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> cd backstores/
/backstores> block/ create blocklun1 /dev/sdb
Created block storage object blocklun1 using /dev/sdb.
/backstores> ls
o- backstores .................................................................................................. [...]
  o- block ...................................................................................... [Storage Objects: 1]
  | o- blocklun1 .......................................................... [/dev/sdb (2.0GiB) write-thru deactivated]
  |   o- alua ....................................................................................... [ALUA Groups: 1]
  |     o- default_tg_pt_gp ........................................................... [ALUA state: Active/optimized]
  o- fileio ..................................................................................... [Storage Objects: 0]
  o- pscsi ...................................................................................... [Storage Objects: 0]
  o- ramdisk .................................................................................... [Storage Objects: 0]
/backstores>

5. Create a unique iSCSI Qualified Name (IQN) for the target. The IQN will be iqn.2021-12.com.example:iscsiserver.

[root@iscsiserver ~]# targetcli
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> iscsi/ create  iqn.2021-12.com.example:iscsiserver
Created target iqn.2021-12.com.example:iscsiserver.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/> ls iscsi/
o- iscsi ................................................................................................ [Targets: 1]
  o- iqn.2021-12.com.example:iscsiserver ................................................................... [TPGs: 1]
    o- tpg1 ................................................................................... [no-gen-acls, no-auth]
      o- acls .............................................................................................. [ACLs: 0]
      o- luns .............................................................................................. [LUNs: 0]
      o- portals ........................................................................................ [Portals: 1]
        o- 0.0.0.0:3260 ......................................................................................... [OK]


6. Create an ACL for the iscsiclient (initiator). The initiator will be connecting with the name iqn.2021-12.com.example:iscsiclient.

[root@iscsiserver ~]# targetcli
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> iscsi/iqn.2021-12.com.example:iscsiserver/tpg1/acls create  iqn.2021-12.com.example:iscsiclient
Created Node ACL for iqn.2021-12.com.example:iscsiclient
/> ls iscsi/
o- iscsi ........................................................................................ [Targets: 1]
  o- iqn.2021-12.com.example:iscsiserver ........................................................... [TPGs: 1]
    o- tpg1 ........................................................................... [no-gen-acls, no-auth]
      o- acls ...................................................................................... [ACLs: 1]
      | o- iqn.2021-12.com.example:iscsiclient .............................................. [Mapped LUNs: 0]
      o- luns ...................................................................................... [LUNs: 0]
      o- portals ................................................................................ [Portals: 1]
        o- 0.0.0.0:3260 ................................................................................. [OK]

7. Create the LUN under the target. The LUN should use the previously defined backing storage device named blocklun1.

[root@iscsiserver ~]# targetcli
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> /iscsi/iqn.2021-12.com.example:iscsiserver/tpg1/luns create /backstores/block/blocklun1
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2021-12.com.example:iscsiclient
/> ls /iscsi/
o- iscsi ...................................................................................... [Targets: 1]
  o- iqn.2021-12.com.example:iscsiserver ......................................................... [TPGs: 1]
    o- tpg1 ......................................................................... [no-gen-acls, no-auth]
      o- acls .................................................................................... [ACLs: 1]
      | o- iqn.2021-12.com.example:iscsiclient ............................................ [Mapped LUNs: 1]
      |   o- mapped_lun0 ....................................................... [lun0 block/blocklun1 (rw)]
      o- luns .................................................................................... [LUNs: 1]
      | o- lun0 ..............................................block/blocklun1 (/dev/sdb) (default_tg_pt_gp)]
      o- portals .............................................................................. [Portals: 1]
        o- 0.0.0.0:3260 ............................................................................... [OK]

8. Delete the 0.0.0.0:3260 portal and configure the portal for the target to listen on the specific IP address 192.168.31.252.

[root@iscsiserver ~]# targetcli
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> /iscsi/iqn.2021-12.com.example:iscsiserver/tpg1/portals delete 0.0.0.0 3260
Deleted network portal 0.0.0.0:3260
/> /iscsi/iqn.2021-12.com.example:iscsiserver/tpg1/portals create 192.168.31.252 3260
Using default IP port 3260
Created network portal 192.168.31.252:3260.
/> ls iscsi/
o- iscsi ....................................................................................... [Targets: 1]
  o- iqn.2021-12.com.example:iscsiserver .......................................................... [TPGs: 1]
    o- tpg1 .......................................................................... [no-gen-acls, no-auth]
      o- acls ..................................................................................... [ACLs: 1]
      | o- iqn.2021-12.com.example:iscsiclient ............................................. [Mapped LUNs: 1]
      |   o- mapped_lun0 ........................................................ [lun0 block/blocklun1 (rw)]
      o- luns ..................................................................................... [LUNs: 1]
      | o- lun0 ............................................. [block/blocklun1 (/dev/sdb) (default_tg_pt_gp)]
      o- portals ............................................................................... [Portals: 1]
        o- 192.168.31.252:3260 ......................................................................... [OK]

9. View the full configuration.

[root@iscsiserver ~]# targetcli
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ............................................................................................. [...]
  o- backstores .................................................................................. [...]
  | o- block ...................................................................... [Storage Objects: 1]
  | | o- blocklun1 ............................................ [/dev/sdb (2.0GiB) write-thru activated]
  | |   o- alua ....................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ........................................... [ALUA state: Active/optimized]
  | o- fileio ..................................................................... [Storage Objects: 0]
  | o- pscsi ...................................................................... [Storage Objects: 0]
  | o- ramdisk .................................................................... [Storage Objects: 0]
  o- iscsi ................................................................................ [Targets: 1]
  | o- iqn.2021-12.com.example:iscsiserver ................................................... [TPGs: 1]
  |   o- tpg1 ................................................................... [no-gen-acls, no-auth]
  |     o- acls .............................................................................. [ACLs: 1]
  |     | o- iqn.2021-12.com.example:iscsiclient ...................................... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................................................. [lun0 block/blocklun1 (rw)]
  |     o- luns .............................................................................. [LUNs: 1]
  |     | o- lun0 ...................................... [block/blocklun1 (/dev/sdb) (default_tg_pt_gp)]
  |     o- portals ........................................................................ [Portals: 1]
  |       o- 192.168.31.252:3260 .................................................................. [OK]
  o- loopback ............................................................................. [Targets: 0] 

10. Exit and save the configuration to the default file /etc/target/saveconfig.json.

/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup/.
Configuration saved to /etc/target/saveconfig.json
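
On the initiator (client) side, which is not covered in detail here, a minimal sketch of connecting to this target would look roughly like the following; the client hostname and commands assume a RHEL 7/8 system, and the initiator name must match the ACL created above:

[root@iscsiclient ~]# yum install iscsi-initiator-utils
[root@iscsiclient ~]# echo "InitiatorName=iqn.2021-12.com.example:iscsiclient" > /etc/iscsi/initiatorname.iscsi
[root@iscsiclient ~]# systemctl restart iscsid
[root@iscsiclient ~]# iscsiadm -m discovery -t sendtargets -p 192.168.31.252
[root@iscsiclient ~]# iscsiadm -m node -T iqn.2021-12.com.example:iscsiserver -p 192.168.31.252 --login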

 

Saturday, December 28, 2019

Network Teaming in RHEL 7 and CentOS 7

Network Teaming is a method of linking NICs together logically to allow for failover or higher throughput.
Teaming is a new implementation that does not affect the older bonding driver in the Linux kernel; it offers an alternative method.
RHEL 7 supports channel bonding for backward compatibility. Network Teaming provides better performance and is extensible because of its modular design.

RHEL 7 implements network teaming with a small kernel driver and a user space daemon, teamd.

The kernel handles network packets efficiently, while teamd handles logic and interface processing. Software called a runner implements load balancing and active-backup logic, such as roundrobin.

The following runners are available to teamd:
broadcast: a simple runner which transmits each packet from all ports.
roundrobin: a simple runner which transmits packets in a round-robin fashion from each of the ports.
activebackup: a failover runner which watches for link changes and selects an active port for data transfers.
loadbalance: this runner monitors traffic and uses a hash function to try to reach a perfect balance when selecting ports for packet transmission.
lacp: implements the 802.3ad Link Aggregation Control Protocol. It can use the same transmit port selection possibilities as the loadbalance runner.

All network interaction is done through a team interface, composed of multiple network port interfaces. When controlling teamed port interfaces using NetworkManager, and especially when fault finding, keep the following in mind:


  • Starting the network team interface does not automatically start the port interfaces.
  • Starting a port interface always starts the teamed interface.
  • Stopping the teamed interface also stops the port interfaces.
  • A teamed interface without ports can start static IP connections.
  • A team without ports waits for ports when starting a DHCP connection.
  • A team with a DHCP connection waiting for ports completes when a port with a carrier is added.
  • A team with a DHCP connection waiting for ports continues waiting when a port without a carrier is added.
Configuring Network Teams
The nmcli command can be used to create and manage team and port interfaces. The following four steps are used to create and activate a network team interface:
  1. Create the team interface
  2. Determine the IPv4 and/or IPv6 attributes of the team interface.
  3. Assign the port interfaces
  4. Bring the team and port interfaces up/down.

Create the team interface
Use the nmcli command to create a connection for the network team interface with the following syntax:
nmcli con add type team con-name CNAME ifname INAME [ config JSON]
CNAME - Connection Name
INAME - Interface Name
JSON - Specifies the runner to be used


'{"runner":{"name": "METHOD"}}'
METHOD: one of the following: broadcast, roundrobin, activebackup, loadbalance, or lacp.

Example
[root@server1 ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name": "loadbalance"}}'




Determine the IPv4 and/or IPv6 attributes of the team interface.


Once the team interface is created, IPv4 and/or IPv6 attributes can be assigned to it. If DHCP is available, this step is optional, because the default attributes configure the interface to get its IP settings using DHCP.

The following example demonstrates how to assign a static IPv4 address to the team0 interface:
[root@server1 ~]# nmcli con mod team0 ipv4.addresses 1.2.3.4/24
[root@server1 ~]# nmcli con mod team0 ipv4.method manual

Note: The ipv4.addresses have to be assigned before the ipv4.method can be set to manual.

Assign the port interfaces
Use the nmcli command to create each of the port interfaces with the following syntax:
nmcli con add type team-slave con-name CNAME ifname INAME master TEAM
The connection name can be explicitly specified, or it will default to team-slave-IFACE.

[root@server1 ~]# nmcli con add type team-slave ifname eth1  master team0
[root@server1 ~]# nmcli con add type team-slave ifname eth2  master team0 con-name team0-eth2

Bring the team and port interface up/down
The nmcli command can also be used to manage the connections for the team and port interfaces with the following syntax:

nmcli dev dis INAME
nmcli con up CNAME

INAME - Device name of the team or port interface to be managed
CNAME - Connection name of the team or port interface to be managed

Example:
[root@server1 ~]# nmcli con up team0
[root@server1 ~]# nmcli dev dis eth2

When the team interface is up, the teamdctl command can be used to display the team's state.
[root@server1 ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  ens38
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  ens33
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: ens38

Summary
Creating Team interface
nmcli  connection add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'

Adding IPv4 attribute
nmcli connection mod team0 ipv4.addresses "192.168.219.10/24"
nmcli con mod team0 ipv4.method manual

Adding port Interface
nmcli connection add type team-slave con-name team0-port1 ifname ens38 master team0
nmcli connection add type team-slave con-name team0-port2 ifname ens33 master team0

Bringing the team and port interface up
nmcli connection up team0

Checking the status
teamdctl team0 state

[root@server1 ~]# nmcli connection show
NAME         UUID                                  TYPE            DEVICE
team0        8eb1132a-abe5-4ef8-ab48-f834ad8f0434  team            team0
team0-port1  511c3215-603a-4d23-9a94-b58a7455d143  802-3-ethernet  ens38
team0-port2  198ca207-c7f7-4802-acd1-404819aeab98  802-3-ethernet  ens33

[root@server1 ~]# teamdctl team0 config dump
{
    "device": "team0",
    "mcast_rejoin": {
        "count": 1
    },
    "notify_peers": {
        "count": 1
    },
    "ports": {
        "ens33": {
            "link_watch": {
                "name": "ethtool"
            }
        },
        "ens38": {
            "link_watch": {
                "name": "ethtool"
            }
        }
    },
    "runner": {
        "name": "activebackup"
    }
}



Sunday, December 22, 2019

Creating a second instance of the sshd service

System Administrators often need to configure and run multiple instances of a service. This is done by creating copies of the original service configuration files and modifying certain parameters to avoid conflicts with the primary instance of the service. The following procedure shows how to create a second instance of the sshd service:

1. Create a copy of the sshd_config file that will be used by the second daemon:
[root@server1 ~]# cp /etc/ssh/sshd_config  /etc/ssh/sshd-second_config

2. Edit the sshd-second_config file created in the previous step to assign a different port number and PID file to the second daemon:
Port 22220
PidFile /var/run/sshd-second.pid 

3. Create a copy of the systemd unit file for the sshd service:
 ~]# cp -v  /usr/lib/systemd/system/sshd.service /etc/systemd/system/sshd-second.service
‘/usr/lib/systemd/system/sshd.service’ -> ‘/etc/systemd/system/sshd-second.service’

4. Alter the sshd-second.service created in the previous step as follows:
[Unit]
Description=OpenSSH server second instance daemon
After=syslog.target network.target auditd.service sshd.service

[Service]
EnvironmentFile=/etc/sysconfig/sshd
ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
  • Description is modified
  • sshd.service is added to the After option, so that the second instance starts after the first instance.
  • The first instance of sshd already includes key generation, so remove the ExecStartPre=/usr/sbin/sshd-keygen line.
  • Add the -f /etc/ssh/sshd-second_config parameter to the sshd command, so that the alternative configuration file is used.
5. If using SELinux and firewalld, add the port for the second instance of sshd to the SSH ports and allow port 22220 in the firewall:
~]# semanage port -a -t ssh_port_t -p tcp 22220
~]# firewall-cmd --permanent --add-port 22220/tcp
~]# firewall-cmd --reload

6. Enable sshd-second.service so that it starts automatically upon boot, and start the service:
~]# systemctl enable sshd-second.service
~]# systemctl start sshd-second.service

Now try to connect using ssh -p 22220 user@server.
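
You can also confirm that the second daemon is listening on the new port, for example:

 ~]# ss -tnlp | grep 22220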




Creating and Modifying systemd Unit Files

A unit file contains configuration directives that describe the unit and define its behavior.
Several systemctl commands work with unit files in the background.
To make finer adjustments, system administrators must edit or create unit files manually.

Unit file names take the following form:
unit_name.type_extension
unit_name - Name of the Unit file
type_extension - Type of the unit file, e.g. service, socket, target, etc.

Understanding the Unit File Structure
Unit files typically consist of three sections:

[Unit] — contains generic options that are not dependent on the type of the unit. These options provide unit description, specify the unit's behavior, and set dependencies to other units.

[unit type] — if a unit has type-specific directives, these are grouped under a section named after the unit type.
For example, service unit files contain the [Service] section

[Install] — contains information about unit installation used by systemctl enable and disable commands

Example
[root@server1 ~]# systemctl cat postfix.service
# /usr/lib/systemd/system/postfix.service
[Unit]
Description=Postfix Mail Transport Agent
After=syslog.target network.target
Conflicts=sendmail.service exim.service

[Service]
Type=forking
PIDFile=/var/spool/postfix/pid/master.pid
EnvironmentFile=-/etc/sysconfig/network
ExecStartPre=-/usr/libexec/postfix/aliasesdb
ExecStartPre=-/usr/libexec/postfix/chroot-update
ExecStart=/usr/sbin/postfix start
ExecReload=/usr/sbin/postfix reload
ExecStop=/usr/sbin/postfix stop

[Install]
WantedBy=multi-user.target




Important [Unit] Section Options

Description - A meaningful description of the unit. This text is displayed, for example, in the output of the systemctl status command.
Documentation - Provides a list of URIs referencing documentation for the unit.
After [b] - Defines the order in which units are started. The unit starts only after the units specified in After are active. Unlike Requires, After does not explicitly activate the specified units. The Before option has the opposite functionality to After.
Requires - Configures dependencies on other units. The units listed in Requires are activated together with the unit. If any of the required units fail to start, the unit is not activated.
Wants - Configures weaker dependencies than Requires. If any of the listed units does not start successfully, it has no impact on the unit activation. This is the recommended way to establish custom unit dependencies.
Conflicts - Configures negative dependencies, the opposite of Requires.

See systemd.unit(5) for the complete list of [Unit] options.

[b] In most cases, it is sufficient to set only the ordering dependencies with the After and Before unit file options. If you also set a requirement dependency with Wants (recommended) or Requires, the ordering dependency still needs to be specified. That is because ordering and requirement dependencies work independently from each other (see the snippet below).
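
As a short illustration of the note above, a unit that must wait for the network to be online declares the requirement and the ordering separately:

[Unit]
Wants=network-online.target
After=network-online.target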

Important [Service] Section Options

Type - Configures the unit process startup type that affects the functionality of ExecStart and related options. One of:
  • simple – The default value. The process started with ExecStart is the main process of the service.
  • forking – The process started with ExecStart spawns a child process that becomes the main process of the service. The parent process exits when the startup is complete.
  • oneshot – This type is similar to simple, but the process exits before starting consequent units.
  • dbus – This type is similar to simple, but consequent units are started only after the main process gains a D-Bus name.
  • notify – This type is similar to simple, but consequent units are started only after a notification message is sent via the sd_notify() function.
  • idle – Similar to simple, but the actual execution of the service binary is delayed until all jobs are finished, which avoids mixing the status output with the shell output of services.
ExecStart - Specifies commands or scripts to be executed when the unit is started. ExecStartPre and ExecStartPost specify custom commands to be executed before and after ExecStart. Type=oneshot enables specifying multiple custom commands that are then executed sequentially.
ExecStop - Specifies commands or scripts to be executed when the unit is stopped.
ExecReload - Specifies commands or scripts to be executed when the unit is reloaded.
Restart - With this option enabled, the service is restarted after its process exits, with the exception of a clean stop by the systemctl command.
RemainAfterExit - If set to True, the service is considered active even when all its processes have exited. The default value is False. This option is especially useful if Type=oneshot is configured (see the example after this table).

[a] For a complete list of options configurable in the [Service] section, see the systemd.service(5) manual page.
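
For example, a one-shot unit that runs a setup script once and should remain marked active afterwards could combine these options as follows (the script path is only a placeholder):

[Service]
Type=oneshot
ExecStart=/usr/local/bin/setup-once.sh
RemainAfterExit=yes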


Important [Install] Section Options

Alias - Provides a space-separated list of additional names for the unit. Most systemctl commands, excluding systemctl enable, can use aliases instead of the actual unit name.
RequiredBy - A list of units that depend on the unit. When this unit is enabled, the units listed in RequiredBy gain a Require dependency on the unit.
WantedBy - A list of units that weakly depend on the unit. When this unit is enabled, the units listed in WantedBy gain a Want dependency on the unit.
Also - Specifies a list of units to be installed or uninstalled along with the unit.
DefaultInstance - Limited to instantiated units, this option specifies the default instance for which the unit is enabled.

[a] For a complete list of options configurable in the [Install] section, see the systemd.unit(5) manual page.

Creating Custom Unit Files

The following procedure describes the general process of creating a custom service:
1. Prepare the executable file.
2. Create the unit file.
3. Configure the unit file with the required options.
4. Notify systemd using "systemctl daemon-reload".

1. Prepare the executable file
This can be a custom-created script, or an executable delivered by a software provider.
2. Create the unit file
Create a unit file in the /etc/systemd/system/ directory and make sure it has the correct file permissions. Execute as root:
touch /etc/systemd/system/name.service
chmod 664 /etc/systemd/system/name.service
Note that the file does not need to be executable.
3. Configure the unit file with the required options
Add the service configuration options. There is a variety of options that can be used depending on the type of service you wish to create.
The following is an example unit configuration for a network-related service:
[Unit]
Description=service_description
After=network.target

[Service]
ExecStart=path_to_executable
Type=forking
PIDFile=path_to_pidfile

[Install]
WantedBy=default.target

Where:
  • service_description is an informative description that is displayed in journal log files and in the output of the systemctl status command.
  • the After setting ensures that the service is started only after the network is running. Add a space-separated list of other relevant services or targets.
  • path_to_executable stands for the path to the actual service executable.
  • Type=forking is used for daemons that make the fork system call. The main process of the service is created with the PID specified in path_to_pidfile. Find other startup types in the Important [Service] Section Options table above.
  • WantedBy states the target or targets that the service should be started under. Think of these targets as of a replacement of the older concept of runlevels.
4. Notify systemd using "systemctl daemon-reload"
Notify systemd that a new name.service file exists by executing the following commands as root:
systemctl daemon-reload
systemctl start name.service
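
Putting the steps together, a minimal sketch for a hypothetical long-running script /usr/local/bin/myapp.sh (the script, unit name, and paths are placeholders) might look like this:

touch /etc/systemd/system/myapp.service
chmod 664 /etc/systemd/system/myapp.service

Contents of /etc/systemd/system/myapp.service:

[Unit]
Description=My custom application
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl start myapp.service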

Click the link below, which shows how to create a second instance of the sshd service.