Saturday, December 28, 2019

Network Teaming in RHEL 7 and CentOS 7

Network teaming is a method of logically linking NICs together to allow for failover or higher throughput.
Teaming is a new implementation that does not affect the older bonding driver in the Linux kernel; it offers an alternate method. RHEL 7 still supports channel bonding for backward compatibility. Network teaming provides better performance and is extensible because of its modular design.

RHEL 7 implements network teaming with a small kernel driver and a user-space daemon, teamd.

The kernel handles network packets efficiently, while teamd handles the logic and interface processing. Software called a runner implements load balancing and active-backup logic, such as round-robin.

The following runners are available to teamd:
broadcast: a simple runner that transmits each packet from all ports.
roundrobin: a simple runner that transmits packets in a round-robin fashion from each of the ports.
activebackup: a failover runner that watches for link changes and selects an active port for data transfers.
loadbalance: this runner monitors traffic and uses a hash function to try to reach a perfect balance when selecting ports for packet transmission.
lacp: implements the 802.3ad Link Aggregation Control Protocol. It can use the same transmit port selection possibilities as the loadbalance runner.

All network interaction is done through a team interface, composed of multiple network port interfaces. When controlling teamed port interfaces using NetworkManager, and especially when fault finding, keep the following in mind:


  • Starting the network team interface does not automatically start the port interfaces.
  • Starting a port interface always starts the teamed interface.
  • Stopping the teamed interface also stops the port interfaces.
  • A teamed interface without ports can start static IP connections.
  • A team without ports waits for ports when starting a DHCP connection.
  • A team with a DHCP connection waiting for ports completes when a port with a carrier is added.
  • A team with a DHCP connection waiting for ports continues waiting when a port without a carrier is added.
Configuring Network Teams
The nmcli command can be used to create and manage team and port interfaces. The following four steps are used to create and activate a network team interface:
  1. Create the team interface
  2. Determine the IPv4 and/or IPv6 attributes of the team interface.
  3. Assign the port interfaces
  4. Bring the team and port interfaces up/down.

Create the team interface
Use the nmcli command to create a connection for the network team interface, with the following syntax:
nmcli con add type team con-name CNAME ifname INAME [config JSON]
CNAME - Connection Name
INAME - Interface Name
JSON - JSON configuration string that specifies the runner to be used. It takes the following form:

'{"runner": {"name": "METHOD"}}'

METHOD is one of the following: broadcast, roundrobin, activebackup, loadbalance, or lacp.

Example
[root@server1 ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name": "loadbalance"}}'
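Since the JSON string is easy to mistype, it can help to validate it before handing it to nmcli. A minimal sketch, assuming python3 is available (the team0/loadbalance values match the example above):

```shell
# Build the teamd runner configuration in a variable so the same
# string can be validated and then passed to nmcli unchanged.
TEAM_CONFIG='{"runner": {"name": "loadbalance"}}'

# Sanity-check that the string is well-formed JSON; json.tool echoes
# it back pretty-printed, or exits non-zero on a syntax error.
echo "$TEAM_CONFIG" | python3 -m json.tool

# The validated string is then used verbatim (run as root):
# nmcli con add type team con-name team0 ifname team0 config "$TEAM_CONFIG"
```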




Determine the IPv4 and/or IPv6 attributes of the team interface.


Once the team interface is created, IPv4 and/or IPv6 attributes can be assigned to it. If DHCP is available, this step is optional, because the default attributes configure the interface to get its IP settings using DHCP.

The following example demonstrates how to assign a static IPv4 address to the team0 interface:
[root@server1 ~]# nmcli con mod team0 ipv4.addresses 1.2.3.4/24
[root@server1 ~]# nmcli con mod team0 ipv4.method manual

Note: The ipv4.addresses attribute has to be assigned before ipv4.method can be set to manual.

Assign the port interfaces
Use the nmcli command to create each of the port interfaces with the following syntax:
nmcli con add type team-slave con-name CNAME ifname INAME master TEAM
The connection name can be explicitly specified, or it will default to team-slave-IFACE.

[root@server1 ~]# nmcli con add type team-slave ifname eth1  master team0
[root@server1 ~]# nmcli con add type team-slave ifname eth2  master team0 con-name team0-eth2

Bring the team and port interface up/down
The nmcli command can also be used to manage the connections for the team and port interfaces, with the following syntax:

nmcli dev dis INAME
nmcli con up CNAME

INAME - Device name of the team or port interface to be managed
CNAME - Connection name of the team or port interface to be managed

Example:
[root@server1 ~]# nmcli con up team0
[root@server1 ~]# nmcli dev dis eth2

When the team interface is up, the teamdctl command can be used to display the team's state.
[root@server1 ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  ens38
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  ens33
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: ens38

Summary
Creating Team interface
nmcli  connection add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'

Adding IPv4 attribute
nmcli connection mod team0 ipv4.addresses "192.168.219.10/24"
nmcli con mod team0 ipv4.method manual

Adding port Interface
nmcli connection add type team-slave con-name team0-port1 ifname ens38 master team0
nmcli connection add type team-slave con-name team0-port2 ifname ens33 master team0

Bringing the team and port interface up
nmcli connection up team0

Checking the status
teamdctl team0 state

[root@server1 ~]# nmcli connection show
NAME         UUID                                  TYPE            DEVICE
team0        8eb1132a-abe5-4ef8-ab48-f834ad8f0434  team            team0
team0-port1  511c3215-603a-4d23-9a94-b58a7455d143  802-3-ethernet  ens38
team0-port2  198ca207-c7f7-4802-acd1-404819aeab98  802-3-ethernet  ens33

[root@server1 ~]# teamdctl team0 config dump
{
    "device": "team0",
    "mcast_rejoin": {
        "count": 1
    },
    "notify_peers": {
        "count": 1
    },
    "ports": {
        "ens33": {
            "link_watch": {
                "name": "ethtool"
            }
        },
        "ens38": {
            "link_watch": {
                "name": "ethtool"
            }
        }
    },
    "runner": {
        "name": "activebackup"
    }
}



Sunday, December 22, 2019

Creating a second instance of the sshd service

System Administrators often need to configure and run multiple instances of a service. This is done by creating copies of the original service configuration files and modifying certain parameters to avoid conflicts with the primary instance of the service. The following procedure shows how to create a second instance of the sshd service:

1. Create a copy of the sshd_config file that will be used by the second daemon:
[root@server1 ~]# cp /etc/ssh/sshd_config  /etc/ssh/sshd-second_config

2. Edit the sshd-second_config file created in the previous step to assign a different port number and PID file to the second daemon:
Port 22220
PidFile /var/run/sshd-second.pid 
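The edits in step 2 can also be scripted. The sketch below works on a scratch file in /tmp so it can run unprivileged; on the real system the target would be /etc/ssh/sshd-second_config, edited as root, and the stand-in content here is only illustrative:

```shell
# Scratch stand-in for the copied sshd_config; a real sshd_config
# ships these directives commented out, so appending explicit
# values is enough to override them.
CONFIG=/tmp/sshd-second_config
printf '#Port 22\n#PidFile /var/run/sshd.pid\n' > "$CONFIG"

# Append the overriding directives for the second daemon.
cat >> "$CONFIG" <<'EOF'
Port 22220
PidFile /var/run/sshd-second.pid
EOF

# Show only the active (uncommented) directives.
grep -E '^(Port|PidFile)' "$CONFIG"
# -> Port 22220
# -> PidFile /var/run/sshd-second.pid
```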

3. Create a copy of the systemd unit file for the sshd service:
 ~]# cp -v  /usr/lib/systemd/system/sshd.service /etc/systemd/system/sshd-second.service
‘/usr/lib/systemd/system/sshd.service’ -> ‘/etc/systemd/system/sshd-second.service’

4. Alter the sshd-second.service created in the previous step as follows:
[Unit]
Description=OpenSSH server second instance daemon
After=syslog.target network.target auditd.service sshd.service

[Service]
EnvironmentFile=/etc/sysconfig/sshd
ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
  • Description is modified.
  • sshd.service is added to the After option, so that the second instance starts after the first instance.
  • The first instance of sshd handles key generation, therefore the ExecStartPre=/usr/sbin/sshd-keygen line is removed.
  • The -f /etc/ssh/sshd-second_config parameter is added to the sshd command, so that the alternative configuration file is used.
5. If using SELinux and firewalld, label port 22220 for the second instance of sshd with the SSH port type and allow it through the firewall:
~]# semanage port -a -t ssh_port_t -p tcp 22220
~]# firewall-cmd --permanent --add-port 22220/tcp
~]# firewall-cmd --reload

6. Enable sshd-second.service so that it starts automatically at boot, and start the service:
~]# systemctl enable sshd-second.service
~]# systemctl start sshd-second.service

Now try to connect using ssh  -p 22220 user@server




Creating and Modifying systemd Unit Files

A unit file contains configuration directives that describe the unit and define its behavior.
Several systemctl commands work with unit files in the background.
To make finer adjustments, system administrators must edit or create unit files manually.

Unit file names take the following form:
unit_name.type_extension
unit_name - name of the unit
type_extension - type of the unit, e.g. service, socket, or target

Understanding the Unit File Structure
Unit files typically consist of three sections:

[Unit] — contains generic options that are not dependent on the type of the unit. These options provide unit description, specify the unit's behavior, and set dependencies to other units.

[unit type] — if a unit has type-specific directives, these are grouped under a section named after the unit type.
For example, service unit files contain the [Service] section

[Install] — contains information about unit installation used by systemctl enable and disable commands

Example
[root@server1 ~]# systemctl cat postfix.service
# /usr/lib/systemd/system/postfix.service
[Unit]
Description=Postfix Mail Transport Agent
After=syslog.target network.target
Conflicts=sendmail.service exim.service

[Service]
Type=forking
PIDFile=/var/spool/postfix/pid/master.pid
EnvironmentFile=-/etc/sysconfig/network
ExecStartPre=-/usr/libexec/postfix/aliasesdb
ExecStartPre=-/usr/libexec/postfix/chroot-update
ExecStart=/usr/sbin/postfix start
ExecReload=/usr/sbin/postfix reload
ExecStop=/usr/sbin/postfix stop

[Install]
WantedBy=multi-user.target




Important [Unit] section options

Description - A meaningful description of the unit. This text is displayed, for example, in the output of the systemctl status command.
Documentation - Provides a list of URIs referencing documentation for the unit.
After - Defines the order in which units are started. The unit starts only after the units specified in After are active. Unlike Requires, After does not explicitly activate the specified units. The Before option has the opposite functionality to After.
Requires - Configures dependencies on other units. The units listed in Requires are activated together with the unit. If any of the required units fail to start, the unit is not activated.
Wants - Configures weaker dependencies than Requires. If any of the listed units does not start successfully, it has no impact on the unit activation. This is the recommended way to establish custom unit dependencies.
Conflicts - Configures negative dependencies, an opposite to Requires.

See the systemd.unit(5) manual page for the complete list of unit options.

Note: In most cases, it is sufficient to set only the ordering dependencies with the After and Before unit file options. If you also set a requirement dependency with Wants (recommended) or Requires, the ordering dependency still needs to be specified. That is because ordering and requirement dependencies work independently of each other.
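To illustrate the note above: a unit that should both pull in another unit and start after it lists that unit under both options. A hypothetical fragment (the unit names are examples only):

```
[Unit]
Description=Example worker that needs the network up
# Wants pulls network-online.target in when this unit starts;
# After additionally delays this unit until that target is active.
# Either line alone provides only half of the behavior.
Wants=network-online.target
After=network-online.target
```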

Important [Service] section options

Type - Configures the unit process startup type, which affects the functionality of ExecStart and related options. One of:
  • simple - the default value. The process started with ExecStart is the main process of the service.
  • forking - the process started with ExecStart spawns a child process that becomes the main process of the service. The parent process exits when the startup is complete.
  • oneshot - this type is similar to simple, but the process exits before consequent units are started.
  • dbus - this type is similar to simple, but consequent units are started only after the main process gains a D-Bus name.
  • notify - this type is similar to simple, but consequent units are started only after a notification message is sent via the sd_notify() function.
  • idle - similar to simple, but the actual execution of the service binary is delayed until all jobs are finished, which avoids mixing the status output with shell output of services.
ExecStart - Specifies commands or scripts to be executed when the unit is started. ExecStartPre and ExecStartPost specify custom commands to be executed before and after ExecStart. Type=oneshot enables specifying multiple custom commands that are then executed sequentially.
ExecStop - Specifies commands or scripts to be executed when the unit is stopped.
ExecReload - Specifies commands or scripts to be executed when the unit is reloaded.
Restart - With this option enabled, the service is restarted after its process exits, with the exception of a clean stop by the systemctl command.
RemainAfterExit - If set to True, the service is considered active even when all its processes have exited. The default value is False. This option is especially useful if Type=oneshot is configured.

For a complete list of options configurable in the [Service] section, see the systemd.service(5) manual page.
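Type=oneshot is frequently paired with RemainAfterExit for one-time setup tasks; a hypothetical fragment (the script path is made up for the example):

```
[Service]
Type=oneshot
# The command runs once at startup and exits; RemainAfterExit keeps
# the unit reported as "active" even though no process remains.
ExecStart=/usr/local/sbin/apply-tuning.sh
RemainAfterExit=yes
```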


Important [Install] section options

Alias - Provides a space-separated list of additional names for the unit. Most systemctl commands, excluding systemctl enable, can use aliases instead of the actual unit name.
RequiredBy - A list of units that depend on the unit. When this unit is enabled, the units listed in RequiredBy gain a Requires dependency on the unit.
WantedBy - A list of units that weakly depend on the unit. When this unit is enabled, the units listed in WantedBy gain a Wants dependency on the unit.
Also - Specifies a list of units to be installed or uninstalled along with the unit.
DefaultInstance - Limited to instantiated units, this option specifies the default instance for which the unit is enabled.

For a complete list of options configurable in the [Install] section, see the systemd.unit(5) manual page.

Creating Custom Unit Files

The following procedure describes the general process of creating a custom service:
1. Prepare the executable file.
2. Create the unit file.
3. Configure the unit file with the required options.
4. Notify systemd using systemctl daemon-reload.

1. Prepare the executable file
This can be a custom-created script, or an executable delivered by a software provider.
2. Create the unit file
Create a unit file in the /etc/systemd/system/ directory and make sure it has the correct file permissions. Execute as root:
touch /etc/systemd/system/name.service
chmod 664 /etc/systemd/system/name.service
Note that the file does not need to be executable.
3. Configure the unit file with the required options
Add the service configuration options. There is a variety of options that can be used, depending on the type of service you wish to create.
The following is an example unit configuration for a network-related service:
[Unit]
Description=service_description
After=network.target

[Service]
ExecStart=path_to_executable
Type=forking
PIDFile=path_to_pidfile

[Install]
WantedBy=default.target

Where:
  • service_description is an informative description that is displayed in journal log files and in the output of the systemctl status command.
  • the After setting ensures that the service is started only after the network is running. Add a space-separated list of other relevant services or targets.
  • path_to_executable stands for the path to the actual service executable.
  • Type=forking is used for daemons that make the fork system call. The main process of the service is created with the PID specified in path_to_pidfile. Other startup types are described in the Important [Service] Section Options above.
  • WantedBy states the target or targets that the service should be started under. Think of these targets as of a replacement of the older concept of runlevels.
4. Notify systemd using systemctl daemon-reload
Notify systemd that a new name.service file exists by executing the following commands as root:
systemctl daemon-reload
systemctl start name.service
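Steps 2 and 3 above can be sketched as follows. The sketch writes to a scratch directory so it can run unprivileged; a real deployment would use /etc/systemd/system/ as root, and name.service and the executable path are placeholders:

```shell
# Scratch stand-in for /etc/systemd/system/ so the sketch runs without root.
UNITDIR=/tmp/systemd-demo
mkdir -p "$UNITDIR"

# Step 2: create the unit file with the expected permissions (0664).
touch "$UNITDIR/name.service"
chmod 664 "$UNITDIR/name.service"

# Step 3: fill in the minimal sections from the example above.
cat > "$UNITDIR/name.service" <<'EOF'
[Unit]
Description=service_description
After=network.target

[Service]
ExecStart=/usr/local/bin/path_to_executable
Type=simple

[Install]
WantedBy=default.target
EOF

# Verify permissions and content; on a real system this is the point
# where 'systemctl daemon-reload' would be run.
stat -c '%a' "$UNITDIR/name.service"   # -> 664
grep '^After=' "$UNITDIR/name.service"
```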

See the procedure above for creating a second instance of the sshd service, which shows a complete example of these steps.


Saturday, December 21, 2019

Linux Control Groups (cgroups)

Cgroups administer CPU, memory, network bandwidth, and I/O resources among hierarchically ordered groups of processes.
systemd automatically mounts hierarchies for important kernel resource controllers in the /sys/fs/cgroup/ directory.

Resource controller (cgroup subsystem): allows you to set limits on a single resource, such as CPU or memory.

Systemd Unit Types
All processes running on the system are child processes of the systemd init process. systemd provides three unit types that are used for the purpose of resource control:

            Service — A process or a group of processes, which systemd started based on a unit configuration file. Services encapsulate the specified processes so that they can be started and stopped as one set. 

             Scope — A group of externally created processes. Scopes encapsulate processes that are started and stopped by arbitrary processes through the fork() function and then registered by systemd at runtime. For instance, user sessions, containers, and virtual machines are treated as scopes.
              Slice — A group of hierarchically organized units. Slices do not contain processes, they organize a hierarchy in which scopes and services are placed. The actual processes are contained in scopes or in services. In this hierarchical tree, every name of a slice unit corresponds to the path to a location in the hierarchy. The dash ("-") character acts as a separator of the path components. For example, if the name of a slice looks as follows:
parent-name.slice
This means that a slice called parent-name.slice is a sub-slice of parent.slice. This slice can have its own sub-slice named parent-name-name2.slice, and so on.
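The dash-to-hierarchy mapping can be illustrated with a short shell function. This is only a sketch for illustration, and the slice names are examples:

```shell
# Expand a slice name into the cgroup path it implies: each "-"
# introduces a deeper sub-slice under its parent.
slice_to_path() {
    local name=${1%.slice}   # strip the .slice suffix
    local path="" prefix=""
    # Walk the dash-separated components, accumulating parents.
    local IFS='-'
    for part in $name; do
        prefix=${prefix:+$prefix-}$part
        path=$path/$prefix.slice
    done
    echo "$path"
}

slice_to_path parent-name-name2.slice
# -> /parent.slice/parent-name.slice/parent-name-name2.slice
```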


There are four slices created by default:
             -.slice  — the root slice;
             system.slice — the default place for all system services;
             user.slice — the default place for all user sessions;
             machine.slice — the default place for all virtual machines and Linux containers.
The following tree is a simplified example of a cgroup tree. This output was generated with the systemd-cgls command.
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
├─user.slice
│ └─user-0.slice
│   ├─session-5.scope
│   │ └─3155 /usr/sbin/anacron -s
│   ├─session-3.scope
│   │ ├─2972 sshd: root@pts/0
│   │ ├─2977 -bash
│   │ ├─3754 systemd-cgls
│   │ └─3755 systemd-cgls
│   └─session-1.scope
│     ├─1261 login -- root
│     └─2667 -bash
└─system.slice
  ├─crond.service
  │ └─1264 /usr/sbin/crond -n
  ├─atd.service
  │ └─1259 /usr/sbin/atd -f
  ├─rhnsd.service
  │ └─1269 rhnsd
  ├─libvirtd.service
  │ ├─1247 /usr/sbin/libvirtd

Important cgroup commands

systemd-cgls - Recursively show control group contents
systemd-cgtop - Show top control groups by their resource usage (CPU, memory, disk I/O)
systemd-run - Run programs in transient scope, service, or timer units
systemctl set-property - Apply resource limits to a unit's cgroup
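systemctl set-property persists a limit as a drop-in file under /etc/systemd/system/UNIT.d/. The sketch below recreates such a drop-in by hand in a scratch directory to show the format; the unit name and values are examples, the exact drop-in file names systemd generates can vary by version, and on a real system you would simply run the systemctl command as root:

```shell
# Roughly what 'systemctl set-property httpd.service CPUShares=600
# MemoryLimit=500M' persists, recreated in a scratch directory
# (a real system would use /etc/systemd/system/httpd.service.d/).
DROPIN_DIR=/tmp/httpd.service.d
mkdir -p "$DROPIN_DIR"

cat > "$DROPIN_DIR/50-CPUShares.conf" <<'EOF'
[Service]
CPUShares=600
EOF

cat > "$DROPIN_DIR/50-MemoryLimit.conf" <<'EOF'
[Service]
MemoryLimit=500M
EOF

# Inspect the resulting drop-ins.
cat "$DROPIN_DIR"/*.conf
```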

systemd-cgls
This command shows the cgroup hierarchy in a tree format.

├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
├─user.slice
│ └─user-0.slice
│   ├─session-3.scope
│   │ ├─2972 sshd: root@pts/0
│   │ ├─2977 -bash
│   │ ├─4326 systemd-cgls
│   │ └─4327 systemd-cgls
│   └─session-1.scope
│     ├─1261 login -- root
│     └─2667 -bash
└─system.slice
  ├─crond.service
  │ └─1264 /usr/sbin/crond -n
  ├─atd.service
  │ └─1259 /usr/sbin/atd -f
  ├─rhnsd.service
  │ └─1269 rhnsd
  ├─rhsmcertd.service
  │ └─1254 /usr/bin/rhsmcertd
  ├─rsyslog.service
  │ └─1244 /usr/sbin/rsyslogd -n
  ├─tuned.service
  │ └─1240 /usr/bin/python -Es /usr/sbin/tuned -l -P
  ├─postfix.service
  │ ├─1910 /usr/libexec/postfix/master -w
  │ ├─1990 pickup -l -t unix -u
  │ └─1991 qmgr -l -t unix -u
  ├─sshd.service
  │ └─1451 /usr/sbin/sshd
  ├─cups.service
  │ └─1234 /usr/sbin/cupsd -f
  ├─NetworkManager.service
  

systemd-cgtop
This shows the CPU, memory, and disk I/O usage of cgroup slices in the fashion of the top command.

Path                                  Tasks   %CPU   Memory  Input/s Output/s

/                                       177    1.4   268.9M        -        -
/system.slice/ModemManager.service        1      -        -        -        -
/system.slice/NetworkManager.service      2      -        -        -        -
/system.slice/abrt-oops.service           1      -        -        -        -
/system.slice/abrt-xorg.service           1      -        -        -        -
/system.slice/abrtd.service               1      -        -        -        -
/system.slice/alsa-state.service          1      -        -        -        -
/system.slice/atd.service                 1      -        -        -        -
/system.slice/auditd.service              3      -        -        -        -
/system.slice/avahi-daemon.service        2      -        -        -        -

systemd-run

The systemd-run command is used to create and start a transient service or scope unit and run a custom command in the unit.
The following command creates a slice called aravindan.slice and starts a transient toptest.service unit under it:

systemd-run --unit=name --scope --slice=slice_name command
[root@server1 ~]#  systemd-run --unit=toptest --slice=aravindan.slice top -b
Running as unit toptest.service.
[root@server1 ~]# systemd-cgls 
/:
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
├─aravindan.slice
│ └─toptest.service
│   └─4717 /usr/bin/top -b
├─user.slice
│ └─user-0.slice
[root@server1 ~]# systemctl status toptest
● toptest.service - /usr/bin/top -b
   Loaded: loaded (/run/systemd/system/toptest.service; static; vendor preset: disabled)
  Drop-In: /run/systemd/system/toptest.service.d
           └─50-Description.conf, 50-ExecStart.conf, 50-Slice.conf
   Active: active (running) since Sat 2019-12-21 21:45:45 IST; 4min 10s ago
 Main PID: 4717 (top)
   CGroup: /aravindan.slice/toptest.service
           └─4717 /usr/bin/top -b

Dec 21 21:49:53 server1.example.com top[4717]: 2667 root      20   0  116648   3420   1780 S  0.0  0.2   0:00.37 bash
Dec 21 21:49:53 server1.example.com top[4717]: 2977 root      20   0  116280   3124   1796 S  0.0  0.2   0:00.76 bash
Dec 21 21:49:53 server1.example.com top[4717]: 3566 root      20   0       0      0      0 S  0.0  0.0   0:00.07 kworker/u2+
Dec 21 21:49:53 server1.example.com top[4717]: 3665 root      20   0  112820  15868   3392 S  0.0  0.8   0:00.04 dhclient
Dec 21 21:49:53 server1.example.com top[4717]: 4406 postfix   20   0   91164   3964   2968 S  0.0  0.2   0:00.04 pickup
Dec 21 21:49:53 server1.example.com top[4717]: 4494 root       0 -20       0      0      0 S  0.0  0.0   0:00.00 kworker/0:+
Dec 21 21:49:53 server1.example.com top[4717]: 4535 root       0 -20       0      0      0 S  0.0  0.0   0:00.02 kworker/0:+
Dec 21 21:49:53 server1.example.com top[4717]: 4557 root      20   0       0      0      0 S  0.0  0.0   0:00.00 kworker/0:0
Dec 21 21:49:53 server1.example.com top[4717]: 4596 root      20   0  487104   7060   3996 S  0.0  0.4   0:00.05 packagekitd
Dec 21 21:49:53 server1.example.com top[4717]: 4714 root      20   0       0      0      0 S  0.0  0.0   0:00.00 kworker/0:2



Thursday, December 19, 2019

Enabling rc.local in systemd (RHEL 7, CentOS 7)

The rc.local service is disabled by default in CentOS/RHEL 7. If you check the /etc/rc.d/rc.local configuration file, there are hints about this.

[root@server1 ~]# cat /etc/rc.d/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
echo "Hi this is rc.local file"
touch /var/lock/subsys/local

1. Change the permission of the file /etc/rc.d/rc.local so that it has execute permission
[root@server1 ~]# chmod -v +x /etc/rc.d/rc.local
mode of ‘/etc/rc.d/rc.local’ retained as 0755 (rwxr-xr-x)

2. Enable the service
[root@server1 ~]# systemctl enable rc-local

3. Reboot the server and check whether the service is executed.
[root@server1 ~]# systemctl status rc-local
 rc-local.service - /etc/rc.d/rc.local Compatibility
   Loaded: loaded (/usr/lib/systemd/system/rc-local.service; static; vendor preset: disabled)
   Active: active (exited) since Fri 2019-12-20 11:30:53 IST; 8min ago

Dec 20 11:30:53 server1.example.com systemd[1]: Starting /etc/rc.d/rc.local Compatibility...
Dec 20 11:30:53 server1.example.com rc.local[1141]: Hi this is rc.local file
Dec 20 11:30:53 server1.example.com systemd[1]: Started /etc/rc.d/rc.local Compatibility.







Wednesday, December 18, 2019

Selecting systemd target

A systemd target is a set of systemd units that should be started to reach a desired state.
Important targets are listed below.

graphical.target - System supports multiple users, graphical and text-based logins.
multi-user.target - System supports multiple users, text-based logins only.
rescue.target - sulogin prompt, basic system initialization completed.
emergency.target - sulogin prompt, initramfs pivot complete and system root mounted on / read-only.

It is possible for a target to be part of another target. For example, graphical.target includes multi-user.target, which in turn depends on basic.target and others.
These dependencies can be viewed using the following command:
[root@server1 ~]# systemctl list-dependencies graphical.target  | grep target
graphical.target
● └─multi-user.target
●   ├─basic.target
●   │ ├─paths.target
●   │ ├─slices.target
●   │ ├─sockets.target
●   │ ├─sysinit.target
●   │ │ ├─cryptsetup.target
●   │ │ ├─local-fs.target
●   │ │ └─swap.target
●   │ └─timers.target
●   ├─getty.target
●   ├─nfs-client.target
●   │ └─remote-fs-pre.target
●   └─remote-fs.target
●     └─nfs-client.target
●       └─remote-fs-pre.target

An overview of all available targets can be viewed with:
[root@server1 ~]# systemctl list-units --type=target --all
  UNIT                   LOAD      ACTIVE   SUB    DESCRIPTION
  basic.target           loaded    active   active Basic System
  cryptsetup.target      loaded    active   active Encrypted Volumes
  emergency.target       loaded    inactive dead   Emergency Mode
  final.target           loaded    inactive dead   Final Step
  getty.target           loaded    active   active Login Prompts
  graphical.target       loaded    active   active Graphical Interface
  local-fs-pre.target    loaded    active   active Local File Systems (Pre)
  local-fs.target        loaded    active   active Local File Systems
  multi-user.target      loaded    active   active Multi-User System
  sysinit.target         loaded    active   active System Initialization
● syslog.target          not-found inactive dead   syslog.target
  time-sync.target       loaded    inactive dead   System Time Synchronized
  timers.target          loaded    active   active Timers
  umount.target          loaded    inactive dead   Unmount All Filesystems
  ....
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

30 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.

An overview of all target unit files installed on disk can be viewed with:
[root@server1 ~]# systemctl list-unit-files --type=target
UNIT FILE                 STATE
anaconda.target           static
basic.target              static
bluetooth.target          static
cryptsetup-pre.target     static
cryptsetup.target         static
ctrl-alt-del.target       disabled
default.target            enabled
emergency.target          static
final.target              static
getty.target              static
graphical.target          enabled
halt.target               disabled
hibernate.target          static
..
60 unit files listed.

Selecting a target at runtime
On a running system, administrators can switch to a different target using the systemctl isolate TARGET command.
[root@server1 ~]# systemctl isolate graphical.target
Not all targets can be isolated. Only targets that have AllowIsolate=yes set in their unit files can be isolated.

Setting the default target
When the system starts and control is passed over to systemd from the initramfs, systemd tries to activate default.target. Normally, default.target is a symbolic link to either graphical.target or multi-user.target.

systemctl provides two commands to manage the link: get-default and set-default.
[root@server1 ~]# systemctl get-default
graphical.target
[root@server1 ~]# systemctl set-default multi-user.target
Removed symlink /etc/systemd/system/default.target.
Created symlink from /etc/systemd/system/default.target to /usr/lib/systemd/system/multi-user.target.
[root@server1 ~]#  systemctl get-default
multi-user.target

Selecting a different target at boot time
To select a different target at boot time, the systemd.unit= option can be appended to the kernel command line from the boot loader.
For example, to boot the system into a rescue shell, pass the following option at the interactive boot loader menu:
systemd.unit=rescue.target
To use this method of selecting a different target, use the following procedure on RHEL 7:
1. (Re)boot the machine.
2. Interrupt the boot loader menu countdown by pressing any key.
3. Move the cursor to the entry to be started.
4. Press e to edit the current entry.
5. Move the cursor to the line that starts with linux16. This is the kernel command line.
6. Append systemd.unit=rescue.target.
7. Press Ctrl+x to boot with these changes.

Diagnosing and repairing systemd boot issues
If there are problems during the starting of services, there are a few tools available to system administrators that can help with debugging and troubleshooting.

Early debug shell
By running systemctl enable debug-shell.service, a root shell will be spawned on TTY9 (Ctrl+Alt+F9) early during the boot sequence. This shell is automatically logged in as root, so that administrators can use some of the other debugging tools while the system is still booting.

Note: Disable the debug-shell.service service when debugging is complete, as it leaves an unauthenticated root shell open to anyone with local console access.

Emergency and rescue shell

Append one of the following to the kernel command line from the boot loader:
systemd.unit=rescue.target
systemd.unit=emergency.target 

The system will then spawn a special rescue or emergency shell instead of starting normally. Both of these shells require the root password.

The emergency target keeps the root file system mounted in read-only mode.
The rescue target waits for sysinit.target to complete first, so that more of the system is initialized, e.g. logging, file systems, etc.

Exiting from these shells will continue the regular boot process.

Stuck Jobs
During startup, systemd spawns a number of jobs. If some of these jobs cannot be completed, they will block other jobs from running.
To inspect the current job list, use the systemctl list-jobs command.
Any jobs listed as running must be completed before the jobs listed as waiting can continue.
[root@server1 ~]# systemctl list-jobs
No jobs running.

Sunday, December 15, 2019

Controlling services using systemctl

Introduction to systemd

System startup and server processes are managed by the systemd System and Service Manager. This program provides a method for activating system resources, server daemons, and other processes, both at boot time and on a running system.

Daemons are processes that wait or run in the background, performing various tasks.
To listen for connections, a daemon uses a socket.

A service often refers to one or more daemons, but starting or stopping a service may instead make a one-time change to the state of the system (for example, configuring network interfaces), which does not involve leaving a daemon process running afterward.
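A one-time change like this is typically modeled with a oneshot service. The unit below is a hypothetical sketch; the unit name and script path are invented for illustration:

```
[Unit]
Description=One-time network setup (hypothetical example)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/setup-network.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

RemainAfterExit=yes keeps the unit reported as "active (exited)" after the script finishes, with no daemon process left running.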

In RHEL 7, process ID 1 is systemd, the new init system.
In RHEL 6 and older systems, process ID 1 is the init process.

A few new features of systemd:
  • Parallelization capabilities, which increase the boot speed of a system.
  • On-demand starting of daemons without requiring a separate service.
  • Automatic service dependency management, which prevents long timeouts, such as not starting a network service when the network is not active.
  • A method of tracking related processes together using Linux control groups.

systemctl and systemd units
The systemctl command is used to manage different types of systemd objects, called units. A list of available unit types can be displayed with systemctl -t help.
[root@server1 ~]# systemctl -t help
Available unit types:
service
socket
busname
target
snapshot
device
mount
automount
swap
timer
path
slice
scope

Some of the common unit types are listed as follows.
Service units have a .service extension and represent system services. This type of unit is used to start frequently accessed daemons, such as a web server.
Socket units have a .socket extension and represent inter-process communication (IPC) sockets. Control of the socket is passed to a daemon or newly started service when a client connection is made.
Path units have a .path extension and are used to delay the activation of a service until a specific file system change occurs. This is commonly used for services which use spool directories, such as a printing system.
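A path unit for the spool-directory case might look like the following sketch. The directory and unit name are hypothetical; the matching example.service would be started when the directory gains a file:

```
[Unit]
Description=Watch a spool directory (hypothetical example)

[Path]
DirectoryNotEmpty=/var/spool/example

[Install]
WantedBy=multi-user.target
```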


Command : Task
systemctl status UNIT : View detailed information about a unit's state.
systemctl stop UNIT : Stop a service on a running system.
systemctl start UNIT : Start a service on a running system.
systemctl restart UNIT : Restart a service on a running system.
systemctl reload UNIT : Reload the configuration file of a running service.
systemctl mask UNIT : Completely disable a service from being started, both manually and at boot.
systemctl unmask UNIT : Make a masked service available again.
systemctl enable UNIT : Configure a service to start at boot time.
systemctl disable UNIT : Disable a service from starting at boot time.
systemctl list-dependencies UNIT : List units which are required and wanted by the specified unit.
systemctl is-active UNIT : Check whether the unit is in an active state.
systemctl is-enabled UNIT : Check whether the unit is enabled to start automatically at boot time.

Checking Service status
[root@server1 ~]# systemctl status sshd.service
 sshd.service - OpenSSH server daemon
   Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-12-16 16:17:07 IST; 4h 7min left
     Docs: man:sshd(8)
           man:sshd_config(5)
  Process: 1164 ExecStart=/usr/sbin/sshd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1303 (sshd)
   CGroup: /system.slice/sshd.service
           └─1303 /usr/sbin/sshd

Dec 16 16:17:06 localhost.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 16 16:17:07 localhost.localdomain systemd[1]: PID file /var/run/sshd.pid not readable (yet?) after start.
Dec 16 16:17:07 localhost.localdomain sshd[1303]: Server listening on 0.0.0.0 port 22.
Dec 16 16:17:07 localhost.localdomain sshd[1303]: Server listening on :: port 22.
Dec 16 16:17:07 localhost.localdomain systemd[1]: Started OpenSSH server daemon.

Keyword : Description
loaded : The unit configuration file has been processed.
active (running) : Running with one or more continuing processes.
active (exited) : Successfully completed a one-time configuration.
active (waiting) : Running, but waiting for an event.
inactive : Not running.
enabled : Will be started at boot time.
disabled : Will not be started at boot time.
static : Cannot be enabled, but may be started by an enabled unit.
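Because is-active and is-enabled also set their exit status to match these keywords, they are convenient in scripts. A small sketch using sshd as the example unit:

```
# Start sshd only if it is not already running
if systemctl is-active --quiet sshd.service; then
    echo "sshd is already running"
else
    systemctl start sshd.service
fi
```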

List unit files with systemctl

Query the state of all units to verify a system startup.
Both of the following commands give the same output.
[root@Server1 ~]# systemctl list-units 
[root@Server1 ~]# systemctl 
UNIT                               LOAD   ACTIVE SUB     DESCRIPTION
auditd.service                     loaded active running Security Auditing Service
NetworkManager.service             loaded active running Network Manager

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.


Query the status of only the service units
[root@Server1 ~]# systemctl list-units --type=service
[root@Server1 ~]# systemctl --type=service

The --all option adds inactive units as well.
[root@Server1 ~]# systemctl list-units -t service --all
[root@Server1 ~]# systemctl -t service --all
  UNIT                                   LOAD      ACTIVE   SUB     DESCRIPTION
  auditd.service                         loaded    active   running Security Auditing Service
  auth-rpcgss-module.service             loaded    inactive dead    Kernel Module supporting RPCSEC_GSS

List only failed services
[root@Server1 ~]# systemctl --failed --type=service
0 loaded units listed. Pass --all to see loaded but inactive units, too.
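When a unit does show up as failed, its log can be read with journalctl, and the failed state can be cleared once the problem is fixed (UNIT below is a placeholder for the real unit name):

```
systemctl --failed --type=service   # list failed service units
journalctl -u UNIT                  # inspect that unit's log messages
systemctl reset-failed UNIT         # clear the failed state after fixing
```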

View the enabled and disabled settings for all units. Optionally, limit the output to a particular type of unit.
[root@Server1 ~]# systemctl list-unit-files --type=service
UNIT FILE                                   STATE
arp-ethers.service                          disabled
auditd.service                              enabled
auth-rpcgss-module.service                  static

We can also check whether a particular service is enabled to start after reboot using the is-enabled option.
[root@Server1 ~]# systemctl is-enabled sshd
enabled

We can check whether a particular unit is active using the is-active option.
[root@Server1 ~]# systemctl is-active sshd
active


Unit Dependencies
Services may be started as dependencies of other services. If a socket unit is enabled and the service unit with the same name is not, the service is automatically started when a request is made on that socket.
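For example, with socket activation only the socket unit needs to be enabled; the same-named service starts on the first client connection. Using cups as a common example (the unit may not be installed on every system):

```
systemctl enable cups.socket    # enable only the socket unit
systemctl stop cups.service     # the service itself can stay stopped
# The first print job connecting to the socket starts cups.service.
```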

[root@Server1 ~]# systemctl list-dependencies multi-user.target
multi-user.target
● ├─auditd.service
● ├─besclient.service
● ├─brandbot.path
● ├─choose_repo.service

The --reverse option shows which other units require the specified unit.
[root@Server1 ~]# systemctl list-dependencies multi-user.target --reverse
multi-user.target
● └─graphical.target


Masking services
A system may have conflicting services installed for a certain function, such as firewalls (iptables and firewalld). To prevent an administrator from accidentally starting a service, a service may be masked.
[root@Server1 ~]# systemctl mask iptables
ln -s '/dev/null' '/etc/systemd/system/iptables.service'
[root@Server1 ~]# systemctl unmask iptables
rm '/etc/systemd/system/iptables.service'
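Because mask links the unit file to /dev/null, any attempt to start the unit afterwards fails until it is unmasked:

```
systemctl mask iptables
systemctl start iptables     # fails: the unit is masked
systemctl unmask iptables    # the unit can be started again
```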

Enabling system daemons to start or stop at boot
Services are started at boot when links are created in the appropriate systemd configuration directories.
[root@server1 ~]# systemctl enable sshd
Created symlink from /etc/systemd/system/multi-user.target.wants/sshd.service to /usr/lib/systemd/system/sshd.service.

In the same way, disable can be used to stop the service from starting during boot.
[root@server1 ~]# systemctl disable sshd
Removed symlink /etc/systemd/system/multi-user.target.wants/sshd.service.