Thursday, September 23, 2021

SAP Kernel Executable Files with Description

Here is a list of SAP kernel executables found on SAP systems.


SAPCAR
Tool to extract (uncar) compressed SAR archive files

R3ldctl
The tool for exporting all table structures to the file system during an OS/DB-Migration.

R3check
The tool checks cluster tables for errors

R3ta
Split large tables for export and import

R3load
The table import & export tool of SAP during Installation, Upgrade, and Migration.

R3szchk
The tool for determining the sizes of the different tables in the target database during the import in an OS/DB-Migration.

R3trans
This is the tool that does the real work for tp: tp controls the import and export of changes, and R3trans performs them using scripts that were generated by tp.

R3trans_XXX.SAR
A compressed archive with the latest version of R3trans from the SAP Service Marketplace, used when patching the kernel.

SAPEXEDB_XX.SAR
Compressed file containing the database-dependent executables (kernel files)

SAPEXE_XX.SAR
Compressed file containing the database-independent executables (kernel files)

sapexec
Call SAP Function Modules

startdb
Program to start the database

startj2eedb
Program to start the database (Java)

startrfc
A simple SAP command-line interface to start the implemented function modules of an SAP system.

startsap
Program to start SAP

stopdb
Program to stop the database

stopj2eedb
Program to stop the database (Java)

stopsap
Program to stop SAP

tp
The Transport Tool. This program coordinates the complete import and export of program and table changes made within the SAP system in order to transport them through the complete System Landscape.

sapmscsa
SCSA Administration

saplicense
The Tool for the installation of a new SAP License. This is needed when the license expires e.g. because of a hardware change.

saposcol
The SAP Interface to the Operating System for Performance Data. The Operating System Collector collects CPU Usage, Disk Performance, Paging etc.

sapparar
Reads the SAP Profile

sappfpar
This tool can be used for checking the profiles after changes and before restarting the SAP system.

saproot.sh
Script to set Root permissions necessary for some kernel programs

saprouter
The program for the Router Connection from customers to SAP and vice versa.

sapsecin
Generation of the PSE (Personal Security Environment)

sapstart
Starts SAP processes

sapstartsrv
The SAP start service; starts, stops, and monitors SAP processes (used, for example, by sapcontrol)

cfw
GUI Control Framework for ABAP Objects

cleanipc
Cleans Inter-Process Communications Memory

disp+work
Dispatcher & Workprocess – “The complete Kernel” – Here the complete ABAP is processed

dpj2ee
Dispatcher for Java

msclients
Shows running instances registered in the Message Server

msg_server
Main Message Server executable

msmon
Message Server Monitor Utility

msprot
Monitor Message Server at the OS level

db2jcllib.o
Library referenced by the rsdb/db2jcl_library profile parameter

db2radm
Used to configure DB2Connect

dbadaslib.o
Part of lib_dbsl – database-dependent SQL handler

dbdb2pwd
Create an encrypted DB2 Password File

dbdb2slib.o
Part of lib_dbsl – database-dependent SQL handler – DB2

dbsdbslib.o
Part of lib_dbsl – database-dependent SQL handler

dev_sapstart
Log file for starting SAP

dipgntab
Activation and adjustment of the nametabs with the ABAP Dictionary.

dpmon
Used to get the process overview of an instance in text mode.

dsrlib.o
Distributed Statistics Records

dw_gui.o
Dependent module for disp+work

dw_mdm.o
Dependent module for disp+work

dw_stl.o
Dependent module for disp+work

dw_xml.o
Dependent module for disp+work

dw_xtc.o
Dependent module for disp+work

eg2mon
Monitor program for Extended Global Memory segments

em2mon
Monitor program for Extended Memory management

emmon
Test program for Extended Memory

enqt
Check and Monitor the Enqueue Lock Table

enrepserver
SAP Enqueue Replication Server

enserver
SAP Enqueue Server

ensmon
Enqueue server monitor; program to monitor the enqueue server and the enqueue replication servers.

es2mon
Program to monitor the enqueue server and the enqueue replication servers.

esmon
Program to monitor the enqueue server and the enqueue replication servers.

estst
Test program for the Extended Memory Segments 

evtd
This program is able to trigger events within the SAP system. The tp tool uses this feature. It can be used as the trigger for self-written interfaces as well.

exe_db2.lst
The ‘*.lst’ files are text files used by sapcpe to determine which files to compare/copy on instance startup.

gateway.lst
The ‘*.lst’ files are text files used by sapcpe to determine which files to compare/copy on instance startup.

gw.properties
Gateway processes

gwmon  
Program gwmon (at the operating system level) or SAP transaction SMGW monitors the SAP Gateway.

gwrd
The SAP Gateway reader process (the main gateway executable); monitored with gwmon or transaction SMGW.

icm.properties
Configuration file for the ICM web administration interface (the ICM is monitored from the SAP system with transaction SMICM).

icmadmin.SAR
Compressed archive containing the files of the ICM web administration interface.

icman
The Internet Communication Manager (ICM) executable; monitored and managed from the SAP system with transaction SMICM.

icmbnd
Program to bind ports with numbers from 0 to 1023

icmon
OS-level monitor for the Internet Communication Manager (ICM), which is used for HTTP(S)- and SMTP-based communication.

instance.lst
List of database-independent executables. These executables are valid for all database systems used by the SAP system.

instancedb.lst
List of database-dependent executables. These executables can only be used with a particular database system.

ipclimits
Reports IPC memory available to SAP at the OS level

j2eeinst.lst
The ‘*.lst’ files are text files used by sapcpe to determine which files to compare/copy on instance startup.

jcmon
Program to monitor and manage Java processes

jcontrol
Program to control Java processes

jenqulib.jar
Java Enqueue Library

jlaunch
Program starts the Java processes

jlogunzip.jar
Java Classes used to unzip archives (used from the sapstartsrv)

jperflib.jar
J2EE client Jar file

jstartup.jar
Java Startup FrameWork jar file

jstartupapi.jar
J2EE Engine Monitoring API

jstartupimpl.jar
J2EE Monitoring

ldap_rfc
LDAP Connector

ldappasswd
Store LDAP directory user and password

ldapreg
LDAP Registration Service

lgtst
Program to test the message server

libicudata30.a
ICU data library – part of the RFC SDK, used for RFC connections.

libicui18n30.a
ICU internationalization library – part of the RFC SDK, used for RFC connections.

libicuuc30.a
ICU common library – part of the RFC SDK, used for RFC connections.

libjenqulib.o
Java enqueue shared library (see jenqulib.jar).

libjmon.so
JMON shared library – part of the RFC SDK, used for RFC connections.

libjperflib.so
Part of the RFC SDK, used for RFC connections.

libregex.o
Part of the RFC SDK, used for RFC connections.

librfcum.o
Dynamic load library – part of the RFC SDK, used for RFC connections.

libsapcsa.o
CSA shared library – part of the RFC SDK, used for RFC connections.

libsapsecu.o
SECU Shared Library – SAP seculib library used for default encryption. It’s referenced in the j2ee startup logs in the working directory.

libsapu16.so
Part of the RFC SDK, used for RFC connections.

libsapu16_mt.so
Part of the RFC SDK, used for RFC connections (multi-threaded variant).

mdxsvr
MDX Parser for RFC

memlimits
The program memlimits lets you determine how much swap space is currently available on the host system.

niping
Program to test the SAP network layer and the SAProuter

rfcexec
The tool to start other programs from within SAP (ABAP) on the OS level via the gateway on any other (or the same) server.

rfcexec.sec
Security configuration file for rfcexec.

rfcping
Ping the RFC layer

rscparulib.o
 Dynamic shared library with code page converter

rscpf2f
Check installed locales for given list of languages.  

rscpf3f
Find possible system code pages for given list of languages.      
          
rscpf_ars
Test program for code page conversion, language handling, and locales in combination with ‘rscparulib.o’.  (only for support)

rscpf_db
Test program for code page conversion, language handling, and locales. This program connects to the database and also attaches to the shared memory of an instance if possible. (only for support)

rslgcoll
Central System Logging Collector

rslglscs
Show the Syslog specific parts of the shared memory segment ‘SCSA’. 

rslgsend
Central System Logging Sender

rslgview
View SAP Log at the OS level

rstrcscs
Program creates a common memory segment for the SCSA, locates the trace switches block within it and initializes the trace switches block.

rstrfile
R/3 system trace file tool

rstrlscs
Shows the trace-specific parts of the SCSA. (The command “rstrcscs r” removes that common memory segment again.)

rstrsscs
The command “rstrsscs” allows change to the switch settings in the trace switches block within the SCSA.

rsyn.bin
For each kernel version of the R/3 System there exists an appropriate file rsyn.bin containing the ABAP/4 syntax description; it describes what to do when compiling an ABAP statement.

sapccm4x
CCMS Agent for ABAP

sapccmsr
CCMS Agent for Java

sapcontrol
sapcontrol is used to stop/start/monitor SAP instances (for example, from the SAP MC).

sapcpe
Checks that the local executables are up to date each time an SAP instance that uses local executables is started.

sapcpeft
Parameter file used by sapcpe

sapcpp46.o
Virus Scan Adapter files (Note 964305)

sapdbmrfc
RFC for SAPDB connections

sapevt
This program is able to trigger events within the SAP system. The tp tool uses this feature. It can be used as trigger for self-written interfaces as well.

sapftp
FTP client that can be used from within the SAP system (from ABAP) to communicate with other FTP servers.

saphttp
HTTP client that can be used from within the SAP system (from ABAP) to communicate with other HTTP servers – e.g. for interfaces.

sapiconv
Program for the conversion of text files

sapkprotp
Relocate a Content Server Repository

sapmanifest.mf
Text file that contains the kernel patch level and is read by the JSPM (Java Support Package Manager).

sapmanifestdb.mf
Text file that contains the database kernel patch level and is read by the JSPM (Java Support Package Manager).

sapuxuserchk
The program xuser is a MaxDB tool that stores the logon information for the DB. sapuxuserchk is a utility called by sapcontrol, a program that uses the web service APIs of the ABAP and Java startup framework to control an instance from the command line.

sapwebdisp
The SAP Web Dispatcher is used for load balancing when setting up an SAP Internet scenario. It is the only application that needs to be located in the DMZ; everything “behind” it can (and should) be located in your intranet. So only one port on one IP address needs to be opened to the Internet, and the SAP Web Dispatcher handles the traffic to the different SAP instances.
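As an illustration of this single-entry-point setup, a minimal Web Dispatcher profile might look like the sketch below; all names, system numbers, and ports are placeholder assumptions, not recommendations:

```
SAPSYSTEMNAME = WD1
SAPSYSTEM = 80
# message server of the backend ABAP system
rdisp/mshost = <message_server_host>
# HTTP port of the backend message server
ms/http_port = 8100
# the one port opened towards the internet
icm/server_port_0 = PROT=HTTP,PORT=8080
```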

sapxpg
Program that starts programs on an external host. The tool for starting OS commands from within SAP Systems.

scs.lst
The ‘*.lst’ files are text files used by sapcpe to determine which files to compare/copy on instance startup.

scsclient.lst
The ‘*.lst’ files are text files used by sapcpe to determine which files to compare/copy on instance startup.

semd
A Test Tool used to verify semaphore operations.

servicehttp
The port number where the server should listen for HTTP requests.

shmd
Related to Shared Memory

showipc
Checks shared memory segments

sldreg
SLD registration program

sldreglib.o
SLD registration shared library

ssfrfc
Secure Store and Forward (SSF)

wdispadmin.SAR
Compressed archive containing the Web Dispatcher administration interface

wdispmon
Web Dispatcher Monitor program

webdispinst.lst
Web Dispatcher Administration Interface list

xml63d.o
Virus Scan Adapter files

vscan_rfc
Virus Scan Adapter files 

SAP Kernel Upgrade

The SAP kernel is the central program that acts as the interface between the SAP application and the operating system. It is responsible for starting and stopping the SAP R/3 system, the dispatcher, the message server, and the other application services.


A kernel upgrade helps to improve system health and performance. Let's look at how to upgrade the SAP kernel.


First we need to find the current kernel version. Log in as the <sid>adm user, navigate to the kernel directory (Windows: usr\sap\DA1\SYS\exe\uc\NTAMDXX, Unix: sapmnt/SID/exe), open a command prompt, and run the command disp+work; it prints the kernel release and patch level.

Download the kernel from the SAP Service Marketplace; make sure to download the files that exactly match your operating system.

The kernel consists of two parts:
1) Database-independent
2) Database-dependent
Both parts need to be downloaded.

Before upgrading the kernel, stop the SAP system and the SAP services, and take a backup of the current kernel to a safe location.

Then extract (uncar) the downloaded kernel files into a new folder with the command
"sapcar -xvf filename.SAR"
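The extraction step can be sketched as a small helper; sapcar must be on the PATH, the archive names below are placeholders for your actual downloads, and the -R option tells sapcar where to extract:

```shell
# Hedged sketch: extract the kernel archives into a staging directory.
extract_kernel() {
  staging="$1"; shift           # target directory for the new kernel files
  mkdir -p "$staging"
  for sar in "$@"; do           # both SAPEXE_XX.SAR and SAPEXEDB_XX.SAR
    sapcar -xvf "$sar" -R "$staging"
  done
}
# Example (placeholder paths):
# extract_kernel /tmp/newkernel SAPEXE_XX.SAR SAPEXEDB_XX.SAR
```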

The system identifies the kernel by its path and folder name, so the extracted files have to be moved into the existing kernel folder:
Windows - usr\sap\SID\SYS\exe\uc\NTAMDXX
Unix - sapmnt/SID/exe

The kernel upgrade procedure is almost the same on Windows and Unix; the only difference is that on Unix, after the previous step, we need to run the following command in the kernel folder:
./saproot.sh SID

Then restart the SAP system and check the current kernel version again using "disp+work".
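After the restart, the new patch level can be verified from the OS shell; a minimal sketch assuming you are logged in as <sid>adm with disp+work on the PATH (the call itself is commented out because it needs a real SAP host):

```shell
# Hedged sketch: print the kernel release and patch number after the upgrade.
check_kernel_version() {
  disp+work -v | grep -iE "kernel release|patch number"
}
# check_kernel_version    # run on the SAP host as <sid>adm
```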


Thursday, April 8, 2021

Partitioning Data Volumes for HANA DB performance improvement.

 Partitioning Data Volumes

Below is a simple question-and-answer walkthrough of data volume partitioning: what it is, how it helps improve overall HANA DB read and write performance, and how it differs from data volume striping.

1. What is data volume partitioning? How does it add a performance advantage over the default setup? Since when has it been available?

Data volumes on the indexserver can be partitioned so that read and write operations can run in parallel with increased data throughput. HANA startup time also benefits from this feature, because data throughput increases massively, depending on the number and performance of the additional disks or volumes.

The data files can now be segmented and stored in different locations and can then be accessed in parallel threads.

This feature has been available since SAP HANA 2.0 SPS 03.

2. How does SAP HANA data volume partitioning take advantage of the NFS filesystem type in HANA?

In the case of Network File Systems data can also be (along with parallel read) written in parallel across multiple connections. Partitioning data volumes in this way will therefore speed up all read/write operations on the volume including savepoints, merges, restart, table loading operations, and backups.

This feature enables the filesystem to have more parallel channels for processing I/O. To truly benefit from this feature additional mountpoints need to be configured for the additional locations of data volume partitions in order to leverage additional TCP connections in case of NFS. No further network configuration is required as long as the network infrastructure can sustain the additional workload
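For illustration, each additional partition location could sit on its own NFS mount so that it gets its own TCP connection; a hypothetical /etc/fstab sketch, where the server name, export paths, mount points, and mount options are all placeholder assumptions:

```
nfsserver:/export/hana/data1  /hana/data1  nfs  rw,vers=4.1,hard  0 0
nfsserver:/export/hana/data2  /hana/data2  nfs  rw,vers=4.1,hard  0 0
```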

3. Can non-NFS filesystems used in HANA benefit from data volume partitioning?

For non-NFS filesystems, the benefit of adopting the feature depends on the setup provided by the hardware vendor / TDI design. Hence this has to be discussed with the vendor for feasibility, as they are responsible for designing the storage layout.

4.Can we use partitioning for log volumes in HANA?

No. HANA data volume partitioning is only available for data volumes, not for log volumes.

5. When does data get written to a newly added data volume partition in HANA?

For a newly added data volume partition on an existing system, data is not immediately redistributed. Fresh I/O writes are distributed to the new data volume partition, and eventually the database achieves an even distribution from a size point of view.

However, if an immediate even distribution of data is required, we have to consider using SAP HANA backup and recovery (only file- and backint-based backup and recovery).

6. How do we perform data volume partitioning?

Starting with SAP HANA 2.0 SPS 03, explicit commands exist to adjust the number of volume partitions (i.e., data files):

ALTER SYSTEM ALTER DATAVOLUME ADD PARTITION

ALTER SYSTEM ALTER DATAVOLUME DROP PARTITION <id>

Starting with SAP HANA 2.0 SPS 04 you can optionally specify a system replication site ID:

ALTER SYSTEM ALTER DATAVOLUME ADD PARTITION SYSTEM REPLICATION SITE <site_id>
ALTER SYSTEM ALTER DATAVOLUME DROP PARTITION <id> SYSTEM REPLICATION SITE <site_id>

Refer to the link below for end-to-end steps, from the OS level to the HANA DB level:

https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/hana-data-file-partitioning-installation/ba-p/2092948

7. What happens when we create a new partition on the indexserver?

When we create a new partition on the indexserver, it is added simultaneously to all indexservers in the topology. New partitions become active after the next savepoint on each indexserver; this is shown in the partition STATE value, which changes from ACTIVATING to ACTIVE. By default all data volumes have a single partition with the numeric ID zero. A numeric partition ID is assigned automatically to new partitions by the HANA persistence layer. If the partition numbering is for any reason inconsistent across the indexservers, any attempt to add new partitions will fail.

8. What are the ways to monitor HANA data volume partitions?

Way 1: Starting with SAP HANA 2.0 SPS 04 you can use SQL: “HANA_Disks_Data_Partitions” (SAP Note 1969700) to display an overview of existing data volume partitions.

Way 2: Using monitoring views: we can see the current data volume configuration in the following two views:

  • M_DATA_VOLUME_STATISTICS: This provides statistics for the data volume partitions on the indexserver including the number of partitions and size.
  • M_DATA_VOLUME_PARTITION_STATISTICS: This view provides statistics for the individual partitions, identified by PARTITION_ID, and includes the partition STATE value.

In a replication scenario we can monitor the M_DATA_VOLUME_PARTITION_STATISTICS view on the secondary via proxy schema SYS_SR_SITE<siteName> (where <siteName> is the name of the secondary site).
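The views above can be queried with any SQL client. The sketch below only assembles and prints the statement; the hdbsql call is commented out because it needs a live instance, and MYKEY is a placeholder hdbuserstore key:

```shell
# Build a query against the partition statistics view (columns from the text above).
SQL="SELECT HOST, PORT, PARTITION_ID, STATE FROM M_DATA_VOLUME_PARTITION_STATISTICS ORDER BY HOST, PARTITION_ID"
# hdbsql -U MYKEY "$SQL"    # run it against a real system
echo "$SQL"
```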

9. What is the performance impact of dropping a data volume partition from the HANA DB?

Dropping an SAP HANA data volume partition involves reading data from the dropped partition and writing it into the existing partitions. Since this can involve significant I/O activity depending on the quantity of reads/writes, it should be performed during periods of low business workload.

10. What happens when we drop a partition using SQL DROP PARTITION?

This command drops the identified partition from all indexservers in the topology. The default partition with ID zero cannot be dropped. If we drop a partition, then all data stored in the partition is automatically moved to the remaining partitions and for this reason dropping a partition may take time. This operation also removes the partition entry from the configuration file.

11. Can we drop an active data volume partition in a HANA DB with HSR enabled?

No. In a running system replication setup, we may not be able to drop an active data volume partition, as system replication uses data volume snapshot technology. We will see the error “Cannot move page inside/out of DataVolume”. In this case it may be necessary to disable system replication, drop the partition, and then set up system replication again.

12. Can we add HANA data volume partitions in a path of our own choice? When do we use a user-defined path? How do we configure / enable it?

Besides the default data volume base path of /usr/sap/<SID>/SYS/global/hdb/data, we can also add partitions in a path of our own choice (using the SQL syntax ADD PARTITION PATH; refer to question 6). The path must be reachable by all nodes or services in the topology. Beneath the specified path, the standard folder structure is created automatically with a numbered folder for each host. A user-defined path is required, for example, if we are using multiple NFS connections so that data can be written in parallel to different paths. This option must be explicitly enabled by setting the PERSISTENCE_DATAVOLUME_PARTITION_MULTIPATH parameter in the customizable_functionalities section of global.ini to TRUE. The partition base path is saved in the indexserver.ini configuration file in the basepath_datavolume key of the partition ID section.
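Enabling user-defined paths and then adding a partition there could look like the sketch below; the path and the hdbuserstore key MYKEY are placeholders, the lower-case spelling of the ini key is an assumption, and the hdbsql calls are commented out because they need a live system:

```shell
# 1) Enable user-defined partition paths in global.ini (assumed ini key spelling).
ENABLE="ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('customizable_functionalities','persistence_datavolume_partition_multipath')='true' WITH RECONFIGURE"
# 2) Add a data volume partition beneath a user-defined path (placeholder path).
ADDPART="ALTER SYSTEM ALTER DATAVOLUME ADD PARTITION PATH '/hana/data2/'"
# hdbsql -U MYKEY "$ENABLE" && hdbsql -U MYKEY "$ADDPART"
echo "$ADDPART"
```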

13. How are data volume partitioning and data volume striping different?

  • SAP HANA data volume partitioning distributes fresh incoming pages across data volume partitions to achieve parallelization of I/O operations wherever possible.
  • SAP HANA data volume striping provides the possibility to limit the size of the existing data volume files: HANA creates a new data volume file and redirects incoming pages to it when no space exists in the older file. There is no even distribution of I/O writes as achieved with data volume partitioning.

14. What is data volume striping? How do we configure it?

To prevent HANA from trying to grow data files beyond a certain file size limit, we need to set the following parameters in the global.ini configuration file of HANA. With these set, HANA creates a new data volume file and redirects incoming pages to the new file if no space exists in the older file or the threshold is met:

global.ini -> [persistence] -> datavolume_striping = true

global.ini -> [persistence] -> datavolume_striping_size_gb = <max_file_size_gb>
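The same two keys can also be set online with SQL instead of editing global.ini by hand; a hedged sketch, where 2000 GB is just an example value (see question 15 for the Azure HLI value), MYKEY is a placeholder hdbuserstore key, and the hdbsql call is commented out because it needs a live system:

```shell
# Set both striping parameters in one statement.
STRIPE="ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','datavolume_striping')='true', ('persistence','datavolume_striping_size_gb')='2000' WITH RECONFIGURE"
# hdbsql -U MYKEY "$STRIPE"
echo "$STRIPE"
```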

15. How does an Azure HANA Large Instance (HLI) make use of the data striping feature? Why is it mandatory to use data striping on Azure HLI?

The storage used in HANA Large Instances has a file size limitation of 16 TB per file. Unlike with the file size limitations of EXT3 file systems, HANA is not implicitly aware of the storage limitation enforced by the HANA Large Instance storage, so it will not automatically create a new data file when the 16 TB file size limit is reached. As HANA attempts to grow a file beyond 16 TB, it will report errors and the indexserver will eventually crash.

To prevent HANA from trying to grow data files beyond the 16 TB file size limit, we have to set the following parameters in the global.ini configuration file of HANA:

  • datavolume_striping=true
  • datavolume_striping_size_gb = 15000

16. How do EXT* and XFS file systems overcome the problem of excessively large file sizes in relation to data volumes?

For EXT* and XFS file systems, the problem of excessively large file sizes in relation to large data volumes is overcome by striping. These file systems have a maximum size of 2 TB for each physical file, and the persistence layer automatically chooses appropriate striping sizes for them. We can define a maximum file size for stripes using the striping configuration parameters in the persistence section of the indexserver.ini file: datavolume_striping (TRUE / FALSE) and datavolume_striping_size_gb (a value between 100 and 2000 GB).