Tuesday, May 12, 2026

SAP HANA DB - Refresh / Migration / Tenant Copy Procedure...!!!

This article details the process of copying a HANA database using the tenant copy method. This approach is useful in various scenarios, such as homogeneous database migrations, database refreshes, or the creation of new systems from existing instances.

A particularly useful feature is that the destination tenant does not need to be identical to the source tenant; it can even run on a different instance number!

The underlying mechanism is similar to HANA database replication. The following prerequisites apply:

Open communication from the target system to the source system

Creation of credentials for authenticated access to the source system

Unlike HANA System Replication, no operating system-level access is required. Instead, the credentials for the SYSTEM user (or an equivalent user) are required for both the target and source HANA databases.

Configuration of Parameters in the Source System

The parameters listed below must be configured in the source system. Since these may already be set in the database, a prior verification is advisable. As no SSL communication or trust relationship existed between the source and the target in this instance, SSL was disabled.

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('multidb', 'enforce_ssl_database_replication') = 'false' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication_communication', 'enable_ssl') = 'off' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('communication', 'ssl') = 'off' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('communication', 'listeninterface') = '.global' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication', 'logshipping_async_buffer_size') = '1073741824' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication', 'logshipping_timeout') = '30' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('inifile_checker', 'enable') = 'true' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('inifile_checker', 'replicate') = 'false' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('system_replication', 'logshipping_async_buffer_size') = '10737418240' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication', 'logshipping_async_wait_on_buffer_full') = 'false' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication', 'logshipping_max_retention_size') = '1048576' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication', 'enable_log_retention') = 'on' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication_communication', 'listeninterface') = '.global' WITH RECONFIGURE;
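
Before applying the statements above, the currently effective values can be verified with a query against the M_INIFILE_CONTENTS monitoring view; the section list below mirrors the parameters used in this procedure:

```sql
-- Show the current values of the relevant sections (run in the source SystemDB)
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
  FROM M_INIFILE_CONTENTS
 WHERE SECTION IN ('multidb', 'communication', 'persistence', 'inifile_checker',
                   'system_replication', 'system_replication_communication')
 ORDER BY FILE_NAME, SECTION, KEY;
```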

Configuration of Parameters in the Target Database

The following commands must be executed in the target database (SystemDB):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('multidb', 'enforce_ssl_database_replication') = 'false' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication_communication', 'enable_ssl') = 'off' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('communication', 'ssl') = 'off' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('communication', 'listeninterface') = '.global' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication', 'logshipping_async_buffer_size') = '1073741824' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('system_replication', 'logshipping_timeout') = '30' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('inifile_checker', 'enable') = 'true' WITH RECONFIGURE;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('inifile_checker', 'replicate') = 'false' WITH RECONFIGURE;


Note: If these parameters have been newly set, a restart of the HANA database (both source and target) may be required.

Additionally, the following command can be used to verify whether installed plugins are present in the source system, as they must be identical in the target system:

SELECT * FROM M_PLUGIN_MANIFESTS;

Executing the Tenant Copy

The following steps are required to initiate the tenant copy in the target system's SystemDB (via the HANA Studio SQL Editor):

1) Stop the tenant database: ALTER SYSTEM STOP DATABASE <SID>; 
(If the target SID is identical to the source SID, the existing tenant must be deleted: DROP DATABASE <SID>;)
2) Create credentials for the source system: 
CREATE CREDENTIAL FOR COMPONENT 'DATABASE_REPLICATION' PURPOSE '<Source DB Hostname>:<Nameserver SQL Port>' TYPE 'PASSWORD' USING 'user="SYSTEM";password="<Password>"';

3) Start the copy process: 
CREATE DATABASE <SID> AS REPLICA OF <Source DB SID> AT '<Source DB Hostname>:30001'; -- nameserver SQL port 3<nn>01, here instance 00
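
Put together, the sequence might look as follows; the hostname srchost, source SID SRC, target tenant TGT, and instance 00 are hypothetical values for illustration only:

```sql
-- All statements run in the target system's SystemDB
ALTER SYSTEM STOP DATABASE TGT;

-- Credential for authenticated access to the source nameserver (port 3<nn>01)
CREATE CREDENTIAL FOR COMPONENT 'DATABASE_REPLICATION'
  PURPOSE 'srchost:30001' TYPE 'PASSWORD'
  USING 'user="SYSTEM";password="<Password>"';

-- Start the tenant copy
CREATE DATABASE TGT AS REPLICA OF SRC AT 'srchost:30001';
```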

Progress can be monitored as follows:

SELECT * FROM SYS_DATABASES.M_DATABASE_REPLICA_STATISTICS;

In the event of errors regarding authentication or an unknown replication status, the index server log files on the target system provide further insight.

Takeover and Completion of Replication

Once the status shows "Active," the copy is ready for finalization. In the case of a migration, the application on the source system must first be shut down.

1) Finalize replication: 
ALTER DATABASE <SID> FINALIZE REPLICA;
2) Remove credentials: 
DROP CREDENTIAL FOR COMPONENT 'DATABASE_REPLICATION' PURPOSE '<Source DB Hostname>:30001' TYPE 'PASSWORD';
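
After finalization, the new tenant should report as active; this can be checked from the SystemDB, for example:

```sql
-- List all tenants and their current state
SELECT DATABASE_NAME, ACTIVE_STATUS, ACTIVE_STATUS_DETAILS
  FROM M_DATABASES;
```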

Upon completion, the parameters configured during the procedure can be reset to their original values. Using this method, systems have been successfully migrated between different data centers.

Note:
If desired, the ports can be reset to match the original system configuration using the commands below:

ALTER SYSTEM STOP DATABASE <SID>;
ALTER DATABASE <SID> ALTER 'indexserver' AT '<Hostname>:31040' TO '31003';
ALTER SYSTEM START DATABASE <SID>;

Friday, May 8, 2026

Joule - A shift from "AI features" to "AI agents"...!!!


The pivotal architectural distinction came in the morning session: Joule Skills are deterministic: fixed inputs, fixed outputs, predictable logic, the unit of trust. Joule Agents are adaptive: they sequence skills together through LLM-driven reasoning, handle multi-step problems, recover from errors, and operate within a charter defined inside Joule Studio. Skills are the verbs; agents are the workflows.

This separation matters more than it sounds. It means you can:

  • Govern skills tightly while letting agent reasoning evolve.
  • Reuse the same skill across many agents.
  • Audit the deterministic layer rigorously while still letting the adaptive layer improve.

It is, in effect, the architectural answer to the most common enterprise objection to LLM-based automation: "we can't put a black box on a financial close." With Joule, you don't. You put a deterministic skill on the close, and you let the agent decide when to call it.

Joule for Developers showed how this manifests for engineering teams: code generation, refactoring, test scaffolding, application generation across SAP Build, ABAP, JavaScript, and Build Process Automation. The fine-tuning on the SAP codebase is what makes it different — general foundation models simply don't understand ABAP idioms or CDS view semantics the way Joule does.

Joule for Consultants put a number on the productivity claim: 14% average project acceleration, roughly 1.5 hours per consultant per day, 40% less time analyzing code, 50% less rework. The mechanism is a 25-million-document SAP knowledge base layered with the ABAP-tuned models. For implementation partners, this isn't a productivity tool — it's a margin lever.


Monday, January 12, 2026

ACDOCA Repartitioning in SAP S/4HANA...!!!

ACDOCA is the most critical and data-heavy table in S/4HANA. As business volume grows, a low partition count leads to:

  • Poor parallelism
  • Longer financial close cycles
  • High memory pressure during heavy jobs
  • Slower reporting and analytics

Scaling partitions from 4 to 12 improves CPU utilization, memory distribution, and runtime of finance workloads. But repartitioning is a high-impact operation and must be executed in a controlled and technical manner.


Step 1 – Pre-Checks

Before touching ACDOCA:

  • Ensure overall system health.
  • Check available memory:

System | RAM     | Statement Memory
QAS    | ~1 TB   | 400 GB
PPD    | ~1.8 TB | 300 GB
PRD    | ~3.8 TB | 300 GB

  • Decide whether temporary VM scale-up is required based on runtime vs cost.

Step 2 – Mandatory Parameters (Exact Names)

indexserver.ini → [partitioning]

split_threads = 64

Controls how many threads are used during repartitioning.

global.ini → [memorymanager]

statement_memory_limit = 400 GB

total_statement_memory_limit = 400 GB

Protects system memory during long-running ALTER.
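
A sketch of the corresponding ALTER SYSTEM statements; the memory limits take integer GB values, and 400 is the QAS sizing from the tables in this post, so adapt it to your system:

```sql
-- Threads used during the repartitioning (split) operation
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('partitioning', 'split_threads') = '64' WITH RECONFIGURE;

-- Per-statement and global statement memory limits (values in GB)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('memorymanager', 'statement_memory_limit') = '400',
      ('memorymanager', 'total_statement_memory_limit') = '400'
  WITH RECONFIGURE;
```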

Step 3 – Scale-Up Strategy

Temporary scale-up reduces runtime significantly:

System | Base RAM | Temporary VM
QAS    | ~1 TB    | M64s
PPD    | ~1.8 TB  | M96
PRD    | ~3.8 TB  | M128m

Always downscale after successful repartitioning.

Step 4 – Runtime Comparison (4 → 12)

System | Threads | Statement Memory | Runtime
QAS    | 64      | 400 GB           | 7–8 hours
PPD    | 64      | 300 GB           | ~7.65 hours
PRD    | 64      | 300 GB           | ~5:03:49

Scale-up + correct parameters saves hours.

Step 5 – Repartition Command

ALTER TABLE "SAPHANADB"."ACDOCA" PARTITION BY HASH (BELNR) PARTITIONS 12; 

Step 6 – Execution Method (Very Important)

Initial partitioning can be done via the Studio wizard. Repartitioning must be done using the ALTER command only.

Avoid Studio GUI for long-running ALTER:

  • Session timeouts
  • IDE disconnects
  • You lose visibility, even though the job continues

Best practice: Run from DB host using hdbsql:

nohup hdbsql -u <DBUSER> -d <DBNAME> -A -j -C \
  'ALTER TABLE "SAPHANADB"."ACDOCA" PARTITION BY HASH (BELNR) PARTITIONS 12;' &

Why:

  • nohup → survives logout
  • & → runs as background job
  • hdbsql → stable for long execution

Step 7 – Monitoring & Verification

Check status:


SELECT * FROM M_TABLE_PARTITION_OPERATIONS WHERE TABLE_NAME = 'ACDOCA'; 

Verify partition layout:

SELECT PART_ID, PARTITION_SPEC, RECORD_COUNT FROM TABLE_PARTITIONS WHERE TABLE_NAME = 'ACDOCA'; 

Optional runtime statistics:

SELECT * FROM M_TABLE_PARTITION_STATISTICS; 

Final Checklist

  • Memory tuned
  • Threads configured
  • VM scaled
  • ALTER executed via hdbsql + nohup
  • Status monitored via SQL
  • VM scaled back

Conclusion

Repartitioning ACDOCA is not just a DDL change — it is a controlled engineering operation. With the right parameters, execution method, and monitoring, you can:

  • Reduce runtimes by hours
  • Avoid production risk
  • Make ACDOCA future-ready

Use the attached infographic as your one-stop technical reference for every ACDOCA repartitioning project.

Tuesday, November 25, 2025

SAP Joule - Behind the Scenes...!!!

When you type a question into SAP Joule inside S/4HANA, SuccessFactors, Ariba, or any SAP cloud app, a surprisingly sophisticated orchestration begins.


Here’s the flow explained in the simplest possible way.

1️⃣ The User Query

It starts when a user asks Joule something:

“Create a purchase order”
“Show my pending invoices”
“Explain this error”

This query is sent from the SAP app’s native Joule interface.

2️⃣ Joule’s Intelligence Layer Kicks In

Before going to an LLM, Joule evaluates three things:

↳ Scenario Catalog

What actions am I allowed to perform?
Joule checks SAP-delivered skills and customer-built skills across your SAP landscape.

↳ Knowledge Catalog (RAG)

Do I need to look up information?
Joule retrieves SAP-owned + customer-owned content using Retrieval Augmented Generation for Enterprise.
This prevents hallucinations.

↳ User Context & Authorization

Who is this user? What app are they in? What roles do they have?
Joule NEVER shows or performs anything the user can’t do directly in SAP.

This is what makes Joule enterprise-safe.

3️⃣ The LLM Step (Dialog Brain)

Using all this context, Joule enriches the user query and sends it to an LLM inside SAP’s AI Foundation / Generative AI Hub.

Important:
SAP’s contract with LLM providers forbids training on customer data.
Your data stays yours.

4️⃣ Action or Insight — Depending on the Query

Joule now decides:

Generate an answer
(for “inform me” questions)

OR

Invoke a Joule Function
for “do something” tasks

↳ create PO
↳ approve leave
↳ show sales orders
↳ navigate to the right Fiori app

It calls the right backend system (S/4HANA, SuccessFactors, Ariba, etc.) via secure SAP BTP connectivity.

5️⃣ Secure Response Delivery

The response goes back to the user — clean, filtered, grounded, and aligned with:

↳ Enterprise security
↳ SAP authorizations
↳ Responsible AI standards

This is why Joule is different from generic copilots — it respects SAP rules.

The Architecture That Makes This Possible

1. SAP BTP (Foundation)

↳ Joule
↳ AI Core
↳ Generative AI Hub
↳ SAP Build Work Zone
↳ Joule Studio

This is the orchestration + intelligence layer.

2. SAP Cloud Identity Services (Security)

Handles:

↳ Authentication
↳ Authorization
↳ SCIM provisioning
↳ OIDC / SAML trust

This is the IAM backbone of Joule.

3. SAP Business Systems (Execution Layer)

↳ S/4HANA Public/Private Cloud
↳ SuccessFactors
↳ Ariba
↳ Concur

These systems execute the actual business processes.

Together, they allow Joule to think, retrieve, understand, and act.

Understanding this flow helps you design better:

↳ Skills
↳ Extensions
↳ AI Agents
↳ Custom integrations
↳ Secure enterprise workflows