Tuesday, March 9, 2021

Basic Daily checks (TCodes) for SAP Basis Administrator...

A junior SAP Basis administrator needs to know the following to crack the interview:

As a BASIS consultant, we need to know the basics and the daily activities that need to be performed on the system. I’ve listed a few activities that can help us monitor system performance.

SM51: SAP servers

Transaction code SM51 displays a list of the active application servers registered with the SAP message server. Further, you can manage and display the status, users, and work processes of all application servers belonging to the SAP system.

SM66: Global Work Process

SM66 is used to check the active work processes of all the instances. No work process should be in PRIV (private) mode; if one is, that work process is consuming too much memory.

If any work process is in PRIV mode, go to SM51, check the server name and instance number, and contact the user to ask whether he is still working in the system. If the user is not available, cancel the process without core.

SP01: Spool Request Selection

In the SP01 t-code we check the spool request data.

Execute the report with the required date; if more than 500 requests match, we will get a popup to restrict the selection.

At the end of the list, we can review the status of each spool request.

SMGW: Gateway Monitor

The Gateway Monitor (transaction SMGW) is used for analyzing and administering the gateway in the SAP system. We check the number of active gateway connections.

SM37: Job Selection

Transaction code SM37 is used to monitor the background (batch) jobs running in the system.

From the initial screen, you can search by the job name, user name, or program name accordingly with the time condition.

  • Scheduled – The job has been defined, but the start condition has not yet been defined.
  • Released – The job has been fully defined, including a start condition.
  • Ready – The start condition of a released job has been met. A job scheduler has put the job in line to wait for an available background work process.
  • Active – The job is currently running. Active jobs can no longer be modified or deleted.
  • Finished – All steps that make up this job have completed successfully.
  • Canceled – The job has terminated. This can happen in two ways:
  1.  An administrator intentionally terminates the job.
  2.  A job step contains a program that produces an error, such as:
      – An E or A error message in an ABAP program
      – A failure return code from an external SAPXPG program

SM12: Lock entries

In SM12 we check the lock entries. No lock entry should exist for more than 24 hours, and the number of lock entries should not exceed 500.
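As a rough illustration of these thresholds, the check could be sketched like this in Python (the lock entries here are invented sample data; in a real system they would come from SM12 or a monitoring interface):

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(hours=24)   # no lock entry should be older than 24 hours
MAX_COUNT = 500                 # total number of lock entries should not exceed 500

def check_locks(lock_entries, now):
    """Return warnings for the SM12 thresholds described above.

    lock_entries: list of (user, table, created_at) tuples.
    """
    warnings = []
    if len(lock_entries) > MAX_COUNT:
        warnings.append(f"too many lock entries: {len(lock_entries)} > {MAX_COUNT}")
    for user, table, created_at in lock_entries:
        if now - created_at > MAX_AGE:
            warnings.append(f"stale lock by {user} on {table} (older than 24h)")
    return warnings

# Invented sample data: one fresh lock, one stale lock
now = datetime(2021, 3, 9, 9, 0)
locks = [("MILLER", "VBAK", now - timedelta(hours=2)),
         ("SMITH", "MARA", now - timedelta(hours=30))]
print(check_locks(locks, now))
```

Any lock flagged here would then be analyzed in SM12 before deleting it, since deleting an active lock can corrupt a running transaction.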


SM21: System logs

Transaction code SM21 is used to check and analyze the system log for critical entries. The SAP system logs all system errors, warnings, user locks due to failed logon attempts by known users, and process messages.

  • From the initial screen, go to System Log -> Choose -> All remote system logs. Set the date a day before and click on the Reread system log.
  • In the system log analysis window, you can check/analyze the critical error message by double-clicking it.

The list can be restricted to Problems Only, Problems, and All Messages.

SM13: Update Records

Here we check the update records of the system that have been canceled. If the number of canceled records exceeds 50, we need to take action.

ST22: ABAP dumps

Transaction code ST22 is used to list the ABAP dumps generated in the system; we can filter by date and user as required.

Every dump indicates the reason for the error, the transaction code, and the variables that caused the error. The errors can be of various kinds, and after analysis, action should be taken to prevent the error from happening again.

SMQ1: qRFC Monitor (Outbound queue)

We need to check if any outbound entries got stuck in the queue. If so check with the respective job owner and re-execute the entries.

SMQ2: qRFC Monitor (Inbound queue)

We need to check if any Inbound entries got stuck in the queue. If so check with the respective job owner and re-execute the entries.

SM58: Transactional RFC

In SM58 we can check the transactional RFC errors if any occurred.

We can observe different types of errors for different function modules. Check the target system entries and perform the required analysis: if the user requires the entries, we can re-execute them; otherwise we can delete them from the system. For example, here we delete the WORKFLOW_LOCAL_100 entries.

Go to the log file -> Reorganize

Enter the target system name in the Destination box, select the required checkboxes, and execute the program.

We are supposed to delete only entries in the Connection Error, System Error, and Already Executed states.

ST02: SAP Memory Configuration monitor

SAP Memory Configuration monitor checks the SAP Buffers and SAP Memory areas for problems such as swapping.

It is a snapshot of the utilization of SAP shared buffers.

High-water marks of utilization (for example for extended, roll, paging, and heap memory) can be obtained from the SAP memory configuration monitor.
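As an illustration of how buffer quality is judged in ST02, here is a small Python sketch. The counter values are invented, and the 94% threshold is a common rule of thumb rather than a fixed SAP value:

```python
def buffer_hit_ratio(hits, requests):
    """Hit ratio in percent: how often a read was served from the buffer."""
    if requests == 0:
        return 100.0
    return 100.0 * hits / requests

def buffer_ok(hits, requests, swaps, min_ratio=94.0):
    """Rule of thumb: the hit ratio should stay high and swaps should be near zero."""
    return buffer_hit_ratio(hits, requests) >= min_ratio and swaps == 0

# Hypothetical counters for one buffer
print(buffer_hit_ratio(9800, 10000))     # 98.0
print(buffer_ok(9800, 10000, swaps=0))   # healthy buffer
print(buffer_ok(9800, 10000, swaps=37))  # swapping indicates a sizing problem
```

If a buffer shows many swaps, the usual fix is to increase the corresponding buffer size parameter and restart the instance.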

ST06: Memory Overview / Operating System Monitor

The operating system provides the instance with the following resources:

  • Virtual Memory
  • Physical Memory
  • CPU
  • File system Administration
  • Physical disk
  • Network

You can use the operating system monitor to monitor the system resources that the operating system provides. The operating system collector SAPOSCOL gathers the data on these resources.

DB02: DBA Cockpit

Transaction code DB02 is used to analyze and monitor database statistics (DB growth, tablespace sizes, missing indexes, etc.).

  1. Check the tablespace sizes. Go to Tablespaces -> Overview. If a tablespace is reaching the 95% level, it is advisable to increase its size. Auto-extend should be set to Yes.


SMLG: Load Distribution

We need to check this t-code for performance issues; here we check the response times in the load distribution across instances.

Go to-> SMLG -> Load Distribution icon

ST03N: Workload analysis

The ST03 Workload Monitor is the central access point for analyzing performance problems in the SAP system. ST03N is a revised version of transaction ST03. In current SAP releases, ST03N replaces ST03 and is started automatically when you enter transaction code ST03.

Here you can compare the performance values of all instances, or the performance of a particular instance over a period of time. Thanks to the many analysis views available for the data, you can quickly determine the cause of performance problems.

You can use the workload monitor to display the following, among other things:

  • Number of instances configured for your system
  • Number of users working on the different instances
  • Response time distribution
  • Distribution of workload by transaction steps, transactions, packages, sub-applications, and applications
  • Transactions with the largest response times and database time
  • Memory usage for each transaction or each user per dialog step
  • Workload caused by RFC, broken down by transactions, function modules, and destinations
  • Number and volume of spool requests
  • Statistics about response time distribution, with or without the GUI time
  • Optional: table accesses
  • Workload and transactions used by users, broken down by users, accounting numbers, and clients
  • Workload generated by requests from external systems
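To make the response-time figures concrete: ST03N essentially aggregates individual dialog steps per transaction. A toy Python version of that aggregation (the step records below are invented purely for illustration):

```python
from collections import defaultdict

def workload_by_transaction(steps):
    """Aggregate dialog steps into per-transaction averages.

    steps: list of (tcode, response_ms, db_ms) tuples, one per dialog step.
    Returns {tcode: (step_count, avg_response_ms, avg_db_ms)}.
    """
    totals = defaultdict(lambda: [0, 0, 0])  # step count, response sum, DB time sum
    for tcode, resp, db in steps:
        t = totals[tcode]
        t[0] += 1
        t[1] += resp
        t[2] += db
    return {tc: (c, r / c, d / c) for tc, (c, r, d) in totals.items()}

# Invented dialog steps: two for VA01, one for MM03
steps = [("VA01", 800, 300), ("VA01", 1200, 500), ("MM03", 400, 100)]
print(workload_by_transaction(steps))
```

The real monitor breaks the response time down further (wait time, load time, GUI time, and so on), but the aggregation principle is the same.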

STMS_IMPORT: Transport Request

This t-code is used to import transport requests into the system, but for daily monitoring purposes we use it to check the import history.

Go to-> STMS_IMPORT -> select the History Icon.

 

SCC4: Clients Overview

In SCC4 we check whether the client is open or closed.

Go to -> SCC4 -> select the Production Client

If Changes and Transports for Client-Specific Objects is set to No Changes Allowed, the client is closed. If any other option is selected, the client is open.

Also, make sure Cross-Client Object Changes is set to No Changes to Repository and Cross-Client Customizing Objects.

Monday, March 1, 2021

Installing HANA Cloud Connector (HCC) and configuring HANA Cloud Platform (HCP).

Sorry friends,

With my busy schedule, I was not able to write or visit my blog.

Let me start with new stuff:

Installing HANA Cloud Connector (HCC) and configuring HANA Cloud Platform (HCP).

I have created a 12-page document; if anyone wants to have it, send me your email ID and I will forward it.


- Basha

Saturday, March 4, 2017

How to implement SAP Fiori Procurement Overview Page for S4 HANA...

SAP Fiori Overview Page (OVP) is the latest innovation in the SAP Fiori UX. It presents a new way of visualizing data to business users by providing real-time insights in the form of cards. This allows users to gain insight into the bigger picture, enabling faster decision making. This page will help you set up the standard SAP Fiori Procurement Overview Page, which SAP has released as part of S4 HANA 1610.

The main objective is to set up the Procurement Overview Page end-to-end with the complete set of functioning drill-down applications.

Pre-requisites 

a) This app is available only for S4 HANA 1610.

 
Snapshot from Fiori Apps Library page.
https://fioriappslibrary.hana.ondemand.com/sap/fix/externalViewer/#/detail/Apps('F1990')/S6OP



Backend components

Frontend components

Activate SAPUI5 Applications

In order to get the Procurement Overview Page up and running, you need to activate the following SAPUI5 applications. The list includes the associated SAPUI5 applications for the drill-down apps of the Procurement Overview Page.
PRC_OVPS1 – Overview Page
MM_PCTR_ST_MTS1 – Managing Purchasing Contracts
MM_PURDOC_LSTS1 – My Purchase Document Items
SLC_ACTS1 – Manage Activities
MM_PO_CRES1 – Create Purchase Order
PLM_ATH_CRES1 – Re-usable component for Attachment service
FND_OM_OUT_CLS1 – Re-usable component for Output Control
MM_REQNS1 – Purchase Requisition Object Page
MM_REUSE_TEXTS1 – Re-usable component for displaying texts
SBRT_APPSS1 – Smart Business Runtime
These SAPUI5 applications can be activated manually through the SICF transaction. But since activating them manually would be time-consuming, you can instead leverage the standard task list SAP_BASIS_ACTIVATE_ICF_NODES to activate all of them in one shot. Below are the steps.
1. Generate a task list run for SAP_BASIS_ACTIVATE_ICF_NODES in transaction STC01.
2. Fill in the parameters and save.
3. Run the task list and check that it finishes successfully.

Activate OData services

In order to get the Procurement Overview Page up and running end-to-end, you need to activate the following OData services. The list includes the associated OData services for the drill-down apps of the Procurement Overview Page.
C_NONMNGDPURGSPEND_CDS
C_PURGSPENDOFFCONTRACT_CDS
C_REQUISITIONNOTOUCHRATE_CDS
MM_PRC_OVP
C_PURCHASECONTRACTLEAKAGE_CDS
C_SUPPLIERACTIVITY_FS_SRV
/SSB/SMART_BUSINESS_RUNTIME_SRV
MM_PUR_PO_MAINTAIN
CV_ATTACHMENT_SRV
CA_OC_OUTPUT_REQUEST_SRV
C_REQUISITIONTYPEANALYSIS_CDS
C_PURREQUISITION_FS_SRV
MM_PUR_OA_MAINTAIN_SRV
These OData services can be activated manually through the /IWFND/MAINT_SERVICE transaction. But since activating them manually would be time-consuming, you can leverage the standard task list SAP_GATEWAY_ACTIVATE_ODATA_SERV. Below are the steps.
1. Generate a task list run for SAP_GATEWAY_ACTIVATE_ODATA_SERV in transaction STC01.
2. Fill in the parameters for “Define OData Services for Activation”.
3. Select the system alias for activation.
4. Select the OData services for activation.
5. Run the task list and check that it finishes successfully.

Security Roles

 
Assign the following two standard PFCG roles to the end user in the frontend server. Both roles are essential for the Overview Page to work end-to-end with all the drill-down applications.
SAP_BR_PURCHASER
SAP_BR_BUYER

Run Fiori Launchpad





When you click on each card, it should drill down to the corresponding Transactional / Smart Business Analytical application.

Tuesday, December 13, 2016

The power of Hadoop integrated with SAP HANA...

SAP HANA and Hadoop



In the article What is Hadoop? we talked about what exactly Hadoop is, what its advantages are, and how it can best be applied.

Now let's see how Hadoop and HANA can be integrated with each other. 

The power of Hadoop integrated with SAP HANA:

As we have understood so far, Hadoop can store huge amounts of data. It is well suited for storing unstructured data, is good for manipulating very large files, and is tolerant of hardware and software failures.
But the main challenge with Hadoop is getting information out of this huge data set in real time.

We also have SAP HANA, and as we all know, HANA is well suited for processing data in real time. Hence SAP HANA and Hadoop are a perfect match.

To get real-time information out of massive storage such as Hadoop, we can use HANA, and HANA can be integrated directly with Hadoop.
So we can combine Hadoop and HANA to get real-time information from huge data sets.

With the help of the SAP HANA Hadoop integration we can also combine structured and unstructured data. The combined structured and unstructured data is transferred to SAP HANA via a Hadoop/HANA connector.

What does Hadoop bring to HANA?

    • Cost-efficient data storage and processing for large volumes of structured, semi-structured, and unstructured data such as web logs, machine data, text data, call data records (CDRs), and audio and video data.
    • Batch processing, for cases where fast response times are less critical than reliability and scalability.
    • Complex information processing: heavily recursive algorithms, machine learning, and queries that cannot easily be expressed in SQL.
    • A low-value data archive: data stays available, though access is slower.
    • Post-hoc analysis: mining raw data that is either schema-less or whose schema changes over time.
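Hadoop’s batch model is MapReduce: a map phase emits key/value pairs and a reduce phase aggregates them per key. A miniature single-machine sketch of the same idea in plain Python (no Hadoop involved; the word-count example is the classic illustration):

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input split
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    # Reduce: sum the emitted counts per key (word)
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Two tiny "input splits"
docs = ["HANA loves speed", "Hadoop loves volume"]
pairs = [p for d in docs for p in map_phase(d)]
print(reduce_phase(pairs))
```

In a real cluster, the map calls run in parallel on the nodes holding the data blocks, and a shuffle step routes each key to its reducer; the logic per record is the same as above.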

SAP HANA and Hadoop Integration:

Hadoop is considered one of the best options for storing structured, semi-structured, and unstructured data.
The combined structured and unstructured data is transferred to SAP HANA via a Hadoop/HANA connector. BODS is one of the main ways to pull data into HANA.

SAP has also set up a "big-data" partner council, which will work to provide products that make use of HANA and Hadoop. One of the key partners is Cloudera. SAP wants it to be easy to connect to data, whether it's in SAP software or software from another vendor. 
 
SAP Data Services: a simple GUI to build and run ETL processes
 

 

Wednesday, November 9, 2016

What’s new in SAP HANA 2.0... !!!

What’s new in SAP HANA 2.0... !!!

Database Enhancements

High availability and disaster recovery

Load-balance read-intensive operations between a primary and a secondary instance of SAP HANA with the active/active read-enabled mode, and cut downtime with automated orchestration of HA/DR processes. You can also streamline backup and recovery processes with one output for third-party solutions and enhanced multi-tenant capabilities.

Administration

Simplify data management with consolidated admin tools for your entire IT landscape. Streamline workload management with automatic prevention of system overload and runaway queries. Plus, compare multiple test runs and fine-tune upgrade configurations with enhanced workload analysis and capture-and-replay functions.

Security

Maintain regulatory compliance and protect your information with encryption for data-at-rest, as well as on-the-fly data masking based on viewers’ credentials. And leverage new support for LDAP groups to simplify user authorization and credentials management.  

Multi-tier storage and scale-out configurations

Achieve optimal performance while lowering costs by partitioning large data tables across storage tiers. Plus, benefit from automatic data movement across storage tiers and pruning of unneeded partitions for optimal usage of system resources and effective query processing.


Advanced Analytical Processing Enhancements

Search

Find information faster with improved filtering for multi-format data types (e.g. dates), dynamically configured rules that support system-specific parameters, and “batch mode” rules that remove duplicates in data sets.

Graph data processing

Analyze your graph data more efficiently and achieve results faster. New graph data processing capabilities in SAP HANA 2.0 include support for vibrant visualizations and the popular Cypher query language.

Text analytics

Improve outcomes with enhanced text analysis for all languages with space-separated words. Plus easily embed text analysis into your applications with native SQL interfaces and manage domain-specific custom dictionaries.

Predictive analytics and machine learning

Improve prediction accuracy with more pre-packaged algorithms and enhanced metadata to simplify their consumption. Run scoring functions faster with parallel processing across large-scale partitioned data. 

Application Development and Tools Enhancements

Application server

Choose from more development and deployment methods – on premise, in the cloud, collocated with SAP HANA, or on separate servers. Support multiple languages / runtimes, token-based authentication, JAVA / JavaScript tracing, REST APIs, web-based application monitoring, and native multi-tenancy application development. 

Tools, languages, and APIs

Accelerate application development with enhanced modeling for calculation views, core data services, and spatial and graph objects. Easily process, load, and move documents from within applications using APIs. And streamline your code with advanced SQL syntax, new MDX tools, and troubleshooting for advanced SQL syntax. 

Data Integration and Quality Enhancements

Data integration

Simplify and accelerate data movements with parallel data loads, automatic recovery, and data integration with SAP ABAP-based systems (via BAPIs that support virtual procedures). Gain access to more data sources with added support for Microsoft Access and SharePoint. 

Data quality and data federation

Improve the accuracy of location-based data by taking advantage of a new address suggestion capability in SAP HANA 2.0. You can also leverage remote metadata synchronization to enhance the accuracy of remote data schema copies. 

 

 

Thursday, June 30, 2016

SAP HANA FAQ...

1.What is SAP HANA?
SAP HANA is an in-memory computing engine (IMCE) used for real-time processing of huge volumes of data and for building and deploying real-world applications. Combining row-based and column-based DB technology, SAP HANA is an advanced relational DB product serviced by SAP SE. With this high-performance analytic (HANA) system, the data resides in main memory rather than on the hard disk. It removes the burden of maintaining data separately in a legacy system and simplifies the tasks of administrators in this digital world.

2.What is the development language used by SAP HANA?
C++

3.Name the operating system SAP HANA supports.
SAP HANA is supported on Linux. More than 70% of customers run their SAP workloads on SUSE Linux Enterprise Server, which is the recommended OS choice for SAP HANA.

4.Explain Parallel Processing in SAP HANA?
Using the columnar data storage approach, the workload in SAP HANA is divided vertically. The columnar approach allows linear searching and aggregation of data rather than scanning a two-dimensional data structure. If more than one column is to be processed, each task is assigned to a different processor core. Operations on a single column are further parallelized by splitting the column into partitions processed by different cores.

5.List advantages of using SAP HANA database.
• With the HANA technology, you can create next-generation applications that give effective and efficient results in the digital economy.
• By keeping a single copy of the data in memory, SAP HANA supports smooth transaction processing and fault-tolerant analytics.
• Easy and simple operations using an open, unified platform in the cloud.
• High-level data integration to access massive amounts of data.
• Advanced tools for in-depth analysis of the present, the past, and the future.
6.List the merits and demerits of using row-based tables.
Merits:
• No data approach is faster than row-based storage if you want to analyze, process, and retrieve one record at a time.
• Row-based tables are useful when there is a specific demand to access the complete record.
• They are preferred when the table has a small number of rows.
• This storage and processing approach is easy and effective when no aggregations or fast searches are needed.
Demerits:
• Data retrieval and processing operations involve the complete row, even though not all of the information is useful.

7.List advantages of column-based tables.
• They allow smoother parallel processing of data, as the data in columns is stored vertically; to access data from multiple columns, every operation can be allocated to a separate processor core.
• Only the specific columns needed for a SELECT query have to be accessed, and any column can serve as an index.
• Efficient storage, since most columns hold relatively few distinct values and therefore compress at a high rate.
8. What table type is preferred in SAP HANA Administration: column-based or row-based?
Since analytic applications require massive aggregations and agile data processing, column-based tables are preferred in SAP HANA, as the data in a column is stored contiguously, one value after the other, enabling faster and easier reading and retrieval. Thus, columnar storage is preferred for most OLAP (SQL) queries. On the contrary, row-based tables force users to read all the information in a row, even when data from only a few specific columns is required.
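The difference is easy to see in code. Below, the same tiny table is kept once row-wise and once column-wise (a simplified model, not HANA’s actual storage engine); an aggregation over one column only touches one contiguous list in the columnar layout:

```python
# The same data in two layouts
row_store = [
    {"id": 1, "region": "EU", "amount": 100},
    {"id": 2, "region": "US", "amount": 250},
    {"id": 3, "region": "EU", "amount": 175},
]
column_store = {
    "id":     [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [100, 250, 175],
}

# Row store: SUM(amount) must walk every complete record
total_rows = sum(rec["amount"] for rec in row_store)

# Column store: SUM(amount) reads one contiguous column only
total_cols = sum(column_store["amount"])

print(total_rows, total_cols)  # both 525
```

Fetching one complete record shows the opposite trade-off: the row store returns it in one access, while the column store must pick the same position out of every column.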
9.What is the main SAP HANA database component?
The index server. It contains the actual data engines for data processing, processes incoming SQL and MDX statements, and executes the transactions.
10.Explain the concept of Persistence Layer.
The persistence layer in SAP HANA handles all logging operations and transactions for secure backup and data restoration. It manages data stored in both row and column format and provides regular savepoints. Built on the persistence layer concept of SAP’s relational databases, it ensures successful data restores.
Besides managing log data on disk, HANA’s persistence layer allows read and write operations via all storage interfaces.
11.Define Modeling Studio in SAP Hana Administration.
Modeling Studio is an Eclipse-based development and administration tool in SAP HANA, which includes live project creation.
• The SAP HANA Studio builds development objects and deploys them, to access and modify data models such as HTML and JavaScript files.
• It also handles various data services to load data from the SAP warehouse and other related databases.
• It is responsible for scheduling data replication tasks.
12.List the different compression techniques in HANA?
• Run-length encoding
• Cluster encoding
• Dictionary encoding
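Two of these techniques are simple enough to sketch. Run-length encoding stores each value once with a repeat count; dictionary encoding replaces each distinct value with a small integer ID. The Python below is a simplified illustration of the idea, not HANA’s actual implementation:

```python
def run_length_encode(values):
    """['A', 'A', 'A', 'B'] -> [('A', 3), ('B', 1)]"""
    encoded = []
    for v in values:
        if encoded and encoded[-1][0] == v:
            encoded[-1] = (v, encoded[-1][1] + 1)  # extend the current run
        else:
            encoded.append((v, 1))                 # start a new run
    return encoded

def dictionary_encode(values):
    """Distinct values go into a dictionary; the column stores integer IDs."""
    dictionary = sorted(set(values))
    ids = {v: i for i, v in enumerate(dictionary)}
    return dictionary, [ids[v] for v in values]

col = ["EU", "EU", "EU", "US", "US", "EU"]
print(run_length_encode(col))   # [('EU', 3), ('US', 2), ('EU', 1)]
print(dictionary_encode(col))   # (['EU', 'US'], [0, 0, 0, 1, 1, 0])
```

Both work best on sorted columns with few distinct values, which is exactly the shape of typical columnar data.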
13.Explain SLT
SLT expands to SAP Landscape Transformation and refers to trigger-based replication. SLT replication permits data transfer from a source to a target, where the source can be an SAP or non-SAP system, while the target has to be SAP HANA with the HANA database. Users can accomplish data replication from multiple sources. The three replication techniques supported by HANA are:
• SLT
• SAP Business Objects Data Services (BODS)
• SAP HANA Direct Extractor Connection (DXC)
14. Name the replication jobs in SAP HANA.
• Master Job (IUUC_MONITOR_<MT_ID>)
• Data Load Job (DTL_MT_DATA_LOAD_<MT_ID>_<2digits>)
• Master Controlling Job (IUUC_REPLIC_CNTR_<MT_ID>)
• Migration Object Definition Job (IUUC_DEF_MIG_OBJ_<2digits>)
• Access Plan Calculation Job (ACC_PLAN_CALC_<MT_ID>_<2digits>)
15. What is Latency?
The time duration to perform data replication starting from the source to the target system is known as latency.
16.What are the various components of SAP HANA Administration?
• SAP HANA Studio
• SAP HANA Application Cloud
• SAP HANA Cloud
• SAP HANA DB
17.How to perform backup and recovery operations?
During regular operation, data is by default stored to disk at savepoints in SAP HANA. As soon as there is an update or transaction, logs become active and are saved to disk. In case of a power failure, the database restarts like any other DB, returning to the last savepoint log state. SAP HANA requires backups to protect against disk failure and to reset the DB to a previous state. The backups run while users keep performing their tasks.
18.Define SLT Configuration
A configuration holds the information needed to establish a connection between the source system, the SLT system, and the SAP HANA system, as stated in the SLT system. Users can define a new configuration in the Configuration and Monitoring Dashboard.
19.What is Stall?
The waiting process for data to load from the main memory to the CPU cache is called Stall.
20.Define different types of information views.
There are primarily three types of information views in SAP HANA, which are all non-materialized.
• Attribute view
• Analytic view
• Calculation View
21.What is the Configuration and Monitoring Dashboard?
It is an application on the SLT Replication Server that provides configuration information for data replication; the replication status can also be monitored there.
22.What is logging table?
The logging table records all changes to a replicated table so that they can be replicated to the target system.
23.How to define Transformation rules in HANA?
Using advanced replication settings, transformation rules are specified to transform data from source tables during the replication process, for instance rules to convert fields, fill vacant fields, or skip records. These rules are defined using the advanced replication settings (transaction IUUC_REPL_CONT).
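The three kinds of rules mentioned (convert fields, fill vacant fields, skip records) can be pictured as a function applied to every source record during replication. The field names and rule conditions below are hypothetical, purely for illustration:

```python
def transform(record):
    """Apply SLT-style transformation rules to one source record.

    Returns the transformed record, or None to skip it.
    """
    # Rule 1: skip records – drop records from a hypothetical test client
    if record.get("MANDT") == "000":
        return None
    out = dict(record)
    # Rule 2: convert a field – normalize the currency code to upper case
    out["WAERS"] = out.get("WAERS", "").upper()
    # Rule 3: fill a vacant field – default the country if it is empty
    if not out.get("LAND1"):
        out["LAND1"] = "DE"
    return out

rows = [
    {"MANDT": "100", "WAERS": "eur", "LAND1": ""},
    {"MANDT": "000", "WAERS": "usd", "LAND1": "US"},
]
replicated = [r for r in (transform(x) for x in rows) if r is not None]
print(replicated)
```

In SLT, the equivalent logic is attached to a table via the advanced replication settings rather than written as free-standing code, but the per-record effect is the same.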
24.Explain the role of transaction manager and session?
SAP HANA transaction manager synchronizes database transactions keeping the record of closed and open transactions. When a transaction is committed or rolled back, the manager informs all the active stores and engines about the action so that they can perform required actions in time.
25.How is SQL statement processed in SAP HANA?
Each SQL statement in SAP HANA is carried out in the form of a transaction. Every time, a new session is allocated to a new transaction.
26. Define Master-Controller job.
A master controller job is responsible for creating the database logging table in the source system. It also creates synonyms and new entries in the SLT server administration when a table is loaded or replicated.
27.How users can avoid un-necessary storage of logging information?
Pause the replication process and terminate the schema-related jobs.
28.Is the table size in source system and Sap HANA system same?
No
29.When to change the number of Data Transfer Jobs?
The number of data transfer jobs is changed when the initial load speed or the replication latency is not up to the mark. At the end of the initial load, the number of initial load jobs may be reduced.
30.What is the default IMCE Studio perspective?
Administrator Console

Sunday, May 1, 2016

Data Load from S/4HANA into BW via SDA... !!!

Being an SAP consultant, I am especially interested in everything that has to do with embedded analytics in S/4, but also in the integration with existing platforms like Business Objects and, last but not least, Business Warehouse.
 
When discussing SAP Activate and best practices, it was brought to my attention that there are a lot of best practices on how to integrate S/4 with Business Objects and BW. You can find them at the following location:
 
SAP Best Practices for analytics with SAP S/4HANA
 
One of the interesting integration scenarios one might come across is the ability to take a HANA view (or CDS view) and persist it in BW. Think about scenarios where you might like to take the accounts payable or receivable at a certain moment and build them up from a historical perspective. As normal real-time analytics would only show you the open items at a certain key date, it makes sense to calculate snapshots per month to show open items as a trend.
 
In this blog post, I would like to show you how easy it is to go from a real-time view of your data to a persistent way of looking at it.


Connecting S/4 to BW
 
In short, we want to take an existing HANA model (one of the 3000+ views) and use it as a DataSource in BW.
 
 
We will be using a combination of SDA (Smart Data Access), an Open ODS view, and an Advanced DSO to go from a virtual scenario to a persistent scenario.
The open ODS view architecture:
 
 
Generating an ADSO from an Open ODS view is something I’ll show you in the next paragraphs.


Creating the SDA connection
 
Creating a remote connection to your S/4 system is a piece of cake. Go into the HANA studio of the receiving BW system, go to Provisioning, and create a new connection.
 
 
Be sure to select the correct driver, enter the IP address of the source system, and log on with your system user.
 
If all is ok, you will get the following screen
 
 
Now the fun starts. Select your view. I suggest you look at this blog post to get an idea of how to find the view you’re looking for:
 
SAP S/4HANA Embedded Analytics – A detailed Walkthrough (Part 1)
 
For my example I will take the S4_HANA_SDA_sap.erp.sfin.fi.ar/FAR_CUSTOMER_LINE_ITEMS source and add it as a virtual (remote) table by right-clicking on the view.




After adding the virtual table, you can run a quick SQL content query to check the results from the source table.
 
 
Connecting the view to BW
 
Moving along to my BW system to connect the remote table to an Open ODS view.
 
Go to your InfoArea of choice, right-click, and create an Open ODS view.
 
 
As I want to keep my ADSO fairly aggregated, I only select a number of fields
 
 
We now have a virtual connection via BW to S/4!
 
From Open ODS to Advanced DSO


Persisting it only takes two more steps:
 
Press generate dataflow to create the data source
 
 
Press a second time to get your flow generated
 
 
And ready we are with our ADSO and data flow!
 
 
I am just going to make one small modification in the ADSO and transformation by adding today's date as part of my key so that I truly have a snapshot per day
 
 
Load and activate the ADSO
 
 
We have data!
 
 
Let’s do a quick test in Analysis for Office
 
 
All done, we have snapshot AR data from S/4 per day!