In the era of data-driven decision-making, integrating custom AI models into business processes has moved from innovation to necessity. SAP Business Technology Platform (SAP BTP) empowers developers to build, deploy, and operationalize AI models directly within SAP workflows. This article offers a hands-on guide to integrating custom machine learning models built with Python and TensorFlow into SAP BTP, using SAP AI Core and the SAP AI API services.
Whether you're predicting sales, classifying documents, or automating business approvals, this guide will help you go from code to integration—step by step.
Prerequisites
Before diving in, ensure you have the following:
- SAP BTP account with access to:
  - SAP AI Core
  - SAP AI API
  - SAP AI Launchpad (optional, for lifecycle management)
- Python 3.8+ and pip
- TensorFlow (v2.x)
- Docker (for containerization)
- Basic understanding of ML model training and REST APIs
Step 1: Develop and Train a TensorFlow Model in Python
Let’s assume we are building a binary classification model that predicts customer churn.
```python
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split

# Load sample data
df = pd.read_csv('customer_data.csv')
X = df.drop('churn', axis=1).values
y = df['churn'].values

# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Define a simple model
model = Sequential([
    Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))

# Save model
model.save('churn_model')
```
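Because the model ends in a single sigmoid unit, the served prediction is a churn probability between 0 and 1. As a minimal, dependency-free sketch of that final step (the logit value below is made up for illustration):

```python
import math

def sigmoid(z: float) -> float:
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical logit produced by the last Dense layer for one customer
logit = 1.2
churn_probability = sigmoid(logit)
print(round(churn_probability, 3))  # -> 0.769
```

A probability above a chosen threshold (commonly 0.5) would then be treated as a predicted churner in downstream business logic.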
Step 2: Containerize the Model with Docker
You need to serve the model behind a RESTful API (using Flask or FastAPI) and Dockerize the app.
app.py (Flask serving example)
```python
from flask import Flask, request, jsonify
import tensorflow as tf
import numpy as np

model = tf.keras.models.load_model('churn_model')
app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    data = request.get_json(force=True)
    input_data = np.array(data['input']).reshape(1, -1)
    prediction = model.predict(input_data)
    return jsonify({'churn_probability': float(prediction[0][0])})

if __name__ == '__main__':
    # Bind to all interfaces so the service is reachable inside the container
    app.run(host='0.0.0.0', port=5000)
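A minimal Dockerfile for this service might look like the following; the `requirements.txt` contents and base-image tag are assumptions for illustration, not taken from the article:

```dockerfile
# Assumes requirements.txt lists: flask, tensorflow, numpy
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
COPY churn_model ./churn_model
EXPOSE 5000
CMD ["python", "app.py"]
```

Build and push the image under the same name referenced in the serving template, e.g. `docker build -t yourusername/churn-model .` followed by `docker push yourusername/churn-model`.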
Step 3: Register the Model in SAP AI Core
a) Define an AI Resource Configuration (ai-deployment.yaml)
```yaml
apiVersion: ai.sap.com/v1alpha1
kind: ServingTemplate
metadata:
  name: churn-predictor
spec:
  image: yourusername/churn-model
  protocol: rest
  ports:
    - port: 5000
  resources:
    limits:
      cpu: "1"
      memory: "1Gi"
```
Deploy this via the SAP AI Core CLI or Launchpad.
```shell
aicorectl apply -f ai-deployment.yaml
```
Step 4: Expose the Model with SAP AI API
After deployment, you can expose your model via the AI API by registering it under the respective AI Use Case.
Once registered, your service can be accessed at:
https://<your-instance>.ai.sap.com/inference/predict
Using curl or Postman, test your inference endpoint:
```shell
curl -X POST https://<your-instance>.ai.sap.com/inference/predict \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"input": [0.5, 0.8, 0.3, 0.7]}'
```
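The same call can be made from Python with only the standard library. The URL and token below are placeholders (as in the curl example), and `build_predict_request` is a helper name invented for this sketch:

```python
import json
import urllib.request

def build_predict_request(url, token, features):
    """Build the POST request for the inference endpoint.

    Send it with urllib.request.urlopen(req) once url and token are real.
    """
    payload = json.dumps({"input": features}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_predict_request(
    "https://example.invalid/inference/predict",  # placeholder for your AI API URL
    "dummy-token",                                # placeholder for <access_token>
    [0.5, 0.8, 0.3, 0.7],
)
print(req.get_method())  # -> POST
```

The response body, once the request is sent against a live deployment, would be the JSON produced by the Flask handler, e.g. `{"churn_probability": 0.83}`.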
Step 5: Integrate with SAP Applications
Now that your model is live, you can embed it into business workflows:
- SAP S/4HANA: Use BTP extensions or SAP Workflow Service to call the AI API during sales, procurement, or finance processes.
- SAP Analytics Cloud: Use the model for embedded predictions in dashboards.
- SAP Intelligent RPA: Trigger predictions during robotic process automation flows.
Architecture Diagram
```
+---------------------+      +------------------+      +------------------+
| SAP Application     | -->  |   SAP AI API     | -->  | Dockerized Model |
|  (e.g., S/4HANA)    |      | (Inference Layer)|      |   on AI Core     |
+---------------------+      +------------------+      +------------------+
     Business Logic           Token-based Auth
```