Soracom SigV4 Linkage and Inference with Amazon SageMaker

By using IAM authentication, Beam can send data to Amazon SageMaker for inference while restricting access to the AWS account where Beam operates, which is under the SORACOM account. Because Beam handles authentication, there is no need to install credentials on the device: by simply specifying the Beam endpoint, the device can access Amazon SageMaker and perform inference.

This page explains how to create an XGBoost model that predicts electricity demand from the time, temperature, and humidity, using the actual electricity demand in Tokyo in December 2022 (in units of 10,000 kW; source: TEPCO Power Grid Co., Ltd.) and weather data for Tokyo (source: Japan Meteorological Agency), and how to send data from a device to perform inference.

:warning: Use the US West (Oregon) (us-west-2) region for Amazon SageMaker; operation has been confirmed there. Some of the code introduced on this page is known not to work in the Asia Pacific (Tokyo) (ap-northeast-1) region.

Step 1: Create an Amazon SageMaker Notebook Instance for Data Preparation

Create an Amazon SageMaker Notebook instance to create an AI model.

  1. Access the Amazon SageMaker notebook instance screen in the US West (Oregon) (us-west-2) region.

  2. Click on [Create Notebook Instance].

  3. In the Notebook Instance Settings, enter a name for the notebook instance (e.g. beam-sagemaker) in the [Notebook Instance Name].

  4. Click on [Create a new role] under [IAM Role] in [Permissions and Encryption]. The “Create IAM Role” screen will be displayed.

  5. Verify that “Any S3 bucket” is selected in the [Specify an S3 bucket - Optional] option and click [Create role].


    An IAM role is created that allows the notebook instance to use SageMaker and S3.

  6. Click on [Create Notebook Instance]. The creation of the notebook instance begins, so please wait until the [Status] displays InService.

Step 2: Create a model using Jupyter Notebook

Use the notebook instance created in Step 1: Create an Amazon SageMaker Notebook Instance for Data Preparation to create a model.

  1. On the notebook instance screen, click on the notebook instance with InService displayed in the [Status].
    The notebook instance settings screen will be displayed.

  2. Click on [Open Jupyter].

  3. Click on [New] → [conda_python3] in order. A Jupyter Notebook is created.

  4. Paste the following code into a cell and click [Run]. The code will execute, and the libraries necessary to run the code described on this page will be imported.

# import libraries
import boto3, sagemaker
import uuid, urllib.request, os
import numpy as np
import pandas as pd

  5. Paste the following code into the next cell and click [Run].
# Define IAM role
role = sagemaker.get_execution_role()
prefix = 'sagemaker/beam-xgboost'
my_region = boto3.session.Session().region_name
xgboost_container = sagemaker.image_uris.retrieve(framework="xgboost",region=my_region,version='1.5-1')
print("my_region: " + my_region + ", xgboost_container: " + xgboost_container)

    The code will execute, and the AWS region (my_region) and the URL of the container image to be used for model training (xgboost_container) will be displayed:

my_region: us-west-2, xgboost_container: 123456789012.dkr.ecr.us-west-2.amazonaws.com/sagemaker-xgboost:1.5-1

    :information_source: For the latest container image compatible with Amazon SageMaker’s XGBoost, refer to Docker Registry Paths and Example Code: XGBoost (algorithm).

  6. Paste the following code into the next cell and click [Run].
    The first line of the code specifies the name of the Amazon S3 bucket. Change it as needed.

bucket_name = 'beam-sagemaker-' + str(uuid.uuid4())
print("bucket_name: " + bucket_name)
s3 = boto3.resource('s3')
try:
    if my_region == 'us-east-1':
        s3.create_bucket(Bucket=bucket_name)
    else:
        s3.create_bucket(Bucket=bucket_name, CreateBucketConfiguration={ 'LocationConstraint': my_region })
    print('S3 bucket created successfully')
except Exception as e:
    print('S3 error: ',e)
    An Amazon S3 bucket named beam-sagemaker-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX will be created:
bucket_name: beam-sagemaker-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
S3 bucket created successfully
  7. Paste the following code into the next cell and click [Run].
csv_filename = "weather_power_202212.csv"
urllib.request.urlretrieve("https://users.soracom.io/ja-jp/docs/beam/aws-sagemaker/files/" + csv_filename, csv_filename)
s3.Bucket(bucket_name).upload_file(csv_filename, csv_filename)
    The weather_power_202212.csv (sample data for training) will be downloaded and uploaded to the S3 bucket.

    :information_source: weather_power_202212.csv is data created using the actual power demand (in units of 10,000 kW) for Tokyo in December 2022 (source: TEPCO Power Grid Co., Ltd.) and Tokyo’s weather data (source: Japan Meteorological Agency).

  8. Paste the following code into the next cell and click [Run].
try:
    data_key = 'weather_power_202212.csv'
    data_location = 's3://{}/{}'.format(bucket_name, data_key)
    df_weather_power = pd.read_csv(data_location)
    print('Success: Data loaded into dataframe.')
except Exception as e:
    print('Data load error: ',e)
df_weather_power.head()
    The weather_power_202212.csv (sample data for training) uploaded to the S3 bucket is loaded and displayed in tabular format.
    The sample data for training includes date information, weather data, and actual power consumption; as examples, we will use [time (hour)], [temperature], [humidity (%)], and [actual_power (actual power demand, in units of 10,000 kW)].

  9. Paste the following code into the next cell and click [Run].
df_weather_power["time"]=df_weather_power["time"].str.replace(':00','').astype('int')
train_data, test_data = np.split(df_weather_power.loc[:,["time","temperature","humidity", "actual_power"]].sample(frac=1, random_state=1729), [int(0.7 * len(df_weather_power))])
print(train_data.shape, test_data.shape)
    The sample data for training is split into training data (train_data) and test data (test_data):

(520, 4) (223, 4)
  10. Paste the following code into the next cell and click [Run].
pd.concat([train_data['actual_power'], train_data.drop(['actual_power'], axis=1)], axis=1).to_csv('train.csv', index=False, header=False)
s3.Bucket(bucket_name).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')
s3_input_train = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/train'.format(bucket_name, prefix), content_type='csv')
    The training data is uploaded to the S3 bucket.

  11. Paste the following code into the next cell and click [Run].

sess = sagemaker.Session()
xgb = sagemaker.estimator.Estimator(xgboost_container, role,
    instance_count=1, instance_type='ml.m5.xlarge',
    output_path='s3://{}/{}/output'.format(bucket_name, prefix),
    sagemaker_session=sess)
xgb.set_hyperparameters(objective='reg:squarederror', num_round=100)
    An Amazon SageMaker training job is created. As an example, we use SageMaker’s XGBoost to train a regression model that predicts “actual_power” (actual power demand, in units of 10,000 kW) from the given “time” (hour), “temperature”, and “humidity” (%).

  12. Paste the following code into the next cell and click [Run].

xgb.fit({'train': s3_input_train})
    The training job is executed. It may take several minutes to complete. When the training job is finished, output such as the following is displayed:

Billable seconds: 999

  13. Paste the following code into the next cell and click [Run].

from sagemaker.serverless import ServerlessInferenceConfig

serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb = 2048,
    max_concurrency = 5
)
serverless_predictor = xgb.deploy(serverless_inference_config = serverless_config)

    The trained model is deployed to a serverless inference endpoint, and the endpoint’s name (e.g., sagemaker-xgboost-2023-03-06-08-53-15-937) is displayed. The endpoint’s name will be referred to as ${amazon_sagemaker_model_endpoint_name} from now on. The created model can also be checked by accessing the Models screen in Amazon SageMaker.
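
    If you need the endpoint name again later, it is also available programmatically. The following is a minimal sketch, assuming the SageMaker Python SDK v2 Predictor object returned by deploy() above:

# Print the name of the serverless endpoint that was just deployed
print(serverless_predictor.endpoint_name)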

  14. Paste the following code into the next cell and click [Run].

from sagemaker.serializers import CSVSerializer

test_data_array = test_data.drop(['actual_power'], axis=1).values
serverless_predictor.serializer = CSVSerializer()
prediction_results = serverless_predictor.predict(test_data_array).decode('utf-8')
predictions = np.fromstring(prediction_results[1:], sep='\n')

actual = test_data['actual_power'].to_numpy()
RMSE = np.sqrt(np.mean(np.power(actual-predictions,2)))
print(RMSE)
    Inference is performed using the test data, and the prediction accuracy is checked. In this case, the RMSE (root mean squared error) between the predicted values and the actual values is calculated:
364.0653729628472
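
    As an additional sanity check (not part of the original notebook), you can also send a single row through the same predictor. The CSVSerializer set above turns the nested list into the CSV request body 9,9.0,81:

# Predict the power demand for one sample: 9:00, 9.0 degrees Celsius, 81 % humidity
single_result = serverless_predictor.predict([[9, 9.0, 81]]).decode('utf-8')
print(single_result)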

Step 3: Create an IAM role and assign it to SORACOM’s AWS account

Allow the SORACOM AWS account, where Beam is running, to execute inference with the model created in Step 2: Create a Model Using Jupyter Notebook. Specifically, create an AWS IAM role that allows inference execution and assign it to SORACOM’s AWS account.

  1. Access the IAM dashboard, click [Access management] → [Roles] in order, and click [Create role].

  2. Click [AWS account] → [Another AWS account] in order, and enter SORACOM’s AWS account ID in the [Account ID] field.

The AWS account ID for the SORACOM account where Beam is running depends on the coverage type.

  • Japan coverage: 762707677580
  • Global coverage: 950858143650

  3. Check [Require external ID] and enter any string in the [External ID] field.
    The string entered in [External ID] will be referred to as ${external_id} from now on. Example: External-ID-gtDRpa2ZQCE7RSvj

For more information on external IDs, please refer to AWS’s Using External IDs when Granting Access to AWS Resources - AWS Identity and Access Management.
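
For reference, the trust policy that these settings generate should look roughly like the following (an illustration assuming Japan coverage and the example external ID above; the actual JSON is produced by the console):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::762707677580:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "External-ID-gtDRpa2ZQCE7RSvj" }
      }
    }
  ]
}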

  4. Click [Next]. The “Add Permissions” screen is displayed.

  5. Click [Create Policy]. In another window or tab, the “Create Policy” screen is displayed.

From here, create a policy in the “Create Policy” screen.

Once you have finished the operations on the “Create Policy” screen, return to the “Add Permissions” screen (the one where you clicked [Create Policy]) and continue the process of creating the IAM role. Please do not close that screen.

  6. Set the following items:

| Item | Description |
| --- | --- |
| [Service] | Click [Choose a service] and select [SageMaker]. |
| [Actions] | Enter “InvokeEndpoint” in [Filter actions] and check [InvokeEndpoint]. |

  7. Click [Resources] → [Specific] → [Add ARN] in order.
    The “Add ARN” screen is displayed.

  8. Enter the following information and click [Add]:

| Item | Description |
| --- | --- |
| [Region] | Enter the Amazon SageMaker region us-west-2. |
| [Account] | Enter your AWS account ID. |
| [Endpoint name] | Enter the name of the deployed model ${amazon_sagemaker_model_endpoint_name} (e.g., sagemaker-xgboost-2023-03-06-08-53-15-937). |

Return to the “Create Policy” screen.
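
For reference, the resulting policy document should be roughly equivalent to the following JSON (the account ID and endpoint name here are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sagemaker:InvokeEndpoint",
      "Resource": "arn:aws:sagemaker:us-west-2:XXXXXXXXXXXX:endpoint/sagemaker-xgboost-2023-03-06-08-53-15-937"
    }
  ]
}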

  9. Click on [Next Step: Tags] and then [Next Step: Review].

  10. Enter an IAM Policy name in [Name] and click [Create policy]. The IAM Policy will be created.

  11. Close the browser tab used to create the IAM Policy and return to the tab with the “Add Permissions” screen.

  12. Click [:arrows_clockwise: (Refresh)] and filter the IAM Policies by the name entered in the previous step.

  13. Check the created IAM Policy and click [Next].

  14. Enter a name for your role in [Role name] and click [Create Role].

  15. Note the [ARN] of your IAM role for the next step. The string in [ARN] will be referred to as ${iam_role_arn} from now on. Example: arn:aws:iam::XXXXXXXXXXXX:role/beam-sagemaker-device-role

Step 4: Configure SORACOM Beam

In this step, you will configure a website entry point using SORACOM Beam so that a device with a SORACOM IoT SIM can send data to Amazon SageMaker for inference.

Register AWS IAM Role Credentials

Register the credentials of the IAM role you created (${iam_role_arn} and ${external_id}) so that SORACOM Beam can call Amazon SageMaker endpoints:

| Item | Description |
| --- | --- |
| [CREDENTIALS SET ID] | Enter a name for the credentials set (e.g., AWS-IAM-role-credentials-invokeEndpoint). |
| [TYPE] | Select [AWS IAM Role credentials]. |
| [ROLE ARN] | Enter your AWS IAM role ARN ${iam_role_arn} (e.g., arn:aws:iam::XXXXXXXXXXXX:role/beam-sagemaker-device-role). |
| [EXTERNAL ID] | Enter the external ID for your IAM role ${external_id} (e.g., External-ID-gtDRpa2ZQCE7RSvj). |

Configure Website entry point

:warning: Beam is a configuration of a Soracom SIM group. This section describes only operations to change group settings. For more information on how groups work and how to create a group, see Group Management Overview and Basic Usage.

  1. On the SIM Group page, open SORACOM Beam.
    See Group Settings for more information on configuring the SIM group.
  2. Click on + Add Configuration > Website entry point.
    The “SORACOM Beam - Website configuration” pop-up will appear.
  3. Set up as follows:

| Item | Description |
| --- | --- |
| [CONFIGURATION NAME] | Enter any configuration name (e.g., Amazon SageMaker). |
| [DESTINATION] > [PROTOCOL] | Select [HTTPS]. |
| [DESTINATION] > [HOST NAME] | Enter runtime.sagemaker.{region}.amazonaws.com (e.g., runtime.sagemaker.us-west-2.amazonaws.com). |
| [DESTINATION] > [PORT NUMBER] | Leave it blank. |
| [HEADER MANIPULATIONS] > [AUTHORIZATION HEADER] | Turn on and set as follows: (1) [TYPE]: Select [AWS Signature V4]. (2) [SERVICE]: Select [Amazon SageMaker]. (3) [REGION]: Select the region for Amazon SageMaker. (4) [CREDENTIALS SET ID]: Select the AWS IAM role credentials registered in Register AWS IAM Role Credentials. |



:information_source: For more information on the Website entry point settings, see Website Entry Point.

  4. Click [Register].
    Add the IoT SIM to the group you created. If you need help, see Basic Usage - Adding a Device to a Group.
    The Beam configuration for your IoT SIM is now complete.

Step 5: Send Data to Amazon SageMaker Endpoints for Inference

In this step, you will execute inference using Amazon SageMaker and the SORACOM Beam website entry point that you have just set up.

Inference with AWS SDK for Python (Boto3)

  1. Install AWS SDK for Python (Boto3).
pip install boto3
  2. Download sagemaker_invoke_endpoint.py to your device.
wget https://users.soracom.io/ja-jp/docs/beam/aws-sagemaker/files/sagemaker_invoke_endpoint.py
    sagemaker_invoke_endpoint.py is a sample script that calls Amazon SageMaker inference via Beam. A sketch of what the script may look like is shown below.
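
The following is a minimal sketch, not the actual contents of sagemaker_invoke_endpoint.py. It assumes the request is sent unsigned to the Beam website entry point, which then adds the AWS Signature V4 header:

# sagemaker_invoke_endpoint.py (illustrative sketch)
import boto3
from botocore import UNSIGNED
from botocore.config import Config

def invoke_endpoint(endpoint_name, hour, temperature, humidity):
    # Point the SageMaker runtime client at Beam instead of AWS.
    # signature_version=UNSIGNED skips local signing; Beam signs the request.
    client = boto3.client(
        'sagemaker-runtime',
        endpoint_url='http://beam.soracom.io:18080',
        region_name='us-west-2',
        config=Config(signature_version=UNSIGNED),
    )
    response = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType='text/csv',
        Accept='application/json',
        Body='{},{},{}'.format(hour, temperature, humidity),
    )
    print(response['Body'].read().decode('utf-8'))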

  3. Execute the following command.

python -c "import sagemaker_invoke_endpoint;\
sagemaker_invoke_endpoint.invoke_endpoint(\
  endpoint_name='sagemaker-xgboost-2023-03-06-08-53-15-937',\
  hour=9,\
  temperature=9.0,\
  humidity=81)"

The arguments of the method sagemaker_invoke_endpoint.invoke_endpoint() are as follows.

| Item | Description |
| --- | --- |
| [endpoint_name] | Enter your endpoint name ${amazon_sagemaker_model_endpoint_name} (e.g., sagemaker-xgboost-2023-03-06-08-53-15-937). |
| [hour] | Enter the hour at which the temperature and humidity were measured (e.g., 9). |
| [temperature] | Enter the temperature in Celsius (e.g., 9.0). |
| [humidity] | Enter the humidity in % (e.g., 81.0). |
    Your device will get a prediction result for the power demand (in units of 10,000 kW) for Tokyo:
{'predictions': [{'score': 3453.01220703125}]}

Inference with curl commands

You can execute inference with the following command.

curl -X POST --data "${hour},${temperature},${humidity}" -H "Content-Type: text/csv" -H "Accept: application/json" http://beam.soracom.io:18080/endpoints/${amazon_sagemaker_model_endpoint_name}/invocations
# An example of the curl command
curl -X POST --data "9,9.0,81.0" -H "Content-Type: text/csv" -H "Accept: application/json" http://beam.soracom.io:18080/endpoints/sagemaker-xgboost-2023-03-06-08-53-15-937/invocations

# Result
{'predictions': [{'score': 3453.01220703125}]}

:information_source: If you use curl, you can use an HTTP entry point instead of a website entry point. In that case, specify /endpoints/${amazon_sagemaker_model_endpoint_name}/invocations (e.g., /endpoints/sagemaker-xgboost-2023-03-06-08-53-15-937/invocations) as the destination path in the HTTP entry point.
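
With that configuration, the device-side command might then look like this (a sketch assuming Beam’s standard HTTP entry point endpoint beam.soracom.io port 8888, with the invocation path set in the Beam configuration rather than in the URL):

curl -X POST --data "9,9.0,81.0" -H "Content-Type: text/csv" -H "Accept: application/json" http://beam.soracom.io:8888/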

Clean Up Your SageMaker Resources

When you are finished, delete the model and endpoint of Amazon SageMaker.

  1. Paste the following code into the next cell and click [Run]. The model and the endpoint are deleted.
serverless_predictor.delete_model()
serverless_predictor.delete_endpoint()
  2. Please make sure your model has been deleted and is not listed in the Models screen of Amazon SageMaker.

  3. Delete all of the resources you created in this guide:

  • AWS
    • SageMaker notebook instance
    • Amazon S3 bucket (see the sketch after this list)
    • IAM role
    • IAM policy
  • SORACOM User Console
    • Credentials registered in the SORACOM credential store
    • Group
  • Device
    • sagemaker_invoke_endpoint.py
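
For the Amazon S3 bucket, the following is a minimal sketch for emptying and deleting it from the notebook, assuming s3 and bucket_name are still defined from Step 2 (this permanently deletes all objects in the bucket):

# Empty the S3 bucket created earlier, then delete the bucket itself
bucket = s3.Bucket(bucket_name)
bucket.objects.all().delete()
bucket.delete()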