Cloud Logging with SensorTile.box and AWS

Created by Taylor Roorda on Jul 24, 2020

Introduction

The SensorTile.box from ST is a complete Bluetooth sensor kit designed to be approachable by developers of all experience levels. The board features a number of sensors for environmental data such as temperature, humidity, and pressure, as well as an accelerometer, gyroscope, magnetometer, and even a microphone. This guide steps through one method of connecting the SensorTile.box to the cloud without writing any firmware, using one of ST’s pre-made function packs. A Raspberry Pi acts as a gateway, forwarding data from the Bluetooth connection over the internet. The goal is to save the data from the SensorTile.box to a cloud database where it can be stored for later use and accessed from any internet-capable device. AWS is the cloud provider of choice for this example, with the DynamoDB, Lambda, and IoT Greengrass services being utilized.

Hardware Requirements

STEVAL-MKSBOX1V1 - ST SensorTile.box

ST-LINK/V2 - ST Link Programmer

RASPBERRY PI 3 MODEL B+ - Raspberry Pi 3 or 4

Software Requirements and Documentation

BlueST SDK – Python library for BLE communication

FP-SNS-ALLMEMS1 – SensorTile.box firmware for IoT node with BLE connectivity, digital microphone, environmental and motion sensors (v4.0.0+)

STSW-LINK004 – ST Link Utility for programming

Getting Started with AWS Greengrass – AWS docs for configuring Greengrass on Raspberry Pi

Firmware Setup

This demo uses ST’s FP-SNS-ALLMEMS1 function pack to read data from the following SensorTile.box sensors: temperature, humidity, pressure, magnetometer, gyroscope, accelerometer, and microphone. It also tracks two additional features: activity recognition (walking, stationary, etc.) and gesture recognition. ST includes a script in the function pack to easily flash the board with the firmware as demonstrated below.

  1. Extract the function pack to a convenient location and make sure the ST Link utility is installed.
  2. Connect the ST-LINK/V2 programmer to the SensorTile.box board.
  3. In the extracted function pack, navigate to STM32CubeFunctionPack_ALLMEMS1_Vx.x.x\Projects\STM32L4R9ZI-SensorTile.box\Applications\ALLMEMS1\STM32CubeIDE
  4. Create a copy of CleanALLMEMS2_STM32CubeIDE_ST.box.bat and modify it as shown to use the pre-built binaries instead of the project build result. No compilation is necessary. Only the path to the application binary, NAMEALLMEMS1, is changed; make sure it matches the version you downloaded.
ModifiedCleanALLMEMS2_STM32CubeIDE_ST.box.bat
@echo off
set STLINK_PATH="C:\Program Files (x86)\STMicroelectronics\STM32 ST-LINK Utility\ST-LINK Utility\"
set NAMEALLMEMS1=..\Binary\STM32L4R9ZI-SensorTileBox_ALLMEMS1_v4.0.0
set BOOTLOADER="..\..\..\..\..\Utilities\BootLoader\STM32L4R9ZI\BootLoaderL4R9.bin"
color 0F
echo                /******************************************/
echo                            Clean FP-SNS-ALLMEMS1
echo                /******************************************/
echo                              Full Chip Erase
echo                /******************************************/
%STLINK_PATH%ST-LINK_CLI.exe -c UR -Rst -ME
echo                /******************************************/
echo                              Install BootLoader
echo                /******************************************/
%STLINK_PATH%ST-LINK_CLI.exe -P %BOOTLOADER% 0x08000000 -V "after_programming"
echo                /******************************************/
echo                          Install FP-SNS-ALLMEMS1
echo                /******************************************/
%STLINK_PATH%ST-LINK_CLI.exe -P %NAMEALLMEMS1%.bin 0x08004000 -V "after_programming"
echo                /******************************************/
echo                     Dump FP-SNS-ALLMEMS1 + BootLoader
echo                /******************************************/
set offset_size=0x4000
for %%I in (%NAMEALLMEMS1%.bin) do set application_size=%%~zI
echo %NAMEALLMEMS1%.bin size is %application_size% bytes
set /a size=%offset_size%+%application_size%
echo Dumping %offset_size% + %application_size% = %size% bytes ...
echo ..........................
%STLINK_PATH%ST-LINK_CLI.exe -Dump 0x08000000 %size% %NAMEALLMEMS1%_BL.bin
echo                /******************************************/
echo                                 Reset STM32
echo                /******************************************/
%STLINK_PATH%ST-LINK_CLI.exe -Rst
if NOT "%1" == "SILENT" pause
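The dump step above computes the total image size as the bootloader offset plus the application binary size. As a quick sanity check of that arithmetic, here is a minimal Python sketch; the application size is the example byte count shown in the programming log:

```python
# The application is flashed at offset 0x4000 (16 KiB reserved for the
# bootloader), so the combined dump covers offset + application size.
OFFSET_SIZE = 0x4000          # bootloader region at the start of flash
APPLICATION_SIZE = 235972     # example size of the ALLMEMS1 binary (bytes)

# Total bytes dumped: bootloader region plus the application image
dump_size = OFFSET_SIZE + APPLICATION_SIZE
print(dump_size)  # 252356
```

This matches the `Dumping 0x4000 + 235972 = 252356 bytes` line in the script output.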
  5. Run ModifiedCleanALLMEMS2_STM32CubeIDE_ST.box.bat to use the ST Link Utility to flash the bootloader and firmware image onto the board.
Programming Results
               /******************************************/
                           Clean FP-SNS-ALLMEMS1
               /******************************************/
                             Full Chip Erase
               /******************************************/
STM32 ST-LINK CLI v3.5.0.0
STM32 ST-LINK Command Line Interface
 
ST-LINK SN: 54FF6C064984485624491087
ST-LINK Firmware version: V2J27S6
Connected via SWD.
SWD Frequency = 4000K.
Target voltage = 1.8 V
Connection mode: Connect Under Reset
Reset mode: Hardware reset
Device ID: 0x470
Device flash Size: 2048 Kbytes
Device family: STM32L4Rx/L4Sx
 
MCU Reset.
 
Full chip erase...
Flash memory erased.
 
               /******************************************/
                             Install BootLoader
               /******************************************/
STM32 ST-LINK CLI v3.5.0.0
STM32 ST-LINK Command Line Interface
 
ST-LINK SN: 54FF6C064984485624491087
ST-LINK Firmware version: V2J27S6
Connected via SWD.
SWD Frequency = 4000K.
Target voltage = 1.8 V
Connection mode: Normal
Reset mode: Hardware reset
Device ID: 0x470
Device flash Size: 2048 Kbytes
Device family: STM32L4Rx/L4Sx
Loading file...
Flash Programming:
  File : ..\..\..\..\..\Utilities\BootLoader\STM32L4R9ZI\BootLoaderL4R9.bin
  Address : 0x08000000
Memory programming...
██████████████████████████████████████████████████ 100%
Reading and verifying device memory...
██████████████████████████████████████████████████ 100%
Memory programmed in 0s and 703ms.
Verification...OK
Programming Complete.
 
               /******************************************/
                         Install FP-SNS-ALLMEMS1
               /******************************************/
STM32 ST-LINK CLI v3.5.0.0
STM32 ST-LINK Command Line Interface
 
ST-LINK SN: 54FF6C064984485624491087
ST-LINK Firmware version: V2J27S6
Connected via SWD.
SWD Frequency = 4000K.
Target voltage = 1.8 V
Connection mode: Normal
Reset mode: Hardware reset
Device ID: 0x470
Device flash Size: 2048 Kbytes
Device family: STM32L4Rx/L4Sx
Loading file...
Flash Programming:
  File : ..\Binary\STM32L4R9ZI-SensorTileBox_ALLMEMS1_v4.1.0.bin
  Address : 0x08004000
Memory programming...
██████████████████████████████████████████████████ 100%
Reading and verifying device memory...
██████████████████████████████████████████████████ 100%
Memory programmed in 9s and 172ms.
Verification...OK
Programming Complete.
 
               /******************************************/
                    Dump FP-SNS-ALLMEMS1 + BootLoader
               /******************************************/
..\Binary\STM32L4R9ZI-SensorTileBox_ALLMEMS1_v4.1.0.bin size is 235972 bytes
Dumping 0x4000 + 235972 = 252356 bytes ...
..........................
STM32 ST-LINK CLI v3.5.0.0
STM32 ST-LINK Command Line Interface
 
ST-LINK SN: 54FF6C064984485624491087
ST-LINK Firmware version: V2J27S6
Connected via SWD.
SWD Frequency = 4000K.
Target voltage = 1.8 V
Connection mode: Normal
Reset mode: Hardware reset
Device ID: 0x470
Device flash Size: 2048 Kbytes
Device family: STM32L4Rx/L4Sx
Dumping memory ...
Address = 0x08000000
Memory Size = 0x0003D9C4
 
██████████████████████████████████████████████████ 100%
Saving file [..\Binary\STM32L4R9ZI-SensorTileBox_ALLMEMS1_v4.1.0_BL.bin] ...
Dumping memory to ..\Binary\STM32L4R9ZI-SensorTileBox_ALLMEMS1_v4.1.0_BL.bin succeded
 
               /******************************************/
                                Reset STM32
               /******************************************/
STM32 ST-LINK CLI v3.5.0.0
STM32 ST-LINK Command Line Interface
 
ST-LINK SN: 54FF6C064984485624491087
ST-LINK Firmware version: V2J27S6
Connected via SWD.
SWD Frequency = 4000K.
Target voltage = 1.8 V
Connection mode: Normal
Reset mode: Hardware reset
Device ID: 0x470
Device flash Size: 2048 Kbytes
Device family: STM32L4Rx/L4Sx
MCU Reset.
 
Press any key to continue . . .

Test Bluetooth Connection

Complete the following steps on a gateway device, the Raspberry Pi in this case.

  1. Install the Python BlueST SDK according to the instructions on GitHub
sudo pip3 install bluepy
sudo pip3 install futures
sudo pip3 install blue-st-sdk
  2. Clone the application examples.
git clone https://github.com/STMicroelectronics/BlueSTSDK_Python.git
  3. Run example_ble_1.py to verify that data can be retrieved over BLE.

Add Microphone Feature to BlueST SDK

At the time of this writing, the BlueST SDK has no feature class that simply reads the level of the microphone, even though the BLE characteristic for it is present in the data. This section demonstrates a simple method of adding a custom feature to the SDK.
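For context, the SDK decides which feature classes to instantiate by testing bits of the feature mask a node advertises; registering a custom feature means mapping an unused mask bit (0x04000000 here) to the new class. A simplified, standalone sketch of that lookup (not the SDK's actual code; the class names are stand-ins):

```python
# Simplified sketch of how an advertised feature mask maps to feature
# classes. The real BlueST SDK keeps a dict from single-bit masks to
# Feature subclasses and tests the node's mask bit by bit.

class FeatureTemperature: pass   # stand-in for a built-in Feature subclass
class FeatureMicrophone: pass    # stand-in for the custom feature below

MASK_TO_FEATURE = {
    0x00040000: FeatureTemperature,
    0x04000000: FeatureMicrophone,   # the bit claimed for the custom feature
}

def features_for_mask(advertised_mask):
    """Return the feature classes whose mask bit is set in the advertisement."""
    return [cls for bit, cls in MASK_TO_FEATURE.items() if advertised_mask & bit]

# A node advertising both bits exposes both features
print([c.__name__ for c in features_for_mask(0x04040000)])
# ['FeatureTemperature', 'FeatureMicrophone']
```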

  1. Create a new Python file named feature_microphone.py with the following contents:
feature_microphone.py
from blue_st_sdk.feature import Feature
from blue_st_sdk.feature import Sample
from blue_st_sdk.feature import ExtractedData
from blue_st_sdk.features.field import Field
from blue_st_sdk.features.field import FieldType
from blue_st_sdk.utils.number_conversion import LittleEndian
from blue_st_sdk.utils.blue_st_exceptions import BlueSTInvalidOperationException
from blue_st_sdk.utils.blue_st_exceptions import BlueSTInvalidDataException
 
import sys, traceback
 
 
class FeatureMicrophone(Feature):
    """The feature handles the data coming from a microphone.
    Data is one unsigned byte representing the sound level in dB.
    """
 
    FEATURE_NAME = "Microphone"
    FEATURE_UNIT = "dB"
    FEATURE_DATA_NAME = "Microphone"
    DATA_MAX = 130      # Acoustic overload point of the microphone
    DATA_MIN = 0
    FEATURE_FIELDS = Field(
        FEATURE_DATA_NAME,
        FEATURE_UNIT,
        FieldType.UInt8,
        DATA_MAX,
        DATA_MIN)
    DATA_LENGTH_BYTES = 1
    SCALE_FACTOR = 1.0
 
    def __init__(self, node):
        """Constructor.
        Args:
            node (:class:`blue_st_sdk.node.Node`): Node that will send data to
                this feature.
        """
        super(FeatureMicrophone, self).__init__(self.FEATURE_NAME, node, [self.FEATURE_FIELDS])
 
    def extract_data(self, timestamp, data, offset):
        """Extract the data from the feature's raw data.
        Args:
            timestamp (int): Data's timestamp.
            data (str): The data read from the feature.
            offset (int): Offset where to start reading data.
 
        Returns:
            :class:`blue_st_sdk.feature.ExtractedData`: Container of the number
            of bytes read and the extracted data.
        Raises:
            :exc:`blue_st_sdk.utils.blue_st_exceptions.BlueSTInvalidDataException`
                if the data array has not enough data to read.
        """
        if len(data) - offset < self.DATA_LENGTH_BYTES:
            raise BlueSTInvalidDataException(
                'There are no %d bytes available to read.' \
                % (self.DATA_LENGTH_BYTES))
        sample = Sample(
            [data[offset] / self.SCALE_FACTOR],
            self.get_fields_description(),
            timestamp)
        return ExtractedData(sample, self.DATA_LENGTH_BYTES)
 
    @classmethod
    def get_mic_level(cls, sample):
        """Get the mic level value from a sample.
        Args:
            sample (:class:`blue_st_sdk.feature.Sample`): Sample data.
 
        Returns:
            float: The mic level value if the data array is valid, <nan>
            otherwise.
        """
        if sample is not None:
            if sample._data:
                if sample._data[0] is not None:
                    return float(sample._data[0])
        return float('nan')
 
    def read_mic_level(self):
        """Read the mic level value.
        Returns:
            float: The mic level value if the read operation is successful,
            <nan> otherwise.
        Raises:
            :exc:`blue_st_sdk.utils.blue_st_exceptions.BlueSTInvalidOperationException`
                is raised if the feature is not enabled or the operation
                required is not supported.
            :exc:`blue_st_sdk.utils.blue_st_exceptions.BlueSTInvalidDataException`
                if the data array has not enough data to read.
        """
        try:
            self._read_data()
            return FeatureMicrophone.get_mic_level(self._get_sample())
        except (BlueSTInvalidOperationException, BlueSTInvalidDataException) as e:
            raise e
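The decode performed by extract_data can be checked without a device. A standalone sketch of the same parsing (the raw notification bytes are hypothetical):

```python
# Standalone sketch of the decode in FeatureMicrophone.extract_data:
# one unsigned byte at `offset`, divided by SCALE_FACTOR (1.0), yields dB.

DATA_LENGTH_BYTES = 1
SCALE_FACTOR = 1.0

def extract_mic_level(data, offset=0):
    """Decode the microphone level from raw notification bytes."""
    if len(data) - offset < DATA_LENGTH_BYTES:
        raise ValueError("Not enough bytes to read.")
    return data[offset] / SCALE_FACTOR

# A hypothetical notification carrying a 54 dB reading
raw = bytes([54])
print(extract_mic_level(raw))  # 54.0
```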
  2. Modify the ST example to use the custom feature.
mic_test.py
import os
import sys
import time
 
from blue_st_sdk.manager import Manager, ManagerListener
from blue_st_sdk.node import NodeListener
from blue_st_sdk.feature import FeatureListener
from blue_st_sdk.features.audio.adpcm.feature_audio_adpcm import FeatureAudioADPCM
from blue_st_sdk.features.audio.adpcm.feature_audio_adpcm_sync import FeatureAudioADPCMSync
from blue_st_sdk.utils.uuid_to_feature_map import UUIDToFeatureMap
from blue_st_sdk.utils.ble_node_definitions import FeatureCharacteristic
 
from feature_microphone import FeatureMicrophone
 
SCANNING_TIME_s = 5
MAX_NOTIFICATIONS = 15
DEVICE_NAME = "AM1V400"
 
class MyManagerListener(ManagerListener):
 
    #
    # This method is called whenever a discovery process starts or stops.
    #
    # @param manager Manager instance that starts/stops the process.
    # @param enabled True if a new discovery starts, False otherwise.
    #
    def on_discovery_change(self, manager, enabled):
        print('Discovery %s.' % ('started' if enabled else 'stopped'))
        if not enabled:
            print()
 
    #
    # This method is called whenever a new node is discovered.
    #
    # @param manager Manager instance that discovers the node.
    # @param node    New node discovered.
    #
    def on_node_discovered(self, manager, node):
        print('New device discovered: %s.' % (node.get_name()))
 
 
class MyNodeListener(NodeListener):
 
    #
    # To be called whenever a node connects to a host.
    #
    # @param node Node that has connected to a host.
    #
    def on_connect(self, node):
        print('Device %s connected.' % (node.get_name()))
 
    #
    # To be called whenever a node disconnects from a host.
    #
    # @param node       Node that has disconnected from a host.
    # @param unexpected True if the disconnection is unexpected, False otherwise
    #                   (called by the user).
    #
    def on_disconnect(self, node, unexpected=False):
        print('Device %s disconnected%s.' % \
            (node.get_name(), ' unexpectedly' if unexpected else ''))
        if unexpected:
            # Exiting.
            print('\nExiting...\n')
            sys.exit(0)
 
 
class MyFeatureListener(FeatureListener):
 
    _notifications = 0
    """Counting notifications to print only the desired ones."""
 
    #
    # To be called whenever the feature updates its data.
    #
    # @param feature Feature that has updated.
    # @param sample  Data extracted from the feature.
    #
    def on_update(self, feature, sample):
        if self._notifications < MAX_NOTIFICATIONS:
            self._notifications += 1
            print(feature)
 
 
def main():
    try:
        # Creating Bluetooth Manager.
        manager = Manager.instance()
        manager_listener = MyManagerListener()
        manager.add_listener(manager_listener)
 
        # Append custom mic level feature into the BlueST library before discovery
        mask_to_features_dic = FeatureCharacteristic.SENSOR_TILE_BOX_MASK_TO_FEATURE_DIC
        mask_to_features_dic[0x04000000] = FeatureMicrophone
        try:
            Manager.add_features_to_node(0x06, mask_to_features_dic)
        except Exception as e:
            print(e)
 
        # Synchronous discovery of Bluetooth devices.
        print('Scanning Bluetooth devices...\n')
        manager.discover(SCANNING_TIME_s)
 
        # Getting discovered devices.
        discovered_devices = manager.get_nodes()
        if not discovered_devices:
            raise Exception("No devices found.")
 
        # Find the correct device name
        filtered_devices = list(filter(lambda x: x.get_name() == DEVICE_NAME, discovered_devices))
 
        # Only have the one device right now; fail clearly if it was not found
        if not filtered_devices:
            raise Exception("Device {} not found.".format(DEVICE_NAME))
        device = filtered_devices[0]
        print("Connecting to device: {} ({})\n".format(device.get_name(), device.get_tag()))
 
        node_listener = MyNodeListener()
        device.add_listener(node_listener)
 
        if not device.connect():
            print('Connection failed.\n')
            raise Exception("Failed to connect to device {} ({}).".format(device.get_name(), device.get_tag()))
 
        # Getting features.
        features = device.get_features()
 
        print("Available features:")
        for each in features:
            print(each.get_name())
 
        # Get only the mic feature
        filtered_features = list(filter(lambda x: x.get_name() == "Microphone", features))
        if not filtered_features:
            raise Exception("Microphone feature not found.")
        feature = filtered_features[0]
 
        print("Listening to {} feature...".format(feature.get_name()))
 
        # Enabling notifications.
        feature_listener = MyFeatureListener()
        feature.add_listener(feature_listener)
        device.enable_notifications(feature)
 
        # Handling audio case (both audio features have to be enabled).
        # Note: these branches come from the ST example and are not exercised
        # here, since the selected feature is the Microphone.
        if isinstance(feature, FeatureAudioADPCM):
            audio_sync_feature_listener = MyFeatureListener()
            audio_sync_feature.add_listener(audio_sync_feature_listener)
            device.enable_notifications(audio_sync_feature)
        elif isinstance(feature, FeatureAudioADPCMSync):
            audio_feature_listener = MyFeatureListener()
            audio_feature.add_listener(audio_feature_listener)
            device.enable_notifications(audio_feature)
 
        # Getting notifications.
        notifications = 0
        start_time = time.time()
        while notifications < MAX_NOTIFICATIONS:
            if device.wait_for_notifications(0.05):
                start_time = time.time()
                notifications += 1
 
            if time.time() > start_time + 10:
                print("Timed out waiting for notifications.")
                break
 
        print("Shutting down...")
 
        # Disabling notifications.
        device.disable_notifications(feature)
        feature.remove_listener(feature_listener)
 
        # Handling audio case (both audio features have to be disabled).
        if isinstance(feature, FeatureAudioADPCM):
            device.disable_notifications(audio_sync_feature)
            audio_sync_feature.remove_listener(audio_sync_feature_listener)
        elif isinstance(feature, FeatureAudioADPCMSync):
            device.disable_notifications(audio_feature)
            audio_feature.remove_listener(audio_feature_listener)
 
        # Shut everything down
        device.remove_listener(node_listener)
        device.disconnect()
        manager.remove_listener(manager_listener)
    except KeyboardInterrupt:
        print("Program killed. Exiting...")
 
        device.disable_notifications(feature)
        feature.remove_listener(feature_listener)
        device.remove_listener(node_listener)
        device.disconnect()
        manager.remove_listener(manager_listener)
 
        sys.exit(0)
    except SystemExit:
        os._exit(0)
 
 
if __name__ == "__main__":
    main()
  3. Verify that it works.

At this point, all the basic sensor data is accessible through the Python script. The next step is to log the data to AWS and eventually find something useful to do with it.

Setup AWS Greengrass Core on Raspberry Pi

In this example, the Raspberry Pi hosts the AWS IoT Greengrass Core. This allows the Pi to act as a gateway to the cloud and simplifies communication with AWS. It also enables execution of Lambda code, messaging, and other functions at the local level so that IoT devices can continue to operate and interact without a cloud connection. AWS already has an excellent guide for the initial setup, which is followed below.

  1. Use either of the following methods from the Getting Started Guide to set up the Raspberry Pi:
    a. The automatic quick start script, or
    b. Module 1 (Greengrass environment setup) and Module 2 (Installing Greengrass Core software)
  2. Configure the Greengrass daemon to start on power up.

Before proceeding, ensure you have the following results:

  • A new Greengrass Group in AWS IoT
  • Security resources for the Raspberry Pi (certificate, keys, config)
  • Greengrass daemon running on the Raspberry Pi
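One common way to start the Greengrass daemon on power up is a systemd unit. The following is a sketch only, assuming the default classic Greengrass install path /greengrass/ggc/core and a unit name of greengrass.service; adapt the paths to your system:

```ini
# /etc/systemd/system/greengrass.service (assumed name and location)
[Unit]
Description=AWS IoT Greengrass Core daemon
After=network.target

[Service]
Type=forking
ExecStart=/greengrass/ggc/core/greengrassd start
ExecStop=/greengrass/ggc/core/greengrassd stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable greengrass` so the daemon starts on every boot.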

Create a SensorTile.box AWS Device

Next, a device will be created to uniquely identify the SensorTile.box.

  1. In the Greengrass Group from the last step, create a new device.
  2. Download the security resources and copy them to a convenient location on the Raspberry Pi. For this example, assume there is a directory called aws in the project directory to hold the credentials.
  3. Create a SensorTile.box class based on the example above with the added microphone feature.
sensortile_box.py
import os
import sys
import time
 
from blue_st_sdk.manager import Manager, ManagerListener
from blue_st_sdk.node import NodeListener
from blue_st_sdk.feature import FeatureListener
from blue_st_sdk.features.audio.adpcm.feature_audio_adpcm import FeatureAudioADPCM
from blue_st_sdk.features.audio.adpcm.feature_audio_adpcm_sync import FeatureAudioADPCMSync
from blue_st_sdk.utils.uuid_to_feature_map import UUIDToFeatureMap
from blue_st_sdk.utils.ble_node_definitions import FeatureCharacteristic
 
from feature_microphone import FeatureMicrophone
 
 
class BluetoothError(Exception):
    def __init__(self, value):
        self.value = value
 
 
# Default listener to print notifications from BT
class Listener(FeatureListener):
    def on_update(self, feature, sample):
        print(feature)
 
 
class SensorTileBox:
    DEVICE_NAME = "AM1V400"
    SCAN_TIME_s = 3
    TIMEOUT_s = 10
 
    # Maps feature name to its characteristic handle
    FEATURES = {
        "Gesture": 0,
        "Activity Recognition": 1,
        "Temperature1": 2,
        "Temperature2": 3,
        "Humidity": 4,
        "Pressure": 5,
        "Magnetometer": 6,
        "Gyroscope": 7,
        "Accelerometer": 8,
        "Microphone": 9
    }
 
    def __init__(self, bt_addr):
        # Creating Bluetooth Manager.
        self.bt_manager = Manager.instance()
 
        # Append custom mic level feature into the BlueST library before discovery
        mask_to_features_dic = FeatureCharacteristic.SENSOR_TILE_BOX_MASK_TO_FEATURE_DIC
        mask_to_features_dic[0x04000000] = FeatureMicrophone
        try:
            Manager.add_features_to_node(0x06, mask_to_features_dic)
        except Exception as e:
            print(e)
 
        # Synchronous discovery of Bluetooth devices.
        self.bt_manager.discover(SensorTileBox.SCAN_TIME_s)
 
        discovered_devices = self.bt_manager.get_nodes()
        if not discovered_devices:
            raise BluetoothError("No devices discovered.")
 
        # Find the correct device by name and, when given, by Bluetooth address
        filtered_devices = list(filter(lambda x: x.get_name() == SensorTileBox.DEVICE_NAME, discovered_devices))
        if bt_addr:
            filtered_devices = [d for d in filtered_devices if d.get_tag().lower() == bt_addr.lower()]
        if not filtered_devices:
            raise BluetoothError("Could not find device named {}".format(SensorTileBox.DEVICE_NAME))
 
        # Use the first (presumably only) matched device
        self.bt_device = filtered_devices[0]
        if not self.bt_device.connect():
            raise BluetoothError("Could not connect to device {} ({})".format(self.bt_device.get_name(), self.bt_device.get_tag()))
 
        # Retrieve available features from the device, indexed by characteristic handle
        self.features = self.bt_device.get_features()
 
    def attach_feature_listener(self, feature_name, listener):
        # Register a callback function to process incoming updates from the specified feature
        feature = self.features[SensorTileBox.FEATURES[feature_name]]
        feature.add_listener(listener)
 
    def remove_feature_listener(self, feature_name, listener):
        # Remove a previously registered callback from the specified feature
        feature = self.features[SensorTileBox.FEATURES[feature_name]]
        feature.remove_listener(listener)
 
    def enable_feature_notifications(self, feature_name):
        feature = self.features[SensorTileBox.FEATURES[feature_name]]
        self.bt_device.enable_notifications(feature)
 
    def disable_feature_notifications(self, feature_name):
        feature = self.features[SensorTileBox.FEATURES[feature_name]]
        self.bt_device.disable_notifications(feature)
 
    def start(self):
        while True:
            if self.bt_device.wait_for_notifications(SensorTileBox.TIMEOUT_s):
                # Notification Received
                continue
            else:
                raise BluetoothError("ERROR: Notifications timed out after {} seconds.".format(SensorTileBox.TIMEOUT_s))
 
    def shutdown(self):
        # Disconnect and reset, removing all notifications and listeners
        for f in self.features:
            self.bt_device.disable_notifications(f)
            for l in f._listeners:
                f.remove_listener(l)
        print("Disconnecting from SensorTile.box.")
        self.bt_manager.reset_discovery()
        self.bt_device.disconnect()
  4. Create a script that acts as the Greengrass “device.” Make sure to adjust the Bluetooth MAC address, AWS host URL, and certificate/key paths as necessary to match your setup.
gg_sensortile_box.py
# AWS IoT Device
# Creates an instance of the SensorTileBox and registers a listener
# that translates incoming BT notifications to JSON MQTT messages
# which are then sent to the Greengrass Core
 
import json
import logging
import os
import re
import sys
import time
 
from AWSIoTPythonSDK.core.greengrass.discovery.providers import DiscoveryInfoProvider
from AWSIoTPythonSDK.core.protocol.connection.cores import ProgressiveBackOffCore
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryInvalidRequestException
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient, AWSIoTMQTTShadowClient
 
from blue_st_sdk.feature import FeatureListener
from sensortile_box import SensorTileBox, BluetoothError
 
# Custom listener to convert BT notifications to MQTT
class NotificationListener(FeatureListener):
    def __init__(self, mqtt_client, downsample=1):
        super().__init__()
        self.mqtt_client = mqtt_client
 
        # Downsample by this factor for rate limiting
        self.downsample = downsample
        self._counter = 0
 
    def on_update(self, feature, sample):
        # Convert data to json format and send over MQTT
        json_dict = {}
        json_dict["feature"] = feature.get_name()
        json_dict["timestamp"] = sample.get_timestamp()
        json_dict["data"] = sample.get_data()
 
        # Downsample if enabled
        self._counter += 1
        if self._counter >= self.downsample:
            json_string = json.dumps(json_dict)
            self.mqtt_client.publish("/device/test", json_string, 0)
 
            self._counter = 0
 
 
# Create a SensorTileBox instance to handle the bluetooth connection
# Do this first because if the connection fails, nothing else matters
print("Establishing Bluetooth connection to SensorTile.box")
BLUETOOTH_ADDR = "E7:BE:F2:3D:1C:DA".lower()
stb = SensorTileBox(BLUETOOTH_ADDR)
 
# Connect with AWS IoT Greengrass Core (the RPi)
# TODO: these paths must be adjusted for your own credentials/system
MAX_DISCOVERY_RETRIES = 10  # MAX tries at discovery before giving up
GROUP_PATH = "./aws/GroupCA/"  # directory storing discovery info
CA_NAME = "root-ca.crt"  # stores GGC CA cert
GGC_ADDR_NAME = "ggc-host"  # stores GGC host address
host = "<your aws host endpoint>"
iotCAPath = "/greengrass/certs/root.ca.pem"
certificatePath = "./aws/<yourcredentials>.cert.pem"
privateKeyPath = "./aws/<yourcredentials>.private.key"
thingName = "SensorTile-box"
clientId = "SensorTile-box"
 
# AWS Example for Greengrass Discovery
# Configure logging
logger = logging.getLogger("AWSIoTPythonSDK.core")
logger.setLevel(logging.INFO)  # set to logging.DEBUG for additional logging
streamHandler = logging.StreamHandler()
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
streamHandler.setFormatter(formatter)
logger.addHandler(streamHandler)
 
# function does basic regex check to see if value might be an ip address
def isIpAddress(value):
    match = re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}", value)
    return True if match else False
 
 
# function reads host GGC ip address from filePath
def getGGCAddr(filePath):
    f = open(filePath, "r")
    return f.readline()
 
 
# Used to discover GGC group CA and end point. After discovering it persists in GROUP_PATH
def discoverGGC(host, iotCAPath, certificatePath, privateKeyPath, clientId):
    # Progressive back off core
    backOffCore = ProgressiveBackOffCore()
 
    # Discover GGCs
    discoveryInfoProvider = DiscoveryInfoProvider()
    discoveryInfoProvider.configureEndpoint(host)
    discoveryInfoProvider.configureCredentials(iotCAPath, certificatePath, privateKeyPath)
    discoveryInfoProvider.configureTimeout(10)  # 10 sec
    print("Iot end point: " + host)
    print("Iot CA Path: " + iotCAPath)
    print("GGAD cert path: " + certificatePath)
    print("GGAD private key path: " + privateKeyPath)
    print("GGAD thing name : " + clientId)
    retryCount = MAX_DISCOVERY_RETRIES
    discovered = False
    groupCA = None
    coreInfo = None
    while retryCount != 0:
        try:
            discoveryInfo = discoveryInfoProvider.discover(clientId)
            caList = discoveryInfo.getAllCas()
            coreList = discoveryInfo.getAllCores()
 
            # In this example we only have one core
            # So we pick the first ca and core info
            groupId, ca = caList[0]
            coreInfo = coreList[0]
            print("Discovered GGC: " + coreInfo.coreThingArn + " from Group: " + groupId)
            hostAddr = ""
 
            # In this example Ip detector lambda is turned on which reports
            # the GGC hostAddr to the CIS (Connectivity Information Service) that stores the
            # connectivity information for the AWS Greengrass core associated with your group.
            # This is the information used by discovery and the list of host addresses
            # could be outdated or wrong and you would normally want to
            # validate it in a better way.
            # For simplicity, we will assume the first host address that looks like an ip
            # is the right one to connect to GGC.
            # Note: this can also be set manually via the update-connectivity-info CLI
            for addr in coreInfo.connectivityInfoList:
                hostAddr = addr.host
                if isIpAddress(hostAddr):
                    break
 
            print("Discovered GGC Host Address: " + hostAddr)
 
            print("Now we persist the connectivity/identity information...")
            groupCA = GROUP_PATH + CA_NAME
            ggcHostPath = GROUP_PATH + GGC_ADDR_NAME
            if not os.path.exists(GROUP_PATH):
                os.makedirs(GROUP_PATH)
            with open(groupCA, "w") as groupCAFile:
                groupCAFile.write(ca)
            with open(ggcHostPath, "w") as groupHostFile:
                groupHostFile.write(hostAddr)
 
            discovered = True
            print("Now proceed to the connecting flow...")
            break
        except DiscoveryInvalidRequestException as e:
            print("Invalid discovery request detected!")
            print("Type: " + str(type(e)))
            print("Error message: " + str(e))
            print("Stopping...")
            break
        except BaseException as e:
            print("Error in discovery!")
            print("Type: " + str(type(e)))
            print("Error message: " + str(e))
            retryCount -= 1
            print("\n" + str(retryCount) + "/" + str(MAX_DISCOVERY_RETRIES) + " retries left\n")
            print("Backing off...\n")
            backOffCore.backOff()
 
    if not discovered:
        print("Discovery failed after " + str(MAX_DISCOVERY_RETRIES) + " retries. Exiting...\n")
        sys.exit(-1)
 
 
# Run Discovery service to check which GGC to connect to, if it hasn't been run already
# Discovery talks with the IoT cloud to get the GGC CA cert and ip address
 
if not os.path.isfile(GROUP_PATH + CA_NAME):
    discoverGGC(host, iotCAPath, certificatePath, privateKeyPath, clientId)
else:
    print("Greengrass core has already been discovered.")
 
# read GGC Host Address from file
ggcAddrPath = GROUP_PATH + GGC_ADDR_NAME
rootCAPath = GROUP_PATH + CA_NAME
ggcAddr = getGGCAddr(ggcAddrPath)
print("GGC Host Address: " + ggcAddr)
print("GGC Group CA Path: " + rootCAPath)
print("Private Key of SensorTileBox thing Path: " + privateKeyPath)
print("Certificate of SensorTileBox thing Path: " + certificatePath)
print("Client ID(thing name for SensorTileBox): " + clientId)
print("Target shadow thing ID(thing name for SensorTileBox): " + thingName)
 
# Discovery complete. End of code adapted from the AWS examples.
 
# Create an MQTT client
myMQTTClient = AWSIoTMQTTClient(clientId)
myMQTTClient.configureEndpoint(ggcAddr, 8883)
myMQTTClient.configureCredentials(rootCAPath, privateKeyPath, certificatePath)
 
# Configure MQTT parameters (example defaults)
myMQTTClient.configureOfflinePublishQueueing(-1)  # Infinite offline Publish queueing
myMQTTClient.configureDrainingFrequency(2)  # Draining: 2 Hz
myMQTTClient.configureAutoReconnectBackoffTime(1, 32, 20)
myMQTTClient.configureConnectDisconnectTimeout(10)  # 10 sec
myMQTTClient.configureMQTTOperationTimeout(5)  # 5 sec
 
myMQTTClient.connect()
 
# Enable notifications for desired sensors to be sent to an MQTT listener
temp1_listener = NotificationListener(myMQTTClient, downsample=6)
stb.attach_feature_listener("Temperature1", temp1_listener)
stb.enable_feature_notifications("Temperature1")
 
# temp2_listener = NotificationListener(myMQTTClient)
# stb.attach_feature_listener("Temperature2", temp2_listener)
# stb.enable_feature_notifications("Temperature2")
 
# humidity_listener = NotificationListener(myMQTTClient)
# stb.attach_feature_listener("Humidity", humidity_listener)
# stb.enable_feature_notifications("Humidity")
#
# pressure_listener = NotificationListener(myMQTTClient)
# stb.attach_feature_listener("Pressure", pressure_listener)
# stb.enable_feature_notifications("Pressure")
#
# magnet_listener = NotificationListener(myMQTTClient)
# stb.attach_feature_listener("Magnetometer", magnet_listener)
# stb.enable_feature_notifications("Magnetometer")
#
# gyro_listener = NotificationListener(myMQTTClient)
# stb.attach_feature_listener("Gyroscope", gyro_listener)
# stb.enable_feature_notifications("Gyroscope")
#
# accel_listener = NotificationListener(myMQTTClient)
# stb.attach_feature_listener("Accelerometer", accel_listener)
# stb.enable_feature_notifications("Accelerometer")
 
mic_listener = NotificationListener(myMQTTClient, downsample=6)
stb.attach_feature_listener("Microphone", mic_listener)
stb.enable_feature_notifications("Microphone")
 
# Test message
myMQTTClient.publish("/device/hello", "Hello from SensorTile!", 0)
 
# Stay here until killed then shutdown
try:
    stb.start()
except BluetoothError as e:
    print(e)
    print("Stopping SensorTile...")
except KeyboardInterrupt:
    print("Keyboard interrupt received. Shutting down...")
 
# Close and reset everything
stb.shutdown()
 
myMQTTClient.disconnect()
 
sys.exit(0)

Configure AWS Services

AWS Lambda and DynamoDB are the primary services used for this project. The Lambda function runs locally on the Raspberry Pi and formats the incoming data before writing it to DynamoDB.

Lambda

The Lambda function in this project uses the AWS Python SDK (boto3) to interface with DynamoDB. Normally boto3 is included in the Lambda execution environment, but in Greengrass this isn’t the case. Therefore, a deployment package must be created that includes any additional dependencies the Lambda function might have. The downside of this is that any changes to the Lambda code require re-uploading the entire deployment package.

  1. Create a new directory to hold the dependencies and Lambda code.
  2. Pip install the boto3 dependency to the new directory.
pip3 install boto3 -t <target directory>
  3. Add the source code for the Lambda function. Adjust the region as necessary.
lambda_function.py
# Lambda function that runs on the Greengrass Core
import json
import logging
from datetime import datetime
 
import boto3
from botocore.exceptions import ClientError
 
# Connect to DynamoDB
dynamodb = boto3.resource("dynamodb", region_name="us-west-2")
table_name = "SensorTileBoxData"
 
# Initialize logger
logger = logging.getLogger()
logger.setLevel(logging.INFO)
 
# Create the dynamoDB table if needed
try:
    table = dynamodb.create_table(
        TableName=table_name,
        KeySchema=[{"AttributeName": "timestamp", "KeyType": "HASH"}, {"AttributeName": "feature", "KeyType": "RANGE"}],  # Partition and sort keys
        AttributeDefinitions=[{"AttributeName": "timestamp", "AttributeType": "S"}, {"AttributeName": "feature", "AttributeType": "S"}],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )
 
    # Wait until the table exists.
    table.meta.client.get_waiter("table_exists").wait(TableName=table_name)
except ClientError as e:
    if e.response["Error"]["Code"] == "ResourceInUseException":
        print("Table already created")
 
        # Use the existing table
        table = dynamodb.Table(table_name)
    else:
        raise e
 
 
def lambda_handler(event, context):
    global table
    logger.info(event)
 
    # Convert data to strings; boto3 rejects Python floats (DynamoDB numbers must be Decimal)
    date_string = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    table.put_item(
        Item={
            "timestamp": date_string,
            "stb_time": str(event["timestamp"]),
            "feature": event["feature"],
            "data": str(event["data"])
        }
    )
  4. Create a new Python 3.7 Lambda function with a suitable name and upload the zipped directory to it.
  5. From the top Actions drop-down menu, select Publish new version. Add a description if desired. Greengrass can only use published versions of Lambda functions.
  6. (Optional) From the top Actions drop-down menu, select Create an alias. Aliases are visible in the Greengrass Lambda settings and may be easier to refer to than a version number in some cases. This is suggested by Amazon, but is not used for this tutorial.
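The deployment package built above can also be zipped with the Python standard library. This is only a sketch: `lambda_build` and the archive name are placeholder names for the directory and zip created in the earlier steps.

```python
import shutil

def package_lambda(build_dir, archive_name="gg_cloudlog_lambda"):
    """Zip the contents of build_dir (boto3 dependencies plus lambda_function.py)
    so that lambda_function.py sits at the root of the archive, as Lambda expects.
    Returns the path of the created .zip file."""
    return shutil.make_archive(archive_name, "zip", root_dir=build_dir)
```

For example, `package_lambda("lambda_build")` produces `gg_cloudlog_lambda.zip`, ready to upload in the Lambda console. Note that `make_archive` zips the *contents* of `root_dir`, which avoids the common mistake of nesting the code one directory down inside the archive.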

This Lambda function is now available to be used by the Greengrass Core device.
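For reference, the string conversion the handler performs can be exercised on its own, without a DynamoDB connection. The example event below is hypothetical, but it carries the same fields the handler reads (timestamp, feature, data):

```python
from datetime import datetime

def to_item(event):
    # Mirrors the conversion in lambda_handler: everything is stored as strings,
    # since boto3 rejects Python floats (DynamoDB numbers must be Decimal).
    return {
        "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
        "stb_time": str(event["timestamp"]),
        "feature": event["feature"],
        "data": str(event["data"]),
    }

# Hypothetical event payload from the device script
item = to_item({"timestamp": 1024, "feature": "Temperature1", "data": [24.5]})
```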

DynamoDB

Next a DynamoDB table is created using the data’s timestamp as the partition (hash) key and feature name as the sort key. This allows multiple features to exist within the database with the same timestamp. Without the sort key, only the most recent data point with a certain timestamp will be saved. Once the table is created, permissions must be given to the Greengrass group to allow access to the table.
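Why both keys matter can be seen with a plain dictionary standing in for the table: DynamoDB addresses items by the (partition key, sort key) pair, so two features sharing a timestamp do not collide. A toy sketch:

```python
# Toy model of a DynamoDB table keyed by (timestamp, feature)
table = {}

def put_item(item):
    table[(item["timestamp"], item["feature"])] = item

put_item({"timestamp": "2020-07-23 17:00:00", "feature": "Temperature1", "data": "24.5"})
put_item({"timestamp": "2020-07-23 17:00:00", "feature": "Microphone", "data": "40"})

# Both items survive; keyed on timestamp alone, the second write
# would have replaced the first.
```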

  1. Create a new table as shown below. If this is not done now, the Lambda will create one the first time it runs but you will not be able to specify the specific table in the permission policy that follows.

  2. Navigate to the AWS IAM console and select Policies from the sidebar.

  3. Select Create policy and populate it with the JSON below which provides the CreateTable, PutItem, and DescribeTable permissions to Greengrass only for the table named “SensorTileBoxData.”

Policy JSON
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TutorialPermissions",
            "Effect": "Allow",
            "Action": [
                "dynamodb:CreateTable",
                "dynamodb:PutItem",
                "dynamodb:DescribeTable"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/SensorTileBoxData"
        }
    ]
}
  4. Select Review policy. Give the policy a name, such as greengrass_SensorBox_Table, and finish creating the policy.
  5. Navigate back to the IAM console and select Roles from the sidebar.
  6. Select Create role.
  7. Choose Greengrass as the AWS service. Press Next: Permissions.
  8. Find the new policy from above and attach it to the role.
  9. Select Next twice to advance to the review screen. Name the role, something like Greengrass_SensorTile_Group_Role, and finish creating the role.

Configure and Deploy Greengrass Group

The final step is to assign all the things created above to the Greengrass group and deploy it on the Raspberry Pi.

  1. In the AWS IoT Console, navigate to the Greengrass section and select the group created earlier.
  2. On the Greengrass group page, choose Settings from the sidebar, then under Group Role select Add Role. Select the role created above. The result should look something like the following.
  3. From the group’s sidebar, select Lambdas, then Add Lambda.
  4. Add the logging Lambda function from above. You should only have one version and/or alias to choose from.
  5. Back on the Lambdas page, select the ellipsis (…) button on the newly added function. Select Edit Configuration.
  6. Under Lambda lifecycle, select Make this function long-lived and keep it running indefinitely. Then select Update to save the changes.
  7. From the group’s sidebar, select Subscriptions, then Add Subscription.
  8. For the Source, select Devices → SensorTile-box. For the Target, select Lambdas → GG_CloudLog (or the name of your function). Press Next.
  9. Enter /device/test as the topic filter. Press Next, then Finish.
  10. Repeat the steps above to add another subscription, but change the Target to Services → IoT Cloud. This is not necessary for the demo, but it allows the MQTT messages to be viewed in the AWS console, which is useful for debugging.
  11. Return to the Deployments tab of the group. From the Actions menu, select Deploy.

You should see a green dot with “Successfully completed” if everything has gone smoothly.

Measure Things

With the path to the cloud configured, the final step is making use of it. This section provides a simple example of measuring data with the device script created above and accessing that data through AWS.

  1. Choose features to measure. All the basic sensors are available in the gg_sensortile_box.py script above and can be commented in or out as desired. By default, the script measures the temperature and sound level of a guitar amp in an apartment. Both features are downsampled by a factor of 6 in order to stay within the DynamoDB table’s provisioned write capacity of 5 writes/second.
  2. Run gg_sensortile_box.py for as long as desired. Use Ctrl+C to kill the script. The script also stops if no BLE notifications are received from the SensorTile.box within a 10-second period.
Run device script
sudo python3 gg_sensortile_box.py
 
 
Establishing Bluetooth connection to SensorTile.box
Greengrass core has already been discovered.
GGC Host Address: 127.0.0.1
GGC Group CA Path: ./aws/groupCA/root-ca.crt
Private Key of SensorTileBox thing Path: ./aws/<yourkey>.private.key
Certificate of SensorTileBox thing Path: ./aws/<yourkey>.cert.pem
Client ID(thing name for SensorTileBox): SensorTile-box
Target shadow thing ID(thing name for SensorTileBox): SensorTile-box
2020-07-23 15:58:58,350 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - MqttCore initialized
2020-07-23 15:58:58,351 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Client id: SensorTile-box
2020-07-23 15:58:58,351 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Protocol version: MQTTv3.1.1
2020-07-23 15:58:58,352 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Authentication type: TLSv1.2 certificate based Mutual Auth.
2020-07-23 15:58:58,353 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Configuring endpoint...
2020-07-23 15:58:58,353 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Configuring certificates...
2020-07-23 15:58:58,355 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Configuring offline requests queueing: max queue size: -1
2020-07-23 15:58:58,356 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Configuring offline requests queue draining interval: 0.500000 sec
2020-07-23 15:58:58,357 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Configuring reconnect back off timing...
2020-07-23 15:58:58,358 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Base quiet time: 1.000000 sec
2020-07-23 15:58:58,359 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Max quiet time: 32.000000 sec
2020-07-23 15:58:58,360 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Stable connection time: 20.000000 sec
2020-07-23 15:58:58,360 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Configuring connect/disconnect time out: 10.000000 sec
2020-07-23 15:58:58,361 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Configuring MQTT operation time out: 5.000000 sec
2020-07-23 15:58:58,362 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync connect...
2020-07-23 15:58:58,362 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing async connect...
2020-07-23 15:58:58,363 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Keep-alive: 600.000000 sec
2020-07-23 15:58:58,685 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:58:58,929 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:58:59,222 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:58:59,514 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:58:59,807 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:00,002 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:00,100 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:00,391 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:00,733 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:01,025 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:01,318 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:01,610 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:01,757 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:01,903 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:02,195 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:02,488 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:02,878 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:03,122 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
^CKeyboard interrupt received. Shutting down...
2020-07-23 15:59:03,414 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:03,707 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
2020-07-23 15:59:03,999 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync publish...
Disconnecting from SensorTile.box.
2020-07-23 15:59:04,148 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing sync disconnect...
2020-07-23 15:59:04,148 - AWSIoTPythonSDK.core.protocol.mqtt_core - INFO - Performing async disconnect...
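The downsampling used when creating each listener in the device script throttles publishes to stay under the table’s write capacity. Its behavior can be sketched as a simple modulo counter; this is only an illustration, not the actual NotificationListener implementation from gg_sensortile_box.py.

```python
class Downsampler:
    """Forward one update out of every `factor` received."""
    def __init__(self, factor):
        self.factor = factor
        self.count = 0

    def should_publish(self):
        # Publish on updates 0, factor, 2*factor, ...
        publish = (self.count % self.factor) == 0
        self.count += 1
        return publish

ds = Downsampler(6)
decisions = [ds.should_publish() for _ in range(12)]
```

With a factor of 6 and a sensor notifying at roughly 3 Hz, this keeps each feature near 0.5 writes/second, comfortably inside the 5 writes/second budget shared by all features.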
  3. Verify that entries have been added to the database using the DynamoDB console.
  4. On the computer used to access the data, install the dependencies for the plotting script.
pip3 install boto3 matplotlib pandas
  5. Read and plot the data using the following Python script.
stbgraphy.py
# Parse and plot data from DynamoDB
import boto3
from boto3.dynamodb.conditions import Attr
import json
import pandas as pd
import matplotlib.pyplot as plt
 
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('SensorTileBoxData')
 
# Get the whole table (if it's not very big)
# response = table.scan()
 
# Or filter on specific attributes
response = table.scan(
    FilterExpression=Attr('timestamp').contains('2020-07-23 17')
)
 
items = response['Items']
print("Scanned {} items.".format(len(items)))
 
df = pd.read_json(json.dumps(items))
sorted_frame = df.sort_values(by=['timestamp'], ignore_index=True)
# Remove brackets and cast to numeric type
sorted_frame['data'] = pd.to_numeric(sorted_frame['data'].apply(lambda s: s.replace('[', '').replace(']', '')))
 
# Separate frame by feature and plot
mic_data = sorted_frame[sorted_frame['feature'] == 'Microphone']
mic_data.plot(x='timestamp', y='data', title='Microphone Level (dB)')
 
temp_data = sorted_frame[sorted_frame['feature'] == 'Temperature']
temp_data.plot(x='timestamp', y='data', title='Temperature (C)')
 
plt.show()
  6. Examine the resulting plots.
    [plot: Microphone Level (dB)]
    [plot: Temperature (C)]
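One caveat for longer captures: a single `table.scan` call returns at most 1 MB of data, after which DynamoDB sets `LastEvaluatedKey` in the response. A sketch of paginating the scan with the same table object (`scan_all` is a helper name introduced here, not part of boto3):

```python
def scan_all(table, **scan_kwargs):
    """Collect every page of a DynamoDB scan (each call returns at most 1 MB)."""
    items = []
    response = table.scan(**scan_kwargs)
    items.extend(response["Items"])
    # Keep scanning until DynamoDB stops returning a continuation key
    while "LastEvaluatedKey" in response:
        response = table.scan(ExclusiveStartKey=response["LastEvaluatedKey"],
                              **scan_kwargs)
        items.extend(response["Items"])
    return items
```

In the plotting script above, `items = scan_all(table, FilterExpression=Attr('timestamp').contains('2020-07-23 17'))` would replace the single `table.scan` call.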

Data for the plots above was generated by placing the SensorTile.box near the amplifier’s vent with the amp starting in standby. Initial data shows a typical room noise of 40dB and rises to a roughly constant 87dB while playing. Downward spikes show momentary pauses in playing. In the last minute, the SensorTile.box is moved out of the room containing the amplifier and the door is closed while playing continues, reducing the level outside of the room to a reasonable 55dB. Temperature measurements show a steady increase from room temperature as the tubes warm up before falling off when leaving the room.

Conclusion

This tutorial has demonstrated a method of connecting a SensorTile.box to the cloud using AWS Greengrass. It makes use of the plug-and-play nature of the SensorTile.box to access a variety of sensors with minimal overhead and no firmware development. The addition of cloud services requires a more complex setup than a local approach but offers more flexibility and extensibility in exchange. Additional devices, such as multiple SensorTile.boxes, can be conveniently added or removed to take more measurements. Devices communicate securely with each other through the MQTT subscriptions. Use additional Lambdas for extra data processing or to connect to other services.