Secure IoT RFID Access Control System Using the AVR-IoT WG

AVR-IoT WG Development Board


Smart, Secure, and Connected. These are the three elements of a typical IoT solution, and Microchip realized all three in its AVR-IoT WG development board (part number AC164160). Featuring an ATmega4808 MCU as the smart application controller, an ATWINC1510 Wi-Fi module for the connection to a cloud service, and an ATECC608A CryptoAuthentication secure element, the board has every component necessary to implement a small, affordable, and expandable IoT node. We decided to take full advantage of these components and design an RFID access control system centered around the AVR-IoT WG. Because the board includes a mikroBUS socket, MikroElektronika’s RFID click board was used to provide the RFID reader functionality. The SPI interface was used to communicate with this add-on board rather than the USART interface, freeing one of the USART signals on the mikroBUS socket to serve as the control signal for a simple lock-style solenoid.

The demo application that comes pre-flashed on the AVR-IoT WG is available from the web-based Atmel START tool. Simply search the projects list for “avriot”; the project is called “AVR IoT WG Sensor Node”. This demo project takes periodic measurements from the on-board temperature sensor and light sensor and, after each measurement, sends the collected data to Google Cloud for further processing and/or visualization. By importing this project into Atmel Studio 7.0, we were able to modify the source code to periodically check for the presence of RFID tags and, if present, attempt to read their UIDs. These UIDs can then be securely sent to Google Cloud and compared against a database of users with proper authorization. Google Cloud then sends the result of this comparison back to the node. If the UID matches one on the approved list, the solenoid can be actuated, unlocking the system.

Hardware Components

For those interested, here is a list of the main hardware components on the board.

  • ATmega4808
    • 8-bit AVR microcontroller
    • 20 MHz
    • 48 KB Flash, 6 KB SRAM, and 256 B integrated EEPROM
    • Operating voltage: 1.8V – 5.5V
    • 32-pin package
  • WINC1510
    • Low-power consumption Wi-Fi module (802.11 b/g/n)
    • SPI interface
    • On-chip network stack to offload the MCU
      • Integrated TCP/IP stack minimizes host CPU requirements
      • Network features: TCP, UDP, DHCP, ARP, HTTP, TLS, and DNS
      • Hardware accelerators for Wi-Fi and TLS security improve connection time
    • Small form factor (21.7 x 14.7 x 2.1 mm)
  • ATECC608A
    • CryptoAuthentication device
    • Cryptographic co-processor with secure hardware-based key storage
    • Hardware support for ECDSA, ECDH, SHA-256, AES-128
    • Unique 72-bit serial number
  • MCP9808
    • Digital temperature sensor with ±0.5°C maximum accuracy over -20°C to 100°C (±1°C over -40°C to 125°C)
    • User programmable temperature limits and alerts
  • TEMT6000
    • Ambient light sensor
    • Adapted to human eye responsivity
  • Battery connector
  • mikroBUS connector
    • Not a component per se, but still an excellent feature. Here’s the pinout:


The AVR-IoT WG includes a Nano Embedded Debugger (nEDBG) IC to simplify interfacing for the user by providing a mass storage interface, a serial port interface, and a UPDI interface over USB. The mass storage interface allows the board to show up as a removable storage device on the host PC. This drive, labeled “CURIOSITY”, contains the demo webpage as well as several files providing information about the board, such as the serial number, firmware revision, and public key. Users can also use the ‘drag and drop’ functionality to change the Wi-Fi settings and reprogram the board. The serial command line interface allows the user to communicate with the board through a terminal application such as PuTTY or TeraTerm. Using the available commands, the user can reset the board, change the Wi-Fi credentials, get diagnostic information, and so on. Finally, the UPDI (Unified Program and Debug Interface) allows Atmel Studio 7.0 to recognize the board once it is plugged into the PC. The user can then download the demo application from Atmel START, open it, make whatever changes they want, and reprogram the board.

a) Mass storage device

b) Serial command line interface

c) Atmel Studio 7.0

Figure 1: The various ways to interface with the AVR-IoT WG

For more details on how to use these interfaces, see the User Guide.

Secure Connection to Google Cloud

Most readers are probably wondering: what does the ATECC608A secure element actually do to make the board “secure”? Well, on its own, it doesn’t! In the case of this reference design, it’s really the combination of the ATECC608A and the WINC1510 Wi-Fi module that implements a secure connection to Google Cloud. To understand how, one must be familiar with the three elements of a secure system: confidentiality, integrity, and authenticity. Always remember: CIA!

Because the WINC1510 uses TLS to communicate with Google Cloud’s MQTT server (as Google requires), all messages sent to and received from Google Cloud are encrypted and therefore cannot be read by an eavesdropping third party. Thus, the WINC1510 module provides confidentiality. TLS also performs an integrity check by hashing each message before it is sent and after it is received. Hashing means that the message is used as the input to a one-way function which produces a value called a digest. Given only this digest, it is next to impossible to determine what the original message was. This digest is appended to the end of the message so that, should the message be altered in any way between transmission and reception, the digest of the modified message will not match the digest of the original.
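
The integrity check described above is easy to demonstrate. Here is a short sketch using SHA-256 (standing in for whichever hash the TLS session actually negotiates); the UID string is just sample data:

```python
import hashlib

def digest(message: bytes) -> str:
    """Run the message through a one-way function (SHA-256) and
    return the resulting digest as a hex string."""
    return hashlib.sha256(message).hexdigest()

original = b"UID:E004015006068493"
tampered = b"UID:E004015006068494"  # one character altered in transit

# The receiver recomputes the digest of what it received and compares
# it to the digest the sender appended; any tampering shows up as a mismatch.
sent_digest = digest(original)
received_digest = digest(tampered)
print(sent_digest == received_digest)  # False: the message was modified
```

Note that flipping even a single bit of the input changes roughly half the bits of the digest, which is why the comparison is such a reliable tamper check.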

All that leaves is authenticity, which is where the ATECC608A comes into play. Google Cloud needs to ensure that the device it is communicating with is in fact our board and not some other device masquerading as it. Therefore, we need to provide proof of identity before we can begin sending and receiving data. We do this using the private/public key pair generated by the ATECC608A. Once the ATECC608A has generated this key pair, the private key is locked within the device and can never be read out, so unlike a traditional password it cannot be copied or stolen. This makes the private key far more suitable for authentication. However, the question remains: how do we use this private key to prove our identity when it is locked away in secure memory and not even we can read it?

The answer is to use the public key! The idea behind a private/public key pair is simple: data encrypted with the public key can only be decrypted with the private key, and vice versa. So, by providing Google Cloud with our public key when registering the device, it can attempt to decrypt data that the ATECC608A encrypted with our private key. If the decryption succeeds, Google Cloud knows the data came from our device, because only our device holds the private key corresponding to that public key.

Let’s take a closer look at how the AVR-IoT WG demo project connects to Google Cloud’s MQTT server. When the MQTT client constructs a connection packet, it must include a client ID, username, and password. However, Google Cloud ignores the username, so it can be empty. The client ID takes the following form: projects/{project-id}/locations/{cloud-region}/registries/{registry-id}/devices/{device-id} . For example, in the project we set up below, the client ID would be projects/avr-iot-rfid-ac/locations/us-central1/registries/rfid_readers/devices/d012313D1E521D10BFE . The password is a JSON Web Token (JWT), which takes the following form: {Base64url encoded header}.{Base64url encoded payload}.{Base64url encoded signature} . The header contains two fields indicating the signing algorithm and the type of token. The payload contains three fields: the timestamp when the token was created (“iat”), the timestamp when the token expires (“exp”), and the cloud project ID (“aud”). The signature is computed by sending the base64url encoded header and payload to the ATECC608A, where they are signed with the private key. The resulting signature is then base64url encoded and used to complete the JWT, as illustrated in Figure 2.
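
The header and payload assembly can be sketched in a few lines of Python. Everything except the signature is plain base64url encoding; on the board, the final piece comes back from the ATECC608A, so a placeholder stands in for it here (the one-hour lifetime and the project ID are just example values):

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# Header: ES256 = ECDSA over P-256 with SHA-256, which is what the
# ATECC608A implements in hardware
header = {"alg": "ES256", "typ": "JWT"}

# Payload: issue time, expiry, and the Google Cloud project ID
iat = int(time.time())
payload = {"iat": iat, "exp": iat + 3600, "aud": "avr-iot-rfid-ac"}

signing_input = (b64url(json.dumps(header).encode()) + "." +
                 b64url(json.dumps(payload).encode()))

# On the board, signing_input is hashed and signed inside the secure
# element; the private key never leaves it. Placeholder shown here.
jwt = signing_input + "." + b64url(b"<64-byte ECDSA signature from ATECC608A>")
```

The finished string is what the MQTT client sends as its connection password.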

Figure 2: How the ATECC608A secure element is used to create the JWT for device authentication

Once Google Cloud receives this password, it essentially performs the process in reverse, except it uses the public key to decrypt the signature. If the decryption results in an array of bytes that matches the header and payload sections, then the device is considered authentic. Note also that the JWT is time-bound, i.e., only valid between the issue time and the expiration time, to prevent replay attacks. This is why I am comfortable sharing this JWT: it expired on January 4, 2019 14:27:46 CST (Unix timestamp 1546633666) and is no longer valid.

RFID Click Board


MikroElektronika’s RFID click board (MIKROE-1434) features the CR95HF 13.56 MHz contactless transceiver from STMicroelectronics along with the analog front end and a trace antenna. The default communication interface is the mikroBUS SPI, but the USART lines become available with a simple solder bridge reconfiguration. It runs on 3.3V and supports the following protocols:

  • ISO/IEC 14443 type A and B tags
  • ISO/IEC 15693 tags
  • ISO/IEC 18000-3M1 tags
  • NFC Forum tags: type 1, 2, 3, and 4
  • ST SRI and LRI tags
  • ST Dual Interface EEPROM

Unfortunately, at the time of writing, this click board is not supported in Atmel START. So instead, the CR95HF library files added to the application were derived from ST’s STSW-STM32031 example project, which demonstrates how to use the STM3210B-EVAL board to interface with the CR95HF using the ISO/IEC 15693 standard. Rather than port the entire library to the ATmega, only the functions required to initialize the module and read tag UIDs were copied from the example project to save time. Figure 3 shows the library files included in the project.

Figure 3: The modified CR95HF library files added to the project solution

This project was tested with these inexpensive RFID inlays. However, Digi-Key offers many other types of RFID tags in the RFID Transponders, Tags category.
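
For a sense of what those library functions actually transmit: the host talks to the CR95HF using simple <command, length, payload> frames. The sketch below assembles the two frames the application needs, a ProtocolSelect for ISO/IEC 15693 followed by an Inventory request. The command codes follow my reading of the CR95HF datasheet and ST’s example project, and the 0x05 parameter byte is a typical data-rate/modulation setting, not necessarily the exact one the library uses; verify against the datasheet before relying on them:

```python
def cr95hf_frame(cmd: int, data: bytes) -> bytes:
    """Build a <command, length, payload> frame for the CR95HF."""
    return bytes([cmd, len(data)]) + data

# ProtocolSelect (0x02): protocol code 0x01 selects ISO/IEC 15693;
# 0x05 is an example parameter byte (subcarrier / data rate settings)
protocol_select = cr95hf_frame(0x02, bytes([0x01, 0x05]))

# SendRecv (0x04) carrying an ISO 15693 Inventory request:
#   0x26 = request flags (high data rate, inventory flag, single slot)
#   0x01 = Inventory command code
#   0x00 = mask length (no UID mask, so all tags in the field respond)
inventory = cr95hf_frame(0x04, bytes([0x26, 0x01, 0x00]))
```

The tag’s 64-bit UID comes back in the SendRecv response, which is what the ported library functions parse.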

Solenoid and Driver Circuitry


The actuator in this project is the 1512 lock-style solenoid from Adafruit. Since it draws about 650 mA, it cannot be driven directly by the AVR-IoT WG board. Instead, the board turns on a MOSFET, which actuates the solenoid. The schematic for this circuit is shown below. A weak pull-down resistor was added to the gate of the MOSFET to prevent the solenoid from actuating while the board is unpowered. Note that the gate is connected to the “RX” pin of the mikroBUS socket; because we are not using USART communication, the RX pin has been configured as a GPIO in the project code.
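
A quick sanity check on the driver numbers is worthwhile. The 12 V supply, 10 kΩ pull-down, and 50 mΩ on-resistance below are illustrative assumptions, not values from the schematic; substitute your own solenoid and MOSFET datasheet figures:

```python
V_SUPPLY = 12.0      # V, assumed solenoid supply voltage
I_SOLENOID = 0.65    # A, draw quoted for the 1512 lock-style solenoid
RDS_ON = 0.050       # ohm, assumed logic-level MOSFET on-resistance
R_PULLDOWN = 10_000  # ohm, assumed gate pull-down value

# Effective coil resistance implied by the quoted draw
coil_resistance = V_SUPPLY / I_SOLENOID            # ~18.5 ohm

# Conduction loss in the MOSFET while the solenoid is energized:
# tens of milliwatts, so no heatsink is needed
mosfet_dissipation = I_SOLENOID ** 2 * RDS_ON

# Extra current the GPIO must source through the pull-down when
# driving the gate high at 3.3 V: a fraction of a milliamp
gpio_pulldown_current = 3.3 / R_PULLDOWN
```

The pull-down is “weak” precisely because its only job is to hold the gate low when the pin floats; it barely loads the GPIO when the pin drives high.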

Atmel Studio Project

This project was created by modifying the demo project that ships on the AVR-IoT WG board (“AVR IoT WG Sensor Node” on Atmel START). I won’t walk through every change that was made, but I will call out the main ones. The first significant modification was adding subscribe functionality to the cloud middleware, which allows the board to subscribe to commands from Google Cloud IoT Core. The next big change was adding libraries for the CR95HF NFC reader IC so that UIDs can be read from ISO 15693 tags. A callback function was then added to retract the solenoid for a few seconds when a “YES” command is received from Google Cloud.

The final project code can be found in the project repository. It is still configurable via Atmel START, simply because it is more convenient to configure the pins that way. START should be used with caution, however, because some changes it makes may conflict with the aforementioned modifications. If you would like to explore every change that was made to the demo project from start to finish, feel free to dig through the commit history.

There is still some future work that could be done on the project: for example, caching UIDs so the cloud would not have to be consulted on every read (or as a backup in case the connection fails), adding the ability to provision tags with one of the pushbuttons on the board, and so on.
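
The caching idea could be prototyped as a thin decision layer on the device side. A sketch in Python for clarity (the real implementation would be C on the ATmega4808); `cloud_lookup` is a hypothetical callback standing in for the Google Cloud round trip:

```python
def make_checker(cloud_lookup):
    """Return a check(uid) function that caches cloud verdicts and
    falls back to the cache when the connection is down."""
    cache = {}

    def check(uid: str) -> str:
        try:
            verdict = cloud_lookup(uid)   # "YES" or "NO" from the cloud
            cache[uid] = verdict          # remember for offline fallback
            return verdict
        except ConnectionError:
            # Offline: serve the last known verdict, and fail closed
            # (deny access) for tags we have never seen before
            return cache.get(uid, "NO")

    return check
```

Failing closed for unknown tags is the conservative choice for an access control system; a cache expiry time could be added so revoked tags do not stay valid indefinitely while offline.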

Google Cloud

Project Architecture

By connecting our devices to Google Cloud, we can offload several tasks from the MCU, including data processing and database management. For this project, all the MCU on the evaluation board does is maintain the Google Cloud connection and periodically check for the presence of an RFID tag. If a tag is found, its UID is read and sent to the cloud. If a command is received from Google Cloud, the device will react accordingly. In this case, that command will be a simple “YES” or “NO” depending on whether or not the aforementioned UID has the proper authorization. The project diagram below explains in more detail how this is achieved in Google Cloud.

Figure 4: The architecture of the Google Cloud project

The remote devices connect to Google Cloud via the Cloud IoT Core service (in the case of this project, there is only one RFID reader, but there could be many more). Every time a device sends data to IoT Core, that data is forwarded to a Cloud Pub/Sub topic. There could be a separate Pub/Sub topic for each device, or for distinct groups of devices. The Pub/Sub topic will then trigger a Cloud Function and pass it the data it received from IoT Core. The Cloud Function will make queries to an SQL database which we populate by means of a Cloud Storage Bucket. Based on the result of the query, the Cloud Function will use the IoT Core API to send a command back to the remote device that originally sent the data. This is not the only way it could be done. The Bigtable service could be used rather than Cloud SQL, there could be multiple Cloud Functions for each device, etc. Note that the SQL database does require a few dollars a week on Google Cloud to operate, but for a small project like this, every other service is free.
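
Concretely, the Pub/Sub event that triggers the Cloud Function carries the device’s telemetry as a base64-encoded JSON string in its data field, with routing metadata in the attributes. The sketch below simulates that hand-off and the decode step the function performs; the `{"UID": ...}` shape and the sample device ID mirror this project’s conventions rather than anything mandated by Google Cloud:

```python
import base64
import json

# What IoT Core forwards to the Pub/Sub topic after the reader
# publishes a tag UID (simplified: real events carry more attributes)
event = {
    "data": base64.b64encode(
        json.dumps({"UID": "E004015006068493"}).encode("utf-8")
    ).decode("ascii"),
    "attributes": {"deviceId": "d012313D1E521D10BFE"},
}

# The Cloud Function reverses the encoding to recover the tag UID,
# which it then looks up in the SQL table
uid = json.loads(base64.b64decode(event["data"]))["UID"]
device = event["attributes"]["deviceId"]
```

The attributes are what let the function address its “YES”/“NO” command back to the exact device that sent the telemetry.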

Setting up the Google Cloud Project

For those who would like to recreate this project or are just curious about how it was set up, the following procedure has been documented for your reference.

  1. Create a new Google Cloud project. I created one called “AVR-IoT RFID AC” with the project ID “avr-iot-rfid-ac”.

  2. From the navigation menu, choose Pub/Sub and enable the Pub/Sub API if required. Create a topic. I’ve created one called “level_9” as an example (Figure 5). You could call yours whatever you want (e.g. “front_entrance”, “my_vault”, “vip_restroom”, etc.). If you have multiple RFID readers, you could create a topic for each of them.

Figure 5: Creating a new Pub/Sub topic

  3. From the navigation menu, choose IoT Core and enable the IoT Core API if required. Create a device registry. This is a collection of registered IoT devices with similar properties. In Figure 6, I created one called “rfid_readers”. If I had readers for level 1, level 2, etc., they would all go into this registry. Select the proper region and protocol (we’re just using MQTT). In the ‘Default telemetry topic’ dropdown, select the Pub/Sub topic we just created. This is where IoT Core will forward any data received from a device if that device does not specify a subfolder (which ours does not). At this point, you could add more telemetry topics for each of your Pub/Sub topics.

  4. In the registry’s settings, create a device and add its public key. To do this, click ‘Create device’ to get to the form shown in Figure 7. Enter a device ID. I chose to use the unique ID provided by the ATECC608A prefixed with ‘d’ (the same scheme used by Microchip for their demo application). Under ‘Authentication’ select ‘Upload’ and under ‘Public key format’, select ‘ES256’. Then, browse for the public key value and select PUBKEY.TXT from the CURIOSITY drive (see Figure 1a). Click ‘Create’.

Figure 6: Creating a new device registry

Figure 7: Creating a new device in the registry

  5. From the navigation menu, choose SQL and create a new SQL instance. Choose ‘MySQL’ and click Next. Click ‘Choose Second Generation’. Give your MySQL instance an ID and set a password for the root user as shown in Figure 8. Click ‘Create’. To save cost, you can click ‘Edit’ on the instance details page, change the machine type to ‘db-f1-micro’, and disable automatic backups.

  6. Activate the Cloud Shell by clicking the icon circled in red in Figure 9 and enter the commands provided in Listing 1, substituting your own values into the placeholders (see Figure 9 for an example). This will create a database and a table. The table will have two columns: a UID column and a Name column. The UID column is the primary key, meaning it must be unique and cannot be empty.

Figure 8: Creating an SQL instance

Figure 9: Creating a database called “permissions” and within it, creating a table called “level_9”

Listing 1: Commands to connect to an SQL instance and create a new database and table

gcloud sql connect <instance_id> --user=root
> CREATE DATABASE <database_name>;
> USE <database_name>;
> CREATE TABLE <table_name> ( UID CHAR(16), Name VARCHAR(255), PRIMARY KEY ( UID ) );
> exit;
  7. At this point, you can add names and IDs to the database manually using the shell (e.g. INSERT INTO <table_name> VALUES ( "E004015006068493", "Matt Mielke" ); ). Another option is to create a CSV file of the IDs and names and import that file into your database. As an example, I created the file “Level 9 Access.txt”, which contains several random names and UIDs along with my name and the UID of my RFID tag (see Listing 2). Since only data on Cloud Storage can be imported into our SQL instance, we must first upload the file to Cloud Storage. Simply choose Storage from the navigation menu, create a bucket, give it a name, and click ‘Create’ as shown in Figure 10. On the bucket details page, click ‘Upload files’ and browse for your CSV file. Once it’s uploaded, go back to your SQL instance details page and click ‘Import’ to import the data as shown in Figure 11. Finally, Figure 12 demonstrates how the Cloud Shell can be used to verify that the CSV file contents were successfully imported into the table. Note that since the UID column is the primary key, you can add names to the CSV file and re-import it without first emptying the table; the duplicate rows just won’t be added. However, you will have to delete rows from the table manually, because removing them from the CSV file has no effect.
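
The primary-key behavior described above (re-importing the CSV adds new rows but silently skips UIDs that already exist) can be modeled in a few lines. Here the table is simulated with a plain dict keyed by UID; the rows are from Listing 2:

```python
import csv
import io

def import_csv(table: dict, csv_text: str) -> dict:
    """Simulate importing a CSV into a table whose UID column is the
    primary key: existing UIDs are kept, only new rows are added."""
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) >= 2:
            table.setdefault(row[0], row[1])  # duplicate UIDs are skipped
    return table

table = {}
import_csv(table, '"E004015006068493","Matt Mielke",\n')
# Re-importing with the old row plus a new one only adds the new row
import_csv(table,
           '"E004015006068493","Matt Mielke",\n'
           '"E0040171DF8B7267","Safia Hunt",\n')
```

The asymmetry noted in the step above also falls out of this model: nothing in an import ever deletes rows, which is why revocations must be done directly on the table.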

Listing 2: The contents of “Level 9 Access.txt”

"E0040171DF8B7267","Safia Hunt",
"E0040194D2A82AC7","Myles Cooley",
"E00401FE2DA29176","Stella Findlay",
"E00401D3CDB4A3F0","Ella Marshall",
"E004011D64B96F89","Darrel Goodman",
"E004010A66C51AE5","Marion Kenny",
"E004013CBF3A9B46","Marli Frazier",
"E00401BB66668BD1","Sumaiya Moses",
"E00401A743A2327C","Ansh Green",
"E004013FEB644E0B","Kirstie Jaramillo",
"E0040102FA202721","Loki Hills",
"E00401000114BEB9","Maariyah May",
"E004016655C5E457","Nabeel North",
"E004012F95648AE1","Ciara Rodriguez",
"E004015006068493","Matt Mielke",

Figure 10: Creating a Storage Bucket

Figure 11: Importing a CSV file into an SQL instance

Figure 12: Verifying that the CSV file was successfully imported into the table

  8. Finally, from the navigation menu, choose Cloud Functions and enable the Cloud Functions API if required. Create a new Cloud Function and give it a suitable name. In the ‘Trigger’ drop-down, choose ‘Cloud Pub/Sub’ and in the ‘Topic’ drop-down, choose the topic your telemetry data gets published to. Use the ‘Runtime’ drop-down to choose your preferred programming language. I chose Python and then placed the code provided in Listing 3 in the code editor tab and the contents of Listing 4 in the requirements.txt tab. If you would like to use my code, be sure to first change the parameters declared in the “TODO (developer)” section to match your SQL instance. You can remove the debugging print statements if you like. If you leave them in, their output can be read by opening a Cloud Shell and running the command gcloud functions logs read --limit x , where x is the number of entries to fetch. This function could be modified to query different tables in your database depending on which Pub/Sub topic triggered it, or even which subfolder the telemetry event was sent to.

Figure 13: Creating a Cloud Function

Listing 3: The Cloud Function code

from google.auth import compute_engine
import googleapiclient.discovery
import base64
import json
from os import getenv
import pymysql
from pymysql.err import OperationalError

# TODO(developer): specify SQL connection details
CONNECTION_NAME = getenv( 'INSTANCE_CONNECTION_NAME', 'avr-iot-rfid-ac:us-central1:access-control' )
DB_USER = getenv( 'MYSQL_USER', 'root' )
DB_PASSWORD = getenv( 'MYSQL_PASSWORD', "********" )
DB_NAME = getenv( 'MYSQL_DATABASE', 'permissions' )
TABLE_NAME = 'level_9'

mysql_config = {
    'user': DB_USER,
    'password': DB_PASSWORD,
    'db': DB_NAME,
    'charset': 'utf8mb4',
    'cursorclass': pymysql.cursors.DictCursor,
    'autocommit': True
}

# Create SQL connection globally to enable reuse
# PyMySQL does not include support for connection pooling
mysql_conn = None

def __get_cursor():
    """
    Helper function to get a cursor
    PyMySQL does NOT automatically reconnect,
    so we must reconnect explicitly using ping()
    """
    try:
        return mysql_conn.cursor()
    except OperationalError:
        mysql_conn.ping( reconnect=True )
        return mysql_conn.cursor()

def check_permissions(event, context):
    """Triggered from a message on a Cloud Pub/Sub topic.
    Args:
         event (dict): Event payload.
         context (google.cloud.functions.Context): Metadata for the event.
    """
    global mysql_conn

    # debugging
    print( ">>> Event:", event )
    print( ">>> Context:", context )

    ### Check Permissions ###

    # extract the uid from the event payload
    try:
        data = base64.b64decode( event['data'] ).decode( 'utf-8' )
        uid = json.loads( data )["UID"]
    except (KeyError, ValueError):
        print( ">>> Error: Cannot extract UID from event payload" )
        return

    # Initialize connections lazily, in case SQL access isn't needed for this
    # GCF instance. Doing so minimizes the number of active SQL connections,
    # which helps keep your GCF instances under SQL connection limits.
    if not mysql_conn:
        try:
            mysql_conn = pymysql.connect(**mysql_config)
        except OperationalError:
            # If production settings fail, use local development ones
            mysql_config['unix_socket'] = '/cloudsql/{}'.format( CONNECTION_NAME )
            mysql_conn = pymysql.connect(**mysql_config)

    # Remember to close SQL resources declared while running this function.
    # Keep any declared in global scope (e.g. mysql_conn) for later reuse.
    with __get_cursor() as cursor:
        cursor.execute( "select * from {} where UID=\"{}\"".format( TABLE_NAME, uid ) )
        sql_query_result = cursor.fetchone()

    # debugging
    print( ">>> SQL Query Result: ", sql_query_result )

    # if UID is not in database, do not allow access
    if sql_query_result is None:
        access = "NO"
    else:
        access = "YES"

    ### Report whether or not user has proper permissions ###

    # Build the cloudiot service object. Better way to do this?
    credentials = compute_engine.Credentials()
    cloudiot = googleapiclient.discovery.build( 'cloudiot', 'v1', credentials=credentials )

    # create the 'name' of the device for the Cloud IoT API
    project_id = event['attributes']['projectId']
    reg_location = event['attributes']['deviceRegistryLocation']
    registry_id = event['attributes']['deviceRegistryId']
    device_id = event['attributes']['deviceId']
    name = "projects/{}/locations/{}/registries/{}/devices/{}".format( project_id, reg_location, registry_id, device_id )

    # create the 'payload' for the sendCommandToDevice method based on SQL query
    payload = { "binaryData": base64.b64encode( access.encode( 'utf-8' ) ).decode( 'utf-8' ) }

    # debugging
    print( ">>> Cmd Payload:", payload )

    # send a command to the specified device
    try:
        response = cloudiot.projects().locations().registries().devices().sendCommandToDevice( name=name, body=payload ).execute()
    except googleapiclient.errors.HttpError:
        response = "Device not subscribed."
    except Exception:
        response = "Something went terribly wrong..."

    # debugging
    print( ">>> sendCommandToDevice Response:", response )

Listing 4: requirements.txt

# Function dependencies, for example:
# package>=version
pymysql
google-auth
google-api-python-client
  9. In Atmel Studio, import the project and open the file IoT_Sensor_Node_config.h so you can modify the cloud configuration section to match your Google Cloud project (Figure 14). Do not use the ‘Re-Configure Atmel Start Project’ option, because it will undo many of the changes that were made to the original demo project.

Figure 14: Updating the project code to match your cloud configuration

Note that several of the elements utilized in Google Cloud are still in beta at the time of writing (e.g. using Python 3.7 to write a cloud function) and therefore may change without notice in backward-incompatible ways.