17 August 2001: Initial Release



    0. Pre-requisites

  1. Objective
    • Demonstrate an Alexa skill
    • Demonstrate an AWS-Lambda function
    • Demonstrate an AWS Thing with its credentials
    • Import an SDK to the MCUXpresso IDE
    • Load AWS Thing credentials and download your demo application
    • Control LED and update accelerometer data on LPC board using Alexa
    • Control LED and update accelerometer data on LPC board using an Android application
  2. Hardware
    • Micro-USB cable
    • Alexa Echo (optional, you can also use the Alexa test tool from the Alexa Developer Console)
    • Android device (cell phone)
  3. Lab high level description
    1. The user speaks to the Echo device to activate an Alexa skill
    2. The Alexa Skill triggers an AWS-Lambda function which sends a message to the AWS-IoT
    3. The NXP LPC55S69 is registered on the AWS-IoT and receives the message to perform the targeted action (turn on/off LED, read accelerometer)
    4. The NXP LPC55S69 sends a feedback message to the AWS-IoT platform
    5. AWS-Lambda retrieves the feedback message and sends it back to the Alexa platform. This makes the Echo device respond to user: "The LED is on/off!" or “The accelerometer data was updated”
    6. Instructions to enable demo functionality using Android applications and Alexa Echo
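The exchange in steps 1-5 is carried by the Thing's device shadow. Below is a minimal sketch of the shadow documents involved; the field names (LEDstate) and the LED bitmask (red = 1, green = 2, blue = 4) match the Lambda code used later in this lab:

```python
import json

# Sketch of the shadow documents exchanged in steps 2-5 above.
# LEDstate is a bitmask: red = 1, green = 2, blue = 4.

# Step 2: the Lambda function writes a "desired" state into the shadow.
desired_doc = {"state": {"desired": {"LEDstate": 1}}}      # turn the red LED on

# Steps 3-4: the LPC55S69 applies the change and reports it back.
reported_doc = {"state": {"reported": {"LEDstate": 1}}}

# Step 5: the Lambda reads the reported state and turns it into speech.
payload = json.dumps(reported_doc)                         # what the shadow service returns
reported = json.loads(payload)["state"]["reported"]["LEDstate"]
message = "the red LED is on" if reported == 1 else "all LEDs are off"
print(message)  # -> the red LED is on
```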
  4. AWS Configuration (Lambda, Skill, and Thing)
    • Create an AWS Lambda Function
      1. Enter the AWS Management Console
      2. Select the US East (N. Virginia) region on the top right (Alexa is available in this region). IoT Things need to be created in the same region
      3. Click on 'Services -> Compute -> Lambda'
      4. Click on "Create Function"
      5. Select the "Author from Scratch" option and edit the Name you wish to assign to the Lambda Function (for example ‘NXPLambdaFunction’)
      6. Select Python 3.7 in the "Runtime" drop-down menu
      7. Open the "Execution Role" drop-down menu and select "Create a new role with basic Lambda permissions"
      8. Identify the name of the execution role that will be created
      9. Click on the “Create function” button.
      10. Verify the Lambda function was created.
      11. From the Lambda Dashboard click on the number below the Lambda
      12. This brings up the list of Lambda Functions
      13. Click on the NXPLambdaFunction you created
      14. Click on the “Permissions” tab
      15. Once in the Permissions Tab click on the Role Name
        • Note: On the top right, you can see the ARN (Amazon Resource Name) of the Lambda function that you've just created. That's the ARN that will allow us to link this function to the Alexa service; we'll get back to this later. Clicking the Role Name takes us to the IAM console, where we can assign the permissions to our NXPLambdaFunction role.
      16. Click on “Attach policies” button.
      17. Type “awsiotfull” in the filter.
      18. Select the “AWSIoTFullAccess” policy.
      19. Click on the “Attach policy” button.
      20. Verify that your role has now two policies: the AWSLambdaBasicExecutionRole and the AWSIoTFullAccess policies.
      21. Go back to your NXPLambdaFunction, select the Permissions tab, and verify that you now have the “AWS IoT” resource available for your Lambda function under the resource summary
      22. Go to the Configuration tab, scroll down, locate the editable function code window, and erase the preconfigured code.
      23. Copy-paste the content of code below into code window.
        • Note: The code contains functions to interact between the Alexa Voice Service and your Thing. Please take a few minutes to analyze it and understand how the Lambda function receives and handles a request from Alexa, depending on the Intents and attributes that were received.
        • import boto3  # AWS SDK for Python (messaging to AWS IoT)
          import json   # json text builder/reader
          import time
          clientIOT = boto3.client('iot-data', region_name='us-east-1')
          NXP_module_name = 'myNXPIoTTHING'
          # --------------- Helpers that build all of the responses ----------------------
          def build_speechlet_response(response_message, reprompt_text, should_end_session):
              return {
                  'outputSpeech': {
                      'type': 'PlainText',
                      'text': response_message
                  },
                  'reprompt': {
                      'outputSpeech': {
                          'type': 'PlainText',
                          'text': reprompt_text
                      }
                  },
                  'shouldEndSession': should_end_session
              }
          def build_response(session_attributes, speechlet_response):
              return {
                  'version': '1.0',
                  'sessionAttributes': session_attributes,
                  'response': speechlet_response
              }
          # ---------------- Updating and reading Device's Shadow ----------------
          def update_shadow(device, mypayload):
              clientIOT.update_thing_shadow(
                  thingName = device,
                  payload = mypayload
              )
          def read_from_shadow(device):
              return(clientIOT.get_thing_shadow(thingName = device))
          # ----------------- Functions ------------
          def send_LED_message(session_attributes, session):
              redLedIdx  = 1
              greenLedIdx= 2
              blueLedIdx = 4
              action = session_attributes['action']
              ledIndex = session_attributes['LEDselection']
              addressed_device = NXP_module_name
              should_end_session = False
              shadow = read_from_shadow(addressed_device)                   #read the current LED state
              streamingBody = shadow["payload"]
              jsonState = json.loads(streamingBody.read())
              state_of_LED = jsonState['state']['reported']['LEDstate']
              desired_state = state_of_LED
              if ledIndex == 'red':
                  if action == 'on':
                      desired_state |= redLedIdx
                  elif action == 'off':
                      desired_state &= ~redLedIdx
                  elif action == 'toggle':
                      desired_state ^= redLedIdx
              elif ledIndex == 'green':
                  if action == 'on':
                      desired_state |= greenLedIdx
                  elif action == 'off':
                      desired_state &= ~greenLedIdx
                  elif action == 'toggle':
                      desired_state ^= greenLedIdx
              elif ledIndex == 'blue':
                  if action == 'on':
                      desired_state |= blueLedIdx
                  elif action == 'off':
                      desired_state &= ~blueLedIdx
                  elif action == 'toggle':
                      desired_state ^= blueLedIdx
              else: #'all' or no selection
                  if action == 'on':
                      desired_state = (redLedIdx | greenLedIdx | blueLedIdx)
                  elif action == 'off':
                      desired_state = 0
                  elif action == 'toggle':
                      desired_state ^= (redLedIdx | greenLedIdx | blueLedIdx)
              stato = {"state" : { "desired" : { "LEDstate": desired_state }}} #generate json
              mypayload = json.dumps(stato)
              update_shadow(addressed_device, mypayload)
              time.sleep(1)                                                 #give the board time to report back
              shadow = read_from_shadow(addressed_device)                   #read LED state after sleeping 1 sec
              streamingBody = shadow["payload"]
              jsonState = json.loads(streamingBody.read())
              state_of_LED = jsonState['state']['reported']['LEDstate']
              if state_of_LED == 0:
                  return_message = 'all LEDs are off'
              elif state_of_LED == redLedIdx:
                  return_message = 'the red LED is on'
              elif state_of_LED == greenLedIdx:
                  return_message = 'the green LED is on'
              elif state_of_LED == blueLedIdx:
                  return_message = 'the blue LED is on'
              elif state_of_LED == (redLedIdx | greenLedIdx | blueLedIdx):
                  return_message = 'all LEDs are on'
              else:
                  return_message = 'multiple LEDs are on'
              answer_for_alexa = build_speechlet_response(return_message, None, should_end_session)
              return(build_response(session_attributes, answer_for_alexa))
          # ------------------------- EVENTS --------------------------
          # this is called when user wants to interact with module's LED
          def manage_LED_request(request, session):
              # get desired LED state (on | off | toggle)
              action = request['intent']['slots']['LEDMessage']['value']
              try:
                  LEDselection = request['intent']['slots']['LED_ID']['value']
              except KeyError: # no LED named, default to all of them
                  LEDselection = 'all'
              session_attributes_init = {"request": request['intent']['name'], "action": action, "LEDselection":LEDselection}
              session_attributes = session_attributes_init
              return(send_LED_message(session_attributes, session))
          # -- end of function
          # this is called when user wants to update Accelerometer reading from thing
          def manage_ACCEL_request(request, session):
              should_end_session = False
              session_attributes = None
              #REQUEST TO BOARD
              stato = {"state" : { "desired" : { "LEDstate": None, "accelUpdate":1 }, }} #generate json
              mypayload = json.dumps(stato)
              update_shadow(NXP_module_name, mypayload)
              return_message = 'The accelerometer data was updated'
              answer_for_alexa = build_speechlet_response(return_message, None, should_end_session)
              return(build_response(session_attributes, answer_for_alexa))
          # -- end of function
          # ------------------------- MAIN ----------------------------
          def lambda_handler(event, context):
              # request_type comes from Alexa (two possible cases, see below)
              request_type = event['request']['intent']['name'] # could be "LEDIntent" or "ACCELIntent"
              # case 1: user wants to interact with module's LED
              if request_type == 'LEDIntent':
                  return(manage_LED_request(event['request'], event['session']))
              # case 2: user wants to interact with module's accelerometer
              if request_type == 'ACCELIntent':
                  return(manage_ACCEL_request(event['request'], event['session']))
      24. Click on "Save" at top right.
        • Note: Notice in the Python file that NXP_module_name is the name of the Thing you will use in your LPCXpresso55S69 project. In this lab, we will use “myNXPIoTTHING”.
      25. Change the timeout setting of the Lambda function: scroll down to the Basic settings section and select Edit
      26. Raise the timeout to 10 sec; that's enough to allow your Lambda function to interact with the other cloud platforms without stopping early. Don't forget to save.
      27. SUCCESS! You've just created and configured the AWS Lambda function. Now let’s create the skill for Alexa to interact with our Lambda function.
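If you want to sanity-check the handler before wiring up the skill, you can feed it a hand-built event. The dict below is a hypothetical, trimmed-down version of the request Alexa delivers, containing only the fields the Lambda code in this lab actually reads:

```python
# Hypothetical, trimmed-down Alexa request event; only the fields the
# lambda_handler in this lab reads (request.intent.name and the slots) are included.
event = {
    "session": {},
    "request": {
        "intent": {
            "name": "LEDIntent",
            "slots": {
                "LEDMessage": {"value": "on"},
                "LED_ID": {"value": "red"},
            },
        }
    },
}

# The same dispatch the handler performs:
request_type = event["request"]["intent"]["name"]   # "LEDIntent" or "ACCELIntent"
action = event["request"]["intent"]["slots"]["LEDMessage"]["value"]
led = event["request"]["intent"]["slots"].get("LED_ID", {}).get("value", "all")
print(request_type, action, led)  # -> LEDIntent on red
```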
    • Create an Alexa skill and link it to the AWS Lambda function
        • What's a skill? It's a set of expressions and utterances which we want Alexa to be able to recognize. By setting a new skill we will be able to interact with Alexa and trigger our Lambda function by saying those specific expressions to our Echo device.
      1. Enter the Amazon Developer Console
        • Note: Create an Amazon Developer account if you haven’t done it yet.
      2. Click on the Alexa tab
      3. Click on Skill Builders then select the Developers Console
      4. Click on “Create Skill”
      5. Set a name to your Skill, for example “myIoTRemoteControlSkill”, select ‘English (US)’ language and Custom Model. Then, click “Create Skill”
      6. Select the “Hello World Skill” (default) template and click on the Continue with template button.
      7. You may be asked to enter a security code.
        • Notice the Skill builder checklist on the right that needs to be fulfilled for the Skill to be ready. You can watch the Alexa Skills Kit developer video to learn how you might do this step. We will use the provided script for this lab.
      8. Open the Interaction Model and Click on the JSON Editor.
      9. Delete the default JSON data and copy-paste the content of Alexa_RC_json_skill.json below into the JSON Editor.
        • {
              "interactionModel": {
                  "languageModel": {
                      "invocationName": "my board",
                      "intents": [
                              "name": "AMAZON.CancelIntent",
                              "samples": []
                              "name": "AMAZON.HelpIntent",
                              "samples": []
                              "name": "LEDIntent",
                              "slots": [
                                      "name": "LEDMessage",
                                      "type": "LED_LIST"
                                      "name": "LED_ID",
                                      "type": "LED_ID_LIST"
                              "samples": [
                                  "to switch {LED_ID} LED {LEDMessage}",
                                  "to switch the {LED_ID} LED {LEDMessage}",
                                  "to turn {LEDMessage} {LED_ID} LED",
                                  "to {LEDMessage} the LED",
                                  "to turn {LED_ID} LEDs {LEDMessage}",
                                  "to {LEDMessage} {LED_ID} LEDs",
                                  "to {LEDMessage} {LED_ID} LED",
                                  "to {LEDMessage} the {LED_ID} light",
                                  "to turn {LED_ID} LED {LEDMessage}",
                                  "to turn {LEDMessage} {LED_ID} lights",
                                  "to turn {LEDMessage} {LED_ID} LEDs",
                                  "to set the {LED_ID} LED {LEDMessage}",
                                  "to {LEDMessage} the {LED_ID} LED",
                                  "to turn the {LED_ID} LED {LEDMessage}",
                                  "to turn {LEDMessage} the {LED_ID} LED",
                                  "to make the {LED_ID} LED {LEDMessage}",
                                  "to switch {LEDMessage} the {LED_ID} LED",
                                  "to turn {LEDMessage} the {LED_ID} light"
                              "name": "AMAZON.NavigateHomeIntent",
                              "samples": []
                              "name": "ACCELIntent",
                              "slots": [
                                      "name": "AccelMessage",
                                      "type": "Accel_LIST"
                              "samples": [
                                  "the {AccelMessage}",
                                  "to read the {AccelMessage}",
                                  "to refresh the {AccelMessage}",
                                  "to update the {AccelMessage}"
                      "types": [
                              "name": "LED_LIST",
                              "values": [
                                      "name": {
                                          "value": "toggle"
                                      "name": {
                                          "value": "on"
                                      "name": {
                                          "value": "off"
                              "name": "LED_ID_LIST",
                              "values": [
                                      "name": {
                                          "value": "all"
                                      "name": {
                                          "value": "blue"
                                      "name": {
                                          "value": "green"
                                      "name": {
                                          "value": "red"
                              "name": "Accel_LIST",
                              "values": [
                                      "name": {
                                          "value": "accelerometer"
      10. Click on Save Model.
        • The JSON file is the translation of what you can manually add using the user-interface menu on the left. Notice that there are now a LEDIntent and an ACCELIntent in your Intents list, and that the invocation name is set to “my board”. You can manually add or delete Intents for your application using the graphical interface or by writing them in the JSON Editor.
        • Navigate through your intents and identify the different Utterances Alexa will be able to recognize to manipulate the LED and accelerometer.
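To get a feel for how an utterance sample maps spoken words onto slots, here is a rough Python sketch that uses a regex as a stand-in for Alexa's far more flexible language understanding (the template and phrase come from the model above):

```python
import re

# Rough stand-in for Alexa's slot filling: match one sample utterance
# template from the interaction model against a spoken phrase.
def fill_slots(template, phrase):
    # Turn "to switch the {LED_ID} LED {LEDMessage}" into a named-group regex.
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>\\w+)", template)
    m = re.fullmatch(pattern, phrase)
    return m.groupdict() if m else None

slots = fill_slots("to switch the {LED_ID} LED {LEDMessage}",
                   "to switch the red LED on")
print(slots)  # -> {'LED_ID': 'red', 'LEDMessage': 'on'}
```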
      11. For custom Intents you need to add an AMAZON.StopIntent or the build will fail. To use an existing intent from Alexa's built-in library, click Intents and then click on +Add Intent.

      12. Scroll down and expand Standard, scroll to the bottom, and you will see the option to add AMAZON.StopIntent. Select +Add Intent

      13. Click on Save Model. (top of page)
      14. Click on Build Model. (top of page)
      15. Select the “Endpoint” section and select AWS Lambda ARN, then copy the Alexa Skill ID; it looks like “amzn1.ask.skill.[…]”
      16. Leave the Skills tab open; you will come back to it in a minute. Open your NXP Lambda function
      17. Click on your Lambda function name to display the “Add triggers” section. Select Add trigger.
      18. On the “Add triggers” section, add an Alexa Skills Kit trigger
      19. Scroll down to “Configure triggers” and paste the Alexa Skill ID that you just copied from your Skill (step 15).
      20. Click on Add at the bottom right and then Save the NXPLambdaFunction
      21. Copy the ARN (Amazon Resource Name) of your Lambda function, it’s located on the top right of the Lambda function (Just click on the icon next to it). You will need this on the Skill
      22. Back in your skill, set the Endpoint of the Alexa service to the Default Region and paste the copied Lambda ARN into the Default Region field.
      23. Click on Save Endpoints (top of page).
        • SUCCESS! We have just created and configured the Alexa Skill and the AWS Lambda function. Now let’s create the ‘Thing’ that your NXP board will use.
    • Create an IoT thing, policy, private key and certificates for your device
        • AWS IoT policies grant or deny access to AWS IoT resources such as things, thing shadows, and MQTT topics. We need to grant access to our Thing by attaching an AWS IoT policy to the certificate associated with our Thing.
      1. Open the AWS IoT console website
      2. In the left navigation pane, choose Secure, and then choose Policies. Then select Create a policy.
      3. Type myIoTPolicy in the Name text box to identify your policy. In the Add statements section, click Advanced mode. Modify lines 5, 6, and 7 with the following content. If you receive an error, fix your typing to match it exactly.
        • {
             "Effect": "Allow",
             "Action": "iot:*",
             "Resource": "*" 
      4. Choose Create
      5. In the left navigation pane, choose Manage, and then choose Things. If you do not have any IoT things registered in your account, the "You don't have any things yet" page is displayed. If you see this page, choose Register a thing. Otherwise, choose Create
      6. On the Creating AWS IoT things page, choose Create a single thing
      7. On the Add your device to the thing registry page, type myNXPIoTTHING in the name text box, then click Next.
        • Note: This will be the name of our “thing”, this is the name we will use on the Lambda function.
      8. On the ‘Add a certificate for your thing’ page, under One-click certificate creation, choose Create certificate
      9. Download your private key and certificate by choosing the Download links for each. Choose Activate to switch on your certificate. Then click Attach a policy
      10. Select the checkbox next to myIoTPolicy (that we created before) and choose Register Thing
        • SUCCESS! Now, we have a Thing with its credentials.
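Optionally, you can smoke-test the new Thing's shadow from your PC using boto3, the same client library the Lambda function uses. This is a sketch only; the update_thing_shadow call assumes your local AWS credentials are configured and the Thing exists in us-east-1:

```python
import json

THING = "myNXPIoTTHING"  # the Thing name registered above

# Build the same "desired" payload the Lambda function will send.
payload = json.dumps({"state": {"desired": {"LEDstate": 1}}})  # red LED on

def send_desired_state(payload, thing=THING):
    """Push a desired state to the Thing's shadow (requires AWS credentials)."""
    import boto3  # AWS SDK for Python; pip install boto3
    client = boto3.client("iot-data", region_name="us-east-1")
    return client.update_thing_shadow(thingName=thing, payload=payload)

# Uncomment to run against your own account:
# print(send_desired_state(payload))
```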
  1. LPCxpresso55S69 Configuration
        • This section includes instructions to load and configure the AWS project that will be loaded to your LPCXpresso55S69.
    • Import the latest version of MCUXpresso SDK
      1. Open the MCUXpresso IDE
      2. Select "Installed SDKs" tab within the MCUXpresso IDE windows
      3. Open Windows Explorer, and drag and drop the previously downloaded latest version of the “” SDK file into the Installed SDKs view.
      4. Click OK to the confirmation window.
      5. The installed SDK will appear in the Installed SDKs tab.
    • Import and configure the aws_remote_control_Cellular_Head SDK example project
      1. Go to XXXXX and download the
      2. On the Quickstart Panel, select “Import project(s) from file system”
      3. Click on the Browse… button (red one if you got an archive, blue one if the project is already unpacked), go to the archive or repository of the project, and click Next.
      4. You should now see the project loaded into the workspace as shown below
      5. Configure the Cellular settings and AWS ‘Thing’ credentials. Locate the file …/amazon-freertos/demos/aws_clientcredential.h and configure:
        • Thing name: change to “myNXPIoTTHING”
      6. Unzip the SDK file and locate the Certificate Configurator in the following path:
        • …/SDK_2.8.2_LPCXpresso55S69/rtos/freertos/tools/certificate_configuration/CertificateConfigurator.html
        • Note: You can locate your file by right-clicking on your Installed SDK item and click on “Open Location.”
      7. Open the CertificateConfigurator file; this will generate an "aws_clientcredential_keys.h" header file based on the certificate files you previously downloaded. The tool is located at: /rtos/freertos/tools/certificate_configuration/CertificateConfigurationTool
      8. Browse to the Certificate and Key files you previously downloaded from your Thing (myNXPIoTTHING) and click on “Generate and save aws_clientcredential_keys.h”
        • Note: If no file is downloaded, allow blocked content in the explorer.
      9. Copy the newly generated “aws_clientcredential_keys.h” and replace the one from your aws_remote_control_Cellular_Head project. The file is located at:
        • …/lpcxpresso55s69_aws_remote_control_Cellular_Head\amazon-freertos\demos
      10. Open the aws_clientcredential_keys.h file in the IDE development window.
      11. Scroll down to the bottom of the page and delete everything below line 80.
      12. Copy and paste this text to the bottom of the aws_clientcredential_keys.h file.
          /* The constants above are set to const char * pointers defined in aws_dev_mode_key_provisioning.c,
           * and externed here for use in C files.  NOTE!  THIS IS DONE FOR CONVENIENCE */
          extern const char clientcredentialCLIENT_CERTIFICATE_PEM[];
          extern const char* clientcredentialJITR_DEVICE_CERTIFICATE_AUTHORITY_PEM;
          extern const char clientcredentialCLIENT_PRIVATE_KEY_PEM[];
          extern const uint32_t clientcredentialCLIENT_CERTIFICATE_LENGTH;
          extern const uint32_t clientcredentialCLIENT_PRIVATE_KEY_LENGTH;
          #endif /* AWS_CLIENT_CREDENTIAL_KEYS_H */
      13. Save the project.
    • Verify the fsl_spi_freertos.h file is in your project
      1. In the latest IDE and SDK, the fsl_spi_freertos.h header file is not included in this project's driver set. Let's verify it is present in your setup and add it if it is missing.
      2. Navigate to lpcxpresso55s69_aws_remote_control_Cellular_Head -> drivers
      3. Scroll down and see if the fsl_spi_freertos.h header file is included in your project. This file would be located right below the fsl_spi_freertos.c file.
      4. In this instance the fsl_spi_freertos.h file is not present. We will need to go into the SDK, manually pull out the header file, and add it to the project.
      5. Navigate back to the SDK directory as in the previous section (where we used the Certificate Configurator)
      6. Navigate through the unzipped SDK to devices -> LPC55S69 -> drivers -> fsl_spi_freertos.h
      7. Drag and drop the fsl_spi_freertos.h file into the drivers section of your lpcxpresso55s69_aws_remote_control_Cellular_Head.
      8. Save the project
    • Disable the msft_Azure_IoT files
      1. Select the msft_Azure_IoT project, right-click, and select Resource Configurations -> Exclude from Build...
      2. You will then be prompted to exclude objects from the release and debug builds; click Select All and then OK
      3. The msft_Azure_IoT project is now excluded from the build.
    • Update the C/C++ Build settings MCU Linker Libraries path
      1. By default, the power_hardabi library search path is incorrect in the latest IDE/SDK; we will need to update the library search path for power_hardabi
      2. Go to Project -> Properties
      3. Navigate through Settings -> MCU Linker -> Libraries. You will see the library search path location. Select the red X to remove the default library search path
      4. Now that the default search path is deleted, select the green + to add a new library search path.
      5. Enter the new library path below, click OK and then select Apply and Save
        • "${workspace_loc:/${ProjName}/libs}"
    • Pin configuration for the Monarch Go Arduino Shield
      1. The current SDK needs to be configured to target the Monarch Go Arduino Shield. Open the Pin configuration by going to ConfigTools -> Pins
      2. Once the page has loaded your setup, verify that we are targeting the lpcxpresso55s69_aws_remote_control_Cellular_Head example. Then change the Functional Group to Board_InitMonarchGoArduinoShield.
      3. Save the project (top left corner)
      4. Go back to the developers page (top right corner)
    • Compile and download the aws_remote_control_Cellular_Head project
      1. Select your project folder and click on Build.
      2. Wait until the project is built.
      3. Verify the build finished without any errors.
      4. Mount the Monarch Go Arduino Shield on the LPCXpresso55S69
      5. Connect the Debug Link port from your LPCXpresso55S69 to your PC using a micro-USB cable
      6. Program your LPCXpresso55S69 by clicking on “Debug” from the ‘Quickstart Panel.’
      7. Select your probe and click “OK” to start programming your board.
      8. Select OK to select the default device
      9. Wait until the program is successfully downloaded.
      10. Open the “Device Manager” on your Windows PC and identify the COM port of your board.
      11. Open a Terminal emulator software (TeraTerm, PuTTY) and connect to the COM port using the following settings:
        • 115200 baud rate
        • 8 data bits
        • No parity
        • One stop bit
        • No flow control
      12. Press the Resume All Debug Sessions button in the IDE
      13. Your Thing is ready and accessible through the NXP AWS Remote Control Android application and Alexa voice control.
  1. Alexa Echo configuration
        • This section provides steps to enable your newly created skill in your Alexa environment.
    • Configure Alexa Application to enable your new Skill
      1. Download the “Amazon Alexa app” from Google Play Store
      2. Open the Alexa app and sign in with your Amazon Developer credentials
      3. Select More -> Skills & Games
      4. Select Dev -> myIoTRemoteControlSkill, this will open the skill you created previously
      5. Select enable skill
  2. Enable and configure the Android application
        • This section describes the procedure to enable your Android application to communicate with and control your AWS Thing.
    • Enable the Android application to communicate with our Thing
      1. In the Amazon Cognito Console select "Manage Identity Pools"
      2. Ensure Enable access to unauthenticated identities is checked. This allows the sample application to assume the unauthenticated role associated with this identity pool.
        • Note: To keep this example simple, it makes use of unauthenticated users in the identity pool. This can be used for getting started and for prototypes, but unauthenticated users should typically only be given read-only permissions in production applications.
        • Verify the name of the unauthenticated pool.
      3. As part of creating the identity pool, Cognito will set up two roles in Identity and Access Management (IAM). These will be named something similar to "Cognito_PoolNameAuth_Role" and "Cognito_PoolNameUnauth_Role". We will use the unauthenticated role.
      4. Select the Policies tab in the left list
      5. Create a policy to be attached to "Cognito_PoolNameUnauth_Role" through the "Policies" menu by selecting "Create policy".
      6. Select the JSON tab.
      7. Delete the existing content, copy the example policy below into the "Policy Document" JSON field, and name it, for example, "<THING NAME>Policy". Replace <REGION>, <ACCOUNT ID>, and <THING NAME> with your respective values (don't forget to remove the < > symbols). This policy allows the application to get and update the shadow used in this sample. The note below shows where to find the required values.
        • {
              "Version": "2012-10-17",
              "Statement": [
                      "Effect": "Allow",
                      "Action": [
                          "Resource": [
                          "Effect": "Allow","Action": [
                              "Resource": [
                                  "arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/update",
                                  "arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/get"
                          "Effect": "Allow",
                          "Action": [
                              "Resource": [
      8. After updating the JSON text, select Review Policy
      9. Set a name to your new policy and click on Create policy.
      10. The newly created policy now needs to be attached to the unauthenticated role, which has permissions to access the required AWS IoT APIs.
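The placeholder substitution in step 7 is easy to get wrong by hand. The sketch below fills <REGION>, <ACCOUNT ID>, and <THING NAME> into the shadow topic ARNs; the values shown are examples, and the single statement printed assumes the iot:Publish action for these topic resources:

```python
import json

# Example values -- replace with your own region, 12-digit account ID, and Thing name.
region, account, thing = "us-east-1", "123456789012", "myNXPIoTTHING"

arn_base = f"arn:aws:iot:{region}:{account}:topic/$aws/things/{thing}/shadow"
resources = [arn_base + "/update", arn_base + "/get"]

# One statement of the policy, with the placeholders filled in.
statement = {"Effect": "Allow", "Action": ["iot:Publish"], "Resource": resources}
print(json.dumps(statement, indent=4))
```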
    • Configure and install Android application
      1. Prepare "" file with your AWS credentials.
        • File is located at <SDK Folder> \boards\lpcxpresso55s69\aws_examples\remote_control_android\
        • Where do I obtain these parameters? See below:
        • <Rest API ENDPOINT>
        • <COGNITO POOL ID>
          • To obtain the Pool ID constant, open your Federated Identity pools and select myNewPool
          • Select "Edit identity pool" in the top right corner. Copy the Identity pool ID (it will look like :). See below.
          • Make sure you select the whole ID, including the region.
          • The file should look like this:
      2. Locate the AWS Remote Control Android application (AwsRemoteControl.apk) in the same SDK folder as the properties file, at \boards\lpcxpresso55s69\aws_examples\remote_control_android\AwsRemoteControl.apk. Connect your Android device to your PC. Make sure you select "File Transfer" instead of just charging on your Android device.
      3. Drag and drop the properties and AwsRemoteControl.apk files to a known location on your Android device.
      4. Install AwsRemoteControl.apk on the Android device. The application requires at least Android version 5.1 (Android SDK 22).
      5. Run the application. You will be asked to select a properties file with AWS IoT preferences. Browse to the dropped file and select it. The application will then establish an MQTT connection to the AWS server, download the last state of the thing's shadow, and be ready for user input.
  3. Test your new Skill and Android application
        • Now that your setup is complete, you can test it with your LPCXpresso55S69 using the Android application and Alexa voice commands.
    • Control the evaluation board using Android Application
        • Verify that you can control the LED status and read Accelerometer data using the AWS Remote Control Android application
    • Control the evaluation board using Alexa voice commands
        • Verify that you can control the LED status and read accelerometer data using Alexa voice commands:
          • “Alexa, ask my board to update the accelerometer”
          • “Alexa, ask my board to turn the red LED on”
          • “Alexa, ask my board to toggle the blue LED”
        • Note: If you don't have an Echo device, you can run your test using the Alexa application or the Alexa developer console -> Test (Skills -> myIoTRemoteControlSkill -> Test).
        • Note: If you ask the board to update the accelerometer data, it should be reflected in the android application.
        • Now you can develop your own Skills and Lambda functions!
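As a starting point for your own extensions, the dispatch pattern in the Lambda function generalizes to new intents. The sketch below adds a hypothetical TempIntent branch (TempIntent is an example name, not part of this lab's skill model), with plain strings standing in for the real handler calls:

```python
# Hypothetical extension of the lab's dispatch pattern with a new intent.
# "TempIntent" is an example name, not part of this lab's skill model.
def lambda_handler(event, context=None):
    request_type = event["request"]["intent"]["name"]
    if request_type == "LEDIntent":
        return "handle LED request"
    if request_type == "ACCELIntent":
        return "handle accelerometer request"
    if request_type == "TempIntent":          # your new intent goes here
        return "handle temperature request"
    return "unknown intent"

result = lambda_handler({"request": {"intent": {"name": "TempIntent"}}})
print(result)  # -> handle temperature request
```

Remember that a real new intent also needs a matching entry (name, slots, samples) in the skill's interaction model JSON.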
  4. LPC55S69 hardware acceleration for AWS demo
        • The LPC55S69 MCU improves on the generic mbedTLS driver with some of the security features supported by the SoC and the SDK; refer to /mbedtls/port/ksdk for further analysis of these features.
        • HASHCRYPT module
          • AES (Advanced Encryption Standard)
        • CASPER module
          • TLS ECP (Elliptic Curves over GF(P))
          • RSA operations with public key
          • NIST P-256 operations
          • ECDSA sign/verify