The AWS Snow Family as an IoT Gateway – Architecting for Disconnected Edge Computing Scenarios

When running AWS IoT Greengrass on an AWS Snow family device, customers find it particularly useful to host small ML models that have been pre-trained in the cloud.

Here are some real-world examples:

Transcription/translation: Passengers on trains cannot always hear the announcer (or hear them clearly). To improve accessibility, passengers can subscribe to an SMS service that will send transcribed text versions of what was said – translated into the passenger’s native language if desired. Trains do have internet connections these days, but they tend to be slow and unreliable.

Mobile healthcare: During the pandemic, customer demand for AWS Snow family devices exploded due to the need for portable, on-site ML inference for patient assessment and identification. It is straightforward to integrate a camera that streams to an AWS IoT Greengrass component, where facial recognition is performed. At the same time, a thermal camera sends an infrared map to a separate component that assesses whether the patient has a fever, while a third set of cameras observes the queue so that another component can estimate the waiting time from object movement patterns. The results of these interactions are then transferred to the main data store in the cloud as connectivity becomes available – or sometimes the device is simply returned to AWS for physical data loading and exchanged for a fresh one.

Let’s assume that the relatively small amount of compute and storage available on an AWS Snowcone is sufficient for our use case. The following figure shows the high-level architecture we wish to achieve:

Figure 9.1 – AWS Snowcone physical connection via SATCOM

The first thing we will need to do is establish a communication channel over a SATCOM terminal to the internet.

Configuring the SATCOM terminal

In this example, we will use Inmarsat’s L-band service (BGAN) to communicate with a geostationary satellite to reach the internet via an AddValue Ranger 5000 terminal:

Figure 9.2 – A Ranger 5000 L-Band terminal (left) and using an iOS app to align it (right)

Recall from Chapter 3 that geostationary satellites sit at a fixed point in the sky relative to a user on the ground. You will also recall from the same chapter that circular polarization is used in such situations, so our terminal’s antenna is a flat square that needs to be pointed at whichever satellite in the constellation we have the best line of sight to.

There are multiple applications freely available for both iOS and Android that will assist in this process. See the preceding figure for an example of this. This particular terminal can also emit a tone while you are pointing the antenna to help fine-tune its positioning:

Figure 9.3 – Ranger 5000 terminal admin interface

Similar to a router an ISP might provide, the terminal has Ethernet ports that hand out DHCP addresses to client machines. In this case, we will connect our laptop to Ethernet port 1 on the terminal and the AWS Snowcone to Ethernet port 2. Before powering on the AWS Snowcone device, we must log in to the terminal's web interface to ensure that its SIM is registered on the network and that the terminal is successfully transmitting data. See the preceding figure for an example of this interface.

Figure 9.4 – Connection quality from the terminal to AWS

Once we have a good connection, we can run some ping tests to amazonaws.com from our laptop. Looking at the preceding figure, you'll notice a few things. First, the average RTT is 1,902 ms – nearly 2 whole seconds. This is long enough to cause outright timeouts for some applications, and very low throughput for anything TCP-based. Second, the mean deviation of 479 ms – almost 25% of the average – represents an extreme amount of jitter. Finally, while packet loss of nearly 2% would be considered extremely high and unacceptable over a terrestrial internet connection, it is not uncommon over geostationary SATCOM links such as this.
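If you want to script link-quality checks rather than eyeball them, the numbers above can be pulled out of ping's summary line programmatically. The following sketch parses a captured rtt summary – the avg and mdev values match the test described above, while the min and max values are hypothetical, included only so the line has the shape ping actually prints:

```shell
# Parse the summary line printed by `ping -c 100 amazonaws.com`.
# avg (1902 ms) and mdev (479 ms) match the test above;
# min and max are made-up illustrative values.
summary='rtt min/avg/max/mdev = 1450.112/1902.331/3012.554/479.012 ms'

stats=${summary#* = }    # keep only: 1450.112/1902.331/3012.554/479.012 ms
stats=${stats% ms}       # drop the trailing unit
avg=$(echo "$stats" | cut -d/ -f2)
mdev=$(echo "$stats" | cut -d/ -f4)

# Express jitter as a percentage of the average RTT
jitter_pct=$(awk -v a="$avg" -v m="$mdev" 'BEGIN { printf "%.0f", 100 * m / a }')
echo "avg=${avg}ms mdev=${mdev}ms jitter=${jitter_pct}%"
```

Running this against the captured line prints avg=1902.331ms mdev=479.012ms jitter=25% – the same roughly 25% jitter figure noted above.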

These are important considerations for services such as DNS and NTP – you should consider running local versions of these services and tuning their caching behavior so that the SATCOM link is not crossed for every single address resolution or time sync.
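As an illustration of the DNS side, a local caching resolver such as Unbound could run on the laptop with its cache floors raised well above the defaults. This is a sketch under the assumption that Unbound is the resolver of choice – the TTL values are examples to tune for your workload, not recommendations:

```
# /etc/unbound/unbound.conf – sketch of a SATCOM-friendly caching resolver
server:
    interface: 127.0.0.1
    # Keep answers cached far longer than normal to avoid repeated
    # ~2-second round trips over the link for the same names
    cache-min-ttl: 3600
    cache-max-ttl: 86400
    # Refresh popular entries before they expire
    prefetch: yes
```

A similar approach applies to NTP: a local chrony or ntpd instance with long polling intervals (for example, chrony's minpoll and maxpoll options on the server line) keeps time-sync traffic over the link to a minimum.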

Now, we can power up, unlock, and configure AWS Snowcone using AWS OpsHub or the CLI per the AWS Snow family documentation. Note that you must deploy the preinstalled Greengrass AMI on an snc1.medium instance.
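At a high level, the CLI flow looks something like the following transcript. The device IP, manifest path, unlock code, and image ID are all placeholders – refer to the AWS Snow family documentation for the exact arguments for your device:

```shell
# Unlock the device with the Snowball Edge client
# (the manifest file and unlock code come from the AWS console)
snowballEdge configure          # prompts for endpoint, manifest path, unlock code
snowballEdge unlock-device
snowballEdge describe-device    # wait until the unlock state reports UNLOCKED

# Launch the preinstalled Greengrass AMI against the device's
# EC2-compatible endpoint (image ID shown is a placeholder)
aws ec2 run-instances \
    --endpoint http://192.168.1.123:8008 \
    --image-id s.ami-0123456789abcdef0 \
    --instance-type snc1.medium
```

Note that on Snow family devices, AMI IDs carry an s. prefix and the EC2-compatible API is served from the device itself rather than a regional endpoint, which is why the --endpoint flag is required.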