Tag Archives: WSN

Itead Studio Sonoff SC Revisited

A few months ago I wrote about the Sonoff SC sensor hub by Itead Studio. It’s a device with a Sharp GP2Y1010AU0F [Aliexpress] dust sensor, a DHT11 humidity and temperature sensor, an LDR as a light sensor and a mic. The sensors are driven by an ATMega328P microcontroller, but there is also an ESP8266 on board for WiFi communication, a pretty standard setup when you have several sensors and the ESP8266 GPIOs are just not enough.

In the first post I already did a small mod to replace the DHT11 humidity and temperature sensor with a more accurate and pin-compatible DHT22. Since then, several readers have contributed code and ideas. My progress implementing and testing them is slow, so I thought about writing a first post about some modifications I (and others) have done to the device.

So this is a work-in-progress post. At the moment, all the code for these modifications is in the dev branch of the repository.

This custom Sonoff SC firmware is released as free open source software and can be checked out at my SonoffSC repository on GitHub.

RFM69 WIFI Gateway

Some three years ago I started building my own wireless sensor network at home. The technology I chose at the time has proven to be the right choice, mostly because it is flexible and modular.

MQTT is the keystone of the network. The publisher-subscriber pattern gives the flexibility to work on small, replaceable, simple components that can be attached or detached from the network at any moment. Over this time it has gone through some changes, like switching from a series of python daemons to Node-RED to manage persistence, notifications and reporting to several “cloud” services.

But MQTT talks TCP, which means you need some kind of translators for other “languages”. The picture below is from one of my first posts about my Home Monitoring System, and it shows some components I had working at the time.

Home WSN version 2

All those gears in the image are those translators, sometimes called drivers, sometimes bridges, sometimes gateways. Most of them have been replaced by Node-RED nodes. But not all of them. This is the story of one of those gateways.

Rentalito goes Spark Core

The Rentalito is a never-ending project. It began as a fun project at work using an Arduino UNO and an Ethernet Shield; then it got rid of some cables by using a Roving Networks RN-XV WIFI module; after that it gained MQTT support through Nick O’Leary’s PubSubClient library; and now it leaves the well-known Arduino hardware to embrace the powerful Spark Core board.

Spark Core powered Rentalito – prototype

Spark Core

The Spark Core is a development board based on the STM32F103CB, an ARM 32-bit Cortex-M3 microcontroller by ST Microelectronics, that integrates the Texas Instruments CC3000 WIFI module. It makes creating WIFI-enabled devices extremely easy.

The main benefits of migrating from the Arduino+RN-XV bundle to Spark Core are:

  • Powerful 32-bit microcontroller
  • Reliable WIFI connection (auto-reset on failure)
  • Smaller footprint
  • OTA programming (even over the Internet)

And of course it’s a good opportunity to add a couple of features: a temperature and humidity sensor, and an IR remote control to switch the display on and off or fast-forward through messages.

MQTT support

Spark forum user Kittard ported Nick’s MQTT library to the Spark Core. Since the Spark team implemented the Wiring library for the Spark Core it normally takes very little effort to port Arduino code to the Core.

The library supports both subscribing and publishing. You can subscribe to as many topics as you wish and you get the messages in a callback function with the following prototype:

void (*callback)(char*,uint8_t*,unsigned int);

From here it’s very easy to just store the last value for every topic we are subscribed to, along with some metadata like the precision or the units.
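As an illustration of that idea (this is just a sketch, not the actual Rentalito firmware; topic names, buffer sizes and variable names are made up), the callback could look up the topic in a small table and keep the latest payload:

// Illustrative only: keep the last payload received for each subscribed topic
#define MAX_TOPICS        4
#define MAX_VALUE_LENGTH  16

const char * topics[MAX_TOPICS] = {
    "/home/outdoors/temperature",
    "/home/outdoors/temperature/units"
};
char values[MAX_TOPICS][MAX_VALUE_LENGTH];

void callback(char * topic, uint8_t * payload, unsigned int length) {
    for (unsigned char i = 0; i < MAX_TOPICS; i++) {
        if (topics[i] != NULL && strcmp(topic, topics[i]) == 0) {
            unsigned int size = (length < MAX_VALUE_LENGTH - 1) ? length : MAX_VALUE_LENGTH - 1;
            memcpy(values[i], payload, size);
            values[i][size] = 0;    // the payload is not null-terminated
            break;
        }
    }
}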

Publishing is even easier. A simple call to the publish method is all it takes:

bool PubSubClient::publish(char* topic, char* payload);
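So publishing, say, the temperature reading boils down to something like this (just an example: the topic, the “client” object and the “temperature” variable are assumptions):

// Example only: publish the reading as an integer to avoid float formatting issues
// (assumes a connected PubSubClient instance called "client")
char buffer[12];
sprintf(buffer, "%d", (int) (temperature * 10));    // tenths of a degree
client.publish("/rentalito/temperature", buffer);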

DHT22 support

DHT22 support is provided by another port, in this case from Adafruit’s DHT library for Arduino. Forum user wgbartley (this guy is from the Spark Elite, people who basically live on the Spark forums) published the ported DHT library for the Spark Core on GitHub.
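The port keeps Adafruit’s familiar API, so reading the sensor presumably looks something along these lines (the pin choice and the 2-second pacing are my assumptions, check the library examples):

#include "DHT.h"

#define DHTPIN  D4          // data pin (assumption)
#define DHTTYPE DHT22

DHT dht(DHTPIN, DHTTYPE);

void setup() {
    dht.begin();
}

void loop() {
    float humidity = dht.readHumidity();
    float temperature = dht.readTemperature();     // Celsius by default
    // ...update the display or publish over MQTT here
    delay(2000);                                   // the DHT22 needs ~2s between reads
}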

Recently another user (peekay123, also from the Elite) has published a non-blocking version of the DHT22 library. It uses interrupts to trap transitions on the data line and calculate timings, and a state machine to track the message structure. The previous one performs all the calculations in a single method call and disables interrupts to keep control over the timing.

HT1632C dot matrix display support

This one I ported myself from my previous fork of the original HT1632C library for Arduino by an anonymous user. You can check out the code at Bitbucket (Holtek’s HT1632C library for the Spark Core). The library supports:

  • 32×16 displays
  • drawing graphic primitives (points, lines, circles,…)
  • drawing single colour bitmaps
  • printing chars and texts in fixed positions or aligned to the display boundaries
  • red, green and orange colours
  • 23 different fonts
  • 16 levels of brightness
  • horizontal and vertical scroll

It’s still a work in progress but it’s almost in beta stage.

IR remote support

I had an old-version Sparkfun IR Control Kit (check it here) lying around and I thought it was a good idea to have a way to switch the LED display on and off. I struggled for a couple of days with the IRRemote library for Arduino (like some others) but finally I gave up and decided to implement my own, simpler version.

The approach is very much the same as for the non-blocking DHT22 library above: an interrupt-driven routine that calculates and stores pulse lengths, and a state machine to keep track of where we are in the decoding process.
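The snippets below reference a few globals and thresholds that are defined elsewhere in the firmware. For reference, they look roughly like this (the actual timing values depend on the remote’s protocol, so treat these numbers as placeholders):

// Placeholders: states, timing thresholds (in microseconds) and shared globals
#define STATUS_IDLE     0
#define STATUS_DECODING 1
#define STATUS_READY    2

#define BIT_0           1500    // pulses longer than this decode as a "1" bit
#define BIT_1           3000    // pulses longer than this are a gap between frames
#define IR_DEBOUNCE     500     // milliseconds between accepted keypresses
#define REMOTE_CHECK    0x00FF  // fixed bits expected in every valid message

volatile unsigned char ir_status = STATUS_IDLE;
volatile unsigned char ir_pulses = 0;
volatile unsigned long ir_previous = 0;
volatile unsigned long ir_data[16];
unsigned long ir_timer = 0;

With those in place, the interrupt routine itself is: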

void ir_int() {

    if (ir_status == STATUS_READY) return;

    unsigned long now = micros();
    unsigned long width = now - ir_previous;

    if (width > BIT_1) {
        ir_pulses = 0;
        ir_status = STATUS_IDLE;
    } else {
        ir_data[ir_pulses++] = width;
        ir_status = (ir_pulses == 16) ? STATUS_READY : STATUS_DECODING;
    }

    ir_previous = now;

}

Then in the main loop we check if the message is ready, perform the corresponding action and reset the state:

if (ir_status == STATUS_READY) {

    if (millis() > ir_timer) {

        int key = ir_decode();

        switch(key)  {
            case 10: next(); break;
            case 18: previous(); break;
            case 34: brightness(1); break;
            case 42: brightness(-1); break;
            case 2: toggle(); break;
            default: break;
        }

    }

    ir_status = STATUS_IDLE;
    ir_timer = millis() + IR_DEBOUNCE;

}

The decoding is a matter of translating pulse lengths to bits.

int ir_decode() {
    unsigned int result = 0;
    for (byte i = 0 ; i < 16 ; i++)
        if (ir_data[i] > BIT_0) result |= (1<<i);
    if (REMOTE_CHECK != (result & REMOTE_CHECK)) return 0;
    return result >> 8;
}

It’s very important to add some noise reduction components around the IR receiver, otherwise you will only get streams of semi-random numbers every time you press a key on the remote. You can check the datasheet for the specific model you are using (for instance, check the “application circuit” on the first page of the TSOP382 IR receiver Sparkfun sells) or check the schematic in the next section.

Schematic and layout

The project is mostly a software Frankenstein (well, not quite, you can check the code on Bitbucket). The hardware part is pretty simple. You can get all the information you need from tutorials and datasheets. My only advice is to add noise suppression circuitry around the IR receiver.

Schematic

Next steps

I’m ready to try my first home etching experiment and this project looks like a good candidate. The first step was to create a board layout using Eagle. The board should be single-sided and have the same form factor as the Arduino, so it can use the same mounting holes the Arduino did in the display frame.

And this is the result:

Board layout

As you can see it’s mostly single-sided; I will only have to use one jumper wire to connect the DHT22 sensor’s VCC pin. The layout looks promising and I’m eager to see the final result. Hopefully I will post it here soon.

Thanks for reading!

MQTT topic naming convention

Naming stuff is one of the core decisions one has to make while designing an architecture. It might not look as important as using the right pattern in the right place or defining your database model, but my experience says that a good naming convention helps identify design flaws.

In a previous post I introduced the network I’m building for my home monitoring system. As I said, it will be based on MQTT, a lightweight messaging protocol. An MQTT message has four attributes: topic, payload, QoS and a retain flag. I will focus on the “topic” in this post but I will come back to the QoS and retain attributes sometime in the future.

The MQTT specification defines the topic as “(…) the key that identifies the information channel to which payload data is published. Subscribers use the key to identify the information channels on which they want to receive published information”. But the cool thing about MQTT topics is that the protocol defines a hierarchical structure very much like the Filesystem Hierarchy Standard in use on unix, linux and mac boxes. This, along with the possibility of using wildcards to match topics, makes the structure very suitable for a WSN.

Some examples of topics are:

  • /home/livingroom/bulb1/status
  • /home/door/sensor/battery
  • /home/door/sensor/battery/units
  • /home/outdoors/temperature
  • /home/outdoors/temperature/yesterday/max
  • /zigbee/0013a20040401122/dio3
  • /zigbee/0013a20040410034/adc7

Semantic vs. physical approach

There are a bunch of small decisions to make here. Let’s start from the beginning… When building a topic hierarchy there are (at least) two different approaches. The first is a semantic approach: name things for where they are and what they measure. A humidity sensor in the bathroom to detect shower times (??) could publish its data under “/home/bathroom/humidity”.

The second option is a physical approach: name things for what they are or what they are attached to. Following the previous example, the humidity sensor might be attached to analog pin 3 of an end-device radio in an Xbee mesh network, so it could just as well publish its data under “/xbee/0013a20040410034/adc3”, why not?

Generally the semantic approach is preferable since it is more human-friendly, but the physical approach is more machine-friendly (even if only slightly). Using the previous example again, the Xbee gateway could subscribe to “/xbee/+/+/set” to get all the messages that should be sent to the different radios.
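As a quick illustration using an Arduino MQTT client like Nick O’Leary’s PubSubClient (the “client” object and the topics are just the examples above), wildcard subscriptions look like this:

// "+" matches exactly one topic level, "#" matches any number of trailing levels
// (assumes a connected PubSubClient instance called "client")
client.subscribe("/xbee/+/+/set");      // every "set" command aimed at any radio/pin
client.subscribe("/home/outdoors/#");   // everything published under /home/outdoors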

Semantic approach structure

For the physical network it is easy to define a topic structure based on the path to get to the data, like in the last example: “the sensor attached to the AD converter pin 3 in the radio with address 0x00 0x13 0xa2 0x00 0x40 0x41 0x00 0x34”.

For the semantic approach there are a bunch of possibilities, but most of the networks out there use a location-based structure: first you identify the sensor by its physical location and then the magnitude: “/home/2ndfloor/bathroom/temperature”. As you can see this can be read quite naturally, albeit reversed: “the temperature in the bathroom of the 2nd floor at home”.

It’s worth noting that MQTT provides a way to split a large-scale network into different chunks, each with its own scope, via the mount_points feature. Check an interesting thread about mount_points here. So it can be a good idea to foresee how my network might grow, not only downwards but also upwards, and that’s why the “/home” in some of the examples I’m showing might not be a good root location; it’s better to use something more specific like “/buckingham palace” or “/lovenest” (I will keep using /home in the examples anyway).

And after the location part I will just provide the magnitude: temperature, pressure, humidity, air_quality, power, battery,… and status. Status is normally a discrete value (typically 0 or 1) indicating the state of the sensor or device. I find it preferable to use “/home/entrance/light/status” rather than simply “/home/entrance/light” to publish whether the lights are on or off.

Modifiers, prefixing and postfixing

I have already used some “particles” in the example topics above, words like ‘set’, ‘yesterday’, ‘max’,… I’ve gathered some of these particles browsing the internet searching for MQTT topic names. I have tried to classify them into different types:

  • Metadata: timestamp, units, alarm,…
  • Aggregation: time ranges like today, last24h, yesterday, month, year, ever,… and operators like max, min or average. A special case of time range could be “now”, “last” or “current” for the last value published on a certain topic, although it is usually omitted.
  • Actions: get or query, set
  • Structure-related: raw for, well, raw values

Some of the modifiers are clearly attributes or metadata of the data itself. In these cases postfixing makes perfect sense:

  • /home/bedroom/temperature 21
  • /home/bedroom/temperature/units C
  • /home/bedroom/temperature/timestamp 2012-12-10T12:47:00+01:00

The reading from the bedroom temperature sensor was 21 degrees Celsius on Dec 10 at 12:47 UTC+1. As I’ve said, some people use “current”, “now” or “last”. I used to think of this as redundant, but it may be necessary when graphing your network messages the way Ben Hardill explains in his d3 MQTT topic tree visualiser post, where only the leaves can have values.

Another reason to use “last” (or any of the others) is when you are also publishing aggregated information for that magnitude. In this case it looks more logical to have a structure like this one:

  • /home/bedroom/temperature/last
  • /home/bedroom/temperature/last/timestamp
  • /home/bedroom/temperature/last24h/max
  • /home/bedroom/temperature/last24h/max/timestamp
  • /home/bedroom/temperature/last24h/min
  • /home/bedroom/temperature/last24h/min/timestamp
  • /home/bedroom/temperature/ever/max

But first you should ask yourself if your messaging network is the place to publish this info. Who will use it? If the answer is only you, then you should add some graphing solution like Cosm, Nimbits, Open Sen.se, or your own. Keep in mind MQTT is a machine-to-machine protocol.

But for actions and structure modifiers it’s not so evident. Postfixing (appending at the end) for actions is coherent with the “reversed natural reading” naming convention: “switch on the lights in the stairs” becomes “/home/stairs/light/status/set 1”.

But prefixing in MQTT is equivalent to creating new hierarchy roots, thus “splitting” the topics into different sub-networks, so it fits quite well for structure modifiers. You could have a /home root for sensor data using a semantic approach and a /raw root for raw sensor data using a physical approach. The network should then provide a service to map topics back and forth between both sub-networks:

  • /raw/xbee/0013a20040410034/adc3 => /home/bedroom/temperature
  • /home/bedroom/lights/set => /raw/xbee/0013a20040410034/dio12

This republishing service has been proposed by Robert Heckers in his MQTT: about dumb sensors, topics and clean code post and you can even use an open source implementation of an MQTT republisher by Kyle Lodge using the Mosquitto python library.

Republishing vs. publishing the right contents

There are some details I don’t like about the “republishing” approach. First, you are setting up a service that has to know about the physical network (gateways, technologies, radio addresses, pins…). Second, you are doubling the traffic on your network without adding any value apart from the topic renaming.

So my point is to do the mapping in the gateway, before publishing anything. This way the messaging is agnostic of the physical structure of the radio network; the gateway is the only holder of that information. Besides, the mapper will double as a filter, filtering out unnecessary messages *and* processing values. Let’s say you configure an MCU-less sensor with an Xbee radio to report the input of an analog pin. Chances are you will have to do some maths with the reported value to get a meaningful one. For example, the supply voltage reported by the radio has to be scaled by 1200/1024 to get the actual value in mV.
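A rough sketch of what I mean, written Arduino-style (the addresses, topics and helper names are placeholders, not actual gateway code): the driver keeps a small mapping table, drops samples from unknown sources and scales the raw reading before publishing.

// Placeholder sketch: map a raw Xbee sample to a semantic topic, filter out
// unknown sources and convert the 10-bit ADC reading to millivolts
// (assumes a connected PubSubClient instance called "client")

struct Mapping {
    const char * address;   // 64-bit radio address as a hex string
    unsigned char pin;      // ADC pin on that radio
    const char * topic;     // semantic topic to publish to
};

Mapping mappings[] = {
    { "0013a20040410034", 3, "/home/bedroom/temperature" }
};

void onXbeeSample(const char * address, unsigned char pin, unsigned int raw) {
    for (unsigned char i = 0; i < sizeof(mappings) / sizeof(Mapping); i++) {
        if (strcmp(address, mappings[i].address) == 0 && pin == mappings[i].pin) {
            unsigned long mV = raw * 1200UL / 1024;     // scale against the 1.2V reference
            char payload[12];
            sprintf(payload, "%lu", mV);
            client.publish(mappings[i].topic, payload);
            return;
        }
    }
    // samples from unmapped radios/pins are simply dropped (filtering)
}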

Conclusions

To be honest, I’ve written this rather long post to make up my mind about the subject. These are some of the conclusions I will apply to my own system:

  • The message topics should be independent from the underlying technology.
  • Topics will have semantic meaning, starting with the location and then the magnitude they represent. More particles can be added to the topic to add attributes or metadata.
  • The different gateways and publishers will be responsible for:
    • Abstracting the physical network architecture.
    • Filtering unnecessary messages.
    • Processing values before publishing them.
    • Adding metadata.
    • Listening for and processing messages aimed at sensors under their “control”.
  • Republishing will be avoided if possible.
  • Aggregation data goes somewhere else.

I am not really sure about publishing content only to leaf nodes. The analogy with a linux file system is quite obvious: you only put content into leaf nodes (files), but still I find it somewhat ugly (and for me that means there is something wrong).

The final test will be to actually apply these rules to implement my MQTT topic hierarchy and see if it works. Let’s go!

Home monitoring system

All of us tinkermen end up working on a home monitoring/automation system sooner or later. And that’s the big background project for me at the moment.

I already have some of the pieces in several stages of readiness, but I was lacking an overall view of the system as a whole. My initial approach was to store everything in a MySQL database and develop a web application to graph the time-series values. The plan was to deploy a Wireless Sensor Network with an undefined number of sensors transmitting their information over Zigbee. An Xbee coordinator, sitting on a Sparkfun Xbee Explorer USB plugged into the server, would receive the information, and a python script would decode the packets and store the messages in the database.

Home WSN version 1

I was planning to design the database based on the physical building blocks (device, sensor, value) and then provide a REST API (I am a REST fanboy) to consume the data.

Then I thought about storing the information from other services I’m using, like Efergy Engage (I will talk about that in another post). Unfortunately Efergy does not provide an API for their products. I’ve asked them a couple of times and they said it was not a priority, but I think it would go against what they feel their business is, so I don’t think they will ever open it. Anyway, it was not hard to look at the requests their online graphing app makes, and I wrote a little script that logs in, makes a single request to read the data for the last 24 hours and stores it in the database.

And then I thought that it wouldn’t hurt to start graphing all that data in Cosm.com or an RRD tool. After all, it will take me some time to put the UI I want in place. But then where should I plug that in? Do I have to create a consumer for the REST API that just pushes data to Cosm.com with a cron job every minute? Or should the Xbee driver and the Efergy script dump the information to MySQL *and* to Cosm.com? In the second option MySQL is simply a backup data store and there is a plethora of drivers performing various tasks. Moreover, if I want more consumers in the future I will have to add code to all those drivers… The first option seems more maintainable and decouples the several components of the network. MySQL becomes the center of the network and clients pull data from it on a regular basis using the REST API. No real-time data here, though. And each component will have to support my custom API.

And then one day, reading the news feeds on my phone on my way to work, I stumbled on a post on Robbert Henkkens’ blog (one of my favourites) about publishing sensor data using MQTT. MQTT (Message Queue Telemetry Transport) is a lightweight publish/subscribe messaging protocol aimed at machine-to-machine communication. It was created at IBM in 1999 but it has an open license and it’s royalty free. The infrastructure requires a broker that receives and dispatches messages, and a number of clients. Each message has a topic and a payload (and a header that can be as short as two bytes). Clients can publish or subscribe to any number of topics (you can define access control rules) and specify a message to post on a topic when they disconnect (the “last will”). There are 3 QoS levels and subscribers can request messages they missed while disconnected.

It took me a few days to realize that that was the solution I needed to decouple the components of my network. I could work on each component of my network in an isolated way: a publisher or consumer of MQTT messages, no matter what or who is on the other side of the communication. There are a number of MQTT brokers available, including Mosquitto, an open-source project that provides a daemon broker, a C library, a python library and command line utilities for publishing and subscribing, which is perfect for me. Besides, I would be using a (de facto) standard protocol: no more custom APIs or other ad-hoc solutions. Even Cosm.com supports MQTT (although it does not support the full specification)!

This is how my MQTT-centered WSN might look like:

Home WSN version 2

I am already adapting what I have to this pattern. There are still some issues I have to address, like how to store the sensor data (MySQL is not the best option for sure), how to define a topic naming convention, or how to publish data from dumb sensors under the right topic. Robert suggests a couple of solutions for this last problem. I think the topic republishing option is the one I like the most, but that is a topic (ahem) for another post.