
Storing and publishing sensor data

Now that I have started to monitor some quantities at home, like power consumption or the front door opening, I have to do something with this information. Sensor data is mainly useful for two purposes:

  • analysing it to learn more about your environment or your lifestyle patterns (like when and how you spend the money you spend on energy)
  • taking real-time actions based on events from your sensors (turn on a light when you pass by a dark corridor at night, receive a notification when someone enters your house,…)

To analyse the information you will first have to store it in a way that lets you retrieve it later, graph it, summarize it, perform roll-ups over different time ranges, compute moving averages, detect time patterns,… whatever. Even if the goal of the sensor information is to trigger events (watch out: the door sensor is running out of batteries!) you will probably want to store it somewhere and keep a log of actions as well.

So, where to store this information? You have different options available:

  • Relational databases (RDBMS) like MySQL or PostgreSQL. If you have some programming background this is pretty easy, we are all used to storing data in MySQL or something similar. But mind that the tables will GROW very quickly: a sensor reporting every minute will create more than half a million entries a year (see the sketch after this list).
  • NoSQL databases like MongoDB or Cassandra. They are popular and everyone wants to use them. Cassandra is often used to store time-series data like server load or log files, and the Internet is full of information about how to store it, retrieve it, partition it,… MongoDB also has some nice features like capped collections.
  • Round-robin databases. They have been around for a while: RRDtool version 1.0 was born in 1999 and MRTG is even older. Data is stored in circular buffers; you define the size of the buffer and the step, i.e. how often a new value is inserted into it. When the buffer is full the oldest data gets overwritten, over and over. So you keep another buffer for aggregated data (like daily values), another one for even larger scales (like weekly values) and so on.
  • Time-series databases. And finally there are a number of databases that have been specifically designed to store time-series data: fast inserts, fast range lookups, roll-ups,…
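To give an idea of what the RDBMS approach looks like, here is a minimal sketch using Python and the built-in sqlite3 module (the table and column names are made up, and the same schema works on MySQL): one readings table, one insert per sample, and an hourly roll-up query of the kind mentioned above.

import time
import sqlite3

db = sqlite3.connect("sensors.db")
db.execute("""CREATE TABLE IF NOT EXISTS readings (
    sensor TEXT NOT NULL,
    ts     INTEGER NOT NULL,   -- unix timestamp
    value  REAL NOT NULL)""")
db.execute("CREATE INDEX IF NOT EXISTS readings_sensor_ts ON readings (sensor, ts)")

# store one sample
db.execute("INSERT INTO readings VALUES (?, ?, ?)",
           ("power", int(time.time()), 305.0))
db.commit()

# hourly roll-up (average) over the last 24 hours
rows = db.execute("""SELECT ts / 3600 * 3600 AS hour, AVG(value)
                     FROM readings
                     WHERE sensor = ? AND ts > strftime('%s', 'now') - 86400
                     GROUP BY hour""", ("power",)).fetchall()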

Lately, some companies have started to provide time-series storage services online. They provide a public API, graphing capabilities, and some of them even roll-ups or data aggregation. They have sprung up around the Internet of Things hype. Maybe the most popular is Cosm (formerly Pachube) but it’s not the only one: TempoDB, Nimbits or ThingSpeak are very good options.

Cosm has been around for a while and it is still the service most used by the IoT community. AFAIK, there is no restriction on the number of data points you can store for free as long as you keep your data “public”. Data is structured in “feeds” and “datastreams”. They have an API you can use to push data individually or in batches, and also to back up the data you already have in Cosm. Their charts are nice looking but they have predefined time ranges that automatically aggregate your data, so you have little control over what and how you want to graph. They have a number of goodies like triggers, a graph builder (to embed your Cosm charts in your site), tags, apps, location-based search,…
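For illustration, pushing a value to a datastream boils down to a single authenticated HTTP request. This is a sketch in Python with the requests library, following the v2 API as I remember it (the feed id and API key are placeholders):

import json
import requests

API_KEY = "YOUR_COSM_API_KEY"   # placeholder
FEED_ID = 12345                 # placeholder feed id

payload = {"version": "1.0.0",
           "datastreams": [{"id": "power", "current_value": "305"}]}
r = requests.put("http://api.cosm.com/v2/feeds/%d" % FEED_ID,
                 data=json.dumps(payload),
                 headers={"X-ApiKey": API_KEY})
print(r.status_code)   # 200 means the datapoint was accepted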

TempoDB is a relatively new player. It’s free up to 5 million data points per database, which is roughly 9.5 years of data at 1 sample per minute. You can have several “series” of data per database. Their API is really good and they have a number of clients, including a python one :^). Their charts might not be as “pretty” as those from Cosm but you have full control over the data you are visualizing: date range, interval and aggregation functions, and they are FAST (Cosm can be really sluggish sometimes). Your data is private, so there is no need for a search tool. They have tags and I’ve been told they are about to release a notification service.

Nimbits has a desktop-like interface based on ExtJS. It is basically a paid service (their free quota is 1000 API requests per day, which is OK for 1 sensor reporting at a rate of 1 sample every 2 minutes) and it costs $20 per 2 million requests (roughly $20 per year if you have 4 sensors reporting at a 1 sample/minute rate).

ThingSpeak is very similar to Cosm: data is presented in a “channel” where you can see multiple charts for different quantities (they call them “fields”), a map and even a video from Youtube. It’s open source, so you can clone the code from Github and install it on your own server. Their web service is limited to 1 API request every 15 seconds per channel, which is OK as long as you group all the data for one channel in a single request (that is, if you have more than one field per channel; see the sketch below). For each field you have a lot of options to define your chart: time range, time scale, different aggregation functions, colors, scale,…
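Grouping all the fields of a channel into one request is straightforward. A sketch in Python with requests against ThingSpeak’s update endpoint (the write key is a placeholder and the parameter names follow their current documentation):

import requests

# one request updates the whole channel; field1/field2 are the channel fields
r = requests.post("https://api.thingspeak.com/update", data={
    "api_key": "YOUR_WRITE_API_KEY",   # placeholder write key
    "field1": 305.0,                   # e.g. power
    "field2": 4.46,                    # e.g. battery voltage
})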

The information in this post is not very detailed but I hope it can serve as an introduction for anyone who is looking for ways to store or publish sensor data. Right now I’m using Cosm and TempoDB, as well as storing all my data in a local MySQL database (not the best option but it works for now). There are plenty of options I still have to explore. In my next post I will talk about the daemon I’m using to push data to Cosm and TempoDB; in the meantime you can check the code on Github.

Smartmeter pulse counter (4)

This is going to be the last post in the smartmeter pulse counter series. I want to wrap up several things: the final hardware, the code and the data visualization.

Final hardware

This is what the pulse counter sensor looks like, almost. The final version that’s already “in production” has a switch to hard-reset the radio from outside the enclosure, nothing special otherwise. Everything is socketed so I can reuse the components, the photocell probe connects to the 3.5mm jack on top of the box, and I’m using 3 alkaline AA batteries (not the rechargeable ones in the picture).

Code

The code is freely available under the GPLv3 license on Github. The code itself is pretty simple: it uses the Arduino LowPower library by RocketScream to keep the Arduino sleeping most of the time. It only wakes on an event on either of the two interrupt pins:

void setup() {

    pinMode(LDR_INTERRUPT_PIN, INPUT);
    pinMode(XBEE_INTERRUPT_PIN, INPUT);
    pinMode(XBEE_SLEEP_PIN, OUTPUT);

    Serial.begin(9600);

    // Using the ADC against internal 1V1 reference for battery monitoring
    analogReference(INTERNAL);

    // Send welcome message
    sendStatus();

    // Allow pulse to trigger interrupt on rising
    attachInterrupt(LDR_INTERRUPT, pulse, RISING);

    // Enable interrupt on xbee awaking
    attachInterrupt(XBEE_INTERRUPT, xbee_awake, FALLING);

}

The LDR_INTERRUPT pin is where the photocell-based voltage divider is connected. When the photocell resistance drops due to a light pulse the pin sees a RISING transition and the Arduino counts the pulse. The XBEE_INTERRUPT pin is connected to the ON_SLEEP pin of the XBee (pin 13). While the XBee is sleeping this pin is pulled high; when it awakes the pin goes low and the Arduino sends the message.

// Interrupt service routines: count a light pulse / flag the XBee wake-up
void pulse() {
    ++pulses;
}

void xbee_awake() {
    ready_to_send = true;
}

In the main loop the Arduino sleeps until an event awakes it. If the event was triggered by the XBee it calls the message sending methods:

void loop() {

    // Enter power down state with ADC and BOD module disabled
    LowPower.powerDown(SLEEP_FOREVER, ADC_OFF, BOD_OFF);

    // Check if I have to send a report
    if (ready_to_send) {
        ready_to_send = false;
        sendAll();
    }

}

Results

The messages are received by an XBee coordinator radio connected to a USB port on my home server, where my xbee2mqtt daemon listens for incoming messages from the radio port. The messages are mapped to MQTT topics (namely /benavent/general/power and /benavent/powermeter/sensor/battery).
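Any MQTT client can then consume those topics. As a reference, a minimal subscriber could look like this (a sketch using the paho-mqtt Python client in its 1.x style, assuming the broker runs on localhost):

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print("%s -> %s" % (msg.topic, msg.payload.decode()))

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("/benavent/general/power")
client.subscribe("/benavent/powermeter/sensor/battery")
client.loop_forever()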

The mqtt2cloud daemon (I will write a post about this one soon) pushes the data to Cosm.com or Tempo-db.com. And the final result looks like this:

First issues

The pulse counter has been running for some days now and the first issue has arisen. You may notice in the previous graph that from time to time the sensor stops reporting data for several minutes. I still have to find out what’s going wrong but my guess is that there is some issue with the interrupts and the transmissions. I am not disabling interrupts while transmitting because I thought it was not necessary when using the hardware UART, but maybe I was wrong.

The problem doesn’t seem to be related to the time of day or to the power being measured, and in the tests I did while checking the XBee sleep cycle it did not happen (although the probe was not plugged in, so there were no additional interrupts…). The distance to the coordinator radio was one of the candidate causes in my first tests, but now I am testing another sensor that’s just one meter away from the pulse counter and it reports flawlessly…

Any suggestions?

XBee to MQTT gateway

So far I’ve posted about hardware and theoretical stuff like network architecture or naming conventions. I think it’s time to move to the software side.

The core of the sensor network I’m deploying at home is the Mosquitto broker, which implements the MQTT protocol. It manages the message queue, listening for messages posted by publishers and notifying the subscribers.

I’ve been working in parallel to have at least some pieces in place to get and store information from the pulse counter sensor. These are an XBee to MQTT gateway and a couple of consumers: one storing info into a MySQL database and another one pushing it to cosm.com.

I want to introduce you to the first piece: the xbee2mqtt daemon. It’s already available on Github under the GPLv3 license. It publishes the messages received by an XBee radio to the Mosquitto instance. The radio must have a Coordinator API firmware loaded. Right now the gateway understands frame types 0x90 and 0x92, which correspond to “Zigbee receive packet” (i.e. data sent through the serial link of the transmitting radio) and “Zigbee IO data sample” (an automatic sample of selected analog and/or digital pins on the transmitting radio).
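For reference, reading those two frame types from a coordinator is short work with the python-xbee library. This is only a sketch: the serial port, baud rate and frame names follow that library’s ZigBee API as I recall it, so treat the details as assumptions.

import serial
from xbee import ZigBee

ser = serial.Serial("/dev/ttyUSB0", 9600)   # assumed port and baud rate
zb = ZigBee(ser)

while True:
    frame = zb.wait_read_frame()
    if frame["id"] == "rx":                      # 0x90, data from the remote UART
        print(frame["source_addr_long"], frame["rf_data"])
    elif frame["id"] == "rx_io_data_long_addr":  # 0x92, automatic IO sample
        print(frame["source_addr_long"], frame["samples"])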

I’ve tailored the daemon to my needs, but trying to be as generic as possible. The design is based on small components:

  • The “xbee” component takes care of the connection to the radio and the packet parsing.
  • The “router” maps xbee addresses/ports to MQTT topics.
  • The “processor” pre-processes values before publishing them.
  • The “mqtt” component takes care of the message publishing.
  • And the XBee2MQTT class (which extends Sander Marechal’s fabulous daemon class) glues everything together.

You can read the code to get full insight into what it does, but I’d like to explain here some of the decisions I’ve taken.

I’ve abstracted the message source to an address and a port. The address is the 8-byte physical serial number of the radio (SH and SL) and the port is the pin (adc0, adc1,… dio1, dio2,…). 0x90 packets are mapped to virtual ports. The sender can define the name of the virtual port (like “battery:4460\n”) or otherwise a generic name will be used (“serial” by default).
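Splitting such a payload into a virtual port name and a value is trivial. A sketch (the function name is made up; the “serial” fallback matches the default mentioned above):

def parse_payload(data, default_port="serial"):
    """Split a payload like 'battery:4460\\n' into ('battery', '4460');
    unnamed payloads go to the default virtual port."""
    data = data.strip()
    if ":" in data:
        port, value = data.split(":", 1)
        return port, value
    return default_port, data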

Routing is a basic piece of functionality. As I already explained in my previous post about topic naming conventions, I think the mapping should be done in the gateway because no other component should have to know about the physical structure of the wireless (XBee) network. So the xbee2mqtt daemon maps all the messages to MQTT topics with semantic meaning. You can also enable default topic mapping, which will publish any message received from an undefined address/port combination to a topic like /raw/xbee/<address>/<port>.
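In essence the router is a dictionary lookup with a fallback. A sketch (class and route names are made up, and the address is a placeholder):

class Router:
    """Maps (address, port) pairs to MQTT topics, falling back to the
    generic /raw/xbee/<address>/<port> topics when enabled."""
    def __init__(self, routes, default_pattern="/raw/xbee/%s/%s"):
        self.routes = routes
        self.default_pattern = default_pattern

    def topic_for(self, address, port):
        return self.routes.get((address, port),
                               self.default_pattern % (address, port))

routes = {("0013a200XXXXXXXX", "adc7"): "/benavent/powermeter/sensor/battery"}
router = Router(routes)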

The processor uses a strategy pattern to pre-process any incoming value. I will be using this to do some calculations on the adc7 value the XBees report (that’s the voltage monitor pin) to convert it to the real voltage the batteries are providing.
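As a sketch of the idea (the divider ratio is a made-up number, and I’m assuming the XBee reads its 10-bit ADC against the internal 1.2V reference):

class BatteryVoltageProcessor:
    """Strategy that turns a raw adc7 reading into battery volts,
    assuming a 10-bit ADC against a 1.2V reference and a hypothetical
    voltage divider on the battery line."""
    def __init__(self, divider_ratio=4.0):
        self.divider_ratio = divider_ratio

    def process(self, value):
        return int(value) * 1.2 / 1023 * self.divider_ratio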

All the components have been designed so they can be injected into any code that depends on them. This is a common pattern (dependency injection) that favours decoupling and provides a clean way to define strategies at runtime, for instance when mocking components in the unit tests.
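A sketch of why this pays off (hypothetical class names, reusing the router and processor sketches above): the same gateway code runs against the real mqtt component or against a fake one in a unit test.

class Gateway:
    def __init__(self, xbee, router, processor, mqtt):
        self.xbee = xbee
        self.router = router
        self.processor = processor
        self.mqtt = mqtt

    def handle(self, address, port, value):
        topic = self.router.topic_for(address, port)
        self.mqtt.publish(topic, self.processor.process(value))

# in a unit test the mqtt component becomes a stub that records the calls
class FakeMQTT:
    def __init__(self):
        self.published = []
    def publish(self, topic, value):
        self.published.append((topic, value))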

As always, comments are more than welcome!

Smartmeter pulse counter (3)

The smartmeter pulse counter will be the first standalone sensor I deploy, so power economy is a requirement.

I will have to power an Arduino Pro Mini and an XBee radio. I plan to power the XBee from the Arduino’s regulated 3V3 output, which can provide as much as 200mA, more than enough for the XBee. The built-in regulator accepts from 3.35V up to 12V, so 3 AA cells will provide enough potential to drive the whole setup. But for how long?

The Arduino will wake briefly to count every pulse, and the radio has been configured to wake once every minute for 200ms and ask the uC to report the pulse count since the last report. The figures below for the number of pulses are based on an average consumption of 300W, that is one pulse every 3 seconds or 1200 pulses per hour.

Once I had breadboarded the sensor I started to take some measurements in the different operational states. You can check my results in the following table:

Arduino   XBee          mA      ms/event   events/h   ms/h      mA (avg)
normal    transmitting  ~54     200        60         12000     0.180
normal    sleep         7.940   1          1200       1200      0.003
sleep     sleep         0.290   -          -          3586800   0.289
Total                                                           0.472

So 0.472 mA is the average current consumption of the sensor. The AA cells I will be using are rated at 2000mAh, so the sensor should be able to run for about 4237 hours or roughly 177 days. Not bad. Of course there is room for improvement (290µA while doing nothing is really too much) but I prefer to give it a try now rather than over-optimizing. After all these are theoretical values and I want to know if they will match reality.
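For reference, the arithmetic behind those numbers: each state’s current is weighted by the fraction of every hour spent in it.

# weighted average of the three states: (mA, ms per hour)
states = [(54.0, 12000), (7.94, 1200), (0.29, 3586800)]
avg_ma = sum(ma * ms for ma, ms in states) / 3600000.0   # -> ~0.472 mA

hours = 2000 / avg_ma   # 2000mAh AA cells -> ~4240 hours
days = hours / 24       # -> ~177 days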

I plan to test different approaches to power the next sensors: coin cells, LiPo cells, solar panels, rechargeable NiMH batteries,…