Monday, April 28, 2014

Home Alone "elderly monitoring" Project reboot coming...

A much-needed reboot of this project is poised to happen.  My mother-in-law is moving in and I am going to need a monitoring system for when she is left alone in the house.  In particular, I am concerned about the "wandering" potential (i.e. leaving the house).  I am less interested in the stove monitoring (which introduced a number of manufacturing showstoppers for the project).

Over the past year, I have used X10 sensors on and off (as part of the "Home Alone" prototype).  The batteries have lasted impressively.  While X10 is not the future, it represents a cost-effective "available now" starting point.

A new perspective comes with this reboot: I am less interested in the subscriber model (a website/server to aggregate the collected data for perusal and dispersal).  I am (re)looking at GSM/SMS as a first-tier notification solution, with email notification as an option via the Internet.
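
The SMS plumbing itself is refreshingly small.  Here is a rough Lua sketch, assuming a serial GSM modem at /dev/ttyUSB0 (already configured via stty) that speaks the standard AT command set; the device path and phone number are placeholders, not a tested setup:

  -- Minimal sketch: send one SMS through a GSM modem using standard AT commands.
  local modem = assert(io.open("/dev/ttyUSB0", "r+"))

  local function at(cmd)
    modem:write(cmd .. "\r")
    modem:flush()
    os.execute("sleep 1")          -- crude pacing; real code would wait for "OK"
  end

  at("AT+CMGF=1")                  -- put the modem in SMS text mode
  at('AT+CMGS="+15551234567"')     -- destination number (placeholder)
  modem:write("Home Alone: front door opened" .. string.char(26))  -- Ctrl-Z sends the message
  modem:flush()
  modem:close()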

With that in mind, the proliferation of ARM-based Linux boxes is offering relief from the >$100 Intel SBCs I've been resigned to using. The BeagleBone and Raspberry Pi (and their ilk) are not my targets. I want an industrial-quality ARM SBC that can be trusted.  A few are starting to appear.

If my system *has* to be AC powered (you can't effectively run Wi-Fi, Ethernet, or SMS indefinitely on battery power -- maybe a few weeks at best), then considering microcontrollers (like the Cortex-M4) doesn't make much sense.

If I did a Cortex-M4 approach, we are looking at:

  • $12 board (start with an eval board)
  • Hardwired sensors (and then build BT LE wireless nodes)
  • A GSM modem
  • Development using MPE Forth
On the other hand, if I did an ARM/Linux SBC:

The BOM works out roughly the same, but with less work for the Linux SBC.  In addition, I can do development on my laptop and target my Intel NUC for the prototype (porting down to the Olimex later?).

So, anyway, I am still mulling this over, but in the next few days I will be making a definitive decision on direction.

Saturday, November 30, 2013

Lua(JIT), Perl, Erlang (RabbitMQ) all playing nicely in Home Alone

Home Alone is a melting pot of technology, all playing together.  RabbitMQ is the transport and Linux is the foundation.

The base station (the hub for all the house sensors) runs a stripped-down Ubuntu 12.04 Server (to fit on a 1GB eMMC flash drive) and hosts RabbitMQ (as temporary storage and transport) along with the sensor-reading code, written in LuaJIT (calling libusb directly -- no C).  The LuaJIT code talks to RabbitMQ via STOMP (it could just as well have used AMQP, but I was itching to test a pure Lua STOMP library I've developed). The server is efficient and cranks along seamlessly.  With RabbitMQ, you can say that each base station is running Erlang!
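
To give a flavor of the STOMP side, here is a minimal Lua sketch of the raw frames involved, using LuaSocket (the host, port, credentials, and queue name below are illustrative assumptions, not the real setup or the actual library):

  -- Build a STOMP frame: command, headers, blank line, body, NUL terminator.
  local socket = require("socket")

  local function frame(command, headers, body)
    local out = { command }
    for k, v in pairs(headers) do out[#out + 1] = k .. ":" .. v end
    out[#out + 1] = ""                         -- blank line ends the headers
    out[#out + 1] = (body or "") .. "\0"       -- a NUL byte terminates the frame
    return table.concat(out, "\n")
  end

  local conn = assert(socket.connect("localhost", 61613))   -- RabbitMQ STOMP plugin port
  conn:send(frame("CONNECT", { ["accept-version"] = "1.2", host = "/",
                               login = "guest", passcode = "guest" }))
  conn:receive("*l")                           -- read (and ignore) the CONNECTED line

  conn:send(frame("SEND", { destination = "/queue/sensor-readings",
                            ["content-type"] = "text/plain" },
                  "motion,kitchen," .. os.time()))
  conn:close()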

The collected sensor data is sent via a RabbitMQ shovel to a RabbitMQ instance in the cloud.  There the sensor data is read by a "logger", which simply dumps the data directly into daily rolling log files.
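
The "logger" itself amounts to little more than appending to a dated file.  A sketch, with an assumed path and record format:

  -- Append one reading to today's log file (path and format are assumptions).
  local function log_reading(line)
    local path = "/var/log/homealone/" .. os.date("%Y-%m-%d") .. ".log"
    local f = assert(io.open(path, "a"))
    f:write(os.time(), " ", line, "\n")
    f:close()
  end

  log_reading("temp,kitchen,74.1")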

Any house monitoring app on the cloud server is able to "subscribe" to sensor data, but currently the apps periodically read the daily log (building up a picture of the house in memory), determine the state of the house (is the stove on? is there motion? is anyone home?), generate appropriate events, feed them into RabbitMQ event queues, and terminate.

This batch approach is used to keep the system manageable (performance-wise), since each arriving event doesn't trigger the monitoring apps directly.  The monitoring apps are essentially cron'd to run every few minutes. (Each base station is required to send data or a heartbeat every 5 minutes -- this is how we determine if the base station is running properly.)  This approach should scale well (but not fantastically well).
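
The liveness check falls out of that heartbeat rule almost for free.  A Lua sketch (the last-seen times below are invented for illustration; the real ones would come from the daily logs):

  -- Flag any base station that hasn't sent data or a heartbeat within 5 minutes.
  local HEARTBEAT_S = 5 * 60
  local last_seen = {
    ["base-station-1"] = os.time() - 120,
    ["base-station-2"] = os.time() - 900,
  }

  for station, t in pairs(last_seen) do
    local silence = os.time() - t
    if silence > HEARTBEAT_S then
      print(("WARNING: nothing from %s for %d seconds"):format(station, silence))
    end
  end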

The first house monitors were written in Lua, but I rewrote them in Perl.  Why?  The house monitors are mostly pure logic.  They model the house and I needed a richer language than Lua (quick: how do you sort a table in Lua based on a time stamp and then walk that table in sorted order?).
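
(For the record, the usual Lua idiom looks something like the sketch below -- workable, just not as effortless as Perl.  The readings are invented for illustration.)

  -- Put the records in an array-style table, sort with a comparator on the
  -- timestamp field, then walk it in order with ipairs.
  local readings = {
    { ts = 1385800200, sensor = "kitchen-motion" },
    { ts = 1385796600, sensor = "front-door"     },
    { ts = 1385803800, sensor = "bedroom-motion" },
  }

  table.sort(readings, function(a, b) return a.ts < b.ts end)

  for _, r in ipairs(readings) do
    print(os.date("%H:%M", r.ts), r.sensor)
  end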

Perl is old hat, but it still has too much built-in facility for describing data structures and algorithms to be ignored.  I could have used Python, but I enjoy coding in Perl more. "Modern Perl" looks nice...

But it really doesn't matter, as the architecture allows using any programming language you want.  The monitors are essentially just modules in a pipeline data flow.  I may be using Haskell for some monitors.  I want to use a language that expresses what a monitor needs to do as succinctly as possible.


Friday, November 29, 2013

Telling a Story with Sensors

More than just reporting events, your sensors should tell a story.

Collecting temperature and motion data from around your house can tell you a lot about how you live.

My elderly monitoring system strives to tell you (the caretaker) an ongoing story about what your loved one is doing during the day.  From just motion and temperature, the system can tell:

  • Did she have breakfast this morning? When?
  • Is she getting regular meals (spending time in the kitchen)?
  • Did she get up at her usual time in the morning?
    • Maybe she isn't feeling well and is still in bed.
  • Is she wandering about the home?
  • Did she leave the home?
  • Is she taking naps in the living room chair?
  • Did she fall asleep last night in the living room chair?
  • Is it too cold or too hot in the house?
  • Is she constantly changing the thermostat?
  • Did she leave the stove on unattended?
  • Did she leave the water running in the bathroom (causing a flood)?
This is much more useful than just alert events.  At the end of the day, a caretaker should be able to track the well-being of the elderly, not just be notified when something is wrong.
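
To make that concrete, here is a toy Lua sketch of one such inference -- "did she have breakfast?" reduced to "was there kitchen motion between 6am and 10am?" (the events and the time window are invented for illustration):

  local events = {
    { ts = os.time({year=2013, month=11, day=29, hour=7,  min=40}), room = "kitchen" },
    { ts = os.time({year=2013, month=11, day=29, hour=12, min=10}), room = "living"  },
  }

  -- Return true (plus the time) if kitchen motion occurred in the breakfast window.
  local function had_breakfast(evts)
    for _, e in ipairs(evts) do
      local h = tonumber(os.date("%H", e.ts))
      if e.room == "kitchen" and h >= 6 and h < 10 then
        return true, os.date("%H:%M", e.ts)
      end
    end
    return false
  end

  print(had_breakfast(events))   --> true   07:40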


Sunday, May 12, 2013

Home Alone base station is built with Unix principles in mind... using LuaJIT!

Home Alone leverages Unix design principles.  That is, it embraces Unix (in the current case, the flavor is Linux) rather than ignoring it.

Home Alone follows the principle that if your Linux system can run for months without a hiccup, then that is a good model for reliability. Not a great model, but a good one.  For instance, there are a lot of crazy things going on in an Ubuntu 12.04 server (which is used for both the base station and cloud instances of Home Alone). Take a look at syslog sometime.  But it works, for months.

Home Alone uses Upstart as a process monitor daemon. There are several Home Alone processes running on the base station: X10 (CM19A) USB interface, temperature control USB interface, a process to filter messages, one to log messages to disk (for transactional delivery to the cloud), and one to push messages to the cloud.

The base station processes communicate with each other through POSIX message queues. Look them up; they're built into Linux and offer a loosely coupled way for multiple processes to talk to each other.
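
Posting to a POSIX message queue from LuaJIT takes only a few lines of FFI, in the "no C" spirit described below.  A minimal sketch (the queue name, flags, and message format are assumptions, not the actual Home Alone ones):

  -- Open (or create) a POSIX message queue and post one message to it.
  local ffi = require("ffi")
  local bit = require("bit")

  ffi.cdef[[
  typedef int mqd_t;
  mqd_t mq_open(const char *name, int oflag, unsigned int mode, void *attr);
  int   mq_send(mqd_t mqdes, const char *msg_ptr, size_t msg_len, unsigned int msg_prio);
  int   mq_close(mqd_t mqdes);
  ]]

  local rt = ffi.load("rt")

  local O_WRONLY, O_CREAT = 0x01, 0x40       -- Linux x86 values
  local mq = rt.mq_open("/homealone", bit.bor(O_WRONLY, O_CREAT), tonumber("0644", 8), nil)
  assert(mq >= 0, "mq_open failed")

  local msg = "motion,kitchen," .. os.time()
  assert(rt.mq_send(mq, msg, #msg, 0) == 0, "mq_send failed")
  rt.mq_close(mq)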

All Home Alone processes generate rolling log messages (courtesy of Upstart).  The ext4 filesystem is my database for message storage (why not? A good journaling filesystem can be an excellent poor man's index store -- directories and files!).

The Home Alone processes don't depend on any "third party" middleware.  Each process is written in LuaJIT (100% Lua, no C) and links directly to system libraries. Any memory leaks or crashes are *mine*. (So far: 18 days of uptime, feeding the cloud at least every 30 seconds, with no crashes or memory leaks.)

This all runs amazingly light and well coordinated. I can kill a process and Upstart restarts it, and it picks up where it left off.  If I unplug the CM19a USB transceiver, the interface process waits until the transceiver is plugged back in.

No Home Alone processes hold state in memory. They are part of a workflow. Any message or event that cannot be dropped is placed into the ext4 filesystem until it is acknowledged as received by the cloud server.  Each message is a file. The file is written "atomically" from the perspective of any consumer: it is created hidden, the message is written, the file is closed, and it is then "renamed" so that the consumer can see it.
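
The write-hidden-then-rename trick is small enough to show in full.  A Lua sketch, with an assumed spool directory:

  -- Write a message as a file atomically: write under a hidden temp name,
  -- then rename so consumers only ever see complete files.
  local function write_message(dir, name, body)
    local tmp   = dir .. "/." .. name          -- hidden while being written
    local final = dir .. "/" .. name
    local f = assert(io.open(tmp, "w"))
    f:write(body)
    f:close()
    assert(os.rename(tmp, final))              -- atomic on the same ext4 filesystem
  end

  write_message("/var/spool/homealone", os.time() .. "-motion.msg", "motion,kitchen")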

A heartbeat is generated (at the application level) from the base station to the cloud server. This makes it possible to detect whether the base station is connected and communicating properly in a timely manner.

I've still got plenty of fault tolerance testing to do, and some recovery strategies to implement. For instance:

  1. If a USB sensor has been unplugged or fails (such as the X10 CM19a), an event should be sent to the cloud.
  2. The base station should be "rebootable" from the cloud server. There should be some remote control.
  3. If the cloud server isn't getting (at least) "heartbeats" from the base station, there should be an email alert.
The base station, once hardened, will get at least a one-month "burn in" before I consider it done.

/todd

Friday, April 26, 2013

Thoughts on Architecture: Fault Tolerance, Reliability, etc.

Let's turn our thoughts to infrastructure (or Architecture).  Your thousand-dollar home monitoring system considers "architecture", doesn't it?  You know, security, availability, reliability, those boring things, right?

Home Alone uses SSL to connect from the home base to the "cloud" server.  We don't want people peeking at your data (or capturing plain-text passwords). I don't know if the other monitors do secure connections, but *everything* (in the Internet of Things) should.

The home base software is designed with failure in mind. There will be internet outages. They happen. There will be moments (or minutes, or hours) when you could have a service interruption. The home base software is designed to log sensor data to "persistent storage" (flash or hard disk) until a good connection is made to the server. If that connection goes away, data is retained in persistent storage until the connection comes back.

I mentioned "fault tolerance" before. While there is no real recovery from total hardware failure (if the base station hardware dies, it dies -- there is no redundancy there unless you want to supply a backup base station), I do everything I can to make sure that data isn't lost.  Above, I mentioned logging sensor data to "persistent storage" when there isn't internet connectivity. Well, the system actually logs data regardless of connectivity. As soon as sensor data is collected (and filtered) it is written to non-volatile storage. The data is only deleted when it has been "confirmed" successfully received by the server.

Bad connection? No data is lost.  Home base loses power suddenly? No logged data is lost (which is probably most, if not all, of the sensor data collected up to the moment of power loss).

On the internet side, the data is also persisted. It is never thrown away. It is archived. Pick a day; it can be "played back".

There are plans for server "fault tolerance" too. I plan to have mirrors (US east and US west). Data will be replicated between the servers. A server outage (Amazon East Coast, I'm looking at you) won't result in total failure.

Architecture matters.

Tuesday, April 23, 2013

Monitor your "Independent" Grandmother for under $200 (hardware cost)

I'm looking into appropriate ARM-based devices for the base station: one with good Wi-Fi coverage and enough memory/horsepower for the job. The current contender is about $75.  Cheaper ones can be had for under $50, but I wanted one with a good antenna.

Pair that with a few sensors and you can get started for under $200. What would this get you?

Let's assume X10 for now (until I get my Bluetooth LE sensor project going).

Let's also assume an apartment or condo...

You'll need:

  • 1 Wi-fi Base Station ($75)
  • 1 X10 USB RF Transceiver CM19a : (ebay *new* ~ $20 each)
  • 1 Door sensor, DS-10a or DS-12a : (ebay *new* ~ $13 each)
  • 4 Motion sensors MS-14a or MS-16a (ebay *new* ~ $12 each)
    • Living/Dining Room
    • Bed Room
    • Kitchen
    • Bath Room
  • 1 USB temperature probe (~$15 Amazon, ebay, etc)
  • 2 "I need you now" buttons KR-15a (ebay *new* ~ $8 each)
    • Mounted Next to bed
    • Mounted in Bathroom
So, to monitor Grandma, you'll need to buy around $187 in hardware ($75 + $20 + $13 + 4 × $12 + $15 + 2 × $8).
With this hardware, you can monitor stuff like:
  1. Did Grandma go to bed last night?
  2. Is Grandma in bed unusually late in the morning?
  3. Did Grandma leave the apartment (wandering...)?
  4. Did Grandma eat breakfast, lunch and dinner (activity in kitchen followed by dining room)?
  5. Did Grandma leave the kitchen stove on unattended?
  6. Is Grandma in the bathroom an unusually long time?
  7. Does Grandma need assistance? (Pressed Big Red Button)
This isn't a scary amount of money.  What do you think?

Sunday, April 21, 2013

Stove Monitoring

Right now, I have a monitor in place for stove usage. This monitor will detect if the stove is "on" but unattended. This is a non-invasive monitor (i.e. no modifications made to the stove). The monitor utilizes a motion sensor (for the whole kitchen) and a USB temperature probe (TEMPerl).

The probe is mounted on the back wall (behind the stove) up near the vent.  It is not in the direct path of heat, but it doesn't need to be.

I am taking rolling/moving averages of the sampled temperature (in 30-second intervals), and any "sudden" spike in temperature (say from a 73.3F average to 75F -- one or two degrees) will make the monitor believe the stove is on.  While the stove is "on", if there is no motion in the kitchen for N minutes (e.g. 30 minutes), then an alert is emailed.  If there is motion, the timer is reset. If the sampled temperature drops (a degree or two) below the now-elevated average, then the stove is cooling down (turned off?).
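
In rough Lua terms, the heuristic looks something like the sketch below; the smoothing factor and thresholds are illustrative guesses, not tuned values:

  local avg          = nil        -- rolling average of the sampled temperature (F)
  local alpha        = 0.1        -- smoothing factor for the moving average
  local spike_deg    = 1.5        -- rise above the average that suggests "stove on"
  local unattended_s = 30 * 60    -- alert after 30 minutes with no kitchen motion
  local stove_on     = false
  local last_motion  = os.time()

  -- Called every 30 seconds with the latest probe reading and motion flag,
  -- e.g. sample(73.4, true) from the probe-reading loop.
  local function sample(temp_f, motion_seen)
    if motion_seen then last_motion = os.time() end
    if avg == nil then avg = temp_f end

    if not stove_on and temp_f > avg + spike_deg then
      stove_on = true                          -- sudden rise over the smoothed baseline
    elseif stove_on and temp_f < avg - spike_deg then
      stove_on = false                         -- dropping back below: cooling down
    end

    avg = avg + alpha * (temp_f - avg)         -- update the rolling average

    if stove_on and os.time() - last_motion > unattended_s then
      return "ALERT: stove appears to be on and unattended"
    end
  end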

There is a lot of tuning to do, but the smoothing from the rolling/moving average helps significant rises and declines stand out while normal warming/cooling of the kitchen (due to the sun or the thermostat) is averaged out. It may be worth looking into Bayesian classifiers to see if they can better tune out normal warming/cooling trends.