When I think about my early childhood, one picture comes up from time to time: my grandfather standing before the so-called “weather station”, a wooden panel with three brass gauges: a thermometer, a hygrometer and a barometer. He would gently knock against the glass lid of the barometer with his knuckle and then check the barometer again. As a child I was convinced he was waking up the little worker inside, who in return checked whether the weather would become fair, rainy, or whether it was about to change. This made perfect sense to me, because I knew there was a special force responsible for the weather: the weather fire dwarfs. After the evening news, the anchor would say “And now the weather fire dwarfs for tomorrow”. It was much later that I asked my mom, and she explained to me that it wasn’t the “Wetterfeuerzwerge” (weather fire dwarfs) but the “Wettervorhersage” (weather forecast). Until then, I was convinced that being a weather fire dwarf was just a job like being a firefighter, a baker or a teacher. They watched the weather and made predictions about how it was going to be. It also made perfect sense that this job was done by dwarfs, given the small instruments they had to work in.
You might wonder where I’m heading with those seemingly random childhood memories. Now, my grandfather certainly did not wake up a little weather fire dwarf, nor did he perform some other kind of magic. He just made sure the barometer – which measures pressure with a small, sealed metal box inside called an aneroid cell – was showing the tendency. The mechanism tends to stick a little, so when you tap it, the pointer moves to the current pressure, giving you an indication of whether the air pressure is rising or falling. He also knew that the little markers that read “very stormy”, “rain”, “change”, “fair” and “very dry” were not to be taken literally. In winter, “fair” weather often meant it was getting very cold. He not only looked at the barometer – he also knew how to interpret what he saw.
Fast forward to 2018: I struggle with explaining the use and the accuracy of sensors to colleagues. I can’t fight the feeling that digital has somehow taken a toll on our imagination and our expectations towards instruments. It’s especially true for hygrometers, as humidity is far more complicated to measure exactly than temperature 1. A digital hygrometer usually provides you with two digits behind the comma. So instead of reading “a little above 50%” on the analog hygrometer, it provides you with a straight 51.23% relative humidity. That’s astounding. Sadly, it doesn’t mean what people think it means. Because it seems to provide such an exact figure, people assume it’s more accurate than the analog reading of grandpa’s old weather station. But it isn’t necessarily so. If it’s a very good digital humidity sensor with ±1.5% accuracy, the reading means the true value is somewhere around 50 to 53%. If it is a more common sensor with a given ±5% accuracy, it tells you that the humidity can be anything along the lines of 46 to 56%. 2
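To make this concrete, here is a small sketch (the function name and values are mine, not from any particular logger’s API) that turns a proud two-decimal display reading plus an accuracy spec into the band of values the true humidity could actually have:

```python
def humidity_range(reading: float, accuracy: float) -> tuple[float, float]:
    """Return the (low, high) band the true value may lie in,
    treating the accuracy spec as +/- percentage points of RH."""
    return (reading - accuracy, reading + accuracy)

# The display says a confident 51.23% ...
print(humidity_range(51.23, 1.5))  # good sensor: roughly 49.7 to 52.7
print(humidity_range(51.23, 5.0))  # common sensor: roughly 46.2 to 56.2
```

The two digits behind the comma survive the arithmetic, but the band is what the sensor actually promises you.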
But why do they give two digits behind the comma anyway – is this just a fancy feature? Yes and no. To understand that, we have to look at the difference between accuracy and resolution, two specifications you will find when you are buying a datalogger or a sensor. What we just talked about is the accuracy. The resolution is often finer than the accuracy: your humidity sensor might have an accuracy of +/- 2% RH, but a resolution of 0.1% RH. This seems like a contradiction at first, but it isn’t. Let us imagine a little weather fire dwarf who is able to feel how wet it is. He says: “It’s 52%.” Now, while we saw that this can mean anything between 50% and 54%, if we send him out again he will be able to tell us whether it is becoming wetter or drier than before – and in more detail than he can pin down the absolute humidity. He is able to say: “It’s getting damper, now it’s 52.1%.” So, while the digits behind the comma mean little in terms of absolute humidity, they can help us understand where the climate of a room is heading and how severe the changes are. If you measure every 5 minutes and get readings of 52.1%, 52.3%, 52.2%, 52.1%, 52.2% and 52.1% over half an hour, that tells you something different from a series that reads 52.0%, 52.2%, 52.3%, 52.4%, 52.6%, 52.8%. While both series can still be off by 2% in absolute terms, the tendency of the second series shows that something is going on which makes the room damper. Think of the resolution as a measurement that helps you see change – just like grandpa tapping on that old barometer.
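The difference between the two series above can be made visible with a few lines of code. This is only a sketch – the readings are the made-up numbers from the paragraph, and I’m using a plain least-squares slope as the trend measure:

```python
def trend(readings: list[float], interval_min: float = 5.0) -> float:
    """Least-squares slope of a series of readings, in % RH per hour."""
    n = len(readings)
    xs = [i * interval_min for i in range(n)]  # timestamps in minutes
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den * 60  # slope per minute, scaled to per hour

stable = [52.1, 52.3, 52.2, 52.1, 52.2, 52.1]
rising = [52.0, 52.2, 52.3, 52.4, 52.6, 52.8]
print(round(trend(stable), 2))  # close to zero: nothing much is happening
print(round(trend(rising), 2))  # clearly positive: the room is getting damper
```

Both series live entirely inside the sensor’s ±2% accuracy band, yet the slope of the second one is unmistakable – that is the resolution doing its job.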
Next up, we will ask whether sensors really read the temperature and humidity of a room.
Here comes the 2nd part…
- Which is also the reason why you get temperature sensors for just a few cents, while you can invest a good deal of money in a good humidity sensor ↩
- Fun fact: the accuracy given by loggers as +/- percentage values is not standardized. So, a 5% accuracy can mean the reading is 5 percentage points off, or it can mean it is really 5% off of your reading. For a reading of 51% relative humidity, that leaves you either with a range between 46 and 56% or with one between about 48.5 and 53.5%. As most datalogger and sensor manufacturers refuse to document what their accuracy percentage means, I always assume it’s the worst of both possibilities. ↩
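The ambiguity in the footnote above is easy to show in code. A hypothetical sketch (these function names are mine) of the two readings of a “±5%” spec applied to a 51% measurement:

```python
def band_points(reading: float, spec: float) -> tuple[float, float]:
    # "+/- 5%" read as five percentage points of relative humidity
    return (reading - spec, reading + spec)

def band_relative(reading: float, spec: float) -> tuple[float, float]:
    # "+/- 5%" read as five percent of the reading itself
    delta = reading * spec / 100
    return (reading - delta, reading + delta)

print(band_points(51.0, 5.0))    # about (46.0, 56.0)
print(band_relative(51.0, 5.0))  # about (48.45, 53.55)
```

Since the spec sheet rarely says which one is meant, assuming the wider band is the safe choice.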