Let’s Encrypt with NGINX and pfSense

I’ve been using self-signed certificates for a few of the applications and services I run at home on the local network for a year now, and with Firefox it’s not an issue as you accept the risk the very first time and never get asked again. However, in the kitchen the Splunk dashboard runs in the Chrome browser because it’s a little more stable running 24/7 on the tablet, and Chrome treats self-signed certificates differently to Firefox – every week it pops up the same boring warning screen and you have to press Advanced and then proceed to carry on to the site…

Chrome self signed certificate warning

So to fix the issue with the annoying warning screen, I need to get hold of a non-self-signed certificate.

To get a non-self-signed certificate, I need to use a domain that I own or can prove I have control over, which rules out the local network fake domain set in pfSense (“.home”). I have a few other domains but don’t want to mix those up with internal addresses, so I purchased a new domain from HOST-IT.co.uk.

Next up, trying out Let’s Encrypt – I’d read about Let’s Encrypt about two years ago and wanted to use it in a project at work for test environments, but since we have a paid-for wildcard certificate, we used that for all the test environments instead. This little project at home gives me the perfect opportunity to try out Let’s Encrypt and get a non-self-signed certificate for free 🙂

I started with the docker image certbot/certbot, but since I’d never used certbot before, after a few attempts I decided to install certbot on my server instead – at least that way it prompts you to complete the certificate registration. For those interested, the command I used is below.

certbot certonly --config-dir config/ --work-dir work/ --logs-dir logs/ --manual --preferred-challenges dns --email email@address.com --agree-tos -d test.example.co.uk

To request a wildcard certificate instead of a specific host certificate, use: -d *.example.co.uk

Once you run the above, you’ll be prompted to agree to your IP address being logged – entering No will cancel your certificate request! After you’ve agreed to your IP being logged, it will display a DNS TXT record that you need to place on your domain.

certbot request certificate dns challenge

Take the challenge code and create a DNS TXT record on your domain, e.g.

acme challenge text example

I’d highly recommend testing whether the TXT record resolves at this point before pressing Enter in the certbot window. I found that the TTL on my hosting account was set to a day by default and had to change it to 1 minute. To test that the DNS TXT record can be seen, run something similar to:

dig -t TXT _acme-challenge.example.co.uk
dig text check

Once you’ve got your certificate, there are a few more commands that may come in useful, like:

certbot --config-dir config/ --work-dir work/ --logs-dir logs/ certificates

For listing all the certificates you own.

certbot list certificates
certbot --config-dir config/ --work-dir work/ --logs-dir logs/ renew

For renewing all the certificates you own.

certbot renew certificates
certbot --config-dir config/ --work-dir work/ --logs-dir logs/ --help

For a list of all the other commands available.

Next up, I’m keeping my home network domain as “.home” because I have too many services to change over now, so I need to override any requests in pfSense that go to specific hosts on the new domain name. To do this, go to the DNS Forwarder -> Host Overrides section and add a new entry, e.g. when a request for splunk.example.co.uk is received, answer with the local IP address.

pfSense host override

And finally, the NGINX config needs to be updated to use the new certificates. While I’ve been testing out these new certificates, I’ve temporarily kept the server_name configuration I previously used (“splunk.home”) and added the new domain alongside it, so either will work.
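For anyone recreating this, the relevant part of the server block would look something like the sketch below. The certificate paths and proxy port are illustrative, not my exact config (with the --config-dir used above, certbot writes the files under config/live/<domain>/; 8000 is Splunk web’s default port):

```nginx
server {
    listen 443 ssl;
    # both the old internal name and the new domain are accepted
    server_name splunk.home splunk.example.co.uk;

    # certificate and key issued by Let's Encrypt (paths are illustrative)
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://localhost:8000;
    }
}
```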

nginx config

And the result is that the Chrome browser on the tablet no longer objects to the certificate. Cool 🙂

Splunk dashboard with new certificate

p.s. for those trying to use this guide, some of the images show an example whereas others are based on my real Splunk setup – sorry! In the first set of certbot images I requested test.example.co.uk, so if you wanted to put a corresponding record into pfSense you’d need to put host=test, domain=example.co.uk, and the nginx config would say server_name test.example.co.uk.

Home Monitoring Re-Write Number Four and the HTTP Status Code 499 Errors

As previously mentioned in the home monitoring re-write number four post, I have a number of Arduinos around the house that collect data and send it using an HTTP POST to the home monitoring application. During the upgrade of the home monitoring stack, I decided to introduce Nginx to proxy the requests that go to the home monitoring application, both for visibility and to do a little bit of URL re-writing.

Before Nginx was introduced, the Arduinos made POST requests to a set of PHP files which received the request, added a timestamp and a header, and re-posted the data to the home monitoring application directly. That process only changed slightly: instead of POSTing directly to the home monitoring application, the PHP files now POST to Nginx, which proxies the home monitoring endpoint.
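A rough sketch of what that proxying looks like – the port, paths and rewrite rule here are illustrative, not my exact config:

```nginx
# Nginx sits in front of the home monitoring application; the Arduinos
# (via the PHP files) now POST here instead of to the application directly.
server {
    listen 80;

    location /meter {
        # a little URL re-writing before handing the request on (path is illustrative)
        rewrite ^/meter$ /api/meter break;
        proxy_pass http://localhost:8080;
    }
}
```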

I noticed a few weeks after introducing the new stack that the grid import figures seemed rather low (by a factor of 10!) compared to the units that the import meter was reporting, and found that the majority of POST requests from the Arduinos were getting stopped mid-flight between Nginx and the home monitoring application with a 499 HTTP status code, e.g.

192.168.XXX.XXX - - [27/Aug/2018:07:04:11 +0000] "POST /meter HTTP/1.1" 499 0 "-" "-" "-"
192.168.XXX.XXX - - [27/Aug/2018:07:05:11 +0000] "POST /meter HTTP/1.1" 499 0 "-" "-" "-"
192.168.XXX.XXX - - [27/Aug/2018:07:06:11 +0000] "POST /meter HTTP/1.1" 499 0 "-" "-" "-"
192.168.XXX.XXX - - [27/Aug/2018:07:08:11 +0000] "POST /meter HTTP/1.1" 499 0 "-" "-" "-"
192.168.XXX.XXX - - [27/Aug/2018:07:10:11 +0000] "POST /meter HTTP/1.1" 499 0 "-" "-" "-"

After a small bit of googling and checking the code used on the Arduinos, it was obvious that the previous method of sending data would need to change a little to work with Nginx acting as a proxy. As this stackoverflow answer explains, “HTTP 499 in Nginx means that the client closed the connection before the server answered the request” – the Arduino wasn’t waiting for the response from the server and was closing the connection too early, so the data never reached the home monitoring application. In the previous setup the Arduino could fire the data and forget (close the connection) immediately after sending the request, as it wasn’t interested in the response.

To fix this, all I needed to do was introduce a small wait after sending the data, which reduces the number of 499s. The code now reads: establish a connection, send data, wait 15ms, close connection – or for those that want to see the full code:

  String postData = getPostData();

  if (pompeiiClient.connect(pompeii, pompeiiPort)) {
    Serial.println("connected to pompeii");
    // Make a HTTP request:
    pompeiiClient.println("POST " + String(pompeiiService) + " HTTP/1.1");
    pompeiiClient.println("Host: " + String(pompeii) + ":" + pompeiiPort);
    pompeiiClient.println("Content-Type: application/json");
    pompeiiClient.println("Content-Length: " + String(postData.length()));
    pompeiiClient.println("Pragma: no-cache");
    pompeiiClient.println("Cache-Control: no-cache");
    pompeiiClient.println("Connection: close");
    pompeiiClient.println();          // blank line ends the headers
    pompeiiClient.println(postData);  // send the JSON body

    delay(15);                        // wait briefly so Nginx can answer before we hang up
    pompeiiClient.stop();

    Serial.println("Called pompeii");
  }

Home Monitoring Re-Write Number Four!

Since getting the Tesla Powerwall installed, our trusted Wattson has not been able to display correct figures as it can’t tell if we are importing or exporting until the Powerwall is full.  The Wattson displays a relatively static value of +150W indicating that we’re importing, yet the data from the various other devices in the house contradicts that figure.

So it’s time to say goodbye to Wattson and hand it on to a neighbour and hope they get some use out of it.

Wattson’s demise is a great excuse to upgrade to a tablet and display a lot more information than just whether we’re importing or exporting, so I’ve gone out and bought a Samsung Galaxy Tab A from JL to replace Wattson.

In order to display more information on the tablet, I needed to re-write the home monitoring application and start graphing the data at home rather than relying on PVOutput. PVOutput is a great website, but it’s limited to a 5-minute picture of what’s going on, and I’ve run out of fields to upload data to, even though I donate to get extra fields! Wattson has gotten us used to being able to see what’s going on instantly rather than waiting for a snapshot 5 minutes later.

The second re-write I did of the home monitoring application in 2015 has been running well for a few years, but despite what I wrote back then about it being maintainable, it was a pain to add in a new datasource and it was written in my least favourite framework – Mule.

Since then I’ve tried re-writing it in Node.js, but that code was less than elegant and not tested at all… It also relied on a heavyweight MySQL database which I wanted to avoid if possible. HSQLDB may be a bit basic, but it’s served me well for many years and allows me to make changes to the files in a text editor if required.

I did learn something valuable from the Node.js re-write – consolidate the five tables I had before into one large table. I’ve changed the following five tables

to a single table for ease of storing the data and to save space.

The previous database file size was 640MB (note that’s more than 200MB per year, as I blogged about the database being 400MB only last year) vs. the new single table layout file size of 240MB. Every field in the database except the composite primary key is nullable. This allows the data to be stored into the table in any order – after all, I can’t guarantee which Arduino will send its data first.
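For illustration, the consolidated layout looks something like the sketch below. The table and column names are hypothetical, not my real schema:

```sql
-- One wide row per minute; each Arduino fills in only its own columns,
-- so every non-key column must be nullable.
CREATE TABLE readings (
    reading_date DATE NOT NULL,
    reading_time TIME NOT NULL,
    solar_w      INTEGER,
    grid_w       INTEGER,
    powerwall_w  INTEGER,
    house_w      INTEGER,
    PRIMARY KEY (reading_date, reading_time)
);
```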

The next step was to work out how to convert the database from the original layout to the new layout without my PC running at 100% for over two hours (which is exactly what happened the first time I loaded the data from the old tables into the new table!). The trick was not to insert based on a select union, but to use the HSQLDB merge functionality – the two-hour ETL turned into a three-minute ETL. This much-improved ETL time allows me to take a copy of the old database (the in-use one) at any time, transform it, and check the new app is compatible with the schema and can write data into the new layout correctly.
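The merge approach can be sketched like this – all table and column names here are hypothetical, one such statement per old table:

```sql
-- Upsert one old table into the consolidated one: rows that already exist
-- for that minute get the new column filled in, missing rows are created.
MERGE INTO readings AS r
USING (SELECT reading_date, reading_time, watts FROM old_solar) AS s
   ON r.reading_date = s.reading_date AND r.reading_time = s.reading_time
WHEN MATCHED THEN UPDATE SET r.solar_w = s.watts
WHEN NOT MATCHED THEN INSERT (reading_date, reading_time, solar_w)
    VALUES (s.reading_date, s.reading_time, s.watts);
```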

As I’ve mentioned above, the new application is no longer based on Mule and is instead a Spring Boot app. The home monitoring application receives input using Spring MVC controllers and persists the data to the database against the date and time (rounded to the minute).
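As a small illustration of the “rounded to the minute” step, here’s how it could be done in plain Java (the class and method names are mine, not from the application):

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

public class MinuteRounding {

    // Round an incoming reading's timestamp down to the minute so that all
    // devices reporting within the same minute share one row key.
    static LocalDateTime roundToMinute(LocalDateTime ts) {
        return ts.truncatedTo(ChronoUnit.MINUTES);
    }

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2018, 8, 27, 7, 4, 11);
        System.out.println(roundToMinute(ts)); // 2018-08-27T07:04
    }
}
```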

At the service layer there are also three separate scheduled services: one for uploading PVOutput data once a minute, one for requesting the EE addons status page and scraping the data every hour, and one for calling the Tesla Powerwall API every five seconds.

EE addons status page scraping I hear you say… “what’s that for?”  We no longer have fixed line internet and rely on EE 4G internet, which is great until we run out of data two days before the end of the month!  The EE addons status page displays how much data you have used, how much is remaining and how long until the next period.  Since I’ve now got the option to display a lot of different data on the tablet, it seemed sensible to display the EE data allowance too!

For anyone interested in doing something similar, here’s a class I’ve written to read the HTML and trim it to extract the right bits of information. The fields aren’t accessible as I don’t store the information – I simply pass it straight to Splunk via toString.

package uk.co.vsf.home.monitoring.service.ee;

import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.builder.ReflectionToStringBuilder;
import static org.apache.commons.lang3.StringUtils.*;

public class EeDataStatus {

	private static final String ALLOWANCE_LEFT = "allowance__left";
	private static final String ALLOWANCE_TIMESPAN = "allowance__timespan";
	private static final String BOLD_END = "</b>";
	private static final String BOLD_START = "<b>";
	private static final String SPAN_END = "</span>";
	private static final String SPAN_START = "<span>";
	private static final String DOUBLE_SPACE = "  ";

	private final String allowance;
	private final String remaining;
	private final String timeRemaining;

	public EeDataStatus(final String response) {
		String allowance = response.substring(response.indexOf(ALLOWANCE_LEFT) + ALLOWANCE_LEFT.length());
		allowance = allowance.substring(0, allowance.indexOf(SPAN_END));

		Pattern pattern = Pattern.compile("(\\d+.*\\d*GB)");
		Matcher matcher = pattern.matcher(allowance);

		// find() must be called before group(), otherwise an IllegalStateException is thrown
		this.remaining = matcher.find() ? matcher.group() : EMPTY;
		// re-use the first figure if the snippet only contains one
		this.allowance = matcher.find() ? matcher.group() : this.remaining;

		String timespan = response.substring(response.indexOf(ALLOWANCE_TIMESPAN) + ALLOWANCE_TIMESPAN.length());
		timespan = timespan.substring(0, timespan.indexOf(SPAN_END));
		timespan = timespan.substring(timespan.indexOf(SPAN_START) + SPAN_START.length());
		timespan = timespan.replaceAll(BOLD_END, EMPTY).replaceAll(BOLD_START, EMPTY);
		timespan = timespan.replaceAll(CR, EMPTY);
		timespan = timespan.replaceAll(LF, EMPTY);
		timespan = timespan.replaceAll(DOUBLE_SPACE, SPACE);
		timespan = StringUtils.trim(timespan);
		this.timeRemaining = timespan;
	}

	@Override
	public String toString() {
		return new ReflectionToStringBuilder(this).toString();
	}
}
When I tried writing the home monitoring application in Node.js, I gave Prometheus a go to see whether it would be a good tool for graphing at home. It worked well when graphing small sets of data, but when I tried to graph over a year’s worth of data, it either errored because there was too much data coming back from the query, or took a vast amount of time to refresh the graph. It’s possible I wasn’t using the tool correctly, but I decided it wasn’t for me in this use case, because of the inability to graph large amounts of data and because it’s not as intuitive as the graphing tool I’ve chosen to go with.

So what graphing tool have I chosen?  Splunk 🙂

I chose Splunk for a number of reasons:

  1. I’ll be sending less than 500MB to Splunk a day, so it’s free 😀
  2. It’s incredibly intuitive to search through data in Splunk, so I should be able to give my dad a basic lesson and he can create graphs for himself. I had considered the ELK stack, but the searching language isn’t quite as intuitive…
  3. Splunk doesn’t care about the schema of the data you throw at it.  This makes it easy to work with as I can add/remove fields when required and not have to change a schema.

Writing the data to Splunk uses the ToStringBuilder JSON format and a Log4j socket appender.  The ToStringBuilder format is configured at bootup via the following component.

package uk.co.vsf.home.monitoring;

import org.apache.commons.lang3.builder.ToStringBuilder;
import org.apache.commons.lang3.builder.ToStringStyle;
import org.springframework.stereotype.Component;

@Component
public class ToStringBuilderStyleComponent {

	public ToStringBuilderStyleComponent() {
		// set the global default style so every toString() emits JSON for Splunk
		ToStringBuilder.setDefaultStyle(ToStringStyle.JSON_STYLE);
	}
}
And I chose the Log4j socket appender because it doesn’t require the use of tokens to talk to Splunk.

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
    <Appenders>
        <Socket name="socket" host="SERVER NAME" port="9500">
            <PatternLayout pattern="%m%n"/>
        </Socket>
        <Console name="STDOUT" target="SYSTEM_OUT">
            <PatternLayout pattern="%m%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Logger name="uk.co.vsf.home.monitoring" level="info" additivity="false">
            <AppenderRef ref="socket" />
            <AppenderRef ref="STDOUT" />
        </Logger>
        <Root level="warn">
            <AppenderRef ref="STDOUT" />
        </Root>
    </Loggers>
</Configuration>


Bringing it all together, we’ve gone from Wattson which displayed only one figure – house load – as shown in the (albeit not great) picture below:

To this 😀

And this complicated device/application diagram

Hopefully this incarnation of the home monitoring application will last a few years, but I suspect I’ll be re-writing it all again at some point 🙂

Tesla Powerwall 2 API https://github.com/vloschiavo/powerwall2/
Log4j2 Socket Appender https://logging.apache.org/log4j/2.x/manual/appenders.html#SocketAppender