Secure traffic to ZNC on Synology with Let’s Encrypt

September 10, 2017

I’ve been using IRC since the late 1990s, and I continue to do so to this day due to it (still) being one of the driving development forces in various open source communities, especially in Linux development. And some of my acquaintances I can only get in touch with via IRC :)

My Setup

On my Synology NAS I run ZNC (an IRC bouncer/proxy), to which I connect using various IRC clients (irssi/XChat Azure/AndChat) from various platforms (Linux/Mac/Android). ZNC serves as a gateway: no matter which device/client I connect from, I always pick up on the same IRC servers/chat rooms/settings where I left off.

This is all fine and dandy, but connecting to ZNC from external networks means you will hand over your ZNC credentials in plain text. That is a problem for me, even though we’re “only” talking about an IRC bouncer/proxy.

With that said, how do we encrypt external traffic to our ZNC?

HowTo: Chat securely with ZNC on Synology using Let’s Encrypt SSL certificate

For reference, or for a more thorough explanation of some of the steps/topics, please refer to: Secure (HTTPS) public access to Synology NAS using Let’s Encrypt (free) SSL certificate

Requirements:

  • Synology NAS running DSM >= 6.0
  • Sub/domain name with ability to update DNS records
  • SSH access to your Synology NAS

1: DNS setup

Create an A record for the sub/domain you’d like to use to connect to your ZNC and point it to your Synology NAS external (WAN) IP. For your reference, the subdomain I’ll use is: irc.hodzic.org

2: Create Let’s Encrypt certificate

DSM: Control Panel > Security > Certificates > Add

Followed by:

Add a new certificate > Get a certificate from Let's Encrypt

Followed by adding the domain name the A record was created for in Step 1, i.e:

Get a certificate from Let's Encrypt for irc.hodzic.org

After the certificate is created, don’t forget to configure it to point to the correct domain name, i.e:

Configure Let's Encrypt Certificate

3: Install ZNC

In case you already have ZNC installed, I suggest you remove it and do a clean install, mainly due to some problems with the package in the past, where ZNC wouldn’t start automatically on boot, which led to projects like synology-znc-autostart. In the latest version, all of these problems have been fixed and a couple of new features have been added.

ZNC can be installed using Synology’s Package Center, if community package sources are enabled. This can simply be done by adding a new package source:

Name: SynoCommunity
Location: http://packages.synocommunity.com

Enable Community package sources in Synology Package Center

To successfully authenticate the newly added source, under the “General” tab, “Trust Level” should be set to “Any publisher”.

As part of the installation process, the ZNC config will be generated with the most sane/useful options, and an admin user will be created, giving you access to ZNC webadmin.

4: Secure access to ZNC webadmin

Now we want to bind the sub/domain created in “Step 1” to ZNC webadmin and secure external access to it. This can be done by creating a reverse proxy.

As part of this, you need to know which port has been allocated for SSL in ZNC Webadmin, i.e:

ZNC Webadmin > Global settings - Listen Ports

In this case, we can see it’s 8251.

Reverse Proxy can be created in:

DSM: Control Panel > Application Portal > Reverse Proxy > Create

Where the sub/domain created in “Step 1” needs to point to the ZNC SSL port on localhost, i.e:

Reverse proxy: irc.hodzic.org setup

ZNC webadmin is now available via HTTPS on the external network for the sub/domain you set up as part of Step 1, or in my case:

ZNC webadmin (HTTPS)

As part of this step, I’d advise you to add in ZNC webadmin the IRC servers and chat rooms you’d like to connect to later.

5: Create .pem file from Let’s Encrypt certificate for ZNC to use

On Synology, Let’s Encrypt certificates are stored in:

/usr/syno/etc/certificate/_archive/

In case you have multiple certificates, you can determine which directory holds your newly generated certificate based on the date it was created, i.e:

drwx------ 2 root root 4096 Sep 10 12:57 JeRh3Y

Once it’s determined which certificate is the one we want to use, generate the .pem file by running the following (note: a plain `sudo cat … > file` would fail, since the redirection is performed by your unprivileged shell, so we write through `sudo tee` instead):

sudo cat /usr/syno/etc/certificate/_archive/JeRh3Y/{privkey,cert,chain}.pem | sudo tee /usr/local/znc/var/znc.pem > /dev/null
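
The order of concatenation matters: ZNC expects the private key first, then the certificate, then the chain. A minimal sketch demonstrating the same pattern with throwaway placeholder files (the paths here are made up for illustration):

```shell
# Demonstrate the concatenation order ZNC expects (private key,
# then certificate, then chain) using throwaway placeholder files.
demo=/tmp/znc-pem-demo
mkdir -p "$demo"
printf 'PRIVATE KEY\n' > "$demo/privkey.pem"
printf 'CERTIFICATE\n' > "$demo/cert.pem"
printf 'CHAIN\n'       > "$demo/chain.pem"

# Same pattern as the real command above, minus sudo:
cat "$demo/privkey.pem" "$demo/cert.pem" "$demo/chain.pem" > "$demo/znc.pem"

cat "$demo/znc.pem"
```

If the order is wrong, ZNC may refuse to load the certificate, so it’s worth double-checking the resulting file.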

After this restart ZNC:

sudo /var/packages/znc/scripts/start-stop-status stop && sudo /var/packages/znc/scripts/start-stop-status start

6: Configure IRC client

In this example I’ll use XChat Azure on macOS; the procedure should be identical for HexChat/XChat clients on any other platform.

Although most of the information is picked up from ZNC itself, user details will need to be filled in.

In my setup I automatically connect to the freenode and oftc networks, so I created two servers for local network usage and two for external usage; the latter is the one we’re concentrating on.

On the “General” tab of our newly created server, the hostname should be the sub/domain we set up as part of “Step 1”, the port number should be the one we identified in “Step 4”, and the SSL checkbox must be checked.

Xchat Azure: Network list - General tab

On the “Connecting” tab, the “Server password” field needs to be filled in the following format:

johndoe/freenode:password

Where “johndoe” is the ZNC username, “freenode” is the ZNC network name, and “password” is the ZNC password.
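
The format can be sketched as a simple string composition (the values below are the same placeholders as above, not real credentials):

```shell
# Compose the ZNC "Server password" string: user/network:password
znc_user="johndoe"
znc_network="freenode"
znc_pass="password"

printf '%s/%s:%s\n' "$znc_user" "$znc_network" "$znc_pass"
```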

Xchat Azure: Network list - Connecting tab

“freenode” in this case must first be created as part of the ZNC webadmin configuration mentioned in “Step 4”. The same goes for the oftc network configuration.

As part of establishing the connection, information about our Let’s Encrypt certificate will be displayed, after which the connection will be established and you’ll be automatically logged into all chat rooms.

Happy hacking!

Automagically deploy & run containerized WordPress (PHP7 FPM, Nginx, MariaDB) using Ansible + Docker on AWS

May 21, 2017

In this blog post, I describe how what started as a simple migration of a WordPress blog to AWS ended up as an automation project, consisting of multiple published Ansible roles deploying and running multiple Docker images.

If you’re not interested in reading about my entire journey, the lessons learned and how this process came to be, please skip down to the “Birth of: containerized-wordpress-project (TL;DR)” section.

Migrating WordPress blog to AWS (EC2, Lightsail?)

I’ve been sold on Amazon’s AWS idea of cloud computing “services” for a couple of years now, and I’ve wanted to migrate this (WordPress) blog to AWS for a while, but somehow it never worked out.

Moving it to an EC2 instance, with its own EBS volumes, AMI, EIP, Security Group … it just seemed like overkill.

When AWS Lightsail was first released, it seemed it was the answer to all my problems.

But it wasn’t, even disregarding its somewhat restrictive/dumbed-down versions of the original features. Living in Amsterdam, my main problem with it was that it was only available in a single US region.

Regardless, I thought it had everything I needed for a WordPress site, and as a new service, it had great potential.

Its regional limitations were also good in one sense: they made me realize one important thing. Once I migrated my blog to AWS, I wanted to be able to seamlessly move it across different EC2 instances and different regions as they became available.

If done properly, it meant I could even move it across different clouds (I’m talking to you, Google Cloud).

P.S: AWS Lightsail is now available in a couple of different regions across Europe, a rollout which was almost seamless.

Fundamental problem of every migration … is migration

Phase 1: Don’t reinvent the wheel?

When you have a WordPress site that’s not self-hosted, you want everything to work, yet you really don’t want to spend any time managing the infrastructure it’s on.

And as soon as I started looking at what could fit these criteria, I found that there were pre-configured, out-of-the-box WordPress EC2 images available on AWS Marketplace. Great!

But when I took a look, although everything ran out of the box, I wasn’t happy with the software stack it was all built on, namely Ubuntu 14.04 and Apache, with all of the services started using custom scripts. Yuck.

With this setup, when it was time to upgrade (and it’s already that time), you wouldn’t be thinking about an upgrade. You’d only be thinking about another migration.

Phase 2: What if I built everything myself?

Installing and configuring everything manually, and then writing a huge HowTo to follow whenever I needed to re-create the whole stack, was not an option. The same goes for scripting the whole process, as the overhead of changes that had to be tracked was way too big.

Being a huge Ansible fan, automating this was the natural next step.

I even found an awesome Ansible role which seemed like it was going to do everything I needed. Except I realized I needed to update all the software it deployed, and customize it, since the configuration it was built around wasn’t generic enough.

So I forked it and got to work. But soon enough, I was knee-deep in making and fiddling with various system changes. That was something I was trying to get away from in this case, and most importantly something I was trying to avoid when it was time for the next update.

Phase 3: Marriage made in heaven: Ansible + Docker + AWS

The idea to have everything Dockerized was around from the very start. However, it never made a lot of sense until I put Ansible into the same picture, and it was at this point that my final idea and requirements became crystal clear.

Use Ansible to configure the host and get it ready for the Docker ecosystem: an ecosystem consisting of a separate container for each required service (WordPress + Nginx + MariaDB), linked together as a single service using Docker Compose.

The idea was backed by the goal of spending minimal to no time (and effort) on manual configuration of anything on the server. My level of attachment to this server was so low that I didn’t even want to SSH into it.

If there was something wrong, I could just nuke the whole thing and deploy the code to a new, freshly rolled-out server with everything working out of the box.

After it was clear what needed to be done, I got to work.

Birth of: containerized-wordpress-project (TL;DR)

After a lot of work, the end result is a project which allows you to automagically deploy & run a containerized WordPress instance consisting of 3 separate containers running:

  • WordPress (PHP7 FPM)
  • Nginx
  • MariaDB

Once run, the containerized-wordpress playbook will guide you through an interactive setup of all 3 containers, after which it will run all the Ansible roles created for this project. The end result is that a host you have never even SSH-ed to will be fully configured and running a containerized WordPress instance out of the box.

Most importantly, this whole process completes in <= 5 minutes and doesn’t require any Docker or Ansible knowledge!
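
For illustration, such a 3-container stack can be sketched in a docker-compose file along these lines. This is a minimal sketch only, not the project’s actual compose file; the image tags, passwords and service names are assumptions:

```yaml
# Illustrative sketch of a 3-container WordPress stack.
# Not the project's actual compose file; values are assumptions.
version: '2'
services:
  mariadb:
    image: mariadb:10.1
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: wordpress

  wordpress:
    image: wordpress:fpm        # PHP-FPM variant of the official image
    depends_on:
      - mariadb
    environment:
      WORDPRESS_DB_HOST: mariadb
      WORDPRESS_DB_PASSWORD: changeme

  nginx:
    image: nginx:stable
    depends_on:
      - wordpress
    ports:
      - "80:80"
    # In practice nginx needs a vhost config mounted here that
    # forwards PHP requests to wordpress:9000 via fastcgi_pass.
```

The appeal of this layout is that each service can be upgraded or replaced independently by swapping its image tag.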

containerized-wordpress demo

Console output of running “containerized-wordpress” Ansible Playbook:

Console output of running "containerized-wordpress" Ansible Playbook

Accessing WordPress instance created from “containerized-wordpress” Ansible Playbook:

Accessing WordPress instance created from "containerized-wordpress" Ansible Playbook

Did I end up migrating to AWS in the end?

You bet. Thanks to the efforts made in containerized-wordpress-project, I’m happy to report that my whole WordPress migration to AWS was completed in a matter of minutes, and that this blog is now running on Docker and on AWS!

I hope this same project will help you take a leap in your own migration.

Happy hacking!

Secure (HTTPS) public access to Synology NAS using Let’s Encrypt (free) SSL certificate

February 17, 2017

Secure public access to your Synology?

Every time I’m outside of my home network and need to get something from my Synology NAS, I’m facing the same dilemma: who’s sniffing the network I’m on, and to whom will I be handing over my credentials in plain text over HTTP?

Of course, you can add extra security to your Synology account by using 2-step authentication, or by first establishing a (preferably private) VPN connection. But even then, the footprint of sensitive data you’re leaving behind is just not worth it.

To resolve this problem, you could buy a CA-signed SSL certificate, but that will cost you time and money. Thanks to the good people at Let’s Encrypt, this whole process now takes 15 minutes and is free!

Secure (HTTPS) access to Synology NAS using Let’s Encrypt (free) SSL certificate

There are a couple of tutorials covering this same topic; the reason I wrote my own is that none of them worked for me.

Requirements:

  • Synology NAS running DSM >= 6.0
  • Sub/domain name with ability to update DNS records
  • Ports 80 and 443 forwarded on your router to your Synology NAS

1. DNS

The first thing you need to do is add an A record for the sub/domain you want to point to your Synology’s external (WAN) IP. Example:

Name:   synology.hodzic.org
Class:  IN
Type:   A
Record: WAN IP

You can find your external (WAN) IP if you have external access enabled via DDNS.

Go to: Control Panel > External Access > DDNS

Control Panel > External Access > DDNS

DDNS dialog

Enabling Synology external access using DDNS

2. Certificate

Now it’s time to obtain Let’s Encrypt SSL certificate, which you can do by going to:

Control Panel > Security > Certificate > Add

Control Panel > Security > Certificate > Add

On next dialog click on “Add a new certificate”

Create certificate > Add a new certificate

Then select “Get a certificate from Let’s Encrypt”

Create certificate > Get a certificate from Let's Encrypt

Finally get a certificate from Let’s Encrypt with data as follows:

  • Domain name – sub/domain you setup as part of 1. DNS
  • Email – email you want to use to renew this certificate
  • Subject alternative name – alternative DNS record.
    You can use URL you use for external access (DDNS)

Create certificate - Get a certificate from Let's Encrypt

and click on Apply

If obtaining certificate fails, you need to setup port forwarding on your router, as mentioned under Requirements section.

3. Reverse Proxy

While others suggested setting up a web server, the idea which makes the most sense to me in this case is a reverse proxy.

Go to Control Panel > Application Portal > Reverse Proxy > Create

Control Panel > Application Portal > Reverse Proxy > Create

By default, DSM runs behind ports 5000 (HTTP) and 5001 (HTTPS), so we want to point external ports 80 and 443 to them with two rules.
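
Under the hood, DSM’s reverse proxy feature generates nginx configuration. Conceptually, the 443 > 5001 rule is equivalent to something like the following (a sketch only; DSM writes and manages its own config, so you don’t create this file yourself):

```nginx
# Rough nginx equivalent of the DSM reverse proxy rule 443 -> 5001
server {
    listen 443 ssl;
    server_name synology.hodzic.org;

    location / {
        proxy_pass https://localhost:5001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```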

ProxyPass/redirect traffic from example domain port 80 to 5000

ProxyPass/redirect traffic from example domain port 80 > 5000

ProxyPass/redirect traffic from example domain port 443 > 5001

ProxyPass/redirect traffic from example domain port 443 > 5001

4. Select certificate for your domain

Almost done! Now you have to select which certificate you want to use for your sub/domain.

Go to Control Panel > Security > Certificate > Configure

 Control Panel > Security > Certificate > Configure

Finally, select the newly generated Let’s Encrypt SSL certificate for your sub/domain and click OK.

Select newly generated Let's Encrypt SSL certificate for your sub/domain

After this, if you go to your sub/domain with the https prefix (i.e: https://synology.hodzic.org), you’ll be greeted with a secure page!

Synology external HTTPS request

However, the only problem is that if you go to http://synology.hodzic.org, you won’t be automatically redirected to https and you’ll be left on an insecure page.

Synology external HTTP request

5. Redirect all traffic from HTTP to HTTPS

To resolve this problem, go to:

Control Panel > Network > DSM Settings and select Automatically redirect HTTP connections to HTTPS

Control Panel > Network > DSM Settings > Automatically redirect HTTP connections to HTTPS

After this, every HTTP request will automatically redirect to HTTPS.
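
Expressed as nginx configuration, the setting effectively amounts to a permanent redirect like the one below (illustrative only; DSM manages this for you):

```nginx
# What "Automatically redirect HTTP connections to HTTPS" boils down to
server {
    listen 80;
    server_name synology.hodzic.org;
    return 301 https://$host$request_uri;
}
```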

Recommendations:

  • Change DSM ports from default 5000/5001 to something else
  • Optionally you can also consider enabling following settings:
    • Enable HTTP/2
    • Enable HSTS

If you set a “Subject Alternative Name” in your certificate and want to secure that URL with this same certificate, don’t forget to set reverse proxy rules for that URL as well, and select the correct certificate for it. Result:

External URL (DDNS) behind same certificate

Happy hacking!

anon-hotspot: On demand Debian Linux (Tor) Hotspot setup tool

September 18, 2016

Today it’s not easy to anonymize internet traffic and protect our online privacy. From advertisers to various other parties, everyone seems to be interested in what we’re doing online, and it’s our traffic that allows them to track our behaviour and interests.

To make our internet traffic anonymous we could turn to various VPN/Proxy solutions, but in the end you still need to ultimately trust that your traffic on the other side of the tunnel won’t end up in the wrong hands.

That’s why if I want anonymity I’ll always turn to Tor (anonymity network).

Turn a Raspberry Pi 3 (or any other Debian Linux based device) into a (Tor) WiFi hotspot

You need two things:

  1. A clone of the anon-hotspot git repo
  2. A Raspberry Pi 3 or any other Debian Linux based device with an ethernet port and WiFi card

The RPI3 (or any other device you want to run this on) needs to be connected to the internet via the ethernet port, while the WiFi interface will be turned into an AP/hotspot.

While this tool was made and tested with Raspbian (Jessie) on an RPI3, it’ll work on any other Debian Linux based device. So if you don’t have an RPI3 laying around, but have an old computer you’d like to turn into a Tor hotspot, be my guest. If you run into problems, please create an issue.

Since this tool is still under development, it’s recommended you run it on a freshly installed Raspbian (>= Jessie) and not in your prod environments.
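
For the curious: hotspot tools like this one typically drive hostapd for the access point part. As an illustration only (this is not anon-hotspot’s actual config, and the ssid/passphrase are placeholders), a minimal hostapd.conf looks roughly like:

```
# Minimal WPA2 access point definition for hostapd (illustrative)
interface=wlan0
driver=nl80211
ssid=anon-hotspot
hw_mode=g
channel=6
wpa=2
wpa_passphrase=ChangeMe123
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```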

anon-hotspot (GitHub)

Simply get the latest source from GitHub, and run anon-hotspot:

anon-hotspot welcome screen

Running “sudo ./anon-hotspot tor-hotspot” will automatically turn your RPI3 into a Tor WiFi hotspot within 2 minutes.

Tor enabled on Tor WiFi hotspot created by anon-hotspot

Features:

configuration

  • tor-hotspot (configure Tor WiFi hotspot)
  • hotspot (configure WiFi hotspot)
  • tor (configure Tor for existing Wifi hotspot)
  • cred (change Tor/WiFi Hotspot ssid/passphrase)
  • remove (remove Tor/Wifi Hotspot & revert to original settings)

operations

  • start (start Tor/WiFi hotspot)
  • stop (stop Tor/WiFi hotspot)

Supported platforms:

  • Raspbian: >= Jessie 8.0
  • Debian: >= Jessie 8.0
  • Ubuntu: >= 15.04
  • Elementary OS: >= Loki
  • Kali Linux: >= 2.0

Happy hacking!

Kernel agnostic, DisplayLink Debian GNU/Linux driver installer (Debian/Ubuntu/Elementary)

November 29, 2015

I use DisplayLink at work for a multi-display setup/Ethernet/etc., all through a single USB port. Although it’s a nifty little device, its software support isn’t that great.

The only Linux driver they have is for Ubuntu, which is optimized to work only with 14.04, and the latest kernel they support is 3.19!

Their installer script can be modified to work with Debian and systemd, but even so, if you’re using any Linux kernel version outside the >=3.14 && <=3.19 range, you’re not going to have a good time.

displaylink-debian (github)

That’s why I decided to take things into my own hands, and created displaylink-debian.

A tool which allows you to seamlessly install and uninstall DisplayLink drivers on Debian/Ubuntu based Linux distributions.

Supported platforms are:

  • Debian: Jessie 8.0/Stretch 9.0/Sid (unstable)
  • Ubuntu: 14.04 Trusty/15.04 Vivid/15.10 Wily/16.04 Xenial/16.10 Yakkety
  • elementary OS: 0.3 Freya/0.4 Loki
  • Mint: 15 Olivia/16 Petra/17.3 Rosa/18 Sarah
  • Kali: 2016.2/kali-rolling

Regardless of which kernel version you’re using.

The displaylink-debian license is GPLv3, and if you’d like to extend it to any other distribution than Debian, be my guest!