CompTIA Server+ and Low Poly People…

So, I missed a few posts over the past week or so. Actually, I’ve been really busy, but for a good cause. I’ve been fortunate enough to be selected by CompTIA to beta test the new Server+ exam. The busy part is all of the studying I am doing to try to pass the exam. I have a very short window of time in which to take the exam as well. Servers are not a new thing for me, but I have my doubts about passing the test, so I’m trying really, really hard to study up for it.

At work we use a few servers, from file hosts to PXE boot servers, but we don't use them the way most of the IT world does. Here at work, our servers are offline and not connected to the internet at all. We manage and maintain nodes, as well as re-image them "remotely", but by remotely, I mean that we do it from the servers to the machines in the same room. We don't do patch management or updates, either. Some of our servers are running the same way they started out five years ago, because they are closed-loop systems. Often components fail, but we don't upgrade them; we repair them with pre-stocked and approved replacements of the exact same parts, even if those parts are no longer manufactured. Some of the motherboards and graphics cards we use had been out of production before we even installed the servers. It's just not exactly the way servers are typically used, especially in the scope of this exam.

Of course, my server at home is a web, FTP, photo, and Jabber server, but I'm not sure that it's being implemented the way a typical IT setup would be. It is only accessed by my wife and myself, and it is tied specifically to our cell phones with dynamic IP addresses. I manage users on a one-on-one basis, because there are only two users. Not your typical use case, for sure.

So, I'm doing a lot of studying. Fortunately for me, my company provides a video series specific to the Server+ exam, albeit for the previous version of the test (of course, the one I'm taking is a beta, after all), which is free for me to peruse. I also picked up a Sybex Server+ book, which I'm reading through as well. Kindle readers can be really handy: if I had to order a print copy, I would lose a week of study time waiting for it to arrive in Alaska!

All of that to say I’ve been busy.

In the interim, however, I've been playing around with the Low Poly course lessons some more, trying to make some low poly people. There are several styles of low poly people, so I'm trying out a few of them. If you can see this post's pictures in your viewer, then hopefully you can see the little policeman that I made. Well, I guess at the moment he's just a man in a blue shirt, but I'm going to put a badge and tie on him later.

Linux – keep it simple.

Home Photo Server, Part 4: Battery Backup


One of the problems that I have with a local server is that we frequently have power "blips". I don't want to say outages, because the power is on about 99% of the time. The issue is the occasional "brown out", about once a month, where the power dips but doesn't drop out entirely. We've also noticed that about every other week the power will blink for just a moment.

Usually three or four times a year the power will actually go out for more than an hour, and minute-long outages happen on a regular basis. The worst outage we had was about three years ago, when the power was out at our house for four days! It was wintertime, and I had to run the generator to keep the boiler going. Needless to say, we are prepared around here for the power to go out.

And that's why I wanted my computer to be prepared as well. Anytime you are using something as a server that you need to keep running, you really should have it on a battery backup. So, I purchased an APC Back-UPS 1350. By the specs, and its own digital display, it should keep my server and monitor running for 15 minutes during an outage.

That's handy, but I wanted to take it one step further. I'm at work during the weekday, and at church on Sunday, so I may not be around when the power goes out. If that happens, having the server run for 15 minutes and then die wouldn't help me much. Fortunately, I found some great articles and installed apcupsd (which includes the apcaccess status tool) to control everything automatically. First I added the EPEL repository, and then installed apcupsd:

# yum install epel-release

# yum install apcupsd

Then I navigated to the apc directory to change my settings…

# cd /etc/apcupsd/

# nano apcupsd.conf

This is a bit lengthy, but here is my apcupsd.conf file, with most of the comment lines stripped:

## apcupsd.conf v1.1 ##
#
# for apcupsd release 3.14.14 (31 May 2016) - redhat
#
# "apcupsd" POSIX config file

#
# Note that the apcupsd daemon must be restarted in order for changes to
# this configuration file to become active.
#

#
# ========= General configuration parameters ============
#

UPSNAME SRVRAPC
UPSCABLE usb
UPSTYPE usb
DEVICE
POLLTIME 60
LOCKFILE /var/lock
SCRIPTDIR /etc/apcupsd
PWRFAILDIR /etc/apcupsd
NOLOGINDIR /etc
#
# ======== Configuration parameters used during power failures ==========
#
ONBATTERYDELAY 6
BATTERYLEVEL 20
MINUTES 5
TIMEOUT 0
ANNOY 60
ANNOYDELAY 60
NOLOGON disable
KILLDELAY 0
#
# ==== Configuration statements for Network Information Server ====
#
NETSERVER on
NISIP 127.0.0.1
NISPORT 3551
EVENTSFILE /var/log/apcupsd.events
EVENTSFILEMAX 10
#
# ========== Configuration statements used if sharing =============
# a UPS with more than one machine
UPSCLASS standalone
UPSMODE disable
#
# ===== Configuration statements to control apcupsd system logging ========
#
STATTIME 0
STATFILE /var/log/apcupsd.status
LOGSTATS off
DATATIME 0
#FACILITY DAEMON
#
# ========== Configuration statements used in updating the UPS EPROM =========
#
SELFTEST 336

The important parts are that I set MINUTES to 5, meaning that if the UPS believes it only has 5 minutes of power left at the current usage rate, it will initiate a shutdown command to the server computer. Likewise, if BATTERYLEVEL drops below 20%, it will also initiate a shutdown command (which, theoretically, would be around the 3-minute mark at my load). I suppose you don't have to set both, but I wanted to make sure that under either condition the server shuts down gracefully.
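As a side note, apcupsd can also run your own scripts when these events fire: its apccontrol helper looks in SCRIPTDIR (here /etc/apcupsd) for an executable named after the event, such as onbattery or offbattery. Here is a minimal sketch that just logs the switch to battery; the log path is my own choice:

# nano /etc/apcupsd/onbattery

#!/bin/sh
# Hypothetical example: append a timestamped entry when the UPS
# switches to battery. apccontrol runs this before its default
# action; exiting 0 lets that default action proceed.
echo "$(date): power failure, UPS on battery" >> /var/log/ups-events.log
exit 0

Just remember to make it executable with chmod +x /etc/apcupsd/onbattery.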

A really funny pair of parameters are NETSERVER on and NISIP 127.0.0.1. At first I turned these off, but then found that I couldn't get any status from the UPS unless they were on and set to the local interface; apcaccess actually talks to the daemon through this local Network Information Server socket. So be sure you turn those on if you are using one of these as well.
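You can verify the daemon is listening on that socket by pointing apcaccess at it explicitly:

# apcaccess status 127.0.0.1:3551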

Be sure to restart the service so the new configuration takes effect, and make sure it's enabled at boot:

# systemctl stop apcupsd
# systemctl start apcupsd
# systemctl enable apcupsd

After all of that, now I can just type apcaccess to see the UPS status:

APC : 001,036,0880
DATE : 2019-07-01 09:13:37 -0800
HOSTNAME : localhost.localdomain
VERSION : 3.14.14 (31 May 2016) redhat
UPSNAME : SRVRAPC
CABLE : USB Cable
DRIVER : USB UPS Driver
UPSMODE : Stand Alone
STARTTIME: 2019-06-27 14:34:23 -0800
MODEL : Back-UPS NS 1350M2
STATUS : ONLINE
LINEV : 121.0 Volts
LOADPCT : 34.0 Percent
BCHARGE : 100.0 Percent
TIMELEFT : 16.3 Minutes
MBATTCHG : 20 Percent
MINTIMEL : 5 Minutes
MAXTIME : 0 Seconds
SENSE : Medium
LOTRANS : 88.0 Volts
HITRANS : 139.0 Volts
ALARMDEL : No alarm
BATTV : 27.3 Volts
LASTXFER : No transfers since turnon
NUMXFERS : 0
TONBATT : 0 Seconds
CUMONBATT: 0 Seconds
XOFFBATT : N/A
SELFTEST : NO
STATFLAG : 0x05000008
SERIALNO : 3B1907X23168
BATTDATE : 2019-02-13
NOMINV : 120 Volts
NOMBATTV : 24.0 Volts
NOMPOWER : 810 Watts
FIRMWARE : 954.e3 .D USB FW:e3
END APC : 2019-07-01 09:14:06 -0800
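That STATUS line is handy for scripting, too. Here's a quick sketch of a watchdog that could run from cron; the log path is again my own invention:

#!/bin/sh
# Hypothetical cron job: note in a log whenever the UPS reports it
# is running on battery (STATUS shows ONBATT instead of ONLINE).
if apcaccess status | grep -q 'STATUS.*ONBATT'; then
    echo "$(date): UPS is on battery power" >> /var/log/ups-watch.log
fi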

I might play with the high/low transfer values; 88 to 139 volts seems a bit too wide a spectrum to me, but we'll see how it does. Either way, I feel a lot safer now in terms of power problems for my server.

Linux – keep it simple.

Home Photo Server, Part 3: https, keys, and certificates

In the continuing saga of the Google Photos alternative, running a home server with Dynamic DNS (DDNS), very secure FTP (vsftpd), and an Apache web server coupled with Piwigo, we've automated the entire process of getting the photographs I take with my phone onto the server and viewable by those we share it with. And, to God be the glory, it even works! But there are still a few more steps to consider, and today I'd like to look at a more secure interface with https.

So, for https, one of the big things you need in order to have secure http (the language of the web) is a public/private key pair for asymmetric encryption, plus a digital certificate certifying that those keys are in fact yours. There are a couple of ways to do this on the cheap. Of course, you could pay for a domain name and certificate from a company (like VeriSign and others). Or, you can go with a free certificate from Let's Encrypt. The only problem I had with them is that they didn't seem to support my DDNS setup; perhaps further research is required. But if you are using a static IP address with a domain name, then those options are certainly open to you.

Another option, no matter what the use, is generating your own self-signed certificate. Now, we do have to clarify: most browsers will warn the user that something is wrong when they go to your web page, because the certificate is not signed by a "trusted" source. However, since in my case my wife and I are the only ones who log in, I think we can trust our own certificate. The benefit of having your own certificate versus not having one is that now we can use https. If we add the certificate as trusted in the browser, then we will know that we are still talking to our own server, as long as the certificate doesn't change.

First thing we need is mod_ssl:

# yum install mod_ssl

Once that installs, we need to make a directory for our keys and set the permissions:

# mkdir /etc/ssl/private
# chmod 700 /etc/ssl/private

And of course, generate the key and certificate themselves, as well as a set of Diffie-Hellman parameters for more security:

# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt
# openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
# cat /etc/ssl/certs/dhparam.pem | tee -a /etc/ssl/certs/apache-selfsigned.crt
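If you want to double-check what you just made, openssl can print the certificate's details (it reads the first PEM block in the file, so the appended Diffie-Hellman parameters don't get in the way):

# openssl x509 -in /etc/ssl/certs/apache-selfsigned.crt -noout -text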

Then, we need to tell Apache, our web server, to use our certificates:

# nano /etc/httpd/conf.d/ssl.conf

Edit as appropriate to point to your new key and certificate; the stock ssl.conf shipped by mod_ssl already contains the relevant directives, so it is mostly a matter of updating the paths.
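A minimal sketch of the lines to change, using the file locations from the openssl commands above:

SSLCertificateFile /etc/ssl/certs/apache-selfsigned.crt
SSLCertificateKeyFile /etc/ssl/private/apache-selfsigned.key

You can also make a forced redirect to have all http traffic be redirected to https: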

# nano /etc/httpd/conf.d/non-ssl.conf

In the file, write this, but replace with your server name:

<VirtualHost *:80>
    ServerName yourServerName
    Redirect "/" "https://yourServerName"
</VirtualHost>

Now you need to punch a hole in the firewall for https traffic:

# firewall-cmd --add-service=https
# firewall-cmd --runtime-to-permanent
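You can confirm the rule took effect with:

# firewall-cmd --list-services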

And restart everything:

# systemctl restart httpd.service

# firewall-cmd --reload

Now you should be able to check out your website via https! Here you can see the warning message I get for my self-signed certificate.
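You can also test it from the command line; curl's -k flag skips certificate verification, which is necessary for a self-signed certificate (substitute your own server name):

$ curl -k https://yourServerName/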

[Screenshot: Piwigo loaded over https, with the browser's certificate warning]

Linux – keep it simple.

Real World Example: Setting up a dynamic IP for remote ssh sessions – for free!

Yet another Real World Example (RWE)! I know it has been a while since I have posted one of these, but it is better late than never, right?

The situation: An undisclosed party needed a way to log into their Linux computer from anywhere, particularly from their home computer or their cell phone while on the go. This is pretty handy, as an admin can log in, start/stop/restart services, and check on status without driving back to the office. They also didn't want to spend any money on services or purchase any new equipment; it just wasn't in their budget.

The Solution:

The first thing that we had to discuss was what kind of connection they needed. In this case it was an SSH session, which is great as far as bandwidth and setup go. However, this scenario would work just as well for xrdp, VNC, FTP, etc.; it just helps that in this case it was a "simple" fix that would be quick to set up.

The second thing that we needed was a way to establish the address of the computer in question. We will refer to the computer they wish to connect to as the server, and to the gadget they are connecting from as their cell phone or home computer. We needed a way to tell the cell phone the IP address of the remote server. Since they do not have a static WAN IP address assigned to their router, they do not have a hard and fast "set" address that never changes; instead they have a dynamically assigned IP address that can change potentially every time they connect to the internet, at every DHCP renewal on their router, and so on.

The best answer to this problem would be to pay for a static IP address and an associated Domain Name System (DNS) entry, so that you have a hostname to type into your home computer or cell phone which always leads to your server. This service actually costs money, however, and hence was outside of their budget.

So how did we solve this issue? Well, enter No-IP for free ( http://www.noip.com/free ), an internet-based company that has a few very handy tools to do just that. As we will see from the following, setup was really easy, and FREE. By free, they mean FREE, which is pretty nice these days. Now, they do have upgrade options which are not free, but the basic package is completely free.

The first thing that we had to do was register an account for them. Once registered, we added a "hostname" for them. They have many domains to choose from, such as "myhostname.ddns.org" and "myhostname.zapto.net". We will not discuss their actual choice, to protect the innocent, but for this example I will continue with myhostname.zapto.net.

Second, we set up the computer with their DUC (Dynamic Update Client), which we downloaded from their website. It comes as a tar file, which we unpacked, and then we ran make install to put the files where they needed to be. It was really pretty simple, with a few straightforward questions in the config step, as well as instructions on how to start and stop the service and how often it should update.
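The exact file and version names change over time, but the process looked something like this (the version number here is just illustrative):

$ tar xzf noip-duc-linux.tar.gz
$ cd noip-2.1.9-1/
$ sudo make install
$ sudo noip2

The make install step asks for your No-IP login and which hostname to keep updated, and noip2 then starts the client in the background.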

This DUC tool is what makes it all work. It finds out what your router's dynamic (changing) WAN IP address is at any given moment and sends a packet to the No-IP server to tell it your present address. So when you connect to myhostname.zapto.net, you are actually asking No-IP's server what the current WAN IP address is for your server. Pretty slick trick, actually.
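You can watch it work, too. Once the DUC has checked in, the hostname should resolve to your router's current WAN address (example hostname, of course):

$ dig +short myhostname.zapto.net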

Once that was done, we of course needed to do the basics, such as installing the SSH server. In this case we chose to:

$ sudo apt-get install openssh-server

Which installed OpenSSH server. Now, I don't want to get too deep into the details of their setup, for security purposes, but I will cover the basics here. I highly recommend that you choose an arbitrary port other than 22 for your SSH connections, to make it less obvious to those looking to attempt a connection they shouldn't. It may even be wise to pick a port that is normally used for something else. For the remainder of this explanation, though, I will explain what we did using the standard SSH port 22.

For security reasons, you should also not allow root logins, but rather use a regular user who can then sudo or switch user to root with a password. This double-door password feature will likely save your bacon if someone gets in. You should also use an obscure username, not something like admin, ssh, maintenance, or server. Finally, you should have a really complicated password, one that is as complicated as you can remember. In this case, we actually had to change their password to something a little tougher, because theirs was too short and a dictionary word, which is bad. Also be sure to properly set up your firewall, which is something we will not talk about here.
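Both the non-standard port and the no-root-login rule live in sshd_config; here is a minimal sketch (the port number is just an example, and on some distributions the service is named sshd rather than ssh):

$ sudo nano /etc/ssh/sshd_config

Port 2222 # example non-standard port
PermitRootLogin no # log in as a regular user, then sudo

$ sudo systemctl restart ssh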

The next step on the checklist was to configure the router, which involves several smaller steps, all easy to manage locally. Their server already had a static IP on the Local Area Network, but if you are setting this up for yourself, say at home, be sure to give your server a local static IP address from your home router. This is usually done based on the server's MAC address. In this case, consider it 192.168.2.41.

After all that, we were down to the final step: configuring the router for port forwarding. Most routers have a web GUI that is fairly intuitive, but you may need a guide for your particular router to complete this step. In this case, we assigned anything coming in on port 22 to be forwarded to the local IP address of the server computer, or 192.168.2.41:22. Note that the :22 means to send it to port 22 of the local computer.
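At this point, a quick sanity check from any machine outside the network (using the example hostname and a made-up username) looks like:

$ ssh myusername@myhostname.zapto.net

If you picked a non-standard port, add -p with the port number you chose.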

Now the setup was done and it was time to test it, which I did from my cell phone. Since I have an Android phone running SlimLP (5.1.1), I used an app called JuiceSSH. It is a really great app that is well constructed, looks snazzy, and works flawlessly. It also allows screen rotation, and even has special keys such as tab and ctrl. So in the app, I set up a new connection to myhostname.zapto.net, port 22, with the appropriate username and credentials. Once I clicked connect, I was greeted by some scrolling information about exchanging keys, passwords, etc., and eventually the remote server's command prompt. Success!

All told, that was a pretty simple fix for a complex problem! Not to mention it was completely free. One downside to the free option, however, is the need to renew your free hostname every 30 days. That's a pretty easy trade-off for the free service, though. Also, the free package only includes three hostnames, so if you want to do this for a large business, you are better off with a paid package.

Linux – keep it simple.