With a household of several children, it can at times be hard to find some space for my projects and equipment. As it was, I had a perfect spot in the laundry room with a whole desk to myself. However, the desk was a bit crowded with my TS-820S radio, my 3D printer, and my home server computer. With all of that equipment on the bench, I didn’t really have a space to work on anything new.
So, I decided to try to rearrange the laundry room a bit and build a shelf to help organize my stuff.
As you can see in the pictures, the before shot shows the laundry room as it was, with my desk. The after picture shows my new shelf and the shortened desk.
This has worked out incredibly well in the short time I’ve been using it since the switch. The entire contents of the desk are now on four shelves, and the shortened desk is completely empty and ready for me to work on equipment and projects! It’s not the most aesthetically pleasing shelf, but it is very sturdy and quite functional, and since it is hidden away in the laundry room, it is not in view of anyone who comes over. It also makes great use of what was once wasted and empty space.
A while back, I started using CentOS, with Apache, to host my own website. As I talked about here on this blog, the website is for my Piwigo server, which is a Google Photos alternative. The pictures from my phone are backed up to my home server automatically, and the Piwigo server acts as an interface where people with the appropriate passwords can log in and see the photos. Typically, that’s just me and my wife.
One problem that I had, however, was difficulty getting a certificate from a CA (Certificate Authority), so I had to use a self-signed certificate. This worked great, to be honest, except that some browsers show a pesky “this is not secure” message that you have to accept a lot. It got old when I was showing someone, whether client or friend, the setup and had to acknowledge a big security warning first.
So, I set out once again to try to get that fixed. I had heard a lot of good things about Let’s Encrypt, the free, open certificate authority, and that it now supports DDNS domains, so I thought I’d give it a try. Logging into the terminal, I followed the instructions and got this:
[root@localhost alaskalinuxuser]# certbot --apache
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Starting new HTTPS connection (1): acme-v02.api.letsencrypt.org
Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: alaskalinuxuser.ddns.net
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for alaskalinuxuser.ddns.net
Cleaning up challenges
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.
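Certbot is complaining that there is no plain HTTP virtual host for it to use for the http-01 challenge. A virtual host on port 80 would address this error; here is a minimal sketch for CentOS 7’s Apache (the file path, ServerName, and DocumentRoot are assumptions based on my setup, so adjust them for yours):

```apache
# /etc/httpd/conf.d/alaskalinuxuser.conf (path assumed for CentOS 7)
<VirtualHost *:80>
    # ServerName must match the domain Certbot is validating
    ServerName alaskalinuxuser.ddns.net
    DocumentRoot /var/www/html
</VirtualHost>
```

After adding a file like this and reloading Apache, certbot --apache can find a port 80 host to complete the challenge against.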
If you are unfamiliar with μlogger, it is a handy Android application, paired with a self-hosted server application, for keeping track of your location. Or, in the author’s own words:
μlogger (‘micro-logger’) is an application for continuous logging of location coordinates, designed to record hiking, biking tracks and other outdoor activities. Application works in background. Track points are saved at chosen intervals and may be uploaded to dedicated server in real time. This client works with μlogger web server. Together they make a complete self owned and controlled client–server solution.
You can actually just use the app on your phone to record yourself walking, biking, flying, etc., and then save your track as a GPX file for editing or viewing in any supporting app on your phone. But if you want to save your adventures easily, or keep them updating live for someone you know to view, then you can use the web server application as well. It works incredibly well, and if you are out of cell/wifi range, it will sync with your personal server once you return, which is very useful.
The only problem was, while it had clear instructions for installation, they were written for someone smarter than me, so it took a bit of work to get it set up on my home CentOS 7 server. Hopefully, by writing this down, others can save themselves a little bit of a headache.
First, I would like to mention that this assumes you are already running a web server on your machine. In my case, that is the Apache web server. So, I won’t cover web server installation and setup here, but there are some great tutorials for that out there, like this one.
The instructions look like this, per the README.md:
1. Download zipped archive or clone the repository on your computer
2. Move it to your web server directory (unzip if needed)
3. Fix folder permissions: uploads folder (for uploaded images) should be writeable by PHP scripts
4. Create database and database user (at least SELECT, INSERT, UPDATE, DELETE privileges, CREATE, DROP for setup script, SEQUENCES for postgreSQL)
5. Create a copy of config.default.php and rename it to config.php. Customize it and add database credentials
6. Edit scripts/setup.php script, enable it by setting $enabled value to true
7. Make sure you have a web server running with PHP and chosen database
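Before working through those steps, I updated PHP. On CentOS 7 this is typically done through a third-party repository such as Remi; the commands below are a sketch of that approach (the repository URL and package names are assumptions, not copied from my terminal history, so verify them for your release):

```shell
# Add the Remi repository for CentOS 7 (URL assumed; verify before use)
yum install -y http://rpms.remirepo.net/enterprise/remi-release-7.rpm
yum install -y yum-utils
# Enable the PHP 7.2 repo and update the installed PHP packages
yum-config-manager --enable remi-php72
yum update -y php
# Confirm the new version
php -v
```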
The PHP update brought it to version 7.2. You could skip updating PHP if you didn’t want to, but I figured it might save me from having to update it in the future, so I went with it. Note that I did have to test my other server functions that use PHP to make sure they were still compatible.
After getting PHP up to date, I downloaded the μlogger server repository from GitHub. It is pretty small and only took a few seconds to download, even on my slow 10 Mb internet connection. I extracted it in place and went to work in the terminal, moving it to my web server location and giving it the proper ownership:
cp -Rav ./* /var/www/html/ulogger/
chown -R apache:apache /var/www/html/ulogger
Now that it was in the right place, I needed a database for it to work with….
mysql -u root -p
MariaDB> create database uloggerdb;
MariaDB> grant all privileges on uloggerdb.* TO 'EDITED_USERNAME'@'localhost' identified by 'EDITED_PASSWORD';
MariaDB> flush privileges;
Keep in mind, the author of μlogger only suggests a few privileges, which he states in his README as “(at least SELECT, INSERT, UPDATE, DELETE privileges, CREATE, DROP for setup script, SEQUENCES for postgreSQL)”. However, as I monkeyed around with this, the setup script only seemed to run when I gave my user all privileges. Not sure if that’s just me, but here’s what I did, and praise God, it worked, because I was getting a little frustrated at this point. This write-up is the end result, not the “how many times I failed setting this up” story….
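For comparison, the minimal grants the README asks for would look something like this on MariaDB (a sketch with placeholder credentials; this is the variant that did not work for me, but it may work for you):

```sql
-- Minimal privileges per the ulogger README (user/password are placeholders)
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP
    ON uloggerdb.* TO 'EDITED_USERNAME'@'localhost' IDENTIFIED BY 'EDITED_PASSWORD';
FLUSH PRIVILEGES;
```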
Now I needed to proceed and make a copy of the config file and edit it per the instructions:
cp config.default.php config.php
.... edited for space ....
// This is default configuration file.
// Copy it to config.php and customize
// Database config
// PDO data source name, eg.:
//$dbdsn = "mysql:host=localhost;port=3306;dbname=uloggerdb;charset=utf8"
//$dbdsn = "mysql:unix_socket=/tmp/mysql.sock;dbname=uloggerdb;charset=utf8"
//$dbdsn = "uloggerdb";
$dbdsn = 'mysql:dbname=uloggerdb;host=127.0.0.1';
// Database user name
$dbuser = "EDITED_USERNAME";
// Database user password
$dbpass = "EDITED_PASSWORD";
// Optional table names prefix, eg. "ulogger_"
$dbprefix = "";
The key part that went wrong here was that there were several examples of how to set “$dbdsn”, the database name and location. I tried the ones included in the file, but none of them worked, so I searched the web and found that this line worked: $dbdsn = 'mysql:dbname=uloggerdb;host=127.0.0.1'; so that is what I used in the end, instead of any of the included examples.
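If you want to sanity-check a DSN before running the setup script, you can try a PDO connection from the command line. This one-liner is just a sketch using the values from my config.php (substitute your own credentials):

```shell
# Attempt a PDO connection with the DSN and credentials from config.php
php -r 'try { new PDO("mysql:dbname=uloggerdb;host=127.0.0.1", "EDITED_USERNAME", "EDITED_PASSWORD"); echo "Connection OK\n"; } catch (PDOException $e) { echo "Failed: ", $e->getMessage(), "\n"; }'
```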
Now I needed to enable the setup script:
.... edited for space ....
// This script is disabled by default. Change below to true before running.
$enabled = true;
.... edited for space ....
With the setup script enabled, it was now time to get started! I opened the https://EDITED_HOST/ulogger/scripts/setup.php page in my browser and followed the on-screen instructions, logging in with the username and password I made earlier, and everything went smoothly! Keep in mind, once the setup is done, you need to delete or disable the setup script to prevent its future use. So I just deleted it. Here is the end result when you bring up the web page https://EDITED_HOST/ulogger/
The mobile app is very simple to set up. Under settings, you simply input the username, password, and URL for your web server. You can then choose whether you want it to auto-upload as you go, or you can update it manually by pressing an upload button on the main screen.
I thought it best to mention that the original user you make is an administrator and can edit several settings. I recommend that you make a new user with lesser credentials (non-admin) for use with the phone app. When you are logged in as the administrator (the first account you made), you can choose to make new users. This allows you to have lesser accounts for daily logging of activity. It is also handy if you are using this for your company as a way to track drivers/workers, or if you have multiple family members, because everyone can have their own username and password.
Hopefully, this will save others using CentOS 7 a little bit of time to set this up!
So, I missed a few posts over the past week or so. Actually, I’ve been really busy, but for a good cause. I’ve been fortunate enough to be selected by CompTIA to beta test the new Server+ exam. The busy part is all of the studying I am doing to try to pass the exam. I have a very short window of time in which to take the exam as well. Servers are not a new thing for me, but I have my doubts about passing the test, so I’m trying really, really hard to study up for it.
At work we use a few servers, from file hosts to PXE boot servers, but we don’t use them the way most of the IT world does. Here at work, our servers are offline and not connected to the internet at all. We manage and maintain nodes, as well as re-image them “remotely”, but by remotely, I mean that we do it in the same room, from the servers to the machines. We don’t do patch management or updates, either. Some of our servers are running the same way they started out five years ago, because they are closed-loop systems. Components often fail, but we don’t upgrade them; we repair them with pre-stocked and approved replacements of the exact same parts, even if those parts are no longer manufactured. Some of the motherboards and graphics cards we use were out of production before we even installed the servers. It’s just not the way servers are typically used, especially in the scope of this exam.
Of course, my server at home is a web, FTP, photo, and Jabber server, but I’m not sure it’s implemented the way a typical IT setup would be. It is only accessed by myself and my wife, and it is tied specifically to our cell phones with dynamic IP addresses. I manage users on a one-on-one basis, because there are only two users. Not your typical use case, for sure.
So, I’m doing a lot of studying. Fortunately for me, my company provides a video series specific to the Server+ exam, albeit the previous version of the test (of course, the one I’m taking is a beta, after all), which is free for me to peruse. I also picked up a Sybex Server+ book, which I’m reading through as well. Kindle readers can be really handy! If I had to order the book, I would lose a week of study time waiting for it to arrive in Alaska!
All of that to say I’ve been busy.
In the interim, however, I’ve been playing around with the Low Poly course lessons some more and trying to make some low poly people. There are several styles of low poly people, so I’m trying out a few of them. If you can see this post’s pictures in your viewer, then hopefully you can see the little policeman that I made. Well, I guess at the moment he’s just a man in a blue shirt, but I was going to put a badge and tie on him later.
One of the problems that I have with a local server is that we frequently have power “blips”. I don’t want to say outages, because the power is on about 99% of the time. The issue is occasional “brownouts”, about once a month, where the power dips but doesn’t drop out entirely. We have also noticed that about every other week the power will blink for just a moment.
Usually, about three or four times a year, the power will actually go out for more than an hour, and minute-long outages are not uncommon on a regular basis. The worst outage we had was about three years ago, when the power was out at our house for four days! It was wintertime, and I had to run the generator to keep the boiler going. Needless to say, we are prepared around here for the power to go out.
And that’s why I wanted my computer to be prepared as well. Any time you are running something as a server that you need to stay up, you really should have it on a battery backup. So, I purchased an APC Back-UPS 1350. By the specs, and its own digital display, it should keep my server and monitor running for 15 minutes during an outage.
That’s handy, but I wanted to take it one step further. I’m at work during the weekdays and at church on Sunday, so I may not be around when the power goes out. If that happens, having the server run for 15 minutes and then die really wouldn’t help me much. Fortunately, I found some great articles and installed apcupsd to help me automatically control everything. First I added the EPEL repository, and then installed apcupsd:
# yum install epel-release
# yum install apcupsd
Then I navigated to the apc directory to change my settings…
# cd /etc/apcupsd/
# nano apcupsd.conf
This is a bit lengthy, but here is my apcupsd.conf file, without all the comment lines:
# Note that the apcupsd daemon must be restarted in order for changes to
# this configuration file to become active.
# ========= General configuration parameters ============
# ======== Configuration parameters used during power failures ==========
# ==== Configuration statements for Network Information Server ====
# ========== Configuration statements used if sharing =============
# a UPS with more than one machine
# ===== Configuration statements to control apcupsd system logging ========
# ========== Configuration statements used in updating the UPS EPROM =========
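Since only the section headers survive above, here is a sketch of the key directives, based on what I describe below. The UPSCABLE, UPSTYPE, and DEVICE values are typical settings for a USB-connected APC unit, not copied verbatim from my file:

```
# Sketch of key apcupsd.conf directives for a USB-connected APC Back-UPS
UPSCABLE usb
UPSTYPE usb
# DEVICE is left blank for USB units so apcupsd auto-detects the UPS
DEVICE
# Shut down when an estimated 5 minutes of runtime remain...
MINUTES 5
# ...or when the battery charge drops below 20%
BATTERYLEVEL 20
# Network Information Server, bound to the local interface only
NETSERVER on
NISIP 127.0.0.1
```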
The important parts are that I set MINUTES to 5, meaning that if the UPS believes it only has 5 minutes of power left at the current usage rate, it will send a shutdown command to the server computer. Likewise, if BATTERYLEVEL drops below 20%, it will also initiate a shutdown (which, theoretically, would be around the 3-minute mark). I suppose you don’t have to set both, but I wanted to make sure that under either condition, the server computer shut down gracefully.
A really odd pair of parameters is NETSERVER on and NISIP 127.0.0.1. At first I turned these off, but then found that the USB connection to the UPS didn’t work unless they were on and set to the local interface. So be sure you turn those on if you are using one of these units as well.
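To verify that the daemon is actually talking to the UPS, the apcaccess tool that ships with apcupsd can dump the current UPS status (the exact fields reported vary by model):

```shell
# Query the running apcupsd daemon for the current UPS status
apcaccess status
# Typical fields in the output include STATUS, BCHARGE, and TIMELEFT
```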
Be sure to start the service and make sure it’s enabled:
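On CentOS 7 with systemd, that looks like this:

```shell
# Start apcupsd now and enable it at boot
systemctl start apcupsd
systemctl enable apcupsd
# Confirm it is running
systemctl status apcupsd
```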