Convert AVI to MP4 for Piwigo

As it turns out, Piwigo and AVI files don’t play nicely together. So, I shamelessly ripped off this thread, and wrote a script to automatically convert my camcorder’s AVI files into MP4 videos and put them into my gallery:


#!/bin/bash
echo "Converting videos..."
cd /home/alaskalinuxuser/Videos/ || exit 1
for i in *.avi; do ffmpeg -i "$i" -c:a aac -strict -2 -b:a 128k -c:v libx264 -crf 20 "${i%.avi}.mp4"; done
for i in *.AVI; do ffmpeg -i "$i" -c:a aac -strict -2 -b:a 128k -c:v libx264 -crf 20 "${i%.AVI}.mp4"; done

echo "Moving videos..."
mv /home/alaskalinuxuser/Videos/*.mp4 /var/www/html/galleries/camcorder/
cd /home/alaskalinuxuser/Videos/ || exit 1
rm -rf ./*

chown -R apache:apache /var/www/html/galleries/
echo "Changed ownership"



Note the double quotes around "$i". I learned that this is important because single quotes would stop the variable from expanding at all, and no quotes would break on spaces and special characters in file names. As it turns out, the camcorder we have does not allow you to change the naming convention, nor to use anything other than AVI as the video format.
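You can dry-run the expansion used in the loop to see both points at once (the sample file name below is just an illustration):

```shell
#!/bin/bash
# Double quotes let $i expand while keeping a name like
# "beach day.avi" as a single word; single quotes would pass
# the literal string $i to ffmpeg instead.
i="beach day.avi"
echo "${i%.avi}.mp4"    # strips the .avi suffix -> beach day.mp4
```

The same `${i%.AVI}` expansion handles the uppercase loop; the suffix in the pattern has to match the file's case, which is why the script needs both loops.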

Linux – keep it simple.

Adding A Video Plugin To My CentOS Piwigo Server

Previously, I had decided to ditch Google apps, including Google Photos. That meant I needed a new photo backup solution, and I have written several articles about it on this blog. The main portion of the server is Piwigo, a photo viewing/sharing/organizing server that you can access from other devices over the internet. Feel free to check out my previous posts by searching for Piwigo to see the setup.

One feature that was missing, however, was the ability to display and view videos. This brought me on a long adventure that I will summarize here, because it wasn’t as easy as it first seemed.

The first thing I needed was to download the right plugin. There were several to pick from, but the one that seemed easiest to integrate was called video-js. I simply logged into my Piwigo server as the administrator, went to the plugins page, and clicked install. Seemed pretty simple so far….

But, that’s when the problems began.

Following the video-js documentation, I set all my settings under the settings tab, and then moved over to the synchronization tab. I was immediately greeted by yellow triangles because ffmpeg, mediatool, ffprobe, and exiftool were not found. So, I jumped into the terminal, but found that those packages don’t exist in CentOS’s yum repositories. A little bit of internet searching led me to do this (wish I had written down the reference):

# yum install mediainfo
# rpm -Uvh
# yum install ffmpeg ffmpeg-devel -y
# yum install perl-Image-ExifTool.noarch

After installing those packages, I could go to the synchronize page without receiving any errors. Unfortunately, after setting the synchronization settings to my liking, pressing submit returned only this error: “You ask me to do nothing, are your sure?”

So, after more web searching, I went to the video-js plugin issue tracker and found others with the same problem. Included was also a fix by a user named ipsedix:

Back at the command line again, I jumped into my MariaDB like so:

[root@localhost alaskalinuxuser]# mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 1302
Server version: 5.5.65-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW DATABASES;
| Database           |
| information_schema |
| mysql              |
| performance_schema |
| piwigo             |
| test               |
| uloggerdb          |
6 rows in set (0.01 sec)

MariaDB [(none)]> USE piwigo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [piwigo]> SHOW TABLES;
| Tables_in_piwigo              |
| piwigo_caddie                 |
| piwigo_categories             |
| piwigo_comments               |
| piwigo_config                 |
| piwigo_favorites              |
| piwigo_group_access           |
| piwigo_groups                 |
| piwigo_history                |
| piwigo_history_summary        |
| piwigo_image_category         |
| piwigo_image_format           |
| piwigo_image_tag              |
| piwigo_image_videojs          |
| piwigo_images                 |
| piwigo_languages              |
| piwigo_old_permalinks         |
| piwigo_plugins                |
| piwigo_rate                   |
| piwigo_search                 |
| piwigo_sessions               |
| piwigo_sites                  |
| piwigo_tags                   |
| piwigo_themes                 |
| piwigo_upgrade                |
| piwigo_user_access            |
| piwigo_user_auth_keys         |
| piwigo_user_cache             |
| piwigo_user_cache_categories  |
| piwigo_user_feed              |
| piwigo_user_group             |
| piwigo_user_infos             |
| piwigo_user_mail_notification |
| piwigo_users                  |
33 rows in set (0.00 sec)

MariaDB [piwigo]> UPDATE piwigo_config SET value="a:0:{}" WHERE param="vjs_sync";
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [piwigo]> exit

Now we were getting somewhere! Unfortunately, because I had over 1000 videos on the server, I kept getting errors and timeouts. This caused the process to fail repeatedly when I tried to synchronize; it would usually only get through the first 200 or so videos before stopping. So, as a quick and dirty fix, I unchecked the setting to “Overwrite existing posters”. This way, even though the run would time out or fail, it would make the posters (thumbnails) for about 200 videos before it quit. Then all I had to do was run the synchronization process five or six times to get them all done!

Linux – keep it simple.

Let’s Encrypt with DDNS on CentOS 7

My new A+ rating for my personal web server, with certificates from Let’s Encrypt!

A while back, I started using CentOS, with Apache, to host my own website. As I talked about here on this blog, the website is for my Piwigo server, which is a Google Photos alternative. Pictures from my phone are backed up to my home server automatically, and the Piwigo server acts as an interface where people with the appropriate passwords can log in and see the photos. Typically, just me and my wife.

One problem I had, however, was difficulty getting a certificate from a CA (Certificate Authority), so I had to use a self-signed certificate. This worked great, to be honest, except that some browsers show a pesky “this is not secure” message that you have to accept a lot. It got old when I was showing someone, either a client or a friend, the setup and had to acknowledge a big security warning first.

So, I set out once again to get that fixed. I had heard a lot of good things about Let’s Encrypt, the free, open source certificate authority, and that it now supports DDNS hostnames, so I thought I’d give it a try. Logging into the terminal, I followed the instructions, and got this:

[root@localhost alaskalinuxuser]# certbot --apache
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Starting new HTTPS connection (1):

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for
Cleaning up challenges
Unable to find a virtual host listening on port 80 which is currently needed for Certbot to prove to the CA that you control your domain. Please add a virtual host for port 80.

This was a bit confusing to me, since I could browse to my own website on port 80. But, fortunately, I found the answer here:

So, I made a new file at /etc/httpd/conf.d/alaskalinuxuser.conf and filled it in with this:

<VirtualHost *:80>
    DocumentRoot /var/www/html
</VirtualHost>
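If you host more than one site, it is worth adding a ServerName directive so certbot can tell the vhosts apart. A fuller sketch, with example.com standing in as a placeholder for your own DDNS hostname, would look like:

```apache
<VirtualHost *:80>
    # ServerName must match the domain you request the cert for.
    ServerName example.com
    DocumentRoot /var/www/html
</VirtualHost>
```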

After that, I exited nano and restarted the httpd daemon, and was able to re-run certbot:

[root@localhost conf.d]# certbot --apache -d
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Starting new HTTPS connection (1):
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for
Waiting for verification...
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/httpd/conf.d/ssl.conf
Redirecting vhost in /etc/httpd/conf.d/alaskalinuxuser.conf to ssl vhost in /etc/httpd/conf.d/ssl.conf

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled

And now I have a CA vouching for my web server!
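One follow-up worth scheduling: Let’s Encrypt certificates expire after 90 days. A minimal root crontab entry (assuming certbot is on root’s PATH) might look like this; `certbot renew` only replaces certificates that are close to expiring, so running it often is harmless:

```
# Check twice daily; renews only when a cert is near expiry.
0 0,12 * * * certbot renew --quiet
```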

Linux – keep it simple.

Home photo server, part 2: Apache, phpAlbum, and Piwigo

Last time we talked about how to set up the domain name with DDNS for free, and then using very secure FTP, vsftp, and SyncTool (the Android app) to automatically move photos from your phone to your home server. But that’s only half of the battle. One of the big perks to using Google Photos is that you can browse them anytime from anywhere. You can share them with friends, make comments on them, and other socially related things. We need a way to do that from our home server, too.

There are many, many methods to accomplish this, and I tried several, but I’d like to point out two options that worked really well, were simple to set up and use, and used little to no JavaScript. There are several reasons one might not want to use JS, like security, but I’ll save that for another post. The two best options were phpAlbum, which is 100% JS free, and Piwigo, which only uses JS for some display themes; I chose a theme without it, and it seems to work just fine.

Either way, I needed a web server, so I installed Apache.

# yum install httpd

# systemctl enable httpd

# firewall-cmd --add-port=80/tcp

# firewall-cmd --add-port=80/tcp --permanent

Right out of the box, you could browse to my web server. Of course, it was insecure and only displayed the main page. I’ll break down adding certs and making it TLS and SSL compliant in another post. Next, I set up my router – make sure you route that traffic to your server! Since that step is rather router specific, you’ll have to look it up on your own, but there are tons of guides on it. Then it was time to install a photo gallery web server.

So, first on the list, I tried phpAlbum. It worked great and was a good fit, provided that you have a smaller photo collection. One big plus is that it doesn’t need a MySQL database or anything. The only downside: once I surpassed 8000 photos, and numerous folders, it couldn’t seem to keep up. So, for a short time I got around this by setting up multiple web pages, each running a unique instance of phpAlbum with its own set of photos. This was simple, really: I made one for a couple of years’ worth, another for the next 3 or 4 years, and then had a main web page that linked to the different ones. It is a bit tedious and cumbersome, but it wasn’t too bad with a lot of static images.

Since it is so simple, and has a good installation guide, which you can read here, I’ll be very brief about how I set this up. You can also click here for a phpAlbum demo.

# yum install php php-xml php-mbstring php-gd

# yum install ImageMagick

# yum install unzip zip

After downloading the latest phpAlbum zip from their website, unzip it and edit config_change_it.php to your needs with nano or vi, basically by setting the data directory and the photo directory, and rename it config.php. Now copy the entire directory to /var/www/html. So in my setup, it was /var/www/html/phpAlbum/ with all of the files in it.

Next, chmod 777 the cache, data, and photo directories inside that folder, so other users (the upload functions) can write to them. Then:

# chown -R apache:apache /var/www/html/phpAlbum

so your web server can own it (since I’m using Apache as the web server). Now all you have to do is navigate to http://yourDomainNameIPaddress/phpAlbum and you will be greeted by a login; the default username and password are admin and admin. Once you’re in, it will ask you a few questions, and you can be up and running in a few minutes! It worked really great, like I mentioned earlier, when I had a smaller photo collection (under 8000 for me, but your mileage may vary).

Since it was struggling with more photos, I decided to switch to Piwigo. You can check out a demo of it here. There is a really, really great guide on TecMint, that I drew from when putting this together, but it was relatively simple. Start by installing the dependencies – note that a lot of these were already installed for phpAlbum, and I’m not 100% sure you need all of these, but this is what I installed:

# yum install php php-xml php-mbstring php-gd wget unzip zip ImageMagick python python-tk python-psyco mariadb-server php-mysqlnd

Now you need to enable and set up your MySQL/mariaDB:

# systemctl enable mariadb
# systemctl start mariadb

# mysql -u root -p
MariaDB [(none)]> create database piwigo;
MariaDB [(none)]> grant all privileges on piwigo.* to 'piwigouser'@'localhost' identified by 'pass123';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit

# systemctl restart httpd

Be sure to use your own usernames and passwords. I don’t recommend the defaults! Then you need to download Piwigo and unzip it, placing it in your Apache web server root directory. After you put it in place, you need to set the proper read and write permissions, as well as ownership:

# wget -O
# unzip
# cp -rf piwigo/* /var/www/html/
# chown -R apache:apache /var/www/html/
# chmod -R 755 /var/www/html/
# chmod -R 777 /var/www/html/_data/

# systemctl restart httpd
# systemctl restart mariadb

Now it’s up to you how you want to handle moving the pictures we automatically uploaded to the server into the Piwigo directory. For instance, you can have your photos upload from your phone to your home directory, and then make a cron job or script, or manually put them into your Piwigo directory. One benefit of this is deleting blurry or useless photos. I like this option best, but that’s my opinion.
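As a sketch of that cron/script option – the function name and paths below are my own examples, not from my actual setup – the move can look like this:

```shell
#!/bin/bash
# Move freshly uploaded photos into the Piwigo galleries directory.
# Adjust the source and destination paths to your own layout.
move_photos() {
    local src="$1" dest="$2"
    mkdir -p "$dest"
    local f
    for f in "$src"/*.jpg "$src"/*.JPG "$src"/*.png; do
        [ -e "$f" ] || continue   # glob matched no files
        mv "$f" "$dest/"
    done
}

# Example (hypothetical paths):
# move_photos /home/alaskalinuxuser/Uploads /var/www/html/galleries/phone
```

A nightly cron entry calling a script like this keeps the gallery folder current; a Piwigo synchronization afterward picks up the new files. Anything that isn’t a photo stays behind in the upload folder for review, which also gives you a chance to delete the blurry shots first.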

Another option is to set the /var/www/html/galleries photo permissions to be writable/readable by others, or add your user to the group and upload your photos directly to it. I tried this as well, and it works well, too. Either way, now all you have to do is use your web browser to navigate to your server, and you should see the Piwigo first login/setup screen.

Choose your options, such as language, etc., tell Piwigo what the MySQL/MariaDB is called and what that password is, and you should be up and running in no time! Now you can log into the web interface as the admin user (that you just set) and start choosing themes, uploading photos with the web app, or syncing the galleries folder for the photos you put there via FTP or moved manually or scripted.


In this screenshot, you can see the light theme that I went with, and how there are some photos available for the public to see. If I log in, then I can see the rest of the pictures:


You can have more users and give each user access to certain files, folders, or groups. Groups is nice because you can give “family” the option to see this, and “friends” the option to see that. You can control everything from the web admin screen:


Another plus to Piwigo is that there are several Android apps out there: some paid, some free, some open source. So there are options, but quite honestly Android’s web browser works great, since Piwigo has two theme settings, one for desktop browsing and one for mobile browsing. So you can have two different themes, one each for desktop and mobile, to maximize what works best for you.

There are still a few things to cover, though. Right now, it is only using port 80 for insecure web traffic. We definitely want to use secure HTTP on port 443, so we will cover certificates and security next.

Linux – keep it simple.

Home photo server, part 1: Server Setup, SCP and FTP

While there are many, many options out there for photo storage, if you are looking for a home storage solution that does NOT involve just plugging in your phone and dumping the pictures to your hard drive, you have to get a little technical. (By the way, if that is what you have to do, there is no shame in it. It is probably a lot safer doing that than letting Google hold all of your photos.)

The first thing I needed was a server. Granted, you could use just about anything these days, and there are a lot of open source/open hardware type solutions, but I was gifted an older, generation 1 Dell PowerEdge 1950 from a friend. Granted, it was made in 2006, but it is still 64 bit, has two quad core Xeon 2 GHz processors, and I loaded it with 24 GB of RAM. You can get them on eBay now for about $60. A little overkill for this sort of thing, but the price was right! As a bonus, it supports hardware RAID, and I put two 2 TB drives in a mirror array, giving me 2 TB of space that is backed up.

From there I loaded CentOS 7 on it per the usual installation method, and updated the system. I also purchased an APC battery backup unit, a Back-UPS 1350. This would only hold the power on for about 15 minutes, but it would help for brown outs, and frequent “blips” where the power goes out for only a second or so, which is common where I live. Later I’ll have to do a post on setting up the auto-shutdown and controls, because that was rather interesting.

So the next thing I needed, if I wanted this to work, was a domain. I needed a way to contact my home computer from my cell phone, especially while not at home. Granted, you could set all of this up so that when you come home your phone automatically backs up your photos, but I wanted to be able to do this from abroad. Thus enter No-IP. I’ve used them before, and it is great if you are looking for a cheap, cheap solution. Because it is free.

Granted, with a free account your hostname needs to be manually renewed every 30 days, but they send you an email, and all you have to do is click the link to keep it active, so it is pretty easy. After creating an account, logging in, and setting up a hostname for my dynamic IP address, all I had to do was install the DUC software. DUC is the Dynamic Update Client, software that allows you to:

“Remote access your computer, DVR, webcam, security camera or any internet connected device easily. Dynamic DNS points an easy to remember hostname to your dynamic IP address.” (

All you have to do is download the source code and compile it. It went like this:

$ cd noip-2.1.9-1/
$ make
$ sudo make install

After entering my password, it ran through an installation script and asked me for my account name, password for the account, and which DDNS I wanted to associate with this computer. It is interesting, you can have several.

From here, it then became a matter of preference on how to continue. I toyed with several options on my Android phone for how to get the photos from the phone to the computer over the internet.  One of the first methods I tried was using scp, or secure copy over ssh. So, I installed ssh on my server.

# yum install openssh-server

# cd /etc
# cd ssh/
# ls
# nano sshd_config

I then edited sshd_config to my liking; there are a lot of guides for this on the internet, so I won’t belabor the point here. I will note that I use non-standard ports as a form of extra security, however slight that may be, so you may consider doing the same, but essentially it works as-is once installed. Then I opened the ports in the firewall – I list the standard ports here, for ease of following along:

# firewall-cmd --help
# firewall-cmd --add-port=22/tcp
# firewall-cmd --add-port=22/tcp --permanent

And that worked great. Unfortunately, scp is slow and can be cumbersome from an Android phone, especially since I didn’t find any apps that would sync my directories automatically (at least none that were open source, so I could know what was really being synced). However, I found several open source options that would sync automatically via FTP. So I decided to install “very secure FTP”, or vsftpd, like so:

# yum install vsftpd

# cd /etc/vsftpd/

# ls

# nano vsftpd.conf

Again, I set it up to my needs, but you can check out this guide for ideas. I also needed to punch some holes in the firewall for the service and for both active and passive mode, since several Android apps would use either.

# firewall-cmd --add-port=21/tcp
# firewall-cmd --add-port=21/tcp --permanent
# firewall-cmd --add-port=20/tcp
# firewall-cmd --add-port=20/tcp --permanent

# firewall-cmd --permanent --add-port=40000-50000/tcp
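That high port range only helps if vsftpd is actually told to use it. In vsftpd.conf, the passive-mode range is pinned like this – the values here are my assumption, chosen to match the firewall rule above:

```
pasv_enable=YES
pasv_min_port=40000
pasv_max_port=50000
```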

And voilà! All that was left was a quick restart of the processes:

# firewall-cmd --reload

# systemctl restart vsftpd

And now I could use FTP apps on my Android phone to automatically sync pictures from my phone to the home server! In case you are wondering, a great app for this is on F-Droid, the repository of open source apps. It is called SyncTool, and it is very handy. It supports one-way or two-way FTP sync, automatic scheduling, and running jobs manually.

Wheeow, that was a long post, but now my photos were being automatically backed up. However, that’s only part of the story, because if I was convincing my wife to ditch Google Photos, I needed to also have a way to browse them online, share them, organize them, etc…. It was time for a web server. Guess we’ll cover that next.

Linux – keep it simple.

Slic3r and 3D prints for my LTE project….

After printing the low resolution LTE project case, I found that I needed to make a few adjustments. Fortunately, since I had saved all the work, I could just edit the file and print again. This time I tried several “hi-res” prints. If you are working through this project, or want to see the stl files, you can download them from my MediaFire account.

Overall, I think the new version of the print will work well. However, I had several issues with the Slic3r settings that made things interesting. Slic3r is a program that takes your stl file and slices it into layers of G-Code that the 3D printer actually reads to build objects. What I found was rather interesting to me, perhaps you’ll think so as well.

Using Slic3r as a stand-alone program with default settings yields different results than using Slic3r through the Repetier-Host program with the same settings.

This may seem odd, but with the same default settings applied to Slic3r, you get entirely different results if you slice it in Repetier than if you run Slic3r stand alone. For convenience, Repetier (the 3D printer controller program) can call Slic3r for you, meaning you just load an stl file, click slice, and it calls Slic3r, gets the g-code, and prints. Or, you can open Slic3r by itself, load the stl file, and slice it there. Doing this both ways with the same default settings will print two very different looking objects.

I found the better option is to open Slic3r directly, slice the stl to g-code, then open it as g-code in Repetier. However, when I do this, I have to offset the Y coordinates to get it centered on the Repetier “plate”. This can be very annoying when your print is the maximum sized object that you can print. But doing it this way produces the best product.

There are dozens of settings that dramatically change what your print will look like.

There are settings for everything in Slic3r. And each setting seems to make an entirely different print when I’m done. A person could spend days fiddling with the settings to try to make perfect prints. I guess I sort of thought that by now the process would be a bit more “plug and play”. Anything that I’ve downloaded thus far to print, the author must have known all of the good settings, because those objects print perfectly. But my own projects are another story. Lots of time was spent online trying to figure out which settings would work best, and then it still came down to a bit of trial and error.

Side notes about printing.

After having printed this project, I came to a few conclusions:

  1. Print smaller parts if possible. My LTE project top is twice the size of the LTE project bottom, and the bottom is a much better print than the top (smaller size, less warping, less defects).
  2. While I’m glad I bought this cheaper printer, you do get what you pay for. Although I’d still recommend it for beginners.
  3. If the power goes out in the middle of printing, you pretty much wasted 4+ hours and a lot of filament. (Suggestion: UPS battery backup for the computer AND the 3D printer.)
  4. If the print temperature is too hot, you can’t get your model off of the tray.
  5. If the print temperature is too cold, you can’t get your model to stick together.

Just a few random musings on 3D printing. Now on to using the new case for my project!

Linux – keep it simple.