
GIT: Remove File History

October 24, 2019 | Coding, System Administration


Ever accidentally commit a file that you meant to have in your .gitignore? Perhaps a file which includes an API key? Removing it is easier than you probably think.

$ git rm --cached the-file-i-want-to-remove
$ git commit --amend -C HEAD
$ git filter-branch --force --index-filter "git rm --cached --ignore-unmatch the-file-i-want-to-remove" --prune-empty --tag-name-filter cat -- --all

But, you’re better off just not committing files that you don’t want in the first place.


I stumbled onto this Reddit post today. A rant, but one I agree with wholeheartedly.

I recently changed jobs. The person following me to my last job will come in to thorough documentation, step by step instructions, etc. The job I just started hasn’t a shred. Word is the guy before me (only at the job 6 months, so there is some excuse) didn’t document, and the guy before him (12 years) destroyed all his documentation on the way out the door.
I don’t know a sysadmin who doesn’t keep some kind of notes. At minimum, throw them in a shared doc or notebook or something. At this new place I get the brunt of the two types of non-documenters: the lazy and the malicious.
Isn’t documenting as you go and leaving the documentation behind for the next poor sap just part of the ethical code of the job, or am I wrong?

r/sysadmin • Posted by u/rabadashridiculous

What really got to me is his account of a sysadmin who, after twelve years in the job, destroyed all his notes on the way out. It doesn't surprise me, however, that there are no formal documentation policies.

That sysadmin, in destroying his notes, effectively destroyed the company's property; yes, that work product was owned by his former employer. They could take action against him.

No matter how he felt, this is not professional or acceptable behavior.

However, every place I have worked was in the same boat when I arrived: no formal documentation policies, and, if I was lucky, a few printouts or a collection of saved emails.

If you encounter this situation in a new job, do yourself a favor and take it upon yourself to set a new standard. Fire up a small web server on the local network, load a CMS like WordPress or Drupal, give your fellow admins access, and start documenting.

You will make life easier for everyone in the office, including, and especially, yourself.

How to train SpamAssassin

June 28, 2019 | System Administration


In order for Bayes training to be effective you need to have a collection of good email (HAM) and a collection of bad email (SPAM). These collections should be 1000+ messages each, and you should probably have more HAM than SPAM.

You should keep updating these folders with new messages as time goes on: spam practices change, and you don't want to run the risk of SpamAssassin deciding a specific year or month is spam.

I like to create a dedicated mailbox for this purpose and, through IMAP, create folders called HAM and SPAM to keep things organized.

If you are using Ubuntu Server, the default bayes path for your SpamAssassin DB is /var/lib/amavis/.spamassassin so that is where we will do our work. Otherwise, check your distro package for details.

$ sudo su -
$ cd /var/lib/amavis/.spamassassin

Next, let's check the status of the Bayes DB.

$  sa-learn --dbpath . --dump magic
0.000          0          3          0  non-token data: bayes db version
0.000          0       1207          0  non-token data: nspam
0.000          0       3784          0  non-token data: nham
0.000          0     177278          0  non-token data: ntokens
0.000          0 1079041431          0  non-token data: oldest atime
0.000          0 1561554929          0  non-token data: newest atime
0.000          0 1561558640          0  non-token data: last journal sync atime
0.000          0 1561526651          0  non-token data: last expiry atime
0.000          0          0          0  non-token data: last expire atime delta
0.000          0          0          0  non-token data: last expire reduction count

Now we will train it, but not SYNC it. Syncing makes any new data live, and you may not want that until you’ve built a sufficiently detailed database.

$ sa-learn --no-sync --dbpath . --progress --ham /Path/To/Mailbox/HAM/{cur,new}
96% [=========================================  ]  25.00 msgs/sec 02m31s DONE
Learned tokens from 33 message(s) (3783 message(s) examined)
$ sa-learn --no-sync --dbpath . --progress --spam /Path/To/Mailbox/SPAM/{cur,new}
98% [========================================== ]  26.23 msgs/sec 00m47s DONE
Learned tokens from 355 message(s) (1242 message(s) examined)

The {cur,new} is shell brace expansion: it hands sa-learn both the cur and new subdirectories of the HAM and SPAM folders.
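The expansion happens in bash before sa-learn ever runs; a quick demo with throwaway paths:

```shell
# the shell turns one braced path into two separate arguments
mkdir -p /tmp/ham-demo/{cur,new}    # demo directories, not real maildirs
echo /tmp/ham-demo/{cur,new}
# under bash, prints: /tmp/ham-demo/cur /tmp/ham-demo/new
```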

Run the dump magic command again, and if satisfied with the number of tokens, sync the database.

$  sa-learn --dbpath . --sync

That’s all. SpamAssassin is trained up and live.

How to test bind9 config

June 27, 2019 | System Administration


Note: Depending on your install, file names and locations may differ.

After making changes to the main named.conf it is advised to check the config before restarting the service.

$ named-checkconf /etc/bind/named.conf.local

If running in a chrooted environment

$ named-checkconf -t /var/named/chroot /etc/bind/named.conf.local

Like the main config, after editing a zone file it is advised to test it directly.

$ named-checkzone /var/cache/bind/zones/
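named-checkzone expects the zone name followed by the zone file, e.g. named-checkzone example.com /var/cache/bind/zones/db.example.com, and on success it reports the loaded serial and OK. For reference, a minimal zone file sketch, with all names and addresses hypothetical:

```
; db.example.com: minimal sketch
$TTL    86400
@       IN  SOA ns1.example.com. admin.example.com. (
                2019062701  ; serial
                3600        ; refresh
                900         ; retry
                604800      ; expire
                86400 )     ; negative-cache TTL
@       IN  NS  ns1.example.com.
ns1     IN  A   192.0.2.1
```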

Windows Terminal Preview

June 22, 2019 | System Administration


Well, Microsoft released a preview version of its new “Windows Terminal”. It’s available in the Windows Store. I’ve been looking forward to checking it out since it was announced. The slow march of Windows’ conversion to Linux continues, but they won’t call it that.

But, alas. The laptop from which I am typing this is, unfortunately, not sufficient for the install, being, apparently, behind on updates. Which are now downloading.

So long as this antiquated hardware doesn’t overheat and die, as it is prone to do, I’ll get to check out the Terminal tomorrow. I’m crossing my fingers…

Hopefully, this survives long enough to update.


Congratulations on using a Version Control System for your code! Believe it or not, not everyone is, well, let’s say, “informed” enough to do it. You’re probably also using GitHub, which is great too. Especially since they now offer private repositories for free.

It’s not always the case, however, that you want to use a solution like GitHub, BitBucket, etc. Sometimes your work requirements are such that your code is all kept internal to your work network and you need to set up your own remotes. Here is how you do it.

On the server that will host your repos, you will need to create a directory to contain them all, then create and initialize the bare repository for your project. Let’s say your repository directory will live off of root and be named git.

$ mkdir /git
$ mkdir /git/my-repo.git && cd /git/my-repo.git
$ git init --bare

Your empty repository is ready to go, so go back to your workstation. Replace username and host with your username and the DNS name or IP address of the remote server.

$ git remote add origin username@host:/git/my-repo.git
$ git push origin master

And that’s it. You now have a remote repository under your control.
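To sanity-check the whole flow without a second machine, the same steps work with a plain filesystem path standing in for username@host. A throwaway sketch, assuming a reasonably recent git (2.28+ for --initial-branch):

```shell
# "server" side: a bare repo in a temp directory
hub=$(mktemp -d)/my-repo.git
git init --bare -q "$hub"

# "workstation" side: make a commit and push it to the new remote
work=$(mktemp -d)
cd "$work"
git init -q --initial-branch=master
git config user.email demo@example.com
git config user.name demo
echo "hello" > README
git add README
git commit -qm "first commit"
git remote add origin "$hub"
git push -q origin master
```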


bcrypt requires both Python 2 (newer than 2.6, older than 3.0) and the Microsoft C++ build tools. If Python 3+ is in your path, you will need to specify the path to Python 2:

npm config set python C:\Python27\python.exe

For the Microsoft C++ build tools you will have to install Visual Studio Express. If you have Visual Studio Express 2010 installed, you should be able to install bcrypt with npm fine; however, if you have a later version or download the current one (2013), then you need to specify the version in the npm install command.

npm install bcrypt --msvs_version=2013

Note: This post was recovered from an old, now defunct, blog.


I’ve been playing with the idea of converting to Arch Linux, at least on a trial basis, for some time now.  A couple weeks ago I spun up a VM and installed it without a fuss, so it seemed like it would be okay to install on my laptop, but not having a lot of free time I put it off.  The popularity of Arch Linux in my What’s your favorite Linux Distro poll pushed me over the top and I managed to find some time this weekend to sit down and do the install.

It took much longer than my install on the VM for a couple of reasons and would have gone a lot smoother had I been a bit more prepared.  However, now that I’ve been through it and made my mistakes, I’m quite confident I could get through the process in a reasonable amount of time as most of the difficulty came right at the start of the install.   This is some of what I learned from the process of installing Arch Linux: 

  1. UEFI is a pain in the … you know what I am talking about
  2. If using UEFI, disable secure boot.  Not as straightforward as it would seem, at least not on my Acer Aspire.  In order to disable secure boot, you have to first set a supervisor password, then disable secure boot, then remove the supervisor password.  It seems like backwards logic to me; in my world the supervisor password shouldn’t become an option unless secure boot is turned on, and turning off secure boot should disable the supervisor password.  But they didn’t ask me when they wrote the UEFI, so why should it make sense?
  3. Your boot partition needs to be flagged as UEFI (type EF00). The Arch Linux Beginners’ Guide is excellent and walks you through the install process almost flawlessly, with the exception of this.  It tells you how to make the partition, but not how to flag it.  I spent way too much time trying to figure out why my install wouldn’t boot, and scouring the guide for an answer.  The solution? Install gdisk and add the EF00 type flag.
  4. If you have multiple partitions or multiple drives, make sure you write down which slice is being mounted where as you create them, and make sure you mount them in the correct order before running pacstrap; your boot partition probably doesn’t have enough free space for the whole operating system.
  5. The Arch Linux Wiki is excellent.  Unlike the Ubuntu Answers pages, it actually has answers.  The Arch Linux Wiki is very comprehensive, and covers pretty much everything I had to look up very well, and at the very least, gave me an idea of what I needed to look for to identify the source of the problem.
  6. UbuntuOne client doesn’t like to run unless you have installed extra fonts or a Desktop Environment.
  7. Chrome currently isn’t compatible with the installed libcrypt, so I am relegated to chromium, which is pretty much the same thing.
  8. Installing base-devel is not optional

And that is basically it.  

After the initial UEFI / partitioning battle, everything went pretty much as advertised and has really refreshed my knowledge of the underlying workings of Linux.  I’ve been using Ubuntu for my desktop for too long (read: since they first started shipping install disks for free) and it’s fogged my memory of command-line configuration for desktop services such as X11 (I spend most of my working life on server consoles).

Probably the best part for me (so far) is that breaking away from Ubuntu has forced me to take a serious look at the various window managers and desktop environments out there. That has brought me to i3wm. On its own it’s excellent, although there are a lot of new key bindings for me to learn and an almost crazy amount of customization to play with, and it is missing some of the standard functionality I want in a desktop environment, such as wallpapers, a lock screen, etc.

Luckily it can replace the window manager in XFCE4 easily and makes for an excellent combination.

Update: After playing with the i3 configuration, I’ve dropped XFCE and am using straight i3wm, and loving it.  Also, I’ve seen a lot of comments about this article on Reddit.  If you are visiting from Reddit and have a comment, please post it below so I see it and can respond. Thanks!

Note: This page was recovered from an old, now defunct, blog.


For those who administer Postfix email servers that see high volumes of email, dealing with Spam can be quite a challenge. Initially I was relying on the server to do its own Spam checking with tools such as amavisd-new, SpamAssassin, etc.

That was working really well for the most part, but when the Spam volume gets high, performing all that processing on hundreds of thousands of emails puts unnecessary load on the server.

Enter Real-time Black-hole Lists, or DNS Black-hole Lists — among other names — which allow us to block suspected Spam before it is even processed by the server. I was originally against RBLs, having had servers land on them and having had to go through the annoyance of getting unlisted.

There are even a few that I would consider outright extortion: asking you to pay a fee to expedite the un-listing process, or a one-time fee to never be listed again. That being said, the more reputable ones — at the time I’m writing this, I’m thinking Spamhaus, SORBS, SpamCop — will remove you on request within 24 hours without demanding payment, and are quite effective.

How does an email server get listed?

There are several reasons an email server will get listed on a Real-time Black-hole List, some of which are:

  1. Sending copious amounts of Spam. (Duh)
  2. Not having a valid reverse DNS entry.
  3. Not having a proper SPF record.
  4. The server is in a dynamic IP range.
  5. The server is in the same subnet as a known and aggressive spammer.
  6. The server is new.

These are all pretty easy to fix, with the exception of the first one: sending copious amounts of Spam. The only way to truly put an end to that is to make sure your system is secure, an ever-ongoing process at best, and when it is compromised it can be hard to catch in time to prevent getting listed.

I suggest requesting that AOL whitelist your servers; once approved, any report of Spam from your server by an AOL customer will generate an email report, which will help catch problems early, as AOL is a prime target for Spam.

How do RBLs work? 

Real-time Black-hole Lists are basically reverse DNS entries, which look something like this (rbl.example.com stands in for the list’s zone):

2.0.0.127.rbl.example.com.  IN A    127.0.0.2
2.0.0.127.rbl.example.com.  IN TXT  “SPAMMER, Banish to Oblivion”

That should be an obvious enough fake IP, but that’s it. The email server reverses the connecting client’s IP, appends the RBL’s zone, performs the DNS lookup, and acts accordingly. Pretty simple!
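That lookup amounts to reversing the client’s IP octets and appending the list’s zone. A small shell sketch; the IP is from a documentation range, and zen.spamhaus.org is just one of the lists mentioned above:

```shell
ip=198.51.100.7      # hypothetical connecting client
rbl=zen.spamhaus.org
# reverse the octets and append the RBL zone
query=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}').$rbl
echo "$query"
# prints: 7.100.51.198.zen.spamhaus.org
# the MTA resolves this name; an A record in 127.0.0.0/8 means "listed"
```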

How do I make Postfix use an RBL?

Configuring Postfix to use a Real-Time Black-hole List is extremely simple, once you have decided on which RBLs you wish to use.

In your Postfix main.cf you should already have smtpd_recipient_restrictions configured; if you don’t, you may want to look into that before going any further.

Assuming you already have it set up, you can simply add reject_rbl_client entries to the end of it, one per list, each followed by the RBL’s hostname.
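As a sketch, the relevant main.cf block might end up looking like this; the zone names are the commonly used ones for the lists mentioned above, but verify each list’s current hostname and terms before deploying:

```
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org,
    reject_rbl_client bl.spamcop.net,
    reject_rbl_client dnsbl.sorbs.net
```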

And that’s it. Run sudo postfix reload and you’re good to go!

Note: This post was recovered from an old, now defunct, blog.