Script Day: SSH to a host behind a NAT

I use SSH daily to work with different remote services, and it’s always a very straightforward process… unless the remote server you want to work with is on a LAN somewhere behind NAT1. When you need to access such an internal server, the obvious option is to SSH into the firewall2 and then SSH again to your server of choice.

But there’s a better way, and you don’t even have to fiddle with the firewall server!

(this is not actually a script, though minimal text editing is required)

The solution is actually quite simple: set up an alias in your .ssh/config file that you can use to reach the remote server when you are outside the LAN (if you are inside the LAN it’s better to access it directly). For that alias we will set up a ProxyCommand that tells SSH to first access the firewall server and open a tunnel to the target LAN server.

It looks like this:

Host remote-alias
    ProxyCommand ssh firewall-user@firewall-server nc lan-server 22

This setup works best if your access to the firewall-user account requires no password or passphrase (i.e. it uses an SSH private key that either has no passphrase or is already loaded in the agent) – then the login is as streamlined as direct access. At worst, you’d need to type in two passwords.
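
With that alias in ~/.ssh/config, connecting from outside the LAN is just a normal SSH call (the LAN user name here is illustrative):

ssh lan-user@remote-alias

scp, rsync-over-SSH and anything else that reads .ssh/config can use the same alias.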

  1. a router that does Network Address Translation, so the server’s address is not accessible from outside the LAN
  2. or some other server that has legs both inside and outside the LAN – I’m using a DNATed server, which most off-the-shelf routers incorrectly call a “DMZ”

Recovering InnoDB Tables In MySQL 5

The following procedure can be used to recover InnoDB database tables from a backup of a MySQL server that had the “innodb_file_per_table” setting enabled but whose metadata (in the “ibdata*” files) was lost – for example due to a server crash.

The process involves two steps:

  1. Recover the table structure from the .frm files
  2. Recover the data from the .ibd files (InnoDB tablespace)

There is a lot of copying of the backed-up files into the MySQL datadir, so it’s useful to have the backup available on the database server machine. In my setup the backup for each database was copied to a “backup” directory under the database’s datadir, so – for example – for the table somedb.sometable there exist the files somedb/backup/sometable.frm and somedb/backup/sometable.ibd.

Additionally, the process for recovering the table structures creates a lot of superfluous metadata in the InnoDB data files, so after the first stage I’m going to destroy the InnoDB data files and let the InnoDB engine re-generate them – as a result any existing InnoDB tables will be destroyed. This is important, so I’ll reiterate: using the procedure detailed here will destroy any existing and working InnoDB databases! This procedure is therefore useful for recovering a destroyed database server onto a new server, or as a temporary measure on a scratch server so the data can be dumped to SQL files that will later be loaded into an existing server.

There is likely a less heavy-handed way to do this – for example, check out this article from Percona’s MySQL blog – but for my purpose this is enough.
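
For a sense of where this is going, the per-table part of the second stage is the usual discard/import tablespace dance. Here is a minimal sketch for a single table, using the somedb.sometable example and backup layout described above – an outline only, not the exact commands from the full procedure:

DATADIR=/var/lib/mysql   # adjust to the actual MySQL datadir
# stage 1 must already have re-created the table structure for somedb.sometable
mysql somedb -e 'ALTER TABLE sometable DISCARD TABLESPACE;'
cp "$DATADIR/somedb/backup/sometable.ibd" "$DATADIR/somedb/sometable.ibd"
chown mysql:mysql "$DATADIR/somedb/sometable.ibd"
mysql somedb -e 'ALTER TABLE sometable IMPORT TABLESPACE;'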

(more…)

Script Day: Automatically backup your EC2 instance using snapshots

I install the following script as a cron job on the Amazon AWS virtual machines I deploy, to allow them to back themselves up automatically. The script uses the EC2 management utilities that are normally available on “Amazon Linux” installations (and can be easily installed on other Linux distributions) to create EBS snapshots of the currently mounted root EBS volume1.
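The core of it looks roughly like this – a sketch using the newer AWS CLI rather than the EC2 API tools the actual script uses, with the root device name and volume lookup being my own assumptions:

# find the EBS volume attached as this instance's root device and snapshot it
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# assumes the root volume is attached as /dev/xvda - adjust for your AMI
VOLUME_ID=$(aws ec2 describe-volumes \
  --filters Name=attachment.instance-id,Values=$INSTANCE_ID Name=attachment.device,Values=/dev/xvda \
  --query 'Volumes[0].VolumeId' --output text)
aws ec2 create-snapshot --volume-id "$VOLUME_ID" \
  --description "automatic backup of $INSTANCE_ID root volume, $(date +%F)"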
(more…)

  1. I don’t expect this script to work for instances that have an instance-store root device, but I don’t expect to encounter those any more

Script Day: find the oldest file in a directory structure

This piece of script came in handy when I wrote a utility that “recycles” space on a logging partition: before log rotation archives the current log file, we move some old log files (depending on some archive freshness policy) to a remote storage that archives older files.

The problem is that the “old archive storage” also has limited disk space, and I got fed up managing the archive by hand. The solution I came up with is to scan the hierarchy of log files in the storage (logs are stored hierarchically according to origin and type) and delete old files until there is enough room to move some newer files in. That way the “old archive storage” is always kept full, keeps as much back-log as possible, and does this automatically.

The piece of code that determines which files we want to delete works like this:

  1. Use find to list all the files in the directory structure
  2. Pipe it to perl and collect all the file names in a list
  3. Use perl’s sort operator to order the files by modification time (i.e. oldest first)
  4. Use head to get just the first file

So it looks like this:

find /mnt/httpd_back/ -type f | perl -nle 'next unless -f; push @files, $_; END { foreach $file (sort { @a=stat($a); @b=stat($b); $a[9] <=> $b[9] } @files) { print $file; }}' | head -n1

Note: normally we use head to get some initial output and terminate the process early, before it does more costly work – when head has enough data it closes the pipe, and the resulting SIGPIPE usually terminates the upstream process that generates the data. In this case – as in all other cases involving sort – the upstream process buffers all the data in its own memory before outputting anything, so it can sort everything; using head here is just a filter to get what I want and does not actually save any work. I could have easily done the same thing inside the perl script itself by replacing the block print $file; with print $file; last; – this has the same effect as using head, because head will send SIGPIPE to perl after getting the first print and will terminate it. Deciding which way to go is probably more about readability of the code, and I prefer my original version because it’s easier to read for non-perl specialists.
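
For reference, the self-terminating variant mentioned above looks like this (the same one-liner, with last; added and head dropped):

find /mnt/httpd_back/ -type f | perl -nle 'next unless -f; push @files, $_; END { foreach $file (sort { @a=stat($a); @b=stat($b); $a[9] <=> $b[9] } @files) { print $file; last; }}'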

I can then just remove that file, see if I have enough room to move in the newer log file, and if not – repeat the process.

This would work well, I believe, but it may be inefficient if I find a bunch of small files and I want to copy in a large file. So what I did next is to take advantage of the fact that all the log files I have are named using the following simple format:

<service>-<type>_log-<year><month><day>.gz

and that allows me to easily find all the log files that record the same day and eliminate them at the same time. Subsequent moving of additional files will likely succeed because I cleared out all the log files of an entire day. If not, I can always go and clear up another day’s worth of logs.
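
A minimal sketch of that idea, assuming the naming format above (the sed extraction and the -delete step are my own illustration, not the exact utility code):

# locate the oldest log file, as before
OLDEST=$(find /mnt/httpd_back/ -type f | perl -nle 'next unless -f; push @files, $_; END { foreach $file (sort { @a=stat($a); @b=stat($b); $a[9] <=> $b[9] } @files) { print $file; last; }}')
# extract the <year><month><day> part of the <service>-<type>_log-<date>.gz name
DAY=$(basename "$OLDEST" .gz | sed 's/.*_log-//')
# drop every log file from that day, clearing a whole day's worth of space at once
find /mnt/httpd_back/ -type f -name "*_log-${DAY}.gz" -delete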


Script Day: automatically locate the next valid transaction in MySQL binlog

Sometimes MySQL replication breaks due to some corruption in the binary log files1. When your binary log files are corrupted, the only option (other than trying to rebuild a database of hundreds of gigabytes) is to try to skip over the corrupted region and get the slave to pick up from where the transactions are valid.

Locating the correct position in the binary log from which the server can carry on is difficult, but the mysqlbinlog utility makes it easier: you can use its --start-position option to try different positions in the binary log file and see which one lets you read valid transactions from the file2.
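
The brute-force version of that idea looks something like this (a sketch only – the file name and starting offset are placeholders, and this is not the exact script from the full post):

BINLOG=mysql-bin.000123   # placeholder binary log file
POS=400000                # start searching somewhere after the corrupted region
# advance one byte at a time until mysqlbinlog can decode events from this offset
while ! mysqlbinlog --start-position=$POS "$BINLOG" > /dev/null 2>&1; do
  POS=$((POS + 1))
done
echo "first readable position: $POS"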

(more…)

  1. I have yet to find a good explanation of why this happens or how to prevent it
  2. because transactions in the binary log can be of any size, they can end and start at any point

Script Day: output the tail of a log based on time

As system administrators we often want to list the last few lines from a log file in order to track problems and see system reports. The UNIX command tail is very useful for that purpose and lets you display an arbitrary number of lines from the bottom of any file.

But often this is not really what you want – an administrator might want to see what happened in the last X minutes, and the common practice is to run tail with a guessed number of lines, see if you get what you want, and if it’s not enough, increase the number and try again.

Here’s another approach that works well if the log file you want to trace has time stamps for its lines (more…)
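
One way to do that – assuming sortable ISO-style “YYYY-MM-DD HH:MM:SS” timestamps at the start of each line and GNU date; the log path and time window are placeholders, and this is not necessarily the exact approach from the full post:

# print every line logged in the last 10 minutes
CUTOFF=$(date -d '-10 minutes' '+%Y-%m-%d %H:%M:%S')
awk -v cutoff="$CUTOFF" 'substr($0, 1, length(cutoff)) >= cutoff' /var/log/myapp.log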