Archiv der Kategorie: Linux

System Backup Script with tar

While BackupPC with its web interface is certainly a nice thing to have, it poses a few problems if you want to use it to back up your system. Bare metal restores are tricky, because files are stored in a proprietary format: you first need to set up BackupPC before you can restore your system, which is not trivial.
I looked into another approach that would let me create a fully automated backup of my system and allow for simple bare metal restores. I ended up using tar, as it is a proven tool and is supported by even the most basic live distro you'll find.
I wrote a small shell script that gives me a few options:

  • back up the master boot record
  • exclude certain directories from the backup
  • keep a defined number of backup versions

My entire system backed up like this ends up in a tarball of about 1.8GB.

#!/bin/bash
# backup script

# check if you are root
if [ "$(whoami)" != "root" ]; then
    echo "Must be root to run $0"
    exit 1
fi

# base dir to store backup tarball
DIRECTORY="/path/to/backup"
DATE=$(date +%Y%m%d)

# directories to exclude
# exclude dynamically created system directories such as /dev or /sys and
# any other dirs you don't want to include
# also exclude the backup dir itself
EXCLUDES=(/dev /home /lost+found /media /mnt /proc /sys $DIRECTORY)

#
# start of backup
#

# copy mbr to file
echo "copying master boot record to /root/mbr.bin..."

dd if=/dev/sda of=/root/mbr.bin bs=512 count=1

len=${#EXCLUDES[*]} # number of elements in EXCLUDES

echo "Backup will exclude the following $len directories:"

i=0
while [ $i -lt $len ]; do
    echo "$i: ${EXCLUDES[$i]}"
    let i++
done

# prepend --exclude option to every directory
for EXCLUDE in ${EXCLUDES[@]}; do
    EXCLUDELIST="$EXCLUDELIST --exclude=$EXCLUDE"
done

# check if backup disk is available
if [ -d "$DIRECTORY" ]; then

    echo "Starting backup with tar cvpzf $DIRECTORY/${DATE}_backup.tgz $EXCLUDELIST /"
    # start actual backup process
    tar cvpzf $DIRECTORY/${DATE}_backup.tgz $EXCLUDELIST /

    echo "changing permissions of backup to root-only access..."
    chmod 700 $DIRECTORY/${DATE}_backup.tgz

    echo "${DATE}_backup.tgz has been successfully created."
    echo "Size of backup is $(ls -lh $DIRECTORY | grep ${DATE}_backup.tgz | awk '{print $5}')"

    # deleting old versions
    BACKUP_VERSIONS=($(ls $DIRECTORY | grep _backup.tgz))

    len=${#BACKUP_VERSIONS[*]}
    i=0

    # if more than 4 versions exist
    if [ $len -gt 4 ]; then

        # calculate number of versions to remove
        rmno=$(($len - 4))

        # delete the oldest versions at the beginning of the (sorted) list
        while [ $i -lt $rmno ]; do
            echo "removing ${BACKUP_VERSIONS[$i]} ..."
            rm $DIRECTORY/${BACKUP_VERSIONS[$i]}
            let i++
        done
    fi
else
    echo "cannot start backup, backup disk is not available"
    exit 1
fi

If you want to restore the system you can do so simply by untarring the archive to another disk:

tar xvpfz backup.tgz -C /path/to/disk_mount/

If you are restoring to another disk with a different partition scheme it may be necessary to recreate some entries in /etc/fstab as well as in grub.conf regarding the root device for grub and the kernel.
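For example, if the restored system now lives on the second disk, the relevant entries might look roughly like this (all device names below are made-up examples):

```
# /etc/fstab -- root filesystem on the new device
/dev/sdb1  /  ext3  errors=remount-ro  0  1

# grub.conf -- root device for grub and the matching kernel parameter
root (hd1,0)
kernel /boot/vmlinuz root=/dev/sdb1 ro
```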
Restoring the master boot record is equally simple:

dd if=/root/mbr.bin of=/dev/target_disk count=1 bs=446

We use bs=446 so we do not overwrite the partition table, which may differ on the new disk: the first 446 bytes of the MBR hold the boot code, the rest contain the partition table and boot signature. If the partition scheme is identical we can also restore the full 512 bytes with bs=512.
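The split can be illustrated on throwaway files instead of a real disk (the paths under /tmp are just examples):

```shell
# A 512-byte MBR holds: bytes 0-445 boot code, 446-509 partition table,
# 510-511 boot signature. Build a fake one from zeros:
dd if=/dev/zero of=/tmp/mbr.bin bs=512 count=1 2>/dev/null
# "restore" only the boot code portion, leaving a partition table alone:
dd if=/tmp/mbr.bin of=/tmp/disk.img bs=446 count=1 2>/dev/null
stat -c %s /tmp/disk.img   # size of what was written: 446 bytes
```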

HowTo use LVM to create filesystem snapshots for backups

If you have large amounts of data to back up, it can be cumbersome to use normal backup tools such as tar: if data changes during the backup you might end up with an unusable archive.

If I want to backup my videos, recordings, music etc. of my MythTV box I cannot afford to take the filesystem offline for an entire night. Hence this solution.

I use XFS as my filesystem to store my stuff. It is a robust and modern filesystem designed for large files, precisely what I need for my recordings.

You need your volumes to be part of an LVM VG for this to work. You can find more information on LVM on IBM’s website: http://www.ibm.com/developerworks/linux/library/l-lvm/

The process to create a backup from a snapshot is quite easy:

# xfs_freeze -f /mountpoint/of/backup-fs

This will freeze the filesystem you want to back up, essentially stalling all IO. For example, if I have an LVM volume /dev/myth/production mounted as /storage/mythtv, I run

# xfs_freeze -f /storage/mythtv

Afterwards we can create a new logical volume with the same contents using LVM:

# lvcreate -l 500 -s -n snap /dev/myth/production

The -s argument tells LVM that I want a snapshot, which is named "snap" via the -n option. The -l 500 reserves 500 extents for the changes that accumulate on the origin volume while the snapshot exists.

Now I can safely mount the snapshot. I need the nouuid option because otherwise XFS would refuse to mount what it considers the same filesystem twice. Which, basically, it is.

# mount -o nouuid,ro /dev/mapper/myth-snap /var/myth-snap

Now it's time to unfreeze the "real" filesystem and resume IO:

# xfs_freeze -u /storage/mythtv

At this point you can start the actual backup process using your favourite tool, be it tar, rsync, etc.

Once the backup run is complete you can safely unmount and destroy the snapshot.

# umount /var/myth-snap

# lvremove -f /dev/myth/snap

The beauty of this is that it can easily be put in a little script that runs every night. For a lot of applications it is also better to rely on a backup than on RAID, which only keeps availability up but in no way protects your data. Add a hotswap drive bay to your case and a couple of disk sleds, and you have a fast, cheap and reliable backup solution up into multi-terabyte territory…
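Such a nightly script might look like this. It is a sketch under the volume and mountpoint names used above; the rsync destination /backup/mythtv is an assumption, adjust everything to your setup. The DRY_RUN guard only prints each command so you can review the sequence before running it for real:

```shell
#!/bin/bash
# Nightly LVM-snapshot backup sketch. Volume names follow the example
# above; the rsync destination /backup/mythtv is a placeholder.
set -e
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually execute

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run xfs_freeze -f /storage/mythtv                    # stall all IO
run lvcreate -l 500 -s -n snap /dev/myth/production  # take the snapshot
run mount -o nouuid,ro /dev/mapper/myth-snap /var/myth-snap
run xfs_freeze -u /storage/mythtv                    # resume IO
run rsync -a /var/myth-snap/ /backup/mythtv/         # the actual backup
run umount /var/myth-snap
run lvremove -f /dev/myth/snap                       # drop the snapshot
```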

Label removable disks under Linux

If you are using removable disks such as USB pendrives, USB harddisks or Firewire disks and would like to have them mounted under /media/drivename (which happens automatically with Ubuntu), you might want to label them so they always show up with the same name instead of disk, disk-1 and so forth.

If you label a drive it will always mount using the same name, which facilitates using them with scripts and the like. I use an external Firewire disk for my system backup using BackupPC.

I use an XFS filesystem on that disk. To label it you use the following commands (requires the xfsprogs package):

# xfs_admin -l <DEVICE>

This will list the current label. To label the disk write

# xfs_admin -L <LABEL> <DEVICE>

You can verify your operation with

# xfs_admin -l <DEVICE>

To label disks containing ext2, ext3 or ext4 you use the e2label program; for FAT16/32 volumes use the mtools package.
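Once labeled, the disk can also be referenced by label rather than by device node, for instance in /etc/fstab (mountpoint and options here are made-up examples):

```
# mount by label, independent of which /dev/sdX the disk shows up as
LABEL=backupdisk  /media/backupdisk  xfs  defaults,noauto  0  0
```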

Digital Devices Cine CT V6 and MythTV – DVB-C with UPC Cablecom

Since my first MythTV system I relied on analog PAL TV. When digital TV was introduced by UPC Cablecom in Switzerland it was mandatory to use their cable box, as all channels were encrypted. Of course it would have been possible to use an IR blaster to control that unit and digitize the output of the set-top box. It took a few years, but now FTA digital (and partly HD) TV is available in Switzerland on UPC Cablecom's network.

As before I wanted a dual tuner card that would allow me to record one show while watching another. After a bit of research I found that only the Digital Devices Cine CT V6 card would fit the bill. It is a PCIe x1 card that is downright tiny, about the size of a business card. Compared to my analog Hauppauge PVR-500 this is a nice development. The Cine CT V6 is a hybrid dual tuner card that takes a DVB-C or DVB-T signal on either of its tuners and records straight into H.264 or MPEG-2. Yay!

The card can be upgraded with a DuoFlex CT card that adds another two tuners, all running off the same PCIe slot. And if that is not enough, the system can be upgraded further with an Octopus bridge, bringing up to 8 tuners to one PCIe port.

There is even the possibility to include CI modules. It almost seems like the card we’ve all been waiting for.

And best of all, Linux driver support is there too.

I am currently running kernel 3.2.0. Driver support should be built into kernels from version 3.6 on, so I guess the whole process will be easier once 14.04 is released. For the time being, installation on my Mythbuntu 12.04 system was straightforward.

The installation consists of two parts:

  • driver
  • firmware
[Edit]

As of today this will only work up to Ubuntu 14.04

Instead of building your own drivers, you can also just add the following to /etc/apt/sources.list on your Mythbuntu 12.04 system:

# https://launchpad.net/~yavdr/+archive/main
# linux-media-dkms for digital devices cine ct v6 (ddbridge)
deb http://ppa.launchpad.net/yavdr/main/ubuntu precise main
deb-src http://ppa.launchpad.net/yavdr/main/ubuntu precise main

Afterwards it is a simple run of

# apt-get update
# apt-get install linux-media-dkms

and you should be good to go!

The package linux-media-dkms takes care of everything. After rebooting or manually adding the needed kernel modules, you should see the adapters as

/dev/dvb/adapter0/frontend0
/dev/dvb/adapter1/frontend0

frontend0 is used for DVB-C whereas frontend1 would be used for DVB-T.  Switching between the two is done via software.
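A quick sanity check that the driver actually created the device nodes can be scripted; this sketch just enumerates whatever is present (the DVB_ROOT override is an addition of mine so the logic can be tried without hardware):

```shell
#!/bin/bash
# Enumerate DVB frontend device nodes. DVB_ROOT defaults to /dev/dvb but
# can be pointed at a scratch directory for a dry test without hardware.
DVB_ROOT=${DVB_ROOT:-/dev/dvb}
shopt -s nullglob
frontends=("$DVB_ROOT"/adapter*/frontend*)
echo "found ${#frontends[@]} DVB frontend(s)"
for fe in "${frontends[@]}"; do
    echo "  $fe"
done
```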

Now you simply modify your entries in mythbackend-setup for the new cards and input connections and make a full channel scan in MythTV.

Mac OS X TimeMachine Backups on a Linux Server

With Mac OS X 10.5 Apple introduced its first backup solution for the rest of us. Most other programs I've used were either too complicated, too unreliable, a combination of the two, or just plain too expensive. TimeMachine aims to solve all of that.

In the past I've used anything from plain copies via the Finder to Retrospect, Archiware (a great tool if you have a few machines and a server), rsync, etc. For my dad, for example, none of those were quite what was needed. The problem with most of these solutions is that when you use a laptop you are not always connected to the backup system, so schedules fail to run. As soon as manual intervention is needed, backups usually don't happen.

TimeMachine's approach of using external disks seems logical. As I have a Linux server running at home and my MacBook Pro is always connected to the network, be it via AirPort or Ethernet, it makes sense to use the server for backups.

After a bit of googling I found a post on http://www.kremalicious.com/2008/06/ubuntu-as-mac-file-server-and-time-m… outlining the process. This is what I had to do:

Prerequisites: I run an Ubuntu based system in version 8.10. Most of these things should apply to other Linux machines as well though.

The Ubuntu/Debian netatalk package comes without the SSL support Mac OS X needs, because OpenSSL is not license-compatible with the GPL. Hence you either need to compile your own package or find a prebuilt one.

Rolling your own involves downloading the source packages

# apt-get build-dep netatalk
# apt-get install cracklib2-dev fakeroot libssl-dev
# apt-get source netatalk
# cd /usr/src/netatalk-2*
# DEB_BUILD_OPTIONS=ssl dpkg-buildpackage -rfakeroot

Install it with

# dpkg -i netatalk*.deb

In order to prevent Ubuntu from automatically upgrading your newly installed package you need to put it on hold:

# echo "netatalk hold" | sudo dpkg --set-selections

All you need to do next is configure /etc/default/netatalk and turn off everything but AFPD_RUN=yes
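With everything but AFP turned off, /etc/default/netatalk ends up looking roughly like this (variable names as shipped by the netatalk 2.x package of that era; verify against your own file):

```
# /etc/default/netatalk -- run only the AFP daemon
ATALKD_RUN=no
PAPD_RUN=no
TIMELORD_RUN=no
A2BOOT_RUN=no
AFPD_RUN=yes
```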

Edit /etc/netatalk/afpd.conf so that the last line reads

- -transall -uamlist uams_randnum.so,uams_dhx.so -nosavepassword -advertise_ssh

and finally in /etc/netatalk/AppleVolumes.default you get to list the shares netatalk shall serve you. Edit the file according to your wishes. It could be something like this:

/srv/TimeMachine/sg/ "TimeMachine" options:usedots,upriv allow:myusername

That way only the user "myusername" can access the share, which is probably what you want. Also the share is nicely advertised as "TimeMachine".

Next in line is a Bonjour/Zeroconf daemon that will advertise the netatalk services on the network. In this case Avahi is used for that purpose.

a simple

# apt-get install avahi-daemon
# apt-get install libnss-mdns

should be all that is needed.

Edit the hosts line in /etc/nsswitch.conf to read

hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 mdns

Now we need to tell Avahi that it needs to broadcast the availability of AFP across the network so that the server will automatically show up on the MacBook Pro.

Open (or create) /etc/avahi/services/afpd.service and give it the following content:

<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
  <service>
    <type>_device-info._tcp</type>
    <port>0</port>
    <txt-record>model=Xserve</txt-record>
  </service>
</service-group>

Now restart Avahi (e.g. /etc/init.d/avahi-daemon restart).

At this point the share pops up under Network on the Mac.

Last but not least TimeMachine needs to be configured to use that share as its storage pool. Nicely enough Apple hides all network volumes except the ones from Mac OS X Server and TimeCapsule in the TimeMachine control panel. Heck, not even an AirPort base station with an attached USB disk can be used. Sometimes I really don’t understand Apple…

To get the Mac to see the network volume as a TimeMachine storage pool open a terminal and write

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

TimeMachine stores its backups in a sparsebundle disk image, a special disk image that only uses as much space as it actually needs and grows over time. On my box this wasn't created automatically, but you can easily create one with Disk Utility on the Mac and copy it over to the Linux box. The filename of the image must be computername_MACADDRESS-OF-ETH0-WITHOUT-COLONS.sparsebundle.
computername is not your actual computer name as seen in the Sharing panel of System Preferences but literally just "computername".
That should do it.
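If you prefer the command line over Disk Utility, the image can also be created with hdiutil on the Mac. This is only a sketch: the size, volume name and the MAC address in the filename are example values, and the DRY_RUN guard just prints the command so nothing is touched until you flip it:

```shell
#!/bin/bash
# Create the TimeMachine sparsebundle by hand (macOS). All values below
# are examples; substitute your own size and en0 MAC address.
DRY_RUN=${DRY_RUN:-1}
CMD="hdiutil create -size 200g -type SPARSEBUNDLE -fs HFS+J -volname TimeMachine computername_001122334455.sparsebundle"
if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $CMD"
else
    $CMD
fi
```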