
CrashPlan has an auto-update feature built into its software, which is a good thing, except that CrashPlan on a headless QNAP box will start failing whenever an auto-update runs. If the package maintainer updated the QNAP package when this happens it wouldn't be a problem, but I've yet to see that happen.

Here’s the fix.

You will notice an upgrade directory in your root CrashPlan directory; my default directory is /share/MD0_DATA/.qpkg/CrashPlan/upgrade. If your directory is anything like mine, you will have a bunch of numbered directories in there. These are all the automatic upgrades that CrashPlan attempted to perform and failed. I didn't even notice it was failing until my weekly email came from Code42 and I saw my backups had been failing for about a week because of this.

First, stop CrashPlan on your QNAP.

Find the last failed attempt (the highest-numbered directory).

cd into the last one listed.

In this directory you will find two jar files: c42_protolib.jar and com.backup42.desktop.jar.

(Make note of this directory; we will need it in a second.)

Now jump over to the main CrashPlan lib directory, which lives under the root CrashPlan directory noted above.

Copy the two jar files listed above into this lib directory, but first make backups of the existing files.

Once these are copied, restart CrashPlan.

Now re-open your CrashPlan client that runs over the SSH tunnel and voilà, you should be good to go. It should be scanning for changed files to back up.

Once you have verified this is working again, remember to clean up the upgrade directory; you can remove all those numbered directories.
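The whole fix can be sketched as a small shell function. This is a hypothetical helper, not from the original post; the default path is the QNAP location mentioned above, so adjust it for your install, and stop CrashPlan before running it.

```shell
# Hypothetical helper sketching the steps above. CP home defaults to the
# QNAP path mentioned in this post -- adjust for your install.
apply_crashplan_upgrade() {
    cp_home="${1:-/share/MD0_DATA/.qpkg/CrashPlan}"
    # the highest-numbered upgrade directory is the latest failed attempt
    latest=$(ls -1 "$cp_home/upgrade" | sort -n | tail -n 1)
    for jar in c42_protolib.jar com.backup42.desktop.jar; do
        cp "$cp_home/lib/$jar" "$cp_home/lib/$jar.bak"         # back up the old jar
        cp "$cp_home/upgrade/$latest/$jar" "$cp_home/lib/$jar" # drop in the new one
    done
}
```

Restart CrashPlan afterwards, then verify backups resume before deleting the numbered directories.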




CrashPlan is designed with the assumption that the CrashPlan app and the CrashPlan service are running on the same machine. Although running the CrashPlan service on a machine without a graphical environment (i.e. running a headless client) is an unsupported feature, this article provides a process for doing so that some users have found useful.

Unsupported Process
The information presented here is intended to offer information to advanced users. However, Code42 does not design or test products for the use described here. This information is presented because of user requests.
Our Customer Champions cannot assist you with unsupported processes, so you assume all risk of unintended behavior. You may want to search our support forum for information from other users.


A headless CrashPlan app means that the CrashPlan service is running on a machine without a graphical environment (headless mode), like a Linux or Solaris server. Running a headless CrashPlan app allows you to remotely administer the CrashPlan service that is running as a backup destination.
CrashPlan consists of two components:

  1. CrashPlan service: This is always running from the moment you install CrashPlan and continues to run even if you log out. It is responsible for the actual backup functions.
  2. The CrashPlan app:  This runs as an application that you can launch from a user’s desktop. This is what most people mean when they refer to “CrashPlan.” Headless means you do not open the CrashPlan app.

Before You Begin

  • Have a good understanding of networking and TCP/IP
  • Feel comfortable using a command line terminal
  • Be familiar with SSH


  • Installing CrashPlan directly on a NAS device is unsupported. That means that our Customer Champions are unable to assist if you encounter any issues with this configuration.
  • Most NAS hardware isn’t able to handle high-I/O operations like compression, encryption, and de-duplication, which are essential components of CrashPlan. We strongly recommend directly attached storage for best performance.
  • CrashPlan normally tries to use more CPU when it detects that a user is “away” or idle. Headless clients are almost always in this state, so CrashPlan will try to use a larger percentage of available CPU. If you observe high load when running a hosted client, consider lowering the allowed CPU percentage in the CrashPlan app.
  • If the CrashPlan app you use to run the GUI is configured only to connect to the headless client, then you must upgrade manually. It will not upgrade on its own if it does not connect to a local CrashPlan service.
  • When you launch the CrashPlan app, it connects to the CrashPlan service on port 4243, which is bound to the loopback device (localhost). This is the key point for being able to connect to the service remotely: because the port is bound to the loopback device, you cannot connect to it directly via a public network interface.
  • CrashPlan does support backing up NAS where the share-point is mounted directly on the computer itself for Mac, Linux, and Solaris (Windows is unsupported).

Using SSH

Use an SSH tunnel to connect the CrashPlan app on one machine to the CrashPlan service on a computer that is text-only (headless).

Configuring a Headless Client


  1. Install and start the CrashPlan engine on the text-only server.
  2. Install the CrashPlan app on the desktop computer.
    Mac, Windows, or Linux: it doesn't matter which platform you use.
  3. Close the CrashPlan app if it's running.
  4. On the desktop computer, navigate to the CrashPlan\conf directory.
  5. Open the configuration file in a text editor (see the file locations listed below).
  6. Edit the servicePort line so that it reads servicePort=4200.
  7. Open a terminal window on the desktop computer.
  8. Using SSH, forward local port 4200 to port 4243 on the headless server.
  9. On the desktop computer, open the CrashPlan app.
    Your CrashPlan app is now connected to the CrashPlan service on the headless server, and you can configure CrashPlan there.
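The SSH port-forward step above can be sketched as a tiny helper that prints the command to run; the user@host value is a placeholder for your own headless server.

```shell
# Build the ssh port-forward command for the CrashPlan tunnel.
# -N keeps the tunnel open without running a remote shell;
# local port 4200 is forwarded to the service's UI port 4243.
crashplan_tunnel_cmd() {
    echo "ssh -N -L 4200:localhost:4243 $1"
}
```

For example, `crashplan_tunnel_cmd admin@nas` prints the invocation; run it and leave it open while you use the CrashPlan app locally.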

Switch Your CrashPlan App Back

When you’re done using the CrashPlan app on the text-only computer, switch your CrashPlan app back.

  1. On the desktop computer, open the configuration file in a text editor (see the file locations listed below).
  2. Comment out the servicePort line.
  3. Save your changes.

Using PuTTY

PuTTY is a free Windows SSH client that you can use to do the port forwarding necessary to control a remote CrashPlan client.

Before You Begin

  • Be sure CrashPlan is running on your remote machine.
  • Verify on the headless machine (with netstat) that the service is listening on port 4242 on all addresses and on port 4243 on the loopback address (this is the UI service port).

The netstat output should show both ports in the LISTEN state.

We want to use SSH to tunnel a local Windows port (4200) to the remote host’s service port (4243).


  1. Enter the IP for SSH as you normally would, but don't open the connection yet.
  2. In the Connection > SSH > Tunnels section, enter 4200 as the source port and localhost:4243 as the destination.
  3. Click the Add button.
    If you don't click the Add button, the CrashPlan connection will fail.
  4. Now open the session and log in.
  5. You can use telnet to confirm the connection:

A successful connection displays "Connected to HOST_IP" followed by a long, encrypted-looking string.

Once you have confirmed the connection, stop the local CrashPlan app. Make sure the servicePort is 4200 in the configuration file under conf/, then restart the CrashPlan app.


  • Linux (if installed as root): /usr/local/crashplan/conf/
  • Mac: /Applications/
  • Solaris (if installed as root): /opt/sfw/crashplan/conf/
  • Windows: C:\Program Files\CrashPlan\conf\

Headless Mode FAQ

How do I put the CrashPlan app into Headless Mode?

You do not have to do anything to run in ‘headless mode’. Headless just means you do not run the CrashPlan app UI. The CrashPlan service is running once you install the CrashPlan app.

Grabbed from Novell's site


Disaster recovery of SMT

This document (7004986) is provided subject to the disclaimer at the end of this document.


Novell Subscription Management Tool 1.0
Novell Subscription Management Tool 1.1
SUSE Linux Enterprise Server 10 Service Pack 2
SUSE Linux Enterprise Server 10 Service Pack 3
SUSE Linux Enterprise Server 11


The first section of this document explains what data to back up proactively in order to provide for a smooth recovery of an SMT server in case it should become unusable for some reason.
The second section describes how to recover by utilizing the information gathered in the first section.


Preparation / backup of relevant information
As a general rule, ensure you have a usable backup of the complete system.
Beyond that, it is recommended to create easily accessible copies of the data mentioned in the following steps, all of which should be performed while authenticated as the root user.

  • To back up the server certificate and be able to create a new server certificate when it expires, the easiest way is to create a copy of /var/lib/CAM/*
  • Record the IP address, DNS and routing configuration of the server in a file.
  • Create a copy of the following :
    • If special configuration of apache2 has been done, make a copy of the modified files (e.g. /etc/apache2/default-server.conf)
    • /etc/smt.conf
    • /etc/smt.d/*
    • /etc/zypp/credentials.d/NCCcredentials
  • Regularly create a backup of the SMT database.
    Since the database is constantly updated with changes, it is recommended to create a script that backs it up on a schedule.

    • Create a script in /usr/sbin/ that includes the following commands:
    • Set appropriate permissions on the script, also preventing normal users from reading it, since it contains the password of the MySQL root user:
    • Include the job in the SMT framework by appending a line like the following to the cron table in /etc/smt.d/:
      22 2 * * * root /usr/sbin/
    • This will execute the script daily at 02:22.
    • Restart the cron daemon to pick up the new job:
  • Depending on factors like :
    • the amount of repository data mirrored
    • backup capacity and costs
    • internet bandwidth constraints
    • requirements for SMT service restoration

    it might or might not be feasible to back up the repository structure (repo/), which is located in the directory specified in the MirrorTo parameter in /etc/smt.conf.

  • By default this means /srv/www/htdocs/repo/ and everything below it.
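The database-backup step above might look like the following minimal sketch. The function name and backup directory are assumptions (the script's real name is elided in the TID), and the MySQL root password is assumed to come from a credentials file such as ~/.my.cnf rather than being embedded here; either way, chmod 700 the script as the TID advises.

```shell
# Hypothetical body for the /usr/sbin/ backup script (real name elided above).
# Assumes MySQL root credentials are supplied via ~/.my.cnf or similar.
smt_db_backup() {
    backup_dir="${1:-/var/lib/smt-db-backup}"   # assumed location
    mkdir -p "$backup_dir"
    # one dated dump per day; prune old dumps separately if space matters
    mysqldump -u root smt > "$backup_dir/smt-$(date +%Y%m%d).sql"
}
```

The cron line from the document (22 2 * * *) would then invoke a script wrapping this function.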

Disaster recovery :

In case the SMT server needs to be reinstalled from scratch, the information saved during the preparation can be used to get the server back to operational state as quickly as possible.
Always recover to the same version of SMT that the backups were created on!
The execution order of the individual steps is key; perform them sequentially as listed below.

  • Install SLES 10 (for SMT 1.0) or SLES 11 (for SMT 1.1) and configure the same hostname, IP address, etc. as the server had before.
  • Restore the backup copy of the /etc/zypp/credentials.d/NCCcredentials file.
  • Register it against an update source and apply the current updates (reboot).
  • Restore the Certificate Authority and server certificate from the removable device :
    • Overwrite /var/lib/CAM/ directory structure with the backup copy of the same directory – e.g. :
    • Verify the server certificate has been restored correctly and export the server certificate :
      • Start YaST and open CA Management under Security and Users
      • Enter the CA
      • Verify that the correct server certificate is present and valid
      • Select it, and click on Export | Export as common server certificate
        Enter the certificate password, click OK and a confirmation that “Certificate has been written as common server certificate” should appear.
      • Exit the YaST module.
      • With the next restart of SMT the Certificate Authority is copied to the apache document root directory
  • Restore the repository structure
  • Since the uid of the smt user might be different, the permissions to
    /etc/zypp/credentials.d/NCCcredentials must be set with
  • Verify that the owner/group of /srv/www/htdocs/repo/ is smt.www
    If not then set it recursively with
  • Install SMT as per the documentation :

    and complete the setup wizard.

  • Apply the current updates for SMT
  • Stop the SMT services :

  • Restore and verify the backup of the SMT database :
    • After entering the password it will silently load the backup file and return to the prompt.

    • Enter the MySQL monitor and verify that the tables in the smt database are present :

      Enter the password and when the monitor is loaded execute the command :

      It should return the following output :

| Tables_in_smt            |
| Catalogs                 |
| ClientSubscriptions      |
| Clients                  |
| MachineData              |
| ProductCatalogs          |
| Products                 |
| Registration             |
| Subscriptions            |
| Targets                  |
| migration_schema_log     |
| migration_schema_version |
11 rows in set (0.00 sec)
The above output is from SMT 1.0.
On SMT 1.1, the following four additional tables should exist :
Filters, JobQueue, Patchstatus and RepositoryContentData
    • Exit the monitor :
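The restore command itself is elided above; a hedged sketch, assuming MySQL root access (the behavior matches the "silently load the backup file and return to the prompt" description):

```shell
# Load the dump back into the smt database; with -p, mysql prompts for the
# root password, loads the file silently, and returns to the shell prompt.
restore_smt_db() {
    mysql -u root -p smt < "$1"
}
# Interactive verification would then be:
#   mysql -u root -p smt
#   mysql> show tables;
#   mysql> quit
```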

  • Adapt the configuration files to the customizations in the backed up ones and start SMT with

  • Synchronize data between the local database and the Novell Customer Center with

  • Verify the catalogs/repositories enabled for mirroring with

  • List the registered clients with

  • SMT 1.1 only : View the job queue with

  • Kick off a mirror job to fix up the database :
    • SMT 1.0 :

    • SMT 1.1 :

Once the mirror job has completed the SMT server should be fully operational.
To verify client functionality, perform a service refresh on a client and check that the "Last Contact" time-stamp for its entry in the list of registered clients on the server gets updated.
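Several commands in the recovery steps above are elided. The following is a hedged sketch of the permission fixes, plus the command names as recalled from SMT 1.x; the user/group names and paths come from the document, but the exact invocations are assumptions, so confirm them against your SMT documentation before use.

```shell
# Restore ownership/permissions after a reinstall. chown may fail on a box
# without the smt user, hence the || true guards in this sketch.
fix_smt_perms() {
    creds="$1"   # e.g. /etc/zypp/credentials.d/NCCcredentials
    repo="$2"    # e.g. /srv/www/htdocs/repo/
    chown smt "$creds" 2>/dev/null || true
    chmod 600 "$creds"                             # keep the NCC credentials private
    chown -R smt.www "$repo" 2>/dev/null || true   # repo served by apache as smt.www
}

# Command names for the remaining steps (assumptions, not from the TID):
#   rcsmt stop / rcsmt start    # stop / start the SMT services
#   smt-ncc-sync                # sync local database with Novell Customer Center
#   smt-catalogs                # list catalogs/repositories enabled for mirroring
#   smt-list-registrations      # list the registered clients
#   smt-mirror                  # kick off a mirror job
```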

Was in a little jam today and we needed to take backups of a couple of systems. The problem was the local drives didn't have enough space for a full disk image, and the system was also failing to mount a file system over NFS due to some firewall issues. I had my mind set on taking down the hosts and using either partimage or Clonezilla, which I know for sure would have worked. I'm sure you don't have to use root and could use an unprivileged user with sudo. I've used this trick with tar a bunch of times and was quite amazed when it worked with dd too.

How to dd an image from one host and pipe it over ssh to a remote host:

$ sudo dd if=/dev/cciss/c0d0 | ssh root@backupserver 'dd of=/some/backup/path/disk.img'

How to dd an image file directly from within a tar.gz archive:

$ sudo tar xzOf filename.tar.gz | dd of=/dev/sdb bs=1M

(The O flag extracts files to standard out.)

How to create a tar.gz from a directory and pipe over ssh:
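The command for this last one is missing from the post; a sketch with placeholder host and paths (on the remote side, 'cat >' simply writes the streamed archive to disk):

```shell
# Stream a gzipped tar of a directory over ssh into a file on the backup server.
# The remote host and destination path are placeholders.
make_remote_targz() {
    src_dir="$1"; remote="$2"; dest="$3"
    tar czf - "$src_dir" | ssh "$remote" "cat > $dest"
}
# e.g.: make_remote_targz /etc user@backupserver /backups/etc.tar.gz
```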