Speed up resizes - Part 2

In the first part of this how-to we looked at how resize time is extended when a slice hosts many small files or many files that are being updated during the resize.

In this second part we look at the effect on resize time of large, constantly-updated files and how to mitigate it.


Slices with huge, constantly-updated files

Very large files that are being updated during a resize pose similar problems for resize downtime as smaller, constantly-updated files — but on steroids. If your slice hosts files that are frequently written to and which are larger than 10GB, this section is for you.

These large, constantly-updated files will need to be completely re-copied during the reboot phase to capture any updates made since the resize began. This extends 'resize downtime' considerably due to the size of that second copy, which must complete before the resized slice can be brought back online.

MySQL database servers that use the InnoDB data-file format write data to a single file that can — and will — grow very large indeed. Similarly, InnoDB's own log files are written to constantly and can be configured to be very large.

An update to a large InnoDB MySQL data or log file (/var/lib/mysql/ibdata1, etc) forces the resize process to re-copy the entire file in the final copy phase. If these files are large then the re-copy can take some time, which keeps the database offline.
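
A quick way to see whether this applies to your slice is to check the size of those files directly. The paths below assume the default MySQL data directory of /var/lib/mysql; adjust them if your data directory lives somewhere else:

# InnoDB shared data file and log files (default locations assumed)
ls -lh /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*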

Another source of large files is application logs, especially the logs produced by mail servers and some web servers. Apache can easily produce log files of 16GB or more, so it's not safe to assume that Apache's default logging will help you avoid this large-file issue.
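
To see whether your web server logs are already in that territory, check their sizes. Debian and Ubuntu keep Apache's logs in /var/log/apache2, while Red Hat-style distributions use /var/log/httpd; the commands below simply try both:

du -sh /var/log/apache2/*.log 2>/dev/null
du -sh /var/log/httpd/*log 2>/dev/null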

MySQL transaction logs can also grow large if you have turned on transaction logging. It's rare that people do, and it's even rarer that they turn on transaction logging without running out of disk space shortly afterwards! Still, it's wise to keep an eye out for this possibility.
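
If you are not sure whether binary (transaction) logging is turned on, ask MySQL directly and then look at the log files themselves. The 'mysql-bin' file name prefix is only the common default; your logs may be named after the host instead:

mysql -e "SHOW VARIABLES LIKE 'log_bin';"
ls -lh /var/lib/mysql/mysql-bin.* 2>/dev/null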

How to mitigate

As described above, pruning databases of cruft may help reduce the total copy time. Archive and delete old or obsolete databases before resizing.
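
If a database really is no longer needed, a minimal sketch of archiving and then dropping it might look like this. The database name 'old_database' is just a placeholder, and the commands assume your MySQL login credentials are already available to the shell user running them:

# Dump and compress the database, then check the archive before dropping it
mysqldump old_database | gzip > /root/old_database.sql.gz
mysql -e "DROP DATABASE old_database;"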

Again, turning off database writes, if possible, will reduce 'resize downtime'. Turning off logging may also help in the case of InnoDB databases.
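
One gentle way to stop writes without shutting MySQL down completely is to put the server into read-only mode for the duration of the resize. This is only a sketch: it assumes a root login that works from the shell, and note that accounts with the SUPER privilege can still write while read_only is on:

mysql -e "SET GLOBAL read_only = ON;"
# ...run the resize...
mysql -e "SET GLOBAL read_only = OFF;"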

If you have turned on MySQL transaction logging and the log files have grown large, it's worth turning logging off, then archiving and deleting the log files on the slice before starting the resize.
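
Assuming the 'transaction log' on your slice is MySQL's binary log (the log-bin setting), you can list the logs and then ask MySQL to purge everything except the log currently in use, having archived copies elsewhere first if you want to keep them:

mysql -e "SHOW BINARY LOGS;"
# Removes all binary logs older than the one currently being written
mysql -e "PURGE BINARY LOGS BEFORE NOW();"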

On mail servers, check the size of /var/log/mail.log or /var/log/maillog before resizing. Consider turning the mail server off before resizing (you have a secondary 'backup' mail server to pick up the load, right?).
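
A quick check and a blunt but effective mitigation, assuming your slice runs Postfix (substitute your own mail server's init script if it does not):

ls -lh /var/log/mail.log /var/log/maillog 2>/dev/null
/etc/init.d/postfix stop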

Similarly, check how Apache is logging. If it is logging all requests to one file, check the size of that file and consider archiving and deleting it or turning Apache off prior to starting the resize. The same advice applies to any other application that you know is writing to a large log file.
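
For example, on a Debian or Ubuntu slice you might archive and then truncate a large access log, or simply stop Apache until the resize is done. Red Hat-style systems use /var/log/httpd and the 'httpd' init script instead:

# Option 1: archive the log somewhere safe, then truncate it in place
gzip -c /var/log/apache2/access.log > /root/access.log.gz
> /var/log/apache2/access.log

# Option 2: stop Apache entirely until the resize has finished
/etc/init.d/apache2 stop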

For the above applications and any others, review your logrotate policy (if you have one) to make sure it is keeping your log file sizes in check. This will save you downtime during resizes and make life with any Linux server easier.
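
You can ask logrotate what it would do without actually rotating anything; the -d flag runs it in debug mode and makes no changes:

logrotate -d /etc/logrotate.conf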

Pack your toolbag

Of course, it is difficult to track what files have been created after a slice has been set up. That is especially true for applications that create session files. It pays to find — and cull — these large collections of little files.

You can identify the ten largest directories and files quite easily by issuing this command as root:

du -a / | sort -n -r | head -n 10

Change that final '10' to any other number to alter how many files and directories the search returns. This command is a good middle-ground tool: it identifies both large directories full of small files and individual large files.

If you only want to look for large files, try this large file finder (as root):

find ~/ -mount -type f -ls | sort -rnk7 | head -n 30 | awk '{printf "%10d MB\t%s\n", ($7/1024)/1024, $NF}'

And if you want to find large directories only, try this large directory finder (as root):

du -x --max-depth=4 ~/ | sort -rn | head -n 30 | awk '{printf "%10d MB\t%s\n", $1/1024, $2}'

Technical details

If your slice does not match any of the common types we have examined above, you may still be able to estimate its resize time by considering how your applications behave in light of how the resize process works.

The first stage of a resize is a live copy of the slice's entire file system. All applications are left running during this stage.

Here's where predicting resize time runs into its first uncertainty. Without detailed knowledge of your usage of your slice's file-system, the resize function cannot accurately predict how long the 'file copying' stage of a resize will take to complete.

This unpredictability is particularly true for the final directory on Linux file systems: the /var/ directory. It's called 'var' because the data it holds is 'variable' in size and likely to change while the slice's applications are running. The resize process copies this /var/ directory last, which is why the resize progress counter sometimes sits at '98%' or '99% complete' for what may feel like an uncomfortably long time.

The second uncertainty is that the final phase of a resize includes a reboot (downtime) component during which any files updated since the resize preparation began are copied again. The length of the downtime depends on the size and number of updated files. Again, the resize process cannot tell in advance how many updates applications like MySQL are writing to data files so it cannot predict how long this final 'update' copy will take.

Summary

If you know how your applications are using disk space and writing to files you may be able to judge when a resize will take longer than you would like and make preparations accordingly. At the very least, you should be able to use your new-found resize knowledge to better schedule resizes to fit your timing requirements.

Lee
