I am tarring and then compressing a bunch of files and directories on my Ubuntu Server VPS for a backup. It only has 1GB of RAM and 128MB of swap (I can't add more - OVH use OpenVZ as their virtualisation software), and every time tar runs it uses a ton of memory for its buffer, causing everything else to get swapped out - even when using nice -n 10.
Is there any way to force tar to use a small buffer and reduce its memory usage? I am worried that once the backup reaches a certain size, my server will go down because tar won't have enough memory for its buffer.
I am using bzip2 to compress, and I have already limited its memory usage with the -4 option.
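To make the question concrete, here is a sketch of what I mean by a small buffer (assuming GNU tar and bzip2 as shipped with Ubuntu; backup.tar.bz2 and the paths are just placeholders, not my real command):

# Stream tar straight into bzip2 so no intermediate archive.tar is written.
# -b sets tar's record buffer in 512-byte blocks (20, the default, is 10KiB),
# and bzip2 -4 keeps the compression block size at 400k.
nice -n 19 tar -b 20 -cf - /home /var/log | bzip2 -4 > backup.tar.bz2

Prefixing tar with ionice -c3 could also drop its I/O priority, but that assumes the util-linux ionice is available.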
Edit: Here is what htop looks like when I have had tar running for a while:

Here is a link to the full gif
Edit 2: Here is the tar command I am using:
nice -n 20 tar --exclude "*node_modules*" --exclude "*.git/*" --exclude "/srv/www-mail/rainloop/v*" -cf archive.tar /home /var/log /var/mail /srv /etc
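While that command runs, tar's own footprint can be checked from a second shell (a sketch, assuming the usual procps ps):

# Show tar's resident (RSS) and virtual (VSZ) memory in KiB while it runs,
# to confirm whether tar itself is the process holding the memory.
ps -o pid,rss,vsz,comm -C tar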
Comments:
- tar is using much memory? I guess it just causes Linux to remove useful "hot" data from its cache and replace it with useless "cold" data which is being backed up (and not needed in the cache).
- I used htop to observe my memory and swap usage. I used this tutorial to view which processes were using the most swap before and after, and I noticed that tarring a large amount of stuff causes almost everything else to get swapped out :/
- Can you put the htop output into your question?
- If /tmp is mounted as tmpfs, then yes, it does. tar itself doesn't seem to use much memory in the screenshot.
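For reference, the per-process swap check mentioned in the comments boils down to reading VmSwap from /proc/*/status; this is a rough sketch, not the linked tutorial's exact script:

# Print each process's swapped-out size (kB) and name, biggest first.
for f in /proc/[0-9]*/status; do
    awk '/^Name:/{name=$2} /^VmSwap:/{print $2, name}' "$f"
done | sort -rn | head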