Merge pull request #65 from mariusvw/feature/whitespace

Whitespace corrections
DatuX 2021-02-02 21:24:04 +01:00 committed by GitHub
commit b7ef6c9528
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
14 changed files with 66 additions and 82 deletions

View File

@@ -2,58 +2,51 @@ name: Regression tests
on: ["push", "pull_request"]

jobs:
  ubuntu20:
    runs-on: ubuntu-20.04
    steps:
      - name: Checkout
        uses: actions/checkout@v2.3.4
      - name: Prepare
        run: lsmod && sudo apt update && sudo apt install zfsutils-linux && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
      - name: Regression test
        run: sudo -E ./run_tests
      - name: Coveralls
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: coveralls --service=github
  ubuntu18:
    runs-on: ubuntu-18.04
    steps:
      - name: Checkout
        uses: actions/checkout@v2.3.4
      - name: Prepare
        run: lsmod && sudo apt update && sudo apt install zfsutils-linux python3-setuptools && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
      - name: Regression test
        run: sudo -E ./run_tests
      - name: Coveralls
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: coveralls --service=github

View File

@@ -26,7 +26,7 @@
## Introduction

This is a tool I wrote to make replicating ZFS datasets easy and reliable. You can use it as a **backup** tool, a **replication** tool or a **snapshot** tool.
@@ -256,13 +256,13 @@ Or just create a script and run it manually when you need it.
## Use as snapshot tool

You can use zfs-autobackup to only make snapshots. Just don't specify the target-path:
```console
root@ws1:~# zfs-autobackup test --verbose
zfs-autobackup v3.0 - Copyright 2020 E.H.Eefting (edwin@datux.nl)

#### Source settings
[Source] Datasets are local
[Source] Keep the last 10 snapshots.
@@ -270,22 +270,22 @@ root@ws1:~# zfs-autobackup test --verbose
[Source] Keep every 1 week, delete after 1 month.
[Source] Keep every 1 month, delete after 1 year.
[Source] Selects all datasets that have property 'autobackup:test=true' (or childs of datasets that have 'autobackup:test=child')

#### Selecting
[Source] test_source1/fs1: Selected (direct selection)
[Source] test_source1/fs1/sub: Selected (inherited selection)
[Source] test_source2/fs2: Ignored (only childs)
[Source] test_source2/fs2/sub: Selected (inherited selection)

#### Snapshotting
[Source] Creating snapshots test-20200710125958 in pool test_source1
[Source] Creating snapshots test-20200710125958 in pool test_source2

#### Thinning source
[Source] test_source1/fs1@test-20200710125948: Destroying
[Source] test_source1/fs1/sub@test-20200710125948: Destroying
[Source] test_source2/fs2/sub@test-20200710125948: Destroying

#### All operations completed successfully
(No target_path specified, only operated as snapshot tool.)
```
@@ -363,7 +363,7 @@ If you want to keep ALL the snapshots, just specify a very high number.
We will give a practical example of how the thinner operates.

Say we have 3 thinner rules:

* We want to keep daily snapshots for 7 days.
* We want to keep weekly snapshots for 4 weeks.
@@ -379,7 +379,7 @@ A block can only be assigned one snapshot: If multiple snapshots fall into the s
The colors show to which block a snapshot belongs:

* Snapshot 1: This snapshot belongs to daily block 1, weekly block 0 and monthly block 0. However, the daily block is too old.
* Snapshot 2: Since weekly block 0 and monthly block 0 already have a snapshot, it only belongs to daily block 4.
* Snapshot 3: This snapshot belongs to daily block 8 and weekly block 1.
* Snapshot 4: Since daily block 8 already has a snapshot, this one doesn't belong to anything and can be deleted right away. (It will be kept for now, since it is the last snapshot.)
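The block assignment described above can be sketched in a few lines of Python. This is a minimal illustration of the thinning idea, not zfs-autobackup's actual implementation; the `Rule` class and `thin` function are hypothetical names, and "oldest snapshot claims the block" is the assumption used here.

```python
from dataclasses import dataclass

DAY = 24 * 3600  # seconds per day


@dataclass
class Rule:
    period: int  # block length in seconds (e.g. one day)
    ttl: int     # blocks older than this are "too old" and release their snapshot


def thin(snapshot_times, rules, now):
    """Return the snapshot timestamps to keep.

    Each rule divides time into blocks of `period` seconds; a block can only
    be assigned one snapshot (the oldest one that falls into it). A snapshot
    survives if at least one rule still wants its block, and the newest
    snapshot is always kept.
    """
    keep = set()
    for rule in rules:
        claimed = set()
        for t in sorted(snapshot_times):  # oldest first, so it claims the block
            if now - t > rule.ttl:
                continue  # this block is too old for this rule
            block = t // rule.period
            if block not in claimed:
                claimed.add(block)
                keep.add(t)
    if snapshot_times:
        keep.add(max(snapshot_times))  # last snapshot is kept for now
    return sorted(keep)
```

With daily snapshots over 40 days and the first two rules above (keep dailies for 7 days, weeklies for 4 weeks), the last week survives day by day, older snapshots thin out to one per week, and everything beyond 4 weeks is dropped.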
@@ -609,7 +609,7 @@ Host pve3
    Port 10003
```

### Backup script

I use the following backup script on the backup server.
@@ -657,7 +657,3 @@ This script will also send the backup status to Zabbix. (if you've installed my
This project was sponsored by:

* (None so far)

View File

@@ -10,7 +10,7 @@ import time
from pprint import *
from bin.zfs_autobackup import *
from mock import *
import contextlib
import sys
import io

View File

@@ -1,6 +1,6 @@
#!/bin/bash
set -e

rm -rf dist

View File

@@ -1,6 +1,6 @@
#!/bin/bash
set -e

rm -rf dist
@@ -14,4 +14,3 @@ source tokentest
python3 -m twine check dist/*
python3 -m twine upload --repository-url https://test.pypi.org/legacy/ dist/* --verbose

View File

@@ -7,7 +7,7 @@ if [ "$USER" != "root" ]; then
fi

# reactivate python environment, if any (useful in Travis)
[ "$VIRTUAL_ENV" ] && source $VIRTUAL_ENV/bin/activate

# test needs ssh access to localhost for testing
if ! [ -e /root/.ssh/id_rsa ]; then
@@ -19,13 +19,13 @@ fi
coverage run --source bin.zfs_autobackup -m unittest discover -vvvvf $@ 2>&1
EXIT=$?

echo
coverage report

# this does automatic Travis CI / https://coveralls.io/ integration:
# if which coveralls > /dev/null; then
#   echo "Submitting to coveralls.io:"
#   coveralls
# fi

exit $EXIT

View File

@@ -7,14 +7,14 @@ with open("README.md", "r") as fh:
    long_description = fh.read()

setuptools.setup(
    name="zfs_autobackup",
    version=bin.zfs_autobackup.VERSION,
    author="Edwin Eefting",
    author_email="edwin@datux.nl",
    description="ZFS autobackup is used to periodically backup ZFS filesystems to other locations. It tries to be the most friendly to use and easy to debug ZFS backup tool.",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/psy0rz/zfs_autobackup",
    scripts=["bin/zfs-autobackup"],
    packages=setuptools.find_packages(),

View File

@@ -30,7 +30,7 @@ class TestZfsNode(unittest2.TestCase):
        with self.subTest("missing dataset of us that still has children"):
            # just deselect it so it counts as 'missing'
            shelltest("zfs set autobackup:test=child test_source1/fs1")
@@ -102,7 +102,7 @@ class TestZfsNode(unittest2.TestCase):
        with self.subTest("Should leave test_source1 parent"):
            with OutputIO() as buf:
                with redirect_stdout(buf), redirect_stderr(buf):
                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())

View File

@@ -79,7 +79,7 @@ class TestExecuteNode(unittest2.TestCase):
        with self.subTest("exit code both ends of pipe ok"):
            output=nodea.run(["true"], pipe=True)
            nodeb.run(["true"], inp=output)

        with self.subTest("error on pipe input side"):
            with self.assertRaises(subprocess.CalledProcessError):
                output=nodea.run(["false"], pipe=True)
@@ -106,7 +106,7 @@ class TestExecuteNode(unittest2.TestCase):
            (stdout, stderr)=nodeb.run(["true"], inp=output, return_stderr=True, valid_exitcodes=[0,2])
            self.assertEqual(stdout,[])
            self.assertEqual(stderr,[])
@@ -132,4 +132,4 @@ class TestExecuteNode(unittest2.TestCase):
if __name__ == '__main__':
    unittest.main()

View File

@@ -8,7 +8,7 @@ class TestZfsNode(unittest2.TestCase):
        prepare_zpools()
        self.longMessage=True

    # generate a resumable state
    # NOTE: this generates two resumable transfers: test_target1/test_source1/fs1 and test_target1/test_source1/fs1/sub
    def generate_resume(self):
@@ -43,7 +43,7 @@ class TestZfsNode(unittest2.TestCase):
                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
            print(buf.getvalue())
            # did we really resume?
            if "0.6.5" in ZFS_USERSPACE:
                # abort this late, for better coverage
@@ -58,7 +58,7 @@ class TestZfsNode(unittest2.TestCase):
                self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
            print(buf.getvalue())
            # did we really resume?
            if "0.6.5" in ZFS_USERSPACE:
                # abort this late, for better coverage
@@ -98,7 +98,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
            print(buf.getvalue())
            # did we really resume?
            if "0.6.5" in ZFS_USERSPACE:
                # abort this late, for better coverage
@@ -112,7 +112,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
                self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
            print(buf.getvalue())
            # did we really resume?
            if "0.6.5" in ZFS_USERSPACE:
                # abort this late, for better coverage
@@ -218,7 +218,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
    # create a resume situation where the other side doesn't want the snapshot anymore (should abort the resume)
    def test_abort_unwanted_resume(self):
        if "0.6.5" in ZFS_USERSPACE:
            self.skipTest("Resume not supported in this ZFS userspace version")
@@ -236,7 +236,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-target=0 --debug --allow-empty".split(" ")).run())
            print(buf.getvalue())
        self.assertIn(": aborting resume, since", buf.getvalue())

        r=shelltest("zfs list -H -o name -r -t all test_target1")
@@ -253,7 +253,7 @@ test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000002
""")

    def test_missing_common(self):
        with patch('time.strftime', return_value="20101111000000"):
@@ -271,5 +271,5 @@ test_target1/test_source2/fs2/sub@test-20101111000002
    ############# TODO:
    def test_ignoretransfererrors(self):
        self.skipTest("todo: create some kind of situation where zfs recv exits with an error but transfer is still ok (happens in practice with acltype)")

View File

@@ -10,11 +10,10 @@ class TestZfsNode(unittest2.TestCase):
    # #resume initial backup
    # def test_keepsource0(self):
    #     #somehow only specifying --allow-empty --keep-source 0 failed:
    #     with patch('time.strftime', return_value="20101111000000"):
    #         self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --keep-source 0".split(" ")).run())
    #
    #     with patch('time.strftime', return_value="20101111000001"):
    #         self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --keep-source 0".split(" ")).run())

View File

@@ -73,7 +73,7 @@ class TestThinner(unittest2.TestCase):
            result=[]
            for thing in things:
                result.append(str(thing))

            print("Thinner result incremental:")
            pprint.pprint(result)
@@ -129,7 +129,7 @@ class TestThinner(unittest2.TestCase):
        result=[]
        for thing in things:
            result.append(str(thing))

        print("Thinner result full:")
        pprint.pprint(result)
@@ -137,4 +137,4 @@ class TestThinner(unittest2.TestCase):
if __name__ == '__main__':
    unittest.main()

View File

@@ -4,7 +4,7 @@ import time
class TestZfsAutobackup(unittest2.TestCase):
    def setUp(self):
        prepare_zpools()
        self.longMessage=True
@@ -26,7 +26,7 @@ class TestZfsAutobackup(unittest2.TestCase):
            self.assertFalse(ZfsAutobackup("test --verbose --allow-empty --keep-source 0".split(" ")).run())

        # on source: only has 1 and 2 (1 was held)
        # on target: has 0 and 1
        #XXX:
        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
@@ -108,7 +108,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
@@ -435,7 +435,7 @@ test_target1/fs2/sub@test-20101111000000
    def test_clearrefres(self):
        # on zfs utils 0.6.x, -x isn't supported
        r=shelltest("zfs recv -x bla test >/dev/null </dev/zero; echo $?")
        if r=="\n2\n":
            self.skipTest("This zfs-userspace version doesn't support -x")
@@ -474,7 +474,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 refreservation -
    def test_clearmount(self):
        # on zfs utils 0.6.x, -o isn't supported
        r=shelltest("zfs recv -o bla=1 test >/dev/null </dev/zero; echo $?")
        if r=="\n2\n":
            self.skipTest("This zfs-userspace version doesn't support -o")
@@ -703,7 +703,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
        r=shelltest("zfs set compress=off test_source1")
        r=shelltest("touch /test_source1/fs1/change.txt")
        r=shelltest("zfs umount test_source1/fs1; zfs mount test_source1/fs1")

        # too small a change, takes no snapshots
        with patch('time.strftime', return_value="20101111000001"):
@@ -840,8 +840,5 @@ test_target1/test_source2/fs2/sub@test-20101111000000
    # TODO:
    def test_raw(self):
        self.skipTest("todo: later when travis supports zfs 0.8")

View File

@@ -108,7 +108,7 @@ test_target1
        node=ZfsNode("test", logger, description=description)

        # -D probably always supported
        self.assertGreater(len(node.supported_send_options),0)

    def test_supportedrecvoptions(self):
        logger=Logger()
@@ -116,8 +116,8 @@ test_target1
        # NOTE: this could hang via ssh if we don't close filehandles properly (which was a previous bug)
        node=ZfsNode("test", logger, description=description, ssh_to='localhost')
        self.assertIsInstance(node.supported_recv_options, list)

if __name__ == '__main__':
    unittest.main()