Compare commits

...

52 Commits
v3.2 ... master

Author · SHA1 · Message · Date
DatuX · 81fa5c5bab · bump version · 2024-12-17 13:21:13 +01:00
Edwin Eefting · f1dda6cc9f · rc1 · 2024-09-24 20:31:36 +02:00
Reno Reckling · c5f1e38b18 · Implement Issue #245: snapshot exclude patterns · 2024-09-18 11:10:25 +02:00
Edwin Eefting · 9e2476ac84 · use mbuffer to simulate actual slow transfer (test_progress) · 2024-09-17 14:25:32 +02:00
Edwin Eefting · 4c5339dedd · better output for the fixed progress test · 2024-09-17 13:27:34 +02:00
Edwin Eefting · b115f4b081 · fix test · 2024-09-17 12:46:07 +02:00
Edwin Eefting · 8879519e32 · fix test · 2024-09-17 12:37:20 +02:00
Edwin Eefting · b247b0408b · fix test · 2024-09-17 12:31:30 +02:00
Edwin Eefting · a2f4dd4227 · some fixes to run tests from PyCharm with a suid-python binary · 2024-09-17 12:10:02 +02:00
Edwin Eefting · c52857f7b9 · version · 2024-04-15 11:35:44 +02:00
DatuX · 359bfde4c9 · Update python-publish.yml · 2024-04-15 11:34:07 +02:00
DatuX · 5705afc37f · Update README.md · 2024-03-13 11:56:40 +01:00
Edwin Eefting · 6d4f22b69e · Revert "wip" (reverts commit de3dff77b85524ac5280d0a9292461d16a30afea) · 2023-11-22 11:13:02 +01:00
DatuX · 7122dc92af · Update CliBase.py · 2023-11-02 23:01:36 +01:00
Pierre-Elliott Bécue · 843b87f319 · Rename chunk-size to buffer-chunk-size · 2023-10-16 11:20:02 +02:00
Pierre-Elliott Bécue · 7feae675a6 · Implement chunk size argument and refactor mbuffer command generation (fixes #203) · 2023-10-16 11:20:02 +02:00
DatuX · 7586cacb49 · Update README.md · 2023-10-15 19:59:06 +02:00
DatuX · e0c09e9975 · Update README.md · 2023-10-15 19:58:24 +02:00
Edwin Eefting · de3dff77b8 · wip · 2023-10-15 16:16:05 +02:00
Edwin Eefting · a62e793247 · add obscure kernel test · 2023-10-12 22:19:30 +02:00
Edwin Eefting · 439ea6a3bc · more tests · 2023-10-04 13:29:39 +02:00
Edwin Eefting · ff86e3c67f · improve ssh speed during testing · 2023-10-03 12:49:32 +02:00
Edwin Eefting · 8b8be80ab7 · tests can now be run in a Docker container (just start ./tests/run_tests_docker to magically do it); changed time patching during testing to use mocktime() instead; fixed Alpine issues; fixed #206 · 2023-10-02 23:15:37 +02:00
Edwin Eefting · 5cca819916 · central time handling and better mocking during tests · 2023-10-02 16:29:46 +02:00
Edwin Eefting · 477e980ba2 · more test fixing · 2023-09-28 00:00:10 +02:00
Edwin Eefting · b817df8779 · fix test 2 · 2023-09-27 23:47:54 +02:00
Edwin Eefting · 46580fb500 · fix test · 2023-09-27 23:43:37 +02:00
Edwin Eefting · aa2c283746 · various automount fixes · 2023-09-27 23:34:29 +02:00
Edwin Eefting · 16ab4f8183 · don't automount/read props in test mode · 2023-09-27 01:32:29 +02:00
Edwin Eefting · 50f8aba101 · don't automount when encryption is enabled but no key is loaded · 2023-09-27 01:22:30 +02:00
Edwin Eefting · 771127d34a · fix #112: pretty big change in mounting behaviour · 2023-09-27 00:52:06 +02:00
Edwin Eefting · ea8beee7c8 · analyse missing debug output · 2023-09-26 22:17:17 +02:00
Edwin Eefting · defbc2d0bf · added --include-received to overrule auto-enabling of --exclude-received :) fix #150 · 2023-09-26 22:01:02 +02:00
Edwin Eefting · 4e4de2de5a · fix #195 · 2023-09-26 21:49:13 +02:00
Edwin Eefting · de898fc258 · fix #217 · 2023-09-26 21:36:03 +02:00
Edwin Eefting · bdc156e48d · types · 2023-09-26 21:24:19 +02:00
Edwin Eefting · f3caca48f2 · transfer output is now in the form of source -> target · 2023-09-26 19:17:00 +02:00
Edwin Eefting · c0a8cb33ad · less verbose output when no common snapshot is found · 2023-09-26 19:04:39 +02:00
Edwin Eefting · feb3972cd7 · better output · 2023-09-26 19:00:19 +02:00
Edwin Eefting · e30a393d0e · cleaner error output when destroy-incompatible fails · 2023-09-26 18:51:24 +02:00
Edwin Eefting · f8cd77e6e4 · --destroy-incompatible now only rolls back if needed · 2023-09-26 18:39:03 +02:00
Edwin Eefting · 06420978d5 · better output · 2023-09-26 18:23:08 +02:00
Edwin Eefting · 54e590175d · readability of yellow notes on white terminals :) · 2023-09-26 18:17:34 +02:00
Edwin Eefting · 6e5a6764c5 · fix #219 · 2023-09-26 18:01:09 +02:00
Edwin Eefting · a7d05a7064 · allow disabling guid-checking as well, for performance · 2023-09-26 17:13:40 +02:00
Edwin Eefting · d90ea7edd2 · reduce number of dataset exist-checks · 2023-09-26 16:52:48 +02:00
Edwin Eefting · 090a2d1343 · update version · 2023-09-26 16:18:50 +02:00
Edwin Eefting · 7cffec1d26 · check guid of common snapshot, fix #218 · 2023-09-26 16:16:32 +02:00
Edwin Eefting · aac62f3fe6 · issue template · 2023-09-25 17:45:03 +02:00
Edwin Eefting · a12b651d17 · only publish Python 3 · 2023-08-29 15:53:49 +02:00
Edwin Eefting · 62f078eaec · GitHub Ubuntu doesn't support testing Python 2 anymore · 2023-08-29 15:39:54 +02:00
Edwin Eefting · fd1e7d5b33 · fix · 2023-08-29 15:33:49 +02:00
37 changed files with 945 additions and 555 deletions


@ -8,4 +8,4 @@ assignees: ''
---
(Please add the commandline that you use to the issue. Also at least add the output of --verbose. Sometimes it helps if you add the output of --debug-output instead, but its huge, so use an attachment for that.)
(Please add the commandline that you use to the issue. AT LEAST add the output of --verbose, but usually --debug is needed as well. Sometimes it helps if you add the output of --debug-output instead, but it's huge, so use an attachment for that.)


@ -5,7 +5,7 @@ name: Upload Python Package
on:
release:
types: [created]
types: [published]
jobs:
deploy:
@ -20,20 +20,20 @@ jobs:
with:
python-version: '3.x'
- name: Set up Python 2.x
uses: actions/setup-python@v2
with:
python-version: '2.x'
# - name: Set up Python 2.x
# uses: actions/setup-python@v2
# with:
# python-version: '2.x'
- name: Install dependencies 3.x
run: |
python -m pip install --upgrade pip
pip3 install setuptools wheel twine
- name: Install dependencies 2.x
run: |
python2 -m pip install --upgrade pip
pip2 install setuptools wheel twine
# - name: Install dependencies 2.x
# run: |
# python2 -m pip install --upgrade pip
# pip2 install setuptools wheel twine
- name: Build and publish
env:
@ -41,6 +41,6 @@ jobs:
TWINE_PASSWORD: ${{ secrets.TWINE_PASSWORD }}
run: |
python3 setup.py sdist bdist_wheel
python2 setup.py sdist bdist_wheel
# python2 setup.py sdist bdist_wheel
twine check dist/*
twine upload dist/*


@ -46,29 +46,3 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: coveralls --service=github || true
ubuntu20_python2:
runs-on: ubuntu-20.04
steps:
- name: Checkout
uses: actions/checkout@v3.5.0
- name: Set up Python 2.x
uses: actions/setup-python@v2
with:
python-version: '2.x'
- name: Prepare
run: sudo apt update && sudo apt install zfsutils-linux lzop pigz zstd gzip xz-utils lz4 mbuffer && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
- name: Regression test
run: sudo -E ./tests/run_tests
- name: Coveralls
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: coveralls --service=github || true


@ -57,9 +57,12 @@ An important feature that's missing from other tools is a reliable `--test` opti
Please look at our wiki to [Get started](https://github.com/psy0rz/zfs_autobackup/wiki).
Or read the [Full manual](https://github.com/psy0rz/zfs_autobackup/wiki/Manual)
# Sponsor list
This project was sponsorred by:
This project was sponsored by:
* JetBrains (Provided me with a license for their whole professional product line, https://www.jetbrains.com/pycharm/ )
* JetBrains
* https://rsync.net
* [DatuX](https://www.datux.nl)

scripts/autoupload Executable file (+1)

@ -0,0 +1 @@
find zfs_autobackup | entr rsync -avx . "$1":zfs_autobackup
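A usage note: entr(1) re-runs the rsync whenever find reports a changed file, so invoking the script as, for example, scripts/autoupload devhost (devhost being a hypothetical target, passed in as "$1") continuously mirrors the working tree to devhost:zfs_autobackup during development.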

tests/Dockerfile Normal file (+17)

@ -0,0 +1,17 @@
FROM alpine:3.18
#base packages
RUN apk update
RUN apk add py3-pip
#zfs autobackup tests dependencies
RUN apk add zfs openssh lzop pigz zstd gzip xz lz4 mbuffer udev zfs-udev
#python modules
COPY requirements.txt /
RUN pip3 install -r requirements.txt
#git repo should be mounted in /app:
ENTRYPOINT [ "/app/tests/tests_docker" ]

tests/autorun_tests_docker Executable file (+3)

@ -0,0 +1,3 @@
#!/bin/sh
find tests zfs_autobackup -name '*.py' |entr ./tests/run_tests_docker $@


@ -1,9 +1,11 @@
import os
# To run tests as non-root, use this hack:
# chmod 4755 /usr/sbin/zpool /usr/sbin/zfs
import sys
import zfs_autobackup.util
#dirty hack for this error:
#AttributeError: module 'collections' has no attribute 'MutableMapping'
@ -28,15 +30,17 @@ import contextlib
import sys
import io
import datetime
TEST_POOLS="test_source1 test_source2 test_target1"
ZFS_USERSPACE= subprocess.check_output("dpkg-query -W zfsutils-linux |cut -f2", shell=True).decode('utf-8').rstrip()
ZFS_KERNEL= subprocess.check_output("modinfo zfs|grep ^version |sed 's/.* //'", shell=True).decode('utf-8').rstrip()
# ZFS_USERSPACE= subprocess.check_output("dpkg-query -W zfsutils-linux |cut -f2", shell=True).decode('utf-8').rstrip()
# ZFS_KERNEL= subprocess.check_output("modinfo zfs|grep ^version |sed 's/.* //'", shell=True).decode('utf-8').rstrip()
print("###########################################")
print("#### Unit testing against:")
print("#### Python :"+sys.version.replace("\n", " "))
print("#### ZFS userspace :"+ZFS_USERSPACE)
print("#### ZFS kernel :"+ZFS_KERNEL)
print("#### Python : "+sys.version.replace("\n", " "))
print("#### ZFS version : "+subprocess.check_output("zfs --version", shell=True).decode('utf-8').rstrip().replace('\n', ' '))
print("#############################################")
@ -47,6 +51,10 @@ if sys.version_info.major==2:
else:
OutputIO=io.StringIO
# for when we're using a suid-root python binary during development
os.setuid(0)
os.setgid(0)
# for python2 compatibility (python 3 has this already)
@contextlib.contextmanager
@ -73,7 +81,7 @@ def redirect_stderr(target):
def shelltest(cmd):
"""execute and print result as nice copypastable string for unit tests (adds extra newlines on top/bottom)"""
ret=(subprocess.check_output("SUDO_ASKPASS=./password.sh sudo -A "+cmd , shell=True).decode('utf-8'))
ret=(subprocess.check_output(cmd , shell=True).decode('utf-8'))
print("######### result of: {}".format(cmd))
print(ret)
@ -85,7 +93,7 @@ def prepare_zpools():
print("Preparing zfs filesystems...")
#need ram blockdevice
subprocess.check_call("modprobe brd rd_size=512000", shell=True)
# subprocess.check_call("modprobe brd rd_size=512000", shell=True)
#remove old stuff
subprocess.call("zpool destroy test_source1 2>/dev/null", shell=True)
@ -105,3 +113,18 @@ def prepare_zpools():
subprocess.check_call("zfs set autobackup:test=child test_source2/fs2", shell=True)
print("Prepare done")
@contextlib.contextmanager
def mocktime(time_str, format="%Y%m%d%H%M%S"):
def fake_datetime_now():
return datetime.datetime.strptime(time_str, format)
with patch.object(zfs_autobackup.util,'datetime_now_mock', fake_datetime_now()):
yield
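The new mocktime() helper above replaces the scattered patch('time.strftime', ...) calls seen throughout the test diffs below. A minimal, self-contained sketch of the same pattern, with stand-in names (util here is a placeholder for zfs_autobackup.util, and the fallback behaviour of datetime_now() is an assumption, since util.py itself is not shown in this diff):

import contextlib
import datetime
import types
from unittest.mock import patch

# stand-in for zfs_autobackup.util: a module-level slot that production code consults
util = types.SimpleNamespace(datetime_now_mock=None)

def datetime_now():
    # assumed behaviour: return the mocked timestamp when set, else the real clock
    return util.datetime_now_mock or datetime.datetime.now()

@contextlib.contextmanager
def mocktime(time_str, fmt="%Y%m%d%H%M%S"):
    # patch the slot for the duration of the with-block, like the helper above
    with patch.object(util, "datetime_now_mock", datetime.datetime.strptime(time_str, fmt)):
        yield

with mocktime("20101111000000"):
    assert datetime_now() == datetime.datetime(2010, 11, 11)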


@ -18,6 +18,17 @@ if ! [ -e /root/.ssh/id_rsa ]; then
ssh -oStrictHostKeyChecking=no localhost true || exit 1
fi
cat >> ~/.ssh/config <<EOF
Host *
addkeystoagent yes
controlpath ~/.ssh/control-master-%r@%h:%p
controlmaster auto
controlpersist 3600
EOF
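A note on the added config: ControlMaster/ControlPersist multiplexes every ssh session the tests open onto a single cached connection for an hour, which is presumably the mechanism behind the "improve ssh speed during testing" commit in the list above.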
modprobe brd rd_size=512000
umount /tmp/ZfsCheck*
coverage run --branch --source zfs_autobackup -m unittest discover -vvvvf $SCRIPTDIR $@ 2>&1

tests/run_tests_docker Executable file (+16)

@ -0,0 +1,16 @@
#!/bin/sh
set -e
#remove stuff from previous local tests
zpool destroy test_source1 2>/dev/null || true
zpool destroy test_source2 2>/dev/null || true
zpool destroy test_target1 2>/dev/null || true
#the brd ram-disk module is needed (the test pools live on ram block devices)
modprobe brd rd_size=512000 || true
# builds and starts a docker container to run the test suite
docker build -t zfs-autobackup-test -f tests/Dockerfile .
docker run --name zfs-autobackup-test --privileged --rm -it -v .:/app zfs-autobackup-test $@
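A note on the run command: the -v .:/app bind mount matches the Dockerfile above, whose ENTRYPOINT expects the git checkout at /app, and --privileged is presumably required so the containerized tests can manage the host's ZFS pools.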


@ -9,11 +9,11 @@ class TestCmdPipe(unittest2.TestCase):
p=CmdPipe(readonly=False, inp=None)
err=[]
out=[]
p.add(CmdItem(["ls", "-d", "/", "/", "/nonexistent"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2), stdout_handler=lambda line: out.append(line)))
p.add(CmdItem(["sh", "-c", "echo out1;echo err1 >&2; echo out2; echo err2 >&2"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0), stdout_handler=lambda line: out.append(line)))
executed=p.execute()
self.assertEqual(err, ["ls: cannot access '/nonexistent': No such file or directory"])
self.assertEqual(out, ["/","/"])
self.assertEqual(out, ["out1", "out2"])
self.assertEqual(err, ["err1","err2"])
self.assertIsNone(executed)
def test_input(self):
@ -56,16 +56,16 @@ class TestCmdPipe(unittest2.TestCase):
err2=[]
err3=[]
out=[]
p.add(CmdItem(["ls", "/nonexistent1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["ls", "/nonexistent2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["ls", "/nonexistent3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2), stdout_handler=lambda line: out.append(line)))
p.add(CmdItem(["sh", "-c", "echo err1 >&2"], stderr_handler=lambda line: err1.append(line), ))
p.add(CmdItem(["sh", "-c", "echo err2 >&2"], stderr_handler=lambda line: err2.append(line), ))
p.add(CmdItem(["sh", "-c", "echo err3 >&2"], stderr_handler=lambda line: err3.append(line), stdout_handler=lambda line: out.append(line)))
executed=p.execute()
self.assertEqual(err1, ["ls: cannot access '/nonexistent1': No such file or directory"])
self.assertEqual(err2, ["ls: cannot access '/nonexistent2': No such file or directory"])
self.assertEqual(err3, ["ls: cannot access '/nonexistent3': No such file or directory"])
self.assertEqual(err1, ["err1"])
self.assertEqual(err2, ["err2"])
self.assertEqual(err3, ["err3"])
self.assertEqual(out, [])
self.assertIsNone(executed)
self.assertTrue(executed)
def test_exitcode(self):
"""test piped exitcodes """
@ -74,9 +74,9 @@ class TestCmdPipe(unittest2.TestCase):
err2=[]
err3=[]
out=[]
p.add(CmdItem(["bash", "-c", "exit 1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,1)))
p.add(CmdItem(["bash", "-c", "exit 2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["bash", "-c", "exit 3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,3), stdout_handler=lambda line: out.append(line)))
p.add(CmdItem(["sh", "-c", "exit 1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,1)))
p.add(CmdItem(["sh", "-c", "exit 2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["sh", "-c", "exit 3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,3), stdout_handler=lambda line: out.append(line)))
executed=p.execute()
self.assertEqual(err1, [])
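A likely rationale for these fixture changes (an inference, not stated in the diff): the exact "ls: cannot access ..." wording is GNU-specific, and the tests now also run in an Alpine/BusyBox container, so echoing known strings to stderr via sh -c keeps the assertions portable; bash was likewise swapped for sh, which Alpine provides by default.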


@ -13,10 +13,10 @@ class TestZfsNode(unittest2.TestCase):
def test_destroymissing(self):
#initial backup
with patch('time.strftime', return_value="test-19101111000000"): #100 years in the past
with mocktime("19101111000000"): #100 years in the past
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000000"): #far in past
with mocktime("20101111000000"): #far in past
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())


@ -29,6 +29,12 @@ class TestZfsEncryption(unittest2.TestCase):
except:
self.skipTest("Encryption not supported on this ZFS version.")
def load_key(self, key, path):
shelltest("rm /tmp/zfstest.key 2>/dev/null;true")
shelltest("echo {} > /tmp/zfstest.key".format(key))
shelltest("zfs load-key {}".format(path))
def prepare_encrypted_dataset(self, key, path, unload_key=False):
# create encrypted source dataset
@ -49,11 +55,11 @@ class TestZfsEncryption(unittest2.TestCase):
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsourcekeyless", unload_key=True) # raw mode shouldn't need a key
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot --exclude-received".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot --exclude-received".split(" ")).run())
@ -86,11 +92,11 @@ test_target1/test_source2/fs2/sub encryption
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot --exclude-received".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot --exclude-received".split(" ")).run())
@ -121,13 +127,13 @@ test_target1/test_source2/fs2/sub encryptionroot -
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received".split(" ")).run())
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received --clear-mountpoint".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received".split(" ")).run())
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received --clear-mountpoint".split(" ")).run())
r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
self.assertEqual(r, """
@ -156,14 +162,14 @@ test_target1/test_source2/fs2/sub encryptionroot -
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
self.assertFalse(ZfsAutobackup(
"test test_target1/encryptedtarget --verbose --no-progress --decrypt --encrypt --debug --no-snapshot --exclude-received".split(
"test test_target1/encryptedtarget --verbose --no-progress --decrypt --encrypt --debug --no-snapshot --exclude-received --clear-mountpoint".split(
" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup(
@ -191,3 +197,117 @@ test_target1/test_source2/fs2 encryptionroot -
test_target1/test_source2/fs2/sub encryptionroot - -
""")
def test_raw_invalid_snapshot(self):
"""in raw mode, its not allowed to have any newer snaphots on target, #219"""
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress".split(" ")).run())
#this is invalid in raw mode
shelltest("zfs snapshot test_target1/test_source1/fs1/encryptedsource@incompatible")
with mocktime("20101111000001"):
#should fail because of incompatible snapshot
self.assertEqual(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty".split(" ")).run(),1)
#should destroy incompatible and continue
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --no-snapshot --destroy-incompatible".split(" ")).run())
r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
self.assertMultiLineEqual(r,"""
NAME PROPERTY VALUE SOURCE
test_target1 encryptionroot - -
test_target1/test_source1 encryptionroot - -
test_target1/test_source1/fs1 encryptionroot - -
test_target1/test_source1/fs1/encryptedsource encryptionroot test_target1/test_source1/fs1/encryptedsource -
test_target1/test_source1/fs1/sub encryptionroot - -
test_target1/test_source2 encryptionroot - -
test_target1/test_source2/fs2 encryptionroot - -
test_target1/test_source2/fs2/sub encryptionroot - -
""")
def test_resume_encrypt_with_no_key(self):
"""test what happens if target encryption key not loaded (this led to a kernel crash of freebsd with 2.1.x i think) while trying to resume"""
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
r = shelltest("zfs set compress=off test_source1 test_target1")
# big change on source
r = shelltest("dd if=/dev/zero of=/test_source1/fs1/data bs=250M count=1")
# waste space on target
r = shelltest("dd if=/dev/zero of=/test_target1/waste bs=250M count=1")
# should fail and leave resume token
with mocktime("20101111000001"):
self.assertTrue(ZfsAutobackup(
"test test_target1/encryptedtarget --verbose --no-progress --encrypt --exclude-received --allow-empty --clear-mountpoint".split(
" ")).run())
#
# free up space
r = shelltest("rm /test_target1/waste")
# sync
r = shelltest("zfs umount test_target1")
r = shelltest("zfs mount test_target1")
#
# #unload key
shelltest("zfs unload-key test_target1/encryptedtarget")
# resume should fail
with mocktime("20101111000001"):
self.assertEqual(ZfsAutobackup(
"test test_target1/encryptedtarget --verbose --no-progress --encrypt --exclude-received --allow-empty --no-snapshot --clear-mountpoint".split(
" ")).run(),3)
#NOTE: On some versions this leaves 2 weird sub-datasets that shouldn't be there (it's probably a ZFS bug?)
#so we ignore this, and just make sure the backup resumes correctly after reloading the key.
# r = shelltest("zfs get -r -t all encryptionroot test_target1")
# self.assertEqual(r, """
# NAME PROPERTY VALUE SOURCE
# test_target1 encryptionroot - -
# test_target1/encryptedtarget encryptionroot test_target1/encryptedtarget -
# test_target1/encryptedtarget/test_source1 encryptionroot test_target1/encryptedtarget -
# test_target1/encryptedtarget/test_source1/fs1 encryptionroot test_target1/encryptedtarget -
# test_target1/encryptedtarget/test_source1/fs1@test-20101111000000 encryptionroot test_target1/encryptedtarget -
# test_target1/encryptedtarget/test_source1/fs1/encryptedsource encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
# test_target1/encryptedtarget/test_source1/fs1/encryptedsource@test-20101111000000 encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
# test_target1/encryptedtarget/test_source1/fs1/encryptedsource@test-20101111000001 encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
# test_target1/encryptedtarget/test_source1/fs1/sub encryptionroot test_target1/encryptedtarget -
# test_target1/encryptedtarget/test_source1/fs1/sub@test-20101111000000 encryptionroot test_target1/encryptedtarget -
# test_target1/encryptedtarget/test_source1/fs1/sub/sub encryptionroot - -
# test_target1/encryptedtarget/test_source1/fs1/sub/sub@test-20101111000001 encryptionroot - -
# test_target1/encryptedtarget/test_source2 encryptionroot test_target1/encryptedtarget -
# test_target1/encryptedtarget/test_source2/fs2 encryptionroot test_target1/encryptedtarget -
# test_target1/encryptedtarget/test_source2/fs2/sub encryptionroot test_target1/encryptedtarget -
# test_target1/encryptedtarget/test_source2/fs2/sub@test-20101111000000 encryptionroot test_target1/encryptedtarget -
# test_target1/encryptedtarget/test_source2/fs2/sub/sub encryptionroot - -
# test_target1/encryptedtarget/test_source2/fs2/sub/sub@test-20101111000001 encryptionroot - -
# """)
#reload key and resume correctly.
self.load_key("22222222", "test_target1/encryptedtarget")
# resume should complete
with mocktime("20101111000001"):
self.assertEqual(ZfsAutobackup(
"test test_target1/encryptedtarget --verbose --no-progress --encrypt --exclude-received --allow-empty --no-snapshot --clear-mountpoint".split(
" ")).run(),0)


@ -33,9 +33,9 @@ class TestExecuteNode(unittest2.TestCase):
#return std err as well, trigger stderr by listing something non existing
with self.subTest("stderr return"):
(stdout, stderr)=node.run(["ls", "nonexistingfile"], return_stderr=True, valid_exitcodes=[2])
(stdout, stderr)=node.run(["sh", "-c", "echo bla >&2"], return_stderr=True, valid_exitcodes=[0])
self.assertEqual(stdout,[])
self.assertRegex(stderr[0],"nonexistingfile")
self.assertRegex(stderr[0],"bla")
#slow command, make sure things don't exit too early
with self.subTest("early exit test"):
@ -110,19 +110,17 @@ class TestExecuteNode(unittest2.TestCase):
with self.subTest("check stderr on pipe output side"):
output=nodea.run(["true"], pipe=True, valid_exitcodes=[0])
(stdout, stderr)=nodeb.run(["ls", "nonexistingfile"], inp=output, return_stderr=True, valid_exitcodes=[2])
(stdout, stderr)=nodeb.run(["sh", "-c", "echo bla >&2"], inp=output, return_stderr=True, valid_exitcodes=[0])
self.assertEqual(stdout,[])
self.assertRegex(stderr[0], "nonexistingfile" )
self.assertRegex(stderr[0], "bla" )
with self.subTest("check stderr on pipe input side (should be only printed)"):
output=nodea.run(["ls", "nonexistingfile"], pipe=True, valid_exitcodes=[2])
output=nodea.run(["sh", "-c", "echo bla >&2"], pipe=True, valid_exitcodes=[0])
(stdout, stderr)=nodeb.run(["true"], inp=output, return_stderr=True, valid_exitcodes=[0])
self.assertEqual(stdout,[])
self.assertEqual(stderr,[])
def test_pipe_local_local(self):
nodea=ExecuteNode(debug_output=True)
nodeb=ExecuteNode(debug_output=True)
@ -209,5 +207,3 @@ class TestExecuteNode(unittest2.TestCase):
if __name__ == '__main__':
unittest.main()


@ -32,7 +32,7 @@ class TestExternalFailures(unittest2.TestCase):
def test_initial_resume(self):
# initial backup, leaves resume token
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.generate_resume()
# --test should resume and succeed
@ -42,12 +42,7 @@ class TestExternalFailures(unittest2.TestCase):
print(buf.getvalue())
# did we really resume?
if "0.6.5" in ZFS_USERSPACE:
# abort this late, for beter coverage
self.skipTest("Resume not supported in this ZFS userspace version")
else:
self.assertIn(": resuming", buf.getvalue())
self.assertIn(": resuming", buf.getvalue())
# should resume and succeed
with OutputIO() as buf:
@ -56,12 +51,7 @@ class TestExternalFailures(unittest2.TestCase):
print(buf.getvalue())
# did we really resume?
if "0.6.5" in ZFS_USERSPACE:
# abort this late, for beter coverage
self.skipTest("Resume not supported in this ZFS userspace version")
else:
self.assertIn(": resuming", buf.getvalue())
self.assertIn(": resuming", buf.getvalue())
r = shelltest("zfs list -H -o name -r -t all test_target1")
self.assertMultiLineEqual(r, """
@ -81,11 +71,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_incremental_resume(self):
# initial backup
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
# incremental backup leaves resume token
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.generate_resume()
# --test should resume and succeed
@ -95,12 +85,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
print(buf.getvalue())
# did we really resume?
if "0.6.5" in ZFS_USERSPACE:
# abort this late, for beter coverage
self.skipTest("Resume not supported in this ZFS userspace version")
else:
self.assertIn(": resuming", buf.getvalue())
self.assertIn(": resuming", buf.getvalue())
# should resume and succeed
with OutputIO() as buf:
@ -110,11 +95,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
print(buf.getvalue())
# did we really resume?
if "0.6.5" in ZFS_USERSPACE:
# abort this late, for beter coverage
self.skipTest("Resume not supported in this ZFS userspace version")
else:
self.assertIn(": resuming", buf.getvalue())
self.assertIn(": resuming", buf.getvalue())
r = shelltest("zfs list -H -o name -r -t all test_target1")
self.assertMultiLineEqual(r, """
@ -134,11 +115,9 @@ test_target1/test_source2/fs2/sub@test-20101111000000
# generate an invalid resume token, and verify that it's aborted automatically
def test_initial_resumeabort(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
# initial backup, leaves resume token
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.generate_resume()
# remove corresponding source snapshot, so it becomes invalid
@ -148,11 +127,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
shelltest("zfs destroy test_target1/test_source1/fs1/sub; true")
# --test try again, should abort old resume
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
# try again, should abort old resume
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t all test_target1")
@ -172,26 +151,23 @@ test_target1/test_source2/fs2/sub@test-20101111000000
# generate an invalid resume token, and verify that it's aborted automatically
def test_incremental_resumeabort(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
# initial backup
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
# incremental backup, leaves resume token
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.generate_resume()
# remove corresponding source snapshot, so it becomes invalid
shelltest("zfs destroy test_source1/fs1@test-20101111000001")
# --test try again, should abort old resume
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
# try again, should abort old resume
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t all test_target1")
@ -212,22 +188,19 @@ test_target1/test_source2/fs2/sub@test-20101111000000
# create a resume situation where the other side doesn't want the snapshot anymore (should abort resume)
def test_abort_unwanted_resume(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
# generate resume
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.generate_resume()
with OutputIO() as buf:
with redirect_stdout(buf):
# incremental, doesn't want previous anymore
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-target=0 --allow-empty".split(" ")).run())
"test test_target1 --no-progress --verbose --keep-target=0 --allow-empty --debug".split(" ")).run())
print(buf.getvalue())
@ -250,14 +223,11 @@ test_target1/test_source2/fs2/sub@test-20101111000002
# test with empty snapshot list (this was a bug)
def test_abort_resume_emptysnapshotlist(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
# generate resume
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.generate_resume()
shelltest("zfs destroy test_source1/fs1@test-20101111000001")
@ -265,7 +235,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
with OutputIO() as buf:
with redirect_stdout(buf):
# incremental, doesn't want previous anymore
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --no-snapshot".split(
" ")).run())
@ -277,14 +247,14 @@ test_target1/test_source2/fs2/sub@test-20101111000002
def test_missing_common(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
# remove common snapshot and leave nothing
shelltest("zfs release zfs_autobackup:test test_source1/fs1@test-20101111000000")
shelltest("zfs destroy test_source1/fs1@test-20101111000000")
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
#UPDATE: of course the one thing that wasn't tested had a bug :( (in ExecuteNode.run()).
@ -295,7 +265,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
# #recreate target pool without any features
# # shelltest("zfs set compress=on test_source1; zpool destroy test_target1; zpool create test_target1 -o feature@project_quota=disabled /dev/ram2")
#
# with patch('time.strftime', return_value="test-20101111000000"):
# with mocktime("20101111000000"):
# self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --no-progress".split(" ")).run())
#
# r = shelltest("zfs list -H -o name -r -t all test_target1")


@ -11,17 +11,17 @@ class TestZfsNode(unittest2.TestCase):
def test_keepsource0target10queuedsend(self):
"""Test if thinner doesnt destroy too much early on if there are no common snapshots YET. Issue #84"""
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty --no-send".split(
" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty --no-send".split(
" ")).run())
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty".split(
" ")).run())
@ -65,7 +65,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
shelltest("zfs set autobackup:test=true test_target1/target_shouldnotbeexcluded")
shelltest("zfs create test_target1/target")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
"test test_target1/target --no-progress --verbose --allow-empty".split(
" ")).run())


@ -33,26 +33,28 @@ class TestZfsScaling(unittest2.TestCase):
run_counter=0
with patch.object(ExecuteNode,'run', run_count) as p:
with patch('time.strftime', return_value="test-20101112000000"):
with mocktime("20101112000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
#this triggers if you make a change with an impact of more than O(snapshot_count/2)
expected_runs=343
print("ACTUAL RUNS: {}".format(run_counter))
expected_runs=342
print("EXPECTED RUNS: {}".format(expected_runs))
print("ACTUAL RUNS : {}".format(run_counter))
self.assertLess(abs(run_counter-expected_runs), snapshot_count/2)
run_counter=0
with patch.object(ExecuteNode,'run', run_count) as p:
with patch('time.strftime', return_value="test-20101112000001"):
with mocktime("20101112000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
#this triggers if you make a change with a performance impact of more than O(snapshot_count/2)
expected_runs=47
print("ACTUAL RUNS: {}".format(run_counter))
print("EXPECTED RUNS: {}".format(expected_runs))
print("ACTUAL RUNS : {}".format(run_counter))
self.assertLess(abs(run_counter-expected_runs), snapshot_count/2)
def test_manydatasets(self):
@ -73,12 +75,12 @@ class TestZfsScaling(unittest2.TestCase):
run_counter=0
with patch.object(ExecuteNode,'run', run_count) as p:
with patch('time.strftime', return_value="test-20101112000000"):
with mocktime("20101112000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
#this triggers if you make a change with an impact of more than O(snapshot_count/2)
expected_runs=636
#this triggers if you make a change with an impact of more than O(snapshot_count/2)
expected_runs=842
print("EXPECTED RUNS: {}".format(expected_runs))
print("ACTUAL RUNS: {}".format(run_counter))
self.assertLess(abs(run_counter-expected_runs), dataset_count/2)
@ -88,12 +90,12 @@ class TestZfsScaling(unittest2.TestCase):
run_counter=0
with patch.object(ExecuteNode,'run', run_count) as p:
with patch('time.strftime', return_value="test-20101112000001"):
with mocktime("20101112000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
#this triggers if you make a change with a performance impact of more than O(snapshot_count/2)
expected_runs=842
expected_runs=1047
print("EXPECTED RUNS: {}".format(expected_runs))
print("ACTUAL RUNS: {}".format(run_counter))
self.assertLess(abs(run_counter-expected_runs), dataset_count/2)


@ -14,15 +14,15 @@ class TestSendRecvPipes(unittest2.TestCase):
"""send basics (remote/local send pipe)"""
with self.subTest("local local pipe"):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress", "--clear-mountpoint",
"--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote local pipe"):
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--ssh-source=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
@ -30,7 +30,7 @@ class TestSendRecvPipes(unittest2.TestCase):
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("local remote pipe"):
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--ssh-target=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
@ -38,7 +38,7 @@ class TestSendRecvPipes(unittest2.TestCase):
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote remote pipe"):
with patch('time.strftime', return_value="test-20101111000003"):
with mocktime("20101111000003"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--ssh-source=localhost", "--ssh-target=localhost", "--send-pipe=dd bs=1M",
@ -72,7 +72,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
for compress in zfs_autobackup.compressors.COMPRESS_CMDS.keys():
with self.subTest("compress " + compress):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--verbose",
"--compress=" + compress]).run())
@ -83,15 +83,14 @@ test_target1/test_source2/fs2/sub@test-20101111000003
"""test different buffer configurations"""
with self.subTest("local local pipe"):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--buffer=1M"]).run())
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress", "--clear-mountpoint", "--buffer=1M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote local pipe"):
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--verbose", "--exclude-received", "--no-holds",
"--no-progress", "--ssh-source=localhost", "--buffer=1M"]).run())
@ -99,7 +98,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("local remote pipe"):
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--ssh-target=localhost", "--buffer=1M"]).run())
@ -107,7 +106,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote remote pipe"):
with patch('time.strftime', return_value="test-20101111000003"):
with mocktime("20101111000003"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--ssh-source=localhost", "--ssh-target=localhost", "--buffer=1M"]).run())
@ -139,7 +138,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
"""test rate limit"""
start = time.time()
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--rate=50k"]).run())


@ -85,7 +85,7 @@ class TestThinner(unittest2.TestCase):
if random.random()>=0.5:
things.append(Thing(now))
(keeps, removes)=thinner.thin(things, now=now)
(keeps, removes)=thinner.thin(things, keep_objects=[], now=now)
things=keeps
@ -143,7 +143,7 @@ class TestThinner(unittest2.TestCase):
if random.random()>=0.5:
things.append(Thing(now))
(things, removes)=thinner.thin(things, now=now)
(things, removes)=thinner.thin(things, keep_objects=[], now=now)
result=[]
for thing in things:


@ -38,7 +38,7 @@ class TestZfsVerify(unittest2.TestCase):
shelltest("dd if=/dev/urandom of=/dev/zvol/test_source1/fs1/bad_zvol count=1 bs=512k")
#create backup
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --no-holds".split(" ")).run())
#Do an ugly hack to create a fault in the bad filesystem


@ -35,7 +35,7 @@ class TestZfsAutobackup(unittest2.TestCase):
def test_snapshotmode(self):
"""test snapshot tool mode"""
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test --no-progress --verbose".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -55,11 +55,12 @@ test_target1
""")
def test_defaults(self):
self.maxDiff=2000
with self.subTest("no datasets selected"):
with OutputIO() as buf:
with redirect_stderr(buf):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertTrue(ZfsAutobackup("nonexisting test_target1 --verbose --debug --no-progress".split(" ")).run())
print(buf.getvalue())
@ -69,7 +70,7 @@ test_target1
with self.subTest("defaults with full verbose and debug"):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --debug --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -98,7 +99,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
""")
with self.subTest("bare defaults, allow empty"):
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --no-progress".split(" ")).run())
@ -168,47 +169,43 @@ test_target1/test_source2/fs2/sub@test-20101111000001 userrefs 1 -
""")
#make sure time handling is correct. Try to make snapshots a year apart and verify that only snapshots mostly 1y old are kept
#So in this case we only want to see 2 snapshots of 2011, and none of the 2010's anymore.
with self.subTest("test time checking"):
with patch('time.strftime', return_value="test-20111111000000"):
with mocktime("20111211000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --no-progress".split(" ")).run())
time_str="20111112000000" #month in the "future"
future_timestamp=time_secs=time.mktime(time.strptime(time_str,"%Y%m%d%H%M%S"))
with patch('time.time', return_value=future_timestamp):
with patch('time.strftime', return_value="test-20111111000001"):
with mocktime("20111211000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --keep-source 1y1y --keep-target 1d1y --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20111111000000
test_source1/fs1@test-20111111000001
test_source1/fs1@test-20111211000000
test_source1/fs1@test-20111211000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20111111000000
test_source1/fs1/sub@test-20111111000001
test_source1/fs1/sub@test-20111211000000
test_source1/fs1/sub@test-20111211000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20111111000000
test_source2/fs2/sub@test-20111111000001
test_source2/fs2/sub@test-20111211000000
test_source2/fs2/sub@test-20111211000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20111111000000
test_target1/test_source1/fs1@test-20111111000001
test_target1/test_source1/fs1@test-20111211000000
test_target1/test_source1/fs1@test-20111211000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20111111000000
test_target1/test_source1/fs1/sub@test-20111111000001
test_target1/test_source1/fs1/sub@test-20111211000000
test_target1/test_source1/fs1/sub@test-20111211000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20111111000000
test_target1/test_source2/fs2/sub@test-20111111000001
test_target1/test_source2/fs2/sub@test-20111211000000
test_target1/test_source2/fs2/sub@test-20111211000001
""")
def test_ignore_othersnaphots(self):
@ -216,7 +213,7 @@ test_target1/test_source2/fs2/sub@test-20111111000001
r=shelltest("zfs snapshot test_source1/fs1@othersimple")
r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -251,7 +248,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
r=shelltest("zfs snapshot test_source1/fs1@othersimple")
r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --other-snapshots".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -286,7 +283,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_nosnapshot(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -310,7 +307,7 @@ test_target1/test_source2/fs2
def test_nosend(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -333,7 +330,7 @@ test_target1
def test_ignorereplicated(self):
r=shelltest("zfs snapshot test_source1/fs1@otherreplication")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --ignore-replicated".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -362,7 +359,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_noholds(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --no-progress".split(" ")).run())
r=shelltest("zfs get -r userrefs test_source1 test_source2 test_target1")
@ -394,7 +391,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 userrefs 0 -
def test_strippath(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --strip-path=1 --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -437,10 +434,10 @@ test_target1/fs2/sub@test-20101111000000
r=shelltest("zfs set refreservation=1M test_source1/fs1")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-refreservation".split(" ")).run())
r=shelltest("zfs get refreservation -r test_source1 test_source2 test_target1")
r=shelltest("zfs get -r refreservation test_source1 test_source2 test_target1")
self.assertMultiLineEqual(r,"""
NAME PROPERTY VALUE SOURCE
test_source1 refreservation none default
@ -475,10 +472,10 @@ test_target1/test_source2/fs2/sub@test-20101111000000 refreservation -
self.skipTest("This zfs-userspace version doesnt support -o")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-mountpoint --debug".split(" ")).run())
r=shelltest("zfs get canmount -r test_source1 test_source2 test_target1")
r=shelltest("zfs get -r canmount test_source1 test_source2 test_target1")
self.assertMultiLineEqual(r,"""
NAME PROPERTY VALUE SOURCE
test_source1 canmount on default
@ -493,13 +490,13 @@ test_source2/fs2/sub@test-20101111000000 canmount - -
test_source2/fs3 canmount on default
test_source2/fs3/sub canmount on default
test_target1 canmount on default
test_target1/test_source1 canmount on default
test_target1/test_source1 canmount off local
test_target1/test_source1/fs1 canmount noauto local
test_target1/test_source1/fs1@test-20101111000000 canmount - -
test_target1/test_source1/fs1/sub canmount noauto local
test_target1/test_source1/fs1/sub@test-20101111000000 canmount - -
test_target1/test_source2 canmount on default
test_target1/test_source2/fs2 canmount on default
test_target1/test_source2 canmount off local
test_target1/test_source2/fs2 canmount off local
test_target1/test_source2/fs2/sub canmount noauto local
test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
""")
@ -508,18 +505,17 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
def test_rollback(self):
#initial backup
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
#make change
r=shelltest("zfs mount test_target1/test_source1/fs1")
r=shelltest("touch /test_target1/test_source1/fs1/change.txt")
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
#should fail (busy)
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
#rollback, should succeed
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --rollback".split(" ")).run())
@ -527,36 +523,35 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
def test_destroyincompat(self):
#initial backup
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
#add multiple compatible snapshot (written is still 0)
r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible1")
r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible2")
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
#should be ok, is compatible
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
#add incompatible snapshot by changing and snapshotting
r=shelltest("zfs mount test_target1/test_source1/fs1")
r=shelltest("touch /test_target1/test_source1/fs1/change.txt")
r=shelltest("zfs snapshot test_target1/test_source1/fs1@incompatible1")
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
#--test should fail, now incompatible
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --test".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
#should fail, now incompatible
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000003"):
with mocktime("20101111000003"):
#--test should succeed by destroying incompatibles
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible --test".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000003"):
with mocktime("20101111000003"):
#should succeed by destroying incompatibles
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible".split(" ")).run())
@ -594,13 +589,13 @@ test_target1/test_source2/fs2/sub@test-20101111000003
#test all ssh directions
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --exclude-received".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-target localhost --exclude-received".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --ssh-target localhost".split(" ")).run())
@ -645,7 +640,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
def test_minchange(self):
#initial
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
#make small change, use umount to reflect the changes immediately
@ -655,7 +650,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
#too small change, takes no snapshots
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
#make big change
@ -663,7 +658,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
r=shelltest("zfs umount test_source1/fs1; zfs mount test_source1/fs1")
#bigger change, should take snapshot
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -696,7 +691,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_test(self):
#initial
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -713,12 +708,12 @@ test_target1
""")
#actual make initial backup
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
#test incremental
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --allow-empty --verbose --test".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -754,7 +749,7 @@ test_target1/test_source2/fs2/sub@test-20101111000001
shelltest("zfs create test_target1/test_source1")
shelltest("zfs send test_source1/fs1@migrate1| zfs recv test_target1/test_source1/fs1")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -787,15 +782,15 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_keep0(self):
"""test if keep-source=0 and keep-target=0 dont delete common snapshot and break backup"""
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0".split(" ")).run())
#make snapshot, shouldn't delete 0
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
#make snapshot 2, shouldn't delete 0 since it has holds, but will delete 1 since it has no holds
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
@ -827,7 +822,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
""")
#make another backup but with no-holds. we should naturally end up with only number 3
with patch('time.strftime', return_value="test-20101111000003"):
with mocktime("20101111000003"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0 --no-holds --allow-empty".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
@ -857,7 +852,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
# run with snapshot-only for 4, since we used no-holds, it will delete 3 on the source, breaking the backup
with patch('time.strftime', return_value="test-20101111000004"):
with mocktime("20101111000004"):
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
@ -888,23 +883,28 @@ test_target1/test_source2/fs2/sub@test-20101111000003
def test_progress(self):
r=shelltest("dd if=/dev/zero of=/test_source1/data.txt bs=200000 count=1")
r=shelltest("dd if=/dev/urandom of=/test_source1/data.txt bs=5M count=1")
r = shelltest("zfs snapshot test_source1@test")
l=LogConsole(show_verbose=True, show_debug=False, color=False)
l=LogConsole(show_verbose=True, show_debug=True, color=False)
n=ZfsNode(utc=False, snapshot_time_format="bla", hold_name="bla", logger=l)
d=ZfsDataset(n,"test_source1@test")
sp=d.send_pipe([], prev_snapshot=None, resume_token=None, show_progress=True, raw=False, send_pipes=[], send_properties=True, write_embedded=True, zfs_compressed=True)
with OutputIO() as buf:
with redirect_stderr(buf):
try:
n.run(["sleep", "2"], inp=sp)
p=n.run(["mbuffer", "-R1M", "-m4096", "-o" ,"/dev/null"], inp=sp)
# p=n.run(["dd", "of=/dev/null"], inp=sp)
except:
pass
print(buf.getvalue())
print(list(buf.getvalue()))
# correct message?
self.assertRegex(buf.getvalue(),".*>>> .*minutes left.*")
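
The test throttles a 5M stream through mbuffer at 1M/s so the transfer is slow enough for progress reporting to kick in, then asserts that a ">>> ... minutes left" line lands on stderr. As an illustration only (not the tool's actual code), a "minutes left" estimate can be derived from the parsable status lines that "zfs send -v -P" emits:

    import re
    import time

    def estimate_minutes_left(status_line, total_bytes, started_at):
        """Parse one 'zfs send -v -P' status line ('HH:MM:SS <bytes> <snapshot>')
        and return an estimated number of minutes left, or None on no match."""
        m = re.match(r"\d\d:\d\d:\d\d\s+(\d+)\s+\S+", status_line)
        if m is None:
            return None
        sent = int(m.group(1))
        elapsed = max(time.time() - started_at, 1e-6)
        rate = sent / elapsed                       # observed bytes per second
        if sent >= total_bytes or rate == 0:
            return 0.0
        return (total_bytes - sent) / rate / 60.0   # minutes left at this rate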

View File

@ -10,10 +10,10 @@ class TestZfsAutobackup31(unittest2.TestCase):
def test_no_thinning(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --keep-target=0 --keep-source=0 --no-thinning".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -54,10 +54,10 @@ test_target1/test_source2/fs2/sub@test-20101111000001
shelltest("zfs create test_target1/a")
shelltest("zfs create test_target1/b")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1/a --no-progress --verbose --debug".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1/b --no-progress --verbose".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t snapshot test_target1")
@ -75,7 +75,7 @@ test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000
def test_zfs_compressed(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --zfs-compressed".split(" ")).run())
@ -84,7 +84,7 @@ test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000
shelltest("zfs set autobackup:test=true test_source1")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --force --strip-path=1".split(" ")).run())
@ -101,13 +101,13 @@ test_target1/fs2/sub@test-20101111000000
shelltest("zfs snapshot -r test_source1@somesnapshot")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(
ZfsAutobackup(
"test test_target1 --verbose --allow-empty --exclude-unchanged=1".split(" ")).run())
#everything should be excluded, but should not return an error (see #190)
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(
ZfsAutobackup(
"test test_target1 --verbose --allow-empty --exclude-unchanged=1".split(" ")).run())

View File

@ -0,0 +1,200 @@
from basetest import *
class TestZfsAutobackup32(unittest2.TestCase):
"""various new 3.2 features"""
def setUp(self):
prepare_zpools()
self.longMessage=True
def test_invalid_common_snapshot(self):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
#create 2 snapshots with the same name, which are invalid as a common snapshot
shelltest("zfs snapshot test_source1/fs1@invalid")
shelltest("zfs snapshot test_target1/test_source1/fs1@invalid")
with mocktime("20101111000001"):
#try the old way (without guid checking), and fail:
self.assertEqual(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --no-guid-check".split(" ")).run(),1)
#new way should be ok:
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@invalid
test_source1/fs1@test-20101111000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source1/fs1/sub@test-20101111000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs2/sub@test-20101111000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@invalid
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
""")
def test_invalid_common_snapshot_with_data(self):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
#create 2 snapshots with the same name, which are invalid as a common snapshot
shelltest("zfs snapshot test_source1/fs1@invalid")
shelltest("touch /test_target1/test_source1/fs1/shouldnotbeHere")
shelltest("zfs snapshot test_target1/test_source1/fs1@invalid")
with mocktime("20101111000001"):
#try the old way and fail:
self.assertEqual(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible --no-guid-check".split(" ")).run(),1)
#new way should be ok
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-incompatible".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@invalid
test_source1/fs1@test-20101111000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source1/fs1/sub@test-20101111000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs2/sub@test-20101111000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
""")
#check consistent mounting behaviour, see issue #112
def test_mount_consitency_mounted(self):
"""only filesystems that have canmount=on with a mountpoint should be mounted. """
shelltest("zfs create -V 10M test_source1/fs1/subvol")
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
r=shelltest("zfs mount |grep -o /test_target1.*")
self.assertMultiLineEqual(r,"""
/test_target1
/test_target1/test_source1/fs1
/test_target1/test_source1/fs1/sub
/test_target1/test_source2/fs2/sub
""")
def test_mount_consitency_unmounted(self):
"""only test_target1 should be mounted in this test"""
shelltest("zfs create -V 10M test_source1/fs1/subvol")
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --clear-mountpoint".split(" ")).run())
r=shelltest("zfs mount |grep -o /test_target1.*")
self.assertMultiLineEqual(r,"""
/test_target1
""")
def test_transfer_thinning(self):
# test pre/post/during transfer thinning and efficient transfer (no transferring of stuff that gets deleted on target)
#less output
shelltest("zfs set autobackup:test2=true test_source1/fs1/sub")
# nobody wants this one; it will be destroyed before transferring (over a year old)
with mocktime("20000101000000"):
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
# only target wants this one (monthlys)
with mocktime("20010101000000"):
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
# both want this one (daily + monthly)
# other snapshots should not influence the middle one that we actually want.
with mocktime("20010201000000"):
shelltest("zfs snapshot test_source1/fs1/sub@other1")
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
shelltest("zfs snapshot test_source1/fs1/sub@other2")
# only source wants this one (daily)
with mocktime("20010202000000"):
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
#will become the common snapshot
with OutputIO() as buf:
with redirect_stdout(buf):
with mocktime("20010203000000"):
self.assertFalse(ZfsAutobackup("--keep-source=1d10d --keep-target=1m10m --allow-empty --verbose --clear-mountpoint --other-snapshots test2 test_target1".split(" ")).run())
print(buf.getvalue())
self.assertIn(
"""
[Source] test_source1/fs1/sub@test2-20000101000000: Destroying
[Source] test_source1/fs1/sub@test2-20010101000000: -> test_target1/test_source1/fs1/sub (new)
[Source] test_source1/fs1/sub@other1: -> test_target1/test_source1/fs1/sub
[Source] test_source1/fs1/sub@test2-20010101000000: Destroying
[Source] test_source1/fs1/sub@test2-20010201000000: -> test_target1/test_source1/fs1/sub
[Source] test_source1/fs1/sub@other2: -> test_target1/test_source1/fs1/sub
[Source] test_source1/fs1/sub@test2-20010203000000: -> test_target1/test_source1/fs1/sub
""", buf.getvalue())
r=shelltest("zfs list -H -o name -r -t snapshot test_source1 test_target1")
self.assertMultiLineEqual(r,"""
test_source1/fs1/sub@other1
test_source1/fs1/sub@test2-20010201000000
test_source1/fs1/sub@other2
test_source1/fs1/sub@test2-20010202000000
test_source1/fs1/sub@test2-20010203000000
test_target1/test_source1/fs1/sub@test2-20010101000000
test_target1/test_source1/fs1/sub@other1
test_target1/test_source1/fs1/sub@test2-20010201000000
test_target1/test_source1/fs1/sub@other2
test_target1/test_source1/fs1/sub@test2-20010203000000
""")

View File

@ -1,3 +1,5 @@
from os.path import exists
from basetest import *
from zfs_autobackup.BlockHasher import BlockHasher
@ -9,6 +11,10 @@ class TestZfsCheck(unittest2.TestCase):
def test_volume(self):
if exists("/.dockerenv"):
self.skipTest("FIXME: zfscheck volumes not supported in docker yet")
prepare_zpools()
shelltest("zfs create -V200M test_source1/vol")
@ -50,7 +56,7 @@ class TestZfsCheck(unittest2.TestCase):
shelltest("mkfifo /test_source1/f")
shelltest("zfs snapshot test_source1@test")
ZfsCheck("test_source1@test --debug".split(" "), print_arguments=False).run()
with self.subTest("Generate"):
with OutputIO() as buf:
with redirect_stdout(buf):
@ -178,15 +184,16 @@ whole_whole2_partial 0 309ffffba2e1977d12f3b7469971f30d28b94bd8
shelltest("cp tests/data/whole /test_source1/testfile")
shelltest("zfs snapshot test_source1@test")
#breaks pipe when grep exits:
#breaks pipe when head exits
#important to use --debug, since that generates extra output which would be problematic if we didn't do correct SIGPIPE handling
shelltest("python -m zfs_autobackup.ZfsCheck test_source1@test --debug | grep -m1 'Hashing tree'")
# time.sleep(5)
shelltest("python -m zfs_autobackup.ZfsCheck test_source1@test --debug | head -n1")
#should NOT be mounted anymore if cleanup went ok:
self.assertNotRegex(shelltest("mount"), "test_source1@test")
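
The scenario: head -n1 closes the pipe after one line, and the extra --debug output would then die on a broken pipe before the snapshot is unmounted. The sigpipe_handler referenced from util is not shown in this diff; one common way to survive a closed stdout so that cleanup still runs is to redirect output to /dev/null, e.g.:

    import os
    import sys
    from signal import signal, SIGPIPE

    def sigpipe_handler(sig, frame):
        """Illustrative handler (the real one lives in zfs_autobackup.util):
        re-point stdout/stderr at /dev/null so later debug output can't blow
        up, letting finally-blocks unmount and clean up normally."""
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, sys.stdout.fileno())
        os.dup2(devnull, sys.stderr.fileno())

    signal(SIGPIPE, sigpipe_handler)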
def test_brokenpipe_cleanup_volume(self):
if exists("/.dockerenv"):
self.skipTest("FIXME: zfscheck volumes not supported in docker yet")
prepare_zpools()
shelltest("zfs create -V200M test_source1/vol")
@ -194,7 +201,7 @@ whole_whole2_partial 0 309ffffba2e1977d12f3b7469971f30d28b94bd8
#breaks pipe when grep exits:
#important to use --debug, since that generates extra output which would be problematic if we didn't do correct SIGPIPE handling
shelltest("python -m zfs_autobackup.ZfsCheck test_source1/vol@test --debug | grep -m1 'Hashing file'")
shelltest("python -m zfs_autobackup.ZfsCheck test_source1/vol@test --debug| grep -m1 'Hashing file'")
# time.sleep(1)
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)

tests/tests Symbolic link
View File

@ -0,0 +1 @@
.

tests/tests_docker Executable file
View File

@ -0,0 +1,42 @@
#!/bin/sh
#NOTE: This script will be started inside the test docker container
set -e
if ! [ -e /.dockerenv ]; then
echo "only run this script inside a docker container!"
exit 1
fi
if ! [ -e /dev/ram0 ]; then
echo "Please load this module outside container:" >&2
echo "sudo modprobe brd rd_size=512000" >&2
exit 1
fi
#start sshd and other stuff
ssh-keygen -A
/usr/sbin/sshd
udevd -d
#config ssh
if ! [ -e /root/.ssh/id_rsa ]; then
ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''
fi
cat >> ~/.ssh/config <<EOF
Host *
addkeystoagent yes
controlpath ~/.ssh/control-master-%r@%h:%p
controlmaster auto
controlpersist 3600
EOF
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh -oStrictHostKeyChecking=no localhost 'echo SSH OK'
cd /app
python -m unittest discover /app/tests -vvvvf "$@"

View File

@ -10,7 +10,7 @@ class CliBase(object):
Overridden in subclasses that add stuff for the specific programs."""
# also used by setup.py
VERSION = "3.2"
VERSION = "3.3"
HEADER = "{} v{} - (c)2022 E.H.Eefting (edwin@datux.nl)".format(os.path.basename(sys.argv[0]), VERSION)
def __init__(self, argv, print_arguments=True):

View File

@ -36,7 +36,7 @@ class LogConsole:
def warning(self, txt):
self.clear_progress()
if self.colorama:
print(colorama.Fore.YELLOW + colorama.Style.BRIGHT + " NOTE: " + txt + colorama.Style.RESET_ALL)
print(colorama.Fore.YELLOW + colorama.Style.NORMAL + " NOTE: " + txt + colorama.Style.RESET_ALL)
else:
print(" NOTE: " + txt)
sys.stdout.flush()

View File

@ -1,4 +1,3 @@
import time
from .ThinnerRule import ThinnerRule
@ -37,7 +36,7 @@ class Thinner:
return ret
def thin(self, objects, keep_objects=None, now=None):
def thin(self, objects, keep_objects, now):
"""thin list of objects with current schedule rules. objects: list of
objects to thin. every object should have timestamp attribute.
@ -49,8 +48,6 @@ class Thinner:
now: use this timestamp as the current time
"""
if not keep_objects:
keep_objects = []
# always keep a number of the last objects?
if self.always_keep:
@ -68,9 +65,6 @@ class Thinner:
for rule in self.rules:
time_blocks[rule.period] = {}
if not now:
now = int(time.time())
keeps = []
removes = []

View File

@ -1,7 +1,9 @@
import argparse
import re
import sys
from .CliBase import CliBase
from .util import datetime_now
class ZfsAuto(CliBase):
@ -46,8 +48,8 @@ class ZfsAuto(CliBase):
self.verbose("NOTE: Source and target are on the same host, excluding target-path from selection.")
self.exclude_paths.append(args.target_path)
else:
if not args.exclude_received:
self.verbose("NOTE: Source and target are on the same host, adding --exclude-received to commandline.")
if not args.exclude_received and not args.include_received:
self.verbose("NOTE: Source and target are on the same host, adding --exclude-received to commandline. (use --include-received to overrule)")
args.exclude_received = True
if args.test:
@ -58,7 +60,11 @@ class ZfsAuto(CliBase):
self.snapshot_time_format = args.snapshot_format.format(args.backup_name)
self.hold_name = args.hold_format.format(args.backup_name)
dt = datetime_now(args.utc)
self.verbose("")
self.verbose("Current time {} : {}".format(args.utc and "UTC" or " ", dt.strftime("%Y-%m-%d %H:%M:%S")))
self.verbose("Selecting dataset property : {}".format(self.property_name))
self.verbose("Snapshot format : {}".format(self.snapshot_time_format))
self.verbose("Timezone : {}".format("UTC" if args.utc else "Local"))
@ -103,6 +109,17 @@ class ZfsAuto(CliBase):
group.add_argument('--exclude-received', action='store_true',
help='Exclude datasets that have the origin of their autobackup: property as "received". '
'This can avoid recursive replication between two backup partners.')
group.add_argument('--include-received', action='store_true',
help=argparse.SUPPRESS)
def regex_argument_type(input_line):
"""Parses regex arguments into re.Pattern objects"""
try:
return re.compile(input_line)
except:
raise ValueError("Could not parse argument '{}' as a regular expression".format(input_line))
group.add_argument('--exclude-snapshot-pattern', action='append', default=[], type=regex_argument_type, help="Regular expression to match snapshots that will be ignored.")
return parser
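
Because the option uses action='append' it can be given multiple times, and a snapshot is skipped when any pattern matches its name (re.search, as in ZfsDataset.is_excluded below). A quick illustration of the matching semantics, with made-up pattern values:

    import re

    patterns = [re.compile(p) for p in (r"@sanoid-.*", r"@manual$")]

    def is_excluded(snapshot_name):
        # mirrors ZfsDataset.is_excluded: any pattern match excludes the snapshot
        return any(p.search(snapshot_name) for p in patterns)

    assert is_excluded("pool/fs@sanoid-daily-2024")
    assert is_excluded("pool/fs@manual")
    assert not is_excluded("pool/fs@test-20101111000000")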

View File

@ -1,9 +1,7 @@
import time
import argparse
from datetime import datetime
from signal import signal, SIGPIPE
from .util import output_redir, sigpipe_handler
from .util import output_redir, sigpipe_handler, datetime_now
from .ZfsAuto import ZfsAuto
@ -33,8 +31,8 @@ class ZfsAutobackup(ZfsAuto):
if args.allow_empty:
args.min_change = 0
if args.destroy_incompatible:
args.rollback = True
# if args.destroy_incompatible:
# args.rollback = True
if args.resume:
self.warning("The --resume option isn't needed anymore (it's autodetected now)")
@ -72,6 +70,8 @@ class ZfsAutobackup(ZfsAuto):
help='Send over other snapshots as well, not just the ones created by this tool.')
group.add_argument('--set-snapshot-properties', metavar='PROPERTY=VALUE,...', type=str,
help='List of properties to set on the snapshot.')
group.add_argument('--no-guid-check', action='store_true',
help="Don't check the guid of common snapshots. (faster)")
group = parser.add_argument_group("Transfer options")
@ -97,7 +97,7 @@ class ZfsAutobackup(ZfsAuto):
group.add_argument('--force', '-F', action='store_true',
help='Use zfs -F option to force overwrite/rollback. (Useful with --strip-path=1, but use with care)')
group.add_argument('--destroy-incompatible', action='store_true',
help='Destroy incompatible snapshots on target. Use with care! (implies --rollback)')
help='Destroy incompatible snapshots on target. Use with care! (also does rollback of dataset)')
group.add_argument('--ignore-transfer-errors', action='store_true',
help='Ignore transfer errors (still checks if received filesystem exists. useful for '
'acltype errors)')
@ -119,6 +119,8 @@ class ZfsAutobackup(ZfsAuto):
help='Limit data transfer rate in Bytes/sec (e.g. 128K. requires mbuffer.)')
group.add_argument('--buffer', metavar='SIZE', default=None,
help='Add zfs send and recv buffers to smooth out IO bursts. (e.g. 128M. requires mbuffer)')
parser.add_argument('--buffer-chunk-size', metavar="BUFFERCHUNKSIZE", default=None,
help='Tune chunk size when mbuffer is used. (requires mbuffer.)')
group.add_argument('--send-pipe', metavar="COMMAND", default=[], action='append',
help='pipe zfs send output through COMMAND (can be used multiple times)')
group.add_argument('--recv-pipe', metavar="COMMAND", default=[], action='append',
@ -142,7 +144,10 @@ class ZfsAutobackup(ZfsAuto):
# NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
def thin_missing_targets(self, target_dataset, used_target_datasets):
"""thin target datasets that are missing on the source."""
"""thin target datasets that are missing on the source.
:type used_target_datasets: list[ZfsDataset]
:type target_dataset: ZfsDataset
"""
self.debug("Thinning obsolete datasets")
missing_datasets = [dataset for dataset in target_dataset.recursive_datasets if
@ -150,6 +155,7 @@ class ZfsAutobackup(ZfsAuto):
count = 0
for dataset in missing_datasets:
self.debug("analyse missing {}".format(dataset))
count = count + 1
if self.args.progress:
@ -167,7 +173,11 @@ class ZfsAutobackup(ZfsAuto):
# NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
def destroy_missing_targets(self, target_dataset, used_target_datasets):
"""destroy target datasets that are missing on the source and that meet the requirements"""
"""destroy target datasets that are missing on the source and that meet the requirements
:type used_target_datasets: list[ZfsDataset]
:type target_dataset: ZfsDataset
"""
self.debug("Destroying obsolete datasets")
@ -193,7 +203,7 @@ class ZfsAutobackup(ZfsAuto):
else:
# past the deadline?
deadline_ttl = ThinnerRule("0s" + self.args.destroy_missing).ttl
now = int(time.time())
now = datetime_now(self.args.utc).timestamp()
if dataset.our_snapshots[-1].timestamp + deadline_ttl > now:
dataset.verbose("Destroy missing: Waiting for deadline.")
else:
@ -234,11 +244,22 @@ class ZfsAutobackup(ZfsAuto):
"""determine the zfs send pipe"""
ret = []
_mbuffer = False
_buffer = "16M"
_cs = "128k"
_rate = False
# IO buffer
if self.args.buffer:
logger("zfs send buffer : {}".format(self.args.buffer))
ret.extend([ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m" + self.args.buffer])
_mbuffer = True
_buffer = self.args.buffer
# IO chunk size
if self.args.buffer_chunk_size:
logger("zfs send chunk size : {}".format(self.args.buffer_chunk_size))
_mbuffer = True
_cs = self.args.buffer_chunk_size
# custom pipes
for send_pipe in self.args.send_pipe:
@ -256,7 +277,14 @@ class ZfsAutobackup(ZfsAuto):
# transfer rate
if self.args.rate:
logger("zfs send transfer rate : {}".format(self.args.rate))
ret.extend([ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m16M", "-R" + self.args.rate])
_mbuffer = True
_rate = self.args.rate
if _mbuffer:
cmd = [ExecuteNode.PIPE, "mbuffer", "-q", "-s{}".format(_cs), "-m{}".format(_buffer)]
if _rate:
cmd.append("-R{}".format(self.args.rate))
ret.extend(cmd)
return ret
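
With this refactor the three mbuffer-related options collapse into a single mbuffer invocation instead of stacking one buffer per option. For example, with the hypothetical values --buffer 256M --buffer-chunk-size 1M --rate 10M, the merged logic above produces one pipe element:

    # mirrors the merged logic above (PIPE marker omitted for brevity)
    buffer, chunk_size, rate = "256M", "1M", "10M"   # hypothetical argument values

    cmd = ["mbuffer", "-q",
           "-s{}".format(chunk_size or "128k"),      # chunk size, default 128k
           "-m{}".format(buffer or "16M")]           # buffer size, default 16M
    if rate:
        cmd.append("-R{}".format(rate))

    assert cmd == ["mbuffer", "-q", "-s1M", "-m256M", "-R10M"]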
@ -278,11 +306,19 @@ class ZfsAutobackup(ZfsAuto):
logger("zfs recv custom pipe : {}".format(recv_pipe))
# IO buffer
if self.args.buffer:
if self.args.buffer or self.args.buffer_chunk_size:
_cs = "128k"
_buffer = "16M"
# only add a second buffer if it's useful. (e.g. non-local transfer or other pipes active)
if self.args.ssh_source != None or self.args.ssh_target != None or self.args.recv_pipe or self.args.send_pipe or self.args.compress != None:
logger("zfs recv buffer : {}".format(self.args.buffer))
ret.extend(["mbuffer", "-q", "-s128k", "-m" + self.args.buffer, ExecuteNode.PIPE])
if self.args.buffer_chunk_size:
_cs = self.args.buffer_chunk_size
if self.args.buffer:
_buffer = self.args.buffer
ret.extend(["mbuffer", "-q", "-s{}".format(_cs), "-m{}".format(_buffer), ExecuteNode.PIPE])
return ret
@ -342,6 +378,7 @@ class ZfsAutobackup(ZfsAuto):
and target_dataset.parent \
and target_dataset.parent not in target_datasets \
and not target_dataset.parent.exists:
target_dataset.debug("Creating unmountable parents")
target_dataset.parent.create_filesystem(parents=True)
# determine common zpool features (cached, so no problem we call it often)
@ -360,10 +397,8 @@ class ZfsAutobackup(ZfsAuto):
destroy_incompatible=self.args.destroy_incompatible,
send_pipes=send_pipes, recv_pipes=recv_pipes,
decrypt=self.args.decrypt, encrypt=self.args.encrypt,
zfs_compressed=self.args.zfs_compressed, force=self.args.force)
zfs_compressed=self.args.zfs_compressed, force=self.args.force, guid_check=not self.args.no_guid_check)
except Exception as e:
# if self.args.progress:
# self.clear_progress()
fail_count = fail_count + 1
source_dataset.error("FAILED: " + str(e))
@ -371,8 +406,6 @@ class ZfsAutobackup(ZfsAuto):
self.verbose("Debug mode, aborting on first error")
raise
# if self.args.progress:
# self.clear_progress()
target_path_dataset = target_node.get_dataset(self.args.target_path)
if not self.args.no_thinning:
@ -439,7 +472,8 @@ class ZfsAutobackup(ZfsAuto):
snapshot_time_format=self.snapshot_time_format, hold_name=self.hold_name, logger=self,
ssh_config=self.args.ssh_config,
ssh_to=self.args.ssh_source, readonly=self.args.test,
debug_output=self.args.debug_output, description=description, thinner=source_thinner)
debug_output=self.args.debug_output, description=description, thinner=source_thinner,
exclude_snapshot_patterns=self.args.exclude_snapshot_pattern)
################# select source datasets
self.set_title("Selecting")
@ -454,8 +488,7 @@ class ZfsAutobackup(ZfsAuto):
################# snapshotting
if not self.args.no_snapshot:
self.set_title("Snapshotting")
dt = datetime.utcnow() if self.args.utc else datetime.now()
snapshot_name = dt.strftime(self.snapshot_time_format)
snapshot_name = datetime_now(self.args.utc).strftime(self.snapshot_time_format)
source_node.consistent_snapshot(source_datasets, snapshot_name,
min_changed_bytes=self.args.min_change,
pre_snapshot_cmds=self.args.pre_snapshot_cmd,

View File

@ -6,7 +6,6 @@ from .ZfsAuto import ZfsAuto
from .ZfsNode import ZfsNode
import sys
raise("need to be rewritten to use zfs-check")
# # try to be as unix compatible as possible, while still having decent performance
# def compare_trees_find(source_node, source_path, target_node, target_path):
@ -87,8 +86,8 @@ def verify_filesystem(source_snapshot, source_mnt, target_snapshot, target_mnt,
raise(Exception("program errror, unknown method"))
finally:
source_snapshot.unmount()
target_snapshot.unmount()
source_snapshot.unmount(source_mnt)
target_snapshot.unmount(target_mnt)
# def hash_dev(node, dev):
@ -187,7 +186,7 @@ class ZfsAutoverify(ZfsAuto):
target_dataset = target_node.get_dataset(target_name)
# find common snapshots to verify
source_snapshot = source_dataset.find_common_snapshot(target_dataset)
source_snapshot = source_dataset.find_common_snapshot(target_dataset, True)
target_snapshot = target_dataset.find_snapshot(source_snapshot)
if source_snapshot is None or target_snapshot is None:
@ -236,7 +235,8 @@ class ZfsAutoverify(ZfsAuto):
snapshot_time_format=self.snapshot_time_format, hold_name=self.hold_name, logger=self,
ssh_config=self.args.ssh_config,
ssh_to=self.args.ssh_source, readonly=self.args.test,
debug_output=self.args.debug_output, description=description)
debug_output=self.args.debug_output, description=description,
exclude_snapshot_patterns=self.args.exclude_snapshot_pattern)
################# select source datasets
self.set_title("Selecting")
@ -307,6 +307,7 @@ class ZfsAutoverify(ZfsAuto):
def cli():
import sys
raise(Exception("This program is incomplete, dont use it yet."))
signal(SIGPIPE, sigpipe_handler)
failed = ZfsAutoverify(sys.argv[1:], False).run()
sys.exit(min(failed,255))

View File

@ -74,7 +74,7 @@ class ZfsCheck(CliBase):
def cleanup_zfs_filesystem(self, snapshot):
mnt = "/tmp/" + tmp_name()
snapshot.unmount()
snapshot.unmount(mnt)
self.debug("Cleaning up temporary mount point")
self.node.run(["rmdir", mnt], hide_errors=True, valid_exitcodes=[])

View File

@ -58,6 +58,13 @@ class ZfsDataset:
"""
self.zfs_node.error("{}: {}".format(self.name, txt))
def warning(self, txt):
"""
Args:
:type txt: str
"""
self.zfs_node.warning("{}: {}".format(self.name, txt))
def debug(self, txt):
"""
Args:
@ -81,8 +88,8 @@ class ZfsDataset:
Args:
:type count: int
"""
components=self.split_path()
if count>len(components):
components = self.split_path()
if count > len(components):
raise Exception("Trying to strip too much from path ({} items from {})".format(count, self.name))
return "/".join(components[count:])
@ -117,13 +124,27 @@ class ZfsDataset:
def is_snapshot(self):
"""true if this dataset is a snapshot"""
return self.name.find("@") != -1
@property
def is_excluded(self):
"""true if this dataset is a snapshot and matches the exclude pattern"""
if not self.is_snapshot:
return False
for pattern in self.zfs_node.exclude_snapshot_patterns:
if pattern.search(self.name) is not None:
self.debug("Excluded (path matches snapshot exclude pattern)")
return True
def is_selected(self, value, source, inherited, exclude_received, exclude_paths, exclude_unchanged):
"""determine if dataset should be selected for backup (called from
ZfsNode)
Args:
:type exclude_paths: list of str
:type exclude_paths: list[str]
:type value: str
:type source: str
:type inherited: bool
@ -189,8 +210,7 @@ class ZfsDataset:
self.verbose("Selected")
return True
@CachedProperty
@property
def parent(self):
"""get zfs-parent of this dataset. for snapshots this means it will get
the filesystem/volume that it belongs to. otherwise it will return the
@ -199,11 +219,12 @@ class ZfsDataset:
we cache this so everything in the parent that is cached also stays.
returns None if there is no parent.
:rtype: ZfsDataset | None
"""
if self.is_snapshot:
return self.zfs_node.get_dataset(self.filesystem_name)
else:
stripped=self.rstrip_path(1)
stripped = self.rstrip_path(1)
if stripped:
return self.zfs_node.get_dataset(stripped)
else:
@ -250,32 +271,46 @@ class ZfsDataset:
return None
@CachedProperty
def exists_check(self):
"""check on disk if it exists"""
self.debug("Checking if dataset exists")
return (len(self.zfs_node.run(tab_split=True, cmd=["zfs", "list", self.name], readonly=True,
valid_exitcodes=[0, 1],
hide_errors=True)) > 0)
@property
def exists(self):
"""check if dataset exists. Use force to force a specific value to be
cached, if you already know. Useful for performance reasons
"""returns True if dataset should exist.
Use force_exists to force a specific value, if you already know. Useful for performance and test reasons
"""
if self.force_exists is not None:
self.debug("Checking if filesystem exists: was forced to {}".format(self.force_exists))
if self.force_exists:
self.debug("Dataset should exist")
else:
self.debug("Dataset should not exist")
return self.force_exists
else:
self.debug("Checking if filesystem exists")
return self.exists_check
return (self.zfs_node.run(tab_split=True, cmd=["zfs", "list", self.name], readonly=True, valid_exitcodes=[0, 1],
hide_errors=True) and True)
def create_filesystem(self, parents=False):
def create_filesystem(self, parents=False, unmountable=True):
"""create a filesystem
Args:
:type parents: bool
"""
if parents:
self.verbose("Creating filesystem and parents")
self.zfs_node.run(["zfs", "create", "-p", self.name])
else:
self.verbose("Creating filesystem")
self.zfs_node.run(["zfs", "create", self.name])
# recurse up
if parents and self.parent and not self.parent.exists:
self.parent.create_filesystem(parents, unmountable)
cmd = ["zfs", "create"]
if unmountable:
cmd.extend(["-o", "canmount=off"])
cmd.append(self.name)
self.zfs_node.run(cmd)
self.force_exists = True
@ -318,9 +353,6 @@ class ZfsDataset:
"zfs", "get", "-H", "-o", "property,value", "-p", "all", self.name
]
if not self.exists:
return {}
self.debug("Getting zfs properties")
ret = {}
@ -341,7 +373,6 @@ class ZfsDataset:
if min_changed_bytes == 0:
return True
if int(self.properties['written']) < min_changed_bytes:
return False
else:
@ -358,7 +389,7 @@ class ZfsDataset:
@property
def holds(self):
"""get list of holds for dataset"""
"""get list[holds] for dataset"""
output = self.zfs_node.run(["zfs", "holds", "-H", self.name], valid_exitcodes=[0], tab_split=True,
readonly=True)
@ -401,15 +432,15 @@ class ZfsDataset:
seconds = time.mktime(dt.timetuple())
return seconds
def from_names(self, names):
"""convert a list of names to a list ZfsDatasets for this zfs_node
def from_names(self, names, force_exists=None):
"""convert a list[names] to a list ZfsDatasets for this zfs_node
Args:
:type names: list of str
:type names: list[str]
"""
ret = []
for name in names:
ret.append(self.zfs_node.get_dataset(name))
ret.append(self.zfs_node.get_dataset(name, force_exists))
return ret
@ -428,8 +459,11 @@ class ZfsDataset:
@CachedProperty
def snapshots(self):
"""get all snapshots of this dataset"""
"""get all snapshots of this dataset
:rtype: list[ZfsDataset]
"""
#FIXME: don't check for existence. (currently needed for _add_virtual_snapshots)
if not self.exists:
return []
@ -439,11 +473,11 @@ class ZfsDataset:
"zfs", "list", "-d", "1", "-r", "-t", "snapshot", "-H", "-o", "name", self.name
]
return self.from_names(self.zfs_node.run(cmd=cmd, readonly=True))
return self.from_names(self.zfs_node.run(cmd=cmd, readonly=True), force_exists=True)
@property
def our_snapshots(self):
"""get list of snapshots creates by us of this dataset"""
"""get list[snapshots] creates by us of this dataset"""
ret = []
for snapshot in self.snapshots:
if snapshot.is_ours():
@ -538,7 +572,7 @@ class ZfsDataset:
"zfs", "list", "-r", "-t", types, "-o", "name", "-H", self.name
])
return self.from_names(names[1:])
return self.from_names(names[1:], force_exists=True)
@CachedProperty
def datasets(self, types="filesystem,volume"):
@ -554,9 +588,10 @@ class ZfsDataset:
"zfs", "list", "-r", "-t", types, "-o", "name", "-H", "-d", "1", self.name
])
return self.from_names(names[1:])
return self.from_names(names[1:], force_exists=True)
def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded, send_pipes, zfs_compressed):
def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded,
send_pipes, zfs_compressed):
"""returns a pipe with zfs send output for this snapshot
resume_token: resume sending from this token. (in that case we don't
@ -564,8 +599,8 @@ class ZfsDataset:
Args:
:param send_pipes: output cmd array that will be added to actual zfs send command. (e.g. mbuffer or compression program)
:type send_pipes: list of str
:type features: list of str
:type send_pipes: list[str]
:type features: list[str]
:type prev_snapshot: ZfsDataset
:type resume_token: str
:type show_progress: bool
@ -579,7 +614,7 @@ class ZfsDataset:
# all kind of performance options:
if 'large_blocks' in features and "-L" in self.zfs_node.supported_send_options:
# large block support (only if recordsize>128k which is seldomly used)
cmd.append("-L") # --large-block
cmd.append("-L") # --large-block
if write_embedded and 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
cmd.append("-e") # --embed; WRITE_EMBEDDED, more compact stream
@ -593,8 +628,8 @@ class ZfsDataset:
# progress output
if show_progress:
cmd.append("-v") # --verbose
cmd.append("-P") # --parsable
cmd.append("-v") # --verbose
cmd.append("-P") # --parsable
# resume a previous send? (don't need more parameters in that case)
if resume_token:
@ -603,7 +638,7 @@ class ZfsDataset:
else:
# send properties
if send_properties:
cmd.append("-p") # --props
cmd.append("-p") # --props
# incremental?
if prev_snapshot:
@ -617,7 +652,8 @@ class ZfsDataset:
return output_pipe
def recv_pipe(self, pipe, features, recv_pipes, filter_properties=None, set_properties=None, ignore_exit_code=False, force=False):
def recv_pipe(self, pipe, features, recv_pipes, filter_properties=None, set_properties=None, ignore_exit_code=False,
force=False):
"""starts a zfs recv for this snapshot and uses pipe as input
note: you can it both on a snapshot or filesystem object. The
@ -627,9 +663,9 @@ class ZfsDataset:
Args:
:param recv_pipes: input cmd array that will be prepended to actual zfs recv command. (e.g. mbuffer or decompression program)
:type pipe: subprocess.pOpen
:type features: list of str
:type filter_properties: list of str
:type set_properties: list of str
:type features: list[str]
:type filter_properties: list[str]
:type set_properties: list[str]
:type ignore_exit_code: bool
"""
@ -646,7 +682,7 @@ class ZfsDataset:
cmd.extend(["zfs", "recv"])
# don't mount filesystem that is received
# don't let zfs recv mount everything that's received (even with canmount=noauto!)
cmd.append("-u")
for property_ in filter_properties:
@ -676,7 +712,7 @@ class ZfsDataset:
# self.zfs_node.reset_progress()
self.zfs_node.run(cmd, inp=pipe, valid_exitcodes=valid_exitcodes)
# invalidate cache, but we at least know we exist now
# invalidate cache
self.invalidate()
# in test mode we assume everything was ok and it exists
@ -689,6 +725,34 @@ class ZfsDataset:
self.error("error during transfer")
raise (Exception("Target doesn't exist after transfer, something went wrong."))
# at this point we're sure the actual dataset exists
self.parent.force_exists = True
def automount(self):
"""Mount the dataset as if one did a zfs mount -a, but only for this dataset
Failure to mount doesn't result in an exception, but outputs errors to STDERR.
"""
self.debug("Auto mounting")
if self.properties['type'] != "filesystem":
return
if self.properties['canmount'] != 'on':
return
if self.properties['mountpoint'] == 'legacy':
return
if self.properties['mountpoint'] == 'none':
return
if self.properties['encryption'] != 'off' and self.properties['keystatus'] == 'unavailable':
return
self.zfs_node.run(["zfs", "mount", self.name], valid_exitcodes=[0,1])
def transfer_snapshot(self, target_snapshot, features, prev_snapshot, show_progress,
filter_properties, set_properties, ignore_recv_exit_code, resume_token,
raw, send_properties, write_embedded, send_pipes, recv_pipes, zfs_compressed, force):
@ -698,14 +762,14 @@ class ZfsDataset:
connects a send_pipe() to recv_pipe()
Args:
:type send_pipes: list of str
:type recv_pipes: list of str
:type send_pipes: list[str]
:type recv_pipes: list[str]
:type target_snapshot: ZfsDataset
:type features: list of str
:type features: list[str]
:type prev_snapshot: ZfsDataset
:type show_progress: bool
:type filter_properties: list of str
:type set_properties: list of str
:type filter_properties: list[str]
:type set_properties: list[str]
:type ignore_recv_exit_code: bool
:type resume_token: str
:type raw: bool
@ -719,20 +783,28 @@ class ZfsDataset:
self.debug("Transfer snapshot to {}".format(target_snapshot.filesystem_name))
if resume_token:
target_snapshot.verbose("resuming")
self.verbose("resuming")
# initial or increment
if not prev_snapshot:
target_snapshot.verbose("receiving full".format(self.snapshot_name))
self.verbose("-> {} (new)".format(target_snapshot.filesystem_name))
else:
# incremental
target_snapshot.verbose("receiving incremental".format(self.snapshot_name))
self.verbose("-> {}".format(target_snapshot.filesystem_name))
# do it
pipe = self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot,
resume_token=resume_token, raw=raw, send_properties=send_properties, write_embedded=write_embedded, send_pipes=send_pipes, zfs_compressed=zfs_compressed)
resume_token=resume_token, raw=raw, send_properties=send_properties,
write_embedded=write_embedded, send_pipes=send_pipes, zfs_compressed=zfs_compressed)
target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties,
set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code, recv_pipes=recv_pipes, force=force)
set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code,
recv_pipes=recv_pipes, force=force)
# try to automount it, if it's the initial transfer
if not prev_snapshot:
# in test mode it doesn't actually exist, so don't try to mount it/read properties
if not target_snapshot.zfs_node.readonly:
target_snapshot.parent.automount()
def abort_resume(self):
"""abort current resume state"""
@ -774,16 +846,16 @@ class ZfsDataset:
return None
def thin_list(self, keeps=None, ignores=None):
"""determines list of snapshots that should be kept or deleted based on
"""determines list[snapshots] that should be kept or deleted based on
the thinning schedule. cull the herd!
returns: ( keeps, obsoletes )
Args:
:param keeps: list of snapshots to always keep (usually the last)
:param keeps: list[snapshots] to always keep (usually the last)
:param ignores: snapshots to completely ignore (usually incompatible target snapshots that are going to be destroyed anyway)
:type keeps: list of ZfsDataset
:type ignores: list of ZfsDataset
:type keeps: list[ZfsDataset]
:type ignores: list[ZfsDataset]
"""
if ignores is None:
@ -810,23 +882,29 @@ class ZfsDataset:
obsolete.destroy()
self.snapshots.remove(obsolete)
def find_common_snapshot(self, target_dataset):
def find_common_snapshot(self, target_dataset, guid_check):
"""find latest common snapshot between us and target returns None if its
an initial transfer
Args:
:type guid_check: bool
:type target_dataset: ZfsDataset
"""
if not target_dataset.snapshots:
# target has nothing yet
return None
else:
for source_snapshot in reversed(self.snapshots):
if target_dataset.find_snapshot(source_snapshot):
source_snapshot.debug("common snapshot")
return source_snapshot
target_dataset.error("Cant find common snapshot with source.")
raise (Exception("You probably need to delete the target dataset to fix this."))
target_snapshot = target_dataset.find_snapshot(source_snapshot)
if target_snapshot:
if guid_check and source_snapshot.properties['guid'] != target_snapshot.properties['guid']:
target_snapshot.warning("Common snapshot has invalid guid, ignoring.")
else:
target_snapshot.debug("common snapshot")
return source_snapshot
# target_dataset.error("Cant find common snapshot with source.")
raise (Exception("Cant find common snapshot with target."))
def find_start_snapshot(self, common_snapshot, also_other_snapshots):
"""finds first snapshot to send :rtype: ZfsDataset or None if we cant
@ -853,13 +931,16 @@ class ZfsDataset:
return start_snapshot
def find_incompatible_snapshots(self, common_snapshot):
"""returns a list of snapshots that is incompatible for a zfs recv onto
def find_incompatible_snapshots(self, common_snapshot, raw):
"""returns a list[snapshots] that is incompatible for a zfs recv onto
the common_snapshot. all direct followup snapshots with written=0 are
compatible.
in raw-mode nothing is compatible. issue #219
Args:
:type common_snapshot: ZfsDataset
:type raw: bool
"""
ret = []
@ -867,7 +948,7 @@ class ZfsDataset:
if common_snapshot and self.snapshots:
followup = True
for snapshot in self.snapshots[self.find_snapshot_index(common_snapshot) + 1:]:
if not followup or int(snapshot.properties['written']) != 0:
if raw or not followup or int(snapshot.properties['written']) != 0:
followup = False
ret.append(snapshot)
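
In other words: after the common snapshot, a target snapshot only survives a zfs recv if it is an unchanged (written=0) direct follow-up, and with raw (encrypted) sends not even then (issue #219). Compactly:

    def snapshot_is_compatible(written, is_direct_followup, raw):
        """A target snapshot after the common one survives a zfs recv only if
        it's an unchanged (written=0) direct follow-up, never in raw mode."""
        return (not raw) and is_direct_followup and written == 0

    assert snapshot_is_compatible(0, True, raw=False)
    assert not snapshot_is_compatible(0, True, raw=True)    # issue #219
    assert not snapshot_is_compatible(4096, True, raw=False)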
@ -877,8 +958,8 @@ class ZfsDataset:
"""only returns lists of allowed properties for this dataset type
Args:
:type filter_properties: list of str
:type set_properties: list of str
:type filter_properties: list[str]
:type set_properties: list[str]
"""
allowed_filter_properties = []
@ -910,7 +991,8 @@ class ZfsDataset:
while snapshot:
# create virtual target snapshot
# NOTE: with force_exists we're telling the dataset it doesn't exist yet. (e.g. it's virtual)
virtual_snapshot = self.zfs_node.get_dataset(self.filesystem_name + "@" + snapshot.snapshot_name, force_exists=False)
virtual_snapshot = self.zfs_node.get_dataset(self.filesystem_name + "@" + snapshot.snapshot_name,
force_exists=False)
self.snapshots.append(virtual_snapshot)
snapshot = source_dataset.find_next_snapshot(snapshot, also_other_snapshots)
@ -920,9 +1002,9 @@ class ZfsDataset:
Args:
:type common_snapshot: ZfsDataset
:type target_dataset: ZfsDataset
:type source_obsoletes: list of ZfsDataset
:type target_obsoletes: list of ZfsDataset
:type target_keeps: list of ZfsDataset
:type source_obsoletes: list[ZfsDataset]
:type target_obsoletes: list[ZfsDataset]
:type target_keeps: list[ZfsDataset]
"""
# on source: destroy all obsoletes before common. (since we can't send them anyway)
@ -944,7 +1026,7 @@ class ZfsDataset:
# on target: destroy everything that's obsolete, except common_snapshot
for target_snapshot in target_dataset.snapshots:
if (target_snapshot in target_obsoletes) \
and ( not common_snapshot or (target_snapshot.snapshot_name != common_snapshot.snapshot_name)):
and (not common_snapshot or (target_snapshot.snapshot_name != common_snapshot.snapshot_name)):
if target_snapshot.exists:
target_snapshot.destroy()
@ -956,8 +1038,8 @@ class ZfsDataset:
:type start_snapshot: ZfsDataset
"""
if 'receive_resume_token' in target_dataset.properties:
if start_snapshot==None:
if target_dataset.exists and 'receive_resume_token' in target_dataset.properties:
if start_snapshot == None:
target_dataset.verbose("Aborting resume, its obsolete.")
target_dataset.abort_resume()
else:
@ -970,20 +1052,22 @@ class ZfsDataset:
else:
return resume_token
def _plan_sync(self, target_dataset, also_other_snapshots):
def _plan_sync(self, target_dataset, also_other_snapshots, guid_check, raw):
"""plan where to start syncing and what to sync and what to keep
Args:
:rtype: ( ZfsDataset, ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset )
:rtype: ( ZfsDataset, ZfsDataset, list[ZfsDataset], list[ZfsDataset], list[ZfsDataset], list[ZfsDataset] )
:type target_dataset: ZfsDataset
:type also_other_snapshots: bool
:type guid_check: bool
:type raw: bool
"""
# determine common and start snapshot
target_dataset.debug("Determining start snapshot")
common_snapshot = self.find_common_snapshot(target_dataset)
common_snapshot = self.find_common_snapshot(target_dataset, guid_check=guid_check)
start_snapshot = self.find_start_snapshot(common_snapshot, also_other_snapshots)
incompatible_target_snapshots = target_dataset.find_incompatible_snapshots(common_snapshot)
incompatible_target_snapshots = target_dataset.find_incompatible_snapshots(common_snapshot, raw)
# let thinner decide whats obsolete on source
source_obsoletes = []
@ -1005,7 +1089,7 @@ class ZfsDataset:
what to do
Args:
:type incompatible_target_snapshots: list of ZfsDataset
:type incompatible_target_snapshots: list[ZfsDataset]
:type destroy_incompatible: bool
"""
@ -1013,42 +1097,60 @@ class ZfsDataset:
if not destroy_incompatible:
for snapshot in incompatible_target_snapshots:
snapshot.error("Incompatible snapshot")
raise (Exception("Please destroy incompatible snapshots or use --destroy-incompatible."))
raise (Exception("Please destroy incompatible snapshots on target, or use --destroy-incompatible."))
else:
for snapshot in incompatible_target_snapshots:
snapshot.verbose("Incompatible snapshot")
snapshot.destroy()
snapshot.destroy(fail_exception=True)
self.snapshots.remove(snapshot)
if len(incompatible_target_snapshots) > 0:
self.rollback()
def sync_snapshots(self, target_dataset, features, show_progress, filter_properties, set_properties,
ignore_recv_exit_code, holds, rollback, decrypt, encrypt, also_other_snapshots,
no_send, destroy_incompatible, send_pipes, recv_pipes, zfs_compressed, force):
no_send, destroy_incompatible, send_pipes, recv_pipes, zfs_compressed, force, guid_check):
"""sync this dataset's snapshots to target_dataset, while also thinning
out old snapshots along the way.
Args:
:type send_pipes: list of str
:type recv_pipes: list of str
:type send_pipes: list[str]
:type recv_pipes: list[str]
:type target_dataset: ZfsDataset
:type features: list of str
:type features: list[str]
:type show_progress: bool
:type filter_properties: list of str
:type set_properties: list of str
:type filter_properties: list[str]
:type set_properties: list[str]
:type ignore_recv_exit_code: bool
:type holds: bool
:type rollback: bool
:type decrypt: bool
:type also_other_snapshots: bool
:type no_send: bool
:type destroy_incompatible: bool
:type guid_check: bool
"""
self.verbose("sending to {}".format(target_dataset))
# self.verbose("-> {}".format(target_dataset))
# defaults for these settings if there is no encryption stuff going on:
send_properties = True
raw = False
write_embedded = True
# source dataset encrypted?
if self.properties.get('encryption', 'off') != 'off':
# user wants to send it over decrypted?
if decrypt:
# when decrypting, zfs cant send properties
send_properties = False
else:
# keep data encrypted by sending it raw (including properties)
raw = True
(common_snapshot, start_snapshot, source_obsoletes, target_obsoletes, target_keeps,
incompatible_target_snapshots) = \
self._plan_sync(target_dataset=target_dataset, also_other_snapshots=also_other_snapshots)
self._plan_sync(target_dataset=target_dataset, also_other_snapshots=also_other_snapshots,
guid_check=guid_check, raw=raw)
# NOTE: we do this because we don't want filesystems to fill up when backups keep failing.
# Also useful with no_send to still clean up stuff.
@ -1066,42 +1168,29 @@ class ZfsDataset:
# check if we can resume
resume_token = self._validate_resume_token(target_dataset, start_snapshot)
# rollback target to latest?
if rollback:
target_dataset.rollback()
#defaults for these settings if there is no encryption stuff going on:
send_properties = True
raw = False
write_embedded = True
(active_filter_properties, active_set_properties) = self.get_allowed_properties(filter_properties, set_properties)
# source dataset encrypted?
if self.properties.get('encryption', 'off')!='off':
# user wants to send it over decrypted?
if decrypt:
# when decrypting, zfs cant send properties
send_properties=False
else:
# keep data encrypted by sending it raw (including properties)
raw=True
(active_filter_properties, active_set_properties) = self.get_allowed_properties(filter_properties,
set_properties)
# encrypt at target?
if encrypt and not raw:
# filter out encryption properties to let encryption on the target take place
active_filter_properties.extend(["keylocation","pbkdf2iters","keyformat", "encryption"])
write_embedded=False
active_filter_properties.extend(["keylocation", "pbkdf2iters", "keyformat", "encryption"])
write_embedded = False
# now actually transfer the snapshots
prev_source_snapshot = common_snapshot
source_snapshot = start_snapshot
do_rollback = rollback
while source_snapshot:
target_snapshot = target_dataset.find_snapshot(source_snapshot) # still virtual
# does target actually want it?
if target_snapshot not in target_obsoletes:
if target_snapshot not in target_obsoletes and not source_snapshot.is_excluded:
# do the rollback, one time at first transfer
if do_rollback:
target_dataset.rollback()
do_rollback = False
source_snapshot.transfer_snapshot(target_snapshot, features=features,
prev_snapshot=prev_source_snapshot, show_progress=show_progress,
@ -1155,15 +1244,14 @@ class ZfsDataset:
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
def unmount(self):
def unmount(self, mount_point):
self.debug("Unmounting")
cmd = [
"umount", self.name
"umount", mount_point
]
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
def clone(self, name):
@ -1204,4 +1292,3 @@ class ZfsDataset:
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
self.invalidate()

View File

@ -12,6 +12,7 @@ from .CachedProperty import CachedProperty
from .ZfsPool import ZfsPool
from .ZfsDataset import ZfsDataset
from .ExecuteNode import ExecuteError
from .util import datetime_now
class ZfsNode(ExecuteNode):
@@ -19,7 +20,7 @@ class ZfsNode(ExecuteNode):
def __init__(self, logger, utc=False, snapshot_time_format="", hold_name="", ssh_config=None, ssh_to=None, readonly=False,
description="",
debug_output=False, thinner=None, exclude_snapshot_patterns=[]):
self.utc = utc
self.snapshot_time_format = snapshot_time_format
@@ -29,6 +30,8 @@ class ZfsNode(ExecuteNode):
self.logger = logger
self.exclude_snapshot_patterns = exclude_snapshot_patterns
if ssh_config:
self.verbose("Using custom SSH config: {}".format(ssh_config))
@@ -59,7 +62,8 @@ class ZfsNode(ExecuteNode):
def thin(self, objects, keep_objects):
# NOTE: if thinning is disabled with --no-thinning, self.__thinner will be None.
if self.__thinner is not None:
return self.__thinner.thin(objects, keep_objects, datetime_now(self.utc).timestamp())
else:
return (keep_objects, [])
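Injecting the timestamp here, instead of letting the Thinner read the clock itself, keeps every time lookup behind the single mockable datetime_now() (see the util.py change below), so thinning becomes deterministic in tests. Roughly, as a hypothetical illustration:

import datetime

# Pin "now" once; every consumer (snapshot naming, thinning) sees the same instant.
now = datetime.datetime(2024, 1, 1, 12, 0).timestamp()
# keeps, obsoletes = thinner.thin(snapshots, keep_snapshots, now)   # hypothetical call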


@@ -1,129 +0,0 @@
import os.path
import os
import subprocess
import sys
import time
from signal import signal, SIGPIPE
import util
signal(SIGPIPE, util.sigpipe_handler)
try:
print("before the first")
raise Exception("first")
except Exception as e:
print("before the second")
raise Exception("second")
finally:
print("YO")
def generator():
try:
util.deb('in generator')
print ("TRIGGER SIGPIPE")
sys.stdout.flush()
util.deb('after trigger')
# if False:
yield ("bla")
# yield ("bla")
except GeneratorExit as e:
util.deb('GENEXIT '+str(e))
raise
except Exception as e:
util.deb('EXCEPT '+str(e))
finally:
util.deb('FINALLY')
print("nog iets")
sys.stdout.flush()
util.deb('after print in finally WOOP!')
util.deb('START')
g=generator()
util.deb('after generator')
for bla in g:
# print ("heb wat ontvangen")
util.deb('ontvangen van gen')
break
# raise Exception("moi")
pass
raise Exception("moi")
util.deb('after for')
while True:
pass
#
# with open('test.py', 'rb') as fh:
#
# # fsize = fh.seek(10000, os.SEEK_END)
# # print(fsize)
#
# start=time.time()
# for i in range(0,1000000):
# # fh.seek(0, 0)
# fsize=fh.seek(0, os.SEEK_END)
# # fsize=fh.tell()
# # os.path.getsize('test.py')
# print(time.time()-start)
#
#
# print(fh.tell())
#
# sys.exit(0)
#
#
#
# checked=1
# skipped=1
# coverage=0.1
#
# max_skip=0
#
#
# skipinarow=0
# while True:
# total=checked+skipped
#
# skip=coverage<random()
# if skip:
# skipped = skipped + 1
# print("S {:.2f}%".format(checked * 100 / total))
#
# skipinarow = skipinarow+1
# if skipinarow>max_skip:
# max_skip=skipinarow
# else:
# skipinarow=0
# checked=checked+1
# print("C {:.2f}%".format(checked * 100 / total))
#
# print(max_skip)
#
# skip=0
# while True:
#
# total=checked+skipped
# if skip>0:
# skip=skip-1
# skipped = skipped + 1
# print("S {:.2f}%".format(checked * 100 / total))
# else:
# checked=checked+1
# print("C {:.2f}%".format(checked * 100 / total))
#
# #calc new skip
# skip=skip+((1/coverage)-1)*(random()*2)
# # print(skip)
# if skip> max_skip:
# max_skip=skip
#
# print(max_skip)


@@ -1,21 +1,9 @@
# root@psyt14s:/home/psy/zfs_autobackup# ls -lh /home/psy/Downloads/carimage.zip
# -rw-rw-r-- 1 psy psy 990M Nov 26 2020 /home/psy/Downloads/carimage.zip
# root@psyt14s:/home/psy/zfs_autobackup# time sha1sum /home/psy/Downloads/carimage.zip
# a682e1a36e16fe0d0c2f011104f4a99004f19105 /home/psy/Downloads/carimage.zip
#
# real 0m2.558s
# user 0m2.105s
# sys 0m0.448s
# root@psyt14s:/home/psy/zfs_autobackup# time python3 -m zfs_autobackup.ZfsCheck
#
# real 0m1.459s
# user 0m0.993s
# sys 0m0.462s
# NOTE: surprisingly, sha1 via python3 is faster than the native sha1sum utility, even in the way we use it below!
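The benchmark above refers to hashing a file with Python's hashlib in fixed-size chunks; a minimal sketch of that approach (not necessarily the exact ZfsCheck code):

import hashlib

def sha1_file(path, chunk_size=64 * 1024):
    # Constant memory regardless of file size: hash one chunk at a time.
    h = hashlib.sha1()
    with open(path, 'rb') as fh:
        for block in iter(lambda: fh.read(chunk_size), b''):
            h.update(block)
    return h.hexdigest()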
import os
import platform
import sys
from datetime import datetime
def tmp_name(suffix=""):
@@ -48,7 +36,7 @@ def output_redir():
def sigpipe_handler(sig, stack):
# redir output so we don't get more SIGPIPEs during cleanup (which may try to write to stdout)
output_redir()
#deb('redir')
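output_redir presumably repoints the standard streams at /dev/null at the file-descriptor level, so cleanup code that still prints cannot raise a second SIGPIPE. A sketch of that technique (an assumption about its implementation, not the confirmed source):

import os
import sys

def output_redir():
    # dup2 over fd 1 and 2 so even direct writes to those fds go to /dev/null.
    devnull = os.open(os.devnull, os.O_WRONLY)
    os.dup2(devnull, sys.stdout.fileno())
    os.dup2(devnull, sys.stderr.fileno())
    os.close(devnull)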
# def check_output():
# """make sure stdout still functions. if its broken, this will trigger a SIGPIPE which will be handled by the sigpipe_handler."""
@@ -63,3 +51,13 @@ def sigpipe_handler(sig, stack):
# fh.write("DEB: "+txt+"\n")
# This should be the only source of truth for the current datetime.
# This function will be mocked during unit testing.
datetime_now_mock = None
def datetime_now(utc):
if datetime_now_mock is None:
return datetime.utcnow() if utc else datetime.now()
else:
return datetime_now_mock
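With the module-level hook, a unit test pins the clock by assigning it directly; for example (assuming the package layout, zfs_autobackup.util):

from datetime import datetime
from zfs_autobackup import util

util.datetime_now_mock = datetime(2024, 1, 1, 12, 0, 0)   # freeze time for the test
assert util.datetime_now(utc=False) == datetime(2024, 1, 1, 12, 0, 0)
util.datetime_now_mock = None                             # back to the real clock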